Deepfake Research Paper (ResNET)
ABSTRACT:
Under the aegis of the Department of Artificial Intelligence, we study a newly emerging set of techniques that allows anyone to produce highly realistic but fake videos and images, and even to manipulate voices. This technology is widely known as Deepfake technology. Although it may seem an interesting way to create fake videos or images of events or individuals, it can spread misinformation across the internet. Deepfake content can be dangerous for individuals as well as for communities, organizations, countries, religions, etc. Because Deepfake creation combines several deep learning algorithms with a high level of expertise, the resulting content appears almost real and genuine and is difficult to distinguish from authentic material. In this paper, a wide range of articles has been examined to understand Deepfake technology more thoroughly. We reviewed these articles to answer questions such as: what is a Deepfake, who creates them, are there any benefits of Deepfakes, and what challenges does this technology pose? We have also examined several creation and detection techniques. Our study reveals that although Deepfakes are a threat to society, proper measures and strict regulations could prevent the harm they cause.
Keywords: Deep learning, Deepfake, review, Deepfake generation, Deepfake creation
Introduction:
DEEPFAKE, a combination of "deep learning" and "fake content", is a process that swaps one person's face onto a target person in a video, makes the face mimic the target's expressions, and makes the target appear to say words that were actually spoken by someone else. Face swapping, especially in images and videos, or the manipulation of facial expressions, is referred to as a Deepfake method [1]. Fake videos and images that circulate on the internet can easily exploit individuals, and this has recently become a public issue [24]. Creating false content by transferring the face of one individual, referred to as the source, onto an image or video of another person, referred to as the target, is what is called a DeepFake. The term was named after a Reddit account, "Deepfake", whose owner claimed to have developed a machine learning technique for transferring celebrity faces into adult content [5]. Furthermore, this technique is also used to create fake news, commit fraud, and spread hoaxes. Several Deepfake algorithms based on generative adversarial networks have been proposed to copy the movements and facial expressions of one person and transfer them onto another [14]. Politicians, public figures, and celebrities are the main victims of Deepfakes. Deepfake technology has been used several times to spread false messages attributed to world leaders, which could be a threat to world peace [15]. It can also be used to mislead military personnel by providing fake map imagery, which could cause serious damage [16].
To understand Deepfakes properly, we need to examine them in depth: what a Deepfake actually is, how it emerged, how it is created, and how it can be detected. Because this field is relatively new to researchers, having been introduced only in 2017, resources on the topic are still limited, although several studies have recently appeared that address social media misinformation related to Deepfakes [19]. In this paper, after the introduction we discuss Deepfake technology and its uses in more detail. We then discuss the possible threats and challenges of this technology; thereafter, we review different articles related to Deepfake generation and Deepfake detection, along with their positive and negative aspects. Finally, we discuss the limitations of our work, suggestions, and future directions.
1. Deepfake:
Deepfake, a blend of "deep learning" and "fake", refers to imitation content in which the face of a targeted subject is swapped with that of a source person to produce videos or images of the target [20-21]. Although fabricating fake content is nothing new, Deepfakes go beyond what most people imagine: machine learning and AI are used to manipulate original content into fraudulent content that looks almost real, which makes the technique far more powerful [22-24]. Deepfakes have a wide range of uses, such as creating fake pornography of well-known celebrities, spreading fake news, faking the voices of politicians, committing financial fraud, and many more [25-27]. Face-swapping techniques are well established in the film industry, where fake voices or videos are produced as required, but doing so takes considerable time and a certain level of expertise. With deep learning techniques, however, anybody with sound computer knowledge and a high-end GPU can produce convincing fake videos or images.
3. Applications of Deepfake
Deepfake technology has a huge range of applications that can be used both positively and negatively; however, most of the time it is used for malicious purposes. The unethical use of Deepfake technology has harmful consequences for society, in both the short and the long term. People who use social media regularly are at particular risk from Deepfakes. However, proper use of this technology could also bring many positive results. Both the negative and the positive applications of Deepfake technology are described in detail below.
A. Negative Applications: Deepfakes and the technology behind them have expanded rapidly in recent years. There are ample applications that are used maliciously against people, especially against celebrities and political leaders. Deepfake content is created for several reasons: sometimes just for fun, but often for revenge, blackmail, identity theft, and more. There are thousands of Deepfake videos online, and most of them are adult videos of women made without their consent [28]. The most common use of Deepfake technology is to create pornography of well-known actresses, especially Hollywood actresses, and this is increasing rapidly [29]. Moreover, in 2018 a piece of software was built that could render a woman nude in a single click, and it went viral as a tool for harassing women [30]. Another highly malicious use of Deepfakes is to exploit world leaders and politicians by making fake videos of them, which at times has posed a serious risk to world peace. Almost every prominent leader, including Barack Obama, former President of the USA; Donald Trump, then President of the USA; Nancy Pelosi, a US politician; and Angela Merkel, the German Chancellor, has been targeted by fake videos in some way, and even Facebook founder Mark Zuckerberg has faced a similar incident [31]. Deepfakes are also widely used in art, the film industry, and social media.
B. Positive Applications: Although this technology is mostly used maliciously and with bad intent, it also has positive uses in several sectors. Deepfake creation is no longer limited to experts; it has become much easier and more accessible to anyone, and constructive uses of the technology have increased widely. It has been used to create new artworks, engage audiences, and give them unique experiences [32]. Deepfake technology is now also used for advertising and business purposes. Technologists are using Deepfakes to bring famous artworks to life, for example by creating a video of the Mona Lisa from the single original image [34]. Deepfake technology can save the film industry a great deal of money and time by editing videos rather than re-shooting them, and there are many more positive examples: the famous footballer David Beckham "spoke" in nine different languages in a campaign against malaria, and the technology also has positive applications in the education sector [35].
C. GANs can be used in various fields to provide realistic experiences. In the retail sector, for instance, it may become possible to view products as realistically as if one were visiting a shop in person [36]. Recently, Reuters, in collaboration with the AI startup Synthesia, created the first synthesized news presenter using artificial intelligence; it was built with the same techniques used in Deepfakes and could enable personalized news for individuals [37]. Deep generative models have also shown great potential in the healthcare industry: to protect real patient data in research, synthetic data can be generated with this technology instead of sharing the real data [38]. Additionally, the technology has great potential for fundraising and awareness building, for example by creating videos of famous personalities asking for help or funds for a worthy cause [39].
ResNET Algorithm:
In 2015, a deep residual network (ResNet) was proposed by the authors in [1] for image
recognition. It is a type of convolutional neural network (CNN) where the input from the
previous layer is added to the output of the current layer. This skip connection makes it easier
for the network to learn and results in better performance. The ResNet architecture has been
successful in a number of tasks, including image classification, object detection, and semantic
segmentation. Additionally, because ResNets are built from stacked residual blocks, the networks can be made arbitrarily deep to reach an arbitrary level of spatial representation. Several factors contribute to the model's success: the large receptive fields that capture more information about each pixel in an image; the separation between the localization and classification stages; the computational efficiency at higher levels; the efficient encoding schemes with low-complexity arithmetic operations; and the increased accuracy as features are extracted deeper in the network.
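To make the skip connection concrete, the following is a minimal sketch of a basic two-layer residual block in Keras. It illustrates the idea described above rather than the exact architecture used in this study; the filter count, kernel sizes, and the small usage example are assumptions chosen for clarity.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Basic two-layer residual block: output = F(x) + x, followed by ReLU."""
    shortcut = x  # identity skip connection

    # F(x): two 3x3 convolutions with batch normalization
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)

    # Add the input back in, then apply the final non-linearity
    y = layers.Add()([y, shortcut])
    return layers.ReLU()(y)

# Example usage on a dummy 224x224 RGB input (stem layers are illustrative)
inputs = tf.keras.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, 7, strides=2, padding="same")(inputs)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
x = residual_block(x)
model = tf.keras.Model(inputs, x)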
Despite these advantages, current ResNets are computationally very expensive. Even though modern GPUs can perform billions of operations per second, a commonly used architecture with a fully connected layer of ten million weights can take more than two hours to train. This is why the authors in [9] propose replacing some fully connected layers with stochastic pooling layers and reducing the filter size from 5 × 5 to 3 × 3.
This architecture was designed to overcome difficulties in deep learning training, since training deep networks generally takes a lot of time and is effectively limited to a certain number of layers. The solution introduced by ResNet is to apply skip connections, or shortcuts. The advantage of the ResNet model over other architectures is that its performance does not degrade even as the architecture gets deeper; in addition, the computations become lighter and the network becomes easier to train. The ResNet model is implemented with skip connections over every two to three layers containing ReLU and batch normalization [11]. He et al. showed that ResNet performs better in image classification than other models, indicating that image features are extracted well by ResNet [11]. He et al. apply residual learning across stacks of layers. The residual block in ResNet is defined as follows [11]:
y = F(x, {W_i}) + x
where x is the input of the block, y is its output, and F(x, {W_i}) is the residual mapping to be learned.
The identity shortcut in a residual block can be used directly when the input dimensions are identical to the output dimensions. Each ResNet block consists of either two layers (for the ResNet-18 and ResNet-34 networks) or three layers (for the ResNet-50 and ResNet-101 networks). The first two layers of the ResNet architecture resemble GoogLeNet: a 7 × 7 convolution followed by 3 × 3 max-pooling, both with stride 2. In this study, we used the ResNet-18 and ResNet-50 models. Input images are resized to a 224 × 224 grid, and the ResNet weights are trained using stochastic gradient descent (SGD) with standard momentum parameters.
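For illustration, here is a hedged sketch of the three-layer (bottleneck) block used by ResNet-50-style networks, including the projection shortcut applied when the input and output dimensions differ. The filter counts, the 4x channel expansion, and the stride handling follow the usual ResNet-50 layout and are assumptions for the example, not configuration details reported in this paper.

import tensorflow as tf
from tensorflow.keras import layers

def bottleneck_block(x, filters, stride=1):
    """Three-layer bottleneck block (1x1 -> 3x3 -> 1x1) with a projection
    shortcut when the spatial size or channel count changes."""
    shortcut = x

    y = layers.Conv2D(filters, 1, strides=stride, use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(4 * filters, 1, use_bias=False)(y)
    y = layers.BatchNormalization()(y)

    # If the dimensions change, project the shortcut with a 1x1 convolution
    if stride != 1 or x.shape[-1] != 4 * filters:
        shortcut = layers.Conv2D(4 * filters, 1, strides=stride, use_bias=False)(x)
        shortcut = layers.BatchNormalization()(shortcut)

    y = layers.Add()([y, shortcut])
    return layers.ReLU()(y)

# Example usage: a stage input with 256 channels keeps the identity shortcut
inputs = tf.keras.Input(shape=(56, 56, 256))
out = bottleneck_block(inputs, filters=64)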
The structure of the ResNet architecture is illustrated in Figure 1.
Figure 1
Figure 2
In summary, deep residual learning for image recognition has been shown to be an effective method for image classification tasks, and open questions remain about how best to transfer such architectures to other computer vision tasks such as semantic segmentation and object detection. Several problems need further exploration, including: computational efficiency at higher levels and training stability; where to add skip connections; network depth versus complexity; the behaviour of nonlinearities during training; input preprocessing issues such as batch normalization; data augmentation strategies for improving accuracy on under-represented classes (for example, night-time versus daytime images handled by the same classifier network, possibly by exploiting spatio-temporal coherence); and the practicality and stability of very deep architectures, given that small local minima have little impact on generalization performance because large changes to the weights happen early rather than late in training. Progress on these questions would allow different regions of the parameters to be tuned concurrently instead of completely independently.
Result:
We used the ResNet V2 architecture for our image recognition and deepfake detection project because of its usability and the advantages described above. We adopted an 80-20 split for our modelling: 80% of the data is used for training and the remaining 20% is used for testing model performance. We trained for 20 epochs. The training accuracy is 99.57% and the validation accuracy is 92.12%.
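A minimal sketch of this pipeline, assuming a Keras-style setup with images stored in class-labelled folders (real/fake), is shown below. Only the 80-20 split, the 20 epochs, the 224 × 224 input size, and the ResNet V2 backbone come from the text; the directory layout, batch size, optimizer settings, and ImageNet pre-training are illustrative assumptions.

import tensorflow as tf

IMG_SIZE = (224, 224)
DATA_DIR = "dataset/"  # assumed layout: dataset/real/*.jpg, dataset/fake/*.jpg

# 80-20 train/validation split, as described in the Results section
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    label_mode="binary", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    label_mode="binary", image_size=IMG_SIZE, batch_size=32)

# ResNet V2 backbone with a binary (real/fake) classification head
base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = base(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    loss="binary_crossentropy", metrics=["accuracy"])

# 20 epochs, as reported in the Results section
model.fit(train_ds, validation_data=val_ds, epochs=20)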
Figure 3
Figure 4
Figure 5
Acknowledgement:
We want to thank our respected institute, Madhav Institute of Technology and Science, Gwalior, for its support. We also want to thank Prof. R.R., Head of the Department of Artificial Intelligence, for his continuous support and guidance throughout the project, as well as all the members of the Department of Artificial Intelligence. Finally, we want to thank our families and colleagues for their encouragement and support throughout the completion of this research paper.
References
Duet GANs for Multi-View Face Image Synthesis. IEEE Transactions on Information
Forensics and Security, 14(8), pp.2028-2042.
13. Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X. and Metaxas, D., 2019.
StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial
Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8),
pp.1947-1962.
14. Lyu, S., 2020. Detecting 'Deepfake' Videos In The Blink Of An Eye. [Online] The Conversation. Available at: <https://theconversation.com/detecting-deepfake-videos-in-the-blink-of-an-eye-101072>.
15. Chesney, R. and Citron, D., 2020. Deepfakes And The New Disinformation War. [Online] Foreign Affairs. Available at: <https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war>
16. Fish, T. (2019, April 4). "Deep fakes: AI-manipulated media will be weaponised to trick military." [Online] Retrieved from https://www.express.co.uk/news/science/1109783/deep-fakes-ai-artificial-intelligencephotos-video-weaponised-china.
17. Marr, B., 2019. The Best (And Scariest) Examples Of AI-Enabled Deepfakes. [Online] Forbes. Available at: <https://www.forbes.com/sites/bernardmarr/2019/07/22/the-best-and-scariest-examples-of-ai-enabled-deepfakes/#1697860d2eaf> [Accessed 5 March 2020].
18. De keersmaecker, J., Roets, A. 2017. 'Fake news': Incorrect, but hard to correct. The role of cognitive ability on the impact of false information on social impressions. Intelligence, 65: 107–110. https://doi.org/10.1016/j.intell.2017.10.005
19. Anderson, K. E. 2018. Getting acquainted with social networks and apps: combating
fake news on social media. Library HiTech News, 35(3): 1–6
20. Brandon, John, "Terrifying high-tech porn: Creepy
21. "Prepare, Don't Panic: Synthetic Media and Deepfakes". June 2019. [Online]. Available: https://lab.witness.org/projects/synthetic-media-and-deep-fakes/. [Accessed: 10-March-2020]
22. Schwartz, Oscar, "You thought fake news was bad? Deep fakes are where truth goes to die". The Guardian. 14 November 2018. [Online]. Available: https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-. [Accessed: 15-March-2020]
23. Sven Charleer, PhD. "Family fun with deepfakes. Or how I got my wife onto the Tonight Show". 17 May 2019. [Online]. Available: https://towardsdatascience.com/family-fun-with-deepfakes-or-how-i-got-my-wife-onto-the-tonight-show-a4454775c011. [Accessed: 17-March-2020]
24. Clarke, Yvette D., "H.R.3230 - 116th Congress (2019-2020): Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019". 28 June 2019. [Online]. Available: https://www.congress.gov/bill/116th-congress/house-bill/3230. [Accessed: 17-March-2020]
25. "What Are Deepfakes and Why the Future of Porn is Terrifying". Highsnobiety. 20 February 2018. [Online]. Available: https://www.highsnobiety.com/p/what-are-deepfakes-ai-porn/.
26. Roose, K., 2018. Here Come The Fake Videos, Too. [Online] Nytimes.com. Available at: <https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html> [Accessed 24 March 2020].
27. Ghoshal, Abhimanyu, "Twitter, Pornhub and other platforms ban AI-generated celebrity porn". The Next Web. 7 February 2018. [Online]. Available: https://thenextweb.com/insider/2018/02/07/twitter-pornhub-and-other-platforms-ban-ai-generated-celebrity-porn/. [Accessed: 25-March-2020]
28. Khalid, A., 2019. Deepfake Videos Are A Far, Far Bigger Problem For Women. [Online] Quartz. Available at: <https://qz.com/1723476/deepfake-videos-feature-mostly-porn-according-to-new-study-from-deeptrace-labs/> [Accessed 25 March 2020].
29. Dickson, E. J., "Deepfake Porn Is Still a Threat, Particularly for K-Pop Stars". 7 October 2019. [Online]. Available: https://www.rollingstone.com/culture/culture-news/deepfakes-nonconsensual-porn-study-kpop-895605/. [Accessed: 25-March-2020]
30. Vincent, James, "New AI deepfake app creates nude images of women in seconds". 27 June 2019. [Online]. Available: https://www.theverge.com/2019/6/27/18760896/deepfake-nude-ai-app-women-deepnude-non-consensual-pornography. [Accessed: 25-March-2020]
31. Foley, Joseph, "10 deepfake examples that terrified and amused the internet". 23 March 2020. [Online]. Available: https://www.creativebloq.com/features/deepfake-examples. [Accessed: 30-March-2020]
32. "3 Things You Need To Know About AI-Powered 'Deep Fakes' In Art Culture". 17
33. "Dalí Lives (via artificial intelligence)". 11 May 2019. [Online]. Available: https://thedali.org/exhibit/dali-lives/. [Accessed: 30-March-2020]
[34] "New deepfake AI tech creates videos using one image". 31 May 2019. [Online]. Available: https://blooloop.com/news/samsung-ai-deepfake-video-museum-technology/. [Accessed: 2-April-2020]
34. "Positive Applications for Deepfake Technology". 12 November 2019. [Online]. Available: https://hackernoon.com/the-light-side-of-deepfakes-how-the-technology-can-be-used-for-good-4hr32pp. [Accessed: 2-April-2020]
35. Jedidiah Francis, "Don't believe your eyes: Exploring the positives and negatives of deepfakes". 5 August 2019. [Online]. Available: https://artificialintelligence-news.com/2019/08/05/dont-believe-your-eyes-exploring-the-positives-and-negatives-of-deepfakes/. [Accessed: 2-April-2020]
36. Simon Chandler, "Why Deepfakes Are A Net Positive For Humanity". 9 March 2020. [Online]. Available: https://www.forbes.com/sites/simonchandler/2020/03/09/why-deepfakes-are-a-net-positive-for-humanity/. [Accessed: 2-April-2020]
38. Geraint Rees, "Here's how deepfake technology can actually be a good thing". 25 November 2019. [Online]. Available: https://www.weforum.org/agenda/2019/11/advantages-of-artificial-intelligence/. [Accessed: 5-April-2020]
39. Patrick L. Plaisance Ph.D., "Ethics and 'Synthetic Media'". 17 September 2019. Available: https://www.psychologytoday.com/sg/blog/virtue-in-the-media-world/201909/ethics-and-synthetic-media. [Accessed: 5-April-2020]
40. Cheng, Z., Sun, H., Takeuchi, M., and Katto, J. (2019). Energy compaction-based
image compression using convolutional autoencoder. IEEE Transactions on
Multimedia. DOI: 10.1109/TMM.2019.2938345.
41. Chorowski, J., Weiss, R. J., Bengio, S., and Oord, A. V. D. (2019). Unsupervised speech representation learning using wavenet autoencoders. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(12), pp. 2041-2053.
42. "FakeApp 2.2.0". [Online] Available: https://www.malavida.com/en/soft/fakeapp/ [Accessed: 5-April-2020]
43. Alan Zucconi, "A Practical Tutorial for FakeApp". 18 March 2018. [Online]. Available: https://www.alanzucconi.com/2018/03/14/a-practical-tutorial-for-fakeapp/. [Accessed: 7-April-2020]
44. Garrido, P., Valgaerts, L., Wu, C., Theobalt, C. 2013. Reconstructing Detailed
Dynamic Face Geometry from Monocular Video. ACM Trans. Graph. 32, 6, Article
158
46. Cao, C., Hou, Q., Zhou, K. 2014. Displaced Dynamic Expression Regression for
Real-time Facial Tracking and Animation. ACM Trans. Graph. 33, 4, Article 43 (July
2014), 10 pages. DOI = 10.1145/2601097.2601204
http://doi.acm.org/10.1145/2601097.2601204.
47. Cao, C., Bradley, D., Zhou, K., Beeler, T. 2015. Real-Time High-Fidelity Facial
Performance Capture. ACM Trans. Graph. 34, 4, Article 46 (August 2015), 9 pages.
DOI = 10.1145/2766943 http://doi.acm.org/10.1145/2766943.
48. Beeler, T., Hahn, F., Bradley, D., Bickel, B., Beardsley, P., Gotsman, C., Sumner, R. W., and Gross, M. 2011. High-quality passive facial performance capture using anchor frames. ACM Trans. Graphics (Proc. SIGGRAPH) 30, 75:1–75:10.
49. Thies, J., Zollhöfer, M., Nießner, M., Valgaerts, L., Stamminger, M., Theobalt, C. 2015. Real-time Expression Transfer for Facial Reenactment. ACM Trans. Graph. 34, 6, Article 183 (November 2015), 14 pages. DOI: 10.1145/2816795.2818056.
50. Weise, T., Bouaziz, S., Li, H., and Pauly, M. 2011.
51. Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., Nießner, M. (2019). Face2Face: Real-time face capture and reenactment of RGB videos. Commun. ACM, 62(1), 96–104. doi:10.1145/3292039.
52. F. Shi, H.-T. Wu, X. Tong, and J. Chai. Automatic acquisition of high-fidelity facial
performances using monocular videos. ACM TOG, 33(6):222, 2014.
53. P. Garrido, L. Valgaerts, H. Sarmadi, I. Steiner, K. Varanasi, P. Perez, and C. Theobalt. VDub: Modifying face video of actors for plausible visual alignment to a dubbed audio track. In Computer Graphics Forum. Wiley-Blackwell, 2015.
54. Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher- Shlizerman. 2017.
Synthesizing Obama: Learning Lip Sync from Audio. ACM Trans. Graph. 36, 4,
Article 95 (July 2017), 13 pages. DOI: http://dx.doi.org/10.1145/3072959.3073640
55. Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, and Matthias Nießner. 2018. HeadOn: Real-time Reenactment of Human Portrait Videos. ACM Trans. Graph. 37, 4, Article 164 (August 2018), 13 pages. https://doi.org/10.1145/3197517.3201350
56. deepfakes/faceswap. [Online]. Available: https://github.com/deepfakes/faceswap. [Accessed: 9-April-2020]
57. FaceSwap-GAN. [Online]. Available: https://github.com/shaoanlu/faceswap-GAN. [Accessed: 9-April-2020]
58. Keras-VGGFace: VGGFace implementation with Keras framework. [Online] Available: https://github.com/rcmalli/keras-vggface. [Accessed: 9-April-2020]
59. ipazc/mtcnn. [Online]. Available: https://github.com/ipazc/mtcnn. [Accessed: 15-April-2020]
60. An Introduction to the Kalman Filter. [Online] Available: http://www.cs.unc.edu/welch/kalman/kalmanIntro.html
61. jinfagang/faceswap-pytorch. [Online]. Available:
73. Yuezun Li and Siwei Lyu. Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656, 2018.
74. Xin Yang, Yuezun Li, and Siwei Lyu. Exposing deep fakes using inconsistent head poses. In ICASSP, 2019.
75. Y. Li, M.-C. Chang and S. Lyu, "In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking," 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Hong Kong, Hong Kong, 2018, pp. 1-7, doi: 10.1109/WIFS.2018.8630787.
76. Shruti Agarwal, Hany Farid, Yuming Gu, Mingming He, Koki Nagano, and Hao Li. Protecting world leaders against deep fakes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 38–45, 2019.
77. Huy H Nguyen, Junichi Yamagishi, and Isao Echizen. Capsule-forensics: Using
capsule networks to detect forged images and videos. arXiv preprint
arXiv:1810.11215, 2018.
78. Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. 2019. FaceForensics++: Learning to Detect Manipulated Facial Images. arXiv (2019).
79. D. Güera and E. J. Delp, "Deepfake Video Detection Using Recurrent Neural Networks," 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand, 2018, pp. 1-6, doi: 10.1109/AVSS.2018.8639163.
80. Du, M., Pentyala, S.K., Li, Y., & Hu, X. (2019). Towards Generalizable Forgery
Detection with Locality-aware AutoEncoder. ArXiv, abs/1909.05999.
81. Davide Cozzolino, Justus Thies, Andreas Rössler, Christian Riess, Matthias Nießner, and Luisa Verdoliva. "ForensicTransfer: Weakly-supervised Domain Adaptation for Forgery Detection". arXiv (2018).
82. G. Huang, Z. Liu, L. Van Der Maaten and K. Q. Weinberger, "Densely Connected Convolutional Networks," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 2261-2269, doi: 10.1109/CVPR.2017.243.
83. Cho, Kyunghyun, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi
Bougares, Holger Schwenk and Yoshua Bengio. “Learning Phrase Representations
using RNN Encoder-Decoder for Statistical Machine Translation.” EMNLP (2014)
84. Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
85. He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image
recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (pp. 770-778).
86. Sabour, S., Frosst, N., and Hinton, G. E. (2017). Dynamic routing between capsules. In Advances in Neural Information Processing Systems (pp. 3856-3866).
87. Chih-Chung Hsu, Yi-Xiu Zhuang, and Chia-Yen Lee (2020). Deep Fake Image Detection based on Pairwise Learning. Applied Sciences, 10, 370.
88. S. Chopra, R. Hadsell and Y. LeCun, "Learning a similarity metric discriminatively,
with application to face verification," 2005 IEEE Computer Society Conference on
Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 2005,
pp. 539-546 vol. 1, doi: 10.1109/CVPR.2005.202.
89. D. Afchar, V. Nozick, J. Yamagishi and I. Echizen, "MesoNet: a Compact Facial Video Forgery Detection Network," 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Hong Kong, Hong Kong, 2018, pp. 1-7, doi: 10.1109/WIFS.2018.8630761.
90. Xuan X., Peng B., Wang W., Dong J. (2019) On the Generalization of GAN Image Forensics. In: Sun Z., He R., Feng J., Shan S., Guo Z. (eds) Biometric Recognition. CCBR 2019. Lecture Notes in Computer Science, vol 11818. Springer, Cham. https://doi.org/10.1007/978-3-030-31456-9_15
91. Y. Zhang, L. Zheng and V. L. L. Thing, "Automated face swapping and its detection,"
2017 IEEE 2nd International Conference on Signal and Image Processing (ICSIP),
Singapore, 2017, pp. 15-19, doi: 10.1109/SIPROCESS.2017.8124497.
92. F. Matern, C. Riess and M. Stamminger, "Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations," 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), Waikoloa Village, HI, USA, 2019, pp. 83-92, doi: 10.1109/WACVW.2019.00020.
93. Marissa Koopman, Andrea Macarulla Rodriguez, and Zeno Geradts. Detection of deepfake video manipulation. In Conference: IMVIP, 2018.
94. Ruben Tolosana, DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection, pp. 1-15, 2020.
95. Brian Dolhansky, The Deepfake Detection Challenge (DFDC) Preview Dataset,
Deepfake Detection Challenge, pp. 1-4, 2019.
96. Chesney, R. and Citron, D. K., 2018. Disinformation On Steroids: The Threat Of Deep Fakes. [Online] Council on Foreign Relations. Available at: <https://www.cfr.org/report/deep-fake-disinformation-steroids> [Accessed 2 May 2020].
97. Figueira, A., Oliveira, L. 2017. The current state of fake news: challenges and
opportunities. Procedia Computer Science, 121: 817–825.
https://doi.org/10.1016/j.procs.2017.11.106
98. Zannettou, S., Sirivianos, M., Blackburn, J., Kourtellis, N. 2019. The Web of False
Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans.
Journal of Data and Information Quality, 1(3): Article No. 10.
https://doi.org/10.1145/3309699