
Deep Learning for Intelligent
Wireless Networks: A
Comprehensive Survey
Authors:
Qian Mao
Fei Hu
Qi Hao, Member, IEEE

Presented by:
Ms. Neha Choudhary
INDEX
1. Introduction
2. Literature Review
3. Objective
4. Wireless platform implementation
5. Future research trends
6. Conclusion
7. References
INTRODUCTION
•Deep learning (DL) is a powerful method for adding
intelligence to wireless networks with large-scale
topologies and complex radio conditions.
•DL uses neural networks with many layers.
•DL can analyze extremely complex wireless networks.
•This paper presents a comprehensive survey of the
applications of DL algorithms at different protocol layers.
•It also covers the use of DL to enhance other network
functions, such as network security and sensing-data compression.
INTRODUCTION (CONT..)
•In the deep learning process, computers first learn
from experience and build up a training model.
•DL is a subclass of machine learning.
•The application of DL should consider four aspects:
(1) How to represent the state of the environment in
suitable numerical formats.
(2) How to represent/interpret the recognition results.
(3) How to compute/update the reward value.
(4) How to structure the DL system.
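As a small illustration of aspect (1), the sketch below turns a wireless environment's state (per-sub-band occupancy flags plus a measured SNR) into the numeric vector a neural network expects. The feature layout and the SNR normalization range are assumptions for illustration, not taken from the survey.

```python
import numpy as np

# Hedged sketch of aspect (1): encoding an environment state numerically.
# The feature layout and SNR range here are illustrative assumptions.

def encode_state(occupancy, snr_db, snr_min=-10.0, snr_max=30.0):
    """occupancy: iterable of 0/1 flags per sub-band; snr_db: measured SNR."""
    occ = np.asarray(occupancy, dtype=np.float32)
    snr = (snr_db - snr_min) / (snr_max - snr_min)  # normalize to [0, 1]
    return np.concatenate([occ, np.float32([snr])])

# Four sub-bands (two busy) and a 10 dB SNR become a 5-element vector.
x = encode_state([1, 0, 0, 1], snr_db=10.0)
```

A recognition result (aspect 2) can likewise be decoded from the network's output vector, e.g. by taking the index of the largest class probability.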
INTRODUCTION (CONT..)
•DL systems are often tied to Reinforcement Learning (RL)
models, which comprise three parts:
•1) An environment
•2) An agent
•3) An interpreter

Fig. 1. Schematic Diagram of Reinforcement Learning.
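The loop in Fig. 1 can be sketched in a few lines. The toy line-world environment, its reward, and the hyper-parameters below are illustrative assumptions rather than anything from the surveyed paper; only the Q-learning update rule (Watkins and Dayan, cited in the references) is standard.

```python
import random

random.seed(0)

# Toy sketch of the three RL parts in Fig. 1: an environment that
# returns observations and rewards, an agent that chooses actions,
# and an interpreter mapping raw observations to state ids.

class LineWorld:
    """Environment: positions 0..4; reward 1.0 on reaching position 4."""
    def __init__(self):
        self.pos = 0
    def step(self, action):  # action is -1 (left) or +1 (right)
        self.pos = max(0, min(4, self.pos + action))
        return self.pos, (1.0 if self.pos == 4 else 0.0)

def greedy(q, state):
    """Agent's policy: pick the highest-valued action, random tie-break."""
    best = max(q[(state, -1)], q[(state, 1)])
    return random.choice([a for a in (-1, 1) if q[(state, a)] == best])

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.1):
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        env = LineWorld()
        state = env.pos  # interpreter: raw observation -> state id
        for _ in range(30):
            action = (random.choice((-1, 1)) if random.random() < eps
                      else greedy(q, state))
            nxt, reward = env.step(action)
            # Q-learning update (Watkins & Dayan, 1992)
            best_next = max(q[(nxt, -1)], q[(nxt, 1)])
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
            if reward > 0.0:
                break
    return q

q = train()  # after training, "move right" scores highest near the goal
```

In a wireless setting the environment would be the radio channel, the actions would be control decisions (e.g. channel selection), and a deep network would replace the Q-table when the state space grows too large to enumerate.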


INTRODUCTION (CONT..)
•Compared with traditional machine learning techniques, DL
provides higher prediction accuracy.
•DL requires little to no pre-processing of input data.
•Similarities between DL and the human brain:
1. Tolerance of incomplete or even erroneous raw input
data.
2. Capability of handling large amounts of input
information.
3. Capability of making control decisions.
LITERATURE REVIEW
1. G. Hinton et al., "Deep neural networks for acoustic modeling in speech recognition"
Outcome: Provides an overview of progress in this area and represents the shared views of four research groups.
Gap: The biggest disadvantage of DNNs compared with GMMs is that it is much harder to make good use of large clusters of machines to train them on massive data sets.
2. Shuying Liu, Weihong Deng, "Very deep convolutional neural network based image classification using small training sample size"
Outcome: CNNs can be used to fit small datasets with simple and proper modifications, without re-designing specific small networks.
Gap: Normalization and strong dropout settings are the only way to arouse the power of a deep model on a small dataset; more methods will require future work.
3. Michael Bronstein, Joan Bruna, "Geometric Deep Learning: Going beyond Euclidean data"
Outcome: Presents different examples of geometric deep learning.
Gap: Requires different ways to achieve efficient computations.
OBJECTIVE
•DL originated from Machine Learning (ML). A brief
introduction to DL principles is presented:
•From Machine Learning to Deep Learning
•Deep Learning Framework
•Deep Learning for Graph-Structured Data
OBJECTIVE (CONT…)
•DL applications for Physical Layer function control.
a) DL for Interference Alignment
b) DL for Jamming Resistance
c) DL for Modulation Classification
d) DL for Physical Coding
OBJECTIVE (CONT…)
DL FOR DATA LINK LAYER
•DL for Spectrum Allocation
•DL for Traffic Prediction
•DL for Link Evaluation
OBJECTIVE (CONT…)
Modern routing protocols developed for wireless
networks are basically categorized into four types:
• Routing-table-based proactive protocols
•On-demand reactive protocols
•Geographical protocols
•ML/DL-based routing protocols
OBJECTIVE (CONT…)
DL FOR OTHER NETWORK FUNCTIONS
•Vehicle Network Scheduling
Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I)
communications in Vehicular Ad-Hoc Networks (VANETs).
• Sensor Data Reduction
Implanted Medical Devices (IMDs) are usually constrained
by power consumption.
•Hardware Resource Allocation
The operating system supports some application-layer tasks for
network communications.
•Network Security
Traffic inference and intrusion detection are crucial issues for
cybersecurity.
WIRELESS PLATFORM IMPLEMENTATION
The following toolkits have been used in wireless testbeds:
1) MATLAB Neural Network Toolbox
2) TensorFlow
3) Caffe (Convolutional Architecture for Fast Feature
Embedding)
4) Theano
5) Keras
6) WILL
7) Customized models
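For intuition, the core computation these toolkits automate, a layered neural network's forward pass, can be sketched in plain NumPy. The layer sizes and random weights below are placeholders, not a model from any cited testbed.

```python
import numpy as np

# Plain-NumPy sketch of what DL toolkits (Keras, TensorFlow, Caffe,
# Theano) build and train automatically: a small fully connected
# network's forward pass. Weights here are random placeholders; a
# real testbed would train them on measured wireless data.

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# 8 input features (e.g. channel measurements) -> 16 hidden units -> 4 classes
W1, b1 = 0.1 * rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(4, 16)), np.zeros(4)

def predict(x):
    h = relu(W1 @ x + b1)        # hidden layer
    return softmax(W2 @ h + b2)  # class probabilities

p = predict(rng.normal(size=8))  # p sums to 1 across the 4 classes
```

The toolkits above add what this sketch omits: automatic differentiation, GPU execution, and optimizers for training the weights.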
FUTURE RESEARCH TRENDS
• This section lists ten challenging issues and points out
future research trends.
A. DL for Transport Layer Optimizations.
B. Using DL to Facilitate Big Data Transmissions.
C. DL-Based Network Swarming.
D. Pairing DL With Software-Defined Network (SDN).
E. Distributed DL Implementation in Wireless Nodes.
F. DL-Based Cross-Layer Design.
G. DL-Based Application Layer Enhancement.
H. DL-Based Dew-Fog-Cloud Computing Security.
I. From DL to DRL: Applications for Cognitive Radio Network
Control.
J. Efficient DL/DRL Implementations in Practical Wireless
Platforms.
CONCLUSION
This paper has comprehensively reviewed the
methodologies of applying DL schemes for wireless
network performance enhancement.
(1) DL/DRL is very useful for intelligent wireless network
management due to its human-brain-like pattern
recognition capability. With the hardware performance
improvement of today’s wireless products, its adoption
becomes easier.
CONCLUSION(CONT….)
(2) It plays critical roles in multiple protocol layers. We have
summarized its applications in the physical, MAC, and
routing layers. It enables the network to more intelligently
recognize changes in the overall topology and link
conditions, and helps generate more appropriate
protocol parameter controls.
CONCLUSION(CONT….)
(3) It can be integrated with today’s various wireless
networking schemes, including CRNs, SDNs, etc., to
achieve either centralized or distributed resource
allocation and traffic balancing functions. This article
also lists ten important research issues that need to be
solved in the near future in this field. They cover some
promising wireless applications. This paper will help
readers to understand the state-of-the-art of DL-enhanced
wireless networking protocols and find some interesting
and challenging research topics to pursue in this critical
field.
 
REFERENCES
•J. Patterson and A. Gibson, Deep Learning: A Practitioner’s Approach. Sebastopol,
CA, USA: O’Reilly Media, 2017.
•G. Hinton et al., “Deep neural networks for acoustic modeling in speech
recognition,” IEEE Signal Process. Mag., vol. 29, no. 6, pp. 82–97, Nov. 2012.
•T. N. Sainath, A.-R. Mohamed, B. Kingsbury, and B. Ramabhadran, “Deep
convolutional neural networks for LVCSR,” in Proc. IEEE Int. Conf. Acoust.
Speech Signal Process. (ICASSP), Vancouver, BC, Canada, May 2013, pp. 8614–
8618.
•A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep
convolutional neural networks,” in Proc. 25th Int. Conf. Neural Inf. Process. Syst.
(NIPS), vol. 1, Dec. 2012, pp. 1097–1105.
•C. Farabet, C. Couprie, L. Najman, and Y. LeCun, “Learning hierarchical features
for scene labeling,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1915–
1929, Oct. 2013.
•J. Tompson, A. Jain, Y. LeCun, and C. Bregler, “Joint training of a convolutional
network and a graphical model for human pose estimation,” in Proc. 27th Int. Conf.
Neural Inf. Process. Syst. (NIPS), vol. 1, Montreal, QC, Canada, Dec. 2014, pp.
1799–1807.
REFERENCES (CONT….)
•Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, pp.
436–444, May 2015.
•R. Collobert et al., “Natural language processing (almost) from scratch,” J.
Mach. Learn. Res., vol. 12, pp. 2493–2537, Aug. 2011.
•A. Bordes, J. Weston, and S. Chopra, “Question answering with subgraph
embeddings,” in Proc. Conf. Empir. Methods Nat. Lang. Process. (EMNLP),
Doha, Qatar, Oct. 2014, pp. 615–620.
•I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with
neural networks,” in Proc. 27th Int. Conf. Neural Inf. Process. Syst. (NIPS),
Montreal, QC, Canada, Dec. 2014, pp. 3104–3112.
•S. Jean, K. Cho, R. Memisevic, and Y. Bengio, “On using very large target
vocabulary for neural machine translation,” in Proc. 53rd Annu. Meeting Assoc.
Comput. Linguist. (ACL), Beijing, China, Jul. 2015, pp. 1–10.
•D. Silver et al., “Mastering the game of go with deep neural networks and tree
search,” Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016.
•J. Ma, R. P. Sheridan, A. Liaw, G. E. Dahl, and V. Svetnik, “Deep neural nets
as a method for quantitative structure-activity relationships,” J. Chem. Inf.
Model., vol. 55, no. 2, pp. 263–274, Jan. 2015.
REFERENCES (CONT….)

•M. Helmstaedter et al., “Connectomic reconstruction of the inner plexiform layer in
the mouse retina,” Nature, vol. 500, no. 7461, pp. 168–174, Oct. 2014.
•M. K. K. Leung, H. Y. Xiong, L. J. Lee, and B. J. Frey, “Deep learning of the tissue-
regulated splicing code,” Bioinformatics, vol. 30, no. 12, pp. 121–129, Jun. 2014.
•H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, and D. Merico, “The human
splicing code reveals new insights into the genetic determinants of disease,” Science,
vol. 347, no. 6218, pp. 144–151, Jan. 2015.
•R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. Cambridge,
MA, USA: MIT Press, 1998.
•M. A. Alsheikh, S. Lin, D. Niyato, and H.-P. Tan, “Machine learning in wireless
sensor networks: Algorithms, strategies, and applications,” IEEE Commun. Surveys
Tuts., vol. 16, no. 4, pp. 1996–2018, 4th Quart., 2014.
REFERENCES (CONT….)
•M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Machine learning for
wireless networks with artificial intelligence: A tutorial on neural networks,” eprint
arXiv:1710.02913, Oct. 2017.
•J. Chen, U. Yatnalli, and D. Gesbert, “Learning radio maps for UAV-aided wireless
networks: A segmented regression approach,” in Proc. IEEE Int. Conf. Commun. (ICC)
Signal Process. Commun. Symp., Paris, France, May 2017, pp. 1–6.
•Y. Xiao, Z. Han, D. Niyato, and C. Yuen, “Bayesian reinforcement learning for
energy harvesting communication systems with uncertainty,” in Proc. IEEE Int.
Conf. Commun. (ICC) Next Gener. Netw. Symp., London, U.K., Jun. 2015, pp. 5398–
5403.
•M. Bennis and D. Niyato, “A Q-learning based approach to interference avoidance in
self-organized femtocell networks,” in Proc. IEEE Glob. Commun. Conf.
(GLOBECOM) Workshops Femtocell Netw., Miami, FL, USA, Dec. 2010, pp. 706–
710.
•M. Chen et al., “Caching in the sky: Proactive deployment of cache-enabled
unmanned aerial vehicles for optimized quality-of-experience,” IEEE J. Sel. Areas
Commun., vol. 35, no. 5, pp. 1046–1061, May 2017.
•M. Chen, W. Saad, C. Yin, and M. Debbah, “Echo state networks for proactive
caching in cloud-based radio access networks with mobile users,” IEEE Trans.
Wireless Commun., vol. 16, no. 6, pp. 3520–3535, Jun. 2017.
REFERENCES (CONT….)
•T. Serre, L. Wolf, and T. Poggio, “Object recognition with features inspired by visual
cortex,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), CA,
USA, Jun. 2005, pp. 994–1000.
•T. V. Maia, “Reinforcement learning, conditioning, and the brain: Successes and
challenges,” Cognit. Affective Behav. Neurosci., vol. 9, no. 4, pp. 343–364, Dec. 2009.
•V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol.
518, no. 7540, pp. 529–533, Feb. 2015.
•C. J. C. H. Watkins and P. Dayan, “Q-learning,” Mach. Learn., vol. 8, nos. 3–4, pp. 279–
292, May 1992.
•S. S. Sonawane and P. A. Kulkarni, “Graph based representation and analysis of text
document: A survey of techniques,” Int. J. Comput. Appl., vol. 96, no. 19, pp. 1–8, Jun.
2014.
•M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, “Geometric deep
learning: Going beyond Euclidean data,” IEEE Sig. Proc. Mag., vol. 34, no. 4, pp. 18–42,
May 2017.
•J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, “Spectral networks and deep locally
connected networks on graphs,” in Proc. 2nd Int. Conf. Learn. Represent. (ICLR), Banff,
AB, Canada, Apr. 2014, pp. 1–14.
•Y. Hechtlinger, P. Chakravarti, and J. Qin, “A generalization of convolutional
neural networks to graph-structured data,” eprint arXiv:1704.08165, Apr. 2017.
REFERENCES (CONT….)
•M. Henaff, J. Bruna, and Y. LeCun, “Deep convolutional networks on graph-structured data,” eprint
arXiv:1506.05163, Jun. 2015.
•M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional neural networks on graphs with
fast localized spectral filtering,” in Proc. Conf. Adv. Neural Inf. Process. Syst. (NIPS), vol. 29,
Barcelona, Spain, Dec. 2016, pp. 3837–3845.
•T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in
Proc. 5th Int. Conf. Learn. Represent. (ICLR), Toulon, France, Apr. 2017, pp. 1–14.
•J. Lee, H. Kim, J. Lee, and S. Yoon, “Transfer learning for deep learning on graph-structured data,”
in Proc. 31st AAAI Conf. Artif. Intell. (AAAI), San Francisco, CA, USA, Feb. 2017, pp. 1–7.
•S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no.
10, pp. 1345–1359, Oct. 2010.
•V. R. Cadambe and S. A. Jafar, “Interference alignment and degrees of freedom of the K -user
interference channel,” IEEE Trans. Inf. Theory, vol. 54, no. 8, pp. 3425–3441, Aug. 2008.
•N. Zhao, F. R. Yu, M. Jin, Q. Yan, and V. C. M. Leung, “Interference alignment and its applications: A
survey, research issues, and challenges,” IEEE Commun. Surveys Tuts., vol. 18, no. 3, pp. 1779–
1803, 3rd Quart., 2016.
•Y. He, C. Liang, F. R. Yu, N. Zhao, and H. Yin, “Optimization of cache-enabled opportunistic
interference alignment wireless networks: A big data deep reinforcement learning approach,” in Proc.
IEEE Int. Conf. Commun. (ICC), Paris, France, May 2017, pp. 1–6.
•G. Han, L. Xiao, and H. V. Poor, “Two-dimensional anti-jamming communication based on deep
reinforcement learning,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), New
Orleans, LA, USA, Mar. 2017, pp. 2087–2091.
Thank You!
