Machine Learning: Fundamentals and Applications
By Fouad Sabry
About this ebook
What Is Machine Learning
Machine learning (ML) is a subfield of computer science that focuses on the study and development of methods that enable computers to "learn." These are methods that make use of data in order to enhance a computer's performance on a certain set of tasks.
How You Will Benefit
(I) Insights and validations about the following topics:
Chapter 1: Machine learning
Chapter 2: Big data
Chapter 3: Self-driving car
Chapter 4: Unsupervised learning
Chapter 5: Supervised learning
Chapter 6: Statistical learning theory
Chapter 7: Computational learning theory
Chapter 8: Automated machine learning
Chapter 9: Differentiable programming
Chapter 10: Reinforcement learning
(II) Answers to the public's top questions about machine learning.
(III) Real-world examples of the use of machine learning in many fields.
(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, to provide a 360-degree understanding of machine learning technologies.
Who This Book Is For
Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of machine learning.
Book preview
Machine Learning - Fouad Sabry
Chapter 1: Machine learning
Machine learning (ML) is a subfield of computer science that focuses on the study and development of techniques that enable computers to learn, or more specifically, techniques that make use of data in order to enhance a computer's performance on a certain set of tasks.
Machine learning, when applied to business problems, is also known as predictive analytics.
Learning algorithms are based on the hypothesis that strategies, algorithms, and judgments that were successful in the past are likely to continue to be successful in the future. These inferences can sometimes be self-evident, such as "because the sun has risen every morning for the last 10,000 days, there is a good chance that it will rise again tomorrow morning." On other occasions they can be more subtle, such as "X percent of families have geographically separate species with color variants, so there is a Y percent chance that undiscovered black swans exist."
Arthur Samuel, an IBM employee and a pioneer in the fields of computer games and artificial intelligence, is credited with coining the term machine learning in 1959.
The search for artificial intelligence (AI) led to the development of machine learning as a field of research. During the early stages of AI's development as an academic discipline, a number of researchers expressed an interest in teaching computers to learn from data. They attempted to approach the problem using a variety of symbolic methods, in addition to what were then known as neural networks.
These neural networks mainly consisted of perceptrons and various other models that were later discovered to be reinventions of the generalized linear models of statistics.
While machine learning and data mining frequently use the same methods and overlap significantly, the primary focus of machine learning is prediction based on known properties learned from the training data, whereas the primary focus of data mining is the discovery of previously unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining makes use of many machine learning techniques, but its objectives are distinct from those of machine learning. Conversely, machine learning makes use of data mining methods in the form of unsupervised learning or as a preprocessing step to improve learner accuracy. The two research communities (which frequently have separate conferences and separate journals, with ECML PKDD being a notable exception) are often confused with one another as a result of the fundamental assumptions under which both operate: in machine learning, performance is usually evaluated in terms of the ability to reproduce known knowledge, whereas in knowledge discovery and data mining (KDD) the primary objective is to discover previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by supervised methods; however, in a typical KDD task, supervised methods cannot be used because of the unavailability of training data.
Many learning problems are framed as the minimization of some loss function on a training set of examples, which illustrates one of the close links between machine learning and optimization. The loss function represents the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples).
The objective of generalization is what differentiates optimization from machine learning; whereas optimization techniques may reduce the loss on a training set, machine learning is concerned with decreasing the loss on samples that have not been seen before. Research is being conducted on a number of hot topics at the moment, one of which is characterizing the generalization of different learning algorithms, particularly deep learning algorithms.
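As a minimal sketch of these two ideas, the example below fits a simple linear model by gradient descent so as to minimize a squared-error loss on a training set, and then measures the same loss on held-out examples that the optimizer never saw. The synthetic data, learning rate, and iteration count are illustrative assumptions, not anything prescribed in the text.

```python
import numpy as np

# Illustrative synthetic data: inputs x and noisy targets y (assumed for this sketch).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.2, size=200)

# Split into a training set and a held-out set.
x_train, y_train = x[:150], y[:150]
x_test, y_test = x[150:], y[150:]

def mse_loss(w, b, xs, ys):
    """Mean squared error between predictions w*x + b and the targets."""
    return np.mean((w * xs + b - ys) ** 2)

# Gradient descent on the training loss.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * x_train + b
    grad_w = 2.0 * np.mean((pred - y_train) * x_train)
    grad_b = 2.0 * np.mean(pred - y_train)
    w -= lr * grad_w
    b -= lr * grad_b

# Optimization only ever sees the training loss; machine learning also cares
# about the loss on previously unseen samples (generalization).
print("training loss:", mse_loss(w, b, x_train, y_train))
print("held-out loss:", mse_loss(w, b, x_test, y_test))
```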
The primary objective of statistics is to draw conclusions about a population based on information gleaned from a sample, whereas the objective of machine learning is to identify generalizable predictive patterns. Although the two fields share closely related methods, they are fundamentally different; Leo Breiman distinguished two statistical modeling paradigms, data models and algorithmic models, whereby algorithmic model refers, more or less, to machine learning algorithms such as random forests.
Some statisticians have embraced techniques from the subject of machine learning, which has led to the creation of a hybrid discipline that these statisticians term statistical learning.
Analytical and computational techniques derived from the deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning; for instance, they can be used to analyze the weight space of deep neural networks.
One of the primary goals of a learner is to generalize from its experience, that is, to draw broad conclusions from specific examples. Generalization is the ability of a learning machine to perform accurately on new, previously unseen instances or tasks after having encountered a learning data set. The training examples are drawn from some generally unknown probability distribution, which is considered representative of the space of occurrences, and the learner must build a general model of this space that enables it to make sufficiently accurate predictions in new cases.
Computational learning theory is a branch of theoretical computer science that analyzes the performance of machine learning algorithms, often through the lens of the Probably Approximately Correct (PAC) learning model. Because training sets are finite and the future is uncertain, learning theory usually cannot provide guarantees on the performance of algorithms; instead, probabilistic bounds on performance are common. One way to quantify generalization error is the bias–variance decomposition.
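For squared-error loss, one common form of that decomposition writes the expected error of an estimator \(\hat{f}\) at a new point \(x\) as a squared bias term, a variance term, and irreducible noise \(\sigma^2\); the notation below is the standard textbook form, assumed here rather than quoted from this chapter.

```latex
\mathbb{E}\left[\bigl(y - \hat{f}(x)\bigr)^{2}\right]
  = \underbrace{\bigl(\mathbb{E}[\hat{f}(x)] - f(x)\bigr)^{2}}_{\text{bias}^{2}}
  + \underbrace{\mathbb{E}\left[\bigl(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\bigr)^{2}\right]}_{\text{variance}}
  + \sigma^{2}
```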
To achieve the best generalization performance, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, the model has underfit the data; raising the complexity of the model in response then reduces the training error. But if the hypothesis is too complex, the model is prone to overfitting, and generalization suffers.
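The following minimal sketch illustrates this trade-off by fitting polynomials of increasing degree to a small synthetic data set whose underlying function is quadratic; the data, degrees, and train/validation split are illustrative choices only. A degree-1 fit underfits, while a high-degree fit drives the training error down but tends to generalize worse.

```python
import numpy as np

# Illustrative data from an underlying quadratic function plus noise (assumed for this sketch).
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1.0, 1.0, size=30))
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.3, size=30)
x_train, y_train = x[::2], y[::2]    # half for training
x_val, y_val = x[1::2], y[1::2]      # half held out for validation

for degree in (1, 2, 10):
    # Fit a polynomial of the given degree to the training data.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    # Degree 1 underfits (hypothesis simpler than the true function);
    # degree 10 lowers the training error but tends to overfit.
    print(f"degree {degree}: train MSE {train_err:.3f}, validation MSE {val_err:.3f}")
```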
Learning theorists investigate a variety of topics, including performance bounds, the time complexity of learning, and the feasibility of learning. In computational learning theory, a computation is considered feasible if it can be completed in polynomial time. There are two kinds of time-complexity results: positive results show that a certain class of functions can be learned in polynomial time, while negative results show that certain classes cannot be learned in polynomial time.
Traditional approaches to machine learning may be broadly classified into one of three groups, which correspond to different learning paradigms. These categories are determined by the kind of signal or feedback that is made accessible to the learning system:
The purpose of supervised learning is for the computer to learn a general rule that maps inputs to outputs by being shown examples of inputs and the intended outputs for those inputs. These examples are delivered to the computer by a teacher.
Unsupervised learning is a kind of machine learning in which the learning algorithm is not provided with any labels and is instead left to discover structure on its own within the data it is fed. Discovering previously hidden patterns in data may be a purpose of unsupervised learning in and of itself, or it might be a means to an end (feature learning).
Reinforcement learning occurs when a computer program interacts with a dynamic environment in which it must accomplish a certain objective (such as driving a vehicle or playing a game against an opponent). As it navigates the problem space, the program is given feedback analogous to rewards, which it tries to maximize.
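A minimal sketch of this last paradigm is given below, using tabular Q-learning on a toy corridor environment; the environment, reward scheme, and hyperparameters are illustrative assumptions rather than anything prescribed in the text.

```python
import random

# Toy reinforcement-learning sketch (assumed example): an agent on a 1-D corridor
# of 6 cells learns to walk right to reach a rewarding goal cell, using tabular
# Q-learning with epsilon-greedy exploration.
N_STATES = 6            # cells 0..5; the goal is cell 5
ACTIONS = [-1, +1]      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.3

random.seed(0)
for _ in range(500):                      # training episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy choice: explore randomly, or exploit the current Q-values
        # (ties are broken randomly so early episodes behave like a random walk).
        if random.random() < epsilon or q[(state, -1)] == q[(state, 1)]:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy should step right (+1) in every non-goal cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```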
A mathematical model of a collection of data is constructed using supervised learning algorithms. This model includes both the data inputs and the outputs that are intended. Classification algorithms are used in situations in which the outputs can only take on a certain set of values, while regression algorithms are used in situations in which the outputs may take on any numerical value within a given range. In the case of a classification algorithm that sorts incoming emails, for instance, the input would be an email that has just been received, and the output would be the name of the folder in which the email should be saved.
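A minimal sketch of that email-sorting example follows, using scikit-learn (an assumed library choice, not named in the text) with a bag-of-words classifier; the example emails and folder names are invented purely for illustration.

```python
# Supervised classification sketch: inputs are email texts, outputs are folder names
# drawn from a fixed set of values.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set of (input, desired output) pairs.
emails = [
    "meeting agenda for monday project review",
    "quarterly budget spreadsheet attached",
    "win a free cruise click now",
    "limited offer cheap prices buy today",
]
folders = ["work", "work", "spam", "spam"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, folders)

# A newly received email is the input; the predicted folder name is the output.
print(model.predict(["please review the attached project budget"]))
```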
Similarity learning is a subfield of supervised machine learning that is closely related to regression and classification. The objective of this subfield, however, is to learn from examples by employing a similarity function that evaluates the degree to which two things are comparable or related to one another. It may be used in ranking, recommendation systems, visual identification tracking, face verification, and speaker verification, among other applications.
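In similarity learning the similarity function itself is typically learned from labelled pairs of examples; the sketch below does not show that training step, only how such a function, here a fixed cosine similarity over assumed feature vectors, might be used for a verification-style decision such as face or speaker verification.

```python
import numpy as np

def cosine_similarity(a, b):
    """Score how related two feature vectors are; higher means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative feature vectors, e.g. embeddings of two face images produced by
# some upstream model (the vectors and the threshold here are assumptions).
enrolled = np.array([0.9, 0.1, 0.4])
probe = np.array([0.8, 0.2, 0.5])

# Verification-style decision: accept the pair as the same identity
# when the similarity score clears a chosen threshold.
score = cosine_similarity(enrolled, probe)
print(score, "same identity" if score > 0.8 else "different identity")
```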
A collection of data that merely comprises inputs