Backward Chaining: Fundamentals and Applications
By Fouad Sabry
About this ebook
What Is Backward Chaining
Backward chaining is an inference technique informally described as "working backward from the goal." It is implemented in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.
How You Will Benefit
(I) Insights and validations about the following topics:
Chapter 1: Backward Chaining
Chapter 2: Automated Theorem Proving
Chapter 3: Inference Engine
Chapter 4: Game Theory
Chapter 5: Backward Induction
Chapter 6: Retrograde Analysis
Chapter 7: Logic Programming
Chapter 8: SLD Resolution
Chapter 9: Forward Chaining
Chapter 10: Prolog
(II) Answers to the public's top questions about backward chaining.
(III) Real-world examples of the use of backward chaining in many fields.
(IV) 17 appendices to briefly explain 266 emerging technologies in each industry, for a full 360-degree understanding of backward chaining's technologies.
Who This Book Is For
Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of backward chaining.
Book preview
Backward Chaining - Fouad Sabry
Chapter 1: Backward chaining
Backward chaining, also called backward reasoning, is an inference method informally described as working backward from the desired outcome. It is used in artificial intelligence applications such as automated theorem provers, inference engines, and proof assistants.
In game theory, the related technique of backward induction finds a solution to a game by reasoning backward through its (simpler) subgames. In chess it is called retrograde analysis, and it is used to generate the endgame tablebases used by computer chess programs.
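As a side note, backward induction itself can be sketched in a few lines of code: terminal positions are evaluated first, and each subgame's optimal value is propagated back toward the root. The sketch below uses an invented two-player game tree with made-up payoffs purely for illustration; it is not taken from the book.

```python
# A minimal backward-induction sketch on an invented two-player game tree.
# Internal nodes list the moves available to the player to act; leaves hold
# payoffs as (player0_payoff, player1_payoff).

def backward_induction(node, player):
    """Return (payoff_pair, chosen_move) for the player to act at `node`."""
    if "payoffs" in node:                     # terminal position
        return node["payoffs"], None
    best_payoffs, best_move = None, None
    for move, child in node["moves"].items():
        payoffs, _ = backward_induction(child, 1 - player)
        # The acting player maximizes their own component of the payoff pair.
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_payoffs, best_move = payoffs, move
    return best_payoffs, best_move

# Invented example: player 0 moves first, then player 1 responds.
game = {
    "moves": {
        "left":  {"moves": {"accept": {"payoffs": (2, 2)},
                            "reject": {"payoffs": (0, 0)}}},
        "right": {"moves": {"accept": {"payoffs": (3, 1)},
                            "reject": {"payoffs": (1, 3)}}},
    }
}

if __name__ == "__main__":
    payoffs, move = backward_induction(game, player=0)
    print(move, payoffs)   # -> left (2, 2): player 0's best first move
```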
In logic programming, backward chaining is implemented by SLD resolution. Both are based on the modus ponens inference rule, and backward chaining is one of the two most commonly used methods of reasoning with inference rules and logical implications, the other being forward chaining. Backward chaining systems usually employ a depth-first search strategy, as Prolog does.
Backward chaining starts with a list of goals (or a hypothesis) and works backward from the consequent to the antecedent to see whether any data supports any of these consequents. An inference engine using backward chaining searches the inference rules until it finds one with a consequent (Then clause) that matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, it is added to the list of goals, since for the original goal to be confirmed one must also provide data that confirms this new rule.
Suppose, for example, that a new pet, Fritz, arrives in a package along with two facts about Fritz:
Fritz croaks
Fritz eats flies
The goal is to determine whether Fritz is green, on the basis of a rule base containing the following four rules:
1. If X croaks and X eats flies – Then X is a frog
2. If X chirps and X sings – Then X is a canary
3. If X is a frog – Then X is green
4. If X is a canary – Then X is yellow
Using backward reasoning, an inference engine can determine in four steps whether Fritz is green. To begin, the query is phrased as a goal assertion to be proven: Fritz is green.
1. Fritz is substituted for X in rule 3 to see whether its consequent matches the goal, so rule 3 becomes:
If Fritz is a frog – Then Fritz is green
Since the consequent (Fritz is green) matches the goal, the inference engine now needs to show that the antecedent (Fritz is a frog) can be proven. The antecedent therefore becomes the new goal:
Fritz is a frog
2. Again substituting Fritz for X, rule 1 becomes:
If Fritz croaks and Fritz eats flies – Then Fritz is a frog
Since the consequent (Fritz is a frog) matches the current goal, the inference engine now needs to show that the antecedent (Fritz croaks and Fritz eats flies) can be proven. The antecedent therefore becomes the new goal:
Fritz croaks and Fritz eats flies
3. Because this goal is a conjunction of two statements, the inference engine splits it into two sub-goals, both of which must be proven:
Fritz croaks
Fritz eats flies
4. The inference engine recognizes that both of these sub-goals were given as initial facts, so both are proven and the conjunction is true:
Fritz croaks and Fritz eats flies
Therefore the antecedent of rule 1 is true, and its consequent must be true as well:
Fritz is a frog
Therefore the antecedent of rule 3 is true, and its consequent must be true as well:
Fritz is green
By this derivation the inference engine proves that Fritz is green. Rules 2 and 4 were not used.
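The four-step derivation above can be reproduced with a small propositional backward chainer. The sketch below is illustrative only: facts are plain strings, each rule pairs a list of antecedents with a single consequent, and the variable X is already instantiated to Fritz, so no unification is performed as a real engine such as Prolog would do.

```python
# A minimal propositional backward-chaining sketch for the Fritz example.
# Facts are plain strings; each rule is (antecedents, consequent).

facts = {"Fritz croaks", "Fritz eats flies"}

rules = [
    (["Fritz croaks", "Fritz eats flies"], "Fritz is a frog"),    # rule 1
    (["Fritz chirps", "Fritz sings"],      "Fritz is a canary"),  # rule 2
    (["Fritz is a frog"],                  "Fritz is green"),     # rule 3
    (["Fritz is a canary"],                "Fritz is yellow"),    # rule 4
]

def prove(goal, depth=0):
    """Try to prove `goal` by working backward from consequents to antecedents."""
    print("  " * depth + "goal:", goal)
    if goal in facts:                       # the goal is a known starting fact
        return True
    for antecedents, consequent in rules:
        if consequent == goal:              # the rule's Then clause matches the goal
            # every antecedent (If clause) becomes a new sub-goal
            if all(prove(a, depth + 1) for a in antecedents):
                return True
    return False                            # no rule or fact supports the goal

if __name__ == "__main__":
    print("Fritz is green?", prove("Fritz is green"))   # -> True
```

Running the sketch prints the goals in the same order as steps 1 through 4 above and reports that Fritz is green; rules 2 and 4 never contribute to the proof.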
Note that the goals always match the affirmed versions of the consequents of implications (not the negated versions, as in modus tollens), and even then their antecedents are taken as the new goals (not the conclusions, as in affirming the consequent), which ultimately must match known facts (usually defined as consequents whose antecedents are always true); the inference rule being used is therefore modus ponens.
Because the list of goals determines which rules are selected and applied, this method is called goal-driven, in contrast to data-driven forward-chaining inference. Backward chaining is used very often by expert systems.
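For contrast, a data-driven forward chainer over the same rule base starts from the known facts and repeatedly fires every rule whose antecedents are all already established, stopping only when no new fact can be derived; the goal is then checked against the derived facts. The sketch below reuses the same invented string representation as above and is not tied to any particular expert-system shell.

```python
# A minimal forward-chaining (data-driven) sketch over the same Fritz rules.

facts = {"Fritz croaks", "Fritz eats flies"}
rules = [
    (["Fritz croaks", "Fritz eats flies"], "Fritz is a frog"),
    (["Fritz chirps", "Fritz sings"],      "Fritz is a canary"),
    (["Fritz is a frog"],                  "Fritz is green"),
    (["Fritz is a canary"],                "Fritz is yellow"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose antecedents all hold until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in derived and all(a in derived for a in antecedents):
                derived.add(consequent)
                changed = True
    return derived

if __name__ == "__main__":
    print("Fritz is green?", "Fritz is green" in forward_chain(facts, rules))  # -> True
```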
Backward chaining is a feature that may be found inside the inference engines of programming languages such as Prolog, Knowledge Machine, and ECLiPSe.
{End Chapter 1}
Chapter 2: Automated theorem proving
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic that deals with proving mathematical theorems by computer programs. Automated reasoning over mathematical proofs was a major impetus for the development of computer science.
While the roots of formal logic go back to Aristotle, modern logic and formalized mathematics were developed in the late 19th and early 20th centuries.
Frege's Begriffsschrift (1879) was the first work to present both a complete propositional calculus and what is essentially modern predicate logic.
Shortly after this encouraging result, however, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that any sufficiently strong consistent axiomatic system contains true statements that cannot be proven within the system.
This topic was developed further in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other hand gave concrete examples of undecidable problems.
The first computers designed for civilian use became commercially available shortly after the end of World War II. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum-tube computer at the Institute for Advanced Study in Princeton, New Jersey. According to Davis, its greatest triumph was proving that the sum of two even numbers is even.
Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible.
For the common case of propositional logic, the problem is decidable but co-NP-complete, and hence only algorithms with exponential runtime are believed to exist for general proving tasks.
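To make the point concrete, validity of a propositional formula can always be decided by checking all 2^n truth assignments of its n variables, which is exactly the exponential behaviour referred to above. The sketch below uses an invented nested-tuple representation of formulas; it is illustrative only and not a standard ATP interface.

```python
from itertools import product

# A brute-force validity (tautology) checker for propositional formulas.
# Formulas are nested tuples: ("var", "p"), ("not", f), ("and", f, g),
# ("or", f, g), ("implies", f, g).

def evaluate(formula, assignment):
    """Evaluate a formula under a truth assignment (dict of variable -> bool)."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == "implies":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown operator: {op}")

def variables(formula, acc=None):
    """Collect the set of variable names occurring in a formula."""
    acc = set() if acc is None else acc
    if formula[0] == "var":
        acc.add(formula[1])
    else:
        for sub in formula[1:]:
            variables(sub, acc)
    return acc

def is_valid(formula):
    """True iff the formula holds under every one of the 2^n truth assignments."""
    names = sorted(variables(formula))
    return all(evaluate(formula, dict(zip(names, values)))
               for values in product([False, True], repeat=len(names)))

if __name__ == "__main__":
    p, q = ("var", "p"), ("var", "q")
    # Modus ponens as a formula: (p and (p -> q)) -> q  -- a tautology.
    print(is_valid(("implies", ("and", p, ("implies", p, q)), q)))  # True
    print(is_valid(("implies", p, q)))                              # False
```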
For first-order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so the valid formulas are recursively enumerable: given unbounded resources, any valid formula can eventually be proven.
However, invalid formulas (those that are not entailed by a given theory) cannot always be recognized.
The above applies to first-order theories, such as Peano arithmetic.
However, for a specific model that may be described by a first-order theory, some statements may be true but undecidable in the theory used to describe the model.
For example, by Gödel's incompleteness theorem we know that any consistent theory whose axioms are true of the natural numbers cannot prove all first-order statements that are true of the natural numbers, even if the list of axioms is allowed to be infinite but enumerable.
It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest.
Despite this theoretical limit, in practice theorem provers can solve many hard problems, even in models that are not fully described by any first-order theory (such as the integers).
A simpler but related problem is proof verification, in which an existing proof of a theorem is checked for validity. This generally requires that each individual proof step can be verified by a primitive recursive function or program, so the problem is always decidable.
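The idea that each proof step can be checked by a simple decidable test can be illustrated with a toy checker that accepts only proofs built from stated premises and modus ponens steps. The proof format and the tuple encoding of formulas below are invented for this sketch and do not correspond to any particular prover's output.

```python
# A toy proof checker: every line must be either a stated premise or follow
# from two earlier lines by modus ponens.  Formulas are nested tuples such as
# ("var", "p") and ("implies", f, g); the format is illustrative only.

def check_proof(premises, proof):
    """Return True iff every step is a premise or a modus ponens consequence."""
    established = []
    for step in proof:
        ok = step in premises
        if not ok:
            # Modus ponens: some earlier line has the form ("implies", A, step)
            # and A also appears among the earlier lines.
            ok = any(line[0] == "implies" and line[2] == step and line[1] in established
                     for line in established)
        if not ok:
            return False
        established.append(step)
    return True

if __name__ == "__main__":
    p, q, r = ("var", "p"), ("var", "q"), ("var", "r")
    premises = [p, ("implies", p, q), ("implies", q, r)]
    proof = [p, ("implies", p, q), q, ("implies", q, r), r]
    print(check_proof(premises, proof))  # True: each step is justified
    print(check_proof(premises, [r]))    # False: r is not yet derivable
```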
The subject of proof compression is critical because the proofs that are created by automated theorem provers are often rather extensive. Various strategies have been developed in an effort to make the prover's output smaller and, as a result, more readily comprehensible and checkable.
Interactive proof assistants require a human user to give hints to the system. The prover can then be essentially reduced to a proof checker, with the user giving the