A Window on Intelligence

About this ebook

The human mind is the single most powerful entity in the universe. Yet we have made no progress in our efforts to simulate it as artificial general intelligence. Why is that?

In this captivating book, software engineer and philosopher Dennis Hackethal explains the mistakes intelligence researchers have been making – and how to fix them. Based out of Silicon Valley, he proposes a research program for building truly intelligent software, while arguing for a fundamental unification of software engineering and reason generally. Building on the theory of evolution, epistemology, psychotherapy, and astronomy, Hackethal presents a bold new explanation of how people evolved and provides unparalleled insight into the unlimited potential of artificial general intelligence that may one day take us to the stars.

A Window on Intelligence is your field guide to the exciting world of your mind.

Language: English
Release date: June 28, 2020
ISBN: 9781734696141

    Book preview

    A Window on Intelligence - Dennis Hackethal


    Acknowledgments

    I thank my friends and colleagues Thatchaphol Saranurak, Bruce Nielson, and Ella Hoeppner for reading parts of this book before its publication, providing criticism, and suggesting improvements, and David Deutsch, whose books were the main inspiration for this book, for answering my many questions. Thank you to my sister and illustrator Carla Lagemann for making some of the illustrations in this book. Last but not least, I want to thank my high-school teacher Frau Dr. Kant for recommending Popper to me.

    Introduction

    The story of how the human mind evolved is almost too good to be true. Unlike most evolutionary changes, this one was not gradual. It was a jump from a cold, automated machine to a sentient, creative mind, and was, therefore, a momentous change in the history of the universe. For the first time, a part of the world could understand itself – through people. They were the first to experience what they saw whenever they opened their eyes. In front of them was a bleak, hostile world. Yet the very genetic mutation that allowed them to become aware of their surroundings also enabled them to explain the world, and, therefore, to control and improve it.

    From this crucial moment in our evolutionary history onward, human progress chugged along. Then the Enlightenment sped it up significantly (Deutsch 2012a, 29). People have enjoyed rapid progress and technological innovations ever since. And even though hundreds of thousands of years have passed since this seemingly miraculous mutation occurred, we can still trace today’s innovations and abilities back to it. It is this ability to be intelligent and create new knowledge that is also known as creativity (Deutsch 2012a, 30).

    One of the most impactful innovations is the computer. Ever since this invention, people have tried to write intelligent programs. When the late Alan Turing, the father of modern computer science, wrote about AI, or artificial intelligence, the term meant something different from what it means today. He used it to describe a program that has human intelligence. Since then, the use of the term has changed to refer to specific types of software with a certain level of sophistication, particularly those that do not need to be explicitly programmed, such as self-driving cars, master chess players, and text-prediction systems. Thus the term AGI – "artificial general intelligence" – had to be introduced to refer to the original concept of human intelligence again (Deutsch 2012b).

    Some people have put forward other terms such as strong AI (Searle 2009) and real intelligence (Hawkins and Blakeslee 2004), but I use AGI as a stand-in for all of them because, in reality, they are all equivalent. And when I say artificial, I do not mean that it is not genuine intelligence – it is intelligence, only running on a computer. Hardware differences aside, it is still the real deal.

    Current AI research is narrow: such AI works only for specific applications and cannot have a mind of its own. It cannot learn anything its programmers have not designed it to learn. For these reasons, I shall refer to this kind of AI as narrow AI. AGI, by contrast, refers to an instance of the creative program – the program underlying human intelligence – running on a computer other than the human brain. It is this creative program that makes people, people. The project of AGI attempts to replicate it and all the other cognitive abilities of people that creativity enables, such as consciousness and free will.

    To develop an AGI has been the goal for decades – and it would be a great achievement. While we have made a substantial amount of progress in narrow-AI research, we have made none toward AGI (Deutsch 2012a, 152). Why is that? It has been the toughest problem to crack since the days when Alan Turing wrote his Computing Machinery and Intelligence, and it has attracted some of the brightest minds in computer science. Yet they have all failed to build AGI. It is not for lack of trying or thinking. Following the physicist, philosopher, and father of quantum computing David Deutsch, I argue in this book that it is because they have so far had the wrong philosophy.

    There is a widespread prejudice that anything to do with philosophy is hand-wavy, pointless navel-gazing. It is this prejudice that has prevented progress in AGI research. I worry that many software engineers have it because most people in most fields do. While there is indeed much bad philosophy out there, those who fear that philosophy is generally a waste of time need not worry: there exist real philosophical problems that require solving. How to build an AGI is one of them, and it is soluble.

    Philosophy is crucial: it tells us how to think. It determines our every endeavor and our chances for success. Everyone has a philosophy because each of us has a way of going about solving problems. That way is one’s philosophy.

    Software engineers, in particular, should not dismiss philosophy for the simple reason that they routinely contribute to the philosophical field of epistemology without realizing it (Temple 2010): by discovering rules governing the improvement of the structure of programs. Our understanding of what makes some implementations of a program better than others has dramatically improved since David Deutsch recently made important epistemological discoveries (2012a), which I will go over. This book is an application of some of his discoveries to software engineering, and, in turn, an application of software-engineering principles to progress in all domains. For, the search for good explanations is what drives human progress (Deutsch 2012a, vii), and it can be stated as a software-engineering principle.

    Progress in software engineering depends on a single activity: writing good implementations in response to problems. How to write good implementations cannot be known without a good epistemology. It is a programmer’s philosophy that tells him how to write his programs. All software engineers are philosophers, and all people are software engineers, whether they realize this or not.

    Epistemology is the study of knowledge. It provides answers to questions such as: how is knowledge created? How does it grow? These questions are crucial because, by definition, an AGI is a program that can create knowledge of any kind: it is a universal explainer (Deutsch 2012a, 146). It can explain anything people can; indeed, it is a person (Deutsch 2012a, 157). And the study of epistemology is the study of AGI. Therefore, any intelligence research not directed toward epistemology is futile (Deutsch 2012b).

    We know of only one approach to building AGI. Despite appearances, the industry is not pursuing it, because it is the victim of bad philosophy. A programmer with the wrong epistemology will eventually fall prey to it, but armed with a good one, he is unstoppable. Thus, I guess that programmers will continue to be significant contributors to the human project, only more so after the invention of AGI.

    Where does current AGI research go wrong and how might one build it? How did creativity evolve in humans? Are our computers capable of running AGI? Is it safe, or instead man’s last invention? Once built, what might the future hold – would it just make life a little more convenient, or can it take us to the stars?

    These are some of the questions this book is intended to answer. It is suited to anyone interested in intelligence and provides insight into the latest AGI research. No previous programming experience or knowledge of philosophy is required. Code samples are given here and there but explained thoroughly for laypeople. Do not worry about understanding the technical parts of this book – there are not many. You should know, however, that you are already a seasoned programmer, even if you have never written a single line of code. You will learn why in this book. We will explore the epistemologies of the philosophers Karl Popper and David Deutsch and apply them to software engineering in hopes that this approach may one day help build an AGI. Indeed, the title of this book is a reference to chapter 8 of David Deutsch’s book The Beginning of Infinity, called A Window on Infinity. While this book draws heavily on Deutsch’s and Popper’s work, any errors are mine. And while I have tried to reference all source material as thoroughly as possible, I am sure some things have fallen through the cracks.

    Software is not just a tool for solving problems: it shapes the universe and exerts causal power on the physical world around us. Many philosophical problems, including AGI, will not be solved until software engineers adopt the requisite philosophy, and until philosophers realize the importance of software engineering. Likewise, many software-engineering problems cannot be solved without philosophy. Philosophy and software engineering are two sides of the same coin, and I believe this realization will yield fruit in our attempts to build this amazing thing.

    1

    A Brief Account of the Origin of Computer Programs

    Before I can explain my theory of the mind, and with it, how we may one day build AGI, I must first introduce you to the concepts of computation, knowledge, and universality. The first few chapters are my attempt at a summary of these topics as the greatest minds in the corresponding fields, i.e., Karl Popper and David Deutsch, have explained them, with some of my thoughts (and, surely, errors) added in.

    To "compute is often used synonymously with calculate," but that is misleading. This confusion may be responsible for the popular myth that those hoping to become programmers need to be good at math, which in turn deters many from pursuing the craft. While computers did indeed inherit their name from people who manually compiled logarithm tables, they do something far more profound than that.

    Deutsch explains the concept of computation in an interview (2014b): "A theory of computation […] is the theory of how you can use physical objects to represent abstract objects." One does not need to build a machine to do this – as he explains in the same interview, if you want to count to three, you can use fingers to do so. In this example, every finger represents an integer. (All thinking is computation (Deutsch 2012a, 186), but I will get to that later.)

    We loosely model our present-day computers after universal computers. A universal computer takes this ability to represent abstract objects using physical ones to the universal level. It can simulate any computable abstraction in arbitrarily fine detail, and all universal computers share this repertoire (that is the kind of universality referred to in the name). Via those abstractions, a universal computer can simulate any computable physical process (Nielsen 2004). When we speak of a simulation, whatever is being simulated is not being faked. The simulation results in the same information processing (Deutsch 2018a).

    "The set of all possible motions in a universal Turing machine, which is equal to the programs that one can run on it, is in one-to-one correspondence with the set of all possible motions of anything." (Deutsch 2014b) So when a software engineer writes and then runs a computer program, he instructs that computer to move its internals in precisely such a way as to simulate a particular set of abstractions. He "instantiates abstract objects and their relationships in physical objects and their motion." (Deutsch 2014b) Therefore, writing programs is always about simulating abstractions and finding those instructions that will cause the computer to move its internals physically as is required. In a way, software engineers are physicists.
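
    To make this idea a little more concrete, here is a minimal sketch (my own illustration, not taken from the book) of a program instructing a machine to move its internals, here a tape of cells and a read/write position, so that they represent an abstract object: a number being incremented. The function name and the tape representation are arbitrary choices.

        # Python sketch: a physical-looking state (a list of cells) stands in
        # for an abstract object (a binary number), and the program's job is
        # to move that state so it represents the result of an abstract
        # operation (adding one).
        def increment_binary(tape):
            position = len(tape) - 1          # "head" starts at the least significant bit
            while position >= 0 and tape[position] == 1:
                tape[position] = 0            # carry: flip 1 -> 0 and move left
                position -= 1
            if position >= 0:
                tape[position] = 1            # write the carried 1
            else:
                tape.insert(0, 1)             # the number outgrew the tape; extend it
            return tape

        # [1, 0, 1] stands for the abstract number five; after the program
        # runs, the machine's state stands for six.
        print(increment_binary([1, 0, 1]))    # -> [1, 1, 0]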

    Why do software engineers write programs? Programming, like any creative endeavor, always starts with problems. Perhaps one of the first programming problems was how to automatically calculate logarithm and cosine tables for use in navigation and other areas. Large numbers of people – computers – compiled these tables. They contained errors, which cost lives. If this process could be automated using a machine, fewer errors might find their way into the tables.
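
    As a hedged illustration of that original motivating problem (not an example from the book), the following few lines produce a small logarithm table automatically, the kind of table that human computers once compiled by hand. The range and precision are arbitrary choices.

        # Python sketch: generate a small table of base-10 logarithms.
        import math

        def logarithm_table(start, stop):
            # One (x, log10(x)) pair per integer in the range.
            return [(x, round(math.log10(x), 6)) for x in range(start, stop + 1)]

        for x, log_x in logarithm_table(1, 10):
            print(f"{x:3d}  {log_x:.6f}")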

    A tentative solution to this problem was the Difference Engine. It was first conceived of by the German engineer Johann Helfrich Müller in the late 1700s, and again by the English polymath Charles Babbage to address the problem of dying seafarers in the early 1800s. Unfortunately, neither of them ever built it. Later on, Babbage envisioned the first universal computer, which he called the Analytical Engine. The English mathematician and associate of Babbage’s, Ada Lovelace, even wrote one of the first computer programs.

    A problem is not necessarily negative. It is a conflict between two or more ideas (Deutsch 2012a, 31). Babbage recognized conflicts between the actual and desired accuracy of logarithm tables used for seafarers; and also, on a more tragic note, between the desire to save their lives and the reality of their deaths. Today, we write programs to solve all kinds of problems: instant messengers solve the conflict between the desire for real-time communication and the reality of the slow speed of snail mail. Facial recognition solves the conflict between the need for faster identification and the reality of clunkier mechanisms of identification, such as usernames. The universality of computation implies that one can solve all soluble problems by writing the requisite software.

    While developers write all of their programs to solve problems, merely solving a problem is not enough. Not every solution will do. It has to be a good solution: a good program implementation. There is an objective difference between a good and a bad implementation: a program has a good implementation when it is adapted to solving the problem it purports to solve. For it to be adapted means that few changes would make it perform better at solving its problem, and most changes would make it perform worse at that purpose. Every part of the implementation plays a vital role in solving the problem, and changing any part would break its ability to do that. In other words, the program is hard to vary while still accounting for what it purports to account for. (Deutsch 2012a, 31)¹ This means a good implementation resists change. All popular software-engineering principles, such as modularity and reusability, are special cases of the globally applicable principle of being hard to vary.
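
    The following contrast, a toy example of my own rather than the author's, may help illustrate what hard to vary means in code. Both functions average a list of numbers, but the first contains parts that can be changed or removed without affecting what it does, while in the second every part is needed.

        # Python sketch: an easy-to-vary versus a hard-to-vary implementation.
        def average_easy_to_vary(numbers):
            total = 0
            count = 0
            for n in numbers:
                total = total + n + 0         # the "+ 0" does nothing; vary it freely
                count += 1
            scale = 1                         # arbitrary knob with no functional role
            return (total / count) * scale

        def average_hard_to_vary(numbers):
            # Drop the sum, the length, or the division and the function
            # no longer solves its problem.
            return sum(numbers) / len(numbers)

        print(average_easy_to_vary([1, 2, 3]))   # 2.0
        print(average_hard_to_vary([1, 2, 3]))   # 2.0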

    The origin of every program is the programmer’s creativity. To solve a problem, he must first conjecture a solution. Afterward, he can try to improve it. He does this many times and conjectures and criticizes many different approaches in an effort to eliminate bad ones. We know of no other way to solve problems (Popper 2002b, 74). This process is evolution: trial and error correction (Popper 1983, 256-284). The philosopher Karl Popper also referred to it as conjectures and refutations (though he applied it to knowledge creation in general, not just to software engineering). A programmer guesses solutions, criticizes them, and picks the best remaining one – if there is one. If there is not, he has to guess more candidate solutions and criticize them once more, until he has found a tentative solution. That is creativity, and it fuels all human problem solving and knowledge creation, not just software engineering. Since a good program implementation is literally the result of evolution, it is no metaphor to consider it well adapted to solving a problem.
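
    As a rough sketch of this conjecture-and-criticism cycle in code (again my own illustration, with an arbitrarily chosen stand-in problem, finding a number whose square is close to 2), one can guess candidate solutions blindly, criticize each one, and tentatively keep the least refuted guess:

        # Python sketch: guess, criticize, keep the best surviving conjecture.
        import random

        def criticize(candidate):
            # How badly the candidate fails; 0 would mean no known error.
            return abs(candidate * candidate - 2)

        best_guess, best_error = None, float("inf")
        for _ in range(10_000):
            conjecture = random.uniform(0, 2)   # a bold (here: blind) guess
            error = criticize(conjecture)       # subject it to criticism
            if error < best_error:              # tentatively adopt the least-refuted guess
                best_guess, best_error = conjecture, error

        print(f"tentative solution: {best_guess:.4f} (error {best_error:.6f})")

    Of course, human conjectures are not blind guesses like these; the loop only illustrates the structure of trial and error correction, not the creativity that produces good conjectures.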

    A bad implementation is easy to vary while still accounting for what it purports to account for. It may solve a problem, but it does not solve it well. The worse the implementation, the easier it is to make improvements to it, though this always requires additional creativity. Being easy to vary can go two ways. Either a program is easy to vary internally, meaning parts of the program can easily be omitted or changed to improve it, or it is easy to vary externally, meaning it is not adapted to any particular purpose. It does not solve any particular problem, which can sometimes mean it solves several problems poorly.

    Why choose good implementations over bad ones? Because there is a truth of the matter about what constitutes a solution to a problem. As such, programming is an effort to create explanatory knowledge, which is a continuation of an age-old cosmological endeavor: the project of understanding reality, which is unique to people. When there is a problem, meaning a contradiction between two ideas, it tells us that at least one of them is false, because, in reality, there is no such contradiction. Hard-to-vary programs are preferred because choosing one of countless variants without any functional advantage is irrational (Deutsch 2012a, 21). Since a good solution to a problem has sufficient explanatory power to explain everything the conflicting theories do, it usually has a conserving as well as unifying character (Popper 1983, 369). But do hard-to-vary programs contain truth, or are they merely useful? The answer is the former, and I will examine the connection between programs and explanatory knowledge in more detail in chapter 3.

    Good programs have the interesting property of contributing causally to their replication. First, they will come out on top of rival programs during the programmer’s attempt to solve a problem: it is against his interest to discard good implementations. Moreover, they will spread across computers and gain popularity. A good implementation that solves real-world problems will, for example, find users by spreading across the internet as a website or as an app that one can download, and users who like it will tell others about it and encourage them to download it, too. This way, the program ensures² that it keeps running. It is a replicator. Social networks are a prime example of this effect. Moreover, since replicators are one of the primary ingredients of evolution, the evolution of programs is already happening across computers. As I will discuss, building an AGI requires replicating this effect within a computer.

    I should point out that when I say evolution, I am not only referring to a biological concept. For, evolution is fundamentally a theory about abstract replicators. (This fact is reflected in what is now called neo-Darwinism.) Genes merely happen to be instances of such replicators. Nonetheless, the origin of computer programs does closely resemble the origin of life (more on that in chapter 5). Evolution has so far only happened in a minimal sense on computers, in the form of evolutionary algorithms. They are limited because, like all other computer programs, they reliably carry out a particular purpose and then terminate. In a sense, they are pessimistic: they only solve a problem or set of problems without finding new ones. Current computer programs do not make mistakes, but mistakes are a vital ingredient of creativity because they make error correction possible in the first place. Thus, present-day programs resist change and cannot keep growing in an unbounded fashion. To facilitate unbounded growth, we still require the programmer’s creativity. AGI, on the other hand, will be able to make, find, and correct mistakes and keep solving ever newer and better problems in an unbounded fashion just like people do.
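
    To make the contrast concrete, here is a minimal evolutionary algorithm of the limited kind described above (an illustrative sketch with an arbitrary target and parameters). It varies candidates, selects the fittest, and terminates the moment its preset problem is solved; it cannot find new problems of its own.

        # Python sketch: a minimal evolutionary algorithm with a fixed goal.
        import random
        import string

        TARGET = "HELLO"
        ALPHABET = string.ascii_uppercase

        def fitness(candidate):
            # Number of positions that already match the fixed target.
            return sum(a == b for a, b in zip(candidate, TARGET))

        def mutate(candidate):
            # Vary one randomly chosen position.
            i = random.randrange(len(candidate))
            return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

        candidate = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        generation = 0
        while fitness(candidate) < len(TARGET):
            variants = [mutate(candidate) for _ in range(50)] + [candidate]
            candidate = max(variants, key=fitness)   # selection: keep the fittest
            generation += 1

        print(f"reached {candidate!r} after {generation} generations")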

    Terminology

    To compute – To instantiate abstract objects and their relationships in physical objects and their motion. (Deutsch 2014b)

    Problem – A conflict between two or more ideas (Deutsch 2012a, 31)

    Computer program – A set of instructions that command a computer to move its internals in a way that instantiates the desired abstractions

    Good/bad program implementation – An implementation of a program that is hard/easy to vary internally and/or externally while still accounting for what it purports to account for

    Well/poorly adapted to a purpose – Being hard/easy to vary while fulfilling that purpose

    Internal variability – When it is easy to change the internals of something without breaking its ability to fulfill its purpose

    External variability – When something is not adapted to any particular purpose

    Replication – Self-induced copying

    Replicator – Something that contributes to its copying

    Creativity – The ability to create new explanations

    Evolution – The creation of knowledge through alternating variation and selection

    AGI – Artificial General Intelligence; a running instance of the creative algorithm

    Summary

    Programs originate in the creative minds of programmers who write them to solve problems. They instruct a computer to move its internals in such a way that it will instantiate abstractions that solve the problem at hand.

    Programs are the result of alternating conjecture and criticism in the programmer’s mind: evolution. They replicate, meaning they contribute causally to their copying.

    A good implementation is hard to vary internally and externally.

    Once programmers create knowledge to solve a problem, they can make this knowledge explicit by translating it into computer programs. Now, how do programmers – or anyone, for that matter – know anything? What is knowledge, and where does it come from?

    2

    How Do We Know?

    The question of how we know is an old problem. Our best answers to this question were given by Popper (1983, 2002b) and Deutsch (2012a). This chapter roughly summarizes their findings.

    The problem of how we know arose in combination with the problem of change.³ Philosophers in ancient Greece noticed that some changes escape our senses, and yet we know that they occur. "We do not see our children grow up, and change, and grow old, but they do." (Popper 2002b, 194) This realization is surprisingly simple yet profound. It hints at how limited a role the senses play in the creation of knowledge and how they can sometimes even hinder it: if we focused only on what we see, we would never notice our children growing up. That requires additional conjecture.

    To solve problems, we need explanations. They answer questions of why and how. In doing so, they describe reality: they explain what is really out there, how it works, and why (Deutsch 2012a, 30). So knowledge is not just an accumulation of statements or facts – for example, a history book containing only facts about a war and the corresponding dates by no means explains the war’s causes or its lasting effects.

    We create explanations when we solve problems. As I said previously, a problem is a conflict between two or more existing explanations; for example, when a theory makes a prediction that is expected to come true but does not. The result is a conflict between the interpretation of an observation – also a theory – and the original explanation. For example, a conjurer may perform a trick that makes it look as though a ball is floating in mid-air. Such a trick would violate one of the more basic explanations in your mind about the world: your best (mostly intuitive) explanation of gravity says that the ball should fall to the ground. The problem is that you do not observe it falling. You will want to explain it. To do so, you require a new explanation that does not have the conflict, and yet still explains everything the conflicting theories explain. So the new theory (the solution) has to conserve the previous theory’s explanatory power, plus it needs to explain the problematic observation.

    One viable – though in this case not very detailed – explanation is that the conjurer is performing a trick and that the laws of physics and gravity still hold. As I stated in the previous chapter, the reason we look for explanations that do not have the conflict is that our theories are accounts of how reality works, and no such conflicts exist in reality. Therefore, any conflict between theories shows that at least one of them is false. Sometimes, all of them are.

    All this is as true for magic tricks as it is for the inner workings of stars, the mechanisms of curing a disease, and how to write computer programs. We need explanations for all of them. Where do explanations come from? For much of history, people thought that we derive them from our senses. This idea is known as empiricism. However, empiricism cannot be true, because we could not possibly derive knowledge about empiricism itself from the senses, and so empiricism rules itself out. Though this is a simple and powerful refutation, empiricists ignore it; they prefer to marvel at the mysteries of how we receive knowledge from our senses anyway, instead of looking for a better theory.

    According to our best theories, we guess knowledge. The origin of all human knowledge is bold conjectures. We create new knowledge "by rearranging, combining, altering and adding to existing ideas with the intention of improving upon them." (Deutsch 2012a, 4) This activity is creativity.

    Senses do play a role – but only when we test our theories, and tests are only a small part of the criticism to which we can subject our theories. Because theories are the result of guesswork, we should only ever adopt them tentatively. All people make mistakes – we are fallible – so we should expect even our best knowledge to contain mistakes in addition to truth. There are no authoritative sources of knowledge, nor is there a way to establish a theory’s truth or likelihood. The acknowledgment of these facts – a key ingredient of Popper’s epistemology – is known as fallibilism. We should always expect to find more problems with our theories, and even better explanations to supersede those theories. As long as we continue to look for problems, this process can continue forever. This way, we can make unbounded progress, and that is why science and philosophy are both unended quests.

    Knowledge is adapted information (Deutsch 2015, 5:06). Just like computer programs, it is information that is adapted to the purpose of solving a problem. Few changes would make it perform better at that purpose, and it would be difficult to change it without making it perform worse.

    When Popper worked on finding the demarcation between science and non-science, he suggested that for a theory to be considered scientific, it needs to make testable predictions. But testability alone cannot be sufficient (Deutsch 2012a, 14). Consider the following bad explanation of lightning: If somebody told you that the Greek god Zeus caused thunder by angrily throwing bolts of lightning down onto Earth from up on Mount Olympus, would you consider it a scientific theory? It explains the cause of lightning in the sense that we can deduce the occurrence of lightning from it. It is testable as well: it predicts that if we climbed up Mount Olympus, we could see and meet Zeus there; presumably, we could even talk to him. But no-one in his right mind would send an expedition.

    Popper understood this. For example, he said that origin myths, which often invoke gods and other supernatural entities, whimsical as they may be, are fine starting points to explain the origin of the world as long as we are willing to ask some awkward questions (n.d.).

    What is the purpose of awkward questions? It is criticism, which almost always precedes tests. We can criticize the Zeus theory of lightning because it is too easy to change some of its details without destroying the entire theory. For example, why Zeus, and not a wizard? Why is he angry? Why is he on Mount Olympus and not another mountain? Greek myths may have placed him there, but the theory still works if he throws lightning off of other mountains.

    Just like bad programs, this explanation is easy to vary. We can easily change its components without hurting its explanatory power. This explanation is, by definition, not adapted to the purpose of explaining lightning. Therefore, it is a bad explanation and not scientific. Consider the following variant, which is just as bad: Whenever Zeus is especially happy with humans, he celebrates them by causing a lightning show similar to today’s fireworks. The reason both explanations are so easily variable is that their details have no bearing on whether the phenomena we are trying to explain will occur (Deutsch 2012a, 21): it does not matter whether Zeus is happy or angry; he may cause lightning either way. There is no reason to prefer either theory because they both make the same predictions while being easily variable, and they allow us to change their internals to account for problematic observations should we need to do so.

    Ultimately, all bad explanations are variants of a single, very bad explanation that is easy to vary both internally and externally: "the gods did it." (Deutsch 2012a, 21) It is easy to vary internally because you can replace gods with any other supernatural being, such as angels or wizards, and it still explains whatever it purports to explain. Likewise, it is easy to vary externally because it accounts for any problem whatsoever: it explains anything at all. When we refute an explanation, it becomes easy to vary: there is now a problem it cannot account for, and so it can be varied in many ways while still not accounting for that problem. Through the ages, there have always been charlatans. Today, they include advocates of homeopathic remedies, chiropractors, acupuncturists, and the like. Thankfully, we can quickly identify them by the bad explanations they propose.

    Our best explanation of lightning is hard to vary: roughly, it explains lightning as an electric current that is the result of electrical charge building up in a cloud due to friction between ice crystals. High altitudes are sufficiently cold for ice crystals to form. You will fail to change the internals of this explanation without it falling apart. Remove the ice crystals, and you do not have friction. Without friction, you do not get the necessary electrical charge. And without
