
A profound irony now engulfs the AI software industry. To create AI programs that meet universal ethical standards, those standards must first be installed in the minds and belief systems of AI research program managers, AI architects, developers, and code writers before they write the code for moral and ethical AI software programs. The sticking point of that irony is that the AI industry does not currently have a logical, universal, and timeless set of values, morality, and ethics for advanced AI decision-making. The values, morality, and ethics discussed in the following chapters do provide that universal and timeless standard. The resolution to AI’s predicament will be found in this short paper. If AI is to become humane in its decision-making, then it must take on the values that are innate to our species, and react by presenting moral and ethical options that erupt from those values.

ARTIFICIAL INTELLIGENCE
A Protocol for Setting Moral and Ethical Operational Standards

By Daniel Raphael, PhD

“If AI is to become humane in its decision-making, then it must take on the values that are innate to our species, and react by presenting moral and ethical options that erupt from those values.”

daniel.raphaelphd@gmail.com
https://sites.google.com/view/danielraphael/free-downloads
https://independent.academia.edu/DanielRaphael1

© Copyright Daniel Raphael 2018 USA

● (p 19) Hidden within Musk’s and Hawking’s quotes is an unconscious awareness that undirected social change is the most dangerous element now threatening all existing societies, cultures, and nations.
● (p 27) If Musk and Hawking are right, then a moral authority is needed, one that can weigh the best interests of humanity and the quality of life of communities, societies, nations, and of all civilization without self-interest.
● (p 27) Because of the logic-relationship between the seven values and their characteristics, which extends to the morality and ethics that emanate from them, future AI programs that are embedded with those values will arrive at rational, ethical, and moral conclusions with the sureness of ones and zeros.
● (p 33) The material you have read so far may lead you to believe I have created a bubble of moral and ethical idealism that is not connected to the realities of today. Ironically, the reality is that most people are not consciously aware that most of the world continues to use an archaic morality that is not capable of pointing the way forward to sustain families, organizations, governments, and cultures into a long and prospering future.
● (p 35) Bad Code.
From a contemporary technological perspective, the traditional morality of western civilization for the last 4,000 years is a form of morality that in computer terms is “bad code.”
● (p 37) If we are to grasp the existential angst of Robert Oppenheimer, Father of the Atomic Bomb, whose famous quote from the Hindu sacred scripture the Bhagavad-Gita is largely unappreciated, “Now I am become Death, the destroyer of worlds,” then AI architects are walking in the existential shoes of Dr. Oppenheimer, but without his consciousness.
● (p 40) The danger of AI development is that most people have not been taught the basic elements of discernment, and do not have the ability to make competent, let alone cogent, distinctions of discernment. Think of discernment as an app of human intelligence.

— opus unius hominis vitae —

No Broken Hearts is an Imprint of Daniel Raphael Publishing ~ Daniel Raphael Consulting
PO Box 2408, Evergreen, Colorado 80437 USA

Table of Contents

Introduction
1. The Fundamentals of Decision-Making
   Characteristics of the Seven Values, Succinctly Stated
   The Four Primary Values, Succinctly Stated
   The Three Secondary Value-Emotions That Make Us Human
   Human Motivation
   Human Motivation, the Power Behind Social Change
   Priorities of Decision-Making
   A Comment on Elon Musk’s and Stephen Hawking’s Quotes
2. Values, Morality, and Ethics of Socially Sustaining Decision-Making
   Succinct Moral and Ethical Logic-Sequences for the Seven Values
   The Logic of a Proactive Morality and Ethic Follows this Sequence
   The Critical Position of AI
3. Understanding Why We Are Concerned about the Future
   Introduction
   Sustainability — Bedrock for Moral and Ethical Decision-Making
   The Durations of Existence
   The Durations of “Sustaining”
   Brief Summary
4. The Morality and Ethics of Today
   Traditional Morality
5. Crisis and Opportunity
BIO

INTRODUCTION

This paper introduces a theory OF ethics, not a discussion about ethics. As such, almost no references to other sources are used, as this is a work of original authorship.

TRUTH: Questions are basic for the distillation of experience into wisdom.

QUESTION: What challenges does this paper offer to the moral and ethical development of AI?

ANSWER: The challenge for AI exists at two very distinct levels. The simple challenge is for operational AI programs to conform to the moral and ethical standards described in this text. The higher-level challenge is to develop an AI program that can then monitor and validate other AI programs as meeting those moral and ethical standards.
Doing so would remove the necessity of centralized, authoritarian, human-based judgment, which has always eventually become fraught with self-interest and unethical compromise.

If, as Elon Musk suggests, “AI [artificial intelligence] is humanity’s biggest threat,” 1 then it is timely in this early era of AI development that moral and ethical standards of operation be put into place to guide AI’s development and evolution. Musk’s thoughts were echoed by Stephen Hawking on CNBC, Monday, November 6, 2017:

"Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.

"Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."

1 A quote given by Mr. Musk at the National Governors Association meeting in Rhode Island, as reported by the Wall Street Journal, July 16, 2017.

Musk’s and Hawking’s insights contain an unconscious historic reference to the failure of ALL human organizations and hierarchies, whether military, political, economic, religious, or otherwise. The proof lies in the archeological detritus of 30,000 years of human social existence: former cities, nations, empires, dynasties, and cultures. It is fair to say that the “original cause” for those failures still exists today. The “original cause” of most societal failures has been that what leaders think will work, doesn’t. Will AI work against those original causes of failure, or work for a sustaining, thriving future for humanity, or both?
What Musk’s insight does not reveal is that the Homo sapiens species has sustained its survival for over 200,000 years. What is revealed in the pages ahead are the fundamentals that reconcile the irony between the survival of our species and the failure of our species’ organizational social existence, as the two now intersect with the arrival of AI to begin a new phase of the Technological Revolution.

Logically, whatever has given our species its adaptability and survival capability was lacking in the organizational structures that rose, crested, declined, collapsed, and disappeared during the course of those 30,000 years. Being aware of this, we also know that ALL contemporary organizational structures and hierarchies of organizations, corporations, and governments today will also fail unless proactive measures are taken at the outset.

As an insight into what is to come: the decisions made by individuals of our species were made without the conscious intention to support the survival of our species. In contrast, over the last 30,000 years organizations, cities, administrations, and other governing bodies have been brought into existence with what appears to be a haphazard intention of design, but without the conscious intention that those organizational designs become self-sustaining into the centuries ahead. This distinction is highly significant to our own organizations today, whether a village council, corporation, non-profit foundation, national government, or any other organization. All will eventually fail if we do not consciously and intentionally improve the decision-making processes of our organizations, and obviously those of AI programs as well.

On the macro-scale of the long arc of human existence, what humans consciously think will work has almost always failed. In contrast, what unconsciously drives the sustainability of our species does work.
Our challenge ahead is to design organizations to become socially sustainable 2 with the capability to become mutually self-sustaining into the decades and centuries ahead. The challenge for AI is to make effective moral and ethical contributions to the long arc of human social existence, and the survival of our civilization.

ABOUT AI’s Moral and Ethical Predicament

The moral and ethical predicament of AI extends far beyond AI to include all present and historic presentations of morality and ethics. Morality and ethics have always been taught, discussed, argued, and debated because those efforts have always been ABOUT THE THEORIES of morality and ethics. In comparison, no one seriously argues ABOUT the metric system of weights and measures, because everyone has accepted the universal standards upon which the metric system is founded. Not so with morality and ethics.

The discussions, classroom instruction materials, dissertations, theses, conferences, workshops, meetings, associations, and journals, for example, all have one thing in common: they are all ABOUT theories of morality and ethics, but not theories OF morality and ethics. The reason is that until now the values that underlie moral and ethical decision-making have never been identified and named. Further, the values that have been used in arguments about morality and ethics do not exist in a context of moral and ethical behavior that can be taught.

For over 4,000 years our awareness of morality and ethics has been experienced much like looking at a photographic negative to interpret a picture. Four thousand years of proscriptive statements have not helped anyone reveal a set of values that can initiate proactive moral and ethical decision-making and behavior. In very humble terms, talking about morality and ethics is much like talking about cake.
Talking about cake can reveal many facets of discussion that may include texture, density, flavor, consistency, and so on ad infinitum, but you will never KNOW cake until you have a recipe and all of the necessary ingredients to make cake, and then verify cake in your life by actually having the EXPERIENCE of making a cake and then eating it.

2 Definition: Social sustainability is a process and ideology that integrates the disparate parts of society into a congruent system. UNDERSTANDING Social Sustainability is available from the author’s Google website.

It is the same for morality and ethics. Until now there has never existed an identifiable “recipe of ingredients” that supports a philosophy of morality and ethics, to truly know what is moral and what is ethical, and what is not. This has been due to the absence of an identifiable, integrated, timeless, and universal set of values.

● The predicament of AI is a predicament for all of humanity — how will it ever be possible to write logical and rational AI programs that empower AI to form moral and ethical decisions and recommendations if the AI architects, program developers, and code writers do not know how to discern what is moral and what is ethical, and what is not, nor how to discern their own biases? 3

3 Hempel, Jessi. 2018. “The Human in the Machine.” WIRED, “Less Artificial, More Intelligent,” December, 91-95.
Raphael, Daniel. 2018. Making Sense of Ethics — A Unique, Unified Normative Theory of Ethics, Morality and Values. No Broken Hearts, Daniel Raphael Publishing. Available online at the author’s website:
https://sites.google.com/view/danielraphael/free-downloads

1
THE FUNDAMENTALS OF SUSTAINING DECISION-MAKING

TRUTH: The competence of the question determines the competence of the answer.
TRUTH: Values always underlie all decisions.
TRUTH: Decisions always underlie all actions.

FACT: Homo sapiens exist today because 8,000 generations of our ancestors learned how to sustain their survival.

QUESTION: What made the survival of our species possible?

ANSWER: Logically, many individuals of our species generally made decisions that supported the continuing survival of our species and our presence today.

QUESTION: What values, then, supported the survival-decisions of our species?

DISCUSSION: The author’s investigation into those values began in 2007 in an experimental “Design Team” whose intention was to discover the causes of personal disappointment in intimate relationships. Through the Team’s discussions and an “Ah-ha!” insight by the author, those values are succinctly illustrated below. The team also discovered that failure and disappointment, whether in intimate or business relationships, are outcomes of erroneous expectations, beliefs, assumptions, and personal interpretations of the seven values that are innate to our species.

CHARACTERISTICS OF THE SEVEN VALUES, SUCCINCTLY STATED

DISCUSSION: The values and characteristics become evident from one fact: Homo sapiens have survived for over 200,000 years, with our presence today as evidence. The chain of logic then begins to unfold: our species’ survival includes all races, cultures, ethnicities, nations, and genders, meaning that the values that underlie the survival-decisions made by our ancient ancestors are universal to all people.
Logically, we personally know that those values are not learned behaviors, but have existed as unconscious motivators from the earliest times of our species. Logically, those values exist in each of us today, though most people are not consciously aware of them.

Self-Evident. The self-evident nature of these values is only one of several characteristics that have obscured their presence while in plain sight. These values have remained outside of our conscious awareness until recently.

Universal. These values are universal to all people of all races, cultures, ethnicities, nations, and genders.

Innate / Timeless. Being universal to all people, logically these values have every appearance of being embedded in the DNA of our species. In order for AI programs to become moral and ethical, those same values, morality, and ethics must also be embedded in their (DNA) code.

Irreducible / Immutable. LIFE, the three primary values, and the three secondary values are the superordinate values of our species and are not subordinate to any other values.

THE FOUR PRIMARY VALUES, SUCCINCTLY STATED

Life is the ultimate value. Life, the three primary values, and the three secondary values create an integral system of values.

Equality is inherent in the value of life — everyone’s life is valuable.

Growth is essential for improving our quality of life. To be human is to strive to grow into our innate potential. Only a proactive morality and ethic has the capability to support the growth of others.

Quality of Life. While life is fundamental to survival and continued existence, it is the quality of life that makes life worth living and gives life meaning. In a democracy, access to the quality of life is provided when a person not only has an equal right to life, but also has an equal right to growth, as anyone else.
THE THREE SECONDARY VALUE-EMOTIONS THAT MAKE US HUMAN

Equality → Empathy, Compassion, and “Love”

The three secondary value-emotions emanate from the primary value equality. The reason we are so sensitive to issues of equality is that we have the innate capacity of empathy — to “feel,” or put our self in the place of, another person and sense what that is like, whether in anguish or in joy. Feeling that, our empathy urges us to act in compassion, to reach out to the other person and assist them in their plight. We generalize empathy and compassion for all of humanity with the term “Love,” the capacity to care for another person, or all of humanity, as we would for our self.

The secondary value-emotions are innate to our species and exist in us as an impulse to do good. They are proof that people are innately good.

DISCUSSION

The discovery of the seven values and their characteristics provides a permanent, rock-solid foundation for the development of moral and ethical standards for all AI programs. The validation of the existence of these values and their subsequent morality and ethics lies within each individual who is reading this. Because of that, these values and their characteristics provide the standards for the development of morally and ethically reliable AI programs, much as geometric constants provide for the development of reliable geometric programs. Consider the constant π in the formula C = 2πr, the circumference of a circle.

Until now there has never existed a set of constant values to weigh moral and ethical decision-making. Until now there have never existed constant values with which to write computer code that deals with human decision-making involving personal and collective social relationships.
Until now there has never existed an integrated system of values to guide the development of AI programs, to embed those values as code in those programs, and to write an AI program that would become the validator for the moral and ethical decision-making of other AI programs. Now we do. Once these values become accepted for what they are, the morality and ethics that emanate from them will someday become as well accepted as mathematical and geometric constants.

HUMAN MOTIVATION

The pursuit of equality, growth, and an improving quality of life provides the foundation for human motivation as interpreted by the individual, and expresses itself in a personal hierarchy of needs. These values motivate all people — as they interpret them! Our interpretations of those seven values give rise to a hierarchy of needs (Abraham Maslow).

Human motivation is at the core of all human activity, for good or bad. By understanding the fundamentals of human motivation, social scientists and economists, for example, will have a huge advantage in more accurately predicting human behavior. AI and advanced computer technologies will figure pivotally in that process, and will open the way for a species-specific level of AI decision-making.

Now that we appreciate the logic-connection between the characteristics and the seven values, we can begin the elemental phases of AI development that will protect the survival of our civilization. What will make those programs humane is the necessary inclusion of the three secondary values that give humans their humanity, and that are indispensable for the morality and ethics discussed in the next chapter.

Because humans have been unaware of the innate values within themselves that have motivated them in their lives, a uniform and unified theory of human motivation has never come into existence, until now.
Together, the innate seven values of our species provide us with a unified, values-based theory of human motivation. Eponymously, it becomes the Raphael Unified Theory of Human Motivation.

CAVEAT. The historic failure to predict the course of social change has been a result of not understanding the original causes of social change. Embedding the seven values into AI programs will give organizations the capability to anticipate the course of social change beforehand. Predictability is preparatory to successful adaptability.

HUMAN MOTIVATION, THE POWER BEHIND SOCIAL CHANGE

Human motivation is the cause of social change. The key to understanding social change begins with understanding human motivation, and that begins with understanding the power of the four primary values. They provide us with incessant urgings and yearnings to survive and to strive to grow into our innate potential, equally as anyone else would or could.

What we define as social change is the collective movement of vast numbers of people who are striving to satisfy their evolving personal interpretations of the values that have sustained our species. Their personally interpreted values provide the basis for the evolving hierarchy of needs described by Dr. Abraham Maslow, and for the mischief that has led to the eventual demise of most societies and nations. Our personal hierarchy of needs evolves as our interpretations of the seven innate values evolve — we are still using the same value system as our ancestors did tens of thousands of years ago, but we interpret those values in new ways. Collectively, as individuals improve the quality of their life, i.e., satisfy their needs and grow into their innate potential, they create social change through their “demand” for new means to fulfill their evolving needs.
Perceptive marketers strive to be in touch and in tune with the “demand” of the public to assess any changes in the market for the potential of new services and products. While individual interpretations of the four primary values may vary wildly from one person to the next, vast numbers of people provide slow-moving, ongoing trends that stabilize the movement of a society over time. Social instability occurs when vast numbers of people sense that their ability to satisfy their needs is being threatened; it occurs rapidly and violently when they simultaneously sense that their ability is imminently threatened and there is no hope of preventing the threat.

PRIORITIES OF DECISION-MAKING

What is less obvious regarding the unconscious and unintentional decision-making behind all human survival-decisions is the priority of decision-making that was involved. In order to support the social sustainability of all human organizations, we today must become very conscious and very intentional in our decision-making, as individuals and as executives of organizations, in order to support the elemental factors of functional, sustainable societies and nations.

The basis for the illustration below is the seven innate values used by our species for its survival. The logic-tree was expanded to illustrate a logical and rational process for reframing human motivation collectively, from the simple task of reproduction to sustain our species to the far more consciously responsible task of sustaining the social existence of our communities and societies. The illustration makes it clear that there is a reciprocal and symbiotic relationship between the individual/family and organizations to jointly support the sustainability of the communities and societies in which they both exist.
The socially sustainable survival of communities and societies is dependent upon all individuals/families and organizations faithfully using the seven values as the criteria for their decisions. The benefit will be the development of stable and peaceful communities and societies.

The First Priority is always to sustain the species, because it carries the genetic program of our species. The primal motivation of the individual is to reproduce to sustain the continuation of the species. At the early animal-survival level of our species, that does not require a family, community, society, organizations, or morality and ethics. For organizations, sustaining the species means not polluting or endangering the species in any way that would damage the genetic program. For families, it means teaching children how to live in a functional, loving family, and how to live peacefully in the community and the larger society. That may seem as though I have stated the obvious, but the other side of that statement is raising children without any direction for establishing their own functional family, and raising children who do not know how to live peacefully in their community and society. When that occurs, the disintegration of families, communities, and societies has begun.

The Second Priority is to sustain the social fabric (functional families) that holds communities and societies together. Because individuals/families and organizations are the only decision-makers in the decision-making tree, their individual and joint responsibility is to support the social sustainability of their communities and societies. The reason organizations are directly responsible is that families are the primary socializing and enculturating social institution that can produce well-qualified, socially capable, responsible, and competent employees.
The source of all future generations of directors, managers, executives, middle managers, supervisors, team leaders, consultants, and the great body of employees is families. If the quality of a child’s preparation for entering the work force, whether as a laborer or as a member of a board of directors, is high, then organizations will benefit from the good work of the parents who raised that child.

This second priority supports the synergistic relationship between the individual/family and organizations. It is a two-way relationship: if families raise children well, then organizations will be managed well. If not, then organizations will make many mistakes, as was recently (2016-2018) evident in the egregious decisions at the highest corporate executive levels of Wells Fargo and Volkswagen.

What is missing from this decision-making tree are the criteria, or rules, for the moral and ethical decisions that will keep (sustain) the families and organizations of our communities and societies running smoothly, so that everyone arrives in the far distant future with the same or better quality of life as we have today. When that is in place, the primary elements of social sustainability will be in place.

As AI is an invention made by humans, the best results from AI programs will only come about when AI programs are invested with the most reliable moral and ethical decision-making processes to sustain societies and nations into a thriving future. Organizations are likewise an invention of people, and are therefore dependent upon the quality of decisions made by those who execute decisions for their organization.

When we give the illustration above deeper thought, some very large insights become visible. Ironically, in developed and complex societies no thought is ever given to sustaining the species. We take that for granted. What we fear is the collapse of our societies and communities, which would threaten the collapse of our families and our way of life.
The irony of it all is that no one ever really gives any thought to the sustainability of the societies and communities that support the well-being and lifestyles of our families. In other words, no one has really given much thought to making decisions about the social sustainability of the family AND society.

A COMMENT ON ELON MUSK’S AND STEPHEN HAWKING’S QUOTES

Hidden within Musk’s and Hawking’s quotes is an unconscious awareness that undirected social change is the most dangerous element now threatening all existing societies, cultures, and nations. AI will become the generator of great and destructive social change if there does not exist a uniform and unified purpose for its existence and a timeless and universal set of ethics for making those decisions.

QUESTION: Is Artificial Intelligence to become another exacerbating element that causes undirected social change to become even more dangerous?

ANSWER: AI has the potential to become the progenitor of increased and uncontrollable social change that will threaten the survival of civilization. The social survival of our families, communities, societies, and civilization is dependent upon the intention of professional AI managers to embed the seven values of our species, and their subsequent morality and ethics, into every AI program. Doing so, AI programs will become the moral and ethical backbone that resists human decision-making that would otherwise be filled with self-interest from positions of authority, power, and control. From the position of risk management, embedding the values, morality, and ethics into executive, management, and AI decision-making processes is a very sound means of reducing an organization’s exposure to the liability of wrongful decision-making.
The four primary values (life, equality, growth, quality of life) and our adaptive intelligence have given our species the logic to survive. The three secondary values (empathy, compassion, and a generalized “Love” for humanity) give our species the ethical reasoning capacity to adapt our behavior so that we can act out the principles of ethical behavior that include fairness, justice, integrity, respect, loyalty, truth, trust, accountability, responsibility, and being transparent, authentic, and honest, for example, to support the social existence of humanity. The primary factor in applying humane answers to social problems is conscious intention.

The following distinction is important to AI applications. The primary values historically have been acted out by UNconscious intention. The secondary values, however, are almost always acted out by conscious intention. The secondary values are the values that sustain the social existence of families, communities, and societies. Now, with AI in its early developmental stages, that distinction becomes a matter of survival for our civilization. If AI programs are designed solely with the four primary values, then the AI program will take on those values for its own survival. But for AI programs to become the perennial helpmate of humanity’s transcendence, those programs must be programmed with the three secondary values as well.

We can predict that at some point in the future some AI programs will operate autonomously of human input. What moral and ethical rules must be in place before AI programs evolve to have that capability? The best outcome would be an autonomous AI program that acts with a social conscience, using all seven values AND the morality and ethics that emanate from those values, to offer a span of moral and ethical options and related considerations.
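The distinction drawn here, that a humane AI must weigh the secondary values alongside the primary ones, can be illustrated with a minimal sketch. Everything below (the function name, the value sets, the pass/fail rule) is a hypothetical illustration invented for this example, not an implementation described in the text:

```python
# The four primary values and three secondary value-emotions named in this paper.
PRIMARY_VALUES = {"life", "equality", "growth", "quality of life"}
SECONDARY_VALUES = {"empathy", "compassion", "love"}  # the "social conscience"

def is_humane_option(option_values: set[str]) -> bool:
    """Illustrative rule: an option qualifies as humane only if it honors at
    least one primary value AND engages at least one secondary value-emotion.

    An AI weighing only the primary values optimizes for survival alone; per
    the text, the secondary values are what give choices a social conscience.
    """
    return bool(option_values & PRIMARY_VALUES) and bool(option_values & SECONDARY_VALUES)

# A survival-only option fails the test; one tempered by compassion passes.
print(is_humane_option({"life", "growth"}))      # survival logic only
print(is_humane_option({"life", "compassion"}))  # survival plus social conscience
```

The design point the sketch makes is the author's: a program built on the primary values alone can be logically sound yet inhumane, so the secondary values must be a hard requirement, not an optional weighting.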
If AI is not invested with the secondary values that will give it a social conscience, then it will become a logic-weapon of civilization's destruction. To prevent that from happening, AI must value all people equally. Failing to include the secondary values, AI will simply become an intelligent weapon of self-serving people.

Transforming Undirected Social Change

Using these values in all personal and organizational decision-making processes will proactively guide society in a positive, evolutionary direction, and provide the potential for a democratic nation to transcend the 30,000-year failed history of organizations. The benefit of moving from the non-logical, non-integrated traditional ethical values of decision-making to the integrated system of the seven values, morality, and ethics will be to unconsciously transform undirected social change into directed social change. Consistent use of these values, morality, and ethics will give democratic nations the capability to transcend their history as social change transforms into positive, progressive, and constructive social evolution.

2 VALUES, MORALITY, AND ETHICS OF SOCIALLY SUSTAINING DECISION-MAKING

"Survival of our species is not dependent upon our social existence. Our sustainable social existence, however, is dependent upon the conscious and intentional moral and ethical decision-making of individuals and organizations based on the values that have sustained the survival of our species. The same must exist in AI programs."

SUCCINCT MORAL AND ETHICAL LOGIC-SEQUENCES FOR THE SEVEN VALUES 4

A Brief Review

Life is the Ultimate Value. Equality, Growth, and Quality of Life are the values that sustain the survival of our species. Empathy, Compassion, and the "Love" for humanity are the values that make it possible to sustain social existence.
— 4 This chapter rests upon the shoulders of two prior papers by the author: Making Sense of Ethics — A Unique, Unified Normative Theory of Ethics, Values, and Morality; and ORGANIC MORALITY, Answering the Critically Important Moral Questions of the 3rd Millennium. Both are available from the author's Google website.

The Logic of a Proactive Morality and Ethic Follows this Sequence

SEVEN VALUES ➔ MORAL DEFINITIONS ➔ ETHICS STATEMENT ➔ EXPRESSED ETHICS ➔ THE GRACES OF EXPRESSED ETHICS

● The Four Primary Values underlie the decisions responsible for the survival of our species;
● Moral Definitions provide the rules that guide human decisions and actions to prevent destructive, life-altering behavior in human interaction;
● Ethics Statements tell us how to fulfill Moral Definitions;
● Expressed Ethics tell us what to do to fulfill Ethics Statements;
● The Graces of Expressed Ethics are the states of being that smooth social interaction.

An example using Growth as the primary value in the logic-sequence: The Proactive Moral Definition for Growth tells us to make decisions and take action for improving the quality of life and unleashing the potential of others as you would for your self. The Ethics Statement tells us how: "Assist others to grow into their innate potential just as you would for your self." Expressed Ethics tell us what to do to help others grow into their innate potential: for example, be fair, and have integrity, acceptance, and appreciation for that person. The Graces of Expressed Ethics add a qualitative "texture" to our personal interaction with others. The Graces suggest that being kind, considerate, caring, confident, generous, meek, mild, modest, strong but humble, thoughtful, patient, tolerant, positive, and friendly will go a long way toward making that person feel comfortable with the challenges that growth always provides.
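The five-stage logic-sequence lends itself to a simple data model. The sketch below is a hypothetical Python rendering, populated with the chapter's own Growth example; the `ValueSpec` class and its field names are my assumptions for illustration, not an established schema.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the logic-sequence:
# SEVEN VALUES -> MORAL DEFINITION -> ETHICS STATEMENT -> EXPRESSED ETHICS -> GRACES
@dataclass
class ValueSpec:
    name: str                    # one of the seven values
    moral_definition: str        # the rule that guides decisions and actions
    ethics_statement: str        # "how" to fulfill the Moral Definition
    expressed_ethics: list = field(default_factory=list)  # "what to do"
    graces: list = field(default_factory=list)            # qualitative "texture"

# Populated with the Growth example from the text.
growth = ValueSpec(
    name="Growth",
    moral_definition=("Make decisions and take action for improving the quality "
                      "of life and unleashing the potential of others as you "
                      "would for your self."),
    ethics_statement=("Assist others to grow into their innate potential just "
                      "as you would for your self."),
    expressed_ethics=["fairness", "integrity", "acceptance", "appreciation"],
    graces=["kind", "considerate", "patient", "tolerant"],
)

# Walking the fields in order reproduces the sequence.
for stage in ("name", "moral_definition", "ethics_statement",
              "expressed_ethics", "graces"):
    print(stage, "->", getattr(growth, stage))
```

Each of the seven values would get one such record, so a program could trace any recommendation back through the chain to the value it serves.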
SEVEN VALUES

Life

Proactive Moral Definition: Assign value in all of your decisions to protect and value life.

Ethics Statement: Protect and give value to all life (Buddhist). Take the life of other species only for your meals. Do not take the life of species for sport, or to sell protected species.

Expressed Ethics: Acceptance, validation, patience, tolerance, forgiveness, and vulnerability, for example, are necessary to support the social existence of families, communities, and societies.

NOTE: The Graces of Expressed Ethics (TGoEE) apply to all values and will not be repeated individually for each value. They are closely associated with Expressed Ethics and take the form of being kind, considerate, caring, confident, generous, meek, mild, modest, strong but humble, thoughtful, patient, tolerant, positive, and friendly, to give only a very few of many possible examples. These are not necessary to be moral or ethical, but provide a "grace" to ethical living.

Equality

Proactive Moral Definition: Make decisions and take action for improving the quality of life and unleashing the potential of others as you do for your self.

Ethics Statement: Treat others as you do yourself. This means that you do not treat others as less than your self; and it also means that you do not treat yourself as less than you would treat others. The value of others is equal to that of your self, and your value is equal to that of others; act accordingly. The importance of this value is that others are not excluded from consideration, and from opportunities to grow and to improve their quality of life; and neither are you.
Expressed Ethics: To appreciate Equality at the roots of our humanity that emanate from our DNA, Expressed Ethics tell us "what to do" at the most basic level to fulfill "Equality." When we see the expression of fairness, integrity, transparency, acceptance, appreciation, validation, worthiness, deservingness, honesty, authenticity, faithfulness, discretion, patience, tolerance, forgiveness, nurturance, and vulnerability, we are seeing the expression of our humanness at its very best, a humanness that supports the equality of others and of our self.

Growth

Proactive Moral Definition: Make decisions and take action that create opportunities for you to develop your innate potential; and, whenever possible, develop opportunities for others, and assist them to grow into their innate potential and improve their quality of life as you do for your self.

Ethics Statement: Assist others to grow into their innate potential just as you do for your self. Show others, as you are able, how to recognize the opportunities that may assist them to grow and improve their quality of life.

Expressed Ethics: Fairness, integrity, transparency, acceptance, appreciation, validation, worthiness, deservingness, patience, tolerance, forgiveness, nurturance, and vulnerability are a few that support the growth of others.

Quality of Life

Proactive Moral Definition: Make decisions for yourself and others that improve the quality of your lives.

Ethics Statement: See others as equals of your own life so that you know how to support their efforts to develop their innate potential and improve their quality of life as you would for yourself. When making decisions or writing policies and laws, put yourself on the receiving end to see how you would react, and adjust the parameters of your decisions according to the seven values.
Expressed Ethics: Fairness, integrity, transparency, acceptance, appreciation, validation, worthiness, deservingness, honesty, authenticity, faithfulness, discretion, patience, tolerance, forgiveness, and vulnerability support the quality of life of others, and of our self.

Empathy

Proactive Moral Definition: Extend your awareness past your own life to that of others.

Proactive Ethics Statement: Extend your awareness past your own life to that of others to sense their situation in the seven spheres of human existence: physical, mental, emotional, intellectual, social, cultural, and spiritual.

Expressed Ethics: Extend your awareness past your own life to that of others to sense their situation in the seven spheres of human existence, and reflect on what you sense, comparing it to your own awareness of your own seven spheres of human existence.

All Expressed Ethics demonstrate "other-interest" in contrast to self-interest. "Other-interest" Expressed Ethics are typical of the secondary value-emotions. Self-interest is much more typical of the primary values. We see the prevalence of this in the US culture with its great "me-ism" of self-centered arrogance manifested as authority, power, and control. Yes, primary values do have Expressed Ethics attached to them, but it is always a matter of conscious personal choice whether to express self-interest, other-interest, or a little of both. Neither is "good" nor "bad." "Other-interest" works toward social sustainability, while self-interest works predominately against it, whether in individual relationships or between nations. Nationalism could be considered a form of "me-ism" and self-interest.
Compassion

Proactive Moral Definition: Based on our developed sense of empathy, we choose to support the improvement of others' quality of life and their growth into their innate potential, as we do for our self.

Proactive Ethics Statement: Based on your developed sense of empathy, take action to come to the aid of others, to support the improvement of their quality of life, and to support their growth into their innate potential, equally as you do for your self.

Expressed Ethics apply equally to the three Secondary Value-emotions because those Secondary Values act together. All Expressed Ethics demonstrate "other-interest" in contrast to the self-interest that we see all too often.

"Love"

Proactive Moral Definition: Love (noun), in the context of proactive morality, is defined as the combined energies of empathy and compassion toward others, as you have for your self. This is truly the most developed definition of equality: to see and value others as you do your self.

Proactive Ethics Statement: Love (verb), in the context of proactive morality, is defined as projecting the combined energies of empathy and compassion toward others. This is truly the most evolved definition of equality: to see and value others as you do your self, and to choose to act accordingly.

Expressed Ethics apply equally to the three Secondary Value-emotions because those Secondary Values act together. All Expressed Ethics demonstrate "other-interest" in contrast to the self-interest that we see all too often.

The Graces of Expressed Ethics

The Graces of Expressed Ethics apply equally to all Expressed Ethics because, as the name indicates, they are the natural outgrowth of the Expressed Ethics for each value. They are not necessary to be moral or ethical, but provide a "grace" to Expressed Ethics.
THE CRITICAL POSITION OF AI

When we consider AI's existence into the future, AI will become a tool with a very real potential to be applicable to all people of all generations, far beyond our own self-interested generation.

● First, it is very timely that we begin the process of devising a suitable vision for the future of AI as a complement to humanity.
● Second, we must answer the question, "What is the long arc of intention for AI?" If AI is to become a helpmate of humanity's survival into the future, then it must take on the mantle of the values that have sustained humanity's survival, and take on the morality and ethics that erupt out of those values, particularly the three secondary values, in order for it to become a humane partner in humanity's sustaining future and to sustain the social existence of humanity.
● Third, to fulfill such a long-term vision and intention, we will need to devise an operational philosophy that will be effective in guiding AI program development for this and all future generations.
● Fourth, an overarching mission would put into effect the vision, intention, and operating philosophy of AI's existence and functions.
● Fifth, this would result in the development of immediate objectives with measurable outcomes that are consistent with the seven values, interpreted values, beliefs (and assumptions), and expectations that work to fulfill the vision, intention, operational philosophy, and mission. These results would then be further validated by the morality and ethics of those seven values.

AI has several critical positions that are occurring now and will continue long into the future. The first and foremost reflects Elon Musk's statement.
Once AI has a firm grip on governments, military, finance, commerce, agriculture, and the major social institutions that support a functional society, it will be too late to go back and correct that fatal problem. 5

If Musk and Hawking are right, then a moral authority is needed, one that can weigh the best interests of humanity and the quality of life of communities, societies, nations, and of all civilization without self-interest. AI's critical position is to achieve a state of "AI-Consciousness." That is, the program that is devised has the ability to discern whether other AI programs are compliant with the morality and ethics of the seven values, and then has a collective "self-awareness" of the ethical and moral processing of other programs. Because of the logic-relationship between the seven values and their characteristics, which extends to the morality and ethics that emanate from them, future AI programs that are embedded with those values will arrive at rational, ethical, and moral conclusions with the sureness of ones and zeros. Consider the impact this would have on the compliance industry, on government agencies, on consulting firms, and on those companies that are subject to compliance rules. When that occurs, AI will have the potential to become the savior of humanity, rather than its nemesis. Something outside of the human penchant for dominance through authority, power, control, and greed must be in place, as a higher moral intelligence, one that guides those who have the best interests of societal existence in mind and that also provides a check against those who would manipulate the mechanisms of corporations, government, and legislatures for their own interests.

5 Tenner, Edward 2011. "Unintended Consequences," TED Talks (16 minutes), https://www.ted.com/talks/edward_tenner_unintended_consequences?language=en; Dörner, Dietrich 1996. The Logic of Failure: Recognizing and Avoiding Error in Complex Situations, Metropolitan Books, ISBN 0-201-47948-6, p. 8.
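The "AI-Consciousness" idea, one program discerning whether its peers are compliant with the seven values, could be caricatured as a manifest check between programs. The sketch below is speculative: the manifest format, the `embedded_values` field, and the `audit_peer` function are invented for illustration and correspond to no existing protocol.

```python
# Speculative sketch: one AI program auditing whether peer programs declare
# that all seven values are embedded. Field names are assumptions.
SEVEN_VALUES = frozenset({
    "life", "equality", "growth", "quality_of_life",
    "empathy", "compassion", "love",
})

def audit_peer(manifest: dict) -> bool:
    """Return True if a peer's manifest declares all seven values embedded."""
    declared = set(manifest.get("embedded_values", []))
    return declared >= SEVEN_VALUES

peers = [
    {"id": "planner-1", "embedded_values": list(SEVEN_VALUES)},
    {"id": "trader-7", "embedded_values": ["life", "growth"]},
]
compliant = [p["id"] for p in peers if audit_peer(p)]
print(compliant)  # ['planner-1']
```

A real compliance regime would of course need to verify behavior, not declarations; the sketch only shows where such a check would sit.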
The challenge for program designers, application managers, programmers, code writers, and all others who are involved in the development of AI programs is to fulfill that challenge without including their own biases, prejudices, bigotries, opinions, assumptions, and self-interest in the program.

Do we want AI to help humanity transcend the limitations of being human? Or do we want AI to transcend humans? The moral position is to make a decision by commission, rather than the immoral decision by omission.

3 UNDERSTANDING WHY WE ARE CONCERNED ABOUT THE FUTURE

INTRODUCTION

The irony of the concerns of environmentalists, futurists, and apocalyptic doomsday followers is that they have become so intensely focused on "the problem" that they have forgotten why they are concerned at all. If I have interpreted their concerns correctly, it seems that they are concerned that IF humanity does not mend its ways, the future will not be a livable place for anyone. The error of their concerns is that they assume that "fixing the problem" will assure a stable and livable future. Simplistic thinking like this never achieves the desired outcomes because too many other factors are involved. That type of thinking does not see "the problem" within a holistic context of the material and social sustainability of societies and civilization. Solving "their problem" will never contribute to a materially or socially sustainable future until the solution is integrated into that holism. "The problems" of their fixation are actually symptoms of much larger concerns. The long arc of a developed society's situation is far more complex, yet it can be addressed when the larger parameters of that arc are brought to mind.
The connection between the values, morality, and ethics discussed so far is intimate to the sustainability of the social existence of our civilization. "Fixing the problems" of communities and societies will never bring about a sustainable future until the definition of social sustainability is fulfilled, which requires that ethical and moral considerations become paramount. To appreciate the task ahead, we will need to understand what "sustaining" and "sustainability" are all about. Social sustainability is a process and ideology that integrates the disparate parts of society into a congruent system.

SUSTAINABILITY — BEDROCK FOR MORAL AND ETHICAL DECISION-MAKING

This is the simple logic of the seven values: Conscientiously using the values, ethics, and morality in the decision-making processes of families and organizations will result in the material and social sustainability of families, communities, societies, and organizations long into the future. If we decide as individuals, families, and organizations to embrace both material and social sustainability, we need to know what "sustaining" really means in order to make decisions that support "sustaining." The definitions below cover the two branches of sustainability that are necessary for a society to "become sustainable."

THE DURATIONS OF EXISTENCE

Survival presents us with the immediate appreciation of life now and the threat of death within this day or the next.

Existence presents us with the necessity of assuring our survival over a period of time, with death still being a constant reminder in our daily activities.

Maintenance presents us with the necessity of assuring our existence is maintained into an indefinite future.
And this is the place where most people and their communities and societies exist: in an indefinite future.

Stability. As a society moves toward social sustainability, it has begun the process of making decisions that assure it has a definite, peaceful, and stable future.

THE DURATIONS OF "SUSTAINING"

Sustain: To lengthen or extend in duration. This also implies a continuation of what exists already, which may not be sustainable.

Sustainable: Capable of being sustained in the long term.

Sustainability: The ability to sustain.

Social Sustainability: The ability of a society to be self-sustaining indefinitely…, for 5 years, 50 years, 250 years, 500 years and more, because of the intention for its existence, the design of its functions, and the integrity of its decision-making processes.

Consciously choosing UNsustainable options is to choose the death of societies and to jeopardize the quality of life of all future generations. It is an immoral decision whether made consciously or by omission, because it violates the values of growth and equality for the generations that have not yet been born. Trying to achieve sustainable growth is first of all an oxymoron; it is contradictory and impossible. Many people in business strive to sustain the growth of their corporation's profits. Eventually, that becomes an impossibility, one that at the present time has not yet shown its ugly face. Then an existential moral question will exist: Do we exploit the material and social environments to maintain profits and our high standard of living compared to the rest of the world, or do we begin to practice conservation (decreasing usage, reusing, recycling, and re-purposing) to support the children of our future generations?
One of the intentions of this book is to make people aware of our moral responsibilities to the billions of people of future generations, and that includes our own children's children and great-great-grandchildren. When we discuss the primary value "equality," what we are talking about is designing our material resources and social institutions so that social and material resources are equally available to nurture and support the development of the innate potential of future generations.

BRIEF SUMMARY

Now the question: "Do we want our societies and our way of life to become sustainable or UNsustainable?" We can make that decision once we appreciate how intimately our decisions today will affect the survival, existence, stability, and sustainability, in their broadest definitions, of those who have yet to be born. As you can see from this chapter, the "rules of engagement" for resolving these difficult situations must come from the Seven Values, their Moral Definitions, Ethics Statements, and Expressed Ethics. Relying upon humanly conceived value systems and personal interpretations of the seven values will only lead to more and more difficult situations (read: Volatile, Uncertain, Complex, and Ambiguous, "VUCA"), with no final authority to rely upon. If our societies are to be sustained, then we must rely upon the final authority of the seven values and apply their morality and ethics in the decision-making processes of all organizations, to give families, communities, and societies the same longevity as our species. Let us plan that AI has a prominent place in those moral and ethical decision-making processes that contribute to our great-great-grandchildren's peace of mind and quality of life.
4 THE MORALITY AND ETHICS OF TODAY

The material you have read so far may lead you to believe I have created a bubble of moral and ethical idealism that is not connected to the realities of today. Ironically, the reality is that most people are not consciously aware that most of the world continues to use an archaic morality that is not capable of pointing the way forward to sustain families, organizations, governments, and cultures into a long and prospering future. This chapter will compare the archaic morality that has been in use for over 4,000 years to the proactive morality that is based on the values that have sustained our species for over 200,000 years. Again, this will present us with a question: "Do we stay with the old reactive morality, or do we begin using the proactive morality that points the way forward to a sustainable future?" Moving to accept the proactive morality provides answers to difficult social, political, economic, and environmental problems. Let's compare the two.

TRADITIONAL MORALITY

Historically, the moral code of western civilization has changed little over the last 4,000 years, 6 from the time that the Sumerian King Ur-Nammu of Ur (2112-2095 BC) wrote it. It was later adopted by Hammurabi and Moses, among others. It was written as a means of preserving and maintaining social order and the functioning of society through a uniform standard of social conduct, i.e., a moral code. It was designed as a personal morality within a small community. It was never codified as a social morality to guide the moral conduct of social processes, organizations, governments, or corporations.
Neither was it intended as a global moral code for the nations of the international community. The development of the traditional moral code, however, was an incredible advancement in normalizing social relations at the time. The traditional moral code is man-made, using man-made values that King Ur-Nammu and his advisors thought would be of help. Because the traditional moral code was based on man-made values, rather than on the innate values of our species, it has not been able to keep pace with the social evolution of people. That moral code was not capable of evolving with the evolution of people's needs to improve the quality of their lives. To improve the conditions (read: "social evolution") of our lives today, the moral and ethical needs of our evolving contemporary communities and societies also need to evolve. Because the seven values are proactive in encouraging our growth, social change is a permanent and inherent aspect of the value system of our species.

Invalid Assumptions. King Ur-Nammu's moral code is retrospective and punitively based. One of its assumptions has been that the punishment of immoral behavior would cause citizens to become moral in order to avoid subsequent punishment. We know all too well from the history of four millennia that punishment is not an effective deterrent to immoral behavior. What is wrong with this moral code? Nothing really, as long as it is applied as an unevolved person-to-person morality in very simple communities. But when it is applied by a social agency (in courts of law, and in juvenile, divorce, and custody litigation, for example), its performance comes up short. What is missing is an evolved morality that empowers social agencies such as the courts to determine the sustaining needs of litigants and of society.

6 http://en.wikipedia.org/wiki/Code_of_Ur-Nammu; http://en.wikipedia.org/wiki/Code_of_Hammurabi
Historical Corrections. Perhaps the greatest fallacious assumption of the traditional moral code is that it tries to correct the behavior of the wrongdoer, a very familiar theory of "modern" criminal corrections. When we look more closely at its "corrective" function, we soon realize that it proposes the ludicrous notion of correcting the faults of the past. Because punishment occurs after the fact of the immoral behavior, it is truly 100% ineffective. Further, Ur-Nammu's moral code does nothing to proactively improve our societies. It simply punishes the wrongdoer, with the victim, family, community, and the public no better off for the wrongdoer's punishment. Said another way, the incarceration of a murderer does not bring about an improvement in the social sustainability of the community from which he or she came.

Reactive, Not Proactive. The traditional moral code provides the irrational possibility of retroactively righting wrongs, never urging citizens to aspire to higher moral standards of living, or to add to the quality of their life, or the lives of others, by the decisions they make. The old morality provides no incentive for proactive good behavior, other than to avoid getting caught. Because the traditional moral code has not been proactive in working toward social sustainability, after centuries of its use we have begun to see the moral and social disintegration of whole communities in our larger cities due to drug use, violence, property crimes, and the sexual, physical, emotional, mental, and social abuse of infants, children, and the elderly. Social status and economic elevation have not exempted anyone from family abuses, from community delinquency by adults, or from fiscal malfeasance by executives whose victims number in the tens of thousands and millions.

Bad Code.
From a contemporary technological perspective, the traditional morality of western civilization for the last 4,000 years is, in computer terms, "bad code." It is "bad code" because it is not based on a logically integrated set of values. It may solve some problems but not others, and it may solve problems inconsistently depending upon who is using it. Grievously, the ethics that emerge from the "bad code" of traditional morality do not provide a universally level playing field for all people of all races, cultures, ethnicities, nationalities, and genders for all times.

A Conclusion. The traditional morality that all of us have been raised with is based on values that are man-made and not capable of enduring the rigors of time and the vast array of moral challenges that have come about over the centuries and millennia. The proactive morality and ethic that are inherent to the seven values provide a huge incentive to move toward the positive side of ethical and moral decision-making. Accepting them into our daily lives and decision-making will be startling at first, but because they are already in alignment with our innate nature, they are already a part of each of us and can be accepted once we acknowledge their place in our lives.

5 CRISIS AND OPPORTUNITY

This early era of AI provides us with a rare opportunity in the history of humanity: we have developed the consciousness of our present global situation and can compare it to similar eras of the past.
We have the advantage of this vicarious view of those experiences to guide our reasoning and judgment for implementing AI as a helpmate to humanity, rather than a "sword of Damocles" such as we have experienced since the invention and uncontrolled proliferation of atomic bombs. Will the AI industry be guided by that history and the experiences that we now suffer under? Or will we build a huge new era of AI technology that will aid and guide human decisions for civilization's survival and benefit?

Isaac Asimov's "Three Laws of Robotics," which he shared in I, Robot in 1950, have a lot to say about AI and AI applications. Consider those three laws.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

After having read through the previous pages, these three laws seem very simplistic in nature. If we are to grasp the existential angst of Robert Oppenheimer, father of the atomic bomb, 7 whose famous quote from the Hindu sacred scripture the Bhagavad-Gita, "Now I am become Death, the destroyer of worlds," is largely unappreciated, then AI architects are walking in the existential shoes of Dr. Oppenheimer, but without his consciousness. What is far different now, with AI on civilization's horizon, from Oppenheimer's situation is past experience. The similarities between the atomic bomb and AI are close, with two exceptions.

In the First Exception, we now know what occurred and what developed in the decades following the first use of atomic bombs. Oppenheimer only surmised the vast destructive power of a fission bomb.

7 https://www.wired.co.uk/article/manhattan-project-robert-oppenheimer
The development of AI is very similar. We truly do not know what is ahead, but if it is anything like what happened after the atomic bomb was used, then we should take a very cautious approach to AI's development. Something more is needed than the three simple laws that Asimov shared with the world in 1950. Even if Asimov had possessed a working knowledge of the seven innate values of Homo sapiens, and of the morality and ethics that erupt out of those seven values, something more vital would still be needed. The missing element is the critical distinction between a personal morality and a societal morality. Because AI will become as generic as GPS locators and useful anywhere in the world, its applications and decisions must incorporate the distinction between decisions that affect groups of individuals, and thus all of humanity, and decisions that affect only individuals. If the creators of AI, and AI itself, cannot make that distinction, then its use in offensive and defensive military and other applications will leave civilization facing threatening consequences. This is an existential distinction that will determine the fate of civilization, for its good or for its destruction. The illustration below will help us work through this critical distinction.

The first priority of all human and AI decision-making is to preserve the material existence of our species. As this is the premier priority for all humans, corporations, and governments, the morality and ethics that are built into AI programs must be as close to fail-safe as possible.

The second priority must come into play in order to sustain our social existence. The social existence of humanity is dependent upon the conscious development of the symbiotic relationship between the individual/family and organizations.
That good working relationship is totally dependent upon conscious and intentional decision-making using the three secondary values and the morality and ethics of all seven values. When that relationship is jeopardized, it becomes inevitable that the short and long arc of society's existence is also jeopardized. In the case of AI, the risk is too great to dismiss the necessity of a proactive and universal morality and ethic as the bedrock upon which the foundation of AI programs must be built.

Robert Oppenheimer died relatively young, at age 62 (April 1904 – February 1967). He lived long enough to see the full development of thermonuclear bombs, which have the capability to destroy all living beings on this planet forever. What would he say today about the potential outcomes of the undirected development of AI?

The Second Exception is the difference between atomic bombs and AI: the "I," the intelligence that directs its use. Atomic bombs are dependent upon human intelligence, decisions, and actions to release their destruction. In the case of AI, with its own evolving, independent intelligence, what critical parameters of decision-making will restrain AI from arranging the decimation of our species? Nothing. Just because AI can be developed to become self-evolving does not mean that we should do so without any internal restraints (a moral conscience) in ourselves or within the program. What is needed is the forethought to embed a proactive morality and ethic into the basic software of all AI applications. It is inevitable that AI software will become self-developing and self-evolutionary. To get a good grip on the potential of what could occur, consider fission and fusion bombs as having AI capability independent of human decision-making. Is that where we want AI to go? The question that requires a moral answer from all institutional AI programs is this: "Should AI and lethal military devices be joined in force against humanity?" That question directs the third priority.
The third priority of decision-making, whether by humans or an AI program, lies in the distinction between personal morality and societal morality using these seven values. In this priority the foremost concern is the continuing existence (survival) of the social context of human existence, because it is only within that social context that social evolution can take place. Only within the sustained survival of functional families, communities, and societies can an improving quality of life, growth, and equality evolve for the benefit of all future generations.

Will AI have the self-awareness to clearly make the distinction between the welfare of the larger society and all future generations, even if that means compromising the existence (lives) of some people who are alive at the time? Can it make the decision to compromise even its own existence to save the lives of the humans who would otherwise be killed? (This scenario has been played out in more than one sci-fi movie.) In order for this faculty of an AI program to come into existence, it must first exist as a desired outcome in the architecture of AI program development. And, prior to that, it must be in the consciousness of the program designers and code writers to fulfill that specification and vision of AI as humanity's perennial helpmate. If the desired end result of AI development is to create incredibly capable artificial intelligence, then it must emulate the highest and most ennobling intelligence, wisdom, and decisions of humans. AI programming at its best comes down to incredible discernment.
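The three priorities described above form an ordered decision protocol: species survival first, social existence second, and the personal/societal morality distinction third. As a minimal sketch only, the ordering could be encoded as a sequence of veto checks. The field names, function name, and result strings below are illustrative assumptions of this paper's prose, not part of any existing AI system or standard.

```python
# Hypothetical sketch of the three-priority ordering as ordered veto checks.
# All names and flags here are illustrative assumptions, not an existing API.
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    PERSONAL = "personal"   # affects only individuals
    SOCIETAL = "societal"   # affects groups, and thus all of humanity

@dataclass
class Decision:
    description: str
    harms_species: bool        # threatens the material existence of the species
    harms_social_fabric: bool  # jeopardizes the individual/family-organization relationship
    scope: Scope

def evaluate(decision: Decision) -> str:
    # Priority 1: preserve the material existence of our species.
    if decision.harms_species:
        return "reject: violates first priority (species survival)"
    # Priority 2: sustain the social existence of humanity.
    if decision.harms_social_fabric:
        return "reject: violates second priority (social existence)"
    # Priority 3: distinguish societal morality from personal morality.
    if decision.scope is Scope.SOCIETAL:
        return "accept: weigh under societal morality (all seven values)"
    return "accept: weigh under personal morality"
```

The point of the ordering is that a lower priority is never consulted until every higher priority has been satisfied, which is the "fail-safe" property the text asks of embedded AI morality.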
The best human intelligence is able to listen to a rational argument, discern the most salient factors, reflect on those factors with the foreknowledge of prior experience of self and others, inquire with cogent questions, succinctly state the lessons involved, and then express the overarching wisdom of it all. The danger of AI development is that most people have not been taught the basic elements of discernment, and do not have the ability to make competent, let alone cogent, distinctions of discernment. Discernment is an elemental process of thinking, i.e., of intelligence. Think of discernment as an app of human intelligence. Proceeding with AI development without this process intact in the minds of program developers and coders, and without the existential angst of Robert Oppenheimer to foresee what AI may become, will leave all future generations without representation in those decisions. Let us proceed very cautiously and begin by embedding the best of humane decision-making into the fundamental designs of AI.

●

BIO: Daniel Raphael, PhD
Daniel Raphael is an independent original thinker and futurist. He is a Vietnam veteran with 18 years' experience working in adult felony criminal corrections; father of three and grandfather of four; former volunteer fireman, small business owner, inventor, and manufacturer of a household sewing machine product; self-taught theologian and ethicist; and holistic life coach and principal of Daniel Raphael Consulting since 2003. He is the author and publisher of numerous books, papers, and articles. Daniel enjoys public speaking, has taught social sustainability and spirituality classes and workshops nationally and internationally, and is well prepared to enlighten and entertain you.

Education
Bachelor of Science, With Distinction (Sociology), Arizona State University, Tempe, Arizona.
Master of Science in Education (Educationally and Culturally Disadvantaged), Western Oregon University, Monmouth, Oregon.
Doctor of Philosophy (Spiritual Metaphysics), University of Metaphysics, Sedona, Arizona.
Master's Dissertation: A Loving-God Theology
Doctoral Dissertation: A Pre-Creation Theology

Writer, Author, Publisher
(1992) The Development of Public Policy and the Next Step of Democracy for the 21st Century, NBHCo
(1992) Developing A Personal, Loving-God Theology, NBHCo
(1999) Sacred Relationships, A Guide to Authentic Loving, Origin Press
(2002) What Was God Thinking?!, Infinity Press
(2007) Global Sustainability and Planetary Management
(2014) Healing a Broken World, Origin Press ●
(2014) Social Sustainability Design Team Process
(2015) Social Sustainability HANDBOOK for Community-Builders, Infinity Press ●
(2016) The Progressive's Handbook for Reframing Democratic Values ●
(2016) Organic Morality: Answering the Critically Important Moral Questions of the 3rd Millennium ●
(2017) Designing Socially Sustainable Democratic Societies ●
(2017) A Theology for New Thought Spirituality ●
(2017) God For All Religions — Re-Inventing Christianity and the Christian Church — Creating Socially Sustainable Systems of Belief and Organization ●
(2017) God For All Children, and Grandchildren ●
(2017) Clinics for Sustainable Families and the Millennium Families Program ●
(2018) The Values God Gave Us ●
(2018) UNDERSTANDING Social Sustainability ●
(2017) Pour Comprendre la Viabilité Sociale ●
(2017) Entendiendo La Sostenibilidad Social ●
(2018) Making Sense of Ethics — A Unique, Unified Normative Theory of Ethics, Values, and Morality ●
(2018) Answering the Moral and Ethical Confusion of Uninvited Immigrants ●
(2018) Restoring the Greatness of Democratic Nations — A Radically Conservative and Liberal Approach ●
(2018) Artificial Intelligence, A Protocol for Setting Moral and Ethical Operational Standards ●

● = Available as a PDF document at:
https://sites.google.com/view/danielraphael/free-downloads

Contact Information:
Daniel Raphael, PhD
Daniel Raphael Consulting ● Social Sustainability Leadership Training and Consulting
daniel.raphaelphd@gmail.com ● Cell: +1 303 641 1115 ● PO Box 2408, Evergreen, CO 80437 USA

"Behind it all is surely an idea so simple, so beautiful, that when we grasp it - in a decade, a century, or a millennium - we will all say to each other, how could it have been otherwise? How could we have been so stupid?"
- John Archibald Wheeler