Autonomic Computing: Fundamentals and Applications

About this ebook

What Is Autonomic Computing


Autonomic computing (AC) refers to the use of distributed computing resources with self-management qualities: resources that adapt to unpredictable changes while hiding the underlying complexity from users and operators. The initiative, started by IBM in 2001, ultimately aims to create computer systems that manage themselves, to overcome the rapidly growing complexity of systems management, and to remove the barrier that this complexity poses to further growth.


How You Will Benefit


(I) Insights and validations about the following topics:


Chapter 1: Autonomic computing


Chapter 2: List of computer scientists


Chapter 3: Algorithmic efficiency


Chapter 4: Outline of computer science


Chapter 5: Self-management (computer science)


Chapter 6: Autonomic networking


Chapter 7: Computer cluster


Chapter 8: Cloud computing


Chapter 9: Policy-based management


Chapter 10: Glossary of artificial intelligence


(II) Answers to the public's top questions about autonomic computing.


(III) Real-world examples of the use of autonomic computing in many fields.


(IV) 17 appendices that briefly explain 266 emerging technologies in each industry, for a 360-degree understanding of autonomic computing's technologies.


Who This Book Is For


Professionals, undergraduate and graduate students, enthusiasts, hobbyists, and anyone who wants to go beyond basic knowledge of autonomic computing.

Language: English
Release date: Jul 4, 2023

    Book preview

    Autonomic Computing - Fouad Sabry

    Chapter 1: Autonomic computing

    Autonomic computing (AC) refers to the use of distributed computing resources with self-management qualities: resources that adapt to unpredictably changing conditions while concealing the intrinsic complexity of the system from users and operators. The project, started by IBM in 2001, ultimately aims to create computer systems that manage themselves, to overcome the rapidly growing complexity of managing computing systems, and to remove the barrier that this complexity poses to further growth.

    The idea behind an AC system is that it makes adaptive decisions based on high-level policies: it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (AC) that interact with one another. An AC can be modeled in terms of two primary control schemes, local and global, with sensors (for self-monitoring), effectors (for self-adjustment), knowledge, and a planner/adapter that applies policies based on self- and environment awareness. This architecture is sometimes referred to as Monitor-Analyze-Plan-Execute (MAPE).
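
    As a rough, hypothetical sketch (the class, method, and policy names below are illustrative assumptions, not an API defined in this book), a single MAPE loop over one managed resource might look like this in Python:

        # Minimal sketch of a Monitor-Analyze-Plan-Execute (MAPE) loop.
        # All names here are illustrative assumptions.

        class AutonomicManager:
            def __init__(self, sensor, effector, policy):
                self.sensor = sensor      # reads the managed resource (self-monitoring)
                self.effector = effector  # adjusts the managed resource (self-adjustment)
                self.policy = policy      # high-level policy, e.g. a target value and a gain
                self.knowledge = {}       # shared knowledge used across the loop

            def monitor(self):
                return self.sensor()

            def analyze(self, reading):
                # How far is the observed state from the policy target?
                return reading - self.policy["target"]

            def plan(self, deviation):
                # Choose a corrective action proportional to the deviation.
                return -deviation * self.policy["gain"]

            def execute(self, action):
                self.effector(action)

            def step(self):
                reading = self.monitor()
                action = self.plan(self.analyze(reading))
                self.execute(action)
                self.knowledge["last_reading"] = reading

    In practice the sensor and effector would be bound to a concrete resource (for example, a thread pool or a cache), and many such loops would run concurrently.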

    Driven by such a vision, a variety of architectural frameworks based on self-regulating autonomic components have recently been proposed, and a very similar trend has characterized significant research in multi-agent systems. However, most of these approaches are conceived with centralized or cluster-based server architectures in mind, and they mostly address the need to reduce management costs rather than the need to enable complex software systems or provide innovative services. In some autonomic systems, mobile agents interact with one another through loosely coupled communication mechanisms.

    According to the latest projections, the number of computing devices in use will increase by 38% each year. Computing systems have brought great benefits to the economy in the form of increased speed and automation, but there is an overwhelming need to automate their maintenance at this time.

    Kephart and Chess warn readers in a 2003 IEEE Computer article that the dream of interconnected computing systems and devices could turn into the nightmare of pervasive computing, in which architects are unable to anticipate, design, or maintain the complexity of interactions. In their view, the essence of autonomic computing is system self-management, which frees administrators from low-level task management while delivering better system behavior.

    The growing complexity of modern distributed computing systems, and in particular the complexity of their management, is becoming a significant limiting factor in their further development, an issue that affects the industry as a whole. Large companies and institutions employ large-scale computer networks for communication and computation. The distributed applications running on these networks are diverse and handle many tasks, ranging from presenting web content to providing customer support.

    In addition, mobile computing is spreading rapidly across these networks: employees need to communicate with their companies while they are away from the office, accessing corporate data over various forms of wireless technology on mobile devices such as laptops, personal digital assistants, or mobile phones.

    This makes the overall computer network extremely complex and hard for human operators to control manually. Manual control is inefficient, expensive, and error-prone, and the amount of manual effort needed to keep a growing network under control tends to escalate very rapidly.

    Roughly eighty percent of such infrastructure problems occur at the client-specific application and database layers, yet most so-called autonomic service providers guarantee only the basic plumbing layer (power, hardware, operating system, network, and basic database parameters).

    Providing modern, networked computing systems with the ability to manage themselves without the need for direct intervention from humans is one potential solution. The goal of the Autonomic Computing Initiative, also known as ACI, is to lay the groundwork for autonomous computer systems. The human body's autonomic nervous system served as a source of inspiration for it. Without the need for any intervention from the conscious mind, this nervous system is in charge of regulating vital bodily processes such as breathing, heart rate, and blood pressure.

    In a self-managing autonomic system, the human operator takes on a new role: instead of controlling the system directly, he or she defines the general policies and rules that guide the self-management process. For this process, IBM defined the following four functional areas, known as self-star properties (also referred to as self-x or auto-star properties):

    Self-configuration: automatic configuration of components;

    Self-healing: automatic discovery and correction of faults;

    Self-optimization: automatic monitoring and control of resources to ensure optimal functioning with respect to the defined requirements;

    Self-protection: proactive identification of, and protection against, arbitrary attacks.
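
    As a hedged illustration only (none of these function or attribute names come from the book; the managed element is hypothetical), the four properties can be pictured as periodic checks on a managed element:

        # Illustrative sketch of the four IBM self-* properties as periodic checks.
        # The managed element, its components, and all attribute names are assumptions.

        def self_configure(element, environment):
            # Self-configuration: adapt settings to the current environment.
            element.settings.update(environment.recommended_settings())

        def self_heal(element):
            # Self-healing: detect faulty components and restart them automatically.
            for component in element.components:
                if not component.healthy():
                    component.restart()

        def self_optimize(element, target_utilization):
            # Self-optimization: tune resources toward the stated requirement.
            if element.utilization() > target_utilization:
                element.scale_out()

        def self_protect(element, security_monitor):
            # Self-protection: proactively identify and block suspicious activity.
            for event in security_monitor.suspicious_events():
                element.block(event.source)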

    Others, such as Poslad, have extended the set of self-star properties as follows:

    Self-regulation: the system operates to keep some parameter, such as quality of service, within a preset range without external control;

    Self-learning: the system uses machine-learning techniques, such as unsupervised learning, that do not require external control;

    Self-awareness (also called self-inspection and self-decision): the system knows itself; it must know the extent of its own resources and of the resources it links to, and it must be aware of its internal components and external links in order to manage and control them;

    Self-organization: the system's structure is driven by physics-type models without explicit pressure or involvement from outside the system;

    Self-creation (also called self-assembly or self-replication): the system is driven by ecological and social-type models without explicit external pressure or involvement; its members are self-motivated and self-driven, generating complexity and order as a creative response to a continuously changing strategic demand;

    Self-management (also called self-governance): the system manages itself without external intervention; what is managed can vary from one system and application to another, and self-management may also refer to a set of self-star processes, such as autonomic computing, rather than to a single self-star process;

    Self-description (also called self-explanation or self-representation): the system explains itself and can be understood (by humans) without further explanation.

    IBM has outlined eight characteristics that an autonomic system must exhibit.

    The system must:

    know itself in terms of what resources it has access to, what its capabilities and limitations are, and how and why it is connected to other systems;

    be able to configure and reconfigure itself automatically under the changing conditions of the computing environment;

    be able to optimize its performance to ensure the most efficient computing process;

    be able to work around problems it encounters, either by repairing itself or by routing functions away from the trouble spot;

    detect, identify, and protect itself against various kinds of attacks in order to preserve overall system security and integrity;

    interact with neighboring systems, establish communication protocols, and adapt to its environment as it changes;

    rely on open, public specifications and not on a proprietary environment controlled by a third party;

    anticipate the demand that will be placed on its resources while maintaining open communication with its users.

    Even though autonomic systems can serve a variety of purposes, and therefore behave in a variety of ways, each autonomic system should still be able to demonstrate a baseline set of characteristics in order to be successful in serving its intended purpose:

    Automatic: the system controls its own internal functions and operations without external input. An autonomic system must therefore be able to boot and operate on its own, without any human or outside intervention; in other words, the know-how needed to bootstrap the system must be an integral part of the system itself.

    Adaptable: An autonomic system needs to be able to make adjustments to the way it works (i.e., its configuration, state and functions). Because of this, the system will be able to adapt to temporal and spatial shifts in its operational context either over the long term (environment customization/optimization) or over the short term (exceptional conditions such as malicious attacks, faults, etc.).

    Aware: to determine whether its current operation serves its purpose, an autonomic system must be able to monitor (sense) both the external environment in which it operates and its own internal state. This awareness is what drives the adaptation of the system's operational behavior in response to changes in context or state.

    For the deployment of autonomic systems, IBM defined five evolutionary levels, known as the autonomic deployment model:

    Level 1, the basic level, represents the current situation, in which systems are essentially managed manually.

    Levels 2 through 4 introduce increasingly automated management functions.

    Level 5 represents the ultimate goal of autonomic, self-managing systems.

    One way to simplify the design complexity of autonomic systems is to employ design patterns such as model–view–controller (MVC), which improve the separation of concerns by encapsulating functional concerns; the MVVM and MVP patterns are other examples.

    A fundamental concept applied in autonomic systems is the closed control loop, a well-known idea that originates in process control theory. In a self-managing system, a closed control loop monitors some resource (a software or hardware component) and autonomously tries to keep its parameters within a desired range.

    According to IBM, it is anticipated that a large-scale self-managing computer system will have hundreds or even thousands of these control loops working together.
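
    A minimal, self-contained sketch of such a closed loop (the utilization range, scaling rule, and random stand-in sensor below are assumptions made for illustration, not taken from the book) might look like this:

        import random
        import time

        # Closed control loop sketch: keep a monitored parameter (utilization)
        # inside a predetermined range by scaling the number of workers.
        LOW, HIGH = 0.4, 0.8   # acceptable utilization range (assumed values)

        def read_utilization():
            # Stand-in sensor; a real loop would query the managed resource.
            return random.uniform(0.0, 1.0)

        def control_loop(workers=4, iterations=10):
            for _ in range(iterations):
                utilization = read_utilization()
                if utilization > HIGH:
                    workers += 1              # scale out to relieve pressure
                elif utilization < LOW and workers > 1:
                    workers -= 1              # scale in to release idle capacity
                print(f"utilization={utilization:.2f} workers={workers}")
                time.sleep(0.1)
            return workers

        if __name__ == "__main__":
            control_loop()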

    A fundamental building block of an autonomic system is the sensing capability (Sensors Si), which allows the system to observe the operational context of its external environment.

    An autonomic system has both knowledge of the intention behind its actions (its purpose) and the know-how needed to carry them out (e.g., bootstrapping, configuration knowledge, analysis and interpretation of sensory input, etc.) without external intervention.

    The actual functioning of the autonomic system is determined by its logic, which is responsible for making the right decisions to serve its purpose, influenced by the observation of the operational context (based on sensor input).

    This model highlights the fact that the operation of an autonomic system is purpose-driven. Its purpose comprises its mission (e.g., the service it is supposed to provide), its policies (which define its basic behavior), and its survival instinct. In a control system this would be encoded as a feedback error function; in a heuristically assisted system it would be encoded as an algorithm combined with a set of heuristics bounding its operational space.
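
    As a small worked illustration of the last point (the setpoint, gain, and numbers are assumptions, not taken from the book), a purpose expressed as a feedback error function reduces to computing an error against a setpoint and acting on it:

        # Hypothetical feedback error function driving a proportional correction.
        def error(setpoint, measurement):
            return setpoint - measurement

        def corrective_action(setpoint, measurement, gain=0.5):
            return gain * error(setpoint, measurement)

        # Example: target 70% utilization, observed 90% -> roughly -0.10,
        # i.e. push the controlled quantity down.
        print(corrective_action(0.70, 0.90))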

    {End Chapter 1}

    Chapter 2: List of computer scientists

    This is a list of computer scientists, which includes people who work in the field of computer science, particularly those who conduct research and publish books.

    Some people notable as programmers are included here because they work in research as well as in programming. A few of them lived before digital computers existed, but are now regarded as computer scientists because their work can be seen as leading toward the development of the computer. Others are mathematicians whose work falls within what is now called theoretical computer science, such as complexity theory and algorithmic information theory.

    Atta ur Rehman Khan – mobile cloud computing, cybersecurity, Internet of Things

    Wil van der Aalst – business process management, process mining, Petri nets

    Scott Aaronson – quantum computing and complexity theory

    Rediet Abebe – algorithms, artificial intelligence

    Hal Abelson – intersection of computing and education

    Serge Abiteboul – database theory

    Samson Abramsky – game semantics

    Leonard Adleman – RSA, DNA computing

    Manindra Agrawal – polynomial-time primality testing

    Luis von Ahn – human-based computation

    Alfred Aho – compilers book, the "a" in AWK

    Frances E. Allen – compiler optimization

    Gene Amdahl – supercomputer developer, founder of Amdahl Corporation

    David P. Anderson – volunteer computing

    Lisa Anthony – natural user interfaces

    Andrew Appel – compilers
