The document discusses future directions for autonomous decentralized systems. It focuses on four key areas: 1) increasing system complexity and how rigid subsystem partitioning can help manage it, 2) advances in hardware allowing dedicated systems-on-chip for autonomous subsystems, 3) integrating legacy systems through standardized gateway components, and 4) how fault tolerance can be achieved through autonomous subsystems with well-defined interfaces.
Future Directions for Autonomous Decentralized Systems
(Opening Statement of the Panel)

H. Kopetz
Technische Universität Wien, Austria
hk@vmars.tuwien.ac.at

To me, the future for autonomous decentralized systems (ADS) looks bright, because of ongoing developments in the following four areas: (i) system complexity, (ii) hardware architecture, (iii) legacy systems, and (iv) fault tolerance. Let me discuss some of the issues in each of these areas in detail.

System Complexity: The conceptual management of the ever-increasing complexity of large computer-based systems is, in my opinion, one of the main current and future challenges of our profession. Unmanaged system complexity is often the major cause of schedule overruns, missing dependability, and outright project failure (see, e.g., the very revealing report on the US Advanced Automation System (GAO 1997)). One established technique to handle this complexity is the introduction of a rigid system structure that partitions a large system into nearly autonomous subsystems and reduces the interactions among these subsystems to the minimum information flow required for the achievement of the system objectives (Kopetz 1997, p. 272). The stable interfaces between these subsystems must be fully defined in the temporal domain and in the value domain, carefully controlled, and, above all, understandable in their structure and behavior. The interfaces must also act as error propagation boundaries. From the point of view of a subsystem A that interfaces to a subsystem B, the shared interface definition must contain all information that is necessary for subsystem A to perform its mission (and vice versa). With respect to subsystem B, the proper operation of subsystem A thus depends only on the correct and timely information in the interface. The interface specification thus states the mutual obligations of the cooperating subsystems. From the point of view of subsystem B, every implementation of subsystem A that meets this established interface specification is adequate. In such an architecture, subsystem A can act autonomously as long as it meets the constraints of the interfaces. From the vantage point of a higher level, the system operation can be understood by grasping the information flow across the interfaces, disregarding the internal operation of the autonomously operating subsystems. The simplicity and understandability of the subsystem interfaces is thus the key to mastering the complexity at the system level.

Hardware Architecture: The most cost-effective structure for a large computer system depends on the optimal combination of hardware and software resources. Because of the phenomenal advances of microelectronics technology, the hardware part of the cost of implementing a large system is shrinking to the point where it makes not only technical but also economic sense to design system architectures that are guided by a clean functional partitioning according to the established architectural principle "form follows function". The systems on chip (SOC) that are available today--a powerful microprocessor with sufficient memory, I/O interfaces, and a communication controller to implement a self-contained application function--enable the design of hardware structures that do not make any compromises for the sake of saving on hardware resources. A good example of this kind of thinking is the use of a system chip with a PowerPC processor for the control of airbags in an automobile. Although such a system chip might be used only once in its lifetime, for a critical time interval of a few milliseconds--between the point in time of detecting an impact and the point in time of activating the airbags--it makes architectural and economic sense to assign a dedicated piece of hardware to this task, without being distracted by resource-utilization arguments. In the future I see these SOCs, with the appropriate application software, forming autonomous subsystems of large computer systems. Again, the most important research issue is the simplicity and understandability of the subsystem interfaces.

Legacy Systems: Many of the future computer applications will not be designed on a "green field", but will be extensions and interconnections of already existing computer systems. These "legacy systems", developed years ago as stand-alone operations, must be integrated into new distributed architectures with minimal interruption of the day-to-day operations. The most promising technology to achieve these objectives is the "wrapper" technology: the provision of a gateway component that on the one side adapts to the idiosyncratic interface of the legacy system and on the other side conforms to the standard interface specification of the new distributed architecture. From the architectural point of view, the legacy system together with the associated gateway component can be seen as an autonomous decentralized system that provides the required functionality across the standardized interface of the gateway component.

Fault Tolerance: It is a law of physics that every physical component is going to fail eventually. Although the failure rate of a single VLSI chip is low (between 10^-7 and 10^-8 failures/hour for permanent physical failures), the failure rate of a non-fault-tolerant large system that comprises many chips can be significant.
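The arithmetic behind this claim can be sketched as follows; the per-chip rate comes from the text, while the chip count and mission time are illustrative assumptions. Under a simple series model (the system fails as soon as any single chip fails, with independent, exponentially distributed lifetimes), the chip failure rates simply add:

```python
import math

# Assumed figures for illustration: the per-chip rate is from the text,
# the chip count and mission time are hypothetical.
chip_failure_rate = 1e-7   # permanent failures per hour, per VLSI chip
n_chips = 500              # hypothetical non-fault-tolerant large system

# Series model: the system fails when any one chip fails, so rates add.
system_failure_rate = n_chips * chip_failure_rate   # failures per hour

# Probability of surviving a mission, assuming exponential lifetimes.
mission_hours = 10 * 365 * 24   # ten years of continuous operation
p_survive = math.exp(-system_failure_rate * mission_hours)

print(f"system failure rate: {system_failure_rate:.0e} failures/hour")
print(f"P(no failure in 10 years): {p_survive:.3f}")
```

With these assumed numbers the system survives ten years of continuous operation with a probability of only about 1.3 percent, which is why the failure rate of a large non-redundant system is called significant even though each individual chip is highly reliable.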
Considering the low cost of computer hardware, it is becoming cost-effective to reduce this system failure rate by the systematic provision of fault tolerance in many safety-critical or money-critical applications. A prerequisite for the successful application of fault tolerance is the provision of a system architecture that supports the introduction of autonomous error-containment regions and fault-containment regions with error propagation boundaries around them. A large system, consisting of many decentralized autonomous subsystems with small, understandable interfaces around them, comes close to this primary requirement on a fault-tolerant architecture.

References:
GAO (1997). Air Traffic Control--Complete and Enforced Architecture Needed for FAA Systems Modernization. US General Accounting Office (GAO).
Kopetz, H. (1997). Real-Time Systems: Design Principles for Distributed Embedded Applications. Kluwer Academic Publishers.