Norwegian University of Science and Technology
Department of Telematics
This paper presents an optimization framework for finding efficient deployment mappings of replicated service components to nodes, accounting for multiple services simultaneously while adhering to non-functional requirements. Currently, we consider load-balancing and dependability requirements. Our approach is based on a variant of Ant Colony Optimization and is completely decentralized: ants communicate indirectly through pheromone tables stored in the nodes. In this paper, we target scalability; because existing encoding schemes for the pheromone tables did not scale, we propose and evaluate three different pheromone encodings. Using the most scalable encoding, we evaluate our approach on a significantly larger system than in our previous work. We also evaluate the approach in terms of robustness to network partition failures.
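To make the decentralized mechanism concrete, here is a minimal sketch of ant-based deployment with per-node pheromone tables. All names (Node, ant_iteration), the load-balance cost, and the constants are illustrative assumptions of ours, not the paper's actual encoding schemes.

```python
import random

EVAPORATION = 0.1   # fraction of pheromone that decays per update
DEPOSIT     = 1.0   # reward scale for good mappings

class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        # The pheromone table lives in the node itself: ants communicate
        # only indirectly, by reading and updating these entries.
        self.pheromone = {}   # component id -> pheromone level

def choose_node(nodes, component):
    """Pick a node for a component, biased by local pheromone."""
    weights = [n.pheromone.get(component, 1.0) for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

def load_balance_cost(mapping, nodes):
    """Variance of per-node load; lower means better balanced."""
    load = {n.name: 0 for n in nodes}
    for comp, node in mapping.items():
        load[node.name] += 1
    mean = sum(load.values()) / len(load)
    return sum((l - mean) ** 2 for l in load.values())

def ant_iteration(nodes, components):
    """One ant: build a mapping, score it, reinforce the tables."""
    mapping = {c: choose_node(nodes, c) for c in components}
    cost = load_balance_cost(mapping, nodes)
    for comp, node in mapping.items():
        old = node.pheromone.get(comp, 1.0)
        node.pheromone[comp] = (1 - EVAPORATION) * old + DEPOSIT / (1 + cost)
    return mapping, cost

nodes = [Node(f"n{i}", capacity=4) for i in range(3)]
for _ in range(100):
    mapping, cost = ant_iteration(nodes, components=["a", "b", "c", "d"])
```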
The ISO/ITU standard MPEG is expected to see extensive use for the transfer of video and moving-picture traffic in the coming high-capacity ATM networks. This traffic will stem from both multimedia services, like teleconferencing, and video distribution. Hence, MPEG-encoded video will be a salient constituent of the overall traffic. The encoding causes large and abrupt shifts in the transferred rate at frame borders and induces strong periodic components; i.e., it generates a traffic pattern that is difficult to handle with a guaranteed QoS and a sufficiently large multiplexing gain. All methods, both analytical and simulation-based, proposed up to now for evaluating ATM systems under this traffic have substantial shortcomings when cell losses on the order of 10^-9 are required. This paper introduces a new trace-driven simulation technique for cell losses in ATM buffers loaded by a large number of heterogeneous sources. Statistically firm results are obtained within a reasonable computational effort and time by applying a special importance sampling approach. The properties of the technique are examined and compared to a previously suggested stratified sampling technique. The capabilities of the technique are demonstrated by simulation of 76 sources of nineteen different MPEG VBR video source types with cell losses in the 10^-9 to 10^-12 domain.
- by P. Heegaard and +1
- Importance Sampling, Multimedia Services
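As an illustration of the importance sampling idea in the abstract above, here is a minimal, self-contained sketch: a slotted buffer is simulated under a biased (higher) arrival rate, and each observed overflow is weighted by the accumulated likelihood ratio so the estimate stays unbiased. The toy batch-arrival model and all parameters are our assumptions, not the paper's trace-driven setup.

```python
import random

def is_overflow_prob(p=0.3, q=0.6, service=1, buf=40, horizon=200, runs=10000):
    """Estimate P(buffer content exceeds `buf` within `horizon` slots)
    for a slotted single-server queue with Bernoulli(p) batch arrivals
    of size 2, simulated under the tilted probability q > p."""
    est = 0.0
    for _ in range(runs):
        queue, lr = 0, 1.0
        for _ in range(horizon):
            if random.random() < q:          # biased arrival decision
                queue += 2
                lr *= p / q                  # likelihood-ratio update
            else:
                lr *= (1 - p) / (1 - q)
            queue = max(queue - service, 0)  # one cell served per slot
            if queue > buf:                  # overflow: rare under p
                est += lr                    # weighted rare-event hit
                break
        # runs without overflow contribute zero
    return est / runs

print(is_overflow_prob())
```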
Cell losses should be extremely rare events in ATM systems; an end-to-end loss rate of 10^-9 is a typical objective. Despite their rarity, cell losses represent severe service degradations, and measurements of cell losses are important to validate system performance. However, real-time measurements in this range take from days to months, and measurements on simulated systems take orders of magnitude longer.
- by P. Heegaard and +1
- Real Time, Importance Sampling, Carrying Capacity
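A back-of-the-envelope calculation (ours, not the paper's; the link rate is an assumption) shows why the time scales in the abstract above arise for direct measurement:

```python
# Time needed to observe enough losses to measure a cell loss ratio
# of 1e-9 directly on a fully loaded link.
CELL_BITS   = 424          # 53-byte ATM cell
LINK_BPS    = 155.52e6     # STM-1 link rate (assumed)
CLR         = 1e-9         # target cell loss ratio
TARGET_LOSS = 100          # losses needed for a rough estimate

cells_per_s = LINK_BPS / CELL_BITS            # ~367,000 cells/s
seconds     = TARGET_LOSS / (CLR * cells_per_s)
print(f"{seconds / 86400:.1f} days")          # ~3.2 days at full load
```

At lower loads, or for tighter confidence intervals, the measurement time grows accordingly, into weeks or months.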
We investigate a means for efficient deployment of distributed services comprising software components. Our work can be viewed as an intersection between model-based service development and novel network management architectures. In a service engineering context, models of services embellished with non-functional requirements are used as input to our swarm-intelligence-based deployment logic. Mappings between components and the resources provided by the execution environment are the result of our heuristic optimization procedure, which takes into account the requirements of the services. Deployment mappings are used as feedback towards the designer and the provider of the service. Moreover, our heuristic algorithm has significant potential for adapting services to changes in the environment.
- by Máté Csorba and +1
We address the problem of efficient deployment of software services into a networked environment. We consider services that are provided by collaborating components. The problem of obtaining efficient mappings of components to hosts in a network is challenged by multiple dimensions of quality-of-service requirements. In this paper we consider execution costs for components and communication costs for the collaborations between them. Our proposed solution to the deployment problem is a nature-inspired, distributed heuristic algorithm that we apply from the service provider's perspective. We present simulation results for different example scenarios and present an integer linear program to validate the results obtained by simulation of our algorithm.
- by P. Heegaard and +1
- Quality of Service, Heuristic algorithm
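The kind of integer linear program mentioned in the abstract above can be sketched as follows, assuming the open-source PuLP library and invented cost figures: binary variables assign components to nodes, and an auxiliary binary marks collaborations that cross node boundaries and therefore incur communication cost.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

components = ["c1", "c2", "c3"]
nodes      = ["n1", "n2"]
exec_cost  = {("c1","n1"): 2, ("c1","n2"): 3, ("c2","n1"): 4,
              ("c2","n2"): 1, ("c3","n1"): 3, ("c3","n2"): 3}
collabs    = [("c1", "c2", 5), ("c2", "c3", 2)]   # (comp, comp, comm cost)

prob = LpProblem("deployment", LpMinimize)
x = LpVariable.dicts("x", [(c, n) for c in components for n in nodes],
                     cat=LpBinary)
# y[a,b] = 1 iff collaboration (a,b) crosses node boundaries
y = LpVariable.dicts("y", [(a, b) for a, b, _ in collabs], cat=LpBinary)

# Objective: execution cost plus communication cost of remote collaborations
prob += (lpSum(exec_cost[c, n] * x[c, n] for c in components for n in nodes)
         + lpSum(k * y[a, b] for a, b, k in collabs))

for c in components:                      # each component on exactly one node
    prob += lpSum(x[c, n] for n in nodes) == 1
for a, b, _ in collabs:                   # force y = 1 when a and b differ
    for n in nodes:
        prob += y[a, b] >= x[a, n] - x[b, n]

prob.solve()
```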
We study the problem of efficient deployment of software components in a service engineering context. Run-time manipulation, adaptation, and composition of the entities forming a distributed service is a multi-faceted problem challenged by a number of requirements. The methodology applied and presented here can be viewed as an intersection between systems development and novel network management solutions. Applying heuristics, in particular artificial intelligence, in the service development cycle allows for optimization and should eventually grant the same benefits as those of distributed management architectures, such as increased dependability and better resource utilization. The aim is to find the optimal deployment mapping of components to physically available resources while satisfying all the non-functional requirements of the system design. Accordingly, a new component deployment approach is introduced that utilizes distributed stochastic optimization.
Combinatorial optimization algorithms are used in many and diverse applications; for instance, in the planning, management, and operation of manufacturing and logistic systems and communication networks. For scalability and dependability reasons, distributed and asynchronous implementations of these optimization algorithms have obvious advantages over centralized implementations. Several such algorithms have been proposed in the literature. Some are
Virtual path management in dynamic networks poses a number of challenges related to combinatorial optimization, fault handling, and traffic handling. Ideally, such management should react immediately to changes in operational conditions, and it should be autonomous, inherently robust, and distributed to ensure operational simplicity and network resilience. Swarm-intelligence-based self-management is a candidate potentially able to fulfil these requirements. Swarm intelligence achieved by cross-entropy (CE) ants is introduced, and two CE-ants-based path management approaches are presented. A case study of a nationwide communication infrastructure demonstrates their abilities to handle changes in network traffic as well as failures and restoration of links.
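The following is a heavily simplified sketch of CE-ant path search, assuming an invented toy graph and the simplification that each completed path reinforces its links in proportion to exp(-cost / gamma); it illustrates the general cross-entropy flavour, not the exact scheme of the paper.

```python
import math, random

graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
cost  = {("s","a"): 1, ("a","t"): 4, ("s","b"): 2, ("b","t"): 1}
tau   = {e: 1.0 for e in cost}      # pheromone per directed link
GAMMA, RHO = 2.0, 0.05              # CE temperature, evaporation rate

def walk(src="s", dst="t"):
    """One forward ant: next hop drawn from the pheromone distribution."""
    path, node = [], src
    while node != dst:
        nxt = random.choices(graph[node],
                             weights=[tau[(node, v)] for v in graph[node]])[0]
        path.append((node, nxt))
        node = nxt
    return path

for _ in range(500):                # backward ants update pheromones
    p = walk()
    c = sum(cost[e] for e in p)
    reward = math.exp(-c / GAMMA)   # CE-style performance weighting
    for e in tau:                   # evaporate everywhere, deposit on path
        tau[e] = (1 - RHO) * tau[e] + (RHO * reward if e in p else 0)

print(max(tau, key=tau.get))        # strongest link after convergence
```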
This paper describes a trace-driven, fast simulation approach for the performance evaluation of a multiplex of heterogeneous traffic streams with variable bit rate and long-lived serial correlation offered to routers in the Internet. A challenge with simulations of the Internet is the huge number of events needed for each event of interest, e.g., the loss or excessive delay of a packet. The efficiency of the trace-driven approach in this paper is improved by using importance sampling to provoke constellations of traces where losses and long delays are more likely. The approach is successfully applied to speed up the simulation of the multiplexing of heterogeneous MPEG-encoded video streams.
- by Ragnar Andreassen and +1
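One way to read "provoking constellations of traces", sketched below under toy assumptions: instead of drawing each stream's starting phase uniformly, draw it from a biased distribution g that favours aligning the streams' peak-rate frames, and weight every observation by the likelihood ratio u/g. The traces, the bias g, and the overload criterion are all our illustrative choices.

```python
import random

traces = [[10, 2, 2, 10, 2, 2], [8, 1, 8, 1, 8, 1]]   # toy frame sizes
CAPACITY = 15
period = 6
uniform = 1.0 / period

def biased_phase(trace):
    """Favour starting phases that put large frames first."""
    w = [trace[k] for k in range(period)]
    k = random.choices(range(period), weights=w)[0]
    return k, w[k] / sum(w)            # phase and its probability g(k)

def one_replication():
    lr, phases = 1.0, []
    for tr in traces:
        k, g = biased_phase(tr)
        phases.append(k)
        lr *= uniform / g              # importance-sampling correction
    # superpose the shifted traces and count overload slots
    overload = sum(
        1 for t in range(period)
        if sum(tr[(t + k) % period] for tr, k in zip(traces, phases))
           > CAPACITY)
    return lr * overload / period

est = sum(one_replication() for _ in range(20000)) / 20000
print(f"overload probability ~ {est:.4f}")
```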
The notion of specification frameworks transposes the framework approach from software development to the level of formal modeling and analysis. A specification framework is devoted to a special application domain. It supplies reusable specification modules and guides the construction of specifications. Moreover, it provides theorems to be used as building blocks of verifications. By means of a suitable framework, specification and verification tasks can be reduced to the selection, parametrization, and combination of framework elements, resulting in substantial support that opens formal analysis even to real-sized problems. The transfer protocol framework addressed here is devoted to the design of data transfer protocols. Specifications of used and provided communication services, as well as protocol specifications, can be composed from its specification modules. The theorems correspond to the relations between combinations of protocol mechanisms and those properties of the provided service which they implement. This article centers on the application of this framework, which is discussed with the help of the specification of a sliding window protocol. Moreover, the structure of its verification is described. The specification and verification technique applied is based on L. Lamport's temporal logic of actions (TLA). We use the variant cTLA, which particularly supports the modeling of process systems.
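As an executable illustration (not the cTLA model) of the mechanism the framework specifies, a sliding window sender keeps at most WINDOW unacknowledged frames in flight, and a cumulative ACK slides the window forward; names and the go-back-N retransmission choice are our assumptions.

```python
WINDOW = 4

class Sender:
    def __init__(self, data):
        self.data = list(data)
        self.base = 0            # oldest unacknowledged sequence number
        self.next_seq = 0        # next sequence number to send

    def sendable(self):
        """Frames allowed out under the current window."""
        frames = []
        while self.next_seq < min(self.base + WINDOW, len(self.data)):
            frames.append((self.next_seq, self.data[self.next_seq]))
            self.next_seq += 1
        return frames

    def on_ack(self, ack):
        """Cumulative ACK: everything below `ack` is confirmed."""
        if self.base < ack <= self.next_seq:
            self.base = ack

    def on_timeout(self):
        """Go-back-N style retransmission of the whole window."""
        self.next_seq = self.base

s = Sender("abcdefgh")
burst = s.sendable()      # frames 0..3 go out
s.on_ack(2)               # frames 0 and 1 confirmed; window slides
more = s.sendable()       # frames 4 and 5 are now permitted
```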
Presently, many communication protocols are under development that are tailored to efficient high-speed data transfer meeting different application-specific requirements. Our approach concentrates on a framework which facilitates the formal verification of such protocols. The framework supplies verified and re-usable implications between predefined protocol and service specification components. For the verification of a specific protocol, the protocol, service, and medium can be modelled as compositions of framework specification components. The verification then corresponds to proving that the system of protocol and medium implies the service. This implication can be proven by combining component implications of the framework. We apply L. Lamport's Temporal Logic of Actions (TLA) and use a TLA specification style supporting the compositional specification of process systems and the inference of system properties from process properties.
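Schematically (in our notation, not the paper's), the central proof obligation and its framework-based decomposition can be written as:

```latex
% Proof obligation: protocol entities P_1, P_2 composed with the
% medium M must together imply the service S:
P_1 \wedge P_2 \wedge M \;\Rightarrow\; S
% Framework reduction: chain pre-verified component implications,
% e.g. via an intermediate abstraction C:
(P_1 \wedge M \Rightarrow C) \;\wedge\; (C \wedge P_2 \Rightarrow S)
\;\vdash\; P_1 \wedge P_2 \wedge M \Rightarrow S
```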
Software component technology, on the one hand, supports the cost-effective development of specialized applications. On the other hand, it introduces special security problems. Some major problems can be solved by the automated run-time enforcement of security policies. Each component is controlled by a wrapper which monitors the component's behavior and checks its compliance with the security behavior constraints of the component's employment contract. Since control functions and wrappers can cause substantial overhead, we introduce trust-adapted control functions where the intensity of monitoring and behavior checking depends on the level of trust that the component, its hosting environment, and its vendor currently have in the eyes of the application administration. We report on wrappers and a trust information service, outline the embedding security model and architecture, and describe a JavaBeans-based experimental implementation.
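A hypothetical sketch of the trust-adapted idea: the fraction of intercepted calls that are fully checked shrinks as the trust value grows. The trust scale, the methods trust_of and contract_allows, and the spot-check policy are illustrative assumptions, not the paper's implementation.

```python
import random

class Wrapper:
    def __init__(self, component, trust_service):
        self.component = component
        self.trust_service = trust_service

    def call(self, method, *args):
        trust = self.trust_service.trust_of(self.component)   # in [0, 1]
        # Monitoring intensity: always check untrusted components,
        # only spot-check well-trusted ones.
        if random.random() > trust:
            self._enforce_policy(method, args)
        return getattr(self.component, method)(*args)

    def _enforce_policy(self, method, args):
        # Check the call against the component's employment contract.
        if not self.component.contract_allows(method, args):
            raise PermissionError(f"policy violation in {method}")
```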
Component-structured software, which is composed of independently developed software components, introduces new security problems. In particular, a component may attack components of its environment and, in consequence, spoil the application incorporating it. Therefore, to guard a system, we constrain the behavior of a component by ruling out the transmission of events between components which may cause harm. Security policies describing the behavior constraints are formally specified and, at runtime, so-called security wrappers monitor the interface traffic of components and check it for compliance with the specifications. Moreover, one can also use the specifications to prove formally that the combination of the component security policies fulfills certain security properties of the complete component-structured application. A well-known method to express system security properties is access control, which can be modelled by means of the popular Role-Based Access Control (RBAC) method. Below, we introduce a specification framework facilitating the formal proof that component security policy specifications fulfill RBAC-based application access control policies. The specification framework is based on the specification technique cTLA. The design of state-based security policy specifications and of RBAC models is supported by framework libraries of specification patterns which may be instantiated and composed into a specification. Moreover, the framework contains already-proven theorems facilitating the formal reasoning, since a deduction proof can be reduced to proof steps which correspond directly to the theorems. In particular, we introduce the specification framework and clarify its application by means of an e-commerce example.
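A minimal RBAC-style check of interface events, in the spirit of the wrapper checks described above; the roles, permissions, and event format are illustrative assumptions, not the paper's e-commerce model.

```python
ROLE_PERMS = {"customer": {"order.create", "order.view"},
              "clerk":    {"order.view", "order.ship"}}
USER_ROLES = {"alice": {"customer"}, "bob": {"clerk"}}

def permitted(user: str, operation: str) -> bool:
    """An event complies if some role of the user grants the operation."""
    return any(operation in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))

assert permitted("alice", "order.create")
assert not permitted("alice", "order.ship")   # wrapper would block this
```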
Transfer protocols are composed from basic protocol mechanisms, and accordingly a complex protocol can be verified by a series of relatively simple mechanism proofs. Our approach applies L. Lamport's Temporal Logic of Actions (TLA). It is based on a modular, compositional TLA style and supports the analysis of flexibly configured high-speed transfer protocols.
Software component technology supports the cost-effective design of applications suited to the particular needs of the application owners. This design method, however, introduces two new security risks. First, a malicious component may attack the application incorporating it. Second, an application owner may falsely incriminate a component designer for damage in his application which in reality was caused by somebody else. The first risk is addressed by security wrappers controlling the behavior at the component interface at runtime and enforcing certain security policies in order to protect the other components of the application against attacks from the monitored component. Moreover, we use trust management to reduce the significant performance overhead of the security wrappers: the kind and intensity of monitoring a component is adjusted according to the experience of other users with this component. To this end, a so-called trust information service collects positive and negative experience reports about the component from various users. Based on the reports, special trust values are computed which represent the belief or disbelief of all users in a component, or the uncertainty about it. The wrappers adjust the intensity of monitoring a component depending on its current trust value. In this paper, we focus on the second security risk. To prevent a component user from sending wrong reports that would result in a bad trust value for the component, which would therefore be wrongly incriminated, the trust information service also stores trust values of the component users. These trust values are based on valuations resulting from validity checks of the experience reports sent by the component users: an experience report is tested for consistency with a log of the component interface behavior, which the component user supplies together with the report, and the log itself is checked for correctness as well. By applying Jøsang's subjective logic, we make the degree to which the experience reports of a component user count towards the trust value of a component conditional upon the user's own trust value. Thus, users with a bad reputation cannot influence the trust value of a component, since their experience reports are discounted.
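For concreteness, here is a sketch of the beta-opinion mapping from Jøsang's subjective logic, as used for trust values of the kind described above: r positive and s negative experience reports give belief b, disbelief d, and uncertainty u, and a reported opinion is discounted by the reporter's own trustworthiness via the standard subjective-logic discounting operator. Parameter names and the example figures are ours.

```python
def opinion(r: float, s: float):
    """(belief, disbelief, uncertainty) with b + d + u = 1."""
    k = r + s + 2.0
    return r / k, s / k, 2.0 / k

def discount(reporter, reported):
    """Weight a reported opinion by the reporter's own trustworthiness:
    a distrusted user's reports collapse into uncertainty."""
    b1, d1, u1 = reporter
    b2, d2, u2 = reported
    return b1 * b2, b1 * d2, d1 + u1 + b1 * u2

good_user = opinion(r=40, s=1)       # well-reputed reporter
component = opinion(r=5,  s=5)       # their view of a component
print(discount(good_user, component))
```

Note that the discounted components still sum to one, and that a reporter with low belief b1 contributes almost pure uncertainty, which is exactly the property used to neutralize users with a bad reputation.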
The purpose of this paper is to formally describe new optimization models for distributed telecommunication networks. Modern distributed networks put more focus on the processing of information and less on the actual transportation of data than we are traditionally used to in telecommunications. This paper introduces new approaches for modelling decision support at operational, tactical and strategic levels. This is