Papers by Thierry Turletti
HAL (Le Centre pour la Communication Scientifique Directe), Nov 1, 2016
With the advent of virtualization and network function softwarization, the networking world is shifting to Software Defined Networking (SDN). The OpenFlow protocol is one of the most suitable candidates to implement the SDN concept. Meanwhile, the generalization of broadband Internet access (mobile, cable, DSL, fiber, etc.) has led to massive content consumption. However, while content is usually retrieved via layer-7 protocols, OpenFlow operates at lower layers (layer 4 or below), making the protocol ineffective for dealing with content. To address this issue, we proposed and developed an API to manage content in OpenFlow networks. We implemented this API using open source software and studied the impact of the logical centralization advocated by SDN on network performance.
2023 IEEE Wireless Communications and Networking Conference (WCNC)
Ray Tracing is an electromagnetic wave propagation modeling approach used for the accurate generation of Quality of Service (QoS) maps in mobile networks. Due to its complexity, current implementations of Ray Tracing fail to generate such maps over wide areas. In this paper, we propose an optimization of Ray Tracing that can accurately generate QoS maps in a reasonable time. Using a site-specific ray launching technique and an alternative to the reception test process, we reduce the execution time of Ray Tracing by a factor of almost 1200, with less than 2% of the memory usage of baseline solutions.
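The paper's optimized ray launcher is not public here, but the kind of output it targets, a received-power (QoS) map over a grid, can be illustrated with a much simpler free-space path-loss model. The sketch below is an assumption-laden baseline, not the authors' method: it ignores reflections and obstacles entirely and just evaluates the standard FSPL formula at each grid point.

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in meters, frequency in MHz)."""
    d_km = distance_m / 1000.0
    return 20 * math.log10(d_km) + 20 * math.log10(freq_mhz) + 32.44

def coverage_map(tx_power_dbm: float, freq_mhz: float, size_m: int, step_m: int):
    """Received-power map on a square grid with the transmitter at the origin."""
    grid = {}
    for x in range(step_m, size_m + 1, step_m):
        for y in range(step_m, size_m + 1, step_m):
            d = math.hypot(x, y)  # distance from the transmitter at (0, 0)
            grid[(x, y)] = tx_power_dbm - fspl_db(d, freq_mhz)
    return grid

# Hypothetical 3.5 GHz cell, 43 dBm transmit power, 200 m x 200 m area.
rsrp = coverage_map(tx_power_dbm=43.0, freq_mhz=3500.0, size_m=200, step_m=50)
```

A real ray-tracing map replaces `fspl_db` with per-ray path accumulation over reflections and diffractions, which is exactly the step whose cost the paper attacks.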
2022 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN)
Proceedings of the 12th International Workshop on Wireless Network Testbeds, Experimental Evaluation & Characterization
HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
2021 IEEE 7th International Conference on Network Softwarization (NetSoft)
Many companies and organizations are moving their applications from on-premises data centers to the cloud. Cloud infrastructures can potentially provide a virtually unlimited amount of computation (e.g., Elastic Compute Cloud) and storage (e.g., Simple Storage Service). In addition, all cloud providers propose different offers: IaaS, PaaS, and SaaS. This demo focuses on IaaS services, presenting a simple tool to measure network delay in a virtual infrastructure built entirely in the cloud. These measurements are useful for organizations that are moving current applications to, or creating new applications in, the cloud, but have requirements on the maximum, or average, network delay that these applications can tolerate. We present CloudTrace, a simple CLI tool that creates regional and multi-regional experiments to measure delay, using Amazon AWS.
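CloudTrace itself is not shown in this abstract, so the sketch below only illustrates the underlying idea of an active delay measurement between two endpoints: timing TCP connection establishment and reporting average and maximum values, as a delay-measurement CLI might. The local listener stands in for a remote cloud VM and is purely a demo assumption.

```python
import socket
import statistics
import threading
import time

def tcp_connect_rtt(host: str, port: int, samples: int = 5) -> dict:
    """Time TCP connection establishment as a rough network-delay proxy."""
    delays_ms = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # we only measure setup time, then close
        delays_ms.append((time.perf_counter() - t0) * 1000.0)
    return {"avg_ms": statistics.mean(delays_ms), "max_ms": max(delays_ms)}

# Throwaway local listener standing in for a cloud VM endpoint.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(16)
port = srv.getsockname()[1]
threading.Thread(
    target=lambda: [srv.accept()[0].close() for _ in range(5)],
    daemon=True,
).start()

stats = tcp_connect_rtt("127.0.0.1", port)
```

Between real cloud regions the same loop would target instance addresses, and the interesting output is precisely the average/maximum split the abstract mentions.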
SSRN Electronic Journal
5G enhanced Mobile Broadband (eMBB) aims to provide users with a peak data rate of 20 Gbps in the Radio Access Network (RAN). However, since most Congestion Control Algorithms (CCAs) rely on startup and probe phases to discover the bottleneck bandwidth, they cannot quickly utilize the available RAN bandwidth and adapt to fast capacity changes without introducing a large delay increase, especially when multiple flows share the same Radio Link Control (RLC) buffer. To tackle this issue, we propose RAPID, a RAN-aware proxy-based flow control mechanism that prevents CCAs from overshooting the available RAN capacity while allowing near-optimal link utilization. Based on the analysis of up-to-date radio information obtained via Multi-access Edge Computing (MEC) services and of packet arrival rates, RAPID is able to differentiate slow interactive flows from fast download flows and allocate the available bandwidth accordingly. Our simulation and experimentation results with concurrent Cubic and BBR flows show that RAPID can reduce the delay increase by a factor of 10 to 50 in both Line-of-Sight (LOS) and Non-LOS (NLOS) conditions while preserving high throughput in both 4G and 5G environments.
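The abstract's core idea, telling interactive flows apart from downloads by packet arrival rate and splitting capacity between the two classes, can be sketched in a few lines. This is an illustrative toy, not RAPID's actual mechanism: the threshold, the fixed interactive share, and the flow names are all assumptions.

```python
def classify_flows(arrival_rates_pps: dict, threshold_pps: float = 1000.0):
    """Split flows into interactive vs. bulk by observed packet arrival rate."""
    interactive = {f for f, r in arrival_rates_pps.items() if r < threshold_pps}
    bulk = set(arrival_rates_pps) - interactive
    return interactive, bulk

def allocate(capacity_mbps: float, interactive, bulk, interactive_share=0.2):
    """Reserve a small share for interactive flows; split the rest among bulk."""
    alloc = {}
    if interactive:
        per_i = capacity_mbps * interactive_share / len(interactive)
        alloc.update({f: per_i for f in interactive})
    if bulk:
        share = (1.0 - interactive_share) if interactive else 1.0
        per_b = capacity_mbps * share / len(bulk)
        alloc.update({f: per_b for f in bulk})
    return alloc

# Hypothetical flows sharing one RLC buffer: one SSH session, two downloads.
rates = {"ssh": 50.0, "dl1": 5000.0, "dl2": 8000.0}
interactive, bulk = classify_flows(rates)
allocation = allocate(100.0, interactive, bulk)
```

A real RAN-aware proxy would additionally feed live MEC radio reports into the capacity estimate instead of using a static `capacity_mbps`.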
2015 IEEE 14th International Symposium on Network Computing and Applications, 2015
Nowadays, content retrieval dominates Internet usage: user communications are no longer tied to host interconnection. Information Centric Networking (ICN) models have been proposed to cope with these changes. The new paradigm redesigns the Internet architecture to bring content to the first level. Over the last decade, many key projects have proposed a broad spectrum of solutions to rebuild networking primitives around content. One important and direct challenge of this shift is the large number of routing states caused by identifying contents rather than hosts. In this paper, we focus on DONA, one of the first ICN architectures, and analyse the memory space required to store routing states. Our study shows that today's technologies cannot satisfy content routing needs. We therefore propose an enhancement of DONA, called BADONA, that uses a Bloom filter to drastically reduce memory usage. Finally, we evaluate the performance of our proposal to underscore its contribution.
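The space saving behind a Bloom filter is easy to show concretely: instead of storing every content name, a node keeps a fixed-size bit array and k hash positions per name, accepting a small false-positive rate but never a false negative. The sketch below is a generic Bloom filter, a minimal illustration of the data structure BADONA relies on, not BADONA itself; the sizes and the example content name are assumptions.

```python
import hashlib

class BloomFilter:
    """Fixed-size bit array with k hash functions; no false negatives."""

    def __init__(self, size_bits: int = 8192, num_hashes: int = 4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, name: str):
        # Derive k positions by salting the name with the hash index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, name: str):
        for p in self._positions(name):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, name: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(name))

# 1 KB of state regardless of how many names are registered.
fib = BloomFilter()
fib.add("/inria/videos/demo.mp4")
```

The trade-off is exactly the one the paper exploits: memory stays constant in the number of names, at the cost of occasional false positives that the routing layer must tolerate.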
Qualitative Health Research, 1997
Proceedings of the 15th International Conference on emerging Networking EXperiments and Technologies
Networks have become so complex and technical that it is now hard, if not impossible, to model or simulate them. Consequently, more and more researchers rely on prototypes emulated in controlled environments, and Mininet is by far the most popular tool. Mininet implements a simple yet powerful API to define and run network experiments on a single machine. In most cases, running experiments on one machine is adequate, but for resource-intensive applications one machine may not be sufficient. For that reason, we propose Distrinet, a way to distribute Mininet over multiple hosts. Distrinet uses the same API as Mininet, granting full compatibility with Mininet programs. Distrinet is generic and can optimally deploy experiments on Linux clusters or in public clouds, automatically minimizing the resources consumed in the experimental infrastructure.
2019 IEEE 8th International Conference on Cloud Networking (CloudNet)
With the emergence of Network Function Virtualization (NFV) and Software Defined Networking (SDN), efficient network algorithms considered too hard to put into practice in the past now have a second chance to be considered. In this context, we rethink the network dimensioning problem with protection against Shared Risk Link Group (SRLG) failures. In this paper, we consider a path-based protection scheme with a global rerouting strategy, in which, for each failure situation, there may be a new routing of all the demands. Our optimization task is to minimize the required amount of bandwidth. After discussing the hardness of the problem, we develop a scalable mathematical model that we handle using the Column Generation technique. Through extensive simulations on real-world IP network topologies and on randomly generated instances, we show the effectiveness of our method. Finally, our implementation in OpenDaylight demonstrates the feasibility of the approach, and its evaluation with Mininet shows that technical implementation choices may have a dramatic impact on the time needed to reestablish flows after a failure occurs.
ICC 2021 - IEEE International Conference on Communications
To handle the ever-growing demand for resource-intensive experiments, distributed network emulation tools such as Mininet and Maxinet have been proposed. They automatically allocate experimental resources. In this work, we show that resources are poorly allocated, leading to resource overloading and hence to dubious experimental results. This is why we propose and implement a new placement module for distributed emulation. Our algorithms take into account both link and node resources and minimize the number of physical hosts needed to carry out the emulation. Through extensive numerical evaluations, simulations, and actual experiments, we show that our placement methods outperform existing ones and re-establish trust in experimental results.
HAL (Le Centre pour la Communication Scientifique Directe), Feb 5, 2021
With the increased complexity of today's networks, emulation has become an essential tool to test and validate a newly proposed networking solution. As these solutions also become more and more complex with the introduction of softwarization, network function virtualization, and artificial intelligence, there is a need for scalable tools to carry out resource-intensive emulations. To this end, distributed emulation has been proposed. However, distributing a network emulation over a physical platform requires carefully choosing how the experiment is run on the available equipment. In this work, we evaluate the placement algorithms that were proposed for, and implemented in, existing distributed emulation tools. We show that they may lead to bad placements in which several hardware resources, such as link bandwidth, CPU, and memory, are overloaded. Through extensive experiments, we exhibit the impact of such placements on important network metrics, such as real network bandwidth usage and emulation execution time, and show that they may lead to unreliable results and to a waste of platform resources. To deal with this issue, we propose and implement a new placement module for distributed emulation. Our algorithms take into account both link and node resources and minimize the number of physical hosts needed to carry out the emulation. Through extensive numerical evaluations, simulations, and experiments, we show that our placement methods outperform existing ones, leading to reliable experiments using a minimum number of resources.
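The placement problem described here is, at its core, a bin-packing problem: assign emulated nodes to the fewest physical hosts without overloading any of them. The sketch below shows only a first-fit-decreasing heuristic on a single CPU dimension; it is a simplified stand-in, since the paper's module also accounts for link bandwidth and memory, and the capacities used are invented for the example.

```python
def place_nodes(node_cpu: dict, host_capacity: float):
    """First-fit decreasing: pack emulated nodes onto the fewest hosts.

    node_cpu maps node name -> CPU demand (fraction of one host).
    Returns a list of hosts, each with its remaining capacity and nodes.
    """
    hosts = []  # each host: {"free": remaining CPU, "nodes": [...]}
    for node, cpu in sorted(node_cpu.items(), key=lambda kv: -kv[1]):
        if cpu > host_capacity:
            raise ValueError(f"{node} exceeds a single host's capacity")
        for h in hosts:  # first host with enough room wins
            if h["free"] >= cpu:
                h["free"] -= cpu
                h["nodes"].append(node)
                break
        else:  # no existing host fits: open a new one
            hosts.append({"free": host_capacity - cpu, "nodes": [node]})
    return hosts

# Hypothetical emulated topology: two hosts and two switches to place.
demand = {"h1": 0.5, "h2": 0.4, "s1": 0.6, "s2": 0.3}
placement = place_nodes(demand, host_capacity=1.0)
```

Adding the link-bandwidth dimension turns each feasibility check into a multi-resource test, which is what makes the placements evaluated in the paper easy to get wrong.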