Papers by Gerrit Huizenga

Linux-VServer is a lightweight virtualization system used to create many independent containers under a common Linux kernel. To applications and to the user of a Linux-VServer based system, such a container appears just like a separate host. The Linux-VServer approach to kernel subsystem containerization is based on the concept of context isolation: the kernel is modified to isolate a container into a separate, logical execution context so that it cannot see or affect processes, files, network traffic, global IPC/SHM, etc., belonging to another container. Linux-VServer has been around for several years, and its fundamental design goal is to be an extremely low overhead yet highly flexible production-quality solution. It is actively used in situations requiring strong isolation where overall system efficiency is important, such as web hosting centers, server consolidation, high-performance clusters, and embedded systems.
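The context isolation the abstract describes can be pictured with a toy model (a sketch only, not VServer kernel code): each process carries a context id, and the kernel filters every task-list walk by that id, so a guest never sees processes from another context. The `xid` field name follows VServer terminology; the rest is illustrative.

```python
# Toy model of Linux-VServer-style context isolation (illustration only,
# not actual kernel code): every process carries a context id (xid), and
# a process can only "see" processes tagged with its own xid.
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    xid: int   # security context id; xid 0 is the host context here

def visible(viewer: Process, procs: list[Process]) -> list[int]:
    """Return the pids the viewer can see, the way the modified kernel
    filters the task list when a container reads /proc."""
    return [p.pid for p in procs if p.xid == viewer.xid]

procs = [Process(1, 0), Process(100, 1), Process(101, 1), Process(200, 2)]
guest = Process(102, 1)
print(visible(guest, procs))   # a guest in context 1 sees only context-1 pids
```

The same filter, applied uniformly to files, sockets, and IPC objects, is what makes the container look like a separate host at very low cost.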

The Linux 2.6 release provides four disk I/O schedulers: deadline, anticipatory, noop, and completely fair queuing (CFQ), along with an option to select one of these four at boot time or runtime. The selection is based on a priori knowledge of the workload, file system, and I/O system hardware configuration, among other factors. The anticipatory scheduler (AS) is the default. Although the AS performs well under many situations, we have identified cases, under certain combinations of workloads, where the AS leads to process starvation. To mitigate this problem, we implemented an extension to the AS (called Cooperative AS or CAS) and compared its performance with the other four schedulers. This paper briefly describes the AS and the related deadline scheduler, highlighting their shortcomings; in addition, it gives a detailed description of the CAS. We report performance of all five schedulers on a set of workloads, which represent a wide range of I/O behavior. The study shows that (1)...
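The failure mode the abstract alludes to can be sketched with a toy simulation (assumed cost numbers, not the actual kernel code): when two "cooperative" processes interleave one sequential stream, per-process anticipation waits for the last-serviced process and the idle timeout is charged on every request, which a CAS-like policy that anticipates across cooperating processes avoids.

```python
# Toy discrete-cost model of anticipatory scheduling (assumption-laden
# sketch, not kernel code). Two cooperative processes A and B interleave
# one sequential read stream; the next sequential block always arrives
# from the *other* process, so per-process anticipation idles every time.
ANTICIPATION_COST = 6   # hypothetical ms idled before anticipation gives up

def service_time(requests, anticipate_across_processes):
    """requests: list of (process, block). Charge 1 ms per request, plus
    the anticipation timeout whenever the next request comes from a
    different process and we only anticipated the last-serviced one."""
    total, last_proc = 0, None
    for proc, _block in requests:
        total += 1
        if last_proc is not None and proc != last_proc \
                and not anticipate_across_processes:
            total += ANTICIPATION_COST   # idled waiting for last_proc
        last_proc = proc
    return total

# A and B alternately read consecutive blocks of the same file:
workload = [("A", 0), ("B", 1), ("A", 2), ("B", 3), ("A", 4), ("B", 5)]
print("AS-like: ", service_time(workload, False))   # 36
print("CAS-like:", service_time(workload, True))    # 6
```

The real CAS detects cooperating processes by the proximity of their requests on disk; the point of the sketch is only the cost asymmetry, not the detection heuristic.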

Cache memory compression (or compressed caching) was originally developed for desktop and server platforms, but has also attracted interest on embedded systems, where memory is generally a scarce resource and hardware changes bring added cost and energy consumption. Cache memory compression brings a considerable advantage in I/O-intensive applications by providing a virtually larger cache for the local file system through compression algorithms. As a result, it increases the probability of finding the necessary data in RAM itself, avoiding slow calls to local storage. This work evaluates an Open Source implementation of cache memory compression applied to Linux on an embedded platform, dealing with the unavoidable processor and memory resource limitations as well as with existing architectural differences. We will describe the Compressed Cache (CCache) design, the compression algorithm used, memory behavior tests, performance and power consumption over...
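The core trade the abstract describes can be illustrated with a minimal sketch using zlib (CCache's actual algorithms and page handling differ): pages evicted from the regular page cache are kept compressed in a RAM pool, so a later access is served by decompression instead of a storage read.

```python
# Sketch of the compressed-caching idea (illustration only; CCache's real
# design differs): evicted pages are compressed into a RAM pool, trading
# CPU cycles for a virtually larger cache.
import zlib

PAGE = 4096
pool = {}                      # page number -> compressed bytes

def evict(pageno: int, data: bytes):
    pool[pageno] = zlib.compress(data)

def lookup(pageno: int):
    """Return page data from the compressed pool, or None (go to disk)."""
    c = pool.get(pageno)
    return zlib.decompress(c) if c is not None else None

# A compressible 4 KiB page of repetitive text:
page = (b"log line 42: status=ok\n" * 200)[:PAGE]
evict(7, page)
print(f"stored {PAGE} bytes as {len(pool[7])}")   # far fewer than 4096
```

On an embedded platform the interesting question, which the paper measures, is whether the compression CPU cost and its power draw stay below the cost of the storage accesses it avoids.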

NUMA is becoming more widespread in the marketplace, used on many systems, small or large, particularly with the advent of AMD Opteron systems. This paper will cover a summary of the current state of NUMA and future developments, encompassing the VM subsystem, scheduler, topology (CPU, memory, and I/O layouts, including complex non-uniform layouts), userspace interface APIs, and network and disk I/O locality. It will take a broad-based approach, focusing on the challenges of creating subsystems that work for all machines (including AMD64, PPC64, IA-32, IA-64, etc.), rather than just one architecture.

1 What is a NUMA machine?

NUMA stands for non-uniform memory architecture. Typically this means that not all memory is the same “distance” from each CPU in the system, but it also applies to other features such as I/O buses. The word “distance” in this context is generally used to refer to both latency and bandwidth. Typically, NUMA machines can access any resource in the system, just at diffe...
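The "distance" notion can be made concrete with a toy cost model (the numbers are assumptions in the style of an ACPI SLIT table, where local access is conventionally 10 and remote access larger): the total cost of a task's memory accesses depends on which node its pages land on.

```python
# Toy NUMA distance model (assumed SLIT-style numbers, not real hardware
# data): access cost scales with the distance between the CPU's node and
# the node holding the memory.
DIST = [           # DIST[cpu_node][mem_node], relative latency units
    [10, 20],      # node 0: local = 10, remote = 20
    [20, 10],      # node 1
]

def access_cost(cpu_node: int, pages_per_node: list[int]) -> int:
    """Total relative cost for a task on cpu_node touching the given
    number of pages on each node once."""
    return sum(n * DIST[cpu_node][m] for m, n in enumerate(pages_per_node))

# 1000 pages all local vs. all remote, for a task running on node 0:
print(access_cost(0, [1000, 0]))   # 10000
print(access_cost(0, [0, 1000]))   # 20000
```

This 2x gap in the toy numbers is why the VM subsystem, scheduler, and I/O paths discussed in the paper all try to keep a task and its resources on the same node.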
Application Failure Recovery
Metadata-integrated debugger
Apparatus and Method for Selective Power Reduction of Memory Hardware
Optimization of System Performance Through Scenario Evaluation
Method of managing resources within a set of processes
Conference organizers
Journal of Non-Crystalline Solids, 2010
Method For Balancing Resource Sharing And Application Latency Within A Data Processing System
Transferring annotations across versions of the data
Method for safely accessing shared storage
Dynamic method for configuring a computer system
Method for routing I/O data in a multiprocessor system having a non-uniform memory access architecture