TOPIC: 1. The Akamai Network: A Platform For High-Performance Internet
ABSTRACT: Comprising more than 61,000 servers located across nearly 1,000
networks in 70 countries worldwide, the Akamai platform delivers hundreds of
billions of Internet interactions daily, helping thousands of enterprises boost the
performance and reliability of their Internet applications. In this paper, we give an
overview of the components and capabilities of this large-scale distributed
computing platform, and offer some insight into its architecture, design principles,
operation, and management.
TOPIC: 2. Energy information transmission tradeoff in green cloud computing
ABSTRACT:
With the rise of Internet-scale systems and cloud computing services, there is an
increasing trend towards building massive, energy-hungry, and geographically
distributed data centers. Due to their enormous energy consumption, data centers
are expected to have a major impact on the electric grid and potentially the amount
of greenhouse gas emissions and carbon footprint. In this regard, the locations that
are selected to build future data centers as well as the service load to be routed to
each data center after it is built need to be carefully studied given various
environmental, cost, and quality-of-service considerations. To gain insights into
these problems, we develop an optimization-based framework, where the objective
functions range from minimizing the energy cost to minimizing the carbon
footprint subject to essential quality-of-service constraints. We show that in
multiple scenarios, these objectives can be conflicting, leading to an
energy-information tradeoff in green cloud computing.
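The cost-vs-carbon tradeoff described above can be illustrated with a toy load-routing sketch. This is not the paper's framework: the data center names, prices, carbon intensities, and capacities below are made-up assumptions, and the weighted per-unit greedy rule is only a simple stand-in for a full optimization with quality-of-service constraints.

```python
# Hypothetical illustration of the energy-cost vs. carbon-footprint
# tradeoff: route a fixed service load across data centers that differ
# in electricity price and carbon intensity. All numbers are invented.

def route_load(total_load, centers, weight):
    """Assign load greedily by weighted per-unit score
    weight * price + (1 - weight) * carbon, subject to capacity.
    With linear costs this greedy order is optimal for the toy model."""
    order = sorted(centers,
                   key=lambda c: weight * c["price"] + (1 - weight) * c["carbon"])
    assignment, remaining = {}, total_load
    for c in order:
        take = min(remaining, c["cap"])
        assignment[c["name"]] = take
        remaining -= take
    return assignment

centers = [
    # name, $/MWh, kg CO2/MWh, capacity (MW) -- illustrative values only
    {"name": "coal_cheap",   "price": 30.0, "carbon": 900.0, "cap": 60.0},
    {"name": "hydro_pricey", "price": 55.0, "carbon":  20.0, "cap": 60.0},
]

cost_optimal  = route_load(100.0, centers, weight=1.0)  # minimize cost only
green_optimal = route_load(100.0, centers, weight=0.0)  # minimize carbon only
```

The two extreme weightings disagree on where the bulk of the load should go (the cheap coal-powered site versus the low-carbon hydro site), which is exactly the kind of conflict between objectives the abstract refers to. Note that for intermediate weights, price and carbon would need to be normalized to comparable scales.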
TOPIC: 3. Intelligent Placement of Datacenters for Internet Services
ABSTRACT:
Power consumption imposes a significant cost for data centers implementing cloud
services, yet much of that power is used to maintain excess service capacity during
periods of predictably low load. This paper investigates how much can be saved by
dynamically `right-sizing' the data center by turning off servers during such
periods, and how to achieve that saving via an online algorithm. We prove that the
optimal offline algorithm for dynamic right-sizing has a simple structure when
viewed in reverse time, and this structure is exploited to develop a new `lazy'
online algorithm, which is proven to be 3-competitive. We validate the algorithm
using traces from two real data center workloads and show that significant
cost savings are possible.
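The idea of "lazy" right-sizing can be sketched as follows. This is a deliberately simplified break-even heuristic, not the paper's 3-competitive lazy online algorithm: it scales up eagerly but delays scale-downs until the excess capacity has persisted long enough to amortize an assumed server toggling cost. All parameter values are illustrative.

```python
# Simplified "lazy" capacity controller in the spirit of dynamic
# right-sizing (NOT the paper's 3-competitive algorithm): scale up
# immediately when demand exceeds capacity, but power servers down only
# after the excess has lasted long enough to pay back the toggle cost.

def lazy_rightsize(demand, toggle_cost, idle_cost_per_step):
    """Return the number of active servers chosen at each time step."""
    # Steps of saved idle cost needed to amortize one server toggle.
    breakeven = toggle_cost / idle_cost_per_step
    servers, excess_age, plan = 0, 0, []
    for d in demand:
        if d > servers:            # scale up eagerly to meet demand
            servers, excess_age = d, 0
        elif d < servers:          # scale down lazily
            excess_age += 1
            if excess_age >= breakeven:
                servers, excess_age = d, 0
        else:
            excess_age = 0
        plan.append(servers)
    return plan

plan = lazy_rightsize([3, 5, 5, 2, 2, 2, 2, 5],
                      toggle_cost=3.0, idle_cost_per_step=1.0)
```

With a break-even horizon of three steps, the controller holds five servers through the first steps of low demand and drops to two only once the lull has lasted long enough to justify the toggle, which is the "lazy" behavior the abstract alludes to.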
TOPIC: 5. Cutting the Electric Bill for Internet-Scale Systems
ABSTRACT: