An Economic Architecture for Cloud Computing


Google Tech Talk
May 8, 2009

Presented by Kevin Lai

ABSTRACT

Cloud computing and its predecessors, grid and utility computing, all address shared, on-demand computing at scale. To achieve sufficient scale to amortize costs, cloud computing emphasizes saving human time through 1) automated management, 2) easier programming using systems like MapReduce and Hadoop, and 3) shared and open access. Existing architectures for achieving these goals are composed of largely isolated resource allocation sub-systems (e.g., for MapReduce scheduling, virtualization, CPU, network bandwidth, power, etc.). These sub-systems have global impact but only local visibility, which can cause them to work against each other. The shared, open nature of cloud computing systems requires balancing the need for efficient local allocation of resources against the need to regulate and differentiate applications globally.

We take a clean-slate approach to designing a cloud computing architecture. We apply economic mechanisms to resources at every layer, from the high-level Hadoop system through the allocation of virtualized resources to physical servers. We find that this approach 1) simplifies system design, 2) provides more high-level optimization opportunities, 3) provides greater control over predictability, and 4) increases overall application utility.

Kevin Lai is a Research Scientist in the Social Computing Lab at HP Labs. He has done research on operating systems, mobile and wireless networking, network …
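
The talk does not include code, and the abstract does not say which economic mechanism the architecture uses. As one illustration of the general idea of letting applications express value through bids rather than relying on isolated, locally tuned schedulers, here is a minimal proportional-share sketch in Python; the application names and the choice of mechanism are assumptions for the example only.

```python
# Hypothetical sketch of a proportional-share (bid-based) allocator.
# Each application submits a bid from its budget and receives a share
# of a resource proportional to that bid. Illustration only; not the
# specific mechanism described in the talk.
from dataclasses import dataclass


@dataclass
class Bid:
    app: str       # application identifier (hypothetical name)
    amount: float  # currency the application is willing to spend


def proportional_share(bids: list[Bid], capacity: float) -> dict[str, float]:
    """Split `capacity` (e.g., CPU cores) among bidders in proportion to their bids."""
    total = sum(b.amount for b in bids)
    if total == 0:
        # No demand expressed: split evenly so the resource is not left idle.
        return {b.app: capacity / len(bids) for b in bids} if bids else {}
    return {b.app: capacity * b.amount / total for b in bids}


if __name__ == "__main__":
    bids = [Bid("hadoop-job-A", 3.0), Bid("hadoop-job-B", 1.0)]
    # With 16 cores available: job A gets 12.0, job B gets 4.0.
    print(proportional_share(bids, capacity=16))
```

The appeal of bid-based allocation in this setting is that the same currency can regulate demand across otherwise independent sub-systems (MapReduce scheduling, virtualization, bandwidth, power), giving them a shared, global signal instead of purely local visibility.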