VPLACEMENT: Contention Aware Virtual Machine Placement System
Maximizing the number of cohosted virtual machines (VMs) while maintaining the desired performance level is a critical goal in cloud computing. As more VMs are packed onto a physical machine (PM), resource contention increases, degrading response time. The virtual machine placement problem has been studied extensively, and most of the effort has gone into either allocating more resources to VMs (resizing) or migrating them to a higher-capacity PM based on resource demand estimation. Studies have also shown that, in the presence of resource contention, resource demand estimation mechanisms can predict more resource requirements than are actually needed. Hence, deciding VM placement and resource allocation based on utilization estimation can lead to inefficient use of PM resources.

We propose a novel approach to this problem that focuses on overall application response time rather than on individual VMs. Large-scale applications are deployed as multi-tier components that interact with each other to perform the application's task. Our placement algorithm uses the dependency relationships between these components to understand the application's response-time behavior, and focuses on reducing the performance degradation caused by resource contention.

We propose a VM placement system termed VPlacement. The system uses traffic analysis to infer the dependency relationships between application components. This dependency and traffic analysis provides vital data such as the impact of a component's processing time on application response time and the probability of resource contention between a pair of component nodes (co-arrival probability). The placement engine of VPlacement uses the impact and co-arrival probability to minimize application performance degradation due to resource contention by cohosting low-impact component nodes together.
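To make the placement idea concrete, the following is a minimal, hypothetical sketch of a greedy engine in this spirit. It is not the system's actual algorithm; the function name, inputs (per-node `impact` scores and pairwise `coarrival` probabilities), and the contention-cost heuristic are all illustrative assumptions.

```python
def place_components(impact, coarrival, pm_capacity, num_pms):
    """Illustrative greedy placement (not VPlacement's actual algorithm).

    impact:    dict node -> impact of its processing time on app response time
    coarrival: dict (node_a, node_b) (sorted tuple) -> probability that both
               nodes are busy at the same time (co-arrival probability)
    Returns a list of PMs, each a list of cohosted component nodes.
    """
    pms = [[] for _ in range(num_pms)]

    # Place the highest-impact nodes first so they land on the
    # least-contended PMs.
    for node in sorted(impact, key=impact.get, reverse=True):

        def contention_cost(pm):
            # Expected degradation from cohosting `node` with the PM's current
            # tenants: co-arrival probability weighted by the combined impact.
            return sum(
                coarrival.get(tuple(sorted((node, other))), 0.0)
                * (impact[node] + impact[other])
                for other in pm
            )

        # Among PMs with free capacity, pick the one that adds the least
        # contention cost; low-impact nodes naturally end up cohosted.
        candidates = [pm for pm in pms if len(pm) < pm_capacity]
        best = min(candidates, key=contention_cost)
        best.append(node)
    return pms
```

A usage sketch with made-up components: `place_components({"web": 0.9, "app": 0.7, "db": 0.8, "cache": 0.1, "log": 0.05}, {("app", "web"): 0.8, ("app", "db"): 0.7, ("db", "web"): 0.5}, pm_capacity=2, num_pms=3)` spreads the high-impact, frequently co-arriving tiers across PMs while letting the low-impact nodes share hosts.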