One of the first steps that many organizations take when they begin thinking outside the data center is to convert physical servers to virtual machines. An array of Physical-to-Virtual (P2V) tools can help systems administrators inspect a physical server’s filesystems; package up the operating system, the applications, and the data; and create a virtual machine image for a virtualized environment to replace the physical server.
Systems administrators who have lived through this conversion process often ask us whether RightScale offers any virtual-to-cloud tools that similarly forklift servers from a virtualized environment to a cloud. We don’t, and a better question to ask is what business benefit such a tool would actually deliver.
The big benefit of forklifting physical servers into a virtualized environment is utilization. Virtualization allows organizations to share a physical server across many applications that formerly each underutilized their own physical box. The result is significant cost savings thanks to increased utilization (and standardization) of hardware. Unfortunately, there is no similar one-to-one gain to be had from forklifting servers from a virtualized environment to the cloud.
The benefits of the cloud are at a different level. Drawing from the NIST definition of cloud computing, the essential characteristics of a cloud are on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. But these are benefits for provisioning net new workloads – the cloud provides minimal value for migration of existing applications that run in a static fashion 24x7x365.
Cloud computing puts more of the decision-making, such as when to launch servers, closer to the end users of the compute resources. At a technical level, it shifts more of the responsibility for handling failures to the application architecture. No conversion tool is likely to help you take advantage of these benefits. In fact, forklifting virtual servers to cloud instances would most likely result in a reduction in service quality, because the assumptions made by virtualized application deployments do not match the properties of the cloud.
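To make that shift concrete, here is a minimal sketch (in Python, with hypothetical names) of failure handling moving into the application: rather than assuming the infrastructure never fails, the caller retries transient errors with exponential backoff and jitter.

```python
import random
import time

def call_with_retries(operation, attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff and jitter.

    In a cloud deployment the application, not the infrastructure,
    is expected to absorb transient instance and network failures.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            # back off exponentially, with jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt) * (0.5 + random.random())
            time.sleep(delay)

# Toy stand-in for a service call that fails twice before succeeding.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("instance unreachable")
    return "ok"

print(call_with_retries(flaky_service, base_delay=0.01))  # prints "ok"
```

An application forklifted from a virtualized environment typically lacks this layer entirely, because its original home made high-availability guarantees the cloud does not.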
Clouds allow infrastructure resources to be provisioned on demand, with dynamically assigned IP addresses, based on application load. However, data center operators are used to choosing the exact hardware they want to provision, down to the clock frequency of the CPUs. In addition, they have historically been able to choose specialized hardware such as GPUs and high-speed network and storage subsystems based on application needs. Clouds typically do not offer this level of flexibility; they are designed to provide generic, commodity hardware resources for broad consumption. Capabilities as simple as multiple network interfaces, static IP addresses, VLAN isolation, and shared storage have been commonplace in data centers for years, yet are not available from most clouds today.
One of the primary drivers for moving to cloud infrastructure is hyper-standardization of infrastructure configurations. Unfortunately, this does not come without a cost. Clouds offer predefined configurations with specified amounts of CPU and memory resources, which requires that application architectures be designed to fit the infrastructure resources available, rather than allowing the infrastructure to be customized to fit the needs of the application.
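As an illustration of designing to the menu rather than the reverse, here is a sketch with a hypothetical instance catalog (real cloud menus differ in names and sizes): the application states its needs and takes the smallest predefined size that fits.

```python
# Hypothetical instance catalog; real clouds publish similar fixed menus.
INSTANCE_TYPES = {
    "small":  {"vcpus": 1, "ram_gb": 2},
    "medium": {"vcpus": 2, "ram_gb": 4},
    "large":  {"vcpus": 4, "ram_gb": 8},
}

def smallest_fit(vcpus_needed, ram_gb_needed):
    """Pick the smallest predefined size that satisfies the app's needs.

    There is no option to order a custom box: if nothing on the menu
    fits, the application has to be re-architected or sharded.
    """
    for name, spec in sorted(INSTANCE_TYPES.items(),
                             key=lambda kv: (kv[1]["vcpus"], kv[1]["ram_gb"])):
        if spec["vcpus"] >= vcpus_needed and spec["ram_gb"] >= ram_gb_needed:
            return name
    raise ValueError("no predefined type fits; re-architect the workload")

print(smallest_fit(2, 3))  # prints "medium"
```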
As an example of how the cloud demands a new kind of planning, consider network addressing. Applications have historically been provisioned in data centers using static IP addresses. The assumption was that the infrastructure would be designed for high availability and would rarely fail. If an app required additional resources, an admin would add them to the existing VM without changing IP addresses. By contrast, clouds assign IP addresses to VMs dynamically by default, and VMs must be re-created to add additional resources.
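A cloud-ready application therefore treats a name, not an IP address, as the stable handle for a service. A minimal sketch (the service name below is hypothetical) of resolving the endpoint at connect time:

```python
import socket

# Hypothetical service name. In a cloud, the instance behind a role is
# re-created over time, so the name (mapped via DNS or a load balancer)
# stays stable while the underlying IP address changes.
SERVICE_NAME = "db.internal.example.com"

def current_endpoint(name, port):
    """Resolve the service address at connect time, rather than baking
    a static IP into configuration that a replacement instance
    (with a freshly assigned address) would invalidate."""
    return (socket.gethostbyname(name), port)

# Called on every (re)connect, e.g. current_endpoint(SERVICE_NAME, 5432),
# so a re-created instance is picked up automatically.
```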
On the flip side, applications designed with a loosely coupled architecture can benefit significantly from cloud infrastructure (we have a white paper on this topic). A loosely coupled architecture typically consists of multiple application tiers with load balancers and queues in between, which allows for dynamic, horizontal scalability without requiring high-performance, highly reliable, and ultimately expensive hardware. This architecture works well for any application built around a queuing system, or designed to use automation to exploit ephemeral infrastructure resources. If a downstream service is unavailable, the data or work object simply sits in a queue until that service comes back, and the rest of the system continues to function rather than coming to a standstill due to the outage of a single component.

Thus the cloud provides much more than VMs on demand. With the proper level of automation, the infrastructure can actually be application-aware, adjusting as application requirements change over time. This is the true power of the cloud, and a key driving force behind the DevOps movement.
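The queue-based decoupling described above can be sketched with Python’s standard library (a toy in-process stand-in for a real message queue):

```python
import queue
import threading

# Loose coupling via a work queue: the producer keeps enqueuing even
# while the consumer (standing in for a downstream service) is not yet
# running; work simply waits in the queue until the service is available.
work = queue.Queue()
results = []

def producer(n):
    for i in range(n):
        work.put(i)              # never blocks on the consumer's health

def consumer():
    while True:
        item = work.get()
        if item is None:         # sentinel: shut down cleanly
            break
        results.append(item * 2)  # stand-in for real processing
        work.task_done()

t = threading.Thread(target=consumer)
producer(5)                      # enqueue before the consumer even starts
t.start()                        # "service" comes online; backlog drains
work.join()                      # wait until all queued work is processed
work.put(None)
t.join()
print(results)                   # prints [0, 2, 4, 6, 8]
```

In production the in-process `queue.Queue` would be replaced by a durable message queue service, but the property is the same: an outage on one side of the queue does not stall the other.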
You will get the greatest benefit from the cloud with applications designed around a loosely coupled architecture, so your time is best spent focusing there. As we saw during the transition from mainframes to client/server, a new generation of applications will be designed to take advantage of cloud infrastructure. The benefits the cloud provides will appeal to users, who will migrate to the new apps over time.
As for your legacy applications, we recommend the “if it isn’t broken, don’t fix it” approach. If they run well today, why move them to the cloud? Let them continue to run where they are – but plan to provision new apps in the cloud.