Virtualisation Fundamentals - Part 1
What's this virtualisation stuff all about? In the first of a series of blogs, I will be investigating what virtualisation is and why people are doing it. In subsequent parts, we will look at the how of it, the danger zones, and then get more specific.
Some history: Virtualisation was invented many moons ago by IBM (back in the 1960s, on its mainframes). However, it came to the fore with the rise of client-server computing and the x86 platform.
As companies started implementing multi-tiered applications, they found that each tier needed to be hosted on its own server. This led to a large increase in the number of servers required. These servers were only using between 10% and 15% of their available CPU capacity. They also all required power, cooling, space, maintenance and administration.
Another factor was that each server had its own operating system, tightly bound to the underlying hardware, so there was no flexibility: the OS and application were tied to that particular server. Some of these restrictions were due to the operating system; others were due to the architecture of the processor.
The Virtual White Knight (maybe a Green Knight): Now, some clever (actually very clever) people at Stanford University (who went on to found VMware in Palo Alto, California) came up with an idea for getting more than one OS to run on a single server (the main part being a thing called binary translation, but we will dive into that another day). The piece of software was called a hypervisor.
The latest incarnation of the VMware hypervisor (vSphere Hypervisor, aka ESXi) is a type 1 hypervisor, meaning that it runs directly on the hardware; the alternative is a type 2, which runs on top of an underlying operating system (Oracle VirtualBox and VMware Workstation being examples).
The hypervisor serves as an abstraction layer; it creates some virtual hardware that we can use for our servers that will run on this piece of tin. Now we can add an operating system that will run on the virtual hardware, and install our applications on that.
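As an aside, the guest OS can often see that its hardware is virtual: on Linux x86 guests, most hypervisors set a `hypervisor` flag that shows up in `/proc/cpuinfo`. A minimal sketch (the sample text below is made up for illustration, not taken from a real machine):

```python
def running_under_hypervisor(cpuinfo_text):
    """Return True if the 'hypervisor' CPU flag appears in
    /proc/cpuinfo-style text (set by most x86 hypervisors)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "hypervisor" in line.split(":", 1)[1].split()
    return False

# Illustrative sample, roughly as it might look inside a guest VM:
sample = """\
processor : 0
vendor_id : GenuineIntel
flags     : fpu vme msr sse sse2 hypervisor
"""
print(running_under_hypervisor(sample))  # True for this sample
```

On a real Linux guest you would pass `open("/proc/cpuinfo").read()` instead of the sample string; on physical hardware the flag is absent and the function returns False.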
Because of the abstraction, and because the hypervisor allows it, we can add more operating system/application stacks. These are running on virtual hardware, so let's call them virtual machines (not exactly original, as that's what they have been called since the IBM days).
"So what?", I hear you say. Well, we can now get several virtual machines on each real one (or on each host, as the hypervisor/server combination is known). This reduces power, cooling, space and maintenance. The virtual machine files are all contained in about 5 files in a single directory, and can run on any host running the correct hypervisor, and so gives us flexibility. Not forgetting that we are now using more of the available capacity.
That's the end of the first part. Come back in a couple of weeks when we will be looking at some of the features that you should be aware of in a virtual environment.
by Paul | 30 August 2011
Disclaimer: The opinions expressed in this blog represent those of the blogger or original article writer (if reproduced or linked to), and not those of IP Performance Ltd or IP Performance Ltd's subsidiaries, vendors, distribution channel or other business partners. Furthermore, all content being the responsibility of the individual blogger or original author, it is not intended to malign any religion, ethnic group, club, organisation, company, or individual. All data and information provided on this site is for informational purposes only. IP Performance Ltd makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information on this site and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.