Tuesday, October 11, 2011

History of Virtualization

When you think of the beginnings of server virtualization, companies like VMware may come to mind. What you may not realize is that server virtualization actually started back in the early 1960s and was pioneered by companies like General Electric (GE), Bell Labs, and International Business Machines (IBM).

The Invention of the Virtual Machine

In the early 1960s, IBM had a wide range of systems, each generation of which was substantially different from the previous one. This made it difficult for customers to keep up with the changes and requirements of each new system. Also, computers could only do one thing at a time: if you had two tasks to accomplish, you had to run the processes in batches. This batch processing requirement wasn't a major problem for IBM, since most of its users were in the scientific community, and up until this time batch processing seemed to have met customers' needs.

Because of the wide range of hardware requirements, IBM began work on the S/360 mainframe, designed as a broad replacement for many of its other systems while maintaining backwards compatibility. When the system was first designed, it was meant to be a single-user system for running batch jobs.

However, this focus began to change on July 1, 1963, when the Massachusetts Institute of Technology (MIT) announced Project MAC. Project MAC originally stood for Mathematics and Computation, but it was later re-interpreted as Multiple Access Computer. The project was funded by a $2 million grant from ARPA (later renamed DARPA) for research into computers, specifically in the areas of operating systems, artificial intelligence, and computational theory.

As part of this research grant, MIT needed new computer hardware capable of supporting more than one simultaneous user, and it sought proposals from various computer vendors, including GE and IBM. At the time, IBM was not willing to commit to a time-sharing computer because it did not believe there was enough demand, and MIT did not want to use a specially modified system. GE, on the other hand, was willing to make that commitment, so MIT chose GE as its vendor.

The loss of this opportunity was a bit of a wake-up call for IBM, which began to take notice of the demand for such systems, especially when it heard of Bell Labs' need for a similar one.

In response to the needs of MIT and Bell Labs, IBM designed the CP-40 mainframe. The CP-40 was never sold to customers and was only used in the lab. However, it is still important, because it later evolved into the CP-67, the first commercial mainframe to support virtualization. The operating system that ran on the CP-67 was referred to as CP/CMS. CP stands for Control Program; CMS stands for Console Monitor System. CMS was a small single-user operating system designed to be interactive, while CP was the program that created virtual machines. The idea was that CP ran on the mainframe and created virtual machines, each running CMS, which the user would then interact with.

The user interaction portion is important. Before this system, IBM focused on systems with no user interaction: you would feed your program into the computer, it would do its thing, then spit out the output to a printer or a screen. An interactive operating system meant you actually had a way of interacting with your programs while they ran.

The first version of CP/CMS, known as CP-40, never left the lab. The initial public release of CP/CMS came in 1968, but the first stable release didn't arrive until 1972.

The traditional approach for a time-sharing computer was to divide up the memory and other system resources between users. An example of a time-sharing operating system from the era is Multics, which was created as part of Project MAC at MIT. Bell Labs also took part in Multics development, and the lessons its researchers learned there later fed into the creation of Unix.

The CP approach to time sharing instead gave each user their own complete operating system, which effectively gave each user their own computer, and it made each operating system much simpler.

The main advantages of virtual machines over a traditional time-sharing operating system were threefold. The system was used more efficiently, since virtual machines shared the overall resources of the mainframe rather than having the resources split equally between all users. Security was better, since each user ran in a completely separate operating system. And the system was more reliable, since no single user could crash the entire machine, only their own operating system.

Portability of Software

In the previous section, I mentioned Multics and how it led to Unix. While Unix does not run virtualized operating systems, it is still a good example of virtualization from another perspective. Unix was not the first multi-user operating system, but it is a very good example of one, and it is one of the most widely used ever.

Unix is an example of virtualization at the user, or workspace, level. Multiple users share the same pool of resources (CPU, memory, hard disk, and so on), but each has their own profile, separate from the other users on the system. Depending on how the system is configured, users may be able to install their own set of applications, and security is handled on a per-user basis. Unix was not only a big step forward for multi-user operating systems, but also a first step towards application virtualization.

Unix itself is not an example of application virtualization, but it did give users much greater portability for their applications. Prior to Unix, almost all operating systems were coded in assembly language; Unix, by contrast, was written in the C programming language. Because Unix was written in C, only small parts of the operating system had to be customized for a given hardware platform; the rest could be recompiled for each platform with little or no change.

Application Virtualization

Through the use of Unix and C compilers, an adept user could run just about any program on any platform, but this still required compiling the software on each platform it was to run on. For true portability of software, you needed some sort of software virtualization.

In 1990, Sun Microsystems began a project known as "Stealth." Stealth was run by engineers who had become frustrated with Sun's use of C/C++ APIs and felt there was a better way to write and run applications. Over the next several years the project was renamed several times, including to Oak and WebRunner; finally, in 1995, it was renamed Java.

In 1994, Java was targeted at the World Wide Web, since Sun saw this as a major growth opportunity. The Internet is a large network of computers running different operating systems, and at the time it had no way of running rich applications universally; Java was the answer to that problem. In January 1996, the Java Development Kit (JDK) was released, allowing developers to write applications for the Java platform.

At the time, there was no other language like Java. Java allowed you to write an application once, then run it on any computer with the Java Runtime Environment (JRE) installed. The JRE was, and still is, a free download, originally from Sun Microsystems' website and now from Oracle's.

Java works by compiling the application into something known as Java byte code. Java byte code is an intermediate language that can only be read by the JRE. Java uses a concept known as Just-in-Time (JIT) compilation: when you write your program, your Java code is not compiled into machine code. Instead, it is converted into Java byte code, which is not compiled for the actual hardware until just before the program is executed. This is similar to the way Unix revolutionized operating systems through its use of the C programming language. Since the JRE compiles the software just before running it, the developer does not need to worry about what operating system or hardware platform the end user will run the application on, and the user does not need to know how to compile a program; that is handled by the JRE.
The JRE is composed of many components, the most important of which is the Java Virtual Machine (JVM). Whenever a Java application is run, it runs inside the Java Virtual Machine. You can think of the Java Virtual Machine as a very small operating system created with the sole purpose of running your Java application. Since Sun, and now Oracle, goes through the trouble of porting the Java Virtual Machine to everything from cellular phones to the servers in your data center, you don't have to. You can write the application once and run it anywhere. At least, that is the idea; there are some limitations.
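To make the write-once, run-anywhere idea concrete, here is a minimal sketch of my own (the file and class names are illustrative, not from Sun's documentation). Compiling it once with javac HelloPortable.java produces HelloPortable.class, a file of Java byte code; that same .class file can then be run unchanged with java HelloPortable on any platform that has a JVM.

// HelloPortable.java - compile once: javac HelloPortable.java
// Run anywhere there is a JVM:     java HelloPortable
public class HelloPortable {
    public static void main(String[] args) {
        // System.getProperty reports details of whatever platform the
        // JVM happens to be running on, so the same byte code prints
        // something different on Windows, Linux, or a Mac.
        System.out.println("Hello from " + System.getProperty("os.name")
                + " on " + System.getProperty("os.arch"));
    }
}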

Mainstream Adoption of Hardware Virtualization

As was covered in the Invention of the Virtual Machine section, IBM was the first to bring the concept of virtual machines to the commercial environment. Virtual machines as they existed on IBM's mainframes are still in use today; however, most companies don't use mainframes.
In January of 1987, Insignia Solutions demonstrated a software emulator called SoftPC. SoftPC allowed users to run DOS applications on their Unix workstations, a feat that had never been possible before. At the time, a PC capable of running MS-DOS cost around $1,500; SoftPC gave users with a Unix workstation the ability to run DOS applications for a mere $500.
By 1989, Insignia Solutions had released a Mac version of SoftPC, giving Mac users the same capabilities, and had added the ability to run Windows applications, not just DOS applications. By 1994, Insignia Solutions was selling its software packaged with pre-loaded operating systems, including SoftWindows and SoftOS/2.
Inspired by the success of SoftPC, other companies began to spring up. In 1997, Connectix released a program called Virtual PC for the Mac. Virtual PC, like SoftPC, allowed users to run a copy of Windows on a Mac in order to work around software incompatibilities. In 1998, a company called VMware was established, and in 1999 it began selling a product similar to Virtual PC called VMware Workstation. Initial versions of VMware Workstation ran only on Windows, but support for other host operating systems was added later.
I mention VMware because it is really the market leader in virtualization today. In 2001, VMware released two new products as it branched into the enterprise market: ESX Server and GSX Server. GSX Server lets users run virtual machines on top of an existing operating system, such as Microsoft Windows; this is known as a Type-2 hypervisor. ESX Server is a Type-1 hypervisor, which does not require a host operating system to run virtual machines.
A Type-1 hypervisor is much more efficient than a Type-2 hypervisor, since it can be better optimized for virtualization and does not spend resources running a traditional host operating system.
Since releasing ESX Server in 2001, VMware has seen exponential growth in the enterprise market and has added many complementary products to enhance ESX Server. Other vendors have since entered the market. Microsoft acquired Connectix in 2003, after which it released Microsoft Virtual PC 2004 and then Microsoft Virtual Server 2005, both based on products that were still unreleased by Connectix at the time of the acquisition.
Citrix Inc. entered the virtualization market in 2007 when it acquired XenSource, the company behind the open source Xen virtualization platform started in 2003. Citrix soon thereafter renamed the product XenServer.

Published Applications

In the early days of Unix, you could access published applications via a Telnet interface, and later via SSH. Telnet is a small program that allows you to remotely access another computer. SSH is a successor to Telnet that adds features such as encryption.
Telnet/SSH gives you access to either a text interface or a graphical interface, although it is not really optimized for graphics. Using Telnet, you can access much of the functionality of a given server from almost anywhere.
Windows and OS/2 had no way of remotely accessing applications without third-party tools, and the third-party tools available only allowed one user at a time.
Some engineers at IBM had an idea to create a multi-user interface for OS/2, but IBM did not share their vision. So in 1989, Ed Iacobucci left IBM and started his own company called Citrus. Due to an existing trademark, the company was quickly re-branded as Citrix, a combination of Citrus and Unix.
Citrix licensed the OS/2 source code through Microsoft and began working on its extension to OS/2. The company worked for two years and created a multi-user interface for OS/2 called MULTIUSER. However, Citrix was forced to abandon the project in 1991 after Microsoft announced it would no longer support OS/2. At that point, Citrix licensed source code from Microsoft and began working on a similar product focused on Windows.
In 1993, Citrix acquired Netware Access Server from Novell. This product was similar to what Citrix had accomplished for OS/2, in that it gave multiple users access to a single system. Citrix then licensed the Windows NT source code from Microsoft, and in 1995 began selling a product called WinFrame: a version of Windows NT 3.5 with remote access capabilities, allowing multiple users to access the system at the same time and run applications remotely.
While Citrix was developing WinFrame for Windows NT 4.0, Microsoft decided to no longer grant it the necessary licenses. At that point, Citrix licensed WinFrame to Microsoft, and it was included with Windows NT 4.0 as Terminal Services. As part of this agreement, Citrix agreed not to create a competing product, but was allowed to extend the functionality of Terminal Services.

Virtual Desktops

Virtual Desktop Infrastructure (VDI) is the practice of running a user's desktop operating system, such as Windows XP, within a virtual machine on centralized infrastructure. Virtual desktop computers as we think of them today are a fairly new topic of conversation, but they are very similar to the idea IBM had back in the 1960s with the virtual machines on its mainframe computers: give each user on the system their own operating system, and each user can then do as they please without disrupting any other users. Each user has their own computer, it is centralized, and it is a very efficient use of resources.
If you compare Multics to the IBM mainframes of the 1960s, it would be similar to comparing a Microsoft Terminal Server to a virtual desktop infrastructure today.
The jump from virtual desktops on mainframes to virtual desktops as we know them today didn't really happen until 2007, when VMware introduced its VDI product. Prior to that release, it was possible for users in a company to use virtual desktops as their primary computers, but it wasn't really a viable solution due to management headaches. The introduction of desktop-management products from VMware, and of similar products from companies like Microsoft and Citrix, has allowed this area to grow very rapidly.


Computer virtualization has a long history, spanning nearly half a century. It can be used to make your applications easier to access remotely, to allow your applications to run on more systems than originally intended, to improve stability, and to use resources more efficiently.

Some of these technologies, such as virtual desktops, can be traced back to the 1960s; others, such as virtualized applications, can only be traced back a few years.

Ubuntu Enterprise Cloud (UEC): How To

Grow Your Own Cloud Servers With Ubuntu

Have you been wanting to fly to the cloud, to experiment with cloud computing? Now is your chance. In this article, we will step through the process of setting up a private cloud system using Ubuntu Enterprise Cloud (UEC), which is powered by the Eucalyptus platform.
The system is made up of one cloud controller (also called a front-end server) and one or more node controllers. The cloud controller manages the cloud environment. You can install the default Ubuntu OS images or create your own to be virtualized. The node controllers are where you can run the virtual machine (VM) instances of the images.

System Requirements

At least two computers must be dedicated to this cloud for it to work:
  • One for the front-end server (cloud or cluster controller) with a minimum 1GHz CPU, 512MB of memory, CD-ROM, 40GB of disk space, and an Ethernet network adapter
  • One or more for the node controller(s) with a CPU that supports Virtualization Technology (VT) extensions, 1GB of memory, CD-ROM, 40GB of disk space and an Ethernet network adapter
You might want to reference a list of Intel processors that include VT extensions. Optionally, you can run a utility called SecurAble in Windows. You can also check in Linux by seeing whether "vmx" or "svm" is listed in the /proc/cpuinfo file, with the command: egrep '(vmx|svm)' /proc/cpuinfo. Bear in mind, however, that this tells you only whether the CPU supports VT; the BIOS could still be set to disable it.

Preparing for the Installation

First, download the CD image for Ubuntu Server (we're using version 9.10) on any PC with a CD or DVD burner, then burn the ISO image to a CD or DVD. If you want to use a DVD, make sure the computers that will be in the cloud can read DVDs. If you're using Windows 7, you can open the ISO file and use the native burning utility. If you're using Windows Vista or earlier, you can download a third-party application like DoISO.
Before starting the installation, make sure the computers involved are set up with the peripherals they need (monitor, keyboard, and mouse). Plus, make sure they're plugged into the network so they'll automatically configure their network connections.

Installing the Front-End Server

The installation of the front-end server is straightforward. To begin, insert the install CD, select "Install Ubuntu Enterprise Cloud" from the boot menu, and hit Enter. Configure the language and keyboard settings as needed. When prompted, configure the network settings.
When prompted for the Cloud Installation Mode, hit Enter to choose the default option, "Cluster". Then you'll have to configure the Time Zone and Partition settings. After partitioning, the installation will finally start. At the end, you'll be prompted to create a user account.
Next, you'll configure settings for proxy, automatic updates and email. Plus, you'll define a Eucalyptus Cluster name. You'll also set the IP addressing information, so users will receive dynamically assigned addresses.

Installing and Registering the Node Controller(s)

The Node installation is even easier. Again, insert the install disc, select "Install Ubuntu Enterprise Cloud" from the boot menu, and hit Enter. Configure the general settings as needed.
When prompted for the Cloud Installation Mode, the installer should automatically detect the existing cluster and preselect "Node." Just hit Enter to continue. The partitioning settings should be the last configuration needed.

Registering the Node Controller(s)

Before you can proceed, you must know the IP address of the node(s). To check from the command line on a node, run:
/sbin/ifconfig
Then, you must install the front-end server's public SSH key onto the node controller:
  1. On the node controller, set a temporary password for the eucalyptus user using the command:
    sudo passwd eucalyptus
  2. On the front-end server, enter the following command to copy the SSH key:
    sudo -u eucalyptus ssh-copy-id -i ~eucalyptus/.ssh/id_rsa.pub eucalyptus@<node-IP-address>
  3. Then you can remove the eucalyptus account password from the node with the command:
    sudo passwd -d eucalyptus
  4. After the nodes are up and the key copied, run this command from the front-end server to discover and add the nodes:
    sudo euca_conf --no-rsync --discover-nodes

Getting and Installing User Credentials

Enter these commands on the front-end server to create a new folder, export the zipped user credentials to it, and then to unpack the files:
mkdir -p ~/.euca
chmod 700 ~/.euca
cd ~/.euca
sudo euca_conf --get-credentials mycreds.zip (It takes a while for this to complete; just wait)
unzip mycreds.zip
cd -
The user credentials are also available via the web-based configuration utility; however, it would take more work to download the credentials there and move them to the server.

Setting Up the EC2 API and AMI Tools

Now you must set up the EC2 API and AMI tools on your front-end server. First, source the eucarc file to set up your Eucalyptus environment by entering:
. ~/.euca/eucarc
For this to be done automatically when you log in, enter the following command to append that command to your ~/.bashrc file:
echo "[ -r ~/.euca/eucarc ] && . ~/.euca/eucarc" >> ~/.bashrc
Now to install the cloud user tools, enter:
sudo apt-get install euca2ools
To make sure it's all working, enter the following to display the cluster availability details:
. ~/.euca/eucarc
euca-describe-availability-zones verbose

Accessing the Web-Based Control Panel

Now you can access the web-based configuration utility. From any PC on the same network, go to the URL https://<cloud-controller-IP-address>:8443. The IP address of the cloud controller is displayed just after you log onto the front-end server. Note that this is a secure connection using HTTPS instead of plain HTTP. You'll probably receive a security warning from the web browser, since the server uses a self-signed certificate instead of one handed out by a known Certificate Authority (CA). Ignore the alert by adding an exception; the connection will still be secure.
The default login credentials are "admin" for both the username and password. The first time you log in, you'll be prompted to set up a new password and email address.

Installing images

Now that you have the basic cloud set up, you can install images. Bring up the web-based control panel, click the Store tab, and click the Install button for the desired image. It will start downloading, and then it will automatically install, which takes a long time to complete.

Running images

Before running an image on a node for the first time, run these commands to create a keypair for SSH:
touch ~/.euca/mykey.priv
chmod 0600 ~/.euca/mykey.priv
euca-add-keypair mykey > ~/.euca/mykey.priv
You also need to open up port 22 on the node, using the following command:
euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
Finally, you can run your registered image. The command to run it is available via the web interface. Login to the web interface, click the Store tab, and select the How to Run link for the desired image. It will display a popup with the exact command.
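The command will look something like the following (illustrative only; the actual EMI identifier comes from the popup and will differ on your system). It's handy to stash the identifier in a shell variable, since the status and login commands below reference $EMI:
EMI=emi-XXXXXXXX
euca-run-instances $EMI -k mykey -t m1.small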
The first time you run an instance, it will likely take a while for the image to be cached. You can get the status of your instance by running the command:
watch -n5 euca-describe-instances
Once it moves from "pending" to "running", reference the assigned IP address and connect to it:
IPADDR=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $4}')
ssh -i ~/.euca/mykey.priv ubuntu@$IPADDR
To terminate the instance:
INSTANCEID=$(euca-describe-instances | grep $EMI | grep running | tail -n1 | awk '{print $2}')
euca-terminate-instances $INSTANCEID

Maintaining the cloud

Now you should have a working cloud on your network. If you run into problems, you might have to reference the official documentation or hit the message boards. Before I leave, here are a few final tips:
  • To start, stop, or restart the front-end server, run: sudo service eucalyptus [start|stop|restart]
  • To start, stop, or restart a node, run: sudo service eucalyptus-nc [start|stop|restart]
  • Here are some key file locations:
    • Log files: /var/log/eucalyptus
    • Configuration files: /etc/eucalyptus
    • Database: /var/lib/eucalyptus/db
    • Keys: /var/lib/eucalyptus and /var/lib/eucalyptus/.ssh
Eric Geier is the Founder and CEO of NoWiresSecurity, which helps businesses easily protect their Wi-Fi with enterprise-level encryption by offering an outsourced RADIUS/802.1X authentication service. He is also the author of many networking and computing books for brands like For Dummies and Cisco Press.