just another cloudy day….

Creating a mini VMware Lab using VMware Workstation – Part 1

VMware vSphere 5.0 has finally arrived and includes several unique new features – such as Storage DRS and Auto Deploy – that deliver unprecedented value to VMware customers. Unlike prior versions, vSphere 5 supports only the ESXi hypervisor architecture, a thin, purpose-built hypervisor that does not depend on a general-purpose operating system.

Altogether, it’s awesome server virtualization software from VMware. In this part of the blog series we’ll set up an ESXi server as a VM on VMware Workstation 8. Later on, we’ll create a FreeNAS server as a VM too and connect the ESXi host to it, creating a mini lab!

Steps to Install VMware vSphere 5 (ESXi 5) in VMware Workstation with Windows 7

VMware Workstation is one of the most capable virtualization products available. It supports most operating systems as guests; Microsoft server operating systems such as Windows Server 2008 and Windows Server 2008 R2 can also be installed in VMware Workstation. Here I’m going to show how you can install an ESXi 5 server on VMware Workstation.

1) Make sure your computer’s processor supports hardware Virtualization Technology (VT) and that it’s enabled in the BIOS. If not, enable it in the BIOS and check whether the operating system is detecting it.

Find out more information on how to enable VT-X.

It is enabled on my Intel-based computer, which is running 64-bit Windows 7.

Memory – vSphere 5 requires a minimum of 2 GB of memory for the virtual machine, so make sure you have enough physical memory for both the virtual machine and the physical computer. I have 4 GB of physical memory in my host.

2) Download vSphere 5 (ESXi 5.0) and the vSphere Client version 5 from the official site HERE. Registration is required.

3) Create a new virtual machine in VMware Workstation or Player. Browse to the ESXi 5.0 installer ISO file you downloaded. Unfortunately the OS type can’t be detected automatically in Workstation versions 7 and below, but Workstation 8 – the version I am using – detects ESXi automatically.

4) Set the location of the virtual machine and the hard disk size. Here is the summary of my ESXi 5 virtual machine. If you would like to customize settings such as processor cores, network, sound or extra hard disks, press ‘Customize Hardware’.

NOTE: in order to run ESXi as a VM on VMware Workstation, the ‘Virtualize Intel VT-x/EPT or AMD-V/RVI’ option has to be enabled in the VM Settings → Processors section. For this option to work, your physical machine MUST support Virtualization Technology. To learn how to check whether your machine supports VT, CLICK HERE
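The GUI checkbox maps to a nested-virtualization flag in the VM’s .vmx configuration file. Treat this as a reference to verify against VMware’s documentation, since the exact key can vary by Workstation version; the commonly reported setting for Workstation 8 is:

```
vhv.enable = "TRUE"
```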

5) Start the virtual machine and select the ‘Standard Installer’ option. Press ‘ENTER’ to continue the installation on the next screen.

6) Press F11 to accept the EULA. Select the disk to install to if you have multiple virtual disks. I created and started the vSphere 5 virtual machine with a single 40 GB disk.

The next screens let you select the keyboard layout and type the root password. It’s better to assign the root password (minimum 7 characters) now.

7) Press Enter to continue the installation if you are sure your physical computer’s processor has hardware virtualization enabled.

8) Once the installation completes, remove the ISO from the virtual machine’s CD drive and restart the VM. It should boot properly and receive a DHCP address as its management IP.

9) Press F2 to customize the server management settings. Enter the root password that was set during the installation.

You need to make sure the virtual machine’s network setting is configured properly to communicate with other hosts or guests. In my case, I will be accessing this vSphere 5 virtual machine from my Windows 7 host computer, so I set the network type to ‘Host-only’ in the VMware network settings, which enables host and guest network communication.

Check the connectivity by pinging the vSphere ESXi 5 server. I could ping it from my physical Windows 7 computer.
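Besides ping, you can check that the management interface answers on TCP port 443, which both the Getting Started page and the vSphere Client use. Below is a minimal Python sketch; the address 192.168.80.128 is a hypothetical host-only IP for the ESXi VM, not one from this setup.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (hypothetical host-only address for the ESXi VM):
# is_reachable("192.168.80.128", 443)
```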

10) Access the server’s IP address in your web browser. Ignore the certificate error and continue; you should see the Getting Started page, which will redirect you to download the vSphere 5 Client.

Note – this will take you to VMware’s official site to download the vSphere 5 Client. If you already downloaded the client package (as shown in step 2), skip the download and install that package.

11) After installing the latest client version, enter the server’s IP address and the root password. You can ignore the certificate error here too.

12) Here is VMware vSphere 5 working in VMware Workstation on Windows 7. Performance is acceptable even though I have only 4 GB of memory in the host. No doubt, this is very helpful for learning and experimentation.

It’s always nice to be able to run this type of server virtualization software on a normal PC and create virtual machines inside it.

If you are interested in creating and connecting a NAS (Network Attached Storage) in the same test environment, you can install the FreeNAS OS as a separate virtual machine and connect it to vSphere 5. You can read a step-by-step guide to connecting FreeNAS with a VMware vSphere server HERE.


Things to consider when Designing your Private Cloud

A private cloud solution is a cloud computing platform that is implemented within the corporate firewall, under the control of the IT department. A private cloud is designed to offer the same features and benefits as public cloud offerings, while removing a number of the challenges of public cloud solutions, including concerns about control of enterprise and customer data and other security issues. However, it introduces a number of challenges that would normally be addressed by a cloud provider. In this article we look at the considerations for implementing a private cloud solution.

Private clouds should offer the same features that a public cloud offers: service automation management, usage metering, resource management and security. On top of these key services, a private cloud solution also adds the complexity of managing scalability, standardization and quality of service.

When it comes to scalability, private cloud services usually don’t have the economies of scale that large public cloud providers enjoy. This can cause budget issues that make private cloud solutions appear expensive. In addition, private cloud solutions have similar challenges around integration with legacy systems, other private clouds and public cloud solutions. This definitely must be investigated where an application extends across multiple compute environments.

Standardization of key services is also critical in a private cloud solution. Private cloud solutions need a great deal of standardization and if this work has not been done previously as part of the IT department’s best practices, private cloud will only make this lack of maturity more apparent.

Finally, service management and quality of service management is critical to the success of a private cloud computing offering. This again can be challenging to an IT organization if these best practices are not already implemented.

In addition to the technology challenges of implementing a private cloud solution, there is also soft cost to implement a private cloud solution that might not exist with public cloud. This includes training of existing IT staff on the new products and processes that support private cloud solutions. In addition, it includes a cultural change within an organization if internal customers are not used to chargeback or other ITIL best practices that become more critical when implementing private cloud solutions.

Designing private cloud solutions to meet a particular business need can solve some of the security challenges that might exist with public cloud solutions; however, it may introduce new challenges depending on the current maturity of the IT organization implementing the solution. These challenges include return-on-investment discussions, training, and service management considerations. Close evaluation and analysis is required to ensure a private cloud solution is the best fit for a given business requirement.



Private Clouds – Is this the Future?

Data centers typically grow organically and contain a number of different, heterogeneous systems. Depending on the age of the data center, you can see the evolutionary steps of the IT industry. At older companies, you find mainframe computers, large midrange systems, and a number of rack-based Intel servers all together on one data floor. At younger companies (less than 10 years old), you won’t see this variety of platforms. They rely instead on a larger number of similar machines, highly virtualized to achieve the required flexibility and to run a large variety of workloads.

Virtualization certainly was the industry trend of the last decade.

But what’s next? When we look ahead 10 years from now, what will be the trend of the next decade? I predict it will be the private cloud!

From a technology point of view, the private cloud is less of a revolution than virtualization was. I see it more as a logical next step. Virtualization changed the way users perceived servers; with cloud computing, users now perceive them as a service.

It is also not a very big effort to add private cloud capabilities to today’s data centers. Every virtualized server farm can be equipped with a cloud computing layer that handles the user interaction, and the provisioning and de-provisioning of virtual servers. So, it is quite easy to adapt to this new technology.
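The provisioning and de-provisioning layer described above amounts to a small self-service front end over the virtualized farm. The sketch below is purely illustrative; the class and method names are invented for this post, not any real cloud API:

```python
import uuid

class CloudLayer:
    """Toy self-service layer on top of a virtualized server farm."""

    def __init__(self, capacity: int):
        self.capacity = capacity  # maximum number of VMs the farm can host
        self.vms = {}             # vm_id -> owner

    def provision(self, owner: str) -> str:
        """Hand the user a new virtual server, if capacity allows."""
        if len(self.vms) >= self.capacity:
            raise RuntimeError("farm is at capacity")
        vm_id = str(uuid.uuid4())
        self.vms[vm_id] = owner
        return vm_id

    def deprovision(self, vm_id: str) -> None:
        """Return the virtual server's capacity to the pool."""
        self.vms.pop(vm_id, None)

cloud = CloudLayer(capacity=2)
vm = cloud.provision("alice")
print(len(cloud.vms))  # 1
cloud.deprovision(vm)
print(len(cloud.vms))  # 0
```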

Another reason private clouds will conquer more and more data center space is cloud computing in general: workloads are no longer limited to running on servers in one specific company data center. In the next few years, we will see more workloads put on public clouds. These remote workloads still require some degree of management and a central control point for provisioning and de-provisioning. Building up this control point for consuming remote public cloud services enables the local private cloud layer to hook in and be managed from the same infrastructure in a hybrid cloud setup.

I mentioned that with cloud computing, IT is perceived more as a service than as mere technology; this is exactly what users outside the IT department will expect in the future. The cloud computing delivery model has existed in the consumer market for quite some time now. People are used to visiting app stores to install their application software. They understand video on demand and software as a service from their private day-to-day IT usage. In the very near future, users will expect the same in their workplaces.


Desktop Virtualization

As the size of your enterprise increases, so does the scope of its technical and network needs. Something as seemingly simple as applying the latest OS hot fixes, or ensuring that virus definitions are up to date, can quickly turn into a tedious mess when the task must be performed on the hundreds or thousands of computers within your organization.

VDI Allows One to Manage Many

A virtual desktop infrastructure (VDI) environment allows your company’s information technology pros to centrally manage thin client machines, leading to a mutually beneficial experience for both end-users and IT admins.

What is VDI?

Sometimes referred to as desktop virtualization, virtual desktop infrastructure or VDI is a computing model that adds a layer of virtualization between the server and the desktop PCs. By installing this virtualization in place of a more traditional operating system, network administrators can provide end users with ‘access anywhere’ capabilities and a familiar desktop experience, while simultaneously heightening data security throughout the organization.

Some IT professionals associate the acronym VDI with VMware VDI, an integrated desktop virtualization solution. VMware VDI is considered the industry standard virtualization platform.

VDI Provides Greater Security, Seamless User Experience

Superior data security: Because VDI hosts the desktop image in the data center, organizations keep sensitive data safe in the corporate data center—not on the end-user’s machine, which can be lost, stolen, or even destroyed. VDI effectively reduces the risks inherent in every aspect of the user environment.

More productive end-users: With VDI, the end-user experience remains familiar. The virtual desktop looks just like their old desktop, and their thin client machine performs just like the desktop PC they’ve grown comfortable with and accustomed to. With virtual desktop infrastructure, there are no expensive training seminars to host and no increase in tech support issues or calls. End-user satisfaction actually increases because users have greater control over the applications and settings their work requires.

Other Benefits of VDI

  • Desktops can be set up in minutes, not hours
  • Client PCs are more energy efficient and longer lasting than traditional desktop computers
  • IT costs are reduced due to fewer tech support issues
  • Compatibility issues, especially with single-user software, are lessened
  • Data security is increased

VDI Models

There are several different conceptual models of desktop virtualization, which can broadly be divided into two categories based on whether the operating system instance executes locally or remotely. It is important to note that not all forms of desktop virtualization involve the use of virtual machines (VMs).

Host-based forms of desktop virtualization require that users view and interact with their desktops over a network by using a remote display protocol. Because processing takes place in a data center, client devices can be thin clients, zero clients, smartphones, and tablets. Included in this category are:

Host-based virtual machines: Each user connects to an individual virtual machine that is hosted in a data center. The user may connect to the same VM every time, allowing personalization (known as a persistent desktop), or be given a random VM from a pool (a non-persistent desktop). See also: virtual desktop infrastructure (VDI)

Shared hosted: Users connect to either a shared desktop or simply individual applications that run on a server. Shared hosted is also known as remote desktop services or terminal services. See also: remote desktop services and terminal services.

Host-based physical machines or blades: The operating system runs directly on physical hardware located in a data center.

Client-based: Client-based forms of desktop virtualization require processing to occur on local hardware; the use of thin clients, zero clients, and mobile devices is not possible. These types of desktop virtualization include:

OS streaming: The operating system runs on local hardware, but boots to a remote disk image across the network. This is useful for groups of desktops that use the same disk image. OS streaming requires a constant network connection in order to function; local hardware consists of a fat-client with all of the features of a full desktop computer except for a hard drive.

Client-based virtual machines: A virtual machine runs on a fully-functional PC, with a hypervisor in place. Client-based virtual machines can be managed by regularly syncing the disk image with a server, but a constant network connection is not necessary in order for them to function.
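The persistent vs. non-persistent distinction from the host-based model above can be sketched in a few lines of Python; all names here are illustrative, not any real VDI broker API:

```python
import random

class DesktopPool:
    """Toy VDI broker assigning desktops from a pool of VMs."""

    def __init__(self, vm_ids, persistent=True):
        self.persistent = persistent
        self.free = list(vm_ids)
        self.assigned = {}  # user -> vm_id (persistent mode only)

    def connect(self, user):
        if self.persistent:
            # Persistent: the same user always lands on the same VM.
            if user not in self.assigned:
                self.assigned[user] = self.free.pop()
            return self.assigned[user]
        # Non-persistent: hand out any VM from the pool
        # (a real broker would also track active sessions).
        return random.choice(self.free)

pool = DesktopPool(["vm1", "vm2", "vm3"], persistent=True)
print(pool.connect("alice") == pool.connect("alice"))  # True
```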


How to check whether your PC has Virtualization enabled or not?

This post explains how to enable Virtualization Technology (VT) in the motherboard BIOS. To run some operating systems, virtualization software, and virtual machines, hardware virtualization must be enabled. Operating systems that don’t require virtualization technology generally run normally whether or not it is enabled, but for operating systems that do require it, it must be enabled.

Most recent processors and motherboards support virtualization technology; check with your motherboard vendor about this support and how to enable or disable VT in the BIOS. The operating system detects hardware virtualization technology once it’s enabled on your motherboard.

Where to Find Virtualization Technology (VT) in BIOS?

This setting is found under ‘Advanced Chipset settings’ in BIOS.

The exact location differs for each motherboard; check your motherboard manual.

After you change the setting, it’s recommended to shut the computer down for at least 10 seconds and then start it again (a cold restart) for the change to take effect. If your motherboard is a recent one, it detects this change and performs the cold restart itself; whenever I change the VT setting on my motherboard, it automatically delays the next restart.

How to Confirm Virtualization Technology is Enabled or Disabled?

1) If your processor is Intel, you can use this free utility to confirm what the operating system is detecting.

Download Intel tool to confirm virtualization technology

Running this utility displays a result screen like the one shown below:

2) For AMD processors, download the utility below.

3) Microsoft tool to check hardware Virtualization Technology (VT)

Download the free tool from Microsoft here.

NOTE: Installation is not required here. When you execute the EXE file, the following result appears:

The above result shows that the machine has HAV enabled.
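On Linux no separate utility is needed: the kernel reports the CPU flags it sees, and `vmx` (Intel VT-x) or `svm` (AMD-V) indicates hardware virtualization support. A small Python sketch (reading `/proc/cpuinfo` only works on Linux):

```python
def has_hw_virtualization(cpu_flags: str) -> bool:
    """True if the flag string advertises Intel VT-x (vmx) or AMD-V (svm)."""
    flags = cpu_flags.split()
    return "vmx" in flags or "svm" in flags

def read_cpu_flags(path: str = "/proc/cpuinfo") -> str:
    """Return the 'flags' line from /proc/cpuinfo (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return line.split(":", 1)[1]
    return ""

# Example with a shortened flag string:
print(has_hw_virtualization("fpu vme de pse msr vmx"))  # True
```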


Five open source tools for building and managing clouds

Open source technology is going to seriously impact the cloud computing world; Xen, for example, already has. But there are other important open source offerings that can benefit cloud users. These include KVM, Deltacloud, Eucalyptus, Cloud.com’s CloudStack Community Edition and OpenNebula.


KVM (Kernel-based Virtual Machine) is an open source hypervisor for Linux running on x86 hardware with virtualization extensions (Intel VT or AMD-V). With KVM, you can run multiple virtual machines (VMs) running unmodified Linux or Windows images. KVM is an upstream hypervisor: it sits in the Linux kernel and turns the kernel into a bare-metal hypervisor. Being upstream means that every Linux distribution ships with KVM, and as the Linux kernel gets updates, KVM takes advantage of them automatically. KVM is supported in Red Hat Enterprise Linux, Ubuntu, and SUSE Linux Enterprise Server.


Deltacloud is an open source project started last year by Red Hat. It is now an Apache incubator project, not just a Red Hat endeavor. Deltacloud abstracts the differences between clouds and maps a cloud client’s application programming interface (API) into the API of a number of popular clouds, including Amazon EC2, GoGrid, OpenNebula, and Rackspace. Drivers for Terremark and vCloud will be available in the near future. As a result, Deltacloud is a way of enabling and managing a heterogeneous cloud virtualization infrastructure.

Deltacloud allows for any certified virtualized environment, such as environments based on KVM, VMware ESX and Hyper-V, to be managed from one common management interface. That is, instead of having a management console for VMs based on ESX and a management console for VMs based on Hyper-V, all VMs can be managed from one management console. Deltacloud does this by enabling different virtual machines to be transferred or migrated in real time from one virtualization capacity to another, such as from VMware to RHEV (Red Hat Enterprise Virtualization) or VMware to Microsoft. If an enterprise is already using IBM Tivoli or HP OpenView, Deltacloud can be integrated.
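Deltacloud’s core idea, one client API mapped onto many provider APIs, is the classic driver pattern. The sketch below is purely illustrative and does not reflect Deltacloud’s actual REST interface:

```python
class EC2Driver:
    """Stand-in for a driver that would call Amazon EC2's API."""
    def start(self, image):
        return f"ec2 instance from {image}"

class RackspaceDriver:
    """Stand-in for a driver that would call Rackspace's API."""
    def start(self, image):
        return f"rackspace server from {image}"

class CloudClient:
    """One front-end API, many interchangeable back-end drivers."""
    def __init__(self, driver):
        self.driver = driver

    def launch(self, image):
        return self.driver.start(image)

print(CloudClient(EC2Driver()).launch("fedora"))        # ec2 instance from fedora
print(CloudClient(RackspaceDriver()).launch("fedora"))  # rackspace server from fedora
```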


Eucalyptus Community Cloud is a sandbox environment in which you can test drive and experiment with Eucalyptus. It is a private cloud platform that implements the Amazon specification for EC2 as Infrastructure as a Service (IaaS). Eucalyptus conforms to both the syntax and the semantic definition of the Amazon API and tool suite, with few exceptions. Eucalyptus also makes available administrative functionalities, such as user management, storage configuration, network management, and hypervisor configuration for managing and maintaining private clouds. Eucalyptus targets Linux systems that use KVM and Xen for virtualization. It has been packaged for inclusion in the 9.04 release of Ubuntu, and other Eucalyptus packages exist for CentOS, Debian, openSUSE, and Red Hat Enterprise Linux 5.x.


Cloud.com (formerly VMOps) offers an open source edition (GPL v3 license) of its CloudStack infrastructure management product: CloudStack Community Edition. CloudStack supports VMware ESX, Xen and KVM (and eventually Hyper-V). It offers many of the capabilities that you would expect from a cloud management interface: VM self-service provisioning, dynamic workload management, multi-tenancy, etc. It also supports Windows and Linux guest operating systems.


OpenNebula is an open source tool kit for cloud computing. It allows you to build and manage private clouds with Xen, KVM, and VMware ESX, and hybrid clouds with Amazon EC2 and other providers through Deltacloud adaptors. The remote public cloud provider could be a commercial cloud service provider such as Amazon, or it could be a partner private cloud running a different OpenNebula instance.

Understand open source cloud tools before hopping onboard

Startups and other more established companies are supporting open source products that create and manage cloud environments. The question is: Will IT organizations deploy these open source products to create and manage private clouds? Is it worth the risk when several of the providers of the open source cloud-based products are small and the products are just beginning to be tested in production environments?

The five open source cloud-oriented products previewed earlier are all worthy tools for helping you build private clouds. They are all already in use; some, such as KVM, Eucalyptus, and CloudStack Community Edition, are already in production use. In addition, Deltacloud is playing a central role in Red Hat’s virtualization and cloud products, and OpenNebula is active in several international projects.

The future of open source tools in cloud computing and virtualization

Red Hat has adopted KVM as its hypervisor of choice for implementing virtualization tools (RHEV), and it will not include Xen in its Linux distribution, beginning with Red Hat Enterprise Linux 6. Red Hat has also developed tools to assist its customers in migrating from Xen virtual machines to KVM virtual machines. These tools will be available later this summer with the release of Red Hat Enterprise Linux 6. Red Hat will support Xen in Red Hat Enterprise Linux 5.x until 2014.

If you are currently virtualizing your data center with Red Hat Enterprise Linux with Xen, then you need to start thinking about converting to KVM. KVM is supported in releases of Red Hat Enterprise Linux, starting with release 5.4. Cloud providers such as Amazon, who use Red Hat Enterprise Linux and Xen, should also begin to move to KVM when Red Hat Enterprise Linux 6 is made available. If you mix and match KVM and Xen, then you may have to do conversions between VMs as you move them about your virtualized environments. Potential Red Hat customers should wait for the release of Red Hat Enterprise Linux 6 and begin with KVM, avoiding the use of Xen.

Red Hat recently announced that it is commercializing Deltacloud as part of its set of virtualization and cloud management products. Red Hat is using Deltacloud in developing its RHEV tool set, so if you are contemplating using multiple hypervisors in your Red Hat-based private cloud environment, then you should take a close look at Deltacloud. It allows you to use a single interface to move VMs among various types of clouds. Drivers for Deltacloud already exist for several clouds, including Amazon, Rackspace and GoGrid.

Canonical is using Eucalyptus in its Ubuntu Enterprise Cloud (UEC) product to help you create your own private cloud. Anyone looking to build a private cloud in a Linux environment should strongly consider using Eucalyptus in UEC. It is compatible with Amazon EC2, making it easy for you to move virtual machines from your Eucalyptus-based private cloud to the Amazon public cloud.


Five Best Virtual Machine Applications

Most modern computers are powerful enough to run entire operating systems within your main operating system, which means virtual machines are more commonplace today than ever. Here’s a look at the five most popular virtual machine applications.

What is a Virtual Machine?

A virtual machine (VM) is in some ways a simulation of a physical machine. You will need special software to run a VM on your personal computer.

Virtual machines allow you to run one operating system emulated within another operating system. Your primary OS can be Windows 7 64-bit, for example, but with enough memory and processing power, you can run Ubuntu and OS X side-by-side within it.

VirtualBox (Windows/Mac/Linux, Free)

VirtualBox has a loyal following thanks to a combination of a free-as-in-beer price tag, cross-platform support, and a huge number of features that make running and maintaining virtual machines a breeze. Virtual machine descriptions and parameters are stored entirely in plain-text XML files for easy portability and easy folder sharing. Its “Guest Additions” feature, available for Windows, Linux, and Solaris virtual machines, makes VirtualBox user friendly, allowing you to install software in the virtual machine that enables tighter integration with the host for tasks like sharing files, drives, and peripherals. You can read about additional VirtualBox features HERE.

Parallels (Windows/Mac/Linux, $79.99)

Although best known for the Mac version of their virtual machine software, Parallels also runs virtualization on Windows and Linux. The Parallels software boasts a direct link, thanks to optimization on Intel and AMD chips, to the host computer’s hardware with selective focus—when you jump into the virtual machine to work, the host machine automatically relinquishes processing power to it. Parallels also offers clipboard sharing and synchronization, shared folders, and transparent printer and peripheral support. Read more about Parallels HERE.

VMware (Windows/Linux, Basic: Free, Premium: $189)

VMware for desktop users comes in two primary flavours: VMware Player and VMware Workstation. VMware Player is a free solution aimed at casual users who need to create and run virtual machines but don’t need advanced enterprise-level features. VMware Workstation includes all the features of VMware Player—easy virtual machine creation, hardware optimization, driver-less guest OS printing—and adds the ability to clone machines, take multiple snapshots of the guest OS, and replay changes made to the guest OS for testing software and recording the results within the virtual machine. You can read more about VMware Player HERE and VMware Workstation HERE.

QEMU (Linux, Free)

QEMU is a powerful virtualization tool for Linux machines built upon the KVM system (Kernel-based Virtual Machine). QEMU executes guest code directly on the host hardware, can emulate machines across hardware types with dynamic translation, and supports auto-resizing virtual disks. Where QEMU really shines, especially among those who like to push the limits of virtualization and take their virtual machines with them, is running on hosts without administrative privileges. Unlike nearly every emulator out there, QEMU does not require admin access to run, making it a perfect candidate for building thumb-drive-based portable virtual machines. Read more about QEMU HERE.

Windows Virtual PC (Windows, Free)

Compared to the other any-OS-under-the-sun virtual machine applications in this week’s Hive Five, Windows Virtual PC is a tame offering. Windows Virtual PC exists solely to emulate other—usually earlier—versions of Windows. If you need to run an app that only works under Windows XP or test software for backwards compatibility with Vista, Windows Virtual PC has you covered. It’s limited, true, but for people working in a strictly Windows environment—and most of the world still is—it gets the job done. Note: Virtual PC is available as Virtual PC 2004, Virtual PC 2007, and Windows Virtual PC; use this host and guest OS compatibility chart to figure out which one fits your needs.

Which Virtual Machine Application Is Best?

  • Parallels 13.1%
  • VirtualBox 50.29%
  • QEMU 1.81%
  • VMware 30.39%
  • Windows Virtual PC 3.64%
  • Other: 0.76%

What is virtualization?

If you work with virtualization for a living, inevitably you’ll be asked what virtualization is. Trying to explain it to someone who doesn’t work with computers can often be challenging, and after you explain it they still may not know what it’s about. So how do you explain it to someone for the first time? I find that using analogies that anyone can relate to is a good way to explain things to people. Before I attempt a virtualization analogy I’ll try explaining it in basic computer terms.

Virtualization software, also called a hypervisor, emulates computer hardware allowing multiple operating systems to run on a single physical computer host. Each guest operating system appears to have the host’s processor, memory, and other resources all to itself. The hypervisor, however, is actually controlling the host processor and resources and allocates what is needed to each operating system, making sure that the guest operating systems (called virtual machines) cannot disrupt each other.

There are two types of x86 virtualization: bare-metal and hosted. Sometimes these types are referred to as Type-1 and Type-2 hypervisors respectively. Bare-metal means the virtualization layer (hypervisor) installs directly onto a server without the need for a traditional operating system like Windows or Linux to be installed first. “Hosted” means that an operating system must first be installed on a server, and the virtualization layer is installed afterwards, just like an application.

Bare-metal hypervisors include VMware ESX, Citrix XenServer and Microsoft Hyper-V Server. Hosted hypervisors include VMware Workstation, Fusion, VMware Player and VMware Server, Microsoft Virtual PC and Microsoft Virtual Server, and Sun’s VirtualBox. Some of the differences between hosted and bare-metal hypervisors are listed below.

Hosted hypervisors

  • Requires a host operating system (Windows/Linux/Mac), installs like an application.
  • Virtual machines can use all the hardware resources that the host can see.
  • Maximum hardware compatibility as the operating system supplies all the hardware device drivers.
  • Overhead of a full general-purpose operating system between the virtual machines and the physical hardware results in performance 70-90% of native.

Bare-metal hypervisors

  • Installs right on the bare metal and therefore offers higher performance and scalability but runs on a narrower range of hardware.
  • Many advanced features for resource management, high availability and security.
  • Supports more VMs per physical CPU than hosted products.
  • Because there is no overhead from a full host operating system, performance is 83-98% of native. There is a small amount of overhead from the virtualization layer of the hypervisor.

Why is virtualization such a great thing? Because most computers do not fully utilize the resources (memory, CPU, disk, network) that they have which is very wasteful. Would you rather have 10 computers that are all using less than 20% of their total resources, or three computers that are using 70% of their resources?

You might think you could avoid this by simply installing more applications on one computer but this is often a bad idea as the applications may conflict with each other and cause problems, and a single OS crash will take down all your applications. Virtualization solves this by allowing the applications to run on the same physical computer, but separates them by allowing each one to have its own isolated guest operating system.
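The consolidation arithmetic behind the 10-versus-3 comparison above is easy to make concrete. A small sketch, where the 70% target is an assumed headroom threshold rather than a rule:

```python
import math

def hosts_needed(utilizations, target=0.7):
    """Hosts required to run the combined load at a target utilization,
    assuming equally sized hosts and utilization as a fraction of one host."""
    total_load = sum(utilizations)
    return math.ceil(total_load / target)

# Ten servers each running at 20% of one host's capacity:
print(hosts_needed([0.20] * 10))  # 3 -> consolidate 10 boxes onto 3
```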

Imagine computers as cars on the road in motion. Each car has its own resources, such as fuel, heating/cooling, radio, etc. Most cars are never filled to capacity, and many have only one person in them, which is wasteful.

Imagine virtualization as a bus, instead of many people driving in many cars you now have many people being moved around by a few buses. A person may only ride one bus at a time, but if a bus becomes inoperable due to a flat tire or an engine problem, the people may simply get off and transfer to another bus that has unused seats. In virtualization, this “transfer” happens because of features like High Availability (HA).

A person may also hop from one bus to another if it becomes too crowded while it is moving. In virtualization, this is called vMotion if you’re using VMware, or Live Migration if you’re using Hyper-V. By utilizing buses that hold more people instead of cars, fewer resources are wasted, while all the people still get where they are going. Buying and operating one bus instead of 10 cars is a lot cheaper and more efficient.

Benefits of Virtualization:

Virtualization can help you shift your IT focus from managing boxes to improving the services you provide to the organization. If you are managing multiple servers and desktops, virtualization can help you to:

  • Save money. Companies often run just one application per server because they don’t want to risk the possibility that one application will crash and bring down another on the same machine. Estimates indicate that most x86 servers are running at an average of only 10 to 15 percent of total capacity. With virtualization, you can turn a single purpose server into a multi-tasking one, and turn multiple servers into a computing pool that can adapt more flexibly to changing workloads.
  • Save energy. Businesses spend a lot of money powering unused server capacity. Virtualization reduces the number of physical servers, reducing the energy required to power and cool them.
  • Save time. With fewer servers, you can spend less time on the manual tasks required for server maintenance. Likewise, by pooling many storage devices into a single virtual storage device, you can perform tasks such as backup, archiving and recovery more easily and more quickly. It’s also much faster to deploy a virtual machine than it is to deploy a new physical server.
  • Reduce desktop management headaches. Managing, securing and upgrading desktops and notebooks can be a hassle. Desktop virtualization solutions let you manage user desktops centrally, making it easier to keep desktops updated and secure.
