Desktop Virtualization Journal


Making Virtualization Worth the Investment

Three guidelines for successful virtualization

Today's IT organizations have recognized the power of virtualization to improve service delivery while also reducing costs. Once relegated to the lab, virtualization now plays key roles across the IT organization. Companies are increasingly moving business-critical workloads onto virtualized infrastructures, with many organizations now running, or planning to run, more than 80 percent of their computing on virtualized platforms. The next step for these organizations is to realize the promised benefits of virtualization - agility and lower costs - and to grow its footprint. Virtualization can also lay the groundwork for a move to the cloud.

A New Level of Freedom
While virtualization completely changes the realized capacity of your physical resources, it does not change the functionality of the virtualized systems. Virtualization cuts your tight dependence on hardware by abstracting it away, thereby reducing complexity and increasing flexibility. However, the applications on the virtualized servers run no differently and, to the end user, a virtual server appears the same as a physical server.

Hardware abstraction, which essentially makes all underlying hardware look the same, introduces a new level of freedom to IT resources. Now, virtual servers can be created easily and quickly, often with little involvement from IT. That freedom presents a challenge in managing and controlling virtualized resources, since virtual servers, once created, are no easier to manage than physical ones. This challenge is causing many IT organizations to struggle with their adoption of virtualization.

What's required is an approach to managing virtualized resources that views them in the larger context of the business. Business Service Management (BSM) does just that. BSM simplifies and automates IT processes across physical, virtual, and cloud-based resources, allowing you to prioritize work based on business needs. It integrates virtualization management into a unified, holistic management framework that encompasses all physical and virtual components of the IT infrastructure, across all IT disciplines. With this approach, you can ensure effective management of your virtualized environment and also accelerate your move to virtualization.

Three Guidelines for Successful Virtualization
The recommended approach has three fundamental guidelines:

  1. Avoid creating new silos
  2. Have a clear picture of the IT environment
  3. Leverage intelligent automation

Avoid Creating New Silos
Even though virtualization introduces new technology into the data center, this does not mean that you must create a new silo to manage virtualized entities separate from the rest of your IT infrastructure. Most likely, you are already grappling with eliminating the many silos that have appeared over the years. These silos are typically separated by resource type, such as distributed servers, mainframes, operating systems, storage devices, and network resources. Often, they have different toolsets that require different skill sets, resulting in inefficiencies that slow down processes and drive up costs. You don't need to create yet another silo for virtual resources, or worse yet, multiple silos for different virtualization technologies. Silos only impede the agility made possible by virtualization.

BSM helps you adopt virtualization without creating a new silo for managing virtualized entities. A key factor in the BSM approach is the integration of processes and tools across different resource types and technologies, breaking down the barriers between silos and eliminating the inefficiencies caused by silos.

If your organization is like most, you won't virtualize all systems in your data centers at once. Consequently, you will likely have a hybrid physical/virtual environment, at least for the foreseeable future. It's important, therefore, that you implement a management approach that accommodates a highly diverse, hybrid environment.

Keep your options open with respect to virtualization technologies. Over-dependence on a single infrastructure vendor leaves you vulnerable to sudden price increases and changes in strategy, functionality, and support. You need to be able to implement the combination of resources that best meet your requirements. To that end, you should implement a management solution that accommodates multiple virtualization technologies.

Have a Clear Picture of the IT Environment
Many IT organizations provision new virtual servers without a full and accurate picture of the current state of the IT environment and what the impact of adding a new system will be. Knowledge of the environment becomes even more important in the context of the physical to virtual (P2V) transition. Very often the P2V transition is envisioned on a server-by-server basis - essentially making a copy of the physical server in virtual form. While this simplistic approach might work for a lab, it is not effective for transitioning production systems. In that more complex case, it is essential to understand the dependencies between systems and the services being provided. Once the dependency map is understood, you can plan to transition entire services together, significantly reducing the risk of unintended service downtime.
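As an illustration of the dependency-map idea, the grouping of servers into migration waves can be sketched as a connected-components computation over the dependency graph. This is a hypothetical sketch, not a tool the article describes; the server names and data format are assumptions.

```python
from collections import defaultdict

def migration_waves(dependencies):
    """Given (server_a, server_b) dependency pairs, return groups of servers
    that should be transitioned from physical to virtual together, i.e. the
    connected components of the dependency graph."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)
    seen, waves = set(), []
    for node in graph:
        if node in seen:
            continue
        # depth-first walk collecting every server reachable from this one
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            component.add(n)
            stack.extend(graph[n] - seen)
        waves.append(component)
    return waves

# hypothetical example: two independent services
deps = [("web1", "app1"), ("app1", "db1"), ("web2", "app2")]
print(migration_waves(deps))  # two waves: one for web1/app1/db1, one for web2/app2
```

Transitioning each wave as a unit keeps dependent systems together, which is what reduces the risk of unintended service downtime.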

Once virtualized, a service dependency map continues to be essential. Again, the perception of virtualization is influenced by the legacy of the lab - individual systems with little or no connections outside themselves. In today's production-level virtualized architectures, not only are there many dependencies between virtual systems, but those systems are also spread out across many different physical servers and often move to new servers when capacity needs require it. Therefore, any discovery solution has to understand the virtual infrastructure and track both the typical service dependencies as well as the rapidly shifting hardware dependencies. This allows your operations team to identify the full effect of hardware failures quickly, thus reducing downtime.

Before planning and implementing virtualization, you must have an up-to-date physical and virtual view of your data center showing what devices are being used; their location; their configurations, capacities, and dependencies; the types of software and operating systems; and the applications running on them. It's important to understand the services the devices are supporting, the hardware they are running on, and what virtual hypervisors are available.

It is also essential that the discovery tool store the infrastructure information in a configuration management database (CMDB) where it can be accessed by other tools. The discovery tool should also update the information in the CMDB when changes occur, ensuring that the information remains up to date. For example, if a virtual machine moves from one physical server to another, that should be noted immediately. This is essential in the rapidly changing virtualized environment to allow proper disaster and fault recovery.
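The update pattern can be sketched with a toy in-memory CMDB holding a VM-to-host mapping. The record names and interfaces here are assumptions for illustration; a real CMDB would be a database fed by the discovery tool.

```python
# toy CMDB: maps each virtual machine to the physical host it runs on
cmdb = {"vm-web1": "esx-host-01", "vm-db1": "esx-host-01"}

def on_vm_migrated(cmdb, vm, new_host):
    """Apply a live-migration event to the CMDB the moment it occurs,
    so impact analysis and recovery plans stay accurate."""
    old_host = cmdb.get(vm)
    cmdb[vm] = new_host
    return old_host, new_host

on_vm_migrated(cmdb, "vm-db1", "esx-host-02")
print(cmdb["vm-db1"])  # esx-host-02

# With the mapping current, a host failure maps directly to the affected VMs:
def vms_on_host(cmdb, host):
    return sorted(vm for vm, h in cmdb.items() if h == host)

print(vms_on_host(cmdb, "esx-host-01"))  # ['vm-web1']
```

The point of the sketch is the ordering: the mapping is updated as the event happens, not on a periodic scan, which is what keeps fault and disaster recovery decisions trustworthy.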

Once you have a clear picture of your environment, you can make proper use of a capacity management tool. The primary business benefit of virtualization is achieving the highest utilization possible with the available hardware and reducing unnecessary capital expenditure on new hardware - both of which are impossible without a capacity optimization process. Armed with comprehensive and up-to-the-minute information, you can make informed decisions to transition physical systems onto your virtualized infrastructure, as well as to deploy new virtual devices as efficiently as possible. In this way, you can avoid resource contention problems that can drag down performance, while also ensuring you get the best use of your hardware investments.
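A minimal capacity-aware placement check might look like the following sketch. The host names, resource fields, and selection heuristic are assumptions; real capacity optimization tools weigh many more factors (trend data, reservations, affinity rules).

```python
def place_vm(hosts, required_cpu, required_mem):
    """Pick a host with enough free CPU and memory for a new VM.

    hosts: {name: {"cpu_free": cores, "mem_free": GB}}
    Returns the chosen host name, or None when no host has the headroom -
    the signal that rebalancing or new hardware is needed.
    """
    candidates = [
        (name, res) for name, res in hosts.items()
        if res["cpu_free"] >= required_cpu and res["mem_free"] >= required_mem
    ]
    if not candidates:
        return None
    # simple heuristic: prefer the host with the most free memory,
    # leaving the largest buffer against resource contention
    best, _ = max(candidates, key=lambda item: item[1]["mem_free"])
    return best

hosts = {
    "host-a": {"cpu_free": 4, "mem_free": 16},
    "host-b": {"cpu_free": 8, "mem_free": 64},
}
print(place_vm(hosts, 2, 8))   # host-b
print(place_vm(hosts, 16, 8))  # None - no host can take the VM
```

Even this trivial check captures the trade-off the article describes: packing hosts tightly raises utilization, while the headroom test guards against the contention problems that drag down performance.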

Take Advantage of Intelligent Automation
In a purely physical IT infrastructure, things are relatively static. Typically, physical servers do not come and go or constantly change their configuration. Virtualization alters all that. Virtual resources move freely across physical resources, and change is rapid and continual. In minutes, you can spin up a new server for an application developer or deploy a new instance of an entire multitier application to accommodate a spike in usage.

It is this dynamism that makes virtualization so attractive. It is also what creates a management challenge. You need to be able to unleash this power without losing control. That requires the ability to quickly execute established processes. For example, spinning up a server or a new instance of a multitier application requires the ability to quickly execute an established provisioning process. Executing processes quickly in today's highly complex and diverse infrastructures requires intelligent automation.

Automation tools must have knowledge of the current state of the IT infrastructure. For example, an automated provisioning tool must be able to determine what physical capacity is available at that moment to host new virtual resources. The tool must also have knowledge of all the components and dependencies of a multitier application stack to automatically provision a new instance of that application. This means that automation in the virtual world needs tight integration with the CMDB and capacity optimization. It is no longer enough to send out virtual systems with abandon. True service provisioning requires a sophisticated mix of both architectural knowledge (what components are needed and how they interrelate) and capacity (what is available and what is best for this particular service).
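The combination described above - architectural knowledge plus capacity awareness - can be sketched as a provisioning loop that consults a service blueprint and a capacity view before each deployment step. All names and interfaces here are hypothetical; the point is only the shape of the logic.

```python
def provision_service(blueprint, capacity, deploy):
    """Provision a multitier service tier by tier.

    blueprint: ordered list of (tier_name, cpu_needed) - the architectural
               knowledge of what components are needed and in what order.
    capacity:  {host: free_cpu} - the current capacity view.
    deploy:    callable(tier, host) that performs the actual deployment.
    Returns the placement plan, or raises when capacity is exhausted.
    """
    plan = {}
    for tier, cpu_needed in blueprint:
        host = next((h for h, free in capacity.items() if free >= cpu_needed), None)
        if host is None:
            raise RuntimeError(f"no capacity for tier {tier!r}")
        capacity[host] -= cpu_needed  # reserve before placing the next tier
        plan[tier] = host
        deploy(tier, host)
    return plan

deployed = []
plan = provision_service(
    [("web", 2), ("app", 4), ("db", 8)],
    {"host-a": 6, "host-b": 10},
    lambda tier, host: deployed.append((tier, host)),
)
print(plan)  # {'web': 'host-a', 'app': 'host-a', 'db': 'host-b'}
```

This is what separates true service provisioning from sending out virtual systems with abandon: each placement decision is informed by both the blueprint and the capacity that remains after earlier decisions.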

As virtualization matures from the lab into the production data center, the concept of what it means to provision a virtual server also must change. No longer is it adequate to maintain an overly simplistic, rubber-stamp approach to provisioning using only images or templates. This approach puts most of the configuration work on the requester, and the complexity of managing virtual images tends to grow exponentially over time. Any virtual automation system must allow flexibility when building new systems and be able to deploy complex applications on those systems (clustered databases, application middleware, etc.). In other words, what is the benefit of deploying a virtual template in five minutes if it takes another two weeks to deploy a database on it? Production-quality automation requires a full-stack approach.

Realize the Benefits of Virtualization
If implemented within the context of overall BSM, virtualization offers many benefits to IT. Sharing physical resources improves asset utilization, which translates into lower capital and operating expenditures. Virtualization also speeds your ability to fulfill requests for services. It enables user self-service, so that a developer can obtain a server independently by simply selecting it from a service catalog. The result is dramatically greater productivity, which enables you to respond more quickly to the changing needs of the business. Virtualization also enables you to address performance issues proactively, which translates to higher-quality service delivery.

You can realize all these benefits while maintaining management control of the IT environment. And once virtualization is established, you can continually increase its footprint, adding value without increasing risk. Moreover, a strong virtualization approach will get your organization ready for cloud computing, if that is what the future holds for your business.

For more information about virtualization, visit www.bmc.com/solutions/virtualization-management.html.

More Stories By Ben Newton

Ben Newton is senior manager of Operations Management Solutions at BMC Software, where he leads a team focused on product messaging for BMC’s data center automation and proactive operations portfolio. For the last decade, he has specialized in the various aspects of data center automation, particularly related to configuration, application release, compliance automation, and cloud computing. Before joining BMC, he worked as a systems architect for Electronic Data Systems (EDS) and Northrop Grumman. He graduated in 2000 from Cornell University with a master’s degree in computer science.

More Stories By David Williams

David Williams is a vice president of Strategy in the Office of the CTO at BMC Software, with particular focus on availability and performance management, application performance management, IT operations automation, and management tools architectures. He has 29 years of experience in IT operations management.

Williams joined BMC from Gartner, where he was research vice president, leading the research for IT process automation (run book automation); event correlation and analysis; performance monitoring; and IT operations management architectures and frameworks. His past experience also includes executive-level positions at Alterpoint (acquired by Versata) and IT Masters (acquired by BMC), and as vice president of Product Management and Strategy at IBM Tivoli. He also worked as a senior technologist at CA for Unicenter TNG and spent his early years in IT working in computer operations for several companies, including Bankers Trust.
