Server Virtualization and PCI

Virtualization is all the rage. And why not? It promises to get the most out of the servers in the data center. However, as with any technology, it comes with its own issues. I bring up virtualization because I have run into a number of questions about it over the last few weeks, so I thought it would make a good topic.

First, most people do not recognize that there are four types of virtualization in use today. There is the subject of today's post, the virtualization of servers. There is virtualization of the network in the form of virtual LANs, or VLANs. There is virtualization of storage using storage area networks, otherwise known as SANs. Finally, there is the up-and-coming arena of virtualized applications.

For virtual servers, we need to discuss the basic virtual server environment. First, you have the hypervisor. The first hypervisors were applications that ran on top of a host OS such as Windows or Linux, allowing a single machine to run multiple guest OSes. VMware Workstation, VMware Server and Microsoft Virtual PC are examples of these hosted hypervisor applications. Then came stand-alone (or "bare-metal") hypervisors that run as their own environment and do not require a host OS. VMware ESXi, Xen and Hyper-V are examples of stand-alone hypervisors. The stand-alone hypervisor is how most organizations implement virtual servers for production. The key feature of stand-alone hypervisors is their ability to run guest OSes across a cluster of physical servers, allowing a significant amount of processor power to be harnessed for virtual servers. For disk storage, virtual servers are typically tied to a SAN. Typically only the stand-alone hypervisor itself boots from the physical disk(s) in the physical server, though even that does not have to be the case.

A lot has been written over the last couple of years regarding virtual machine exploits, in particular Blue Pill and SubVirt. However, these exploits are only theoretical and require Intel's or AMD's latest processors that incorporate virtualization support on the chip. Experts are divided on whether these exploits can even work, let alone go undetected. The developer of Blue Pill claims to have written code for the exploit, but that code has never been posted anywhere for independent review. SubVirt was a strictly theoretical exercise conducted by the University of Michigan and Microsoft software engineers, and no code was ever developed. Until a working stand-alone hypervisor attack exists, the jury will be out on their viability.
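Whether a given machine even has the hardware these exploits were said to depend on is easy to check. As a minimal sketch (Linux only; "vmx" and "svm" are the standard /proc/cpuinfo feature flags for Intel VT-x and AMD-V):

```shell
# Hedged sketch: check whether a Linux machine's CPU advertises the
# hardware virtualization extensions (Intel VT-x -> "vmx" flag,
# AMD-V -> "svm" flag). Linux-specific; inside a guest these flags
# are often masked by the hypervisor, so absence is not conclusive.
if grep -Eqw 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "CPU advertises hardware virtualization (VT-x/AMD-V) support"
else
    echo "no vmx/svm flag visible"
fi
```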

So, what are the risks to hypervisors and virtual servers? The good news for stand-alone hypervisors is that while there are exploits that create various denial-of-service conditions, none of them compromises the hypervisor itself. You should be aware, though, that there are exploits that target hypervisor applications or their host OS and then compromise the guest OSes. There are also exploits that compromise a guest OS and then compromise the hypervisor and, potentially, any other guest OSes. A lot of these exploits require that some form of guest-OS-to-guest-OS communication be implemented, so as long as that sort of communication is not configured, the guest OSes will not cause the virtual environment to be compromised. The important thing to remember is that there are still risks to virtualized environments, even though those risks are relatively low compared to other operating environments.

Now that we have a background in server virtualization, what is the impact when we bring PCI into the picture? For PCI purposes, virtualization just adds a layer of complexity. Just remember the PCI "golden rule" of processing, storing or transmitting cardholder data and apply it to the added complexity of virtualization. When a virtual server is processing, storing or transmitting cardholder data, that virtual server, its underlying hypervisor and its SAN are all in scope. That is not to say that all virtual servers on the same hypervisor or in the same SAN are also in scope. However, there are conditions that can bring those other environments into scope.

The most obvious way virtual servers are brought into scope is the same as for any server: improper network segmentation. When an in-scope PCI server shares a network segment with out-of-scope servers, those out-of-scope servers become in scope because they are on the same network segment.
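Segmentation is ultimately enforced by network configuration. As a minimal sketch (Linux iproute2 syntax for illustration; the interface name, VLAN IDs and addresses are assumptions, and real PCI segmentation would normally live on dedicated switch and firewall gear):

```
# Sketch: separate tagged VLANs for in-scope and out-of-scope hosts.
# eth0, VLAN 100 (in-scope) and VLAN 200 (out-of-scope) are illustrative.
ip link add link eth0 name eth0.100 type vlan id 100   # in-scope segment
ip link add link eth0 name eth0.200 type vlan id 200   # out-of-scope segment
ip addr add 10.0.100.1/24 dev eth0.100
ip addr add 10.0.200.1/24 dev eth0.200
ip link set eth0.100 up
ip link set eth0.200 up
# Traffic between segments must then pass a firewall that denies access
# into the in-scope segment, e.g.:
iptables -A FORWARD -i eth0.200 -o eth0.100 -j DROP
```

The VLANs alone are not segmentation for PCI purposes; it is the default-deny filtering between them that keeps the out-of-scope segment out of scope.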

The way SANs are brought into scope is when the PCI virtual server shares storage with other virtual servers that may or may not be in scope. This typically happens when the hypervisor boots guests from a common area of the SAN that holds standard server builds. There is also the possibility that a common SAN area is used by more than one guest OS but, while this is possible, it is not a configuration you will typically see.

How are other virtual servers brought into scope for PCI? This occurs when virtual servers executing in the same cluster are configured with guest communications enabled. Guest communications allow servers to communicate with one another without using traditional network or other server-to-server channels. When a virtual server that is in scope for PCI has these communications enabled, all other guest systems connected to it are also in scope.
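What this looks like in practice depends on the hypervisor. For VMware ESX/ESXi, these guest-to-guest channels are controlled by per-VM .vmx settings; a hedged sketch (setting names are taken from VMware's hardening guidance of this era, so verify them against your version's documentation):

```
# .vmx fragment: restrict VM-to-VM channels
vmci0.unrestricted = "FALSE"           # no unrestricted VMCI VM-to-VM traffic
isolation.tools.copy.disable = "TRUE"  # disable copy/paste via VMware Tools
isolation.tools.paste.disable = "TRUE"
```

With settings like these left at their restrictive values, a guest cannot reach its neighbors except over the regular (and separately segmented) network.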

How do you keep virtual servers out of scope?

  • Make sure that you logically or physically isolate virtual servers that are in scope for PCI from those that are out of scope.
  • Make sure that you have formal, documented controls that ensure in-scope and out-of-scope virtual servers maintain their separation/segregation.
  • Do not implement any virtual-server-to-virtual-server communications in your in-scope virtual environment.
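The last point is auditable. As a minimal sketch, a script can scan VMware .vmx files for unrestricted VMCI (VM-to-VM) communication; the setting name comes from VMware hardening guidance, while the file layout and paths are assumptions you would adjust for your datastores:

```shell
# Hedged sketch: flag .vmx files that allow unrestricted VM-to-VM (VMCI)
# communication. "vmci0.unrestricted" is a VMware setting name; verify it
# against your ESX/ESXi version's hardening guide.
check_vmx() {
    vmx="$1"
    if grep -qi 'vmci0.unrestricted *= *"TRUE"' "$vmx"; then
        echo "WARN: $vmx allows unrestricted VM-to-VM (VMCI) communication"
    else
        echo "OK: $vmx"
    fi
}

# Demo on a temporary sample file (mktemp assumed available).
sample=$(mktemp)
printf '%s\n' 'vmci0.unrestricted = "TRUE"' > "$sample"
check_vmx "$sample"   # prints a WARN line for this sample
rm -f "$sample"
```

In a real environment you would loop this over every .vmx on the datastores backing your in-scope cluster and treat any WARN as a segmentation finding.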

3 Responses to “Server Virtualization and PCI”

  1. Sergei V
    January 10, 2017 at 11:29 AM

    What are your thoughts on the statements made by PCI SSC in section 4.2 of the virtualization supplement “As a general rule, any VM or other virtual component that is hosted on the same hardware or hypervisor as an in-scope component would also be in scope for PCI DSS, as both the hypervisor and underlying host provide a connection (either physical, logical, or both) between the virtual components, and it may not be possible to achieve an appropriate level of isolation, or segmentation, between in-scope and out-of-scope components located on the same host or hypervisor.” In your article above you stated “That does not mean to say that all virtual servers on the same hypervisor in the same SAN are also in scope” which may not be in the spirit of what is being said in the virtualization supplement. For a mixed ESXi environment, is there a good resource or guideline to follow to de-scope some of the VMs that share the same ESXi infrastructure and SAN as in-scope VMs?

    • February 18, 2017 at 3:40 PM

      VMware ESX/ESXi provides the necessary controls to isolate VMs from one another, so just because a VM is in scope for PCI compliance does not mean all VMs in that ESX instance are in scope. That said, hosting an in-scope VM does bring the controls of the ESX implementation itself into scope.

      The paranoid among us keep separate ESX instances for PCI, HIPAA, FISMA, etc., just to keep the control environments straight and easy to manage from a compliance perspective.



June 2009
