

PCI SSC Releases Virtualization Guidelines

On Tuesday, June 14, 2011, the PCI SSC released an Information Supplement regarding Virtualization Guidelines.  Not only does this Information Supplement cover virtualization from a VMware and Hyper-V perspective, but it also goes into cloud computing.

The supplement is broken into six sections:

  • Introduction
  • Virtualization Overview
  • Risks for Virtualized Environments
  • Recommendations
  • Conclusion
  • Virtualization Considerations for PCI DSS

The Introduction and Overview sections are good foundations, but if you already have a good knowledge of virtualization concepts, I would not waste time reading them.  The Risks section is a very good discussion of the risks presented by virtualization.  However, a lot of readers of this supplement are likely to be disappointed, as little of the material in this section has not already been covered in other information sources or even my blog entries.  In my opinion, the Recommendations section presents what would be expected.

The real gem in this supplement is the Appendix that provides the Virtualization Considerations for PCI DSS.  It takes the relevant PCI DSS requirements and provides a lot of guidance on what QSAs should consider when assessing virtual environments.  For a number of these requirements, the writers of the supplement also offer additional best practices and recommendations.  Reading them, one would think they were common sense, but I guess you just cannot assume that anymore.

Page 23 holds the other great gem: a diagram that graphically represents the responsibilities of cloud customers and cloud providers.  It shows who is responsible for data, software, user applications, operating systems, databases, virtual infrastructure, physical infrastructure and the data center where everything resides across the three types of cloud services: IaaS, PaaS and SaaS.  If you are explaining cloud computing to non-technical people, this is probably one of the best diagrams I have seen for explaining responsibilities.

If I had to take the PCI SSC to task on anything, I would argue that cloud computing does not necessarily have anything to do with virtualization.  Yes, a lot of cloud computing solution providers are using virtualized systems to provide their services, but not every cloud provider uses virtualization.  And even if the cloud provider does use virtualization, why is that the customer’s concern?  In my opinion, cloud computing should be an entirely separate document.

I have included below links to all of my prior posts on virtualization for reference.


PCI And Virtualization

I just received an invitation for a Webinar on Virtualization and PCI compliance.  My friend, John Kindervag, is one of the panelists and, no, this is not a paid advertisement for anyone to attend, even though I have provided the link to register.  For an hour they will be discussing this topic because the PCI DSS v2.0 now references virtualization.  Let us be very clear: while the PCI DSS prior to v2.0 never explicitly discussed virtualization, QSAs were instructed on how to approach virtualization security.  And as you will see, virtualization security is no different from any other operating system security.

In my very humble opinion, virtualization is a one-minute security issue, if that long.  Let us cut to the chase: as small an attack vector as virtualization can be, it is still a potential attack vector, so you need to secure it.  Is that clear enough?  The real issue is how to secure a virtualized environment.

There are two different forms of virtualization.  There are stand-alone hypervisors (what NIST refers to as “bare metal”) such as VMware vSphere, VMware ESXi, Microsoft Hyper-V and Citrix XenServer.  Bare metal hypervisors are what we typically run into most in our PCI compliance engagements, but that is not a guarantee.  There are also VMware Server, VMware Workstation and Microsoft VirtualPC (what NIST refers to as “hosted”) that require a host OS and run on it as an application, no different than Microsoft Word.  Obviously, the attack vectors are wildly different for each type of virtualization.

For whatever reason, it seems that a lot of IT professionals do not recognize that a hypervisor is an operating system.  Yes, it is a very specialized operating system, but it is an operating system just like Linux or Windows.  Most hypervisors are based on Linux or UNIX and share a few security hardening similarities.  But given a hypervisor’s specialization, it has significantly different security hardening requirements from its Linux or UNIX counterparts.  As such, hypervisor vendors typically provide a security hardening standard for each of their hypervisor operating systems.  All you need to do is go to the hypervisor vendor’s Web site and download the security hardening guide for your version of the hypervisor.  Which brings up a good point: if your hypervisor vendor does not provide a security hardening guide, then you need to find a different hypervisor.

For bare metal implementations, the only thing you have to secure is the hypervisor itself.  However, with hosted virtualization, you need to secure the host operating system as well as the hypervisor.  In addition to hardening the hypervisor, you will need to follow the host operating system vendor’s security hardening guide to ensure that the host OS is also secure.

But hardening your hypervisor is not the end of the job.  You also need to implement your virtualized environment securely, and that is more than just hardening the hypervisor.  The most obvious item is that any guest operating systems implemented also need to be securely hardened.  It still surprises me how many IT professionals seem to think that because they are implementing Windows or Linux as a virtual machine, there is something different about security and they can skip or skimp on hardening.  Security hardening procedures need to be followed completely regardless of whether the guest OS is stand-alone or in a virtual machine.

The next area that seems to get short shrift is infrastructure security.  This is particularly true of the management of the hypervisor environment.  Most implementations I have seen do a good job of securely connecting the virtual machines, but the hypervisor management environment leaves a lot to be desired from a security perspective.  The first mistake I see is that the hypervisor management environment is not segregated from other networks.  In the first scenario I commonly see, the production network and the hypervisor management network are on the same segment.  If attackers compromise any virtual machine, they gain access to the hypervisor management environment and can therefore gain access to the virtual cardholder data environment.  In the other scenario, the corporate network and the hypervisor management network are one and the same, so everyone on the corporate network can also reach the hypervisor management network.  The way to fix both of these situations is to put the hypervisor management network on its own network segment.  I also recommend that organizations dedicate a NIC to only that segment.  However, if an organization already has an operations management network segment separate from other networks, I have no problem having the hypervisor management network in that segment as well.
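To make the segmentation point concrete, here is a minimal sketch of the kind of check an organization could run against its own inventory.  The subnet addresses, function name and inventory layout are all hypothetical, invented for illustration; the point is simply that a management interface should live on a dedicated segment that does not overlap production or corporate address space.

```python
import ipaddress

# Hypothetical subnets for illustration only; substitute your own address plan.
MGMT_SUBNET = ipaddress.ip_network("10.50.0.0/24")        # dedicated hypervisor management segment
PRODUCTION_SUBNET = ipaddress.ip_network("10.10.0.0/24")  # production VMs
CORPORATE_SUBNET = ipaddress.ip_network("10.20.0.0/16")   # general corporate network

def on_dedicated_mgmt_segment(mgmt_ip: str) -> bool:
    """Return True only if the hypervisor management IP sits on the dedicated
    management subnet, and that subnet does not overlap production or
    corporate address space."""
    ip = ipaddress.ip_address(mgmt_ip)
    if ip not in MGMT_SUBNET:
        return False
    return not (MGMT_SUBNET.overlaps(PRODUCTION_SUBNET)
                or MGMT_SUBNET.overlaps(CORPORATE_SUBNET))

# A management interface accidentally placed on the production segment fails the check:
print(on_dedicated_mgmt_segment("10.50.0.5"))   # True  - dedicated segment
print(on_dedicated_mgmt_segment("10.10.0.5"))   # False - production segment
```

A check like this does not replace a firewall review, but it catches the two bad scenarios described above: management interfaces landing on the production segment, and a management subnet that overlaps the corporate network.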

The other scenario I frequently see is virtual machines from the cardholder data environment (CDE) intermingled with virtual machines that are not part of the CDE.  The problem here is that in the event of a compromise of a non-CDE virtual machine, CDE virtual machines may be accessible because of the configuration of the virtualization environment.  The best way to use virtualization for PCI compliance is to isolate your CDE virtual machines in a physically separate virtual environment from your non-CDE virtual machines.

For the truly paranoid, you can also fiddle with parameters such as physical/logical NIC assignments as well as SAN configurations.  While these sorts of configuration changes can add security to the equation, I have my doubts as to their significance from a security perspective.  In my years of dealing with virtualization, these sorts of changes have been made more for performance reasons, with enhanced security just a nice byproduct.

Finally, there is the maintenance aspect of virtualization.  I think everyone gets the fact that, virtualized or not, the guest operating systems need to be maintained and patched just like their stand-alone brethren.  However, when you ask organizations how often they patch their hypervisor, some will say, very honestly, “You have to patch it?”  Earlier I stated that a hypervisor is also an operating system and, as such, it needs to be patched just like any other operating system.  Granted, a hypervisor does not usually get patched every month like Windows, but hypervisor vendors do issue patches every so often.

Best of luck to John and the round table that are presenting this month on virtualization and PCI compliance.  Hopefully this post will help explain what they will be discussing as well as lead to more insightful questions on the topic.


Server Virtualization and PCI

Virtualization is all the rage.  And why not?  It has the promise of getting the most out of servers that are in the data center.  However, as with any technology, it comes with its own issues.  I bring up virtualization because I have run into a number of questions about virtualization over the last few weeks, so I thought it would make a good topic.

First, most people do not know or recognize the four types of virtualization in use today.  There is the subject of today’s post, the virtualization of servers.  There is virtualization of the network in the form of virtualized LANs, or VLANs.  There is virtualization of storage using storage area networks, otherwise known as SANs.  Finally, there is the up-and-coming arena of virtualized applications.

For virtual servers, we need to discuss the basic virtual server environment.  First, you have the hypervisor.  The first hypervisors were applications that ran on top of a host OS such as Windows or Linux, allowing a single machine to run multiple guest OSes.  VMware Workstation, VMware Server and VirtualPC are examples of hypervisor applications.  Then came stand-alone hypervisors that run as their own environment and do not require a host OS.  VMware ESXi, Xen and Hyper-V are examples of stand-alone hypervisors.  The stand-alone hypervisor is how most organizations implement virtual servers for production.  The key feature of stand-alone hypervisors is their ability to allow guest OSes to execute over a cluster of physical servers, allowing a significant amount of processor power to be harnessed for virtual servers.  For disk storage, virtual servers are typically tied to a SAN.  Only the stand-alone hypervisor typically boots from the physical disk(s) in the physical server, but even that does not have to be the case.

A lot has been written over the last couple of years regarding virtual machine exploits, in particular Blue Pill and SubVirt.  However, these exploits are only theoretical and require either Intel’s or AMD’s latest processors that incorporate virtualization on the chip.  Experts are divided on whether or not these exploits can even work, let alone go undetected.  The developer of Blue Pill claims to have developed code for the exploit, but the code has never been posted anywhere for independent review.  SubVirt was a strictly theoretical exercise conducted by the University of Michigan and Microsoft software engineers and code was never developed.  Until a stand-alone hypervisor attack exists, the jury will be out on their viability.

So, what are the risks to hypervisors and virtual servers?  The good news for stand-alone hypervisors is that while there are exploits that create various denial of service conditions, none of these exploits will compromise the hypervisor.  You should be aware that there are exploits that target hypervisor applications or their host OS and then compromise the guest OSes.  There are also exploits that compromise guest OSes and then compromise the hypervisor and, potentially, any other guest OSes.  A lot of these exploits require that some form of guest OS to guest OS communication be implemented, so as long as that sort of communication is not configured, the guest OSes will not cause the virtual environment to be compromised.  The important thing to remember is that there are still risks to virtualized environments even though those risks are relatively low compared to other operating environments.

Now that we have a background in virtualization of servers, what is the impact when we bring PCI into the picture?  For PCI purposes, virtualization just adds a layer of complexity.  Just remember the PCI “golden rule” of processing, storing or transmitting cardholder data and apply it to the added complexity of virtualization.  When virtual servers are processing, storing or transmitting cardholder data, the virtual server, its underlying hypervisor and SAN are all in scope.  That does not mean to say that all virtual servers on the same hypervisor in the same SAN are also in scope.  However, there are conditions that can bring those other environments into scope.

The most obvious way virtual servers are brought into scope is the same as for any server.  When the network is not properly segmented and a PCI in-scope server sits on a network segment with out-of-scope servers, those out-of-scope servers become in scope because they share the same network segment.

The way SANs are brought into scope is when the PCI virtual server shares storage with other virtual servers that may or may not be in scope.  This typically happens when the hypervisor boots from a common area of the SAN for the execution of standard server builds.  There is also the possibility that a common SAN area is used by more than one guest OS but, while possible, that is not a configuration you will typically see.

How are other virtual servers brought into scope for PCI?  This occurs when virtual servers executing in the same cluster are configured with guest communications enabled.  Guest communications allow servers to communicate among themselves without using traditional network or other server-to-server communications.  When a virtual server that is in scope for PCI has these communications enabled, all other guest systems attached to it are also in scope.

How do you keep virtual servers out of scope?

  • Make sure that you logically or physically isolate virtual servers that are in scope for PCI from those virtual servers that are out of scope.
  • Make sure that you have formal, documented controls that ensure in-scope and out-of-scope virtual servers maintain their separation/segregation.
  • Do not implement any virtual server to virtual server communications in your in scope virtual environment.
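The first and third of these rules lend themselves to an automated check against a VM inventory.  Below is a minimal sketch under assumed conditions: the inventory format, field names and the `scope_violations` function are all hypothetical, and "cluster" stands in for whatever isolation boundary (cluster, host, virtual switch) your environment uses.

```python
# Hypothetical VM inventory; field names are illustrative, not from any real tool.
vms = [
    {"name": "cde-db01",   "in_scope": True,  "cluster": "cde-cluster",  "guest_comms": False},
    {"name": "cde-app01",  "in_scope": True,  "cluster": "cde-cluster",  "guest_comms": False},
    {"name": "corp-web01", "in_scope": False, "cluster": "corp-cluster", "guest_comms": True},
]

def scope_violations(vms):
    """Flag configurations that would pull out-of-scope VMs into PCI scope."""
    problems = []
    # Clusters that host at least one in-scope (CDE) virtual machine.
    in_scope_clusters = {v["cluster"] for v in vms if v["in_scope"]}
    for v in vms:
        # Rule 1: out-of-scope VMs must not share an environment with in-scope VMs.
        if not v["in_scope"] and v["cluster"] in in_scope_clusters:
            problems.append(f'{v["name"]}: shares a cluster with in-scope VMs')
        # Rule 3: no guest-to-guest communications in the in-scope environment.
        if v["in_scope"] and v["guest_comms"]:
            problems.append(f'{v["name"]}: guest-to-guest communications enabled')
    return problems

print(scope_violations(vms))   # [] - this sample inventory passes both checks
```

The second rule, formal documented controls, cannot be scripted; but running a check like this on a schedule is itself one such control, since it produces evidence that the separation is being maintained.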

