It amazes me how clueless some IT people are about the technologies and methodologies they use in their business. You talk to them about the latest technologies and all they can do is parrot back sales rhetoric from the vendors. As a result, you have to wonder about articles you see in the trades about the adoption rates of these new advances.
This is particularly true about “cloud computing” and other latest and greatest technologies. I have written about “the cloud” before, but I thought I would add to that based on this article published by InformationWeek on 2011 Cloud Computing predictions. The number one prediction is that “the cloud” will be PCI compliant. The article says that “the cloud” was believed to be PCI non-compliant and that the PCI DSS does not address “the cloud.”
Let us clear up the comment about the PCI DSS not addressing “the cloud” first, as it is the easiest to dispense with. Whether people admit it or not, the PCI DSS is actually written in such a way as to address changes in technology without directly calling out new technologies. For example, people complained years ago that virtualization was not explicitly called out. They pointed to requirement 2.2.1.a, which restricts configurations to “one primary function” per server. Those of us who had been through the QSA training had been told that this requirement applied per server, logical or physical, and did not apply to the hypervisor. However, there were caveats on how to treat the configuration of the hypervisor and any virtual clusters. While cloud computing is not explicitly called out, the PCI DSS can be applied to “the cloud” as long as the operating constraints of “the cloud” are taken into account during the assessment.
So the PCI DSS can be applied to “the cloud” in its existing form. Then where is the problem? The first problem with “the cloud” is in defining “the cloud.” If you were to ask every vendor of cloud computing to define “the cloud,” I guarantee you would get a unique answer from each vendor asked.
To address this key issue, I went to the best source I could find, the National Institute of Standards and Technology (NIST). NIST has possibly the best definition of “the cloud” I have found, and it only took NIST 15 iterations to get to the one that is currently published. NIST defines the following as the key characteristics of “the cloud.”
- On-demand self-service – This means that consumers can provision “the cloud” environment themselves; technicians are not required to establish it.
- Broad network access – While this most likely means the Internet, it does not have to, hence private “clouds.”
- Resource pooling – This typically implies virtual systems and storage area networks and similar technologies that allow the apportionment of processing power and storage capacity across large resource pools.
- Rapid elasticity – Again, this typically implies some form of virtualization that allows the allocation of additional resources such as CPU or storage on demand.
- Measured service – This means that you are charged for what you use. An organization that only needs one processor and 10GB of storage will be charged significantly less than an organization using 10 processors and 1TB of storage.
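The measured-service characteristic can be illustrated with a back-of-the-envelope billing calculation. The rates below are hypothetical and chosen only to show how usage-based charges scale with consumption; real cloud pricing is far more granular.

```python
# Illustrative usage-based billing. The rates are assumptions, not any
# vendor's actual pricing.
RATE_PER_CPU_HOUR = 0.05   # dollars per processor-hour (assumed)
RATE_PER_GB_MONTH = 0.10   # dollars per GB of storage per month (assumed)

def monthly_charge(cpus, storage_gb, hours=730):
    """Charge = processor-hours consumed plus storage held for the month."""
    return cpus * hours * RATE_PER_CPU_HOUR + storage_gb * RATE_PER_GB_MONTH

small = monthly_charge(cpus=1, storage_gb=10)     # one processor, 10GB
large = monthly_charge(cpus=10, storage_gb=1000)  # 10 processors, 1TB

print(f"small consumer: ${small:,.2f}/month")
print(f"large consumer: ${large:,.2f}/month")
```

The point is not the dollar figures but the model: under measured service the small consumer's bill is roughly an order of magnitude lower, whereas under traditional hosting both might pay for the same fixed hardware.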
For all of the marketing hype that “the cloud” is something radically new, at its heart it is just time sharing with a new name using different technology. Where “the cloud” differs from time sharing is in the service models used to implement it. NIST defines the following three service models for “the cloud.”
- Software as a Service (SaaS) – This is what most IT people think of when they think of “the cloud.” As defined by NIST, SaaS is the capability provided to the consumer to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
- Platform as a Service (PaaS) – NIST defines PaaS as the capability provided to the consumer to deploy onto “the cloud” infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
- Infrastructure as a Service (IaaS) – NIST defines IaaS as the capability provided to the consumer to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
What the NIST definition does not talk about is how “the cloud” is physically implemented. From a PCI compliance perspective, how “the cloud” is implemented is very important and determines whether or not it can be PCI compliant. In my experience there are three ways that “the cloud” can be physically implemented.
- Virtualized – One or more servers configured as a cluster that runs multiple virtual servers of the same or differing operating systems. The cluster is attached to a storage area network (SAN) that is configured to provide storage to the individual consumers’ virtual servers based on their storage requirements. Under this implementation model, each consumer runs on its own virtual server with no other consumers.
- Multiple instances – Under this implementation model each consumer runs on the same server as a separate instance much like the time sharing model of old where multiple consumers ran on the same mainframe using any of the applications that were installed and made available to the consumers. The applications execute as a separate instance for each consumer on the same system, but may share database management or other system software. This implementation approach is going to be most common on UNIX and mainframe implementations of “the cloud.”
- Hybrid – This is the most complex implementation model as it combines virtualization with multiple instances. In effect, the multiple instance implementation model is implemented on virtual servers.
In the virtual implementation model where the consumer’s cloud functions as a virtual server configured like its physical server cousin (i.e., one server, one function), PCI compliance is likely possible. This is because there is logical segregation of virtual servers just as there is physical segregation with real servers.
However, just because we have logical segregation of servers does not mean that it will automatically be PCI compliant. Hypervisors such as VMware and Hyper-V offer connectivity between virtual servers that does not require a network and it is these connections that create PCI compliance issues. So a QSA will need to investigate the virtual implementation to determine whether or not such connections have been implemented.
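As a sketch of what that investigation might look like, the script below scans VMware `.vmx` configuration files for settings that enable guest-to-guest or guest-to-host channels outside the virtual network. The datastore path and the policy choices are assumptions for illustration; `vmci0.present` and `isolation.tools.hgfs.disable` are standard `.vmx` options, but a QSA should verify the relevant setting names against the hypervisor vendor's hardening documentation rather than rely on this list.

```python
# Sketch: flag VMware .vmx settings that may permit inter-VM connectivity
# outside the virtual network. The SUSPECT list and paths are illustrative
# assumptions, not a complete or authoritative hardening checklist.
import glob
import re

SUSPECT = {
    "vmci0.present": "TRUE",                  # VM Communication Interface enabled
    "isolation.tools.hgfs.disable": "FALSE",  # shared folders not disabled
}

def audit_vmx(path):
    """Return the suspect settings found enabled in one .vmx file."""
    with open(path) as f:
        config = dict(
            (m.group(1).lower(), m.group(2).upper())
            for m in (re.match(r'\s*([\w.]+)\s*=\s*"([^"]*)"', line) for line in f)
            if m
        )
    return [key for key, bad in SUSPECT.items() if config.get(key.lower()) == bad]

# Hypothetical datastore layout; adjust to the environment under review.
for vmx in glob.glob("/vmfs/volumes/*/*/*.vmx"):
    issues = audit_vmx(vmx)
    if issues:
        print(vmx, "->", ", ".join(issues))
```

A finding here does not make the environment non-compliant by itself, but each flagged channel is a connection the assessor needs to have explained and documented.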
In the multiple instance and hybrid environments, the consumer’s environment is not logically or physically discrete from other consumers’ environments. As a result, in order to be PCI compliant, all of the consumers have to be PCI compliant, and each must submit to a PCI assessment whenever any other consumer is being assessed, because they are all using the same environment. While technically these environments could be assessed as PCI compliant, let us face it: in practice, getting every consumer to agree to be assessed by your QSA is just not going to happen, even if each consumer in this environment needs to be PCI compliant. As a result, it is these implementation models that will likely never become PCI compliant. Or if such an environment does become PCI compliant, it will likely be through a painful and arduous process.
Something not widely discussed is the susceptibility of “the cloud” to attacks. When you bring up this subject with IT professionals, a lot of them have this odd idea that “the cloud” is nothing like any of their existing technology. “It’s the cloud,” one IT executive recently said to me when asked directly about the vulnerability of “the cloud.” What people seem to forget is that “the cloud” is usually the same as their existing server environment only virtualized and not in their data center. In the end, Windows and Linux are still Windows and Linux with all of their potential attack vectors whether they are running on a physical server or in “the cloud.”
But in addition to the consumer visible technology, there is the underlying virtualization that is also an attack vector, albeit a rather small one. In the multiple instance implementation model, if the platform is a traditional mainframe or midrange OS such as z/OS, MCP or HP-UX, the likelihood of attack is probably very small. However, in hypervisor environments such as VMware, Xen and Hyper-V, the likelihood of attack is still small, but much higher than the mainframe.
In the end “the cloud” can be PCI compliant, but it all depends on how “the cloud” has been implemented. And that means more than just relying on sales and marketing hyperbole. One has to know the architectural foundation of “the cloud.” Unfortunately, most vendors treat this information as a trade secret and it can be very difficult to obtain. However, if you stick to your guns and explain that you will not sign a contract without this information, most vendors will relent and provide it. And if you find that your preferred vendor’s cloud is not likely to be compliant, then you need to walk away and find a cloud that can be compliant.