Archive for October, 2009

31 Oct 09

PCI SSC Issues Clarification On Encrypted Data Being In-Scope

On October 27, 2009, the PCI SSC and the card brands issued a clarification on whether encrypted cardholder data is still in scope (Article Number 1086).  For such a simple clarification, there was a lot of verbiage, so I thought it might be good to take a closer look at the FAQ in question.

The FAQ starts out stating the obvious facts regarding encryption that we all know.

“Encryption solutions are only as good as the industry-approved algorithms and key management practices used, including security controls surrounding the encryption/decryption keys (“Keys”). If Keys are left unprotected and accessible, anyone can decrypt the data. The DSS has specific encryption key management controls (DSS 3.5 and 3.6), however, other DSS controls such as firewalls, user access controls, vulnerability management, scanning, logging and application security provide additional layers of security to prevent malicious users from gaining privileged access to networks or cardholder data environments that may grant them access to Keys. It is for this reason that encrypted cardholder data is in scope for PCI DSS.”

The FAQ then goes on to give an exception.

“However, encrypted data may be deemed out of scope if, and only if, it has been validated that the entity that possesses encrypted cardholder data does not have the means to decrypt it. Any technological implementation or vendor solution should be validated to ensure both physical and logical controls are in place in accordance with industry best practices, prohibiting the entity, or malicious users that may gain access to the entity’s environment, from obtaining access to Keys.”

It appears that the PCI SSC and the card brands have recognized that if you do not have the ability to decrypt the data, then it does not have to be treated as in scope.

However, if an encryption algorithm is used, someone must have a way to decrypt.  The next passage addresses this fact.

“Furthermore, service providers or vendors that provide encryption solutions to merchants who have administrative access and controls to Keys along with the management of termination points for encryption to process transactions, are required to demonstrate physical and logical controls to protect cryptographic keys in accordance with industry best practices (such as NIST referenced in PCI DSS requirement 3.6), along with full compliance with PCI DSS. Merchants should ensure their solution providers who provide key management services and/or act as the point of encryption/decryption are in compliance with PCI DSS.”

While we now have an exception, it appears to be valid only if the merchant does not have access to the encryption keys.  Notice that the exception is written for merchants only; it says nothing about service providers that may likewise not have access to the encryption keys.  In addition, the exception specifically calls out that vendors and service providers that have control of the encryption keys must prove that the keys are protected in accordance with industry best practices, such as the NIST standards called out in requirement 3.6.  Notice, they MUST prove it.  The nonsense of “trust us, we do this” is no longer acceptable.  They must provide proof that they are managing encryption keys under industry best practices.
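
To make the “no means to decrypt” idea concrete, here is a minimal sketch of my own (it is not from the FAQ or any vendor solution, and it uses Python’s third-party cryptography package) showing a merchant system that encrypts a PAN with its processor’s RSA public key.  Because the merchant environment holds only the public key, nothing on the merchant’s side can reverse the encryption; only the processor, which retains the private key, can decrypt.

# Illustrative sketch only: a merchant system that can encrypt card data
# but has no means to decrypt it, because only the processor holds the
# private key. Uses the third-party "cryptography" package.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# In practice the key pair is generated and held by the processor or an HSM;
# it is generated here only so the example is self-contained.
processor_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
processor_public_key_pem = processor_private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# --- Merchant side: holds only the public key ---
merchant_public_key = serialization.load_pem_public_key(processor_public_key_pem)
ciphertext = merchant_public_key.encrypt(
    b"4111111111111111",  # test PAN, never a real one
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# The merchant can store or forward "ciphertext" but cannot recover the PAN.

# --- Processor side: the only party able to decrypt ---
pan = processor_private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert pan == b"4111111111111111"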

Finally, merchants do not necessarily get off scot-free if their encryption keys are managed by a third party and the merchant does not have access to the keys.  The final statement in the FAQ covers this point.

“Merchants should be aware that encryption solutions most likely do not remove them completely from PCI DSS. Examples of where DSS would still be applicable include usage policies, agreements with service providers that deploy payment solutions, physical protection of payment assets and any legacy data and processes (such as billing, loyalty, marketing databases) within the merchant’s environment that may still store, process or transmit clear text cardholder data, as that would remain in scope for PCI DSS.”

So, in my opinion, here is the bottom line for this FAQ.  You can rule encrypted cardholder data out of scope as long as the merchant does not have access to the encryption keys and the third party in control of those keys can prove that it is properly managing them.  However, all other relevant PCI DSS requirements still apply.

UPDATE:  It has been pointed out to me that having an independent internal party manage the keys would also satisfy the exception.  While I agree with that assessment, for most small and mid-size merchants and service providers I would argue that the only practical way to comply with this exception is to use an external third party to manage the keys.  Therefore, I stand by my interpretation that a third party needs to manage the keys.

UPDATE: The PCI SSC changed its FAQ system and the referenced FAQ number has changed to 1086, which I have updated in the post.

25 Oct 09

PA-DSS Certification Means Nothing?

A couple of weeks ago we had a conference call with our PCI SSC QA team.  During that call, one of the areas we discussed was the relevance of PABP or PA-DSS certification to the PCI DSS Security Assessment Procedures (SAP) and the Self-Assessment Questionnaires (SAQ).  At the end of the discussion, we were essentially told that PABP and PA-DSS certification was not relevant to a merchant’s compliance with the PCI DSS.

Based on this position, I can understand why software vendors are so anti-PCI compliance.  If PABP or PA-DSS certification does nothing to help customers achieve PCI compliance, then why pay to have your solutions certified?  From a merchant’s perspective, why buy a PABP or PA-DSS certified application if it is meaningless to your PCI compliance under the PCI DSS?  We were even told in our QSA recertification training in the spring that the relevance of PABP and PA-DSS was really nothing.  Trust me, there is nothing worse than telling a client that, even though their application is PABP or PA-DSS certified, you still have to cover all of that ground all over again.  In my mind, what this implies is that PA-QSAs are not to be trusted.  It is a situation similar to relying on other QSAs’ work.

Yes, I know that Visa mandates application certification for purchased solutions.  However, if there is no business benefit other than making Visa happy, why bother?  It is positions like this that give critics the material to successfully argue the shortsightedness and irrelevance of the PCI standards.

I do agree that you cannot use PABP or PA-DSS compliance to brush aside all of the PCI DSS.  However, there are some obvious areas where a PABP or PA-DSS certification should be a valid control, as long as the application is implemented per the vendor’s Implementation Guide, which must explain how to maintain PCI compliance.

For instance, if you have an application that is PABP or PA-DSS certified, what is the point of covering requirements 3.2, 3.3 and 3.4?  In my opinion, if the application is implemented properly, these requirements should be Not Applicable because the application is PABP or PA-DSS compliant.  However, based on our training, marking them not applicable is not allowed.
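
For those wondering what requirements 3.3 and 3.4 are driving at, here is a minimal sketch of my own in Python (the function names and the salt handling are hypothetical, not from any standard or vendor documentation): masking the PAN when displayed, showing at most the first six and last four digits, and rendering it unreadable when stored, for example via a salted one-way hash.

# Illustration of the intent behind PCI DSS requirements 3.3 and 3.4.
# The function and salt names are hypothetical; real solutions would use
# an approved method (truncation, tokenization, strong cryptography, etc.).
import hashlib
import os

def mask_pan(pan: str) -> str:
    """Display form: at most first six and last four digits (requirement 3.3)."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def store_pan_unreadable(pan: str, salt: bytes) -> str:
    """Stored form: salted one-way hash so the PAN is unreadable (requirement 3.4)."""
    return hashlib.sha256(salt + pan.encode()).hexdigest()

salt = os.urandom(16)          # kept secret and managed like any other key material
test_pan = "4111111111111111"  # test number, never a real PAN

print(mask_pan(test_pan))                    # 411111******1111
print(store_pan_unreadable(test_pan, salt))  # 64-character hex digest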

Requirements 6.3 through 6.5 are another set that is meaningless when dealing with a certified application.  As long as the client is not modifying the application, these are pointless.  The PABP and PA-DSS cover them for the software publisher, so they should be not applicable as well.  However, as a QSA you are still required to prove that you properly assessed these points regardless.

Nine times out of ten, the vendor is really irritated (and that is being kind) when you ask them to explain how they meet these requirements.  And rightly so.  They have spent tens of thousands of dollars per application to have it assessed as PABP and/or PA-DSS compliant.  And now they have an army of QSAs contacting them to cover the same ground the PA-QSA already covered.

At some point, the PCI SSC needs to get a clue about what is going on out in the field.  I know that this topic has been brought up on a number of occasions, but there just does not seem to be any interest in dealing with the issue.  Until merchants, service providers and software vendors start to see that the PCI standards have a real business benefit, we are going to have to fight tooth and nail all the way to a secure card processing environment.

15 Oct 09

Cloud Computing and PCI Compliance

Here is a topic that is starting to gain traction with a lot of organizations.  However, it is a solution that is fraught with danger, particularly when it comes to PCI compliance.

First, we need to define cloud computing.  Cloud computing is essentially similar to grid computing in that it leverages the computing power of multiple CPUs to execute applications.  As usual, what goes around comes around.  If cloud computing sounds familiar, the best analogy I can give is the old mainframe.  Only instead of a single system with multiple CPUs, we are using virtualization technology and one or more instances of Windows, Linux, UNIX, etc.

Where the cloud computing model starts to go awry is in how it is implemented.  In order to keep costs low, cloud computing environments typically run one copy of an application in a single environment as multiple instances.  In this scenario, multiple organizations will be using an application like Oracle Financials, but each organization is running as its own instance on the same virtual machine.  To make matters worse, their data are being stored in the same Oracle database instance but, obviously, in different tables.

A lot of you are now saying, “So what?”  The “so what” is the lack of segregation between organizations.  If your organization is using Oracle Financials as a PCI in-scope application and there are other organizations running their applications on the same logical system, then your application and theirs are all in scope.  Do you think those other organizations are going to cooperate and let your assessors examine their organizations and applications?  Probably not.
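
To illustrate the segregation problem, here is a hypothetical sketch in Python (the table names and data are made up, and sqlite3 simply stands in for the shared Oracle instance).  Each tenant has its own tables, but anyone who gains privileged access to the shared instance can enumerate and read every tenant’s data.

# Hypothetical illustration of why a shared database instance undermines
# segregation: each tenant has its own table, but any account or exploit
# with access to the instance can read every tenant's data. sqlite3 is a
# stand-in for the shared database instance described above.
import sqlite3

shared_instance = sqlite3.connect(":memory:")  # one instance, many tenants
shared_instance.executescript("""
    CREATE TABLE tenant_a_payments (card_ref TEXT, amount REAL);
    CREATE TABLE tenant_b_payments (card_ref TEXT, amount REAL);
    INSERT INTO tenant_a_payments VALUES ('token-a-001', 19.99);
    INSERT INTO tenant_b_payments VALUES ('token-b-001', 250.00);
""")

# A compromised or overly privileged session scoped to "tenant A" can still
# enumerate and read tenant B's tables, because the boundary is only logical.
tables = shared_instance.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
for (table_name,) in tables:
    rows = shared_instance.execute(f"SELECT * FROM {table_name}").fetchall()
    print(table_name, rows)  # tenant A's session sees tenant B's rows too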

Okay, so we will require our application to be logically separated from those of other organizations.  While you can do that, you are now looking at the ASP model of computing, and you are not going to get the cost advantage of cloud computing.

As a side note, the media have recently reported on a number of cases where one of the organizations in the cloud was doing something illegal, law enforcement raided the data center, and the cloud servers involved, along with the disk and tape storage, were taken as evidence.  Organizations that were not involved were effectively put out of business because their systems were now evidence.  In a number of these cases, the uninvolved organizations attempted to get their portion of the systems released and were rebuffed by the courts.  So keep this in mind when you start looking at cloud computing.

03 Oct 09

Log Data Revisited

I am arguing with another client about the retention of log data. They are pushing back because they generate around 190GB of log data daily from their Web sites. That works out to a whopping 70TB of log data annually. The problems they face are: (1) not enough space in their data center for another SAN, and (2) the budget constraints we all face in a down economy. They are not unusual. We get a lot of pushback on log data.
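
For what it is worth, the arithmetic behind those figures is simple enough to check (decimal units assumed):

# Back-of-the-envelope check of the storage figures in the paragraph above
# (decimal GB/TB assumed; the exact units do not change the conclusion).
daily_gb = 190
annual_tb = daily_gb * 365 / 1000
print(annual_tb)  # 69.35, i.e. roughly 70TB of log data annually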

The problem is that most security and operations personnel live in the moment, or at least the ‘near’ moment. These people are researching a problem that happened a couple of minutes or maybe a day or two ago. As a result, they have a tough time justifying in their own minds why retaining a year’s worth of log data is important. Then there is management’s view on the subject. Management typically does not understand the reasoning for keeping a year’s worth of log data either, let alone what log data really is. Forensic examiners, for the most part, live in the past. They want to know everything that is relevant up to the point where the target determined that an incident had occurred. Without complete records from every device involved in the incident, tracing it back to its origin is likely impossible.

And this is typically the first point where most clients push back. “Why do I have to log everything?” is the most common refrain. I like to use an analogy here. I explain that system logs and their related management systems are the IT industry’s version of commercial aviation’s flight data recorder. If you want to be able to reconstruct where things went wrong and how the incident occurred so that you can address the situation, you need this information. Unfortunately, unlike an airplane, an incident can take months just to be recognized, and it can involve an unknown number of devices, so there needs to be a much larger pool of data collected and made available for forensic analysis. And until an incident is recognized, you will not know what information will be germane to the forensic investigation and what is chaff. As a result, you need to log as much as you can so that you have as complete a record of your operations as possible.

The second thing we hear most often is, “We only look at log data when we have a problem.” Typically, what this means is that the problem has grown to the size of a whale and has become noticeable to end users, who are complaining. Unfortunately, some of those end users are also likely senior management, so it has become a large priority to fix the problem. As a result, you really do not have a problem; you have a crisis. It will now likely take a lot of time, effort and possibly outside consultants to fix the problem, at great expense. If the organization had been analyzing its log data on a daily basis, the problem likely would have been identified long before it became a crisis and could have been taken care of very easily with the existing staff and possibly a bit of overtime.
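
A daily review does not have to be elaborate.  Here is a minimal sketch in Python of the kind of scheduled check I am talking about; the paths, pattern and threshold are all hypothetical placeholders, not anyone’s actual configuration.

# Minimal daily log review sketch: count error lines in yesterday's web
# server log and raise a flag if the count jumps past a simple threshold.
# Paths, the pattern and the threshold are placeholders for illustration.
import gzip
import re
from datetime import date, timedelta
from pathlib import Path

LOG_DIR = Path("/var/log/archive/web")   # hypothetical archive location
ERROR_PATTERN = re.compile(r'" 5\d\d ')  # HTTP 5xx responses in access logs
ALERT_THRESHOLD = 500                    # tune to your own baseline

yesterday = date.today() - timedelta(days=1)
log_file = LOG_DIR / f"access-{yesterday:%Y%m%d}.log.gz"

error_count = 0
with gzip.open(log_file, "rt", errors="replace") as handle:
    for line in handle:
        if ERROR_PATTERN.search(line):
            error_count += 1

if error_count > ALERT_THRESHOLD:
    print(f"ALERT: {error_count} server errors on {yesterday}, investigate today")
else:
    print(f"OK: {error_count} server errors on {yesterday}")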

The third lament is, “Why do I have to retain a year’s worth of log data?” As I stated earlier, an incident may not be recognized until a number of months have passed. This can happen because vulnerabilities sometimes go unrecognized for months, even years. Statistics say that, on average, it takes a little over 30 days from the time a vulnerability is recognized until a patch is available. But there is the rub: it is not until the vendor or a researcher identifies the vulnerability and makes it public that you will finally know you have a vulnerability. And that confirmation by the vendor may occur months after the vulnerability was found by attackers. In the meantime, the attackers have been using the vulnerability to their advantage without anyone’s knowledge. As a result, your protection systems do not have a signature to recognize this new vulnerability until it is identified months later. So you do not know whether you have been compromised until you are able to run that new signature against all of your systems and log data. If you only have 30 days’ worth of log data, you may never see that you have been compromised by the new vulnerability. This is where what you do not know really hurts you.
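
And this is exactly where a year of retained logs pays off.  The following sketch (again, the archive layout and the indicator string are hypothetical) shows how you could sweep an entire retained archive for a newly published indicator of compromise the day it is announced:

# Sweep a year's worth of archived logs for a newly published indicator of
# compromise. The archive layout and the indicator string are hypothetical;
# the point is that the search is only possible if the data was retained.
import gzip
from pathlib import Path

ARCHIVE = Path("/var/log/archive")  # hypothetical one-year archive
NEW_INDICATOR = "EvilBot/1.0"       # placeholder for the newly published signature

hits = []
for log_file in sorted(ARCHIVE.rglob("*.log.gz")):
    with gzip.open(log_file, "rt", errors="replace") as handle:
        for line_number, line in enumerate(handle, start=1):
            if NEW_INDICATOR in line:
                hits.append((log_file, line_number, line.rstrip()))

if hits:
    print(f"Possible compromise: {len(hits)} matching log entries")
    for log_file, line_number, line in hits[:10]:
        print(f"{log_file}:{line_number}: {line}")
else:
    print("No matches in the retained archive for this indicator")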

The final common complaint I hear is, “Why do I have to have log data from my routers, switches and other infrastructure?” I always respond with my own question: “What? Your infrastructure is not involved in an incident?” Your network infrastructure of firewalls, intrusion detection/prevention, routers, switches, load balancers and the like may be the first opportunity you have to gain information on how an incident was engineered. All of these devices are capable of generating log information of their own. It is that information that will be invaluable in determining the initial point(s) of entry into your network. For it is the infrastructure logs that will tell you whether the incident originated from the outside or the inside.

The latest breach study from Verizon Business Services indicates that 70% of all breaches originate from outside an organization and that 65% of breaches are the result of human error. The CSI Computer Crime Survey indicates that every business will suffer an incident, on average, every two and a half years. Basically, this all says that you will have an incident at some point in the near future and it will likely be the result of human error. If you do not have the information to identify and rectify the problem, how will you address it? And that is just it; you will not address the problem because, likely, you will not even know the problem exists. And this is where organizations get into trouble. Because they do not know they have a problem, they believe all is well until the problem finally becomes so big that it cannot be ignored. Then the organization blames everyone else for not telling them the problem existed.

So keep and analyze that log data. Analyze your log data at least daily, but preferably more often. The job and business you save may be your own.



