Archive for January, 2012


Encryption Key Management Primer – Requirement 3.6

Requirement 3.6 states:

“Fully document and implement all key-management processes and procedures for cryptographic keys used for encryption of cardholder data, including the following:
Note: Numerous industry standards for key management are available from various resources including NIST, which can be found at”

Again, for users of PGP or a hardware security module (HSM), you should have no problem complying with the sub-requirements of 3.6.  For those of you who do encryption key management manually, you will have to implement detailed procedures to meet these requirements, and it is on you that this post focuses.  However, keep in mind that the order in which I address the sub-requirements is not necessarily PCI order.

Let’s first talk about key management in general.  The PCI DSS references the NIST key management procedures.  NIST has issued special publication (SP) 800-57, which is probably the best “bible” for encryption key management.  It goes into detail not only on encryption itself (volume 1), but also on key management (volume 2) and on special key management situations such as public key infrastructure (PKI), IPSec and others (volume 3).  For requirement 3.6, only volume 2 is likely relevant unless you are using IPSec, PKI or other special cases.

For those of you that are a service provider and you share cryptographic keys with your customers, you will need to document your policies, standards and procedures for securely sharing those cryptographic keys with your customers.

Under requirement 3.6.c are the secure key management practices that the PCI DSS requires.  This is where people seem to get off track and where NIST SP800-57 can provide you the greatest assistance.

Requirement 3.6.1 is the easiest and I have spoken on this topic in my post on encryption basics.  You need to generate strong encryption keys.  You should generate your encryption keys using a minimum of two people (your key custodians) and using a pseudo-random key generator such as the one available from Gibson Research Corporation (GRC).  Each key custodian enters their part of the key into the system and then the system uses those parts to encrypt the data.
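As a sketch of what strong key-part generation can look like, the snippet below uses Python’s `secrets` module, a cryptographically secure generator, in place of a Web-based one; the two-custodian structure shown is illustrative only.

```python
import secrets

# Each of two key custodians independently generates a key component.
# The secrets module uses the operating system's cryptographically
# secure random number generator.
component_1 = secrets.token_hex(32)  # 32 random bytes -> 64 hex characters
component_2 = secrets.token_hex(32)

# Each custodian enters only their own component into the system;
# the system combines the components to form the actual encryption key.
print(len(component_1), len(component_2))  # 64 64
```

In practice each custodian would generate and enter their component separately, never seeing the other custodian’s value.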

Requirement 3.6.2 discusses how encryption keys are distributed and 3.6.3 covers how key parts are stored.  Both relate to how your key custodians manage their part of the key.  Typically a key custodian will be assigned a safe where their key part is stored on paper or other unsecured media.  Only the custodian and their backup know the combination to the safe.  I have also seen PGP Zip or encrypted WinZip files used as the secure repository for key parts.  The idea is that key parts cannot be distributed in clear text.  And when the key parts are stored, they need to be secured.

Requirement 3.6.4 always seems to be a sticking point because people get caught up in the key expiration concept.  The primary thing to remember is that whether or not a key expires typically depends on the encryption method used, such as public key infrastructure (PKI).  In most cases, encryption keys do not expire, so this requirement is not applicable.  In those cases where a key does expire, you need to have documented procedures explaining how the key expiration process is handled.

This does bring up a good point about any encryption process and addresses requirements 3.6.5.a and 3.6.5.b.  There will be times when encryption keys need to be changed.  This most often happens when a key custodian changes roles or leaves the organization.  In those cases, you need to have a process in place to change the encryption keys.

One question implied by requirements 3.6.5.a and 3.6.5.b is: how do you know that an encryption key has been weakened or compromised?  Typically, your activities surrounding critical file monitoring will be the trigger indicating that encryption keys have been compromised, or that compromise has at least been attempted.  You should be monitoring the encryption process as a critical file, as well as the encrypted encryption keys if you are storing them on a server or other computer system.  Should your file monitoring indicate that these files are being tampered with, you need to change your keys.

Requirement 3.6.6 is all about the manual management of encryption keys by key custodians and the need for no one individual to know the entire encryption key.  Obviously this requires at least two people to be involved, but you need backups for those people, so it really is a minimum of four people.  I have seen instances of three and four primary key custodians; however, going beyond that seems a bit much.

Requirement 3.6.7 is all about change control: making sure that key changes require authorization and that unauthorized changes cannot go unnoticed.  Management approval of encryption key changes is a straightforward concept and should already be implemented as part of change control.  However, people seem to get balled up in the unauthorized key change concept.  Again, your critical file monitoring processes should catch these attempts.

Requirement 3.6.8 requires that all key custodians formally acknowledge their responsibilities in managing and protecting any encryption keys in their custody by signing an agreement to that effect.  This does not have to be a long and formal document, just a single page indicating that the individual has access to the encryption key parts and agrees to keep those parts secure and protected.

And there we have it.  If you have read the entire series, you should now have a very basic understanding of encryption and encryption key management.  You are not an expert, but you should now have a basic understanding of the concepts and controls surrounding encryption.


Are You A Level 2 Merchant?

It is that time of the year again.  I have had calls from a number of Level 2 merchants in a panic about the upcoming MasterCard deadline.  I also have a number of prospective clients that are saying, “Deadline?  What deadline?”

To refresh everyone’s memory, three and a half years ago, MasterCard issued a directive that by June 30, 2010, all Level 2 merchants needed to either: (1) have a PCI SSC certified Internal Security Assessor (ISA) prepare their Self-Assessment Questionnaire (SAQ) or, (2) have a PCI SSC certified Qualified Security Assessor (QSA) conduct a PCI assessment and issue a Report On Compliance (ROC).

Because of the uproar this directive caused with their Level 2 merchants, MasterCard backed off the 2010 date but set forth a new date of June 30, 2012.  Now jump to the present: it is January 2012 and the calls from Level 2 merchants are starting to ramp up.  These merchants are now in a panic because, guess what?  They put the ISA/ROC issue on the back burner and forgot about it until just now, and they cannot afford to meet this requirement.  Oops!

I have sent a message to MasterCard to confirm that the June 30, 2012 date is still valid.  Until I receive confirmation, note that MasterCard’s Web site still posts June 30, 2012 as the date by which you will need to meet the aforementioned requirements.

For all of you Level 2 merchants that accept MasterCard, I would highly recommend that you contact your acquiring bank and confirm the SAQ and ROC reporting requirements.

UPDATE: MasterCard did confirm that the June 30, 2012 date was accurate.


Encryption Key Management Primer – Requirement 3.5

Before getting into key management, it is important to acknowledge that these requirements are not relevant to every encryption solution.  In the case of PGP, key management requires the user to create a private/public key pair and then authenticate using their passphrase.  It is not that the principles of key management do not apply to PGP; it is just that such key management practices are not usually associated with PGP.

Hardware security module (HSM) solutions typically provide facilities for strong key creation as well as a secure storage mechanism for keys.  A prime example of this concept is with the Trusted Platform Module (TPM) that is now standard on notebook computers.  I will try to point out the differences with these various approaches as I move through the discussion.

So with that as a background, let us start into our discussion of requirement 3.5.  The overall goal of requirement 3.5 is stated as:

“Protect any keys used to secure cardholder data against disclosure and misuse:  Note: This requirement also applies to key-encrypting keys used to protect data-encrypting keys—such key-encrypting keys must be at least as strong as the data-encrypting key.”

Typically, the first question I get is, “What in the world is a key-encrypting key?”  I need to push this question off, as there are other questions that need to be addressed before a meaningful discussion of the key-encrypting key (KEK) can occur.

With the key-encrypting key question put on the back burner, the second question most people ask is how one accomplishes protection of encryption keys.  In the case of PGP, it is the responsibility of the key owner not to disclose their passphrase.  With an HSM, it is the role of the key repository to protect and manage keys.  If your encryption solution does not provide a secure repository for your encryption keys, then you need to provide such a capability.

When you have PGP or an HSM, protecting encryption keys is built in.  A manual option is to store your encryption keys on paper or removable storage media and keep the paper or removable media in a safe.  The problem with the manual option is that encryption keys are typically needed to boot the secure server or start an application that needs access to encrypted data.  The security surrounding the keys becomes problematic at best, as operations personnel need regular access to the keys in order to keep systems and applications running.  As a result, most organizations prefer to have their keys stored electronically for ease of access by servers and applications.

This now brings us to the concept of a KEK.  If you desire electronic storage of encryption keys, then you need to ensure their security.  If you store encryption keys in a folder on another server, you need not only to restrict access but also to encrypt them, so that should someone copy the folder, they cannot read the keys.  To accomplish that, you encrypt the encryption keys wherever they are stored and then secure the key-encrypting key (the KEK).
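To make the idea concrete, here is a deliberately simplified sketch of key wrapping; a one-time XOR stands in for what a real system would do with an authenticated cipher or the AES key-wrap construction.

```python
import secrets

dek = secrets.token_bytes(32)  # data-encrypting key protecting cardholder data
kek = secrets.token_bytes(32)  # key-encrypting key, stored separately

# Toy illustration only: XOR the DEK with the KEK before storing it.
# Real systems use an authenticated cipher or AES key wrap instead.
wrapped = bytes(a ^ b for a, b in zip(dek, kek))    # what gets stored on disk
unwrapped = bytes(a ^ b for a, b in zip(wrapped, kek))

print(unwrapped == dek)  # True: only a KEK holder can recover the DEK
```

Copying the folder yields only `wrapped`, which is useless without the separately secured KEK.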

This is where requirements 3.5.1 and 3.5.2 come into play.  These requirements are focused on ensuring that if an organization has taken the approach of managing their own encryption keys and electronically storing them, the keys are properly secured and managed by as few persons as feasible.

That does not mean that those of you using PGP or HSM are not responsible for also complying with requirements 3.5.1 and 3.5.2.  It is just that those of you using PGP or HSM likely have an easier time complying as both technologies can provide their own solutions for meeting these requirements.

In a future entry, I will discuss requirement 3.6.


Hashing Basics

I am catching some heat over the Encryption Basics post from some of my more ardent detractors that have called me on the carpet over making security and PCI “too simple” or “dumbed down.”  As I said in that post, it was in no way meant to be a full dissertation on the topic.  This post, as with the previous post, is not a complete dissertation either.  I just want to clarify some things so that people have a reasonable foundation on the topic of hashing.

Hashing comes up in requirement 3.4 which says:

“Render PAN unreadable anywhere it is stored (including on portable digital media, backup media, and in logs) by using any of the following approaches: one-way hashes based on strong cryptography (hash must be of the entire PAN), truncation (hashing cannot be used to replace the truncated segment of PAN), index tokens and pads (pads must be securely stored), strong cryptography with associated key-management processes and procedures.”

Before we get into hashing algorithms, I need to make sure everyone is clear that hashing is a one-way operation.  Unlike encryption, you cannot reverse data that has been hashed.  So if you are considering using hashing, you need to make sure that you will never need the original information such as a PAN.

Probably the most common one-way hash algorithm is SHA-1.  Unfortunately, it has been shown that SHA-1 has issues that make it not as secure as its successor family, SHA-2.  As such, those of you using SHA-1 should be migrating to SHA-2 or another hashing algorithm.  Those of you considering a solution that relies on SHA-1 should be asking the vendor if SHA-2 or another secure algorithm can be used.
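For illustration, both algorithms are available in Python’s standard `hashlib` module; the PAN below is a commonly published test card number, not a real one.

```python
import hashlib

pan = b"4012888888881881"  # a published test card number, not a real PAN

# SHA-1 produces a 160-bit (40 hex character) digest and is deprecated.
print(hashlib.sha1(pan).hexdigest())

# SHA-256, from the SHA-2 family, produces a 256-bit (64 hex character) digest.
print(hashlib.sha256(pan).hexdigest())
```

Note that either call is one-way: there is no function that takes the digest back to the PAN.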

The other most common one-way hashing algorithm is RSA’s Message Digest 5 (MD5).  Like SHA-1, MD5 has been deemed no longer acceptable due to a number of issues that make it insecure.  As a result, users of MD5 are advised to migrate to SHA-2 or another secure hashing algorithm.  And just so you are not confused, MD5 is still commonly used as a way to generate a checksum for confirming the validity of a download, which is an entirely different purpose than what I am discussing here.

And to show you that things do not stand still in the cryptographic hashing world, the United States National Institute of Standards and Technology (NIST) is in the process of selecting the winner of their SHA-3 competition.  That winner will be announced sometime in late 2012.

While you can use a hash function “as is,” security professionals typically recommend the addition of a “salt” to complicate the resulting hashed value.  A salt is two or more characters, usually non-displayable binary values, appended to the original value, in our example the PAN.  The salt adds complexity to the resulting hashed value, making rainbow tables more difficult, if not impossible, to use against hashed values.

One useful property of hashed values is that you can still use them for research, as long as you are not using a rotating salt.  With a fixed salt, the same PAN produces the same hashed value every time it is hashed.  This can be very important for fraud investigators and anyone else that needs the ability to research transactions conducted with the same PAN.  If these people do need the actual PAN, they will have to go to a different system that stores the PAN encrypted or go to their card processor for the PAN.
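A minimal sketch of this behavior, assuming a fixed secret salt and SHA-256; the salt value and PAN here are made up for illustration.

```python
import hashlib

SALT = bytes.fromhex("9f1c7a03")  # fixed secret salt (illustrative value only)

def hash_pan(pan: str) -> str:
    # With a fixed salt, the same PAN always produces the same digest,
    # so investigators can match transactions without seeing the PAN.
    return hashlib.sha256(SALT + pan.encode()).hexdigest()

a = hash_pan("4012888888881881")  # test card number, not a real PAN
b = hash_pan("4012888888881881")
print(a == b)  # True: same PAN, same digest
```

A rotating salt would break that equality, which is exactly why it defeats this kind of research.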

The PCI DSS issues a warning about using hashing and truncation together.  There is a note in requirement 3.4 that states:

“It is a relatively trivial effort for a malicious individual to reconstruct original PAN data if they have access to both the truncated and hashed version of a PAN.  Where hashed and truncated versions of the same PAN are present in an entity’s environment, additional controls should be in place to ensure that the hashed and truncated versions cannot be correlated to reconstruct the original PAN.”

This note is most relevant to situations where the truncated value retains the first six digits AND the last four digits.  However, it brings up a good point regarding all methods of information obscuring, including encryption.  Never, ever store the obscured value along with the truncated value.  Always separate the two values and also implement security on the obscured value so that people cannot readily get the obscured value and the truncated value together without oversight and management approval.
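To see why correlating the two is trivial, the sketch below brute forces the six hidden digits of a hypothetical 16-digit PAN, given its truncated form (first six and last four digits) and an unsalted hash; at most one million guesses are needed.

```python
import hashlib

# Hypothetical data an attacker has obtained together:
first6, last4 = "401288", "1881"                          # the truncated PAN
target = hashlib.sha256(b"4012888888881881").hexdigest()  # the stored hash

# Only the middle six digits are unknown: at most 1,000,000 candidates.
recovered = None
for middle in range(1_000_000):
    candidate = f"{first6}{middle:06d}{last4}"
    if hashlib.sha256(candidate.encode()).hexdigest() == target:
        recovered = candidate
        break

print(recovered)  # 4012888888881881
```

A secret salt, or keeping the two representations apart, is what defeats this search.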

Hopefully now we all have a base level knowledge of hashing.


Encryption Basics

NOTE: This is a revised version of my original post to reflect readers’ concerns regarding statements made that do not reflect best practices surrounding encryption key management.  A big thank you to Andrew Jamieson who reviewed and commented on this revised posting.

During the last couple of years, I have run into more and more questions regarding encryption and encryption key management than I thought existed.  As a result, I have come to the realization that, for most people, encryption is some mystical science.  The stories of the Enigma machine and Bletchley Park have only seemed to add to that mysticism.  Over the years, I have collected my thoughts based on all of the questions and developed this distilled and very simplified version of guidance for those of you struggling with encryption.

For the security and encryption purists out there, I do not represent this post in any way, shape, or form as the “be all, end all” on encryption.  Volumes upon volumes of books and Web sites have been dedicated to encryption, which is probably why it gets the bad reputation it does, as the vast majority of these discussions are about as esoteric as they can be.

In addition, this post is written in regard to the most common method of encrypting data stored in a database or file: the use of an encryption algorithm against a column of data or an entire file.  It does not cover public key infrastructure (PKI) or other techniques that could be used.  So please do not flame me for missing your favorite algorithm, other forms of encryption or some other piece of encryption minutiae.

There are all sorts of nuances to encryption methods and I do not want to cloud the basic issues so that people can get beyond the mysticism.  This post is for educating people so that they have a modicum of knowledge to identify hyperbole from fact.

The first thing I want to clarify to people is that encryption and hashing are two entirely different methods.  While both methods obscure information, the key thing to remember is that encryption is reversible and hashing is not reversible.  Even security professionals get balled up interchanging hashing and encryption, so I wanted to make sure everyone understands the difference.

The most common questions I get typically revolve around how encryption works.  Non-mathematicians should not need to know how an encryption algorithm works; that is for the experts that develop and prove the algorithms.  In my opinion, unless you are a mathematician studying cryptography, you should trust the research conducted by the experts regarding encryption algorithms.

That is not to say you should not know strong cryptography from weak cryptography.  I am just suggesting that the underlying mathematics that defines a strong algorithm can be beyond even some mathematicians, so why we expect non-mathematicians to understand encryption at this level is beyond me.  My point is that the algorithms work.  How they work is not, and should not be, a prerequisite for management or even security professionals to use encryption.

This leads me to the most important thing people need to know about encryption.  If you only take away one thing from this post, it would be that strong encryption comes down to four basic principles.

  • The algorithm used;
  • The key used;
  • How the key is managed; and
  • How the key is protected.

If you understand these four basic principles you will be miles ahead of everyone else that is getting twisted up in the details and missing these key points.  If you look at PCI requirement 3, the tests are structured around these four basic principles.

On the algorithm side of the equation, the best algorithm currently in use is the Advanced Encryption Standard (AES).  AES was selected by the United States National Institute of Standards and Technology (NIST) in 2001 as the official encryption standard for the US government.  AES replaced the Data Encryption Standard (DES), which was no longer considered secure.  AES was selected through a competition in which 15 algorithms were evaluated.  While they were not selected as the winner of the NIST competition, Twofish, Serpent, RC6 and MARS were finalists and are also considered strong encryption algorithms.  Better yet, for all of you in the software development business, AES, Twofish, Serpent and MARS are freely available for use.  Other algorithms are available, but these are the most tested and reliable of the lot.

One form of DES, Triple DES (3DES) with 168-bit key strength, is still considered strong encryption.  However, how long that will remain the case is up for debate.  I have always recommended staying away from 3DES 168-bit unless you have no other choice, which can be the case with older devices and software.  If you are currently using 3DES, I would highly recommend you develop a plan to migrate away from it.

This brings up another key takeaway from this discussion.  No algorithm, regardless of which one you use, is perfect.  Over time, encryption algorithms are likely to be shown to have flaws or to be breakable by the latest computing power available.  Some flaws may be annoyances that you can work around, or you may have to accept some minimal risk of continued use.  However, some flaws may be fatal and require discontinuing use of the algorithm, as was the case with DES.  The lesson here is that you should always be prepared to change your encryption algorithm.  Not that you will likely be required to make such a change on a moment’s notice.  But as the experience with DES shows, what was considered strong in the past is no longer strong or should not be relied upon.  Changes in computing power and research could make any algorithm obsolete, requiring you to make a change.

Just because you use AES or another strong algorithm does not mean your encryption cannot be broken.  If there is any weak link in the use of encryption, it is the belief by many that the algorithm is the only thing that matters.  As a result, we end up with a strong algorithm using a weak key.  Weak keys, such as a key composed of the same character, a series of consecutive characters, an easily guessed phrase or a key of insufficient length, are the reasons most often cited for why encryption fails.  In order for encryption to be effective, encryption keys need to be strong as well.

Encryption keys should be a minimum of 32 characters in length.  However, in the encryption game, the longer and more random the characters in a key the better, which is why you see organizations using 64 to 256 character long random key strings.  When I use the term ‘character’ that can mean printable upper and lower case alphabetic characters as well as numeric and special characters.  But ‘character’ can also include hexadecimal values if your key entry interface allows hexadecimal values to be entered.  The important thing to remember is that the values you enter for your key should be as hard to guess or brute force as the maximum key size of the algorithm you are using allows.  For example, using a seven character password to generate a 256-bit AES key does not provide the full strength of that algorithm.
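The arithmetic behind that last point can be sketched as follows; the character-set sizes are assumptions for illustration.

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    # A key chosen uniformly from `alphabet_size` characters carries
    # length * log2(alphabet_size) bits of entropy.
    return length * math.log2(alphabet_size)

# A seven character printable ASCII password (95 characters) vs. a 256-bit key.
print(round(entropy_bits(7, 95)))   # about 46 bits: far short of 256
print(round(entropy_bits(64, 16)))  # 64 hex characters: the full 256 bits
```

Roughly 46 bits can be brute forced; 256 bits cannot, which is the whole point of using long random keys.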

This brings us to the topic of encryption key generation.  There are a number of Web sites that can generate pseudo-random character strings for use as encryption keys.  To be precise, any Web site claiming to generate a “random” string of characters is only pseudo-random.  This is because the character generator is a mathematical formula and by its very nature is not truly random.  My favorite Web site for this purpose is operated by Gibson Research Corporation (GRC).  It is my favorite because it runs over SSL and is set up so that it is not cached or indexed by search engines, to better guarantee security.  The GRC site generates 63 character long hexadecimal strings, alphanumeric strings and printable ASCII strings, rather than the numerical strings provided by other random and pseudo-random number generator sites.  Using such a site, you can generate keys or seed values for key generators.  You can combine multiple results from these Web sites to generate longer key values.

In addition, you can have multiple people individually go to the Web site, obtain a pseudo-random character string and then have each of them enter their character string into the system.  This is also known as split key knowledge as individuals only know their input to the final value of the key.  Under such an approach, the key generator system asks each key custodian to enter their value (called a ‘component’) separately and the system allows no key custodian to come into contact with any other custodian’s component value.  The key is then generated by combining the entered values in such a way that none of the individual inputs provides any information about the final key.  It is important to note that simply concatenating the input values to form the key does not provide this function, and therefore does not ensure split knowledge of the key value.
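One common way to combine components so that no single input reveals anything about the final key is to XOR them together; the sketch below assumes equal-length components and is illustrative of the combining step only, not a complete key ceremony.

```python
import secrets

def combine_components(*components: bytes) -> bytes:
    # XOR the components together. Unlike concatenation, each custodian's
    # component alone reveals nothing about the resulting key.
    key = bytes(len(components[0]))
    for component in components:
        key = bytes(a ^ b for a, b in zip(key, component))
    return key

c1 = secrets.token_bytes(32)  # custodian 1's component
c2 = secrets.token_bytes(32)  # custodian 2's component
key = combine_components(c1, c2)
print(len(key))  # 32: a combined 256-bit key
```

With concatenation, a custodian who knows their half has cut the search space for the rest of the key; with XOR, knowing one component tells them nothing about the other or the result.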

Just because you have encrypted your data does not mean your job is over.  Depending on how your encryption solution is implemented, you may be required to protect your encryption keys as well as periodically change those keys.  Encryption key protection can be as simple as storing the key components on separate pieces of paper in separate, sealed envelopes or as high tech as storing them on separate encrypted USB thumb drives.  Each of these would then be stored in separate safes.

You can also store encryption keys on a server not involved in storing encrypted data.  This should not be any ordinary server, as it needs to be securely configured with very limited access.  This approach is where those key-encrypting keys (KEKs) come into play.  The way this works is that each custodian generates a KEK and encrypts their component with the KEK.  Those encrypted components can then be placed in an encrypted folder or zip file for which computer operations holds the encryption key.  This is where you tend to see PGP used for encryption, as multiple decryption keys can be used so that, in an emergency, operations can decrypt the archive and then the key custodians or their backups can decrypt their key components.

Finally, key changes are where a lot of organizations run into issues.  This is because key changes can require that the information be decrypted using the old key and then encrypted with the new key.  That decrypt/encrypt process can take days, weeks or even years depending on the volume of data involved.  And depending on the time involved and how the decrypt/encrypt process is implemented, cardholder data can potentially remain decrypted or exposed to a compromised key for a long period of time.

The bottom line is that organizations can find out that key changes are not really feasible or introduce more risk than they are willing to accept.  As a result, protection of the encryption keys takes on even more importance because key changes are not feasible.  This is another reason why sales of key management appliances are on the rise.

That is encryption in a nutshell, a sort of “CliffsNotes” for the non-geeky out there.  In future posts I intend to go into PKI and other nuances to encryption and how to address the various PCI requirements in requirements 3 and 4.  For now, I wanted to get a basic educational foundation out there for people to build on and to remove that glassy eyed look that can occur when the topic of encryption comes up.
