Posts Tagged ‘Encryption’

15 May 2015

Whole Disk Encryption Explained

A lot of security professionals and lay people seem to believe that encryption is encryption, and that is simply not the case. Worse yet, vendors play into this misconception and obfuscate the issue with lots of words and phrases about seemingly complicated encryption key management procedures and the like that are actually meaningless when it comes to protecting data on running systems. So it is time to clear up this misunderstanding.

First things first, we need to discuss how whole disk encryption works. As its name implies, whole disk encryption encrypts an entire disk drive. When a file on the encrypted drive is accessed, the encryption solution decrypts just that file using the decryption key provided at system startup, and the rest of the drive remains encrypted. That way, if a system failure occurs or the system is shut down deliberately, the drive is always protected.

That is the key concept of whole disk encryption. The drive is technically only encrypted when the system is shut down. If the system is running, the encryption is effectively not in place because the operating system holds the decryption key and can access the disk at will. This is why whole disk encryption is great for devices like notebooks that are shut down at some point.
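To make the running-versus-shutdown distinction concrete, here is a toy Python sketch (using the third-party cryptography package; this is a simplified model for illustration, not how any real FDE driver is built). While the "system" is running, the key sits in memory and every read is transparently decrypted; once the key is discarded at shutdown, only ciphertext remains.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class EncryptedDisk:
    """Toy model of a whole disk encrypted volume."""

    def __init__(self, key: bytes):
        self._key = key          # provided at boot (keyed in, USB device, TPM...)
        self._sectors = {}       # sector number -> ciphertext

    def _cipher(self, sector: int) -> Cipher:
        # Real FDE (e.g., AES-XTS) ties the keystream to the sector number.
        return Cipher(algorithms.AES(self._key),
                      modes.CTR(sector.to_bytes(16, "big")))

    def write(self, sector: int, plaintext: bytes) -> None:
        self._sectors[sector] = self._cipher(sector).encryptor().update(plaintext)

    def read(self, sector: int) -> bytes:
        if self._key is None:
            raise PermissionError("system is shut down; drive stays encrypted")
        return self._cipher(sector).decryptor().update(self._sectors[sector])

    def shutdown(self) -> None:
        self._key = None         # key leaves memory; only now is data protected

disk = EncryptedDisk(os.urandom(32))
disk.write(0, b"4111111111111111")  # lands on "disk" encrypted
print(disk.read(0))                 # a running system reads it in the clear
disk.shutdown()                     # disk.read(0) would now fail
```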

This is also why whole disk encryption is meaningless when applied to a server. When is a server shut down? Never. When using whole disk encryption on a running server, the only controls protecting the data are access controls, not encryption.

So using this definition, let us examine requirement 3.4.1 in the PCI DSS. That requirement states:

“If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.”

The first statement that throws people is “logical access must be managed separately and independently of native operating system authentication and access control mechanisms”. In short, whole disk encryption cannot rely only on the operating system’s authentication process.

The best example of this is Microsoft BitLocker. BitLocker has a number of modes under which it can operate. It can integrate with Active Directory (AD), it can rely on a trusted platform module (TPM) chip in the computer or it can operate stand-alone.

In stand-alone mode, BitLocker requires the user to provide the BitLocker key, either by manually keying it in or by supplying it on a USB device that stores the key, in order to boot the system. If that key is not provided, the system will not even present a logon prompt. This form of BitLocker meets the requirements set forth in requirement 3.4.1.

But then the requirement goes on to say, “Decryption keys must not be associated with user accounts”.

In stand-alone mode, the BitLocker key is not associated with the user’s credentials so it also meets this part of 3.4.1.

However, in the AD or TPM modes, BitLocker operates behind the scenes; the end user never knows the disk is encrypted and simply logs onto the system with their Windows credentials as always. These modes do not meet the independence requirement in 3.4.1 because all that is protecting the data is the user’s Windows credentials. And in the case of AD mode, BitLocker also does not meet the user credential disassociation requirement because the BitLocker decryption key is tied to the user’s Windows credentials.

But if people fully read the Guidance column for requirement 3.4.1, they would find the following at the end, where the Council states:

“Full disk encryption helps to protect data in the event of physical loss of a disk and therefore may be appropriate for portable devices that store cardholder data.”

BINGO!

Whole disk encryption helps protect data in the event of physical loss of a disk. Period.

So with a server that never shuts down, if a drive in a SAN or NAS fails, the failed disk with whole disk encryption will be encrypted when it is pulled from the array. But if things are running just fine, whole disk encryption does nothing to protect the data.

So do not be baffled by the statements from those vendors trying to convince you that whole disk encryption on your server is going to protect your data while the server is running. That is not true.

Now you know.

18 Jan 2014

Why The Paradigm Must Change

The Target and Neiman Marcus breaches, and the other retailer breaches likely to come, should be a learning moment for all of us to demand that the card brands change their business paradigm to one that is more secure.

Bolt-Ons Do Not Cut It

For all intents and purposes, how a credit card works has not changed since the late 1950s when they were introduced.  Yes, there have been advancements such as EMV, 3D Secure and end-to-end encryption (E2EE), but those are all things that just bolt onto the original concept.  The trouble is that, given today’s technologies and their capabilities, the card and the bolt-ons are just no longer providing the security they once did.

With the Target breach there has been a call to get the US to finally convert to EMV.  The trouble is that EMV would still have leaked enough information for fraud to be committed, so it is not an answer.

Trade association spokespeople trotted out 3D Secure and other methods of securing online transactions.  The trouble is that most merchants eschew 3D Secure and its kind.  In addition, there are known vulnerabilities with these supposedly secure payment methods so they also have potential issues that could be exploited.

Then there is E2EE also known as point-to-point encryption (P2PE) from a PCI perspective.  These also can be exploited.  It may be more difficult, but when you are determined to gain access to sensitive information, that does not matter.

After the release of the PCI DSS in 2008, a lot of retailers implemented a variety of E2EE solutions.  Unfortunately, the endpoint at the retail location was the POS register and not the terminal.  This was not due to merchants’ negligence; this was due to how their POS applications operated.  This allowed for attacks such as that used in the Target breach to succeed.  All the attacker has to do is insert their malware into the POS process so that the malware can “see” the cardholder data before it gets encrypted.

Even solutions that do E2EE/P2PE at the terminal can be defeated by taking the same approach and inserting the malware into the terminal process before the terminal can encrypt the data.  Worse yet, if the terminal is breached, the attacker can capture PINs if they also have malware that captures the keystrokes on the terminal before the PIN is encrypted.  There are a number of methods to minimize these risks at the terminal, but if the terminal supply chain is compromised, as it was over a year ago in the Barnes & Noble breach, there is little a merchant can do to stop such attacks.

The bottom line is that all of these solutions are bolt-ons to the existing card paradigm and all still have risks that a breach could occur.

Using Complexity Against Us

Brian Krebs and others have wondered aloud how a sophisticated organization such as Target, with information security and forensic resources second possibly only to the government, could have been compromised, particularly after the 2007 compromise by Albert Gonzalez, after which Target totally revamped and increased its security posture to minimize the likelihood of another event.

The first clue to me came when I read the iSIGHT PARTNERS report on the Target breach.  The theme that comes through loud and clear is that the attackers are using the complexity of Target’s technology infrastructure against Target.  I mean how could FTP activity and huge data transfers (internal and external) go so unnoticed?

Actually, that was likely fairly easy.  The attackers used existing network traffic to mask their own.  They sought out servers that already carried large volumes of traffic and placed their data collection server on one of them; better yet, one that was already running as an FTP server.  As a result, even with diligent monitoring, the increase in traffic likely did not raise any alarms.

People assume that such breaches are like a “snatch and grab” in the real world.  The attackers break into an organization’s network, quickly take what they can off of the computers they encounter and leave.  That was the modus operandi (MO) in the past, but not today.  Sophisticated and organized attackers, such as those that breached Target, do what they can to remain unseen while they learn more about their victim.  They take their time mapping out the network and determining what devices they want to compromise to further their efforts to gain access to the sensitive information they seek.  Because of this, it is highly likely that the Target attackers encountered the Target customer database during their investigation of the network and took it first so that they would have at least something for all of their efforts.

The most insidious thing I think the attackers did was use Target’s software distribution system to disseminate their malware.  Given the number of POS systems compromised (around 51,000), I find it hard to believe that the attackers manually installed their malware on those POS systems.  That would have placed their operation at extreme risk, likely resulting in its discovery.  By using Target’s software distribution system, the attackers got an added benefit of legitimacy because Target themselves did the installation.  As such, the malware would appear valid because Target’s software management system initiated the change.

Now What?

All of this brings up an interesting conundrum.  If attackers are stepping up their game and using such techniques, how do we detect them?  It is a very good question with no good answers.  The iSIGHT report offers methods to stop and eradicate this particular attack.  However, the next attack and the attack after that will all likely use different malware and different techniques to get the data out of your network.

We are in a war of escalation with no end in sight.  Merchants step up their efforts to stop such attacks, and the attackers adapt and adopt new techniques to breach organizations and gain access to their sensitive information.  What we need is a solution that stops the escalation and gets us out of this vicious circle.

That is why I am pushing the 15 – 16 character single-use transaction code as that solution.  My reasons are as follows.

  • The algorithms already exist, as a number of the card brands experimented with them a decade or more ago.
  • It will work with existing POS technology and applications.
  • It will work with existing eCommerce sites.
  • It can be implemented into eWallet applications.
  • It can be processed, stored and transmitted without encryption.
  • It can be generated by PCs, smartphones, tablets, credit card sized devices and any other devices that have computational capabilities.
  • It can be displayed on devices in a character format for manual entry or as 1D or 2D bar codes for scanning.
  • It can be transmitted via swipe, EMV, near field communication (NFC), Wi-Fi or even Bluetooth.
  • And best of all, it is secure by the very nature that it can only be used once.

Some changes would be required at the transaction processors and acquiring banks to handle such a solution.  But given that some of the card brands already have experience with this approach, a body of knowledge already exists as to how it needs to be implemented.
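To make the idea tangible, here is a hypothetical sketch of how such a code could be generated (the card brands’ experimental algorithms were never published, so this HMAC-and-counter construction, in the spirit of HOTP, is only a stand-in).  The issuer holds the same secret and counter, so each code verifies exactly once and is worthless once spent.

```python
import hashlib
import hmac

def single_use_code(device_secret: bytes, counter: int, length: int = 16) -> str:
    """Derive a one-time transaction code from a shared secret and a counter."""
    digest = hmac.new(device_secret, counter.to_bytes(8, "big"),
                      hashlib.sha256).digest()
    # Map the MAC onto a numeric code of the requested length.
    return str(int.from_bytes(digest, "big") % 10**length).zfill(length)

# The issuer provisions the secret; the counter advances with every purchase.
secret = b"per-card secret provisioned by the issuer"
print(single_use_code(secret, counter=1))  # a different 16-character code
print(single_use_code(secret, counter=2))  # for every single transaction
```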

Let the discussion begin on how we move ahead with a better, more secure solution.

07 Mar 2013

Encrypted Cardholder Data – Out Of Scope?

I had an interesting exchange on Google+ the other day regarding whether or not encrypted data is in scope for PCI compliance.  In the end it was suggested that I write a blog entry on the topic, as the participants felt that how to treat encryption has not been articulated very clearly by the PCI SSC.  I would argue that the rules regarding encryption and scope have been articulated very clearly in the PCI SSC’s FAQ #1086.  However, based on the conversation we had, it is obvious that not everyone reads them that way.  So here are the rules as practiced by most QSAs.

The key to interpreting whether or not encrypted cardholder data is in scope is in the FAQ.  Encrypted cardholder data (stored or transmitted) is out of scope only if it meets the following definition.

“It is possible that encrypted data may potentially be out of scope for a particular entity if, and only if, it is validated (for example, by a QSA or ISA) that the entity in possession of the encrypted data does not have access to the cleartext cardholder data or the encryption process, nor do they have the ability to decrypt the encrypted data.”

The important phrase in the aforementioned definition is “if, and only if.”  The only way encrypted cardholder data (CHD) is out of scope is if the entity being assessed for PCI compliance cannot decrypt the encrypted CHD.  This is a very important concept that gets constantly misinterpreted by QSAs and their clients.  However, it is up to the QSA to confirm that the organization being assessed cannot decrypt the encrypted CHD and to document the procedures conducted to prove that fact.

With that as background, let us look at storing and transmitting encrypted data and how they can be out of scope and what that means.  As you will see, out of scope can mean different things depending on the implementation of encryption.

Stored Cardholder Data

Under requirement 3.4, one of the methods recommended for the storing of the primary account number (PAN) is:

“Strong cryptography with associated key-management processes and procedures“

One of the ways organizations accomplish this is through a hardware security module (HSM) that manages the cryptographic key process.  The HSM generates the keys, manages the keys and provides an application programming interface (API) for code to access the keys.  Since the HSM is a “black box,” a lot of organizations point to that fact as the reason their encryption is out of scope.

There is an additional condition to the encryption out of scope definition that usually gets forgotten.  This is what allows for the scope of the cardholder data environment (CDE) to be reduced.

“Systems performing encryption and/or decryption of cardholder data, and any systems performing key management functions, are always in scope for PCI DSS. Scope reduction for encrypted data may only be considered when that data is isolated from these processes.”

As such, since the organization using the HSM technically has access to the cryptographic keys through the HSM’s APIs, the encryption is in-scope.

Where stored encrypted CHD is out of scope is when a third party controls the encryption keys.  This most often occurs with tokenization.  Under a tokenization scheme, the CHD is sent to a third party who then securely stores the CHD and returns a token that links the CHD at the third party to the token stored by the merchant.  If the merchant needs to make any subsequent charges to the account, the merchant sends the stored token to the third party and the third party substitutes the stored CHD for the token and the transaction is completed.  But since the merchant does not have access to the token creation process, the token is out of scope because it is no longer considered CHD.

Transmitted Cardholder Data

Secure transmission of CHD can be accomplished through a number of methods.  The most common of these is secure sockets layer (SSL) or transport layer security (TLS).  In either case, if the organization being assessed has one of the endpoints in the SSL/TLS encryption, then the SSL/TLS process is in scope.  This is most common in eCommerce, where the merchant’s Web site has an SSL/TLS certificate for securing the transmission of CHD when customers pay for their purchases.

However, we are also seeing SSL/TLS used more and more as the encryption method of choice for point-to-point encryption (P2PE) solutions.  Again, if either of the endpoints in the P2PE environment is under the control of the organization being assessed, then that endpoint or endpoints are in scope for the PCI assessment.

One way we do see to get everything but the merchant’s endpoint out of scope is a terminal that encrypts from the terminal all the way to the processor, with the processor controlling the encryption keys for the P2PE.  This is most often used in the gas station environment, where the pump card reader does P2PE to the processor using derived unique key per transaction (DUKPT) or similar techniques to create an encrypted connection.

That said, what happens to the users and devices in between the two encryption endpoints on an encrypted communication link?  They are out of scope as long as they do not have the ability to decrypt the data stream.  This is another misunderstood interpretation of the FAQ.  While some personnel inside an organization have access to encryption keys, any user or device that does not have access to the encryption keys or the communication endpoints is out of scope.  This is how an SSL/TLS/IPsec connection can be used to isolate the transmission of CHD from the rest of the network and satisfy network segmentation.

Another issue that comes up with managed service providers (MSP) is when the MSP has access to firewalls or routers that are encryption endpoints.  Even if the MSP does not have access to the encryption keys, they do have access to the encryption endpoint(s) and cleartext CHD, therefore their personnel and relevant device management practices are in-scope for PCI compliance.

The bottom line in all of this is: if your organization has the ability to decrypt either the stored CHD or transmissions of CHD, then you are in scope for PCI compliance.

And as a reminder to everyone, just because something is out of scope does not mean that it does not need to be assessed.  A certain amount of testing must always be conducted to determine that the item is, in fact, out of scope.

Hopefully we can now all operate from the same set of rules.

17 Mar 2012

When Will The PCI SSC And Card Brands Stop The Mobile Payment Insanity?

This week PayPal introduced Here, their mobile payment processing application for Apple iOS and Android devices.  The good news is that PayPal Here at least appears to encrypt cardholder data, but that appears to be the extent of the good news.

If you pay attention to the latest Suzie’s Lemonade Verizon Wireless advertisement, you will see Intuit’s GoPayment solution processing a payment on a tablet computer.  In another bit of good news, the Intuit GoPayment solution also encrypts cardholder data.

But what is most disconcerting is that this is just another in a string of mobile payment processing solutions introduced with no way of knowing whether the security claims are accurate or can even be trusted.  This is because the PCI Security Standards Council halted certification of these mobile payment processing solutions under the PIN Transaction Security (PTS) and Payment Application Data Security Standard (PA-DSS) programs last year.  Do not get me wrong; the Council was in a tough spot and did not have a plan for how to deal with these solutions, so they called a halt while they went back and reassessed things.

However, the marketplace is not waiting.  So while the marketplace continues to deliver solutions, the merchant is left to their own devices to know whether any of these mobile payment processing solutions can be trusted.  And what I am fearful of is that small merchants, who are the marketing target of these solutions, will be put out of business should a device somehow be compromised or lost.

So what are the questions surrounding PayPal Here?

First, what encryption algorithm is used?  I am guessing that they are using DUKPT, as that is common with these sorts of devices, but it is something that should be explicitly identified in their FAQ.  What PayPal currently says is, “… the PayPal Here card reader is encrypted and is backed by over a dozen years of experience of providing buyers and sellers the highest levels of security.”  If they are using DUKPT, can the key be changed?  Most of the mobile solutions I have seen do not allow for a key change under DUKPT, which could be a problem.

Then there is the fact that you can manually enter a cardholder’s information through the iOS/Android application.  Given that these systems are notorious for having keyboard loggers of industrial strength, any manual input will be recorded on the device.  I am guessing that would include cardholder name, PAN and expiration date.  However, if PayPal Here collects the CVV/CVC/CID value during manual entry, that would be an extremely bad thing, as the device will retain that value until it is overwritten months later.  Again, there is no documentation to determine what is allowed to be manually entered, so we cannot assess how much risk this might introduce.

But the scariest feature of PayPal Here is unrelated to credit cards; it is the remote capture of checks.  PayPal Here allows the merchant to scan a check and then submit it to their financial institution.  Again, given the data retention of these devices, I can only guess that the check images processed through the PayPal Here application will remain on the device until the space is needed, which could be months.  The problem with all of this is that if people think credit card security was questionable, check processing security is nonexistent.  As a result, anyone with a modicum of knowledge could use the check information on one of these devices to drain every bank account stored on it.

Let us hope that the PCI Security Standards Council and the card brands quickly get back to certifying these applications so that merchants are not using insecure payment solutions.

28 Jan 2012

Encryption Key Management Primer – Requirement 3.6

Requirement 3.6 states:

“Fully document and implement all key-management processes and procedures for cryptographic keys used for encryption of cardholder data, including the following:
Note: Numerous industry standards for key management are available from various resources including NIST, which can be found at http://csrc.nist.gov.”

Again, users of PGP or a hardware security module (HSM) should have no problem complying with the sub-requirements of 3.6.  Those that do encryption key management manually will have to implement detailed procedures to accomplish these requirements, and they are who this post focuses on.  However, keep in mind that the order in which I address the sub-requirements is not necessarily the PCI order.

Let’s first talk about key management in general.  The PCI DSS references the NIST key management procedures.  NIST has issued special publication (SP) 800-57, which is probably the best “bible” for encryption key management.  It goes into detail not only on encryption itself (volume 1), but also key management (volume 2) and special key management situations such as public key infrastructure (PKI), IPsec and others (volume 3).  For requirement 3.6, only volume 2 is likely relevant unless you are using IPsec, PKI or other special cases.

For those of you that are a service provider and you share cryptographic keys with your customers, you will need to document your policies, standards and procedures for securely sharing those cryptographic keys with your customers.

Under requirement 3.6.c are the secure key management practices that the PCI DSS requires.  These are where people seem to get off track and where NIST SP800-57 can provide you the greatest assistance.

Requirement 3.6.1 is the easiest, and I have spoken on this topic in my post on encryption basics.  You need to generate strong encryption keys.  You should generate your encryption keys using a minimum of two people (your key custodians) and a pseudo-random key generator such as the one available from Gibson Research Corporation (GRC).  Each key custodian enters their part of the key into the system, and the system then combines those parts into the key used to encrypt the data.
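As a minimal sketch of the generation step, Python’s standard secrets module produces cryptographically strong pseudo-random values suitable for key material (the GRC site mentioned above serves the same purpose for manual entry):

```python
import secrets

# 256 bits of cryptographically strong pseudo-randomness for an AES-256 key.
key = secrets.token_bytes(32)

# Or a 64-character hexadecimal string that a key custodian can key in manually.
key_hex = secrets.token_hex(32)
print(key_hex)
```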

Requirement 3.6.2 discusses how encryption keys are distributed and 3.6.3 is about how key parts are stored.  This is related to how your key custodians manage their part of the key.  Typically a key custodian will be assigned a safe where their key part is stored on paper or other unsecured media.  Only the custodian and their back up know the combination to the safe.  I have also seen PGP Zip or encrypted WinZip files used as the secure repository for key parts.  The idea is that key parts cannot be distributed in clear text.  And when the key parts are stored, they need to be secured.

Requirement 3.6.4 always seems to be a sticking point because people get caught up in the key expiration concept.  The primary thing to remember is that whether or not a key expires is typically a function of the encryption method used, such as public key infrastructure (PKI).  In most cases, encryption keys do not expire, so this requirement is not applicable.  In those cases where the key does expire, you need to have documented procedures explaining how the key expiration process is handled.

This does bring up a good point about any encryption process and addresses requirements 3.6.5.a and 3.6.5.b.  There will be times when encryption keys need to be changed.  This most often happens when a key custodian changes roles or leaves the organization.  In those cases, you need to have a process in place to change the encryption keys.

One question implied by requirements 3.6.5.a and 3.6.5.b is: how do you know that an encryption key has been weakened or compromised?  Typically, your critical file monitoring activities will be the trigger indicating that encryption keys have been compromised, or that someone has at least attempted to compromise them.  You should be monitoring the encryption process as a critical file, as well as the encrypted encryption keys if you are storing them on a server or other computer system.  Should your file monitoring indicate that these files are being tampered with, you need to change your keys.
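A minimal sketch of that kind of critical file monitoring (the file paths below are hypothetical): record a baseline of SHA-256 digests once, then flag any later mismatch as possible tampering.

```python
import hashlib
from pathlib import Path

CRITICAL_FILES = ["/secure/keys/kek.enc", "/secure/bin/encryptor"]  # hypothetical

def digest(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def baseline(paths):
    """Record known-good digests once; store the result somewhere tamper-evident."""
    return {p: digest(p) for p in paths}

def check(paths, known_good):
    """Run on a schedule; any mismatch should trigger a key change review."""
    for p in paths:
        if digest(p) != known_good[p]:
            print(f"ALERT: {p} has changed - possible key tampering")

known_good = baseline(CRITICAL_FILES)
check(CRITICAL_FILES, known_good)
```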

Requirement 3.6.6 is all about the manual management of encryption keys by key custodians and the need for no one individual to know the entire encryption key.  Obviously this requires at least two people to be involved, but you need backups for those people, so it really is a minimum of four.  I have seen instances of three and four primary key custodians; however, going beyond that seems a bit much.

Requirement 3.6.7 is all about change control, making sure that key changes require authorization and that unauthorized changes cannot go unnoticed.  Management approval of encryption key changes is a straightforward concept and should already be implemented as part of change control.  However, people seem to get balled up in the unauthorized key change concept.  Again, your critical file monitoring processes should catch these attempts.

Requirement 3.6.8 requires all key custodians to formally acknowledge their responsibilities in managing and protecting any encryption keys in their custody by signing an agreement to that effect.  This does not have to be some long and lengthy document, just a single page indicating that the individual has access to the encryption key parts and agrees to keep those parts secure and protected.

And there we have it.  If you have read the entire series, you should now have a very basic understanding of encryption and encryption key management.  You are not an expert, but you should now have a basic understanding of the concepts and controls surrounding encryption.

15 Jan 2012

Encryption Key Management Primer – Requirement 3.5

Before getting into key management, it is important to acknowledge that these requirements are not relevant to every encryption solution.  In the case of PGP, key management requires the user to create a private/public key pair and then authenticate using their passphrase.  It is not that the principles of key management do not apply to PGP; it is just that such key management practices are not usually associated with PGP.

Hardware security module (HSM) solutions typically provide facilities for strong key creation as well as a secure storage mechanism for keys.  A prime example of this concept is the Trusted Platform Module (TPM) chip that is now standard in notebook computers.  I will try to point out the differences between these various approaches as I move through the discussion.

So with that as a background, let us start into our discussion of requirement 3.5.  The overall goal of requirement 3.5 is stated as:

“Protect any keys used to secure cardholder data against disclosure and misuse:  Note: This requirement also applies to key-encrypting keys used to protect data-encrypting keys—such key-encrypting keys must be at least as strong as the data-encrypting key.”

Typically, the first question I get is, “What in the world is a key-encrypting key?”  I need to push this question off, as there are other questions that need to be addressed before a meaningful discussion of the key-encrypting key (KEK) can occur.

With the key-encrypting key question put on the back burner, the second question most people ask is how one accomplishes protection of encryption keys.  In the case of PGP, it is the responsibility of the key owner not to disclose their passphrase.  With an HSM, it is the role of the key repository to protect and manage keys.  If your encryption solution does not provide a secure repository for your encryption keys, then you need to provide such a capability.

When you have PGP or an HSM, protecting encryption keys is built in.  A manual option is to store your encryption keys on paper or removable storage media and keep the paper or removable media in a safe.  The problem with the manual option is that encryption keys are typically needed to boot the secure server or start an application that needs access to encrypted data.  The security surrounding the keys becomes problematic at best, as operations personnel need regular access to the keys in order to keep systems and applications running.  As a result, most organizations prefer to have their keys stored electronically for ease of access by servers and applications.

This now brings us to the concept of a KEK.  If you desire electronic storage of encryption keys, then you need to ensure their security.  If you store encryption keys in a folder on another server, you need to not only restrict access; you also need to encrypt them so that should someone copy the folder, they cannot read the keys.  To accomplish that, you encrypt the encryption keys wherever they are stored and then secure the key encrypting key (aka KEK).
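Here is a minimal sketch of that arrangement, using the third-party cryptography package’s Fernet construction as a stand-in for whatever wrapping mechanism your solution provides.  The data-encrypting key (DEK) is stored only as ciphertext under the KEK.

```python
import secrets
from cryptography.fernet import Fernet

# The data-encrypting key (DEK) protects cardholder data.
dek = secrets.token_bytes(32)

# The key-encrypting key (KEK) protects the DEK; per the PCI DSS note,
# it must be at least as strong as the DEK.
kek = Fernet.generate_key()
wrapped_dek = Fernet(kek).encrypt(dek)

# Only wrapped_dek is stored on the server; copying the folder yields
# ciphertext. The KEK lives elsewhere (custodians, safe, HSM).
assert Fernet(kek).decrypt(wrapped_dek) == dek
```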

This is where requirements 3.5.1 and 3.5.2 come into play.  These requirements are focused on ensuring that if an organization has taken the approach of managing their own encryption keys and electronically storing them, the keys are properly secured and managed by as few persons as feasible.

That does not mean that those of you using PGP or HSM are not responsible for also complying with requirements 3.5.1 and 3.5.2.  It is just that those of you using PGP or HSM likely have an easier time complying as both technologies can provide their own solutions for meeting these requirements.

In a future entry, I will discuss requirement 3.6.

08
Jan
12

Hashing Basics

I am catching some heat over the Encryption Basics post from some of my more ardent detractors, who have called me on the carpet over making security and PCI “too simple” or “dumbed down.”  As I said in that post, it was in no way meant to be a full dissertation on the topic.  This post, as with the previous post, is not a complete dissertation either.  I just want to clarify some things so that people have a reasonable foundation on the topic of hashing.

Hashing comes up in requirement 3.4 which says:

“Render PAN unreadable anywhere it is stored (including on portable digital media, backup media, and in logs) by using any of the following approaches: one-way hashes based on strong cryptography (hash must be of the entire PAN), truncation (hashing cannot be used to replace the truncated segment of PAN), index tokens and pads (pads must be securely stored), strong cryptography with associated key-management processes and procedures.”

Before we get into hashing algorithms, I need to make sure everyone is clear that hashing is a one-way operation.  Unlike encryption, you cannot reverse data that has been hashed.  So if you are considering using hashing, you need to make sure that you will never need the original information such as a PAN.

Probably the most common one-way hash algorithm is SHA-1.  Unfortunately, SHA-1 has been shown to have issues that make it less secure than its successor, SHA-2.  As such, those of you using SHA-1 should be migrating to SHA-2 or another hashing algorithm.  Those of you considering a solution that relies on SHA-1 should be asking the vendor whether SHA-2 or another secure algorithm can be used.

The other most common one-way hashing algorithm is RSA’s Message Digest 5 (MD5) algorithm.  Like SHA-1, MD5 has also been deemed no longer acceptable due to a number of issues that make it no longer secure.  As a result, users of MD5 are recommended to migrate to RSA’s MD6, SHA-2 or another hashing algorithm.  And just so you are not confused, MD5 is still commonly used as a way to generate a checksum for confirming the validity of a download which is an entirely different purpose than what I am discussing here.

And to show you that things do not stand still in the cryptographic hashing world, the United States National Institute of Standards and Technology (NIST) is in the process of selecting the winner of their SHA-3 competition.  That winner will be announced sometime in late 2012.

While you can use a hash function “as is,” security professionals typically recommend the addition of a “salt” to complicate the resulting hashed value.  A salt is two or more characters, usually non-displayable binary values, appended to the original value, in our example the PAN.  The salt adds a level of complexity to the resulting hashed value, making rainbow tables more difficult, if not impossible, to use to break the hashed values.

One useful thing you can accomplish with a hashed value is that you can still use it for research, as long as you are not using a rotating salt.  That means the same PAN should produce the same hashed value every time it is hashed.  This can be very important for fraud investigators and anyone else that needs the ability to research transactions conducted with the same PAN.  If these people do need the actual PAN, they will have to go to a different system that stores the PAN encrypted or go to their card processor for the PAN.
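A minimal sketch of that approach, using HMAC-SHA256 with a fixed secret salt (the salt value below is made up; protect the real one like an encryption key).  The result is one-way, salted and deterministic, so the same PAN always yields the same value for matching:

```python
import hashlib
import hmac

# A fixed secret salt; protect it like an encryption key.
SALT = bytes.fromhex("9f2c4a8e1b7d3f60aa5544cc00112233")

def hash_pan(pan: str) -> str:
    """One-way, salted, deterministic: usable for matching, not reversible."""
    return hmac.new(SALT, pan.encode(), hashlib.sha256).hexdigest()

# Same PAN in, same value out - a fraud analyst can link transactions.
print(hash_pan("4111111111111111"))
print(hash_pan("4111111111111111") == hash_pan("4111111111111111"))  # True
```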

The PCI DSS issues a warning about using hashing and truncation together.  There is a note in requirement 3.4 that states:

“It is a relatively trivial effort for a malicious individual to reconstruct original PAN data if they have access to both the truncated and hashed version of a PAN.  Where hashed and truncated versions of the same PAN are present in an entity’s environment, additional controls should be in place to ensure that the hashed and truncated versions cannot be correlated to reconstruct the original PAN.”

This note is most relevant to situations where the truncation retains the first six digits AND the last four digits.  However, it brings up a good point regarding all methods of information obscuring, including encryption.  Never, ever store the obscured value along with the truncated value.  Always separate the two values, and also implement security on the obscured value so that people cannot readily get the obscured value and the truncated value together without oversight and management approval.
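To see why the Council calls this “relatively trivial,” consider an unsalted hash held alongside a first-six/last-four truncation.  Only the six middle digits are unknown, so one million candidates cover the entire space, as this sketch with a made-up test PAN shows:

```python
import hashlib

# What an attacker holds: a truncated PAN and an unsalted hash of the full PAN.
first6, last4 = "411111", "1111"
target = hashlib.sha256(b"4111111111111111").hexdigest()

# Only the six middle digits are unknown: 10**6 candidates, seconds of work.
for middle in range(10**6):
    candidate = f"{first6}{middle:06d}{last4}"
    if hashlib.sha256(candidate.encode()).hexdigest() == target:
        print("Recovered PAN:", candidate)
        break
```

A secret salt kept separate from the truncated values, as described above, is what defeats this correlation.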

Hopefully now we all have a base level knowledge of hashing.

01 Jan 2012

Encryption Basics

NOTE: This is a revised version of my original post to reflect readers’ concerns regarding statements made that did not reflect best practices surrounding encryption key management.  A big thank you to Andrew Jamieson, who reviewed and commented on this revised posting.

During the last couple of years, I have run into more and more questions regarding encryption and encryption key management than I thought existed.  As a result, I have come to the realization that, for most people, encryption is some mystical science.  The stories of the Enigma machine and Bletchley Park have only seemed to add to that mysticism.  Over the years, I have collected my thoughts based on all of the questions and developed this distilled and very simplified version of guidance for those of you struggling with encryption.

For the security and encryption purists out there, I do not represent this post in any way, shape, or form as the “be all, end all” on encryption.  Volumes upon volumes of books and Web sites have been dedicated to encryption, which is probably why it gets the bad reputation it does, as the vast majority of these discussions are about as esoteric as they can be.

In addition, this post is written in regard to the most common method of encrypting data stored in a database or file: the use of an encryption algorithm against a column of data or an entire file.  It does not cover public key infrastructure (PKI) or other techniques that could be used.  So please do not flame me for missing your favorite algorithm, other forms of encryption or some other piece of encryption minutiae.

There are all sorts of nuances to encryption methods and I do not want to cloud the basic issues so that people can get beyond the mysticism.  This post is for educating people so that they have a modicum of knowledge to identify hyperbole from fact.

The first thing I want to clarify to people is that encryption and hashing are two entirely different methods.  While both methods obscure information, the key thing to remember is that encryption is reversible and hashing is not reversible.  Even security professionals get balled up interchanging hashing and encryption, so I wanted to make sure everyone understands the difference.

The most common questions I get typically revolve around how encryption works.  Non-mathematicians should not need to know how an encryption algorithm works; that is for the experts who develop them and prove that they work.  In my opinion, unless you are a mathematician studying cryptography, you should trust the research conducted by the experts regarding encryption algorithms.

That is not to say you should not know strong cryptography from weak cryptography.  I am just suggesting that the underlying mathematics that defines a strong algorithm can be beyond even some mathematicians, so why we expect non-mathematicians to understand encryption at this level is beyond me.  My point is that the algorithms work.  How they work is not, and should not be, a prerequisite for management or even security professionals to use encryption.

This leads me to the most important thing people need to know about encryption.  If you take away only one thing from this post, let it be that strong encryption comes down to four basic principles.

  • The algorithm used;
  • The key used;
  • How the key is managed; and
  • How the key is protected.

If you understand these four basic principles you will be miles ahead of everyone else that is getting twisted up in the details and missing these key points.  If you look at PCI requirement 3, the tests are structured around these four basic principles.

On the algorithm side of the equation, the best algorithm currently in use is the Advanced Encryption Standard (AES).  AES was selected by the United States National Institute of Standards and Technology (NIST) in 2001 as the official encryption standard for the US government.  AES replaced the Data Encryption Standard (DES) that was no longer considered secure.  AES was selected through a competition where 15 algorithms were evaluated.  While the following algorithms were not selected as the winner of the NIST competition, Twofish, Serpent, RC6 and MARS were finalists and are also considered strong encryption algorithms.  Better yet, for all of you in the software development business, AES, Twofish, Serpent and MARS are open source.  Other algorithms are available, but these are the most tested and reliable of the lot.

One form of DES, Triple DES (3DES) at 168-bit key strength, is still considered strong encryption.  However, how long that will remain the case is up for debate.  I have always recommended staying away from 3DES 168-bit unless you have no other choice, which can be the case with older devices and software.  If you are currently using 3DES, I would highly recommend you develop a plan to migrate away from it.

This brings up another key take away from this discussion.  Regardless of the algorithm used, they are not perfect.  Over time, encryption algorithms are likely to be shown to have flaws or be breakable by the latest computing power available.  Some flaws may be annoyances that you can work around or you may have to accept some minimal risk of their continued use.  However, some flaws may be fatal and require the discontinued use of the algorithm as was the case with DES.  The lesson here is that you should always be prepared to change your encryption algorithm.  Not that you will likely be required to make such a change on a moment’s notice.  But as the experience with DES shows, what was considered strong in the past, is no longer strong or should not be relied upon.  Changes in computing power and research could make any algorithm obsolete thus requiring you to make a change.

Just because you use AES or another strong algorithm does not mean your encryption cannot be broken.  If there is any weak link in the use of encryption, it is the belief by many that the algorithm is the only thing that matters.  As a result, we end up with a strong algorithm using a weak key.  Weak keys, such as a key composed of the same character, a series of consecutive characters, an easily guessed phrase or a key of insufficient length, are the reasons most often cited for why encryption fails.  In order for encryption to be effective, encryption keys need to be strong as well.  Encryption keys should be a minimum of 32 characters in length.  However, in the encryption game, the longer and more random the characters in a key the better, which is why you see organizations using 64 to 256 character long random key strings.  When I use the term ‘character,’ I mean printable upper- and lower-case alphabetic, numeric and special characters.  But ‘character’ can also include hexadecimal values if your key entry interface allows hexadecimal values to be entered.  The important thing to remember is that the values you enter for your key should be as hard to guess or brute force as the maximum key size of the algorithm allows.  For example, using a seven character password to generate a 256-bit AES key does not provide the full strength of that algorithm.
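A quick back-of-the-envelope check of that last point (a sketch; 95 is the count of printable ASCII characters): seven printable characters carry roughly 46 bits of entropy, nowhere near the 256 bits an AES-256 key can hold.

```python
import math

printable = 95                       # printable ASCII characters
for length in (7, 20, 39, 40):
    bits = length * math.log2(printable)
    print(f"{length:2d} chars ~ {bits:5.1f} bits of entropy")

# Roughly 39-40 random printable characters are needed to reach 256 bits,
# which is why long random key strings (or hex entry) are recommended.
```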

This brings us to the topic of encryption key generation.  There are a number of Web sites that can generate pseudo-random character strings for use as encryption keys.  To be precise, any Web site claiming to generate a “random” string of characters is only pseudo-random.  This is because the character generator algorithm is a mathematical formula and by its very nature is not truly random.  My favorite Web site for this purpose is operated by Gibson Research Corporation (GRC).  It is my favorite because it runs over SSL and is set up so that it is not cached or indexed by search engines, to better guarantee security.  The GRC site generates 63 character long hexadecimal strings, alphanumeric strings and printable ASCII strings, not the numerical strings provided by other random and pseudo-random number generator sites.  Using such a site, you can generate keys or seed values for key generators.  You can combine multiple results from these Web sites to generate longer key values.

In addition, you can have multiple people individually go to the Web site, obtain a pseudo-random character string and then have each of them enter their character string into the system.  This is known as split knowledge, as each individual knows only their own input to the final value of the key.  Under such an approach, the key generator system asks each key custodian to enter their value (called a ‘component’) separately, and the system allows no key custodian to come into contact with any other custodian’s component value.  The key is then generated by combining the entered values in such a way that none of the individual inputs provides any information about the final key.  It is important to note that simply concatenating the input values to form the key does not provide this property, and therefore does not ensure split knowledge of the key value.
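A minimal sketch of a combination that does preserve split knowledge, assuming equal-length components: XOR-ing the components means any single component reveals nothing about the final key, whereas concatenation hands each custodian a literal piece of it.

```python
import secrets

def combine(*components: bytes) -> bytes:
    """XOR key components; no single component reveals anything about the key."""
    key = bytes(len(components[0]))
    for c in components:
        key = bytes(k ^ b for k, b in zip(key, c))
    return key

# Each custodian enters their component separately (never shown to the others).
custodian_1 = secrets.token_bytes(32)
custodian_2 = secrets.token_bytes(32)

key = combine(custodian_1, custodian_2)
# Either component alone is statistically independent of the final key.
# Concatenation is not:
bad_key = custodian_1 + custodian_2   # custodian 1 literally knows half of this
```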

Just because you have encrypted your data does not mean your job is over.  Depending on how your encryption solution is implemented, you may be required to protect your encryption keys as well as periodically change those keys.  Encryption key protection can be as simple as storing the key components on separate pieces of paper in separate, sealed envelopes or as high tech as storing them on separate encrypted USB thumb drives.  Each of these would then be stored in separate safes.

You can also store encryption keys on a server not involved in storing encrypted data.  This should not be any ordinary server, as it needs to be securely configured with very limited access.  Using this approach is where those key encrypting keys (KEK) come into play.  The way this works is that each custodian generates a KEK and encrypts their component with the KEK.  Those encrypted components can then be placed in an encrypted folder or zip file for which computer operations hold the encryption key.  This is where you tend to see PGP used, as multiple decryption keys can be used so that in an emergency, operations can decrypt the archive and then the key custodians or their backups can decrypt their key components.

Finally, key changes are where a lot of organizations run into issues.  This is because key changes can require that the information be decrypted using the old key and then encrypted with the new key.  That decrypt/encrypt process can take days, weeks, even years, depending on the volume of data involved.  And depending on the time involved and how the decrypt/encrypt process is implemented, cardholder data can potentially remain exposed to a compromised key for a long period of time.

The bottom line is that organizations can find out that key changes are not really feasible or introduce more risk than they are willing to accept.  As a result, protection of the encryption keys takes on even more importance because key changes are not feasible.  This is another reason why sales of key management appliances are on the rise.

That is encryption in a nutshell, a sort of “CliffsNotes” for the non-geeky out there.  In future posts I intend to go into PKI and other nuances to encryption and how to address the various PCI requirements in requirements 3 and 4.  For now, I wanted to get a basic educational foundation out there for people to build on and to remove that glassy eyed look that can occur when the topic of encryption comes up.

05 Aug 2011

Tokenization

Like encryption, tokenization seems to have a mystic quality surrounding it.  And like end-to-end encryption, tokenization is seen by merchants as a way to avoid being bothered by that pesky PCI compliance process.  However, even with tokenization, merchants still have to comply with certain parts of the PCI DSS.

Before we discuss the risks and compliance requirements, let me explain tokenization.  In a nutshell, tokenization is the process of taking sensitive data as input and returning a “token” that represents that sensitive data as output.  The token typically has no relationship with the sensitive data it replaces and can be shorter or longer than the original sensitive data.  The token is typically made up of a variety of upper- and lower-case alphabetic, numeric and special characters depending on the algorithm used.  An important thing to remember is that tokenization does not imply encryption.  However, encryption may be used for tokenization as can one-way hashing.  When encryption is used as a way to tokenize sensitive information, the system receiving the token never has the capability to decrypt the token.

The way tokenization works for credit card processing is that the merchant’s point of sale (POS) system takes the cardholder data, either from a card swipe or manual input, and passes it to the credit card networks for transaction authorization.  Once the transaction is authorized, a 16-character token is generated by the processor and passed back to the merchant’s POS, where the token can be stored in place of the PAN.  This means that for credit card processing applications, the token is 16 characters long.  In addition, the token can contain the last four digits of the PAN for transaction reference.  The original cardholder data (cardholder name, PAN and expiration date) is stored on the processor’s system for reference.
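A minimal sketch of that flow (a hypothetical processor-side vault; a real processor’s token format and storage are far more involved): the merchant keeps only the token, while the PAN-to-token mapping lives with the processor.

```python
import secrets
import string

class ProcessorVault:
    """Toy stand-in for a processor's tokenization service."""

    def __init__(self):
        self._vault = {}   # token -> original cardholder data

    def tokenize(self, pan: str) -> str:
        # 12 random characters + last four of the PAN = 16-character token.
        alphabet = string.ascii_letters + string.digits
        token = "".join(secrets.choice(alphabet) for _ in range(12)) + pan[-4:]
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the processor can do this; the merchant holds no mapping.
        return self._vault[token]

processor = ProcessorVault()
token = processor.tokenize("4111111111111111")
print(token)           # e.g. 'h3Kq9ZpL2xWm1111' - safe for the merchant to store
# A recurring charge: the merchant sends back the token, never the PAN.
print(processor.detokenize(token))
```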

Note that in my example above I used a POS system as the system accepting the token.  There is no credit card terminal involved.  It is not that tokenization does not work with a credit card terminal; it is just that tokenization best pays for itself when used with an application that would otherwise store sensitive information the end user does not want stored.  Since most credit card terminals are not configured to store the PAN after authorization, there really is no point in using tokenization with them.

One feature that tokenization can provide is the ability to process recurring transactions.  In a recurring transaction scenario, the processor provides a mapping capability between tokens and the original cardholder data.  On a recurring transaction, the merchant passes the token back to the processor and the processor looks up the token and then generates a transaction based on the cardholder data associated with the token.

Depending on the tokenization method, another potential feature is the ability to analyze card usage.  In some tokenization solutions, a given PAN will always produce the same token value.  As a result, not only is the PAN protected, but it can still be used for analysis purposes such as fraud analysis and loyalty programs.

As with any technology there are risks associated with tokenization.  As we saw with end-to-end encryption, the endpoints in the tokenization process are at risk.  Because there has to be a way to get the cardholder data into the process, the merchant’s POS is the most likely point to attack.  The problem with POS systems is that they can typically provide a “target rich” environment to be leveraged either from their operating system or their applications.  While the PCI DSS has requirements to address these potential targets, ensuring security across the full range of targets requires a level of diligence that most merchants just cannot provide on a consistent basis.

Another threat shared with end-to-end encryption is the tampering of software used to process transactions.  While the PA-DSS certifies that an application properly processes and stores cardholder data, it has limited applicability as to whether an application is surreptitiously storing or transmitting cardholder data before the tokenization process starts.  As a result, it is relatively simple to introduce tampered-with software that could gather cardholder data without the merchant knowing.

To mitigate the risks of tokenization, merchants should consider the following actions.

  • Purchase your POS software from a reputable vendor.  This is not a perfect solution to the problem, but doing this should be better than buying it off of eBay or downloading a free open source solution.  Yes, you save a few bucks, but is that worth having every one of your customers that uses a credit card compromised?  Probably not.
  • Ask your supplier of POS workstations about what they do to test these systems to ensure that they operate as expected and are not routing cardholder data to Timbuktu as well as your bank.  Ask them to provide those procedures in writing and review them to ensure they appear adequate.
  • Use serialized tamperproof tape on the seams and doors of your POS workstations.  Require that at every manager shift change the new manager on duty logs their review of the devices, inventories the devices and notes whether any have been tampered with.  If a device does appear to have been tampered with, it should be taken out of service until a new, secure device can replace it.
  • If using self-checkout systems, make sure to have those systems under both video and employee monitoring.
  • Review video monitoring whenever a manager notes that a device may have been tampered with, to determine if you can identify possible suspects.
  • Patch your POS workstations and application as soon as possible to minimize their susceptibility to attack or compromise.
  • If the vendor of the POS solution will perform updates, make sure that you or someone in your organization schedules the updates.  If anyone shows up at a location to “update” your POS and it was not scheduled by your organization, contact law enforcement.
  • If updates will be done by the vendor remotely, make sure that someone from your organization initiates the remote access and they observe the remote update process.  At the end of the update process, the person should terminate the remote session of the vendor.

At the end of the day, tokenization is not a bad solution for those environments where investing in all new POS equipment is not an option or where the merchant still wants the ability to do PAN analysis.  However, tokenization has risks just like any other solution.  Granted, those risks might be fewer, but there are still risks.  And it is important for merchants considering tokenization to understand those risks before blindly accepting them.

24 Jul 2011

End-To-End Encryption – The Rest Of The Story

Step right up folks.  I have something that will cure all of your problems with credit card processing.  It is called end-to-end encryption.  Yes, folks, it is the be all, to end all in security.  It will cure all that ails you, particularly those nasty data breaches.  Don’t be shy, just step right up and get your own version while supplies last.

Gee, when end-to-end encryption (E2EE) is put that way, it sounds great, almost too good to be true.  And you would be right; it is too good to be true.  But if you listen to the statements of the proponents of E2EE, they make it sound like once E2EE is in place, it is like the Ronco Showtime Oven, “Just set it and forget it.”

Now, do not get me wrong.  E2EE is not a bad thing, but it does have its own set of risks.  And it is those risks that do not get discussed that concern me.  The reason for my concern is that if you discuss E2EE with any merchant, most see it as this panacea, something that will get them out of the PCI compliance game altogether.  However, nothing could be further from the truth.  If anything, E2EE may make PCI compliance even more daunting than it is today.

The first thing everyone seems to forget is that E2EE only removes those systems and networks that are between the endpoints.  That is because the data stream between the endpoints is encrypted and, therefore, out of scope for PCI compliance.  However, for a merchant, that means that the device that accepts the credit card is still in-scope for PCI compliance.  Bring this fact up to most merchants and they start complaining like no tomorrow.

That device might be as “simple” as a credit card terminal or as complex as an integrated point-of-sale (POS) workstation on a network.  However, since this device is an endpoint, the merchant or the merchant’s QSA needs to ensure that the endpoint is properly secured and cannot end up being a breach point.  Depending on the complexity of that device, the assessment might be very straightforward or very time consuming.  The reason the endpoint needs to be assessed is that security is only as good as its weakest link.  In the case of E2EE, the weakest links are the endpoints at which the data is encrypted and decrypted.

The next thing that seems to slip people’s minds is the fact that since the merchant has an endpoint, that endpoint is still a target.  Worse yet, because it is an endpoint, the level of sophistication likely required to compromise it goes up exponentially, meaning that any successful attack will likely be beyond the average merchant’s capability to readily detect.  The PCI DSS addresses this threat fairly well by requiring network monitoring, daily log reviews, anti-virus, anti-malware, firewalls and the like.  However, I can tell you from personal experience that your average merchant is not equipped to deal with this new threat.

And what is the new threat?  The new threat is tampered-with hardware and software.  If you think this is farfetched, think again.  It has already happened on a limited scale.  The doctoring of hardware is fairly straightforward to both accomplish and detect.  Detection only takes looking inside the device and noticing something that does not belong.  However, doctored software is another story.  The concept of doctored software has been a concern in the health care industry since computerization was first used in heart pacemakers.  While the health care industry has developed rigorous testing and certification procedures, the rest of the software industry has said there is no need.  That is, until now.  As the world further automates, the need for reliable, safe and secure software only increases because of the reliance people and organizations place on that software.

So what can an organization do to stem this new threat after implementing E2EE?  Here are some thoughts.

  • Purchase your credit card processing equipment only from your acquiring bank or a reputable vendor.  This is not a perfect solution to the problem, but doing this should be better than buying a used unit off of eBay or from Joe’s Guaranteed Card Equipment.  Yes, you may save a few bucks, but is that worth having every one of your customers that uses a credit card compromised?  Probably not.
  • Ask your supplier of terminals or POS workstations about what they do to test these systems to ensure that they operate as expected and are not routing cardholder data to Timbuktu as well as your bank.  Ask them to provide those procedures in writing and review them to ensure they appear adequate.
  • Use serialized tamperproof tape on the seams and doors of your terminals and POS workstations.  Require that at every manager shift change the new manager on duty logs their review of the devices, inventories the devices and notes whether any have been tampered with.  If a device does appear to have been tampered with, it should be taken out of service until a new, secure device can replace it.
  • If using self-checkout systems, make sure to have those systems under both video and employee monitoring.
  • Upgrade your card processing devices to the latest devices.  Over the last few years, some of these devices have seen significant changes in their design that improves their tamper resistance.  This is particularly true of fuel pumps and certain types of terminals.
  • Review video monitoring whenever a manager notes that a device may have been tampered with, to determine if you can identify possible suspects.
  • Patch your devices as soon as possible to minimize their susceptibility to attack or compromise.
  • If the vendor of the equipment will perform updates, make sure that you or someone in your organization schedules the updates.  If anyone shows up at a location to “update” your equipment and it was not scheduled by your organization, contact law enforcement.
  • If updates will be done by the vendor remotely, make sure that someone from your organization initiates the remote access and they observe the remote update process.  At the end of the update process, the person should terminate the remote session of the vendor.

Even implementing these processes will not remove all of the risk, particularly the risk of having modified software introduced into your environment.  However, these processes will show a court that you conducted due diligence and tried to keep your equipment secure.



