

Pre-Authorization Data

After a number of interactions with a variety of people over the last few weeks, it has become obvious that the concept of pre-authorization data is not clear to a lot of people.  And just because it is pre-authorization data does not mean that you are not required to protect it.  The Council has made it very clear that it is to be protected with the same rigor as post-authorization data.

Pre-authorization is defined as that time when an organization has a customer’s sensitive authentication data (SAD) but has not yet processed it for payment.  For most merchants, the time between collecting the SAD and processing it is measured in seconds.  For card present transactions, the SAD can be track or chip data.  For card not present transactions, it typically includes the cardholder name, primary account number (PAN), expiration date and CVV2/CVC2/CID2.

However, there are situations where the time between collection and processing is much longer.

Phone, facsimile and mail orders can extend the amount of time between receipt of the SAD and processing by seconds, minutes or even hours.  On the bizarre side of things, at one client I encountered customers sending their physical cards through the mail to have their mail orders processed.

At the fuel pump, when a customer swipes their card, the system that manages the payment process will pre-authorize the purchase and then store the SAD until the customer finishes pumping fuel and hangs up the hose to complete the transaction.  That could be five to ten minutes depending on fuel tank size.

In some industries the time of pre-authorization can be weeks, months and in some cases even a year or more.  Where this typically occurs is in the airline and hospitality industries.

When a customer makes an airline reservation, the airline will likely pre-authorize the airfare but may not actually charge the card until 7, 14, 60 or 90 days before check in, or even at check in.  This can result in the SAD being stored for weeks or even months.

In the hospitality industry, a reservation typically does not cause a charge until a customer checks out even though they are required to have a card on file to hold the reservation.  When a customer checks into the property, the hotel’s billing system records the SAD and may also pre-authorize charges, but the actual card transaction is not processed until the customer checks out.  I have encountered SAD in hospitality systems that had been stored for more than a year due to reservations for special occasions such as graduations, birthdays, family reunions and anniversaries.  New versions of hospitality management systems encrypt pre-authorization data; however, older systems did not, and the security of the pre-authorization data may not be as rigorous as it should be.

eWallet applications are another great example of pre-authorization data.  eWallets are just an electronic version of a consumer’s physical wallet.  eWallets can be applications on a smartphone/tablet or a specialized device such as Coin.  eWallets are not regulated by the Council or the card brands and never will be just as traditional wallets are not regulated.  That said, developers of eWallet applications should follow the PA-DSS for developing secure eWallet applications.

The most confusion over pre-authorization data typically occurs over SAD that an organization receives via email.  A lot of QSAs get their “undies in a bunch” over this and point to requirement 4.2 as the reason why this is unacceptable.  Requirement 4.2 states:

 “Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, chat, etc.).”

The operative word in 4.2 is “send”.  Requirement 4.2 says nothing about receiving PANs by these methods and the reason is that it is pre-authorization data.  That does not mean that the Council recommends receiving PANs via email, IM or similar methods.  It is only recognition of what goes on in the real world.  There will always be a small percentage of people that will send their cardholder data insecurely and there is little an organization can do to stop it.  But even before you start the argument, you need to acknowledge that this data is pre-authorization data because a transaction has yet to be processed.

To keep your email system out of scope, the Council has told QSAs to make sure that organizations have a process to remove the SAD as soon as possible from the system(s) in question.  That typically involves the printing of the message, deleting the message from the system and then securely destroying the printed message once the transaction is processed.  This is all considered “incidental contact” in the eyes of the Council and the QSA can then consider the system out of scope as long as they can satisfy themselves that the manual process is reliable and consistently executed.

Web sites that store customers’ SAD for future purchases are also dealing in pre-authorization data.  Merchants providing such a service should contractually acknowledge their responsibility to protect that data with the same rigor as post-authorization data and that the stored data is only for use for purchases by the customer on the merchant’s Web site.

My final example of pre-authorization data is when an organization’s travel department stores employees’ card data for company travel reservations.  The systems used by these departments typically store the cardholder name, PAN, expiration date and the CVV2/CVC2/CID2.  When the PCI DSS was originally published, the card brands instructed QSAs that these sorts of systems were not in scope for an organization’s Report On Compliance (ROC).  However, as time has passed, some of the card brands now want those systems included in the ROC, so organizations need to contact the card brands to determine if their ROC should include such systems.  These systems can provide surprisingly little protection for SAD, so regardless of whether they are included in the organization’s assessment, the organization should review the security of the application against the PCI DSS and mitigate any gaps.  As with the aforementioned Web sites that store pre-authorization data, an organization storing SAD for their employees should legally acknowledge their responsibility to protect that data as well as document how it may be used.

The bottom line is that all of these situations involve pre-authorization data and pre-authorization data can include everything recorded on a card’s magnetic stripe or chip.  Merchants can store pre-authorization data until a payment transaction is processed.  If they do store the pre-authorization data, they are obligated under the PCI DSS to protect it with the same rigor as post-authorization data, i.e., encrypted, extremely limited access, logging, monitoring, etc.
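On the “protect it with the same rigor” point, truncation is one of the techniques the PCI DSS permits for rendering a stored PAN unreadable (keeping at most the first six and last four digits).  A minimal Python sketch; the function name and output format are my own illustration, not anything mandated by the standard:

```python
def mask_pan(pan: str) -> str:
    """Truncate/mask a PAN, keeping only the first six and last four
    digits (the maximum the PCI DSS permits to be retained/displayed)."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:  # PANs are 13-19 digits long
        raise ValueError("not a plausible PAN length")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```

Of course, truncation only works when the business process no longer needs the full PAN; pre-authorization data that must still be processed has to be protected by strong encryption instead.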

Hopefully we are now all on the same page.


Removing The Drama Of A PCI DSS Assessment

I had to prepare a presentation for a client a while back giving them some tips on how to prepare for and get through a PCI assessment as easily as possible.  I thought it might be good to share those thoughts.

Trust But Verify

This famous quote from US President Ronald Reagan is the mantra of a PCI assessment.

The PCI DSS is based on the “trust” that organizations are complying with the PCI DSS.  However, self-assessment processes and QSAs are used to “verify” that the organization is, in fact, complying with the PCI DSS.  As a result, the organization being assessed not only has to produce documentation to that effect, but the QSA must also observe that the PCI DSS requirements are being followed.

The net is that, just because you say something is fact, your QSA cannot treat it as such until they have substantiated it for themselves.  If you remember nothing else but this simple truth, you will understand why a QSA must do what they do.


Scope

If PCI assessments go wrong for any reason, scoping is probably the primary reason.  It fascinates me that people often profess ignorance of the PCI DSS, yet somehow become experts on the subject when it comes to scoping.

Remember point number one, trust but verify.  Under that premise, the PCI SSC makes a QSA’s primary responsibility to confirm the scope of the PCI assessment as they verify the facts.  As a result, in order to confirm that scope, the QSA must look at everything and then, through investigation and evaluation, determine that the areas you deem out of scope are, in fact, truly out of scope.

Let your QSA ask their questions and conduct their observations without arguing with them about scope.  They are only doing this because they are required to confirm the facts, and your fighting with them about scope is only going to make them wonder what you are trying to hide.  The bottom line is that arguing with your QSA about scope only makes your assessment all the more painful and time consuming.

If you truly want to avoid arguing over scoping, get a copy of the Open PCI DSS Scoping Toolkit.  Go through your environment and determine the categories of all of your systems and networks.  This is a good annual exercise because you need to prove your scope every year.


Not Applicable

According to the PCI SSC, there are five PCI DSS requirements that can never, ever be marked as ‘Not Applicable’: 1.2.3, 3.2.1, 3.2.2, 3.2.3 and 11.1.  I have discussed these all before, but they deserve another quick discussion here.

Clients will argue ad nauseam that wireless is not implemented or is out of scope and therefore refuse to discuss wireless.  For requirement 1.2.3, a QSA is required to document the procedures they followed to rule wireless in or out of scope.  That of course means the QSA must investigate any wireless networks and evaluate if the controls are rigorous enough to keep wireless out of scope.  For requirement 11.1, the QSA must investigate and evaluate if the organization’s controls surrounding the detection of rogue wireless are appropriate regardless of whether or not the organization has implemented wireless networking.

3.2.1, 3.2.2 and 3.2.3 are all related to the securing of cardholder data when it is stored.  Even if an organization is not storing cardholder data on their systems, a QSA must document the procedures they used to confirm that cardholder data is not stored on the organization’s systems.  This usually involves a review of flat files and database schemas and the running of utilities and queries against those systems and databases looking for cardholder data.
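The discovery step described above can be approximated with a pattern match plus a Luhn checksum to weed out false positives.  A rough sketch, assuming plain-text input; real data discovery tools also handle databases, character encodings and digit separators:

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any result over 9, and sum; valid if sum % 10 == 0."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Candidate PANs: unbroken runs of 13-16 digits.
PAN_RE = re.compile(r"\b\d{13,16}\b")

def find_pans(text: str) -> list:
    """Return digit runs that look like PANs and pass the Luhn check."""
    return [m for m in PAN_RE.findall(text) if luhn_ok(m)]

print(find_pans("order 20130907, card 4111111111111111"))  # ['4111111111111111']
```

A scan like this over flat files, logs and database exports is the spirit of what the QSA is doing when confirming that no cardholder data is stored.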

The bottom line is do not argue about something being ‘Not Applicable’ and then hinder the QSA’s investigation to prove it is ‘Not Applicable’.  Do not get me wrong, you need to keep your QSA on point, but remember that QSAs are required to evaluate the situation and then document the process used to determine that a particular requirement is ‘Not Applicable’.  All you do by complicating that investigation is add more time to your assessment and, potentially, cause a requirement to be marked as ‘Not In Place’ instead of ‘Not Applicable’.

Yes, I Did Kind Of Ask That Earlier

Like security, the PCI DSS also works from a ‘defense in depth’ approach.  A lot of the questions QSAs ask are very similar, only asked from different perspectives.  The people that develop assessment and audit programs will tell you that this is the most effective way to uncover the level of compliance with a given program.  The reason is that organizations that have not integrated a compliance program into their day-to-day operations will typically provide inconsistent or confusing answers to the similar questions.  Not that this is a perfect technique mind you, but it does work the majority of the time.

Please be patient with your QSA.  They did not write these procedures, but they are required to execute them.

Answer The Question

Most people suck when being questioned, particularly in a legal proceeding, including yours truly.  Lawyers always instruct anyone that will be called to testify in a legal proceeding to take their time, focus on the question being asked and only answer the question being asked.  Never, ever, ever provide any information outside of the question, i.e., do not elaborate.  The trouble is that lawyers know that silence is a vacuum and it is human nature to fill that vacuum with extraneous information.  Hence the typically long pauses between questions.

QSAs and auditors tend to operate under the same principle as a lawyer.  People get into trouble when they start talking about things that are outside of the question, out of scope or not relevant to the assessment.  Such responses will confuse the QSA as they try to reconcile your remarks.  But then the QSA may question whether they truly understand the environment and, possibly, the scope of the assessment.  It is then that they may start quizzing you and your staff as they go back and reconfirm their understanding of the environment.  All of this takes time away from the assessment process as you cover old ground while the QSA re-verifies the facts.

The lesson to be learned here is that there is nothing wrong with saying, “I do not know” or “I will have to look into that question and get back to you.”  The worst thing you can do is try to “tap dance” around the question or never really answer it.  If you do not have the answer, then find out who does and point the QSA to that person.


And finally, the best thing you can do to avoid all of these issues is to walk through the PCI assessment process and requirements with those of your staff that will be interviewed/observed and make sure they understand the questions to be asked and how they should be answered.

If you really want to know what the QSA will ask, why they will ask and the evidence they will require, get a copy of the PCI DSS ROC Reporting Instructions from the PCI SSC Document Library.  The Reporting Instructions document is the “Bible” for QSAs as it documents how they will be assessed in a PCI SSC Quality Assurance review.  Reviewing and understanding this document will go a long way to minimizing the “What do you need that for?” questions that all QSAs encounter.

For each requirement’s tests, the Reporting Instructions will tell you:

  • What observations, if any, need to be performed and documented.
  • What documents, if any, need to be collected and reviewed and what information needs to be identified in those documents.
  • What people, if any, need to be interviewed and about what topic(s).
  • What processes, actions taken or states of equipment, if any, need to be observed and documented.
  • Whether or not sampling can be used.

Using the Reporting Instructions, you can also gather a lot of the observations ahead of time.  Your QSA will still have to conduct some observations, such as confirming that default passwords are not used, that timeouts occur, that change management operates and the like.  But gathering screen shots and documenting the conditions you used for testing will go a long way toward making your assessment go much more smoothly and quickly.

Hopefully this discussion will help you get through your next PCI assessment without all of the associated drama that can come from such an exercise.


Mobile Payments Update

This past week, Bob Russo, General Manager of the PCI SSC, held Webcasts to discuss the changes coming to version 3 of the PCI DSS and PA-DSS.  For the most part, these Webcasts were nothing special.  Just a scripted regurgitation of what was published in the Version 3.0 Change Highlights document released earlier this month.

However, at around 35 minutes into the Webcast I attended, things got very interesting and an actual useful piece of information was shared.  For it was at that point that Mr. Russo launched into a discussion about mobile payments and the use of consumer devices such as smartphones and tablets for the processing of card transactions.  Mr. Russo stated:

“The fact is that many consumer mobile devices simply cannot provide the level of security needed to adequately protect payment card data.  In other words, they cannot create a trusted environment equivalent to the PCI DSS compliant cardholder data environment.”

Whoa!  Now there is a piece of information I was not expecting.  I looked at the slide; these were the bullets on it.

  • PCI Standards focus on merchant acceptance
  • Mobile payment acceptance still evolving
  • Understand risk and use PCI SSC resources
  • PCI SSC is working with industry

Mr. Russo went on to say:

“We encourage merchants to understand the risk of using mobile, work with their acquirer and make their own decisions about mobile and whether they are willing to accept that risk.”

“We’re working with others in the industry including: standards bodies, vendors, banks and processors.  But we are unwilling to lower the bar for security by writing a standard that current mobile devices could meet.  If we wrote a secure standard for mobile now, no consumer devices would be able to meet it.”

The bottom line in all of this is that all of you merchants using iPads, iPhones, Android phones and tablets and similar devices are using them at your own risk.  They are not PCI compliant.  Not only that, but you should have cleared that decision with your acquiring bank so that they also accept the risk of your using mobile devices to accept payments.

Mr. Russo referred Webcast attendees to reference the PCI SSC Information Supplement ‘Accepting Mobile Payments with a Smartphone or Tablet’ for additional information regarding the use of such devices.

Mr. Russo also reminded everyone that mobile payments from the consumer side are not within the purview of the PCI SSC and the PCI standards.  So wallet applications, near field communication (NFC) and the like will not be regulated by the PCI standards.

These statements though leave a lot of questions for QSAs.

  • How are QSAs supposed to reference these non-compliant devices in the merchant’s Report On Compliance (ROC)?
  • Do QSAs mark this in the ROC as non-compliant, and which requirements do QSAs flag as non-compliant?  Is it because the devices are not PTS compliant, or are there other requirements that are not met?
  • Do QSAs need a compensating control for these devices?  And exactly what would the controls be that would make these devices acceptable?
  • Do QSAs need to obtain a letter from the merchant’s acquiring bank(s) acknowledging the use of these devices and their approval of their use?

I am sure this will create many interesting discussions at the PCI Community Meeting that is just a few weeks out.  So stay tuned.


Diagramming For Your QSA

I have been reviewing network and data flow diagrams for PCI compliance engagements for years.  But it only recently dawned on me that I have never discussed the issues that keep recurring when I review organizations’ network and data flow diagrams as part of the PCI compliance efforts.  To rectify that situation, here are some comments so that all of you can provide better diagrams to your QSAs.

Network Diagrams

Some organizations treat their network diagrams better than some governments treat their Ultra Top Secret documents.  Even though all organizations require a non-disclosure agreement (NDA) to be in place before having a Qualified Security Assessor Company (QSAC) conduct their PCI assessment, we constantly run into networking and security personnel that keep detailed network diagrams secret from the QSA.

Unfortunately for these people, the PCI assessment process requires a level of detail that they are not comfortable providing.  In such situations I have received diagrams from network/security personnel so vague that they can only be described as higher level than “high level”.  Such documentation is worthless to a QSA and only plants thoughts in the back of the QSA’s mind about what you are trying to hide.

QSAs need diagrams that show everything that is contained in the cardholder data environment (CDE) and all of the network connections to the CDE.  A QSA needs this level of detail to reconcile your configuration standards, running configurations and other relevant documentation to the network diagram as part of the process to ensure that the CDE is properly defined and securely configured.  The network diagrams are also used by the QSA to ensure that appropriate samples are taken for devices in-scope for the assessment if sampling is used.

At a minimum, the information that needs to be on network diagrams includes all virtual local area network (VLAN) numbers (if applicable), IP address ranges/blocks for relevant network segments and key firewalls, routers, switches and servers along with their DNS names and IP addresses.  Anything less just makes the PCI assessment process all the more difficult for you and your QSA.

Data Flow Diagrams

Data flow diagrams typically end up in the Details About Reviewed Environment section at the front of the ROC.  As a result, you need to make sure that they carry enough detail for readers of the ROC, but not enough detail to aid an attacker if they get a hold of the ROC.  I will talk about the details that are required in these diagrams in the next section on ROC Diagrams.

The first thing that gets noticed with data flow diagrams is that they do not even come close to matching up to the network diagrams.  This is typically because the data flow diagrams are prepared by the business owners of the card processing processes, and these people typically have no idea how the data actually flows through the network.  As a result, the QSA has no point of reference between the data flow and the network to evaluate whether the CDE is properly defined.  As such, the first recommendation I make is for the business owners and the networking people to sit down together and get their diagrams in sync.  Not that the same detail that is in the network diagrams needs to be in the data flow diagrams, but key security points such as firewalls, servers, public networks and business partners should be noted in these diagrams.

The second thing that I notice in data flow diagrams is that there can be a lot of lines with arrows, sometimes colored red, green, yellow and other colors.  But most of the time there is no description or legend that allows the reader to understand what the diagram represents.  Worse yet, the lines and arrows are all over the place and difficult, if not impossible, to follow in a cacophony of color or monochrome.  To add to this mess, sometimes we get all of this overlaid on top of the detailed network diagrams in an effort to address my first point.  The net result is that it is all clear as mud!

And if you want to add insult to injury, sometimes the data flow diagrams do come with commentary in the form of callouts or balloon comments.  Unfortunately, it is nearly impossible to tell which comment or balloon goes with which line, or whether a given comment/balloon is itself a data flow line.  Yuck!

What typically needs to be done is to create separate diagrams for how a transaction flows when a card is processed.  This could result in multiple diagrams for “brick and mortar” retail locations, eCommerce, call center, returns and other order processing processes.  Then a diagram that shows the data flow for settlement.  Settlement may result in multiple diagrams as well.  Then there may be other data flow diagrams that show other manual or paper-based processes related to CHD that also need to be documented.

ROC Diagrams

In the Executive Summary and Details About Reviewed Environment sections of the front part of the ROC, the QSA is required to provide three diagrams.

  • A “high level” network diagram.
  • CDE external and internal network connectivity diagram.
  • Card transaction data flow diagram.

The purpose of these diagrams is to give people such as those at your acquiring bank, the card brands, or heaven forbid, the forensic examiners in the event of a breach, an overview of your organization’s network and how cardholder data flows over that network so that they have some context.

The high level network diagram needs to include the following information.

  • All connections into and out of the network, including demarcation points between the CDE and all other networks/zones.
  • All critical components and key systems, as well as their locations and the boundaries between them.  You do not have to document every switch crossed in the network, but you need to show the key components such as core switches or other networking devices that impact the security of the CDE. You also need to document some of the key servers such as Active Directory, SIEM, patch management, backup, operations management or other operational servers that might not be in the CDE but provide support to the CDE.
  • All other necessary components or key systems, their locations and boundaries, as applicable. This would be devices such as consoles and monitoring stations as well as any user systems that have access to the CDE or to the operational support systems.

Note that there is no requirement to provide details such as DNS names, IP addresses or other identifying information that would be extremely useful to an attacker.  There needs to be sufficient detail to provide the reader with a reasonable understanding of the architecture of the network so that they can assess whether the ROC covers everything it should and that the CDE was properly defined.

The CDE external and internal network connectivity diagram needs to provide the following information.

  • All boundaries of the cardholder data environment.
  • Any network segmentation points which are used to reduce scope of the assessment.
  • Boundaries between trusted and untrusted networks.
  • Wireless and wired networks.
  • All other connection points applicable to the assessment.

Again, there is not a lot of detail provided here, but there is enough information to give the reader the context of what is connected to what and how the CDE is structured for security.

Data flow diagrams need to provide the following.

  • Identify all transmission and processing flows of cardholder data (CHD) including: authorization, capture, settlement, chargeback and any other CHD applicable data flows.
  • For each transmission and processing flow: describe how cardholder data is transmitted and/or processed and identify the types of CHD involved (for example, full track, PAN, expiry date).

As I stated earlier, for all of these diagrams, you may need to create multiple diagrams to get your meaning across.  So do not try to pack too much information into a single drawing only to have it unreadable in the ROC.

These ROC diagrams are typically a collaborative effort between the QSA and the appropriate personnel in the organization.  In some cases, it can take quite a lot of work and creativity to produce diagrams that are useful yet do not give away the store.

Hopefully you now understand what diagrams your QSA needs and the level of detail required.


PCI Scoping Tool

Since September 2012, the Open Scoping Framework Group has been providing the Open PCI DSS Scoping Toolkit free of charge.  If you are a QSA, ISA or someone responsible for PCI compliance and you have not gotten a copy of this fantastic document, you need to get one.  A copy is easy to get: just go to the IT Revolution Web site, register and you will be allowed to download your copy.

Never heard of the Open Scoping Framework Group (OSFG)?  Neither had I until late last fall when a friend told me to go check out their PCI scoping methodology.  The OSFG was started by Gene Kim whose name should be familiar to almost everyone as the founder of Tripwire.  He got together his DevOps group to tackle the issues faced by all of us trying to scope the cardholder data environment (CDE) and the result was the toolkit.

The toolkit defines three categories of systems.

  • Category 1 – System components that process, store or transmit cardholder data or are not isolated or restricted through controlled access from other Category 1 system components.
  • Category 2 – System components that have controlled access to a Category 1 system component.
  • Category 3 – System components that are isolated from all Category 1 system components.

People always understand why Category 1 devices are in scope: they are contained in the CDE.  One would think that Category 3 components would be just as easy to categorize, but that is not necessarily the case.  The key is that Category 3 systems cannot have any access to Category 1 components.  While attempting to ping Category 1 components from a Category 3 component could be used as a test, a better test is to run Nmap or a similar port scanner from a sample of Category 3 components against the CDE IP address range to determine if any ports are open.
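A quick connectivity check along those lines can be scripted as well.  This is only a sketch; the CDE address and ports below are hypothetical placeholders, and for real verification you would scan all 65,535 ports (for example with Nmap) rather than a short list:

```python
import socket

def open_ports(host: str, ports, timeout: float = 0.3) -> list:
    """Attempt a TCP connect to each port on host; return the ports
    that accept.  Run from a sample of Category 3 systems against CDE
    addresses -- any open port means the system is not isolated and
    therefore cannot be Category 3."""
    reachable = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 == connection succeeded
                reachable.append(port)
    return reachable

# Hypothetical CDE host and a few common service ports.
print(open_ports("10.10.1.20", [22, 443, 1433, 3389]))
```

An empty result across the full port range, from a representative sample of Category 3 systems, is the kind of evidence that supports the isolation claim.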

While Category 3 components can be troublesome, it is the Category 2 devices that usually give everyone a problem including, at times, yours truly.  The reason is the connectivity issue as it can be very unclear at times whether or not a device actually has connectivity to the CDE.

To assist in identifying connected systems, the toolkit breaks Category 2 systems into four sub-categories.

  • Category 2a – System components which, through controlled access, provide security services (e.g., authentication) to a Category 1 device.
  • Category 2b – System components which, through controlled access, can initiate an inbound connection to a Category 1 device.
  • Category 2c – System components which, through controlled access, can only receive a connection from a Category 1 device (i.e., cannot initiate a connection).
  • Category 2x – System components which, through indirect and controlled access, have the ability to administer Category 1 devices. Note: Category 2x devices have no direct access to/from Category 1 devices.

The first thing we need to clarify is what is meant by “controlled access.”  Controlled access means that the system components have: (1) limited network traffic to only that which is required for business operations; and (2) are business justified and documented.

It is the first point of controlled access that typically gives people the most trouble.  The concept of “limited” network traffic just does not come across the same for everyone.  I have seen people try to justify limited traffic when just about every port north of 31000 is open for dynamic use.  Do not get me wrong, there are instances where a significant number of ports need to be open (look at Windows 2003 Active Directory as a prime example).  But when the answer used to justify the large number of ports is, “We did it to be safe and avoid any problems,” that says to an assessor that you did not want to research the exact ports you needed open or, worse yet, that you have no idea which ports need to be open.
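By contrast, what “limited” should look like is one explicit, business-justified rule per port.  A hypothetical example in iptables syntax, where the addresses, ports and justifications are illustrative only:

```shell
# Category 2b example: app server 10.20.1.15 may initiate connections
# to the CDE database server 10.10.1.20 only on two documented ports.
iptables -A FORWARD -s 10.20.1.15 -d 10.10.1.20 -p tcp --dport 1433 -j ACCEPT  # SQL Server (justified: order lookup)
iptables -A FORWARD -s 10.20.1.15 -d 10.10.1.20 -p tcp --dport 443  -j ACCEPT  # TLS web service (justified: tokenization API)
iptables -A FORWARD -d 10.10.1.20 -j DROP                                      # everything else to the CDE host is denied
```

Each ACCEPT line maps to a documented business justification, which is exactly the pairing the toolkit’s definition of controlled access asks for.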

The sub-category most people will struggle with is 2x.  2x components are those that do not have direct access to the CDE but have access to 2a, 2b and/or 2c components that do have direct access to the CDE.  The best example of this sort of situation would be a system management console for a log management server.  The console has access to the log management server which does have access to the CDE, but the console should not have access to the CDE.  Since the console could be compromised and it has access to a component that has direct access to the CDE, it needs to be included in the scope of the PCI assessment.
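For illustration, the categories can be loosely encoded as a decision function.  This is my own simplification, not the toolkit’s actual decision tree, which further distinguishes 2a/2b/2c by the direction and nature of the controlled access:

```python
def categorize(handles_chd: bool, in_cde: bool,
               connects_to_cde: bool, connects_to_cat2: bool) -> str:
    """Rough, simplified sketch of the Open PCI DSS Scoping Toolkit
    categories; real categorization requires the full decision tree."""
    if handles_chd or in_cde:
        return "1"    # processes/stores/transmits CHD or sits in the CDE
    if connects_to_cde:
        return "2"    # refine to 2a/2b/2c by connection direction/purpose
    if connects_to_cat2:
        return "2x"   # indirect, administrative access via a Category 2 system
    return "3"        # isolated -- but you must prove that isolation

print(categorize(False, False, False, True))  # 2x
```

The log management console example above would come out as 2x here: it handles no CHD and cannot reach the CDE, but it can reach a system that does.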

The final pieces of the Open PCI DSS Scoping Toolkit I really like are the decision tree and the scenarios provided.  If these do not help explain how to scope your PCI assessment, nothing will.

Again, if you do not yet have a copy of the Open PCI DSS Scoping Toolkit, hopefully this post will entice you to get a copy.


Can An ISA Sign Off On A ROC Or SAQ?

This question came up recently on one of the LinkedIn PCI groups and drove a lot of discussion.  However, one of the things that concerned me the most is that no one belonging to this group bothered to submit the question to the PCI SSC to be answered.

When such questions come up, the first thing you should do is go to the PCI SSC Web site’s FAQ page to see if the question has already been answered.  There is an amazing wealth of information contained in the FAQs.

If you search the FAQs and you do not come up with an answer to your questions, then submit your question to the PCI SSC.  Technically, anyone can submit a question to the PCI SSC.  However, if you are a QSA in a QSAC, the person listed in your QSAC listing should be the focal point and should submit all questions you have to the PCI SSC.

Once a question is submitted, expect a few days to a few weeks to get a response.  Simple procedural questions, such as whether an ISA can sign a ROC or SAQ like a QSA, can get a response in a day or two.  Questions that require the PCI SSC to formulate a position may take a number of weeks before a response is provided.

So, can an Internal Security Assessor (ISA) sign off on a Report On Compliance (ROC) or Self-Assessment Questionnaire (SAQ)?  The answer provided by Cathy Levie, Senior ISA Program Manager, PCI SSC, is as follows.

“The ISA can sign off as long as their Processor/Acquirer has approved of that. This is not up to the PCI SSC.”

In the future, if you have a question and cannot find an answer, ask the PCI SSC.  When you get your answer, please post the answer to any of the PCI groups on LinkedIn or send it to me so that the rest of the PCI world can benefit from the knowledge.  One of the unfortunate issues the PCI SSC has is that not all questions seem to make it into the FAQs, or the FAQs are not updated as quickly as they should be.


Encrypted Cardholder Data – Out Of Scope?

I had an interesting exchange on Google+ the other day regarding whether or not encrypted data is in scope for PCI compliance.  In the end it was suggested that I write a blog entry on this topic because, in their view, how to treat encryption has not been articulated very clearly by the PCI SSC.  I would argue that the rules regarding encryption and scope have been very clearly articulated in the PCI SSC’s FAQ #1086.  However, based on the conversation we had, it was obvious that this is not the case for everyone.  So here are the rules as practiced by most QSAs.

The key to interpreting whether or not encrypted cardholder data is in scope is in the FAQ.  Encrypted cardholder data (stored or transmitted) is out of scope only if it meets the following definition.

“It is possible that encrypted data may potentially be out of scope for a particular entity if, and only if, it is validated (for example, by a QSA or ISA) that the entity in possession of the encrypted data does not have access to the cleartext cardholder data or the encryption process, nor do they have the ability to decrypt the encrypted data.”

The important phrase in the aforementioned definition is “if, and only if.”  The only way encrypted cardholder data (CHD) is out of scope is if the entity being assessed for PCI compliance cannot decrypt the encrypted CHD.  This is a very important concept that gets constantly misinterpreted by QSAs and their clients.  However, it is up to the QSA to confirm that the organization being assessed cannot decrypt the encrypted CHD and to document the procedures conducted to prove that fact.
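The “if, and only if” test can be stated as a simple predicate.  The sketch below is my own encoding of the FAQ’s rule, not official PCI SSC logic: encrypted CHD is out of scope only when the entity has no key access, no access to the cleartext, and performs no encryption, decryption or key-management functions.

```python
def encrypted_chd_out_of_scope(has_key_access: bool,
                               sees_cleartext: bool,
                               performs_crypto_or_key_mgmt: bool) -> bool:
    """Encode the 'if, and only if' test: all three conditions
    must be false for encrypted CHD to be out of scope."""
    return not (has_key_access or sees_cleartext or performs_crypto_or_key_mgmt)

# A merchant reaching keys through an HSM API: encryption stays in scope.
print(encrypted_chd_out_of_scope(True, False, True))    # -> False (in scope)

# A merchant holding only third-party tokens it cannot reverse.
print(encrypted_chd_out_of_scope(False, False, False))  # -> True (out of scope)
```

The QSA’s job is then to document the testing that established each of those three inputs, not merely to accept the organization’s word for them.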

With that as background, let us look at storing and transmitting encrypted data and how they can be out of scope and what that means.  As you will see, out of scope can mean different things depending on the implementation of encryption.

Stored Cardholder Data

Under requirement 3.4, one of the methods recommended for the storing of the primary account number (PAN) is:

“Strong cryptography with associated key-management processes and procedures”

One of the ways organizations accomplish this is through a hardware security module (HSM) that manages the cryptographic key process.  The HSM generates the keys, manages the keys and provides an application programming interface (API) for code to access the keys.  Since the HSM is a “black box,” a lot of organizations point to that fact as the reason their encryption is out of scope.

There is an additional condition to the encryption out of scope definition that usually gets forgotten.  This is what allows for the scope of the cardholder data environment (CDE) to be reduced.

“Systems performing encryption and/or decryption of cardholder data, and any systems performing key management functions, are always in scope for PCI DSS. Scope reduction for encrypted data may only be considered when that data is isolated from these processes.”

As such, since the organization using the HSM technically has access to the cryptographic keys through the HSM’s APIs, the encryption is in-scope.

Where stored encrypted CHD is out of scope is when a third party controls the encryption keys.  This most often occurs with tokenization.  Under a tokenization scheme, the CHD is sent to a third party who then securely stores the CHD and returns a token that links the CHD at the third party to the token stored by the merchant.  If the merchant needs to make any subsequent charges to the account, the merchant sends the stored token to the third party and the third party substitutes the stored CHD for the token and the transaction is completed.  But since the merchant does not have access to the token creation process, the token is out of scope because it is no longer considered CHD.
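The tokenization flow described above can be sketched as follows.  This is an illustrative toy, not a real tokenization product: the “third party” is a dictionary, and `secrets.token_hex` stands in for whatever token format a real provider issues.

```python
import secrets

class ThirdPartyVault:
    """Toy stand-in for a tokenization provider.  It stores the CHD
    and returns an unrelated token the merchant can safely keep."""

    def __init__(self):
        self._vault = {}   # token -> PAN, held only by the third party

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(16)   # random; no mathematical link to the PAN
        self._vault[token] = pan
        return token

    def charge(self, token: str, amount_cents: int) -> bool:
        # Only the provider can swap the token back for the stored PAN.
        pan = self._vault.get(token)
        return pan is not None          # ...then the real charge would proceed

provider = ThirdPartyVault()
token = provider.tokenize("4111111111111111")  # merchant stores only the token
print(provider.charge(token, 1999))            # -> True (provider detokenizes)
```

The point of the sketch is the division of knowledge: the merchant never holds the mapping from token to PAN, which is precisely why the stored token falls out of scope.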

Transmitted Cardholder Data

Secure transmission of CHD can be accomplished through a number of methods.  The most common of these methods is secure sockets layer (SSL) or transport layer security (TLS).  In either case, if the organization being assessed has one of the endpoints in the SSL/TLS encryption, then the SSL/TLS process is in scope.  This is obviously most common in the conduct of e-Commerce when the merchant’s Web site has an SSL/TLS certificate for securing the transmission of the CHD to pay for the customer’s purchases from the Web site.

However, we are also seeing SSL/TLS used more and more as the encryption method of choice for point-to-point encryption (P2PE) solutions.  Again, if either of the endpoints in the P2PE environment is under the control of the organization being assessed, then that endpoint or endpoints are in-scope for the PCI assessment.

One way we do see to get everything but the merchant’s endpoint out of scope is with terminals that encrypt from the terminal all the way to the processor, where the processor controls the encryption keys for the P2PE.  This is most often used in the gas station environment, where the pump card reader does P2PE to the processor using derived unique key per transaction (DUKPT) or similar techniques to create an encrypted connection.

That said, what happens to the users and devices in between the two encryption endpoints on an encrypted communication link?  They are out of scope as long as they do not have the ability to decrypt the data stream.  This is another misunderstood interpretation of the FAQ.  While some personnel inside an organization have access to encryption keys, if a user or device does not have access to the encryption keys or the communication endpoints, they too are out of scope.  This is how an SSL/TLS/IPsec connection can be used for isolating the transmission of CHD from the rest of the network and satisfy network segmentation.

Another issue that comes up with managed service providers (MSP) is when the MSP has access to firewalls or routers that are encryption endpoints.  Even if the MSP does not have access to the encryption keys, they do have access to the encryption endpoint(s) and cleartext CHD, therefore their personnel and relevant device management practices are in-scope for PCI compliance.

The bottom line in all of this is: if your organization has the ability to decrypt either the stored CHD or transmissions of CHD, then you are in-scope for PCI compliance.

And as a reminder to everyone, just because something is out of scope it does not mean that it does not need to be assessed.  It is always necessary for a certain amount of testing procedures to be conducted to determine that the item is out of scope.

Hopefully we can now all operate from the same set of rules.


Scoping Clarification

At the 2012 PCI Community Meetings, the PCI SSC made a presentation titled ‘PCI Standards Updates and Future Insights’.  Embedded in that presentation were a series of slides titled ‘Scoping & Segmentation Clarified’.  A number of writers have given reference to this clarification, but I have yet to see a discussion regarding the content of these slides.  So I felt someone should share with the world the content of these slides so that we are all on the same page.

“PCI DSS requirements apply to all system components, defined as any network component, server, or application that is included in or connected to the CDE [cardholder data environment]”

The first thing discussed is the set of misconceptions about PCI DSS scoping and what “connected to” really means.  The examples of misconceptions pointed out as FALSE included:

  • Encrypted cardholder data (CHD) is always out of scope
  • “Connected to connected to” systems are not in-scope
  • Only systems that connect directly to the cardholder data environment are in-scope
  • Only inbound connections are in-scope
  • Directly connected to systems are not in-scope if they pass through an in-scope firewall

Encrypted CHD Is Out Of Scope

The only way encrypted cardholder data is ever considered out of scope is if, and only if, the organization being assessed does not have the ability to decrypt the data.  That said, it is up to the organization or their QSA to prove that the organization does not have access to the keys by documenting the assessment procedures that were performed to determine that the encryption keys could not be accessed and the cardholder data could not be decrypted.  So even though it may eventually be judged out of scope, there must be some form of investigation that proves that fact.

“Connected to connected to” Systems Are Not In-Scope

The remaining four examples given as false are all related.  The guidance being provided by the PCI SSC should have been common sense to any security professional and QSA, but apparently was not as clear as we all thought.  As a result, the PCI SSC documented their thought process and provided this “guidance.”

“If a system can impact the security of the CDE (whether it is directly or indirectly connected), it is in-scope.

To be considered entirely out of scope, a network or system must be isolated from, with no connectivity into, the CDE (such that even if the system is compromised, it cannot impact the security of the CDE).

Restricting access by IP or port may limit the exposure of the CDE but does not automatically remove systems/networks from scope since there is still connectivity.

If connections are limited to specific ports or services, those systems are included in the PCI DSS scope to verify applicable controls are in place.”

Talk about opening a can of worms.  To a lot of people, this definition sounded a lot like Stalin’s “Doctors’ Plot.”  Exactly where do you draw the line on a network and what is connected to what?  To a lot of hard-line QSAs and some participating organizations (POs), this clarification essentially put everything back in-scope for PCI compliance because, in theory, any device on the internal network can be used to ultimately compromise the CDE.  To all of you out there that think this, take a pill and chill.  That is not what the clarification is saying unless there are no systems on your network that are truly isolated from the CDE.

The PCI SSC has left this decision to your QSA/ISA to determine where the boundaries of your CDE get drawn.  And that is based on the risk presented and how you have controlled access for the outliers on your network.  So, while in theory any device sitting on a network could be used as a staging point or beachhead to ultimately compromise the CDE, it all depends on the controls in place to minimize that potential risk.

As an easy example of what the PCI SSC is getting at with this clarification, any systems with access to the CDE are in-scope for PCI compliance.  According to the PCI SSC, a lot of QSAs were ruling out of scope systems such as backup systems, domain controllers, DHCP and DNS servers, management consoles and anything else used to manage, monitor or control devices inside the CDE.  The bottom line is that should any of these systems be compromised, the CDE is also likely compromised.  Limited access or not, these systems have access to the CDE and are therefore in-scope for the assessment.

The other part of the clarification is that just because a system has access to the CDE does not mean that all PCI requirements apply to it.  There are big differences between the access a backup system may have to the CDE and the access a call center operator’s or an accountant’s PC may have.  As a result, it is up to the QSA/ISA to determine what PCI requirements are relevant based on the risk presented by the system and how it accesses the CDE.  At a bare minimum, any PC that has access to the CDE needs to be properly configured and security hardened, patched current, and have anti-virus and anti-malware software and a firewall implemented.  If the PC has direct access to the CDE through something other than HTTPS, as with a backup server or domain controller, then you should be treating these devices no differently than a device inside the CDE.  Whether additional requirements apply will depend on the assessment of the PC and how it accesses the CDE.

Given this clarification, what should you be doing to determine the boundaries of the CDE?  Here are some of my ideas.

  • Scan your network for cardholder data (CHD) using OpenDLP, custom queries of databases and similar tools.  Document all systems that are found with actual CHD.  Make sure to check that the data found is actually CHD and not just 15 – 16 digit numbers that happen to pass a Luhn check.  We have encountered a lot of random log data from firewalls and intrusion detection/prevention solutions over the years that upon further inspection turned out to not be CHD.  The purpose of doing this scanning is so that you are reasonably certain that CHD does not exist on systems you did not know about.
  • Review your virtual LANs (VLAN) and related access control lists (ACL) and document what controls are in place to isolate systems from the CDE.  If ports are open between the CDE and a network segment, document why they are open.  A lot of times this sort of review results in discovery of systems that do not need CDE access but were granted access just in case.  The purpose of this review is to make sure that the CDE is truly isolated from the rest of the networks.  If not isolated, it may be possible to move systems into a few VLAN segments and isolate those segments to minimize your total CDE.
  • If possible, locate critical servers such as domain controllers, backup servers, etc. inside the CDE.  A lot of organizations have located one or two domain controllers inside their CDE and then limited communications to/from those domain controllers and domain controllers outside the CDE.  While the domain controllers outside the CDE are still in-scope if they communicate with the CDE domain controllers, such a move puts the majority of the risk on the CDE domain controllers.  In today’s SAN world, putting a backup server with a fiber channel connection back to the SAN used for virtual tape allows you to isolate your CDE backup process.
  • For those organizations that have virtual desktop technology implemented, consider creating virtual systems for providing access to the CDE.  A lot of organizations with virtual desktop technology have segregated this technology onto servers that only provide access to the CDE, therefore limiting what virtual servers are in-scope.  The controls over the virtual environment will determine how much of that environment needs to be included in the assessment.  These virtual desktops should be built from a standard image and be strictly locked down so that users cannot install new software, and they should be configured to log all user actions.  When using this approach, the user must have authentication credentials (i.e., user identifier and password) different from their other credentials.  You are also going to want some form of firewall between these virtual systems and the rest of the network, granting access only to those systems that require it.
  • Another opportunity to minimize the CDE is the use of “jump boxes” on the network.  Jump boxes are accessed via remote desktop technologies to then gain access to the CDE and are typically used for network and server administration and management.  The jump box is configured so that any user’s activities are recorded and logged for later review.  Jump boxes are no different than the virtual desktop solution other than jump boxes are typically physical devices versus virtual devices.  The reason for using a physical device versus a virtual device is the jump box can be physically accessed if necessary in emergencies.  As with the virtual desktop solution, users of jump boxes must have user identifiers and passwords different from their other credentials and you will also need a firewall protecting the jump box.
  • Systems that have access to the virtual desktops or jump boxes should still be security hardened, have anti-virus and anti-malware software with current signatures, and be patched in a timely manner.
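On the first bullet, the Luhn sanity check can be sketched in a few lines.  Passing Luhn does not prove a number is a PAN (plenty of random 15 – 16 digit strings pass), but failing it proves the match is a false positive and can be discarded immediately:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 when the result > 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # -> True  (well-known Visa test PAN)
print(luhn_valid("4111111111111112"))  # -> False (false positive eliminated)
```

Running a check like this over scanner hits is how you separate firewall log noise from data that needs a closer look.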

A lot of people would think a virtual private network (VPN) over SSL, TLS, SSH or IPsec solution would also meet the requirement of isolation.  The problem with the VPN solution is that there is no isolation or gap between the system used to access the general network and the CDE.  The VPN does provide isolation of data transmissions but, if the system is compromised, the CDE is also likely compromised.  With the other solutions there are multiple ways that the CDE systems are isolated from the rest of the network.

Now we should all be on the same page.


Compliance, Compliance Testing and Security

I was recently on a Webinar presented by a major security vendor and one of their points was that executive management is finally starting to realize that compliance does not equal security.  If you read this blog regularly, you know I really do not like the phrase “compliance does not equal security” and I view it as a convenient dodge by those who use it as a way to weasel out of their responsibilities.

But during this Webinar I had an epiphany regarding this topic.  The problem is the confusion among three things: security itself; compliance testing and reporting; and the act of compliance by your technology, employees and business partners with your organization’s security policies, standards and procedures.

I know I am just asking for flame mail with this post, but I am so tired of people looking to blame everyone but themselves about their inadequacies surrounding information security.  As I have done before, to paraphrase Tom Hanks’ character in ‘A League of Their Own’, “There’s a reason security is hard.  If it wasn’t hard, everyone would do it.”

Security is not always easy, particularly when upper management does not have buy in.  But even when upper management supports security efforts, I have seen security personnel not take advantage of that fact and get the job done.  Security does not have to be hard, but it does take more than just slamming some firewalls and intrusion prevention gear down, tossing a SIEM into the mix and thinking you are done.  Security is a never ending journey because someone is always coming up with new ways to attack you.

Anyway, to start off, let us take a look at some definitions first so we are all on the same page.

Compliance is defined as:

“Conformity in fulfilling official requirements.”

“Official requirements?”  Could that possibly mean your organization’s security policies, standards and procedures?  You bet.  In this instance, we are talking about those that correspond to the PCI DSS, but this also applies to ISO 27K, FISMA, HIPAA, GLBA or any multitude of frameworks and regulatory requirements.

Conformity is defined as:

“Compliance with standards, rules, or laws.”

Based on these definitions, security is all predicated on complying with what are deemed an adequate set of security policies, standards and procedures.  Conversely, if you are not complying with an adequate set of security policies, standards and procedures, then your organization cannot be as secure as it could be.  As a result, compliance has to equal security as long as the security policies, standards and procedures are considered adequate.  Therefore security professionals that quote the mantra, “compliance does not equal security” either have a problem with the compliance side of the equation (most likely) or with the standards/frameworks (the dodge).

Over the years there have been a lot of discussions about the PCI DSS, ISO 27K, FISMA and other security frameworks and whether or not they are adequate.  The important thing to remember is that all of these standards or frameworks are merely ante into the information security game.  They are the bare minimum or a baseline to get to a basic level of security.  Should you be doing more?  Definitely, but what those efforts beyond the standard/framework are depends on what you are trying to secure, your network and application architectures and a multitude of other factors related to your computing environment and how it is used.  Those are factors that cannot be taken into account by any standard/framework because they would start to become impossible for others to follow and implement.  The bottom line here is that if you want someone to tell you exactly what to do to secure your networks and applications, go hire a consultant you trust and they will tell you everything you want to know.

The rub in all of this is that, based on the breach reports from Verizon Business Services, Trustwave, as well as compliance testing reports I have reviewed, none of you out there are 100% compliant to begin with, let alone even close.  Every organization I am aware of has problems complying with the basics, let alone with any advanced security requirements in the published standards/frameworks.  So if you cannot comply with what you already have, explain to me how a different framework is going to change that fact unless it is less stringent than the framework you are already trying to use?  And if that other framework is less stringent, while that may solve the compliance issue (which I seriously doubt), exactly how is a less stringent framework going to make you secure?  The answer is that it will not make you secure.

What security professionals struggle with is that compliance is a never ending, 24x7x365 effort.  Drop your guard for an instant and it can be game over.  But provided your security policies, standards and procedures are appropriate and detailed (the reason why you want to use an appropriate standard/framework), your organization is not as secure as it can be unless your personnel and devices comply 100% of the time with every defined security policy, standard and procedure.  If you want confirmation of these facts, again, just look at the breach analysis reports year after year.  The reason there are breaches is because of non-compliance with one, but usually more, of an organization’s security policies, standards and/or procedures.

This brings me to the rumblings of late regarding a rethinking of defense in depth.  Defense in depth is predicated on using layers of security devices and controls to minimize the risk that a security incident occurs, not to completely prevent an incident (although you might get lucky).  For example, firewalls are the sledge hammer of security tools.  However, because we need to have ports open for outsiders to access applications, we follow our firewalls with intrusion detection/prevention devices to ensure that no one abuses the protocols used by the ports.  We follow that up with monitoring of log data from the firewalls, IDS/IPS, routers, switches and servers to identify any “sneaky” attacks using the protocols we allow.  The layers are there to cover the various holes we need to have in order to make our networks and applications function.  The tighter and smaller we can make those holes, the more secure we will be, but there will still be some amount of risk.  So we bring in more layers to cover those risks until it is more expensive to address the risk than to accept the risk.  That remaining risk is the residual risk that we therefore manage and control through detection and correction.

The other thing defense in depth relies on is the control triad.  The idea being that, because you cannot entirely prevent every security incident, you need a way to detect the incident so that you can take action to stop or minimize the impact of the incident.  You follow that up with periodic assessments of your control environment to identify and correct any deficiencies or improve your program based on new information regarding security.  The follow up assessments can be activities such as a root cause analysis (RCA) of an incident, an internal audit of user accounts and user rights or bringing in a network security team to assess your security architecture and controls.  All of these activities will result in findings and recommendations to make your security systems and controls better.

And that brings us full circle to the PCI assessment.  It is merely a tool used by the acquiring banks, card brands, processors and others to obtain reasonable assurance that your organization is doing what it can to minimize the possibility of a breach of cardholder data.  It is not meant to be, nor could it ever be, an absolute complete assessment of an organization’s security posture and therefore provide absolute assurance that a breach will not occur (even though the PCI SSC and card brands tend to imply that fact).  Compliance assessments are only a snapshot of personnel and device compliance at the time the reports were written.  This is no different than going to the doctor for your annual physical which results in a snapshot of your health at that point in time.  It is not that those compliance reports are worthless; they just need to be referenced and used properly based on the fact that they are a snapshot.  Just as your doctor will tell you to lose weight or stop smoking, compliance reports provide recommendations on where you can make improvements or adjustments in your policies, standards and procedures based on what compliance evidence was found, or not found, during the assessment.

So, what are the lessons to be learned?

  • Security is not and never will be perfect; there will always be residual risk that must be managed and controlled.
  • Compliance does equal security, at least as best as your preferred standard or framework defines it plus whatever enhancements you have made.
  • Compliance assessments and reports point out where your organization was not compliant and needs to do better, not to prove your organization is secure.

Use the tools at your disposal correctly, stay current on threats and monitor your security posture and you will likely live a long, prosperous and secure life.

Keep hiding behind “compliance does not equal security” and you will forever be living off of your “luck” until it runs out (usually sooner rather than later).


How To Be PCI Compliant And Still Be Breached

Bashas’ became the most recent example of a merchant claiming to be PCI compliant yet ending up breached.  A lot of naysayers, I am sure, are running around pointing to the PCI standards and saying, “See, they are worthless.”  But the larger question most of you have is, “How can an organization be breached if it is PCI compliant?”

The first piece of the answer is security is not perfect.  Security controls have never, ever totally stopped an incident from happening.  If they were perfect, banks would no longer be robbed.  However, due to the security controls that have been implemented, the success of those robberies has dropped significantly.  This is the fact that the PCI SSC and the card brands seem to miss: while their standard is a good starting point, there is much more that has to be done to ensure a reasonable level of security.  And even then, an organization is never 100% secure.

The second part of the answer is that even if an organization is 100% compliant with the PCI DSS, there are still numerous ways to get around the controls and breach data as the Bashas’ breach may eventually point out.  Let us assume for this discussion that Bashas’ statement that they were PCI DSS compliant is accurate.  Then how could they have been breached?

The first clue is the statement that they discovered malware that went undetected for some period of time.  Any organization that believes that their anti-virus/anti-malware solution will address this issue is seriously lying to themselves.  AV is good, but it is also not perfect.  If the AV vendors have never seen the malware you picked up, then they have no signature to match it to, so they will likely not flag it.  This is the first indication that this attack was done by a professional.  The malware was not immediately detected which means the attacker likely developed it themselves from a variety of sources.

But how did the malware get on Bashas’ network?  The answer is social engineering and probably a spear phishing attack.  The attacker likely used Pastebin or similar Web sites, got some Bashas’ email addresses and used those to deliver the malware.  Someone unfortunately clicked on a link, opened an attachment or fell for any number of other infection methods, and the deed was done.  This is why security awareness training is so important.  Not that it stops these sorts of attacks, but it significantly reduces the likelihood that they are successful.  However, with the malware in place, now all it took was time to find the data.

But would not Bashas’ have noticed someone probing their network?  That depends on a number of factors, but based on the fact that they became aware of the malware, something eventually triggered an incident.  Unlike the security firm you hire to do vulnerability scanning and penetration testing, professional attackers do not perform their scans as quickly as possible.  They take their time and scan very, very slowly.  As a result, they usually do not generate enough traffic at once to garner an alert.  In addition to that, most of their backdoor software encrypts their external transmissions using SSL/TLS/IPsec over port 80 or 443 which are typically open to the Internet.  As a result, from a monitoring perspective, a lot of what is going on would appear “normal.”

So now that your view of the PCI DSS is dashed, what should you do to respond?

  • Admit that security is not perfect and educate management that it is not perfect.  Breaches will still occur, but security controls are meant to minimize the number of those occurrences and the extent with which they obtain sensitive data.
  • Do not save sensitive data.  Merchants typically do not need to store cardholder data these days.  If you do, then use tokenization so your systems do not store cardholder data.
  • If possible, further isolate your sensitive data.  Look at Forrester’s “Zero Trust” model or the McGladrey Ultra Secure approaches.
  • If possible, reduce the number of actual people that can access your cardholder data to as few as possible.  The fewer people that can access cardholder data, the fewer targets that can be social engineered.
  • Use a “jump box” to provide access to your cardholder data environment so that you do not allow people direct access.  Couple this with different user credentials to gain access to the cardholder data environment.  Add in full instrumentation of the jump box to capture all activity performed on the jump box and monitor the jump box tightly.
  • More tightly monitor your communications through your firewalls.  Yes, HTTP/HTTPS needs to be open these days just to do business, but do your personnel need totally unrestricted access to every possible IP address or URL?  No.  So whitelist or blacklist IP addresses and URLs so that an attacker cannot use just any URL or IP address to work from.
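The white-listing idea in the last bullet can be sketched as a simple egress check.  The hostnames below are hypothetical, and a real deployment would enforce this at the proxy or firewall rather than in application code; the sketch only illustrates the decision being made:

```python
from urllib.parse import urlsplit

# Hypothetical list of destinations with a documented business need.
ALLOWED_HOSTS = {"api.processor.example", "updates.vendor.example"}

def egress_allowed(url: str) -> bool:
    """Permit outbound HTTP/HTTPS only to explicitly approved hosts."""
    parts = urlsplit(url)
    return parts.scheme in ("http", "https") and parts.hostname in ALLOWED_HOSTS

print(egress_allowed("https://api.processor.example/settle"))  # -> True
print(egress_allowed("https://attacker.example/exfil"))        # -> False
```

A default-deny list like this is what turns an attacker’s “phone home over port 443” traffic from invisible into an alert.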

Will all of this prevent a breach of your sensitive data?  No.  All these controls will do is reduce the risk of a breach to the lowest possible level.  In time, an ingenious professional attacker will find a way to compromise your controls.  However, with a rigorous control environment it is hoped that you will find them before they find your data.


If you are posting a comment, be patient, as the comments will not be published until they are approved.

If your organization has a PCI opportunity, is in need of assistance with a PCI issue or if you would like the PCI Guru to speak at your meeting, you can contact the PCI Guru at pciguru AT gmail DOT com.

