Archive for the 'PCI SSC' Category

08 Sep 17

The Party Is Off

Here is the official announcement from the PCI SSC that this year’s North American Community Meeting in Orlando has been cancelled due to Hurricane Irma.

https://www.pcisecuritystandards.org/nacm2017_schedule_irma

See you all next year.

26 Aug 17

PCI Compliance And Financial Institutions

I remember being at one of the early PCI Community Meetings when someone from the PCI SSC promised that the PCI DSS would be periodically updated to reflect changing business conditions as well as changing threats.  Here we are more than a decade later with version 3.2 of the DSS, and it has been changed mostly in response to evolving threats and very little to reflect changing business conditions.

Their rationale was that they wanted to minimize the number of compensating control worksheets (CCW) that would be needed by the majority of organizations.  This was in response to v1 of the PCI DSS, which required that data encryption keys be changed annually.  Most large merchants who were participating organizations (PO) complained that it was taking six months to a year or more to encrypt their transaction databases and files.  Requiring annual key changes would leave those databases and files at risk because they would be in a perpetual state of decryption/encryption.  As a result, almost everyone had a CCW for that requirement.  So, the Council revised the requirement so that encryption keys need to be changed only when they are believed to be compromised or when one or more persons who know the keys leave the company or change roles.

The reason I bring this up is that I have been dealing with financial institutions and their PCI compliance issues for the last few years.  Few things are more frustrating than trying to apply a standard written for merchants to organizations that are not merchants.  It seems like every time I turn around, a requirement needs a CCW, particularly requirement 3.4.

I am sure the Council will point to requirement 3.2 as their token change that took issuers into account.  But that does nothing for the other requirements that financial institutions struggle with.  The biggest reason a lot of the PCI requirements are a struggle is that financial institutions are in the business of, surprise, surprise, processing, storing and transmitting cardholder data.  That IS their business.  3.2 was a great change for issuers, but a lot of the rest of the PCI DSS is a huge pain for a financial institution without a lot of CCWs and the blessings of the requisite card brand(s).

Let us look at a few requirements where CCWs are needed when assessing an FI.

3.4 Render PAN unreadable anywhere it is stored (including on portable digital media, backup media, and in logs) – this can be very problematic for financial institutions.  The reason is that while they can encrypt or tokenize the data, they also need to decrypt/detokenize it as well.  In a lot of cases, they need to do those operations quickly and very often.  It is not that the FIs do not want to protect the information, it is just that they have some unique issues in meeting PCI requirements.

The best example of this situation is debit cards.  Debit cards must be tied to a demand deposit account (DDA) such as a checking or savings account.  That means somewhere there must be a mapping of the debit card into the core application system.  But to process transactions from the card networks when customers use their cards, the PAN must be decrypted/de-tokenized so that the payment can be approved or declined.  This decryption/de-tokenization process needs to meet a timing standard, so adding to the processing time is usually not an option.  As a result, it is not unusual to find that the PAN to DDA mapping file is not encrypted or tokenized.
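To make requirement 3.4 concrete, it permits several methods for rendering PAN unreadable, including truncation and salted one-way hashing.  Here is a minimal sketch of those two methods (the PAN below is a standard test number and the salt is a throwaway value, not a production key-management scheme):

```python
import hashlib

def truncate_pan(pan: str) -> str:
    # First six / last four digits is the most the PCI DSS allows to remain visible
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def hash_pan(pan: str, salt: bytes) -> str:
    # A salted one-way hash is another 3.4-accepted method; the salt must be
    # protected, or the hash can be brute-forced over the limited PAN space
    return hashlib.sha256(salt + pan.encode("ascii")).hexdigest()

print(truncate_pan("4111111111111111"))  # 411111******1111
```

The debit card scenario is exactly where these methods break down: a truncated or hashed PAN cannot be reversed to route a transaction into the core system, which is why the PAN to DDA mapping file so often ends up on a CCW.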

6.4.3 Production data (live PANs) are not used for testing or development – when part of your business is all about processing, storing and transmitting sensitive authentication data (SAD) and/or cardholder data (CHD), using the handful of card brand test accounts a merchant would use is not going to work.  Particularly when you are testing with one of the card brands to certify your application.  In those instances, the FI and the brands are going to demand a large and varied set of PANs to ensure that systems are functioning properly.  The only way to do that is with live data from production.

3.2.1 Do not store the full contents of any track (from the magnetic stripe located on the back of a card, equivalent data contained on a chip, or elsewhere) after authorization

3.2.2 Do not store the card verification code or value (three-digit or four-digit number printed on the front or back of a payment card used to verify card-not-present transactions) after authorization

3.2.3 Do not store the personal identification number (PIN) or the encrypted PIN block after authorization.  – requirement 3.2 acknowledges that issuers with a business reason to retain sensitive authentication data (SAD) may retain it.  However, 3.2.1, 3.2.2 and 3.2.3 say that none of this data can be stored after authorization.  These requirements then go on to say the QSA must inspect incoming transaction data, log data, databases, etc.  Well, guess what?  The incoming transaction data always has SAD in it in some form because the FI has to authorize the transaction.  As I said earlier, databases can have it because of the speed required to authorize.  This is the FIs’ business, yet the standard does not recognize that fact.

The bottom line is that the PCI DSS does not reflect the realities of financial institutions.  As a result, FIs require numerous CCWs to meet the PCI DSS requirements.  As I stated at the beginning, the Council promised that they would address such issues to make CCWs the exception, not the rule.  Well, here we are, and in the FI world CCWs are commonplace.  And as we move forward, it will be FIs, not merchants, that are the focus of the standard.  Merchants will very soon be out of the payment card data business altogether, with the exception of their POI.  So, it only makes sense to adapt the PCI DSS to securing FIs.

We have separate PCI DSS and AOC documents for service providers.  Maybe we need similar documents, in addition to revised requirements, for financial institutions?

Seems like a good discussion topic to bring up at the upcoming Community Meeting.

05 Jul 17

NESA – Guidance In Search Of A Problem

On Thursday, June 29, the PCI SSC held their quarterly Assessor update webinar.  One of the more interesting discussions was on the topic of the non-listed encryption solution assessment or NESA.

For those unfamiliar with NESA, it is an attempt by the Council to have all end-to-end encryption (E2EE) solutions, such as First Data’s TransArmor and Verifone’s Verishield, assessed against the relevant PCI P2PE standards to ensure they are secure.  The problem is that the card brands and the banks have not gotten behind the NESA approach, so it has effectively stalled, much like the P2PE program has.  But on the Thursday webinar we found out just how badly it has stalled and that the Council seems to be getting desperate to salvage it.

According to the Council, the key points of NESA are:

  • The Council reiterated that a P2PE-QSA must conduct the assessment using the PCI P2PE assessment programs as guidance. Essentially, the NESA is a P2PE validation without the Council’s certification and listing of the solution on the Council’s Web site.
  • NESA provides a consistent approach to evaluating non-listed encryption solutions against “best practices”.
  • It provides other PCI assessors, acquiring banks and merchants with information about the risk and PCI DSS responsibilities when using a non-listed encryption solution.
  • It provides input to a merchant’s QSA to consider when conducting the merchant’s PCI assessment.

All of these are admirable goals of the NESA.  But the question still remains, do we need the NESA?

According to the Council a lot of people in the “payments community” have been clamoring for NESA.  I am not sure exactly who the Council is referring to as the “payments community” but it certainly has not been the banks or the brands.  Those two constituencies are already partnered up with E2EE and P2PE solutions and have not been clamoring for anything other than to use those solutions.

The Council did bring up the organizations behind the solutions already listed as P2PE validated.  That would make sense as they have a vested interest in forcing non-listed encryption solutions through the process.  But as to banks, the brands and QSAs pushing this agenda?  I would seriously doubt it.

Then there is the issue that the Council says that QSAs are stumped when they encounter an E2EE solution.  The process of assessing E2EE solutions has been known by QSAs since E2EE solutions were rolled out years ago by the various vendors.  But with the introduction of P2PE, I would bet that the Council’s QSA/ISA training does not cover how to handle E2EE solutions.  And I am sure since the invention of the NESA process, they have even more reasons not to instruct QSAs on how to assess an E2EE solution.  Yet I am sure that they still discuss how to assess an application that is not PA-DSS validated.  That is a “shame on them” for ignoring the realities of the real world.

But the process is not that involved.  When encountering an E2EE solution, the QSA needs to ensure that the solution is implemented according to its implementation guide (IG).  A transaction processor/gateway or an acquiring bank may also require packet captures to ensure that the data stream is encrypted.  All of that assessment and testing documentation is submitted to the acquiring bank, and the bank explicitly grants the merchant scope reduction.  Then the QSA can follow the requirements in SAQ P2PE for an assessment.  All of which adds probably two hours to a merchant’s PCI assessment versus the costs of a full-on P2PE assessment.  When looking at the costs of a P2PE assessment plus the listing fees to have the solution placed on the Council’s Web site, is there any wonder a lot of E2EE solution providers have eschewed the P2PE program?
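The packet-capture review itself is not exotic: the QSA is essentially confirming that no clear-text PAN appears in the captured payloads.  A rough sketch of that kind of scan follows (the payload strings are made up, and 4111111111111111 is a standard test PAN):

```python
import re

def luhn_valid(digits: str) -> bool:
    # Standard Luhn mod-10 check, used to weed out random digit runs
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_candidate_pans(payload: str) -> list[str]:
    # PANs are 13-19 digits; anything that length and Luhn-valid is suspect
    return [m for m in re.findall(r"\d{13,19}", payload) if luhn_valid(m)]

print(find_candidate_pans("pan=4111111111111111;exp=2212"))  # ['4111111111111111']
```

In practice the same scan is run over the payload bytes of the capture; if the E2EE solution is doing its job, nothing Luhn-valid should ever turn up.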

First Data and Verifone have been adamant since P2PE was introduced that they will never go through P2PE because it is not needed.  Given they are partnered with most of the large processors and banks, their lack of support for P2PE means a lot and also means that until they get on board with either NESA or P2PE, both of these standards are in trouble.

But the most troubling comments occurred at the end of the Council’s brief discussion of NESA.

  • NESA is NOT a program. It is only “guidance”.
  • NESA may not result in scope reduction.
  • There is no formal NESA documentation or template.

When the Council says that something is “guidance”, there is no mandate for anyone to do anything.  This is how QSAs are to treat those Information Supplements published periodically by the Council.  In this case, NESA is only a suggestion.  So, until the brands and banks get behind the NESA process, there is no reason to have a NESA performed.

The next two comments go together.  If there is no formal deliverable for QSAs to review, how does a QSA evaluate whether any NESA process was conducted adequately?  And if that is the case, of course the granting of scope reduction is unlikely.  After all, if a QSA is not sure about the NESA, how is the bank supposed to evaluate it, let alone pay for it?  And if scope reduction is not achieved, then what in the world is the point of NESA in the first place?  The only purpose I can see is to give P2PE-QSACs an ability to push their services on the E2EE solution vendors to make their services worth the cost incurred with the Council.

The only other benefit that I can see is an opportunity for certain P2PE-QSACs to flood us all with NESA Certificates since their PCI Compliance certificates are worthless.

But in the end, you really start to wonder what the Council was thinking when they put this process together.  Time will tell, but I am guessing and hoping that NESA, like P2PE, will die a quick and quiet death.

09 Jun 17

We Need A Change To 2.3.b

I just wanted to give everyone a “heads up” about some guidance we recently received from the PCI SSC regarding jump boxes or out-of-band (OOB) system management solutions and the use of insecure protocols such as SNMPv1/2 and Telnet.

But did everyone know that this solution also requires a compensating control worksheet (CCW)?

For years (at least since the Phoenix Community Meeting years ago), the Council has been recommending the use of firewalls and jump boxes as a way to secure instances where organizations need to use insecure protocols.  These enclaves are firewalled, VLAN’d and configured so that only the jump box can be used to remotely connect to the devices over Telnet, keeping the insecure protocols away from other networks.  However, I do not recall any of those discussions ever explicitly calling out the need for a CCW.  I suppose the Council just figured we would all be bright enough to write one up.
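For what it is worth, the enclave rule set the Council describes might look something like the following (the addresses are hypothetical; the point is that only the jump box can originate Telnet or SNMPv1/2 into the management VLAN):

```shell
# 10.10.50.0/24 is the isolated management VLAN; 10.10.40.5 is the jump box.
# Allow return traffic for connections already established through the firewall.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Only the jump box may originate Telnet (23/tcp) or SNMP (161/udp) into the enclave.
iptables -A FORWARD -s 10.10.40.5 -d 10.10.50.0/24 -p tcp --dport 23 -j ACCEPT
iptables -A FORWARD -s 10.10.40.5 -d 10.10.50.0/24 -p udp --dport 161 -j ACCEPT
# Everything else into or out of the enclave is dropped.
iptables -A FORWARD -d 10.10.50.0/24 -j DROP
iptables -A FORWARD -s 10.10.50.0/24 -j DROP
```

Even with this isolation in place, requirement 2.3.b still flags the Telnet itself, which is the point of this post: the CCW is required regardless.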

What led me to this revelation you ask?

When I was going through my QSA Requalification this spring, they had a scenario with a jump box solution.  One of the questions related to the scenario involved how you would create a CCW for the insecure protocols used in the administrative VLAN that the jump box provided access to.  While I answered the questions correctly, it triggered a new question regarding why a CCW was needed in the first place.

Then, when the question was posed back to the Council, we got a reply indicating that a CCW would be required because of requirement 2.3.b, which states:

“Review services and parameter files on systems to determine that Telnet and other insecure remote-login commands are not available for non-console access.”

The problem with the requirement is that it treats all Telnet with equal disdain regardless of risk.  Yes, Telnet is always a clear-text protocol, but when it is buried two or three layers away from any general network or the internet and requires administrator credentials and MFA, it is hardly as “at risk” as it was when PCI started over 15 years ago and networks were as flat as a piece of paper.

As a result, I would like to recommend that the Council work to change 2.3.b to take into account the use of network segmentation, firewalls, VLANs, ACLs, MFA and jump boxes, allowing the use of Telnet and other insecure protocols within a properly isolated and secured environment.  It seems silly to me that someone goes through all of the right steps to secure their environment only to be told that they still need a compensating control to meet a requirement that does not reflect the real risk.

The other reason I feel this needs to be addressed is that a lot of banks and processors seem to see CCWs as a huge red flag.  Something to be avoided at all costs because it implies to them non-compliance.  And non-compliance is a “bad” thing.  I cannot tell you the collective hand wringing some banks go through for really simple CCWs all because they do not want to have any PCI assessments with CCWs.

Ultimately I think this all comes down to the fact that those banks and processors have no clue as to the amount of risk any CCW presents.  This is because most banks and processors staff their PCI compliance areas with auditors and compliance professionals, not technicians.  Given that the PCI DSS is predominately all about security technology and its implementation, these auditors and compliance people are not equipped to make the decisions that typically need to be made regarding CCWs.  As a result, they are all high risk in their eyes and treated accordingly.

Hopefully the Council can address this situation and we can avoid needless documentation for a preferred “best practice”.

24 May 17

What Is The Secret?

If you are a P2PE-QSA, you have likely seen the documentation required to do a Non-Listed Encryption Solution Assessment (NESA).  While the P2PE assessment work program (on which the NESA is based) is available to everyone, apparently the Council feels that only P2PE-QSAs have a right to see the new NESA documentation.

Why?

My assumption about this secrecy is that the Council is restricting access to the NESA documentation to stop any QSAs that are not P2PE-QSAs from conducting their own NESAs.

But what does that do to the rest of us that are not so fortunate?  How will the rest of the QSA/ISA community know that what they are receiving as the NESA is in fact what they should be receiving if they have never seen it and the Council has chosen to not do training?

People already complain that the Council makes statements at the Community Meetings that are never communicated to the wider PCI community that are unable to attend.  So here we are with a process that produces one or more documents (who knows unless you are a P2PE-QSA).  Yet, as a QSA/ISA, we have no idea what it looks like and have no guidance as to what we should look for in these documents to ensure that the NESA was done properly.  We could end up with anything with a PCI SSC logo on it labeled “NESA” and have no idea whether it is acceptable or not.

And if history is a guide, I guarantee you the Council will hold QSAs/ISAs responsible if they accept anything as a NESA even though they have provided no guidance.  That is what happened with the first AQM reviews.  None of the QSACs in that first round of AQM reviews had ever seen the standards by which they would be judged (they were still being developed).  But almost every QSAC went into remediation (there were a few “favorites” that dodged remediation) because they were all assessed to those standards even though the first time those standards were seen by those QSACs was at the start of their respective AQM assessment.

As QSAs/ISAs, we have a right to not accept any documentation or attestations that we feel do not convey the information necessary to prove compliance of a third-party solution.  So I guess until the Council trains us in the new NESA process and what is and is not acceptable, we do not have to accept any output from that process.

At least that is how I recommend QSAs/ISAs should treat the NESA documents until the Council decides to train us.

22 May 17

Answering Some Dream Team Questions

After our PCI Dream Team event on May 17, I thought I would take some questions that do not require long and involved answers and publish them in this post.  FYI – I have edited and spell checked these, so they likely do not look exactly like you entered them, but they should convey your questions as you asked them.  Hopefully I answered one of your questions.

Q: Does anything special need to be done with the use of Virtual Terminals?  We use the virtual terminals to manually enter credit cards from time to time.  The computers used are normal user computers with the basic security done, but I have been wondering if they need to have extra limitations or security put in?

A: There are a lot of solutions that imply they take the workstation/terminal out of scope or magically reduce scope when using virtual desktop (VDI) solutions.  None of it is true.  If a device is used to enter PAN (regardless of HOW), it is a Category 1 device.  The bottom line is that any device used to enter PAN is in scope for full PCI compliance.  There is no “magic” to change that fact.

Q: Do all POI devices have a keypad?  I’m thinking of PCs with integrated MCRs – will those all change to separate POIs with a keypad?

A: All points of interaction (POI), aka card terminals, that are customer facing have a keypad because they need to be able to accept PIN entry.  Merchants that are going to P2PE/E2EE solutions end up with a separate POI that is connected to the POS PC/terminal via USB so that the POS solution can communicate the total price of the sale and know whether the transaction is approved or declined.  The POI securely communicates with the transaction processor over Ethernet or via the USB connection and the Ethernet connection of the POS PC.  In both cases, the POS PC never has access to the sensitive authentication data (SAD)/cardholder data (CHD) as it is encrypted at the POI.  However, with an E2EE solution, the QSA will need to validate the solution to ensure that it does in fact encrypt at the POI and that the POS PC/terminal is therefore out of scope.  In addition, the merchant will have to contact their acquiring bank to get formal approval that the E2EE solution gives scope reduction for the merchant.  This will likely require the QSA to provide their evidence and assessment procedures to the acquiring bank for that approval.

Q: Are administrator workstations always in scope for PCI DSS regardless if an administrator is connecting to CDE servers via jump box?

A: Yes, because they are “connected to” systems when they access the jump box.  They may not be entering cardholder data (CHD), but they likely can access it or influence its processing/transmission because they are administrators.  That said, I would treat them in the Open PCI Scoping Toolkit vernacular as a Category 2x system.  That means they can probably avoid the bulk of PCI requirements but, at a minimum, need to be properly security hardened, kept updated, have anti-virus/anti-malware and are monitored “periodically”.  And as a reminder, administrators will need to use multi-factor authentication (MFA) after January 31, 2018 when accessing the cardholder data environment (CDE).

Q: Are you having/forcing your clients to follow the December scoping guidance, and are you bringing administrator workstations into scope?

A: I guess I am curious as to when anyone would have thought that administrator workstations ever were out of scope?  Nothing has changed in that regard as they were always in scope for PCI compliance.

Q: Are “crash kits” in restaurants for use when the system is down in scope for compliance?

A: The kits themselves are not in scope, but when they get used, the forms that get generated, which contain the embossed image or handwritten PAN and other sensitive authentication data (SAD)/cardholder data (CHD), bring those forms into scope for PCI compliance.  They therefore need to be securely stored, securely transmitted and subsequently securely destroyed in accordance with the relevant requirements in section 9.

Q: Does pushing non-cardholder data out of a CDE system exclude the connected system from PCI scope?  For example, pushing non-cardholder data such as CPU usage for monitoring or the number of transactions per day used for reporting, etc.

A: According to a discussion at the 2016 Community Meeting and a subsequent Assessor call, the Council has publicly stated that if it can be unequivocally proven that the flow is only outbound from the cardholder data environment (CDE) to a device and that the data does not contain cardholder data (CHD), that device can be ruled out of scope.  However, you have to get your QSA to buy into that argument, and I do not know too many QSAs that will agree with that decision.  In my experience, there is still too much of a risk that cardholder data (CHD) could leak through that flow, and saying it is out of scope is not accurate, nor is it good practice, as it leaves an exfiltration point unmonitored.  The question you have to ask yourself is, how will it look in the newspaper headline when your organization is breached and you ruled that flow out of scope because it was outbound only?

Q: PCI DSS requires a firewall in place, are host level firewalls meeting that requirement?

A: Yes, as long as they perform stateful packet inspection (SPI), they are properly and securely configured and they are appropriately monitored like any other in scope firewall.

Q: Regarding vulnerability assessments for internal scans, do we have to address medium vulnerabilities or only critical and high vulnerabilities?

A: The PCI DSS and the Council have been very clear on this which is why it is disconcerting when this question constantly gets asked.  The guidance for requirement 6.2 is very clear as it states, “Consider prioritizing patch installations such that security patches for critical or at-risk systems are installed within 30 days, and other lower-risk patches are installed within 2-3 months.”  The bottom line is that you need to apply ALL patches/updates to all in scope systems as soon as possible.  So get on with patching and updates, no excuses.

Q: More than once I’ve been told that the decision to implement PCI compliant controls is a financial decision. What are the expected fines and penalties for failing?

A: No organization gets to ignore any PCI requirement for financial or any other reasons.  However, in those cases where a requirement cannot be directly met, an organization must come up with compensating controls that go above and beyond that requirement in order to be in compliance.  In my experience, it is almost always cheaper to meet the PCI requirement than to go the compensating control worksheet route.  You will have to talk to the card brands, as they are the ones that come up with the fines and penalties.

Q: Do you ever foresee the card brands implementing any sort of safe harbor clause in regard to PCI?  If a merchant is doing their best to be secure and (more importantly, as far as PCI is concerned) compliant and they are breached, as it stands right now, PCI will not help you.  Instead, PCI seems to be wielded as a weapon to extract fines from the merchant.

A: You are joking right?  LOL!  Actually, with merchants going to P2PE/E2EE and tokenization solutions, I could envision changes in the PCI compliance process at the merchant level because the risk is only with the POI.  Time will tell.

Q: Have you heard anything further regarding the FTC’s review of PCI?

A: Not a word and I would not expect to hear anything until the FTC decides to tell us anything.  I do know that issues regarding the FTC’s information requests from the QSACs were supposedly worked out and that the requested information was delivered to the FTC.  But that is the extent of my knowledge on the matter.

11 Feb 17

The Council Gets A Clue

Late this week the PCI Security Standards Council issued a new information supplement titled ‘Multi-Factor Authentication’ after the brouhaha that occurred last fall at the Community Meeting in Las Vegas.  For once, the Council has issued an excellent reference regarding the issues of multi-factor authentication (MFA).  I still have a couple of minor bones to pick with this document, but more on that later.

If you understand the concepts of MFA, you can skip through the document to the end, where the Council presents four scenarios of good and bad MFA.  These are well documented and explain the thought process behind why each scenario works or does not work for MFA.  The key takeaway of all of this is the independence of the MFA solution from the logon process.  The Council is getting in front of the curve here, stopping people from creating insecure situations where they merely believe they are using MFA; done properly, MFA minimizes or stops breaches through administrators or users with access to bulk card data.

Now for a few things that I do not necessarily agree with in this document.

The first involves the Council’s continued belief that hardware security modules (HSM) are actually only hardware.  On page four, the following statement is made.

“Hardware cryptographic modules are preferred over software due to their immutability, smaller attack surfaces, and more reliable behavior; as such, they can provide a higher degree of assurance that they can be relied upon to perform their trusted function or functions.”

The Council has made similar statements over the years in the mistaken assumption that HSMs are only hardware.  HSMs are hardware that use software to manage keys.  There are standards that are followed (e.g., FIPS 140) to ensure that the HSM remains secure, but these devices are predominately software driven.  That is not to say that just any device can serve as an HSM, but a lot of us in the security community are concerned that the Council continues to perpetuate a myth that HSMs are only hardware which is patently false.

My other issue comes on page six as part of the discussion regarding the use of SMS for MFA.

“PCI DSS relies on industry standards—such as NIST, ISO, and ANSI—that cover all industries, not just the payments industry. While NIST currently permits the use of SMS, they have advised that out-of-band authentication using SMS or voice has been deprecated and may be removed from future releases of their publication.”

While everything in this statement is accurate, it gives the uninitiated the impression that SMS or voice is no longer a valid MFA solution.  I know this because I have fielded some questions from clients and prospects on this subject, particularly about SMS.  The key is that this is not like SSL and early TLS, which NIST called out as insecure and to no longer be used.  This is a “heads up” from NIST that there is an issue that makes SMS and voice not secure enough for MFA.

But while there is a risk, a lot of us in the security community question the viability of that risk when matched against merchant risk versus a bank or a government agency.  While I would not want any bank or government agency to use SMS or voice for MFA, a small business may not have a choice given their solution.  The reason is that the risk of an attack on SMS or voice is such that only a high-value target such as a bank or government agency would be worth such an effort.  In my very humble opinion, while a total ban is the easy solution, this is an instance where the Council should take a more nuanced approach toward the use of SMS and voice for MFA.  The bottom line to me is that small merchants using any MFA solution, even if flawed, is better than using no MFA solution.

I would recommend the following approach to manage this risk.

  • Level 4 merchants can be allowed to use SMS or voice for MFA.
  • Level 1, 2 and 3 merchants would be allowed to transition away from SMS and voice to a more secure MFA solution within one year of NIST stating that they are no longer acceptable.
  • All service providers would not be allowed to use SMS or voice for MFA once NIST states that both are no longer acceptable. This means service providers should start transitioning now if they use either.

Those are my thoughts on the subject.  I look forward to the comments I am sure to receive.





