Archive for the 'PCI DSS' Category

11 Feb 17

The Council Gets A Clue

Late this week the PCI Security Standards Council issued a new information supplement titled ‘Multi-Factor Authentication’ after the brouhaha that occurred last fall at the Community Meeting in Las Vegas.  For once, the Council has issued an excellent reference regarding the issues of multi-factor authentication (MFA).  I still have a couple of minor bones to pick with this document, but more on that later.

If you understand the concepts of MFA, you can skip through the document to the end, where the Council presents four scenarios of good and bad MFA.  These are well documented and explain the thought process behind why each scenario works or does not work for MFA.  The key takeaway of all of this is the independence of the MFA solution from the logon process.  The Council is getting ahead of the curve here, stopping people from creating insecure situations where they believe their MFA solution minimizes or stops breaches through administrators or users with access to bulk card data when it does not.

Now for a few things that I do not necessarily agree with in this document.

The first involves the Council’s continued belief that hardware security modules (HSM) are actually only hardware.  On page four, the following statement is made.

“Hardware cryptographic modules are preferred over software due to their immutability, smaller attack surfaces, and more reliable behavior; as such, they can provide a higher degree of assurance that they can be relied upon to perform their trusted function or functions.”

The Council has made similar statements over the years under the mistaken assumption that HSMs are only hardware.  HSMs are hardware devices that use software to manage keys.  There are standards (e.g., FIPS 140) that ensure an HSM remains secure, but these devices are predominantly software driven.  That is not to say that just any device can serve as an HSM, but a lot of us in the security community are concerned that the Council continues to perpetuate the myth that HSMs are only hardware, which is patently false.

My other issue comes on page six as part of the discussion regarding the use of SMS for MFA.

“PCI DSS relies on industry standards—such as NIST, ISO, and ANSI—that cover all industries, not just the payments industry. While NIST currently permits the use of SMS, they have advised that out-of-band authentication using SMS or voice has been deprecated and may be removed from future releases of their publication.”

While everything in this statement is accurate, it gives the uninitiated the impression that SMS or voice is no longer a valid MFA solution.  I know this because I have fielded questions from clients and prospects on this subject, particularly about SMS.  The key is that this is not like SSL and early TLS, which NIST called out as insecure and no longer to be used.  This is a “heads up” from NIST to everyone that there is an issue that may make SMS and voice not secure enough for MFA.

But while there is a risk, a lot of us in the security community question how significant that risk really is for a merchant versus a bank or a government agency.  While I would not want any bank or government agency to use SMS or voice for MFA, a small business may not have a choice given their solution.  The reason is that an attack on SMS or voice takes enough effort that only a high-value target such as a bank or government agency would be worth it.  In my very humble opinion, while a total ban is the easy solution, this is an instance where the Council should take a more nuanced approach toward the use of SMS and voice for MFA.  The bottom line for me is that a small merchant using any MFA solution, even a flawed one, is better off than one using no MFA at all.

I would recommend the following approach to manage this risk.

  • Level 4 merchants would be allowed to use SMS or voice for MFA.
  • Level 1, 2 and 3 merchants would be allowed to transition away from SMS and voice to a more secure MFA solution within one year of NIST stating that they are no longer acceptable.
  • Service providers would not be allowed to use SMS or voice for MFA once NIST states that both are no longer acceptable. This means service providers should start transitioning now if they use either.

Those are my thoughts on the subject.  I look forward to the comments I am sure to receive.

07 Jan 17

Getting Ready For 8.3.1

I have had some interesting meetings with clients lately regarding PCI DSS requirement 8.3.1 and multi-factor authentication (MFA).  Requirement 8.3.1 is a best practice until January 31, 2018, but organizations are, for once, trying to get a jump on it.  As a refresher, the requirement states:

“Incorporate multi-factor authentication for all non-console access into the CDE for personnel with administrative access.”

But these calls have had little to do with discussing MFA.  People finally seem to have that down cold.  What is becoming painfully obvious, and somewhat disturbing, from these calls is the realization that a lot of organizations have no defined cardholder data environment (CDE).

Honestly, we have been discussing scope and the definition of the CDE for over a decade now.  Yet people are still having problems defining their CDE.  It makes you wonder what these folks have been doing for the last 10 years that they still do not have a defined CDE.

I refer a lot of these clients to the Guidance column in the PCI DSS as a starting point.  For requirement 8.3.1, that guidance says:

“This requirement is intended to apply to all personnel with administrative access to the CDE. This requirement applies only to personnel with administrative access and only for non-console access to the CDE; it does not apply to application or system accounts performing automated functions.

If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication either when logging onto the CDE network or when logging onto a system.

If the CDE is segmented from the rest of the entity’s network, an administrator would need to use multi-factor authentication when connecting to a CDE system from a non-CDE network. Multi-factor authentication can be implemented at network level or at system/application level; it does not have to be both. If the administrator uses MFA when logging into the CDE network, they do not also need to use MFA to log into a particular system or application within the CDE.”

What the Council is trying to prevent with 8.3.1 are the successful spear phishing attacks against system administrators that were the ultimate root cause of the Target, Home Depot and other breaches.  The reason being that when people with administrative privileges are compromised, it is game over.  Requiring MFA should help prevent that from happening.

While the Council explicitly calls out administrators, I also explain to my clients that it is not just administrators that you need to worry about.  Anyone that has access to bulk data inside the CDE should also be using MFA to gain access.  I have seen people in accounting and customer service roles that can access and decrypt stored CHD, and they are also at risk of phishing and similar attacks.  The last thing you want to have happen, even if the data remains encrypted, is to have entire databases exfiltrated out of your CDE because not everyone with CDE access is using MFA.

But what is absolutely fascinating, and scary, is that the struggle on these calls continues to be defining the CDE systems.

The first problem I seem to encounter in these meetings revolves around the difference between systems/devices that process/store/transmit cardholder data (aka Category 1 systems/devices) and those systems/devices that are connected to those Category 1 systems/devices (aka Category 2 systems/devices).  The guidance that I give my clients here is that if a Category 2 system has the ability to affect the security of a Category 1 system (i.e., the Category 2 system/device has inbound access to the CDE system(s)/device(s)), then administrators should also use MFA to access those Category 2 systems/devices.

The second most common problem that comes up is network segmentation.  Lately the meetings more and more involve a lack of network segmentation.  In those cases, the Council’s Guidance column provides your answer.  The Guidance states in the second paragraph that:

“If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication either when logging onto the CDE network or when logging onto a system.”

What?  As usual, the Council has totally messed up the wording here so it is no wonder people have questions.  What the Council should have said in that second paragraph was:

“If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication when logging onto a CDE system.”

The key point in the second paragraph is that there is NO network segmentation, so there is NO separate CDE network.  How an administrator would use MFA to log on to a separate network that does not exist is beyond me.  The Council really needs to fix that second paragraph.

In situations where the CDE is not explicitly defined, organizations are going to have to implement MFA on a device-by-device basis, not on a network segment basis.  While this can be done, it is a pain in the “you know what” and another reason for segmenting your network to get an explicit CDE.

The final most common issue that comes up with 8.3.1 regards a separate CDE and how to control access to it.  The most common way to access a CDE, particularly for administrators, is through separate out-of-band (OOB) administrative systems (there should always be more than one for redundancy), also referred to as “jump boxes”.

These OOB systems will have two network interface cards (NICs): one connecting to the CDE network segment and one connecting to a network outside of the CDE.  The externally facing NIC connects to a firewall that manages and monitors the network segmentation.  This is because there are typically (or should be) far fewer ports that need to be open from the outside network through the firewall to the OOB system than from the OOB system into the CDE.  Access control to the OOB is typically managed through Active Directory, RADIUS or some other directory system.  I have also seen it managed locally on the OOB system itself, but I would not recommend that practice.
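
To make the “far fewer ports” point concrete, here is a minimal sketch of an explicit allow list for a jump box architecture.  The subnets, host names and ports are illustrative assumptions of my own, not from any particular deployment; real enforcement belongs on the firewall, not in application code.

    # Illustrative allow list for a jump box ("OOB") architecture.
    # All names, subnets and ports below are hypothetical examples.
    ALLOWED_FLOWS = [
        # (source,         destination,     port,  purpose)
        ("10.20.0.0/24",   "jump-box",      443,   "MFA-protected admin logon from corporate"),
        ("jump-box",       "10.99.0.0/24",  22,    "SSH to CDE servers"),
        ("jump-box",       "10.99.0.0/24",  3389,  "RDP to CDE servers"),
        ("jump-box",       "log-server",    514,   "session recording / syslog"),
    ]

    def is_allowed(src: str, dst: str, port: int) -> bool:
        """Return True only if the flow appears in the explicit allow list."""
        return any(src == s and dst == d and port == p
                   for s, d, p, _purpose in ALLOWED_FLOWS)

    # Anything not listed is denied, including direct corporate-to-CDE access:
    print(is_allowed("10.20.0.0/24", "10.99.0.0/24", 3389))  # False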

The OOB also needs to be fully instrumented meaning that every keyboard entry and mouse click is recorded.  All of that is sent to a separate logging system so that in the event that an issue occurs, the actions of the users of the OOB can be reviewed and a determination made as to how to correct the issue.

The next most common way organizations are controlling access to the CDE is through a virtual desktop infrastructure (VDI) type of solution, because they already have that technology and want to use it to connect to their CDE.  The belief is that this will reduce their scope to only the VDIs that connect to the CDE.  But to their chagrin, they quickly find out that it does not reduce scope the way they think.

The first question about using VDI regards scope: what exactly is in scope if we implement VDI?  While the VDI is obviously in scope, I get a lot of questions and arguments regarding what else is also in scope.  The Council’s previous pronouncements regarding virtualization make it clear that the virtualization infrastructure, such as Citrix or VMware, is definitely in scope in addition to the VDI itself.

But also in scope are the devices that access the OOB/VDI, particularly if they can be used to view or enter sensitive authentication data (SAD) or cardholder data (CHD).  This is because, even with OOB/VDI, these secondary devices can still have SAD/CHD in their memory where it could be accessed by malware.  That does not mean that the full PCI DSS requirements need to be applied to these secondary devices, nor do they necessarily need to be in a separate network segment.  But it does mean that appropriate controls need to be applied to protect these secondary devices from the risks they present to SAD/CHD.

I have a lot of clients that try to get these secondary devices out of scope by using virtual keyboards on the OOB/VDI.  The thinking is that if the keyboard is not used, then the secondary device cannot be in scope.  However, there is still a mouse being used to drive the virtual keyboard, and that still means the secondary device is in scope because the mouse clicks can be recorded by malware and the data retrieved.  The only sure way I have seen secondary devices put out of scope is when an encrypted keypad such as a MagTek DynaPro or IDTECH M100 is used for data entry of SAD/CHD.

The second question I typically get revolves around whether administrators should use MFA before or after logging on to the OOB/VDI solution.  Either method will meet the requirement, but I would think that implementing MFA as part of the OOB/VDI logon process (i.e., before) is going to be much easier than implementing it afterward, where you would have to implement it for each system/device in the CDE.

Hopefully we now all understand 8.3.1.

20 Dec 16

An Update On Multi-Factor Authentication

In the November 2016 Assessor Newsletter there is an update to the Council’s statements at the 2016 Community Meeting’s QSA Forum discussion regarding multi-factor authentication (MFA).

“We had a moment of excitement at the North America Community Meeting in September when we responded to a question in the Assessor Session about MFA. As several of us from the Council pointed out, some techniques historically in use are falling out of favor as acceptable approaches to MFA because, as they are becoming used, they fail to meet the basic requirements of MFA. A recent NIST announcement associated with a proposed revision to NIST Special Publication 800-63 series raised the potential of a sunset date for use of SMS as an out-of-band mechanism for a factor in MFA. Based on the questions asked, we felt a refresher on MFA would be of value.

Assessors should understand that multifactor authentication requires two or more independent factors used in such a way that the presenter of the factors gains no knowledge of whether any factor is valid until all factors have been presented. For example, if the first factor is submitted and results in an indication to the user that it is valid before the second factor is requested, then what you actually have is two, single-factor authentications. The critical issue is not when the validation is actually done; rather it is when feedback to the user is provided. If the user can’t tell which factor failed to grant access, then you have MFA. This common practice is illustrated in Figure 1. Figure 2 illustrates the better practice.

[Figure: MFA validation flows – Figure 1 (multistep) and Figure 2 (single-step)]

Figure 1 is sometimes referred to as a multistep authentication. Figure 2 unifies authentication into a single step. By doing the validation of both factors before providing an indication of authorization success or failure, no information is leaked about either factor.

MFA also requires that the factors be different in type. That is, at least two of the usual three types given below are required:

  • Something you know (e.g., password, PIN, security question challenge)
  • Something you possess (e.g., ICC card, physical token, cryptographic token or private key)
  • Something you are (e.g., physical biometric or behavioral biometric)

The factors must also be independent. Access to one should not grant access to the other. For example, if I use my mobile phone as my device for logging into a system and the system can validate my device with a high-degree of assurance, then it might be the something I possess. However, if it is also where I store my password (or the device to which a one-time-password (OTP) or password reset would be sent), then possession of the device may grant access to both factors. NIST acknowledges this as a risk in its DRAFT NIST Special Publication 800-63B Digital Authentication Guideline: Authentication and Lifecycle Management (5.1.3. Out-of-Band Devices).

Other circumstances may also result in loss of independence, for example, relying on a digital certificate as one factor if it is on the same device into which you are entering your password. If compromise of the device equates to having both the digital certificate and your password, then independence is lost. A similar issue exists when one factor gives access to more than one of the factors used in MFA. This is common with mobile devices that use a single factor to unlock (whether it be a passcode or a biometric) that then gains you access to other authenticators, e.g., stored passwords, the device’s identity, private keys, or software tokens. The assessor should carefully examine any method alleged to be multifactor to verify that it meets all of the requirements. For more information on this subject, consider the following publications:

  • DRAFT NIST Special Publication 800-63-3 Digital Authentication Guideline
  • DRAFT NIST Special Publication 800-63B Digital Authentication Guideline: Authentication and Lifecycle Management
  • DRAFT NIST Special Publication 800-63C Digital Authentication Guideline: Federation and Assertions
  • ISO 19092:2008 Financial Services Biometrics Security Framework
  • ISO/IEC 27040:2015 Information technology — Security techniques — Storage security

[1] Per our current PCI DSS FAQ, multistep authentication may also qualify as multifactor, as long as at least two types of factors are used and the first step is not sufficient to gain knowledge of (or constructive use of) the second authentication factor. Note that an updated version of this FAQ will be published shortly.”
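
To make the Figure 1 versus Figure 2 distinction concrete, here is a minimal sketch of the single-step approach the Council prefers.  The credential store and values are toys of my own making, not any vendor’s implementation; the point is simply that both factors are collected and validated together before any feedback is given.

    import hmac
    import hashlib

    # Toy credential store for illustration only; a real system would use a
    # directory service and a proper one-time password (OTP) verifier.
    USERS = {
        "admin": {
            "pw_hash": hashlib.sha256(b"correct horse battery staple").hexdigest(),
            "otp": "123456",  # stand-in for an OTP delivered out of band
        }
    }

    def authenticate(username: str, password: str, otp: str) -> bool:
        """Single-step MFA (Figure 2): validate all factors together and
        return only a generic pass/fail, never which factor was wrong."""
        record = USERS.get(username)
        if record is None:
            return False
        pw_ok = hmac.compare_digest(
            hashlib.sha256(password.encode()).hexdigest(), record["pw_hash"])
        otp_ok = hmac.compare_digest(otp, record["otp"])
        # Both factors are evaluated before any response is returned.
        return pw_ok and otp_ok

    # The multistep anti-pattern (Figure 1) would instead confirm the password
    # before ever asking for the OTP, leaking which factor failed.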

So let us discuss what we probably agree with the Council on in their statements above because that is the easier discussion.

I think most security professionals would agree with the discussion that the factors must be independent of the device being used to log onto the systems.  As a result, if you have the RSA SecurID Software Token or Symantec VIP apps on a cell phone or tablet, that device should not also be able to log onto the systems you are trying to protect.  The same holds true with the practice of putting a certificate on a device for MFA.  The rationale is that if an attacker has the device and the device owner’s credentials, MFA is doing nothing because the second factor will either already be on the device or will be displayed there.

However, the “moment of excitement” occurred because that was not the discussion that took place at the QSA session.  What was stated at that session was that ALL out-of-band MFA to anything other than a traditional fob was no longer allowed.  I was not the only one who interpreted the statements that way; a lot of people in that ballroom heard the exact same thing, which is why there was a “moment of excitement”.  And rightly so, as that would have rendered about 90% of MFA solutions totally non-compliant.

There has been a lot of back channel discussion between QSAs regarding the Community Meeting MFA discussion.  One of the first discussions was about the risk involved.  While we mostly agree with the Council’s position on the independence issue, we have concerns about full adoption of all of NIST’s recommendations regarding MFA.  The Council has acted as though SMS and voice MFA were killed by NIST, but that is not the case.  What NIST is saying is:

“Note: Out-of-band authentication using the PSTN (SMS or voice) is deprecated, and is being considered for removal in future editions of this guideline.”

Deprecated means that it is not recommended, but is still allowed.  Why?

Because there is a risk of SMS being intercepted, but doing so is not nearly as easy a task as, say, a man-in-the-middle attack over Wi-Fi.  During the back channel discussions, it was questioned whether the Council truly realizes the real-world risk of intercepting SMS and how that risk plays out for a government entity or a bank versus your run-of-the-mill organization.  It is not a risk that has a “one size fits all” rating because of the complexity of the task.  What has the security community up in arms is that NIST’s recommendation is probably a good thing for the government or a bank to follow, but SMS might still be acceptable for a small business when the alternative is no MFA at all or, even worse, lying to their bank that they have MFA.

Keep in mind that this is interception, so the target will not receive the message; only the attacker will receive it.  If the attacker also wants to pass the message along to the target to avoid suspicion, that further adds to the complexity.  In order to intercept SMS, one has to accomplish one of the following.

  • Infect the target’s smartphone with a virus.
  • Reissue the target’s SIM.
  • Hack the PSTN.
  • Intercept the target’s cell service via a Stingray type of device.

It is relatively easy to infect smartphones on a large scale.  However, it is very hard to infect a particular smartphone or group of smartphones without the attacker physically getting their hands on the phone(s).  Given the prevalence of using fingerprints and patterns to log onto phones, even physically having the phone does not make infecting it a quick task, and it requires equipment to break in and infect the device.  The odds of doing that without the target(s) becoming suspicious are probably very low.

Reissuing a target’s SIM is relatively easy but creates a huge timing issue.  Because it works only once, the attacker must reissue the SIM right at the time the target is receiving the SMS MFA message or they will miss the code.  The likelihood of that timing working out is very, very low, even for employees of government entities.

So this leaves us with hacking the PSTN and using a Stingray device.  Hacking the PSTN is also supposedly relatively easy.  Here are the steps required to intercept SMS.

  • The attacker must create their own fake call processing capability (MSC).
  • The attacker must then get the real MSC to release the target’s phone to the fake MSC.
  • The attacker must then point their fake MSC to their own device for the SMS MFA message.
  • The attacker must then wait for the target to logon to generate an SMS MFA request.
  • The attacker must then use the SMS MFA before the target generates a new SMS MFA because they did not receive the original SMS MFA.

The first problem is creating a fake MSC.  This is not as easy as you might think for your run-of-the-mill attacker.  Governments have them, criminal organizations have them, but your average hacker going after credit cards is not going to have such a capability unless they are extremely serious about their craft, as there are much easier ways to go after cardholder data (CHD).

But assuming we have someone that is truly determined and has such a capability, they must then intercept the SMS MFA message and use it before the target gets wise that their SMS is being intercepted.  This means the attacker has to hope that their target is not a heavy user of SMS.  Portio Research estimates that there are around 16 million SMS messages sent every minute in the world.  Given there are approximately 6.8 billion phones in the world, that means that your target will, on average, receive just over three messages a day via SMS.  One of those is likely to be the MFA message you are trying to intercept, probably the first message of the day.  So predictability is on the side of the attacker.
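
For what it is worth, the arithmetic behind that “just over three messages a day” figure works out as follows, using the estimates quoted above:

    # Back-of-the-envelope check of the figures quoted above.
    sms_per_minute = 16_000_000        # Portio Research estimate
    phones_worldwide = 6_800_000_000   # approximate number of phones in the world

    sms_per_day = sms_per_minute * 60 * 24           # roughly 23 billion messages a day
    avg_per_phone = sms_per_day / phones_worldwide   # roughly 3.4 messages per phone per day

    print(f"{avg_per_phone:.1f} SMS per phone per day")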

That said, most users of SMS MFA are only going to try twice to get their SMS MFA message before they call the help desk to find out what the problem is with the MFA solution.  It is at that point that the attacker will likely be found out, because the help desk will discover that the user complaining is already logged onto the systems.  So just because the attacker has access does not necessarily mean they are home free and can do as they please.

As a result, hacking SMS through the PSTN, while possible, is probably a risk that only a very high-value target will have to face.

So in this discussion of SMS MFA risk, what we have left is using a Stingray device to intercept the target’s mobile service.  This will be like drinking from a firehose, because you will have to grab not only your target’s service but that of everyone else near your Stingray device.  That brings up the next issue, which is that your Stingray device will have to stay in close proximity to your target in order to grab the information you desire.  If your target is truly mobile, that could be very problematic unless you have the resources of the FBI or CIA to install Stingray devices on every cell tower in town.  Again, I would say the likelihood of such an attack is relatively low for all but the most determined attackers who will stop at nothing to get into an organization.

At the end of this mental exercise, we again question the Council adopting NIST’s recommendation regarding SMS MFA without considering the actual real-world risk.  Just because a threat exists does not mean the risk is automatically high because NIST is getting ready to deprecate it.  Again, NIST is securing the government and shares its research with the rest of us because we, as taxpayers, paid for it and deserve the results.  That said, that does not mean that everything NIST produces is always relevant to every organization outside of the government.  Most of it is, but not everything.  This SMS MFA deprecation will probably be relevant at some point, but for the current timeframe, SMS MFA is better than no MFA.

But that brings us to the fact that NIST did not say that SMS MFA cannot be used as they did with SSL and Early TLS.  All NIST did say was that they do not recommend it and that sometime in the future they may not allow it.  As a result, if an organization is using SMS MFA, it is still allowed to be used.  NIST has only put organizations on notice that at some point, SMS MFA will no longer be allowed.

But by their statements, the Council has taken NIST’s future deprecation comment to mean that SMS MFA is dead now, and that is false.  Yes, organizations should probably look at any SMS MFA solution skeptically from here on out, but SMS MFA is still allowed by NIST, just not recommended because of the risk.  That said, and as discussed above, we question whether the risk presented is realistic for all organizations given the effort required.

So let us bring this back to the real world.  The vast majority of large retailers have implemented or are in the process of implementing P2PE/E2EE solutions with tokenization.  Those implementations that are in process will likely be done by the end of 2017.  The remaining 98% of retailers will likely never encounter such an attack because the effort required to tap SMS just does not justify the reward.

There is a tremendous installed base of MFA infrastructure, and the Council threatened the vast majority of that install base with statements that did not match what NIST was actually saying.  That is what we are arguing over and what drew the “moment of excitement” at the Community Meeting.

In the end, while it is good to know that NIST believes SMS MFA to be a bad solution going forward, exactly what is the Council protecting with its statements?  With CHD no longer stored by large retailers, the risk sits with the small retailers, transaction gateways, transaction processors and banks.  So the Council’s and NIST’s recommendations should be focused on those entities where the risk actually lies, not painted with a broad brush across all organizations.

The Council has chastised us all over the years for not focusing on the risk presented in our assessments.  It is time for the Council to take some of that same medicine and recognize that not every NIST pronouncement needs to be tossed out to the PCI community as though it is gold.  The Council also needs to recognize the risk actually presented and act accordingly.  It is no longer 2008, when organizations were doing little to protect SAD/CHD.

A lot has changed in the decade since the Council was founded.

10 Dec 16

The Council Releases Draft Scope And Network Segmentation Information Supplement

Quietly on Friday, December 9, 2016, the PCI SSC released the draft Information Supplement titled ‘Guidance for PCI DSS Scoping and Network Segmentation’.  As with all Information Supplements, the information documented in it does not replace any of the requirements in the PCI standards.  These documents contain only guidance and suggestions as to how organizations can comply with the PCI standards.

Overall this Information Supplement does not break much new ground regarding the clarifications that have been given over the years on these two subjects.  The Council has taken a much simpler approach to defining categories of systems than did the Open PCI DSS Scoping Toolkit (OPDST).  The Council only has three categories:

  • CDE Systems (Category 1A/B in the OPDST)
  • Connected-to and/or Security-Impacting Systems (Category 2A/B/C/X in the OPDST)
  • Out-of-scope Systems (Category 3 in the OPDST)

One thing the Council has done is provide some good examples for how to prove systems are out of scope.  If a system meets ALL of the following criteria, then it is considered out of scope.

  • System component does NOT store, process, or transmit CHD/SAD.
  • System component is NOT on the same network segment or in the same subnet or VLAN as systems that store, process, or transmit CHD.
  • System component cannot connect to or access any system in the CDE.
  • System component cannot gain access to the CDE nor impact a security control for CDE via an in-scope system.
  • System component does not meet any criteria described for connected-to or security-impacting systems, per above.

The Council goes on to say that even though these systems are out of scope for PCI compliance, they still need to be secured and patched regularly to ensure the overall security of the organization.
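
Put another way, the supplement reduces to a simple decision tree: anything that stores, processes or transmits CHD/SAD (or sits on the same segment as systems that do) is a CDE system; anything that can connect to or affect the security of the CDE is a connected-to/security-impacting system; and only a system meeting every one of the criteria above is out of scope.  A toy sketch of that logic, in my own wording rather than the Council’s:

    def categorize(handles_chd_sad: bool,
                   same_segment_as_cde: bool,
                   can_connect_to_cde: bool,
                   can_impact_cde_security: bool) -> str:
        """Toy decision helper mirroring the supplement's three categories."""
        if handles_chd_sad or same_segment_as_cde:
            return "CDE system"
        if can_connect_to_cde or can_impact_cde_security:
            return "Connected-to / security-impacting system"
        # Out of scope only when ALL of the exclusion criteria above hold.
        return "Out-of-scope system"

    # Example: a patch server that can reach CDE hosts is in scope as connected-to.
    print(categorize(False, False, True, False))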

However, there are two points I noted that will likely require some additional clarification from the Council as they are going to potentially cause issues with a lot of organizations.

In the second paragraph on page 7, the document states:

“The existence of separate network segments alone does not automatically create PCI DSS segmentation. Segmentation is achieved via purpose-built controls that specifically create and enforce separation and to prevent compromises originating from the out-of-scope network(s) from reaching CHD.”

The paragraph taken as a whole seems to imply that the Council is taking the conservative position that only firewalls can be considered network segmentation controls.  It is the phrase “purpose-built controls” that needs to be further defined by the Council.  Earlier in the document an example is provided using firewalls, which is the reading this paragraph definitely lends itself to.

In the past, the Council has said that access control lists (ACLs) and encrypted tunnels also constitute valid network segmentation.  However, this paragraph calls into question whether those are now considered “purpose-built controls” or not.  One would assume so, but as we have all learned in the past, one should never assume with the Council.  As a result, it would be great if the Council could clarify exactly what it means by “purpose-built controls” in the final release of this document.

The next point of concern is on page 11 in the Connected-to and/or Security-Impacting Systems table.  The third bullet down in the list of criteria states:

“System component can impact configuration or security of the CDE, or how CHD/SAD is handled—for example, a web redirection server or name resolution server.”

It would appear from this statement that the Council has brought Web servers that perform a redirect into scope for PCI compliance as they are considered ‘connected to’ systems.  That will be a huge blow to merchants using redirects to keep their Web servers from having to be ASV scanned and meeting all of the other PCI requirements contained in SAQ A-EP.

The only remaining question is whether Web sites using iFrames will now also be in scope for SAQ A-EP compliance.  Time will tell.

I have no idea when the final version of this document may be released.  But if the Non-Listed Encryption Solutions Information Supplement is any indication, it could be released to the public as soon as this coming Monday.

02 Dec 16

Not Tested Clarification

In the November 2016 Assessor Newsletter from the PCI SSC, there is a clarification on what ‘Not Tested’ actually means and implies.  I am sure this will really get some service providers whipped up as it will create some issues with work they perform on behalf of their customers.

The following is taken directly from that newsletter.

“Recently, AQM has received some questions about the impact of using “Not Tested” as a response within a completed ROC. This article is intended to address a few points briefly, with published documentation to follow.

  1. Due to an oversight, the option for “Not Tested” was not included in the summary findings table within the summary overview when that table was introduced with the ROC Reporting Template for use with PCI DSS v3.2. We will publish an errata for the ROC Reporting Template shortly.
  2. Some have asked whether one can have a compliant AOC in instances where “Not Tested” was used. While PCI SSC is not able to comment on matters of compliance, we would direct you to read the verbiage at Part 3 PCI DSS Validation of the Attestation of Compliance below:

[Figure: AOC Part 3 – PCI DSS Validation]

How to achieve “all questions answered affirmatively” is the question. PCI SSC does not consider “Not Tested” to be an affirmative statement. The difference between “Not Tested” and “Not Applicable” is that no testing at all is performed for “Not Tested” whereas for “Not Applicable” some testing is performed to confirm a given control is truly not applicable. As such, between “Not Tested” and “Not Applicable,” only “Not Applicable” can be considered an affirmative response.

The intent in introducing “Not Tested” was to achieve a better level of transparency as to the level of compliance and this clarification supports that intent. If you have questions or suggestions, please reach out to the QSA Program Manager.”

It is that second-to-last paragraph that will likely send most people off the deep end.  The comment that the “PCI SSC does not consider ‘Not Tested’ to be an affirmative statement” really got me going.  What exactly then was the point of using ‘Not Tested’ if you did not consider it an affirmative statement?  By the way, “affirmative” used as an adjective means “asserting the truth, validity, or fact of something.”  Last I checked, ‘Not Tested’ would be considered a truth or fact.

There are a number of options for the Council to take here.

  1. Change the wording in the ‘Compliant’ box in Part 3 to reflect that an entity is compliant with all of the requirements tested.
  2. Give us a box in Part 3 that says ‘Compliant with Exceptions’ or something of that ilk which would allow those entities not testing certain requirements to still be judged compliant with what was tested.
  3. Tell QSAs that an AOC cannot be filled out for assessments that mark any requirements as ‘Not Tested’ because an AOC is not relevant.

I remember that at a number of past Community Meetings, various Council representatives repeatedly and emphatically told those of us from the accounting community that PCI assessments were not SAS 70 (now SSAE 16) engagements whenever we would invoke SAS 70-like rules for sampling, testing and the like.  Well, I hate to say it, but the Council is sure turning them into one with all of these pronouncements.

UPDATE: On the Council’s Webinar on Thursday, December 15, it was announced that the Council will be making changes to the AOC and will issue new guidance on this topic sometime in the first Quarter of 2017.  So stay tuned for an update.

04 Oct 16

The Great Multi-Factor Authentication Debate

The Council brought back the Assessor Session at this year’s Community Meeting, and it took only one question to get passions flowing.  The question sought clarification of a comment made by Ralph Poore, Director, Emerging Standards at the Council, about multi-factor authentication (MFA).

First, a little background to get everyone up to speed, remembering that the US National Institute of Standards and Technology (NIST) SP 800-63B standard in question is still a draft and has not been finalized.  However, everyone expects the standard to be adopted largely unchanged, with only minor wording revisions that would not affect its overall recommendations.

What NIST stated about SMS is in section 5.1.3.2, Out-of-Band Verifiers, of SP 800-63B, which states:

“Due to the risk that SMS messages or voice calls may be intercepted or redirected, implementers of new systems SHOULD carefully consider alternative authenticators. If the out-of-band verification is to be made using the public switched telephone network (PSTN), the verifier SHALL verify that the pre-registered telephone number being used is not associated with a VoIP (or other software-based) service. It then sends the SMS or voice message to the pre-registered telephone number. Changing the pre-registered telephone number SHALL NOT be possible without two-factor authentication at the time of the change. OOB using the PSTN (SMS or voice) is deprecated, and may no longer be allowed in future releases of this guidance.”

NIST is only calling out that implementers of new SMS or voice MFA systems should carefully consider the security implications of using SMS or voice.  NIST has not invalidated existing SMS and voice MFA solutions.  They just do not want any new implementations unless there is no choice because the process is already underway.  So while SMS or voice MFA can still be used in existing implementations, NIST is saying that future implementations of SMS and voice MFA are out of the question, which basically kills those solutions going forward.

With that as our background, in a Community Meeting session Ralph Poore stated that MFA to devices such as smartphones, or back to the same device or browser (i.e., “soft” solutions), was not considered secure because of statements in the NIST draft of SP 800-63B.  I was attending a different session when Ralph made his statements, but I can tell you that my cell phone started buzzing with text messages from various people asking if we had all heard the same thing.  But since there was no Q&A at that session, there was no way to clarify Ralph’s statements.

As a result, this issue was brought up in the Assessor Session to clarify those MFA comments.  Ralph stood and reiterated his remarks and that sent the room into an absolute tizzy.  It was pointed out that NIST had only invalidated SMS and voice for future two-factor authentication, not all soft token solutions such as RSA’s or Symantec’s application solutions.  However, Ralph continued to repeat his remarks saying that they had invalidated all soft solutions.  That brought the house down and people were loudly explaining that his comments were invalidating decades of recommendations for OOB MFA solutions.  Eventually the room calmed down and the Council agreed to review their position on such “soft” MFA solutions.

So that is where we are with this subject.  Time will tell if the Council revises its statements on MFA and comes into line with what NIST is saying on the subject.

30 Sep 16

2016 North American PCI Community Meeting

It was a hectic week out in Las Vegas at the Community Meeting this year.  I wish I had more time this year to just hang out with everyone, but I was in the middle of a number of assessments that needed to get done, so I was working at night and attending sessions during the day.

By the time you read this, the slide decks from the sessions will have been posted on the Council’s Web site.  So all of you that attended will be able to download those presentations.  You go to the link provided in the program guide, provide your name, organization name, email address and the password from the program guide (ve4eqepR) and you are in.

The Council tried the 20-minute “TED Talk” format again with the Wednesday sessions.  A number of the sessions I attended could have easily used an extra 10 minutes, if not a complete hour.  I know the Council is trying to move things along and cover a lot of information, but discussing topics like “the cloud” or EMV standards just cannot be properly accomplished in 20 minutes, no matter how good the speaker or how organized the presentation.

Here are some of the more notable highlights.

The Assessor Session Is Back

Possibly the most anticipated session of the Community Meeting this year was the return of the Assessor Session after a two-year absence.  But unlike previous years, when this session occurred before the start of the Community Meeting, this time it was moved to the end.  I heard a number of complaints throughout the week from assessors about it being at the end of the meeting.  Yet when Thursday lunch came around, there were a lot of QSAs, ISAs and ASVs that adjusted their travel schedules (Guru included) to attend this session.

While I originally agreed with people that moving the Assessor Session to the end was not a good idea, the more I have thought about it, the more I think it was better at the end.  That way, assessors can get questions about topics that came up during the meeting answered while we are all still together.  I know we all want to get home, but I think the Assessor Session offers more value to all of us at the end.

On the not-so-good side, the Council chose to use up an hour and 10 minutes presenting a variety of topics, some of which took way too long to discuss.  But the larger question was why this material was not presented during the main conference.  Not only did all of the meeting attendees miss out, but there were people that did not get their questions asked.  I am also sure that running long discouraged a lot of people from asking questions.

That said, there were a number of good questions asked during this session and the Council rewarded five people with large PCI SSC coffee mugs for their “good” questions.

One question though really created a stir.  I will address that question regarding multi-factor authentication (MFA) as a separate post to be published later.  However I will say this about this discussion.  The Council really needs to go back and re-think their position on MFA if what they said is accurate.

The Council was asked about SAQ A and where it is headed.  The concern in the assessor community is that the mechanism that issues/controls the iFrame/redirect needs protection.  However the changes to SAQ A for v3.2 did not seem to address this obvious risk.  Based on how the question was answered, I am guessing that the hosting community is trying to keep SAQ A as simple and easy as possible regardless of the risk.

Another area that the Council agreed to review was the change to requirement 3.2 in the ROC Reporting Template.  In v3.2 of the template you can no longer mark those requirements as Not Applicable; however, it was pointed out that an ‘NA’ is still allowed in SAQ D.  The reason for seeking this clarification was related to past comments from the Council to follow the SAQs for P2PE (SAQ P2PE) and outsourced eCommerce (SAQ A) when filling out a ROC for merchants with these solutions.  It was pointed out that neither of these SAQs has requirement 3.2 in it, so how is a QSA/ISA supposed to respond to it in the reporting template if it cannot be marked as ‘NA’?

Understanding The Current Data Breach Landscape (aka Verizon DBIR Report Discussion)

When Verizon sends out Chris Novak, you know you will get a great presentation on the Data Breach Investigations Report, aka ‘the DBIR’.  This year was no exception, albeit somewhat depressing, as Chris again pointed out that most breaches are the result of sloppy operations, lax security and insecure applications.  Essentially, security issues that we should have gotten past a long, long time ago but have not.

Architecting for Success

Who better to talk about success than a representative from the Jet Propulsion Laboratory (JPL), talking about how to develop spacecraft to explore the most inhospitable environments we know: outer space and planetary bodies.  Brian Muirhead was the keynote speaker on Wednesday and is the Chief Engineer for the Mars Science Laboratory, the group that designed and developed the various Mars exploration rovers.  He gave a great discussion on how to look out for problems and develop self-managing devices.  It was very interesting and, I am sure, an eye opener for people that we need to stop accepting the sloppy and messy solutions we get for handling cardholder data.

Internet of Things Keynote

The Thursday keynote was just a great time.  While there seemed to be very little directly relevant to PCI compliance presented by Ken Munro and an associate from Pen Test Partners, it was a fabulous time exploring the wonderful world of flawed technology, from a tea kettle to a refrigerator to a child’s doll.  In the case of the child’s doll, they removed the word filter database and thereby allowed the doll to say things that no child’s toy should say.

What was relevant to PCI was the ease with which these folks were able to reverse engineer the firmware and software used by these devices.  It gave a lot of people in the room unfamiliar with IoT and penetration testing pause as to how easily seemingly sophisticated technology can be abused.

Cloud Security

While it was great to see Tom Arnold from PSC, the even better thing about this presentation was that Amazon provided an actual human being, in the form of Brad Dispensa, to talk about Amazon’s EC2 cloud.  While billed as a discussion on incident response, the session provided great insight into AWS’s EC2 service offering as well as the variety of new tools available to manage the EC2 environment and to provide auditors and assessors with information regarding the configuration of that environment.  The key takeaway from this session is that organizations using EC2 can provide everything needed for conducting a PCI assessment using their EC2 Master Console.

EMVCo

Brian Byrne from EMVCo gave a great 20-minute session on EMV.  The slide deck will be more valuable than the presentation because he had so much content to share and so little time to share it in.  Of note was his discussion of version 2.0 of Three-Domain Secure, otherwise known as 3D Secure or 3DS.  While v1.0 will remain under the control of Visa, EMVCo has taken over management and development of the 3DS standard.  The new version is in draft and only available to EMVCo members, so this was the first time I had been able to see what the new version has to offer.  But because of the time constraint, I will need to wait for the slide deck to be published to know more.

PCI Quality Assurance Program

Brandy Cumberland of the Council provided a great presentation on the Council’s quality assurance program, with which all QSAs have become familiar.  I appreciated her discussion of James Barrow, who took over the AQM program after most of us wanted to kill his predecessor for creating one of the most brutal QA programs we had ever seen.  James’ efforts to make the AQM program more relevant cannot be overstated, as he took over a very troubled affair.  This was a bittersweet discussion, as James passed away right after last year’s Community Meeting and will be greatly missed by those of us that came to know and respect him.  Brandy took over the AQM program when James left the Council and has been doing a great job ever since.  She is possibly one of the best resources the Council has and does the AQM program proud.

Application Security at Scale

The last great session of the conference I saw was from Jeff Williams of Contrast Security.  The reason this session was great was that it discussed what application developers can do to instrument their applications not only for security, but also for operational issues.  He introduced us to interactive application security testing (IAST) and runtime application self-protection (RASP).  The beauty of this approach is that applications get security in the form of embedded instrumentation that results in actionable analytics, which then allow decisions to be made to respond to threats to these applications.  It sounds like an interesting approach and concept and I cannot wait to see it in action.

As always, it was great to see and catch up with all of my friends in Las Vegas at the PCI Community Meeting.  It was also great to meet a lot of new people as well.  I look forward to seeing all of you again next year in Orlando.




