Author Archive for PCI Guru

24 Mar 17

Service Provider AOCs and Section 2g

It is becoming obvious that there are a lot of QSAs out there who did not get the message when v3 of the PCI DSS came out and the new AOC for service providers was introduced.  This has been a big topic at the last few community meetings and recently became a big topic with a number of my clients, as I continue to see service provider AOCs that are not correct.  I have even mentioned this problem in an earlier post about service providers, but the problem continues.

As a result, I have decided this is a great time to discuss the problem and get everyone to ensure it is fixed so that we stop the arguments over something that is clearly documented in the service provider AOC form and needs to be done correctly.  Because there is no excuse for messing this up.

Section 2a

Before we get to the actual problem, we need to talk about section 2a in the service provider AOC as it drives the problem.

PCI AOC SP Section 2a

In section 2a of the service provider AOC, a QSA is to call out every service in the ‘Name of service(s) assessed’ field and to check every box in the ‘Type of Service(s) assessed’ section for every service named as part of the service provider’s PCI assessment.

QSAs seem to be doing very well in checking the appropriate boxes for ‘Type of Service(s) assessed’ on the AOCs that I encounter.  However, for the ‘Name of service(s) not assessed’, QSAs do not seem to be doing quite as well.  The reason will become obvious when I discuss section 2g.

One important note though.  When checking the ‘Others’ box (or any of the ‘Other’ boxes), please make sure to list ALL the other services that were assessed and NEVER, EVER use “etc.” in that explanation.  All the services in the ‘Others’ category MUST BE listed individually and specifically.  Again, why this matters will become obvious when we get to section 2g.

And before we move on, I get questions about cloud services, i.e., SaaS, PaaS and IaaS.  Those are services and should be listed as such in the ‘Name of service(s) assessed’.

Section 2g

PCI AOC SP Section 2g

Notice that shaded ‘Note’ that is in bold and italics that states:

“One table to be completed for each service covered by this AOC. Additional copies of this section are available on the PCI SSC website.”

What this note means is that you need to have the same number of section 2g’s as you have named services in section 2a.  And this is where a lot of QSAs and their QA reviewers are going wrong with their service provider AOCs.

For example, if you have named five services in 2a, there had better be five pages of 2g filled out.  One for each of those five named services.  By the same token, if you are relying on check boxes under the ‘Type of Service(s) assessed’ section to define the services covered, then you should call those out separately in 2g.

The bottom line though is that, however a QSA breaks things out, there must be a separate 2g section for each individual service provided.

In some very rare instances there can be services that have the same coverage/responsibilities for the requirements in the matrix, and those may be combined into one table.  The Council has even said as much in clarifying this form.  However, the Council has also been very clear that when combining services into one 2g section, those services MUST have EXACTLY the same responsibilities, and that is where a lot of QSAs get into trouble.  So my recommendation is to just do one 2g for every service and stop trying to combine things.
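To make the arithmetic concrete, here is a rough sketch in Python.  The dictionary layout is purely my own hypothetical model of sections 2a and 2g (the real AOC is a locked Word form, not structured data), but it captures the two rules discussed above: name every service specifically, and produce one 2g table per named service.

```python
# Hypothetical model of a service provider AOC -- illustration only.

def check_aoc(aoc: dict) -> list:
    """Return a list of problems found in a modeled service provider AOC."""
    problems = []
    services = aoc.get("services_assessed", [])

    # "etc." is never acceptable -- every service must be named
    # individually and specifically in section 2a.
    for name in services:
        if "etc" in name.lower():
            problems.append("Vague service name: %r" % name)

    # One section 2g table is required for every named service in 2a.
    tables_2g = aoc.get("section_2g_tables", [])
    if len(tables_2g) != len(services):
        problems.append(
            "%d services named in 2a but %d section 2g tables present"
            % (len(services), len(tables_2g))
        )
    return problems


aoc = {
    "services_assessed": ["Hosting", "Tokenization, etc."],
    "section_2g_tables": [{"service": "Hosting"}],
}
for problem in check_aoc(aoc):
    print(problem)
```

Run against the sample data, the check flags both the vague “etc.” service name and the missing 2g table, which are exactly the two mistakes I keep seeing in the field.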

Now the QSAs that I have had discussions (arguments) with over their flawed service provider AOCs always bring up the fact that the AOC Word document is locked and they cannot make changes.  I always point them back to that ‘Note’ in 2g which states:

“Additional copies of this section are available on the PCI SSC website.”

According to the guidance provided by the Council at the Community Meetings, QSAs are to append those additional 2g sections to the end of the AOC.

That said, some of us in the QSA community have unlocked the Word document (NOT approved by the Council) and just copy section 2g and insert it inline in the AOC for the number of services we need sections for and fill them out.

One final note about section 2g.  Please follow the instructions to the letter when filling out the table/matrix for each service.  I cannot tell you the number of AOCs I encounter where ‘Partial’ or ‘None’ is checked and then nothing is documented in the ‘Justification’ column.  The instructions are very clear about how the ‘Justification’ column is supposed to be filled out, so there is no excuse for leaving it blank.

And for the merchants that have to deal with these service provider AOCs: it is up to you to police these documents.  If you receive an AOC that is not properly filled out, it is up to you to point out your concerns to the service provider.  If the service provider does not address your concerns, you have a couple of options at your disposal.

  • Contact the PCI SSC with your concerns at qsa@pcisecuritystandards.org. Document your concern(s) in your email and include the AOC in question.
  • If the service provider is listed on either the Visa or MasterCard service provider list on their respective Web sites, you should notify that card brand as well. This is because both of those card brands should have caught the error before listing the service provider on their Web site.  For Visa, go to http://www.visa.com/splisting/learnmore.html and use the appropriate email address for your region under the PCI DSS Validated Service Providers row.  For MasterCard, use the pcireports@mastercard.com email address and, as with the Council, document your concern(s) in your email and include the AOC in question.

By contacting the Council, you will provide feedback that a QSAC is not conducting its service provider assessments appropriately and that the Council may need to put that QSAC through its assessor quality management (AQM) process.

Notifying the card brands will do two things.  It will point out a potential flaw in their service provider listing process that needs to be addressed.  But it could also potentially put the service provider in a different status on the card brands’ lists.

21 Mar 17

Stripe Questions Come Back

I have had a couple of readers ask this question, so I thought it was time to go back and take a look at it again.  It was back in 2013 that I first brought up Stripe as a potential compliance scoping issue.

The question being posed is:

“How can Stripe claim on its Web site that its JavaScript checkout solution allows for a merchant to use SAQ A?”

The first thing to notice is the sidebar regarding the various Stripe solutions.  There are three distinct solutions offered by Stripe:

  • Checkout
  • Elements
  • Stripe.js (the original solution)

In the PCI DSS Guidelines section is the following:

“Elements and Checkout host all form inputs containing card data within an IFRAME served from Stripe’s domain.

As long as you serve your payment pages over TLS, and use either Checkout or Elements as the only way of handling card information, Stripe automatically creates a combined SAQ A and Attestation of Compliance (AOC) for you.”

The first important point is that, if a merchant is using the Stripe.js solution, it does NOT qualify for the SAQ A.  This is the original solution that I wrote about back in 2013.  But the fact that Stripe.js is not SAQ A eligible is an important point for all developers to note as it could easily be missed.

What has changed is that Stripe has created two new methods for processing payments: Checkout and Elements.  Those methods create an iFrame that, in theory, minimizes scope and allows SAQ A to be used by the merchant.

But this statement – “As long as you serve your payment pages over TLS, and use either Checkout or Elements as the only way of handling card information …” – is all in the execution by the merchant’s Web site, as not all iFrames are created equal.  What a merchant and their developer must do is ensure that the iFrame is created ONLY in the customer’s browser and NOT on the merchant’s Web server.  If done that way, then the statement regarding SAQ A is accurate.

The reason I bring this up is that I have encountered solutions using an iFrame where the iFrame is built on the merchant’s server and not in the customer’s browser.  The merchant points to the fact that the solution is an iFrame and claims that their Web server is therefore out of scope.  However, since the iFrame is constructed on the merchant’s Web server and then sent to the customer, the solution is no longer eligible for SAQ A and the merchant must follow SAQ A-EP.

As a result, it is important that a QSA look very closely at how a merchant’s Web site executes to ensure that the iFrame is never created on the merchant’s Web server.
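As a rough illustration of what to look for, the sketch below (my own, not Stripe’s, and certainly not a substitute for an actual assessment) scans the HTML that actually leaves the merchant’s server.  The field-name hints are common examples I chose for the illustration; the point is that, for SAQ A eligibility, the server-rendered page should contain no card inputs at all because the payment iFrame is injected in the customer’s browser by the provider’s JavaScript.

```python
# Naive scope check -- illustration only, not a compliance test.
# Field-name hints below are common examples, not a definitive list.

CARD_FIELD_HINTS = ("card_number", "cardnumber", "cc-number", "cvv", "cvc")

def serves_card_fields(merchant_html: str) -> bool:
    """True if the server-rendered page itself contains card input fields.

    If the HTML leaving the merchant's server already contains card data
    inputs, the iFrame is being built server side and SAQ A is off the
    table; the merchant is into SAQ A-EP territory at a minimum.
    """
    html = merchant_html.lower()
    return any(hint in html for hint in CARD_FIELD_HINTS)


good_page = '<form><script src="https://js.stripe.com/v3/"></script></form>'
bad_page = '<form><input name="card_number"><input name="cvc"></form>'
print(serves_card_fields(good_page), serves_card_fields(bad_page))
```

A QSA would of course verify this dynamically in the browser as well; a static string check like this only catches the most obvious server-side implementations.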

Based on the examples of what I saw regarding the Checkout and Elements solutions, as long as the code samples for Checkout or Elements only execute in the customer’s browser, SAQ A would be a valid assessment option.

15 Mar 17

Why We Should Be Concerned About The Verifone Breach

On March 7, Brian Krebs broke the news that Verifone, one of the largest card terminal manufacturers, had suffered a breach. The next day Verifone told the world that the breach was no big deal. No big deal, right? I am not so sure, and here is my rationale.

For those of you unfamiliar with Verifone, Verifone is not only a manufacturer of points of interaction (POI, aka card/transaction terminals), it also provides transaction processing services to merchants. As a result, any breach of such an organization puts a lot of the security of the card processing ecosystem at tremendous risk.

Extent Of The Breach

Here is what Verifone has told us about the extent of the breach.

“According to third-party forensic teams, this cyber attempt was limited to approximately two dozen U.S. gas station convenience stores and occurred over a short time period. No other merchants were targeted and the integrity of our payment networks and Verifone’s payment terminals remained secure and fully operational.

Verifone’s information security team identified evidence of this very limited cyber intrusion into our corporate network in January 2017, and we proactively notified Visa, MasterCard and other card schemes.

In concert with our partners, Verifone immediately implemented additional security controls across its corporate networks and began work to determine the type of information that may have been targeted.

It is also worth noting that there have been no adverse events or misuse of any data resulting from this incident. Verifone, partner agencies, and law enforcement remain vigilant and will continue to monitor for this.

We believe that our immediate response and coordination with partners and agencies has made the potential for misuse of information extremely limited.”

The first thing that any forensic examiner will tell you is that determining the extent of a breach is not a trivial process. It takes time. Most times, a lot of time. The reason is that attackers can be very stealthy, covering their tracks by wiping logs, leaving behind malware/backdoors, and using other techniques to obscure what they did and how they did it.  Even though Verifone took almost two months to acknowledge the breach and tell everyone that things are fine, all may not necessarily be well within Verifone.  Only time will tell if that is true.

The troubling thing about Verifone’s statement, likely demanded by their lawyers, is the wording at the start of the very last sentence – “We believe”. That is legalese that will give them an out should their forensic teams find more issues or should issues turn up later.

From the original Krebs post:

“Asked about the breach reports, a Verifone spokesman said the company saw evidence in January 2017 of an intrusion in a “limited portion” of its internal network, but that the breach never impacted its payment services network.”

This was followed up by an update from Mr. Krebs after his original post, in which Verifone stated:

“According to the forensic information to-date, the cyber attempt was limited to controllers at approximately two dozen gas stations, and occurred over a short time frame. We believe that no other merchants were targeted and the integrity of our networks and merchants’ payment terminals remain secure and fully operational.”

Hold on a moment.  What is a “short time frame”?  Oh, and by the way, the attackers had access to controllers at around two dozen gas stations?  And then there is that “According to the forensic information to-date” comment.  That statement would seem to imply that Verifone is not necessarily done with its forensic examination.

So did Verifone or someone else find this breach?

“But a source with knowledge of the matter told KrebsOnSecurity.com that the employee alert Verifone sent out on Jan. 23, 2017 was in response to a notification that Verifone received from the credit card companies Visa and Mastercard just days earlier in January.”

So, like most organizations, Verifone was notified by a third party that it had likely been breached.  In this case, two card brands recognized fraudulent transactions that traced back to merchants serviced by Verifone.

But follow that statement with this one regarding what happened once they were notified.

 “Verifone’s information security team identified evidence of this very limited cyber intrusion into our corporate network in January 2017 …”

My concern with this and the prior statement is that it takes a while for the card brands to recognize fraud.  I have seen it take the brands as little as a month and as much as two years to notify a merchant or service provider that they think there has been a breach.  How long it takes depends on the extent of the breach (i.e., small versus large merchants, small versus large service provider(s), number of transactions/cards involved), how quickly the cards are used for committing fraud, how quickly those fraudulent transactions are reported back to banks by their customers, and how quickly the brands determine a pattern and trace that pattern back to a likely source or sources.  As a result, I am very suspicious of how long the intruders were in the network and of the likelihood that the intrusion was truly as “limited” as Verifone is leading us to believe.

The bottom line in all of this, in my very humble opinion, is that this could just be the tip of the iceberg and this breach could be more extensive than Verifone knows and could have larger ramifications.

Why You Should Care

Given that I suspect that the attackers were in Verifone’s network for a while, I would assume that more than just Verifone’s service provider operation was targeted and compromised.

The first clue behind this suspicion is that Visa and MasterCard were the ones that notified Verifone that something was going on.  As I stated earlier, it takes the brands a while to detect a breach, which likely means the attackers were inside Verifone for more than just a short period of time.  In addition, it is rare that PANs collected in a breach are used immediately after they are obtained.  The reason is that there are bigger rewards if they are not used immediately.

The next clue in our puzzle is this statement from the Krebs post.

“The source said his employer shared with the card brands evidence that a Russian hacking group known for targeting payment providers and hospitality firms had compromised at least a portion of Verifone’s internal network.”

If this is accurate, then it is highly likely that more than just card information was gathered.  What was also likely gathered was source code for things like card terminal firmware and software such as Verishield, Verifone’s end-to-end encryption (E2EE) solution.  Attackers focused on targeting payment providers would know that, once inside an organization like Verifone that provides such solutions, they should grab its software as well as cardholder data (CHD).  If you have the ability to exfiltrate CHD, why not also exfiltrate other useful information such as source code, certificates, encryption keys and other sensitive information?

The only good news in this regard is that, while a lot of transaction gateways and processors use Verishield, they all have their own certificates and encryption keys.  So the attackers would only have gotten certificates and keys for the merchants processing through Verifone.  Since Verifone is an encryption endpoint, it is possible that the attackers did not get the certificates or encryption keys at all, because they would not necessarily need them to get at the clear text CHD.  However, one should never assume that is the case.

Now What?

The net of all of this is that if you have Verifone terminals, Verishield or other Verifone applications, you should probably be doing a lot more monitoring of that hardware and software, since there is no reason to believe it has not been compromised.

It will be interesting as time goes on to see if this is the end of the discussion or if more will come out on the Verifone breach.

07 Mar 17

Verifone Investigating Breach

Just a quick note to everyone since this could affect a lot of merchants and service providers.  Brian Krebs is reporting that Verifone is investigating a possible breach of their systems.  More on it here.

11 Feb 17

The Council Gets A Clue

Late this week the PCI Security Standards Council issued a new information supplement titled ‘Multi-Factor Authentication’ after the brouhaha that occurred last fall at the Community Meeting in Las Vegas.  For once, the Council has issued an excellent reference regarding the issues of multi-factor authentication (MFA).  I still have a couple of minor bones to pick with this document, but more on that later.

If you understand the concepts of MFA, you can skip through the document to the end where the Council presents four scenarios on good and bad MFA.  These are well documented and explain the thought process behind why each scenario works or does not work for MFA.  The key takeaway of all of this is the independence of the MFA solution from the logon process.  The Council is getting in front of the curve here, stopping people from creating insecure situations in which they only believe they are using MFA.  Properly implemented, MFA minimizes or stops breaches that come through administrators or users with access to bulk card data.

Now for a few things that I do not necessarily agree with in this document.

The first involves the Council’s continued belief that hardware security modules (HSM) are actually only hardware.  On page four, the following statement is made.

“Hardware cryptographic modules are preferred over software due to their immutability, smaller attack surfaces, and more reliable behavior; as such, they can provide a higher degree of assurance that they can be relied upon to perform their trusted function or functions.”

The Council has made similar statements over the years under the mistaken assumption that HSMs are only hardware.  HSMs are hardware that use software to manage keys.  There are standards (e.g., FIPS 140) that ensure an HSM remains secure, but these devices are predominantly software driven.  That is not to say that just any device can serve as an HSM, but a lot of us in the security community are concerned that the Council continues to perpetuate the myth that HSMs are only hardware, which is patently false.

My other issue comes on page six as part of the discussion regarding the use of SMS for MFA.

“PCI DSS relies on industry standards—such as NIST, ISO, and ANSI—that cover all industries, not just the payments industry. While NIST currently permits the use of SMS, they have advised that out-of-band authentication using SMS or voice has been deprecated and may be removed from future releases of their publication.”

While everything in this statement is accurate, it gives the uninitiated the impression that SMS or voice is no longer a valid MFA solution.  I know this because I have fielded questions from clients and prospects on this subject, particularly about SMS.  The key is that this is not like SSL and early TLS, which NIST called out as insecure and said should no longer be used.  This is a “heads up” from NIST that there is an issue that may make SMS and voice not secure enough for MFA.

But while there is a risk, a lot of us in the security community question the viability of that risk for a typical merchant as compared to a bank or a government agency.  While I would not want any bank or government agency to use SMS or voice for MFA, a small business may not have a choice given their solution.  The reason is that an attack on SMS or voice takes enough effort that only a high-value target such as a bank or government agency would be worth it.  In my very humble opinion, while a total ban is the easy answer, this is an instance where the Council should take a more nuanced approach toward the use of SMS and voice for MFA.  The bottom line to me is that a small merchant using any MFA solution, even a flawed one, is better off than one using no MFA solution at all.

I would recommend the following approach to manage this risk.

  • Level 4 merchants can be allowed to use SMS or voice for MFA.
  • Level 1, 2 and 3 merchants would be allowed to transition away from SMS and voice to a more secure MFA solution within one year of NIST stating that they are no longer acceptable.
  • All service providers would not be allowed to use SMS or voice for MFA once NIST states that both are no longer acceptable. This means service providers should start transitioning now if they use either.
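For illustration, the three-tier recommendation above could be encoded as a simple policy helper.  The function name, parameters and date handling are entirely my own invention for this sketch; nothing like this exists in the PCI DSS or any NIST publication.

```python
# Hypothetical encoding of the suggested SMS/voice MFA transition policy.
# 'nist_deprecation_date' stands in for whenever NIST formally drops
# SMS/voice; merchant levels follow the card brands' 1-4 scheme.
from datetime import date, timedelta


def sms_mfa_allowed(entity, merchant_level, today, nist_deprecation_date):
    """Return True if SMS/voice MFA would still be acceptable under the
    policy sketched above."""
    if entity == "service_provider":
        # Service providers stop the moment NIST deprecates SMS/voice.
        return nist_deprecation_date is None or today < nist_deprecation_date
    if merchant_level == 4:
        # Level 4 merchants may continue to use SMS or voice.
        return True
    # Level 1-3 merchants get a one-year transition window.
    if nist_deprecation_date is None:
        return True
    return today < nist_deprecation_date + timedelta(days=365)
```

For example, with a hypothetical deprecation date of January 1, 2018, a Level 1 merchant could continue using SMS until the end of 2018, while a service provider could not use it at all after that date.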

Those are my thoughts on the subject.  I look forward to the comments I am sure to receive.

07 Jan 17

Getting Ready For 8.3.1

I have had some interesting meetings with clients lately regarding PCI DSS requirement 8.3.1 and multi-factor authentication (MFA).  Requirement 8.3.1 is a best practice until January 31, 2018, but for once organizations are trying to get a jump on it.  As a refresher, the requirement states:

“Incorporate multi-factor authentication for all non-console access into the CDE for personnel with administrative access.”

But these calls have had little to do with discussing MFA itself.  People seem to finally have that down cold.  What is becoming painfully obvious, and somewhat disturbing, from these calls is the realization that a lot of organizations have no defined cardholder data environment (CDE).

Honestly, we have been discussing scope and the definition of the CDE for over a decade now.  Yet people are still having problems defining their CDE.  It makes you wonder what these folks have been doing for the last 10 years that they still do not have a defined CDE.

I refer a lot of these clients to the Guidance offered in the PCI DSS as a start to gaining an understanding.  That guidance says:

“This requirement is intended to apply to all personnel with administrative access to the CDE. This requirement applies only to personnel with administrative access and only for non-console access to the CDE; it does not apply to application or system accounts performing automated functions.

If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication either when logging onto the CDE network or when logging onto a system.

If the CDE is segmented from the rest of the entity’s network, an administrator would need to use multi-factor authentication when connecting to a CDE system from a non-CDE network. Multi-factor authentication can be implemented at network level or at system/application level; it does not have to be both. If the administrator uses MFA when logging into the CDE network, they do not also need to use MFA to log into a particular system or application within the CDE.”

What the Council is preventing with 8.3.1 is the kind of successful spear phishing attacks against system administrators that were the ultimate root cause of the Target, Home Depot and other breaches.  The reason is that when people with administrative privileges are breached, it is game over.  Requiring MFA should prevent that from happening.

While the Council explicitly calls out administrators, I also explain to my clients that it is not just administrators you need to worry about.  Anyone that has access to bulk data inside the CDE should also be using MFA to gain access.  I have seen people in accounting and customer service roles that can access and decrypt stored CHD and who can also be at risk from phishing and similar attacks.  So it is a good idea that anyone with access to the CDE and bulk data also use MFA.  The last thing you want, even if the data remains encrypted, is to have entire databases exfiltrated out of your CDE because not everyone with CDE access is using MFA.

But what is absolutely fascinating and scary is that the struggle on these calls continues to surround defining CDE systems.

The first problem I encounter in these meetings revolves around the difference between systems/devices that process, store or transmit cardholder data (aka Category 1 systems/devices) and systems/devices that are connected to those Category 1 systems/devices (aka Category 2 systems/devices).  The guidance I give my clients is that if a Category 2 system has the ability to affect the security of a Category 1 system (i.e., the Category 2 system/device has inbound access to the CDE system(s)/device(s)), then administrators should also use MFA to access those Category 2 systems/devices.
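That guidance boils down to a simple decision rule, sketched below.  The Category 1/2 terminology is this blog’s shorthand and the function is my own illustration, not official PCI SSC language.

```python
# Sketch of the decision rule described above -- illustration only.

def admin_needs_mfa(category, has_inbound_access_to_cde):
    """Should administrative access to this system/device require MFA?

    category 1: processes, stores or transmits CHD (in the CDE proper).
    category 2: connected to Category 1 systems/devices.
    """
    if category == 1:
        # In the CDE itself: 8.3.1 applies directly.
        return True
    if category == 2 and has_inbound_access_to_cde:
        # A connected system that can affect the security of the CDE.
        return True
    return False
```

The point of the rule is the second branch: a Category 2 system with inbound access to the CDE can be used as a stepping stone, so its administrators should be held to the same MFA bar.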

The second most common problem that comes up is network segmentation.  Lately the meetings seem to involve the lack of network segmentation more and more.  In those cases the Council’s Guidance column provides your answer.  The Guidance states in the second paragraph that:

“If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication either when logging onto the CDE network or when logging onto a system.”

What?  As usual, the Council has totally messed up the wording here, so it is no wonder people have questions.  What the Council should have said in that second paragraph was:

“If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication when logging onto a CDE system.”

The key point in the second paragraph is that there is NO network segmentation, so there is NO separate CDE network.  How an administrator would use MFA to log on to a separate network that does not exist is beyond me.  The Council really needs to fix that second paragraph.

In situations where the CDE is not explicitly defined, organizations are going to have to implement MFA on a device by device basis, not on a network segment basis.  While this can be done, it is a pain in the “you know what” and another reason to segment your network and get an explicit CDE.

The final most common issue that comes up with 8.3.1 concerns a separate CDE and how to control access to it.  The most common way to access a CDE, particularly for administrators, is through separate out of band (OOB) administrative systems, also referred to as “jump boxes” (there should always be more than one for redundancy).

These OOB systems will have two network interface cards (NIC): one connecting to the CDE network segment and one connecting to a network outside of the CDE.  The external facing NIC connects to a firewall that manages and monitors the network segmentation.  This is because there are typically (or should be) many, many fewer ports required to be open from the OOB to the firewall than from the OOB into the CDE.  Access control to the OOB is typically managed through Active Directory, RADIUS or some other directory system; I have also seen it managed on the OOB itself, but I would not recommend that practice.

The OOB also needs to be fully instrumented, meaning that every keyboard entry and mouse click is recorded.  All of that is sent to a separate logging system so that, in the event an issue occurs, the actions of the OOB’s users can be reviewed and a determination made as to how to correct the issue.

The next most common way organizations control access to the CDE is through a virtual desktop (VDI) type of solution, because they already have that technology and want to use it to connect to their CDE.  The belief is that this will reduce their scope to only the VDIs that connect to the CDE.  But, to their chagrin, they quickly find out that it does not reduce scope the way they think.

The first question about using VDI regards scope, as in: what exactly is in scope if we implement VDI?  While the VDI is obviously in scope, I get a lot of questions and arguments regarding what else is in scope.  The Council’s previous pronouncements regarding virtualization make it clear that the virtualization solution, such as Citrix or VMware, is definitely in scope in addition to the VDI.

But also in scope are the devices that access the OOB/VDI, particularly if they can be used to view or enter sensitive authentication data (SAD) or cardholder data (CHD).  This is because, even with OOB/VDI, these secondary devices can still have SAD/CHD in their memory that could be accessed by malware.  That does not mean that the full PCI DSS requirements need to be applied to these secondary devices, nor do they necessarily need to be in a separate network segment.  But it does mean that appropriate controls need to be applied to protect these secondary devices from the risks they present to SAD/CHD.

I have a lot of clients that try to get these secondary devices out of scope by using virtual keyboards on the OOB/VDI.  The thinking is that if the keyboard is not used, then the secondary device cannot be in scope.  However, there is still a mouse being used to drive the virtual keyboard, and that still means the secondary device is in scope, because the mouse clicks can be recorded by malware and the data retrieved.  The only sure way I have seen secondary devices put out of scope is when an encrypted keypad such as a MagTek DynaPro or IDTECH M100 is used for data entry of SAD/CHD.

The second question I typically get revolves around whether administrators should use MFA before or after logging on to the OOB/VDI solution.  Either method will meet the requirement, but implementing MFA as part of the OOB/VDI logon process (i.e., before) is going to be much easier than implementing it afterward, when you would have to implement it for each system/device in the CDE.

Hopefully we now all understand 8.3.1.

20 Dec 16

An Update On Multi-Factor Authentication

In the November 2016 Assessor Newsletter there is an update to the Council’s statements at the 2016 Community Meeting’s QSA Forum discussion regarding multi-factor authentication (MFA).

“We had a moment of excitement at the North America Community Meeting in September when we responded to a question in the Assessor Session about MFA. As several of us from the Council pointed out, some techniques historically in use are falling out of favor as acceptable approaches to MFA because, as they are becoming used, they fail to meet the basic requirements of MFA. A recent NIST announcement associated with a proposed revision to NIST Special Publication 800-63 series raised the potential of a sunset date for use of SMS as an out-of-band mechanism for a factor in MFA. Based on the questions asked, we felt a refresher on MFA would be of value.

Assessors should understand that multifactor authentication requires two or more independent factors used in such a way that the presenter of the factors gains no knowledge of whether any factor is valid until all factors have been presented. For example, if the first factor is submitted and results in an indication to the user that it is valid before the second factor is requested, then what you actually have is two, single-factor authentications. The critical issue is not when the validation is actually done; rather it is when feedback to the user is provided. If the user can’t tell which factor failed to grant access, then you have MFA. This common practice is illustrated in Figure 1. Figure 2 illustrates the better practice.

[Figure: MFA diagram. Figure 1 shows multistep authentication; Figure 2 shows single-step MFA.]

Figure 1 is sometimes referred to as a multistep authentication. Figure 2 unifies authentication into a single step. By doing the validation of both factors before providing an indication of authorization success or failure, no information is leaked about either factor.

MFA also requires that the factors be different in type. That is, at least two of the usual three types given below are required:

  • Something you know (e.g., password, PIN, security question challenge)
  • Something you possess (e.g., ICC card, physical token, cryptographic token or private key)
  • Something you are (e.g., physical biometric or behavioral biometric)

The factors must also be independent. Access to one should not grant access to the other. For example, if I use my mobile phone as my device for logging into a system and the system can validate my device with a high-degree of assurance, then it might be the something I possess. However, if it is also where I store my password (or the device to which a one-time-password (OTP) or password reset would be sent), then possession of the device may grant access to both factors. NIST acknowledges this as a risk in its DRAFT NIST Special Publication 800-63B Digital Authentication Guideline: Authentication and Lifecycle Management (5.1.3. Out-of-Band Devices).

Other circumstances may also result in loss of independence, for example, relying on a digital certificate as one factor if it is on the same device into which you are entering your password. If compromise of the device equates to having both the digital certificate and your password, then independence is lost. A similar issue exists when one factor gives access to more than one of the factors used in MFA. This is common with mobile devices that use a single factor to unlock (whether it be a passcode or a biometric) that then gains you access to other authenticators, e.g., stored passwords, the device’s identity, private keys, or software tokens. The assessor should carefully examine any method alleged to be multifactor to verify that it meets all of the requirements. For more information on this subject, consider the following publications:

  • DRAFT NIST Special Publication 800-63-3 Digital Authentication Guideline
  • DRAFT NIST Special Publication 800-63B Digital Authentication Guideline: Authentication and Lifecycle Management
  • DRAFT NIST Special Publication 800-63C Digital Authentication Guideline: Federation and Assertions
  • ISO 19092:2008 Financial Services Biometrics Security Framework
  • ISO/IEC 27040:2015 Information technology — Security techniques — Storage security

[1] Per our current PCI DSS FAQ, multistep authentication may also qualify as multifactor, as long as at least two types of factors are used and the first step is not sufficient to gain knowledge of (or constructive use of) the second authentication factor. Note that an updated version of this FAQ will be published shortly.”
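The "single step" validation the Council describes in Figure 2 can be sketched in code. This is a minimal illustration only, not production logic: the user name, stored values, and `authenticate` function are invented for the example, and a real system would compare salted password hashes and a server-generated OTP rather than plaintext values. The point is the shape of the control flow: both factors are checked before any feedback is given, and the caller learns only a single pass/fail result.

```python
import hmac

# Hypothetical stored credentials for illustration only.  In practice
# these would be salted password hashes and a server-side OTP secret.
STORED = {"alice": {"password": "correct horse", "otp": "123456"}}

def authenticate(user, password, otp):
    """Single-step MFA: validate BOTH factors, then give ONE answer.

    There is no early return after the first factor and no per-factor
    error message, so no information leaks about which factor failed
    (the Figure 2 approach).
    """
    record = STORED.get(user, {"password": "", "otp": ""})
    pw_ok = hmac.compare_digest(password, record["password"])
    otp_ok = hmac.compare_digest(otp, record["otp"])
    # Single combined result -- the only feedback the user ever sees.
    return pw_ok and otp_ok

print(authenticate("alice", "correct horse", "123456"))  # True
print(authenticate("alice", "correct horse", "000000"))  # False
print(authenticate("alice", "wrong password", "123456")) # False
```

Contrast this with the Figure 1 anti-pattern, where the function would return "bad password" before the OTP was ever requested, turning MFA into two single-factor authentications.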

So let us start with where we probably agree with the Council's statements above, because that is the easier discussion.

I think most security professionals would agree with the discussion that the factors must be independent of the device being used to log onto the systems.  As a result, if you have the RSA SecurID Software Token or Symantec VIP apps on a cell phone or tablet, that device should also not be able to log onto the systems you are trying to protect.  The same holds true with the practice of putting a certificate on a device for MFA.  The rationale being that if an attacker has the device and the device owner’s credentials, MFA is doing nothing because the second factor will either already be on the device or will be displayed there.

However, the “moment of excitement” occurred because that was not the discussion that happened at the QSA session.  What was stated at that session was that ALL out-of-band MFA to anything other than a traditional fob was no longer allowed.  I was not the only one who interpreted the statements that way; a lot of people in that ballroom heard the exact same thing, and that is why there was a “moment of excitement”.  And rightly so, as that would have rendered about 90% of MFA solutions totally non-compliant.

There has been a lot of back channel discussion between QSAs regarding the Community Meeting MFA discussion.  One of the first topics was the risk involved.  While we mostly agree with the Council’s position on the independence issue, we have concerns about full adoption of all of NIST’s recommendations regarding MFA.  The Council has acted as though SMS and voice MFA were killed by NIST, but that is not the case.  What NIST is saying is:

“Note: Out-of-band authentication using the PSTN (SMS or voice) is deprecated, and is being considered for removal in future editions of this guideline.”

Deprecated means that it is not recommended, but is still allowed.  Why?

Because while there is a risk of SMS being intercepted, doing so is not nearly as easy a task as, say, a man-in-the-middle attack over Wi-Fi.  During the back channel discussions, it was questioned whether the Council truly appreciates the real world risk of intercepting SMS and how that risk differs between a government entity or a bank and your run of the mill organization.  It is not a risk with a “one size fits all” rating because of the complexity of the attack.  That is what has the security community up in arms: NIST’s recommendation is probably a good one for a government or a bank to follow, but SMS MFA might still be acceptable for a small business compared to no MFA at all or, even worse, lying to their bank that they have MFA.

Keep in mind that this is interception, so the target will not receive the message; only the attacker will.  If the attacker also wants the message to reach the target, that further adds to the complexity.  In order to intercept SMS, the attacker has to accomplish one of the following.

  • Infect the target’s smartphone with a virus.
  • Reissue the target’s SIM.
  • Hack the PSTN.
  • Intercept the target’s cell service via a Stingray type of device.

It is relatively easy to infect smartphones on a large scale.  However, it is very hard to infect a particular smartphone or group of smartphones without the attacker physically getting their hands on the phone(s).  Given the prevalence of fingerprints and patterns to unlock phones, even physically having the phone makes infecting it a slow task that requires special equipment.  The odds of doing that without the target(s) becoming suspicious are probably very low.

Reissuing a target’s SIM is relatively easy but creates a huge timing issue.  Because it works only once, the attacker must reissue the SIM right at the time the target is receiving the SMS MFA message or they will miss the code.  The likelihood of that timing working out is very, very low, even for employees of government entities.

So this leaves us with hacking the PSTN and using a Stingray device.  Hacking the PSTN is also supposedly relatively easy.  Here are the steps required to intercept SMS:

  • The attacker must create their own fake call processing capability (MSC).
  • The attacker must then get the real MSC to release the target’s phone to the fake MSC.
  • The attacker must then point their fake MSC to their own device for the SMS MFA message.
  • The attacker must then wait for the target to logon to generate an SMS MFA request.
  • The attacker must then use the SMS MFA before the target generates a new SMS MFA because they did not receive the original SMS MFA.

The first problem is creating a fake MSC.  This is not as easy as you might think for your run of the mill attacker.  Governments have them, criminal organizations have them, but your average hacker going after credit cards is not going to have such capability unless they are extremely serious about their craft as there are much easier ways to go after cardholder data (CHD).

But assuming we have someone that is truly determined and has such a capability, they must then intercept the SMS MFA message and use it before the target gets wise that their SMS is being intercepted.  This means the attacker has to hope that their target is not a heavy user of SMS.  Portio Research estimates that there are around 16 million SMS messages sent every minute in the world.  Given there are approximately 6.8 billion phones in the world, that means your target will, on average, receive just over three SMS messages a day.  One of those is likely to be the MFA message the attacker is trying to intercept, probably the first message of the day.  So predictability is on the side of the attacker.
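As a sanity check on that "just over three messages a day" figure, the back-of-the-envelope arithmetic (using the Portio Research estimate and the approximate worldwide phone count quoted above, both of which are the post's assumptions, not measured values) works out as:

```python
sms_per_minute = 16_000_000       # Portio Research estimate, worldwide
phones = 6_800_000_000            # approximate phones in the world

sms_per_day = sms_per_minute * 60 * 24   # ~23 billion messages per day
avg_per_phone = sms_per_day / phones     # average messages per phone per day

print(round(avg_per_phone, 2))  # 3.39
```

So an average user receives roughly 3.4 SMS messages a day, which is why one intercepted message has a reasonable chance of being the MFA code.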

That said, most users of SMS MFA are likely going to try only twice to get their SMS MFA message before they call the help desk to find out what the problem is with the MFA solution.  It will likely be at that point that the attacker is found out, because the help desk will discover that the complaining user is already logged onto the systems.  So just because the attacker has access does not necessarily mean they are home free and can do as they please.

As a result, hacking SMS through the PSTN, while possible, is probably a risk that only a very high value target will have to face.

So in this discussion of SMS MFA risk, what we have left is using a Stingray device to intercept the target’s mobile service.  This will be like drinking water from a firehose because you will grab not only your target’s service but that of everyone else near your Stingray device.  Which brings up the next issue: your Stingray device will have to stay in close proximity to your target in order to grab the information you desire.  If your target is truly mobile, that could be very problematic unless you have the resources, like the FBI or CIA, to install Stingray devices on every cell tower in town.  Again, I would say the likelihood of such an attack is relatively low for all but the most determined attackers who will stop at nothing to get into an organization.

At the end of this mental exercise, we again question the Council adopting NIST’s recommendation regarding SMS MFA without considering the actual real world risk.  Just because a threat exists does not mean the risk is automatically high, even if NIST is getting ready to deprecate the technique.  Again, NIST is securing the government and is sharing the results of their research with the rest of us because we, as taxpayers, have paid for it and deserve the results.  That said, that does not mean that everything they produce is relevant to every organization outside of the government.  Most of it is, but not everything.  This SMS MFA deprecation will probably become relevant at some point, but for the current timeframe, SMS MFA is better than no MFA.

But that brings us to the fact that NIST did not say that SMS MFA cannot be used, as the Council did with SSL and early TLS.  All NIST said was that they do not recommend it and that at some point in the future they may not allow it.  As a result, if an organization is using SMS MFA, it is still allowed.  NIST has only put organizations on notice that at some point, SMS MFA will no longer be allowed.

But by their statements, the Council has taken NIST’s future deprecation comment to mean that SMS MFA is dead now, and that is false.  Yes, organizations should probably look at any SMS MFA solution skeptically from here on out, but SMS MFA is still allowed by NIST, just not recommended because of the risk.  That said, and as has been discussed, we question whether the risk presented is realistic for all organizations given the effort required.

So let us bring this back to the real world.  The vast majority of large retailers have implemented, or are in the process of implementing, P2PE/E2EE solutions with tokenization.  Those implementations still in process will likely be done by the end of 2017.  The remaining 98% of retailers will likely never encounter an SMS interception attack because the effort required to tap SMS just does not justify the reward.

There is a tremendous installed base of MFA infrastructure, and the Council’s statements, which did not match what NIST was saying, threatened the vast majority of that install base.  That is what we are arguing over and what drew the “moment of excitement” at the Community Meeting.

In the end, while it is good to know that NIST believes SMS MFA to be a bad solution going forward, exactly what is the Council protecting with their statements?  With CHD no longer stored by large retailers, the risk is at the small retailers, transaction gateways, transaction processors and banks.  So the Council’s and NIST’s recommendations should be focused at those entities that actually pose a risk and not painted with a broad brush against all organizations.

The Council has chastised us all over the years for not focusing on the risk presented in our assessments.  It is time for the Council to take some of that same medicine and recognize that not every NIST pronouncement needs to be tossed out to the PCI community as though it is gold.  The Council also needs to recognize the risk presented and act accordingly.  It is no longer 2008, and most organizations are no longer storing SAD/CHD.

A lot has changed in the decade since the Council was founded.



