Archive for March, 2019

11 Mar 19

The New NIST Password Guidance

In June 2017, the US National Institute of Standards and Technology (NIST) issued new guidance on authentication in the form of four Special Publications (SP).

  • SP 800-63 is an overview of digital identity and the other three publications in the series.
  • SP 800-63A discusses digital enrollment and identity proofing.
  • SP 800-63B discusses authentication and lifecycle management.
  • SP 800-63C discusses federation and assertions.

This post is about SP 800-63B, which covers the new password guidance from NIST.  In NIST's vernacular, a password/passphrase is referred to as a ‘Memorized Secret Authenticator’.  Here are the key attributes of this new NIST guidance (a minimal validation sketch follows the list):

  • A Memorized Secret Authenticator must be a minimum of eight characters in length, and systems should allow lengths of at least 64 characters.
  • All printable ASCII characters should be allowed in a Memorized Secret Authenticator.
  • A replacement Memorized Secret Authenticator issued to reset a forgotten/corrupted/compromised Memorized Secret Authenticator must be at least six characters long.
  • A Memorized Secret Authenticator can be checked against a blacklist of words/phrases to resist guessing or brute force attacks.
  • No hints are allowed to be provided to unauthenticated users.
  • When changing a Memorized Secret Authenticator, the provider should ensure that the new Memorized Secret Authenticator is not known to be compromised and is not a known word or expected value such as ‘12345678’, ‘password’ or similar.
  • The Memorized Secret Authenticator can be displayed if it cannot be readily observed by others.
  • Verifiers should not impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets.
  • Verifiers should not require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers shall force a change if there is evidence of compromise of the authenticator.
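
To make the shape of these rules concrete, here is a minimal sketch (in Python) of the checks a verifier following SP 800-63B might run when a user picks a memorized secret.  The blacklist contents and function name are mine, purely for illustration; NIST does not prescribe an implementation.

```python
# Illustrative blacklist; a real verifier would use a much larger corpus of
# breached and commonly used passwords.
BLACKLIST = {"password", "12345678", "qwerty123", "letmein"}

def check_memorized_secret(secret: str) -> list[str]:
    """Return the reasons a proposed memorized secret fails SP 800-63B style checks."""
    problems = []
    # At least 8 characters; lengths of 64+ should be accepted, so no upper cap here.
    if len(secret) < 8:
        problems.append("must be at least 8 characters")
    # All printable ASCII (including space) should be allowed; only control
    # characters are flagged.
    if any(ord(ch) < 32 or ord(ch) == 127 for ch in secret):
        problems.append("contains control characters")
    # Compare against a blacklist of compromised or expected values.
    if secret.lower() in BLACKLIST:
        problems.append("matches a blacklisted value")
    # Deliberately absent: composition rules (mixed case, digits, symbols)
    # and arbitrary expiration, per SP 800-63B.
    return problems

print(check_memorized_secret("password"))                      # blacklisted
print(check_memorized_secret("correct horse battery staple"))  # []
```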

A lot of clients are pushing hard to use these new NIST rules in place of the PCI DSS requirements.  As a reminder, the PCI DSS requires the following when it comes to passwords.

  • 8.2.3 Passwords/passphrases must meet the following: require a minimum length of at least seven characters and contain both numeric and alphabetic characters. Alternatively, the passwords/passphrases must have complexity and strength at least equivalent to the parameters specified above.
  • 8.2.4 Change user passwords/passphrases at least once every 90 days.
  • 8.2.5 Do not allow an individual to submit a new password/passphrase that is the same as any of the last four passwords/passphrases he or she has used.
  • 8.2.6 Set passwords/passphrases for first-time use and upon reset to a unique value for each user, and change immediately after the first use.

So where are we in regard to NIST versus PCI DSS?

On length, we are good.  The PCI DSS requires a minimum of seven characters and NIST requires a minimum of eight, so a password that meets the NIST minimum also meets the PCI DSS minimum.

Complexity is a sticking point: the PCI DSS imposes complexity rules on the composition of passwords, while NIST states that the authentication system should not impose such composition rules.

NIST is more restrictive on checking passwords/passphrases when they are set or changed, including ensuring that they are not known to have been compromised somewhere else.  It also requires that the authentication system force a change if a password/passphrase is believed to be compromised.
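
For illustration, here is one hedged sketch of what a “not known to be compromised” check could look like, using the Pwned Passwords k-anonymity range API (only the first five characters of the password's SHA-1 hash ever leave your system).  Using that particular service, and this exact URL, is my assumption for the example; NIST only says to compare new secrets against compromised values, not how.

```python
import hashlib
import urllib.request

def appears_in_breach_corpus(candidate: str) -> bool:
    """Check a candidate password against the Pwned Passwords range API.

    Only the first 5 hex characters of the SHA-1 hash are sent; the rest of
    the comparison happens locally (k-anonymity model).
    """
    sha1 = hashlib.sha1(candidate.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Response is lines of "HASH_SUFFIX:COUNT"; a match means the password
    # has appeared in a known breach and should be rejected.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if __name__ == "__main__":
    print(appears_in_breach_corpus("password1"))  # almost certainly True
```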

However, it is when we get to changing passwords/passphrases on a specific interval that we run into trouble.  NIST advises that arbitrary (e.g., periodic) changing of passwords/passphrases is not required, whereas the PCI DSS states that passwords/passphrases must be changed at least every 90 days.  NIST relies on its requirement to monitor for compromised credentials to justify not requiring an arbitrary change of passwords.

The first thing that comes to people’s minds is the guidance to requirement 8.2.3, which states:

“Strong passwords/passphrases are the first line of defense into a network since a malicious individual will often first try to find accounts with weak or non-existent passwords. If passwords are short or simple to guess, it is relatively easy for a malicious individual to find these weak accounts and compromise a network under the guise of a valid user ID.

This requirement specifies that a minimum of seven characters and both numeric and alphabetic characters should be used for passwords/passphrases. For cases where this minimum cannot be met due to technical limitations, entities can use “equivalent strength” to evaluate their alternative. For information on variability and equivalency of password strength (also referred to as entropy) for passwords/passphrases of different formats, refer to industry standards (e.g., the current version of NIST SP 800-63.)

Note: Testing Procedure 8.2.3.b is an additional procedure that only applies if the entity being assessed is a service provider.”

What people focus on is the last sentence before that note, which states:

“For information on variability and equivalency of password strength (also referred to as entropy) for passwords/passphrases of different formats, refer to industry standards (e.g., the current version of NIST SP 800-63.)”

They then point the QSA to that statement and claim it gives them license to apply that guidance to requirement 8.2.4 and the other password-related requirements.  Unfortunately, that guidance only applies to 8.2.3, as it clearly references “password strength” and says nothing about change intervals or any other password attributes.

Another key point is that the guidance for 8.2.4 makes no reference to SP 800-63.  The Council will tell you that if SP 800-63 applied to 8.2.4, they would have included the same sort of reference in the guidance for 8.2.4 as they did in 8.2.3.  Without that reference, a QSA should not be using the new NIST guidance to replace the requirements specified in 8.2.4.

So, with that path ruled out, the second thing that comes to people’s minds is to write a compensating control for following the NIST guidance.

There is only one thing wrong with the compensating control approach: a compensating control must go “above and beyond” the PCI DSS requirement.  Above and beyond a 90-day change interval would be a value of less than 90 days, not more.  The testing procedure is very specific that the change interval must be no more than 90 days.  As a result, there is no compensating control that will get you above and beyond the intent of a 90-day change interval.

That is not to say that you and your QSA cannot write such a compensating control.  The question then becomes whether you can get your acquiring bank to sign off on it.  There are a number of banks that are not so diligent with their reviews of PCI ROC filings, and such a compensating control might sail under the radar.  But that is no guarantee.

However, such a compensating control puts your QSAC at risk of remediation if the PCI ROC is selected as part of the Council’s Assessor Quality Management (AQM) review.  Such a compensating control would not be viewed favorably by the Council because it flagrantly violates the rules for compensating controls.  Remediation, while not a death knell for a QSAC, does adversely impact sales of PCI assessments and services and makes current clients uncomfortable, so QSACs avoid going into remediation like the plague.

The bottom line is that until the Council makes a change to the PCI DSS (i.e., v4), you are stuck with its password/passphrase requirements regardless of what other standards-setting bodies state.

UPDATE – June 28, 2019

This is from the PCI SSC June 2019 Assessor Newsletter.

“PCI DSS v3.2.1 to NIST Cybersecurity Framework Mapping
In July, the PCI Security Standards Council (PCI SSC) will be releasing new resources that show how the PCI Data Security Standard (PCI DSS) maps to the NIST Cybersecurity Framework. PCI DSS and the NIST Cybersecurity Framework share the common goal of securing data. The Mapping of PCI DSS to the NIST Cybersecurity Framework will provide a resource for stakeholders to use in understanding how to align security efforts to meet objectives in both PCI DSS and the NIST Cybersecurity Framework.

Details about the Mapping of PCI DSS to the NIST Cybersecurity Framework will be coming soon to the PCI SSC website.”

08 Mar 19

Sins Of The Past

This was a question we got from our last PCI Dream Team session on the Cloud.

“Issue – found CVV in historical call recordings that need to be purge/delete. We are not able to purge the entire call record and still need to keep it for record retention. What tools should be evaluated to help address this issue?”

A lot of organizations are discovering that how they did things in the past did not meet PCI, GDPR or other legal or regulatory requirements for protecting the data in their possession.  Gone are the freewheeling days of collecting data without worrying about how to secure and protect it.  Customers and the public at large are realizing the value of their information and the need to protect it.  All of which gets organizations thinking about how to reduce the risk of holding all of this data now that they are being held responsible for it.  The patchwork of state laws in the US puts a lot of organizations at risk, some more than others.

There are also the sins that come to light down the road.  At large organizations, it is not unusual to have a PCI in-scope application crawl out of the woodwork years later.  It should have been identified way back when the organization was starting out in PCI, but somehow it was missed and has only just turned up.  Unfortunately, these discoveries tend to surface at the 11th hour of the organization’s current PCI assessment, when there is no way to include the application without delaying the issuance of the ROC.

Surprise!

So, let us talk about the last case first: the application that is uncovered very late in the PCI assessment.

What should happen, and in the example cited did happen, was a conversation with the acquiring bank.  The situation and the risk involved (the application was storing encrypted PAN) were explained, and the bank was asked: do we delay filing the ROC and assess this application (likely a delay of longer than 90 days), or do we keep moving ahead as planned and pick up the newly disclosed application in the next assessment?

The bank decided that they did not want to delay the ROC filing since it was just out of our QA process, had been sent to the client for their review and was due in around 30 days.

The client looked further into the application and determined that it could be easily remediated with tokenization from their gateway.  As a result, by the time the next year’s assessment came around, the application had been remediated with tokenization.  We took a look at it, confirmed it no longer contained encrypted PAN, and explained to the bank that it would no longer be in scope.

However, things do not always end that well.  I have also had occasions where no remediation was possible for a variety of reasons and had to go in the following year and assess the newly discovered application in all its PCI compliance (and in some cases non-compliance) glory.

Remediate

Getting back to our original sin, so to speak.

First and foremost, you may not be able to remediate your files due to legal or regulatory constraints.  So, before you go charging ahead on your remediation efforts, make sure you discuss it with your legal and compliance folks to ensure you are not creating an even bigger problem.  Assuming you are allowed to remediate data, you can proceed with reading the rest of this section.

Structured data is typically easy to remediate.  You determine which XML tags, fields or database columns are involved, you develop a program that truncates the PAN to the first six and/or last four digits or erases any information you were not supposed to keep, and you execute it.  Easy.  Well, easy until you take into account backups, which can complicate remediation if you cannot simply erase them.
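
As a sketch of what such a program might look like, here is a minimal example that masks a PAN column in a SQLite table down to first six/last four.  The table and column names are hypothetical, and a real remediation job would also have to deal with replicas and those pesky backups.

```python
import sqlite3

def mask_pan(pan: str) -> str:
    """Keep first six and last four digits of a PAN, mask the rest."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:          # not a plausible PAN, leave untouched
        return pan
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

def remediate_orders_table(db_path: str) -> None:
    # Hypothetical table "orders" with a "card_number" column.
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT rowid, card_number FROM orders").fetchall()
        for rowid, pan in rows:
            conn.execute(
                "UPDATE orders SET card_number = ? WHERE rowid = ?",
                (mask_pan(pan), rowid),
            )
        conn.commit()
    finally:
        conn.close()
```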

Unstructured data, as with call recordings and notes/comments fields, can be a nightmare to remediate.  The reason, of course, is that the data has no structure and does not necessarily occur in the same place.  Unlike XML or a database, where data is at least tagged or stored in a column, unstructured data exists wherever it exists, and programs that remediate the sensitive data have to find it before they can eradicate it.  That introduces the problem of false positives.  I wrote all about the “fun” of trying to find cardholder data (CHD) five years ago, and it has not necessarily gotten any better.  The bottom line with unstructured data is that it may not be possible to completely remediate the problem.
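
One common way to cut down on false positives when scanning unstructured text is to pair a loose PAN pattern with a Luhn checksum test, since random digit strings rarely pass both.  A minimal sketch follows; the pattern and the 13 to 19 digit length range are assumptions you would tune for the card brands you actually process.

```python
import re

# Loose pattern: 13-19 digits, optionally separated by spaces or dashes.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_likely_pans(text: str) -> list[str]:
    hits = []
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(match.group())
    return hits

# Only the Luhn-valid test PAN is reported; the order reference is dropped.
print(find_likely_pans("order 4111 1111 1111 1111 shipped, ref 1234567890123"))
```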

However, the best you may be able to do is to remediate the data as it is encountered.  Going back to call recordings: if the quality assurance review process, or any process that has someone review recordings, encounters CHD, the reviewer redacts the information so that it is no longer in the file.  Not perfect, but slowly you reduce the amount you are storing.  You still have to encrypt the files for protection, but you are making an effort to reduce risk by reducing the amount of viable data.

Isolate It

This most commonly occurs with call recordings, but I have encountered the occasional “legacy” application that it applies to as well.

In either case, the old system is being decommissioned and a new solution (usually outsourced) is being implemented.  The question comes up: “What do we do with the old system?”  For customer service, legal and/or regulatory reasons it cannot just be wiped and destroyed.  It needs to be retained for some period of time before that can happen.

The answer is to keep the system powered up, but off any other network.  If people need access, they need to go to a PC or workstation connected to a private, air gapped, isolated network consisting only of the old system and the PCs or workstations used to access it.  No internet or other network access is provided, only the network containing those few isolated systems.  This allows the system and workstations to age yet remain protected because of the air gap.  Remember, the PCs and workstations will have to age along with it, because it is highly likely that newer software will not allow connectivity to the old system.  This is why everything needs to be air gapped.

I usually get asked for the reason to keep the old solution powered up.  That comes from a study done long ago by IBM.  What the IBM study found was that systems that get powered off after years of operation have a tendency to fail after being powered off for any extended length of time (i.e., long enough to cool down).  As a result, if you intend to keep the system around and available, you best keep it powered up albeit isolated as discussed earlier.

One of the larger issues with isolation will be monitoring the air gapped network to ensure it remains air gapped and deciding how you will respond if that air gap is breached.  There are a number of ways to address this issue, so pick the solution that best fits your environment; one simple approach is sketched below.
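
As one simple illustration, a scheduled job running on the corporate network can periodically try to reach the supposedly isolated hosts; any successful connection means the air gap is suspect and should raise an alert.  The addresses, ports and alerting hook below are placeholders for whatever fits your environment.

```python
import socket

# Placeholder addresses/ports of the hosts that are supposed to be air gapped.
ISOLATED_HOSTS = [("10.50.1.10", 445), ("10.50.1.11", 22)]

def air_gap_breached() -> list[tuple[str, int]]:
    """Return any 'isolated' host that is reachable from this (corporate) network."""
    reachable = []
    for host, port in ISOLATED_HOSTS:
        try:
            with socket.create_connection((host, port), timeout=3):
                reachable.append((host, port))
        except OSError:
            pass  # unreachable is the expected, healthy result
    return reachable

if __name__ == "__main__":
    breaches = air_gap_breached()
    if breaches:
        # Hook this into whatever alerting/ticketing you already use.
        print(f"ALERT: air gap breach suspected, reachable hosts: {breaches}")
```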

Isolation is not a perfect solution.  It will likely require a number of compensating control worksheets (CCW) to address the fact that you have a number of “antique” systems around.  So be prepared for that work effort as it will likely not be small.

01 Mar 19

Will The Council Kill Off TLS?

On February 6, 2019, a technical paper was published describing a new attack that had been identified against TLS 1.2 and 1.3.  Of course, the first thing a lot of us wondered was, “Will the PCI SSC now kill off TLS 1.2 and 1.3?”

Before panic sets in, I am guessing that TLS 1.2/1.3 will not go away like SSL v3 and TLS 1.0/1.1 did before.  The reason is that this is just another variation of the Bleichenbacher attacks that seem to crop up every so often regarding SSL and TLS.  What is different about this attack is the new side-channel leak approach that was used.

The risk of this attack is best described in the researchers’ technical paper.

 “… even though the use of RSA in secure connections is diminishing (only ≈6% of TLS connections currently use RSA [1, 51]), this fraction is still too high to allow vendors to drop this mode. Yet, as we show in Section VI, supporting this small fraction of users puts everyone at risk, as it allows the attacker to perform a downgrade attack by specifying RSA as the only public key algorithm supported by the server.”

The problem is all related to the use of RSA PKCS#1 v1.5 key exchange in TLS.  The rest of the protocol is just fine.  So, at worst, I could see the Council recommending that RSA PKCS#1 v1.5 no longer be allowed to be used.
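
If it ever came to that, dropping RSA key exchange is a server configuration change rather than a protocol change.  As a hedged sketch, here is how a Python TLS server context could be limited to TLS 1.2 and above with an OpenSSL cipher string that excludes RSA key transport (kRSA); most web servers and load balancers have an equivalent one-line setting.

```python
import ssl

def build_server_context(certfile: str, keyfile: str) -> ssl.SSLContext:
    """TLS server context that keeps TLS 1.2/1.3 but drops RSA key exchange."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # "!kRSA" removes suites that use RSA key transport (PKCS#1 v1.5);
    # ECDHE suites, including those signed with RSA certificates, remain.
    # TLS 1.3 suites never use RSA key transport, so they are unaffected.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!kRSA:!aNULL:!MD5")
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx

# Example (paths are placeholders):
# context = build_server_context("server.crt", "server.key")
```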

Which reminds me of years ago when the US banking regulators came out and stated that by a certain date, Internet Explorer 6 would no longer be allowed to be used for internet banking.  According to the banks at the time, such a move by the regulators would create a support nightmare or, even worse, kill off internet banking.  However, the date came, the banks turned off IE6 and little happened.  Yes, there were a few days of higher than normal support calls about customers not being able to get into their accounts, but those quickly died off.

The issue with RSA PKCS#1 v1.5 is similar to the banking story.  At what point do we draw the line on these sorts of problems?  10% of users?  2% of users?  1% of users?  In this case, 6% of internet users are putting the remaining 94% at risk.  Is it worth it?  Each organization will have to determine whether that risk is acceptable and justify why.



