Archive for the 'Requirement 8 – Assign a unique ID to each person' Category

17 May 2020

DevOps And PCI – Part 1

DevOps is all the rage in organizations that develop applications.  The move to become “Agile” through the implementation of methodologies such as Scrum to replace the traditional waterfall SDLC is ongoing in most organizations.  But these changes can create compliance issues with the PCI standards regarding software development.

Understanding The Terminology

First and foremost, we need to address the terminology surrounding DevOps.

But before we talk about those specific terms, we need to address the elephant in the room which is “Agile”.  The Agile approach to development traces its history back to early 2001 when a group of developers met at a Utah ski resort.  The result of that meeting was ‘The Agile Manifesto’.  However, the roots for Agile were sown even earlier as application development became unable to keep pace with business changes starting in the late 1980s.

The important thing to remember about Agile is that it is not a methodology.  It is merely a set of values (4) and principles (12) related to the development of software.  The Agile Manifesto never describes a roadmap or steps to follow as to how those values and principles should be used.  So, to refer to Agile as a methodology is a misnomer but you will constantly encounter it being referred to as though it were a methodology.

Interestingly enough, the methodologies used with the Agile approach were actually developed before Agile.  Scrum was one of many methodologies, along with Kanban, Crystal Clear, Extreme Programming, Feature Driven Development and Dynamic Systems Development Method, that sprang up in the 1990s to deliver software solutions in a timelier manner, and Scrum seems to have won out.  But while Scrum is the most widely followed methodology, teams often combine it with practices from some of these others, such as Extreme Programming (XP).

Scrum involves three types of roles.

  • Product Owner: The Product Owner needs to be a person with vision, authority, and availability because they are responsible for continuously communicating the vision and priorities to the development team.
  • Scrum Master: The Scrum Master is not a project manager.  The Scrum Master’s primary responsibility is to remove any impediments that are obstructing the team from achieving its sprint goals.  The Scrum Master also is the primary contact with the Product Owner.
  • Team: The Scrum team is responsible for completing the work.  For application development, a Scrum team can contain anywhere from three to nine members.  For software projects, a typical team includes a mix of software engineers, architects, information security personnel, programmers, analysts, QA experts, testers, and UI designers.  The team is responsible for determining how it will accomplish the work to be completed.

The final term from Scrum that needs to be defined is Sprint.  A Sprint is a project of one month or less in duration that results in a releasable increment of a product, in this case an application or application enhancements.  When a Sprint’s horizon is too long, the definition of what is being built may change, complexity may change, and risk may change.  Sprints enable predictability by ensuring inspection and adaptation of progress toward a Sprint Goal at least every calendar month.  Another benefit is that Sprints limit risk to one calendar month of cost.

Once defined, some of the key characteristics of Sprints are:

  • No changes are made that would endanger the Sprint Goal;
  • Quality goals do not decrease; and,
  • Scope may be clarified and re-negotiated between the “Product Owner” and “Team” as more is learned.

With these behind us, let us now turn to the terms DevOps and DevSecOps.

DevOps is a merging of development and operations staff to work together to develop and implement solutions that will essentially run 24x7x365 with (hopefully) minimal operational interaction.  DevSecOps merely formally adds in the collaboration of information security into that mix even though information security should be included in DevOps as well.

The final topic in our discussion of terminology regards the tools used by DevOps.  While there are a number of vendors in this space, the “Big Dogs” at the moment are Atlassian with its tools Confluence and Jira, Microsoft with GitHub, and Jenkins, the open source automation server commercially backed by CloudBees.

  • Confluence is used as a documentation repository for such items as policies, standards and procedures as well as business, application, network and other important documentation.
  • Jira is used as a project and change management ticketing system.
  • GitHub is used to manage the versions of applications.
  • Jenkins is used for automating the build, testing and deployment of applications into production.

All of these tools have competitors from vendors such as ServiceNow, Puppet, Ansible, Chef, Google, and other commercial and open source development and operations tool vendors.  Regardless of vendor, all solutions seem to have these three basic components of documentation repository, project/change management and deployment automation.  It is also not unusual to find multiple tools in place particularly with Jenkins, Ansible, Puppet and Chef.

Segregation Of Duties

The first and most contentious issue that comes up with DevOps is the segregation of duties.  This is typically one of the biggest discussions/arguments an assessor/auditor will get into regarding DevOps: Agile fans argue that segregation of duties is inconsistent with Scrum, Agile and DevOps.  Their primary argument is that nowhere in the documentation regarding these topics does the term ‘segregation of duties’ appear, nor any requirement to ensure it.  They are correct in that regard.

Unfortunately, corporate life is not driven by Scrum, Agile or DevOps in a vacuum.  Corporations are still required to comply with laws, contracts and regulations promulgated on them by government entities, business partners, financial institutions and other parties regardless of what is in their methodologies and approaches.  So, while the argument can be made that the methods and approaches do not state anything on the subject, there are other documents, contracts and requirements that do state it is required.

Whether we are discussing PCI, NIST, SOC, COBIT or any other recognized audit or compliance program, segregation of duties between roles is, and always has been, required.  It is one of the key principles for ensuring that people cannot corrupt a process because they have too much control over it.  Time and again, one of the primary root causes of such illicit activity is the failure to segregate duties and roles, which allows one person too much control over a process.  The concept behind segregation of duties is that the more individuals involved in a process, the less likely that process can or will be abused.

In DevOps, the issue of segregation of duties gets complicated because it extends into the tools used in the process.  The concept of continuous integration (CI)/continuous deployment (CD) relies heavily on tools such as Jira and Jenkins.  This means that the assessor/auditor needs to look into who has access to these tools and what rights they have to influence the workflows that exist in them.

This gets even more complicated because it requires analysis of user and access control information from tools such as Active Directory, RADIUS and the DevOps tools themselves.  In my experience, it is not unusual to peel the onion on these access controls and find that segregation of duties really does not exist as thought, because all roles are granted to everyone in DevOps and the organization is relying on individuals’ honesty to ensure compliance.

DevOps can also suffer from a lack of segregation between production, quality assurance (QA), test and development environments.  This is because a lot of organizations that move to DevOps have the mistaken belief that the “Operations” and “Security” components become part of the development group.  The argument will be made that Agile is all about “breaking down silos”.  While that is true, the mistake they make is that Agile and Scrum were not a call to abandon the knowledge and controls that the players involved bring to the table as separate disciplines.  The goal is to make the disciplines work together better to achieve a common goal in a Sprint.

Where this manifests itself most often is that developers have unfettered access to the production environment.  In a DevOps environment, it is not unusual to find developers scattered all throughout the environment.  They are developing code, they are operating production, they are diagnosing bugs, they are everywhere with no delineation of roles and responsibilities.  It is essentially a free for all.  Everyone pitches in where they need to be involved.

This organized chaos is supposedly “controlled” by Jira through its ticketing.  Agile advocates will claim that since everything has a ticket (not always a true statement) that they maintain segregation through Jira.  They will show the tickets to the assessor/auditor and display that there are different names on the ticket for the developer, the QA person, the people who approved promotion, etc.  While this is true, as I described earlier, the access controls will show that virtually everyone they gave as evidence of segregation can fulfill any of those roles whenever they so choose.  By definition, that is not segregation of duties because there are no actual controls in place to stop someone from running the whole process.

The bottom line in this discussion is that the segregation of duties controls in an Agile environment are usually illusory.  As such, it is management’s responsibility to periodically ensure that segregation of duties controls are truly implemented and testable.

In the next post we will discuss documentation, change control and PCI scope in an Agile environment.

22 Apr 2019

More On The NIST Password Standard

Apparently, I touched a nerve with my post on the National Institute of Standards and Technology (NIST) password standards discussed in Special Publication (SP) 800-63B.  As a result, I thought I would walk you through my logic by using a compensating control worksheet (CCW) approach, since this is what you will have to do for your PCI assessment if you choose to rely on the NIST guidance.

[SPOILER ALERT: It is possible, but I doubt it is worth all the effort.]

First, let us review what a CCW needs in order to comply with the Council’s requirements, from Appendix B of the Report On Compliance (ROC) Reporting Template.

“Compensating controls must satisfy the following criteria:

  1. Meet the intent and rigor of the original PCI DSS requirement.

  2. Provide a similar level of defense as the original PCI DSS requirement, such that the compensating control sufficiently offsets the risk that the original PCI DSS requirement was designed to defend against. (See Guidance Column for the intent of each PCI DSS requirement.)

  3. Be “above and beyond” other PCI DSS requirements. (Simply being in compliance with other PCI DSS requirements is not a compensating control.)

  4. Be commensurate with the additional risk imposed by not adhering to the PCI DSS requirement.”

QSAs can get stuck on the third point, and the Council seems to focus on it in their Assessor Quality Management (AQM) reviews because QSAs miss it so often.  However, the other three criteria are also very important to apply to the compensating controls being discussed.

Now let us focus on section 4 of the CCW, where the organization being assessed is required to describe the controls it has in place that go “above and beyond” the requirement being compensated for, in this case requirement 8.2.4, which requires password changes every 90 days or less.  I picked that requirement because it is the one most often cited by clients as the reason they want to use the NIST standard.  Most want to go to a 12-month password change interval.  These controls are going to come from pages 13 through 15 of SP 800-63B.

  • All passwords are required to be [value greater than eight] characters or greater in length.
  • When passwords are modified, they are checked against [name of credential verification source/service] and [name of dictionary word list used], and repetitive or sequential characters and context-specific words are rejected if found.
  • Authentication is only conducted using [encrypted authentication protocol(s)].
  • Passwords are hashed and salted for storage using [hash algorithm and appropriate salting technique].
  • [Name of password vault solution] is used to securely store and generate strong passwords that meet the aforementioned criteria.
  • A password strength meter is provided to assess the password against these aforementioned criteria to indicate to the user when they have met all of the criteria.

To comply with the NIST guidelines for passwords an organization needs to implement all of these controls.
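
To make the password-modification controls concrete, here is a minimal sketch of what the verification logic behind them might look like.  It is illustrative only: the minimum length, word lists and breach check are placeholders for whatever sources and values your CCW actually names, not anything mandated by NIST or the PCI DSS.

```python
# Placeholder values: substitute the minimum length, word list and context-specific
# words that your CCW actually documents.
MIN_LENGTH = 12
DICTIONARY_WORDS = {"password", "letmein", "qwerty"}
CONTEXT_WORDS = {"companyname", "companyapp"}

def is_breached(password: str) -> bool:
    # Stand-in for the credential breach source/service named in the CCW.
    # Always returns False here; wire it up to the real service.
    return False

def has_repeats_or_sequences(password: str, run: int = 3) -> bool:
    p = password.lower()
    for i in range(len(p) - run + 1):
        chunk = p[i:i + run]
        if len(set(chunk)) == 1:
            return True  # repeated characters, e.g. 'aaa'
        if all(ord(chunk[j + 1]) - ord(chunk[j]) == 1 for j in range(run - 1)):
            return True  # sequential characters, e.g. 'abc' or '123'
    return False

def password_acceptable(password: str) -> bool:
    lowered = password.lower()
    if len(password) < MIN_LENGTH:
        return False
    if any(word in lowered for word in DICTIONARY_WORDS | CONTEXT_WORDS):
        return False
    if has_repeats_or_sequences(password):
        return False
    if is_breached(password):
        return False
    return True
```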

So how do they match up with the four criteria for a CCW?

Above and Beyond

This is the easiest one to tackle because almost all of the controls are above and beyond.  What?  Almost?

There are a couple of controls that do not meet the above and beyond test.

The first is the easiest to discuss and that is “Authentication is only conducted using [encrypted authentication protocol(s)].”  That control does not pass the above and beyond test because requirement 8.2.1 already requires that authentication credentials be protected with strong cryptography during transmission.  As such, that control cannot be relied upon in the CCW and must be removed.

The second one is the “Passwords are hashed and salted for storage using [hash algorithm and appropriate salting technique]” control.  This discussion gets sticky because requirement 8.2.1 states that storage of credentials must also use strong cryptography, which is not very specific.  I would argue that any reasonable approach here is already required by requirement 8.2.1 and therefore this control would also be ineligible to be used.
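
For reference, “hashed and salted” in this context generally means something like the sketch below, which uses the PBKDF2 function from Python’s standard library.  The iteration count and salt length are illustrative choices on my part, not values dictated by the PCI DSS or NIST.

```python
import hashlib
import hmac
import os

ITERATIONS = 310_000  # illustrative work factor; tune it to your own risk assessment

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key) using PBKDF2-HMAC-SHA256 with a random per-password salt."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, expected_key: bytes) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(key, expected_key)  # constant-time comparison
```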

Only the password length is specified by the PCI DSS and as long as a value greater than eight is picked, that meets above and beyond.  However, we need to discuss this value further under intent and rigor.

All of the remaining controls are not specified in the PCI DSS, so those are all considered above and beyond.

Intent and Rigor

For intent and rigor, we need to look to the guidance provided for requirement 8.2.4.

“Passwords/passphrases that are valid for a long time without a change provide malicious individuals with more time to work on breaking the password/phrase.”

Remember, we are looking at a 12 month password change interval, so we need to consider intent and rigor in the context of controls that will allow a password to remain unchanged for 12 months.

So let us look at the length attribute again.  Nine characters in today’s world, without any complexity requirements, can result in passwords able to be cracked in minutes.  Ten characters can be done in hours.  Only when we get to 12 characters or more does the estimated time to crack reach 12 months or longer.  As such, I would argue that you need passwords of 12 characters or longer to pass the rigor test for justifying a 12 month change interval.
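
Exact cracking figures depend heavily on the character set in use and the attacker’s guess rate, so numbers like those above should be backed by your own risk assessment.  The sketch below simply shows how to work the keyspace arithmetic for whatever assumptions you choose to document; the guess rate used here is a purely hypothetical figure for illustration.

```python
def years_to_exhaust(length: int, charset_size: int, guesses_per_second: float) -> float:
    """Worst-case time to enumerate every password of the given length, in years."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second / (60 * 60 * 24 * 365)

RATE = 1e11  # hypothetical offline guess rate; substitute your own figure

for length in (9, 10, 12):
    print(f"{length} characters:",
          f"lowercase only ~{years_to_exhaust(length, 26, RATE):g} years,",
          f"full printable ASCII ~{years_to_exhaust(length, 95, RATE):g} years")
```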

Passwords are assessed against a dictionary word list, context specific words and repetitive/sequential characters.  The key to this part of the second bullet is the extent of the dictionary word list.  The dictionary needs to be sufficiently large to provide the control that NIST desires.  The QSA is going to need to know how large the dictionary is, what reference is used to ensure that the dictionary has the appropriate words in its list and how often the dictionary is updated.  That would likely mean that these controls would need to be separated from the credential breach service control so that those additional controls can be documented in the CCW.  This would all have to be backed up by a proper risk assessment that documents that the review and update intervals of the dictionary are appropriate and mitigate the risks.

Assessing passwords against a credential breach source/service introduces an interesting twist to ensuring the security of a password.  But it also introduces an interesting discussion about the intent of requirement 8.2.4, which is to ensure the security of credentials.  NIST only requires that credentials be tested at the point they are changed.  But what happens if, sometime during the 12 month interval, those credentials are compromised?  The intent of requiring a 90 day change interval was to reduce the risk of credentials remaining compromised for an extended length of time by changing them at least every 90 days.

But NIST does not require monitoring of the credentials other than when they change.  Without constant monitoring of the credentials against a compromise service, how do you know when they need to be changed, which is the intent of the change interval?

The PCI DSS does provide a bit of guidance on how the Council would likely approach this issue.  For reference I point you to requirement 3.6.5 which discusses this in regard to encryption keys that are suspected to have been compromised.  The reason I believe this is relevant here is that the PCI DSS does not require specific change intervals for encryption keys.  I would argue that the PCI DSS would view passwords changing at long intervals as requiring the same sort of control.  If the credentials are ever suspected of being compromised, then they should be changed.

Which brings up an interesting dilemma.  How do you monitor something that you have hashed and cannot recover?  Do we really want to have encrypted passwords in our authentication systems so that we can monitor them for compromise?  I seriously doubt that would be a good practice.

So with that said, we would need some sort of monitoring and alerting capability to warn if credentials do appear to be compromised, such as monitoring for excessive logons, logons when the user is out of the office, logons from systems outside of the user’s area or building, or other characteristics that would indicate credential compromise.  These controls would have to be added to the monitoring of the credential breach source to show that credentials are changed when suspected of being compromised.
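
What that monitoring actually looks like will vary by organization, but conceptually it is a matter of flagging authentication events that fall outside a user’s normal pattern.  The fragment below is only a sketch of that idea; the event format, office-hours window and thresholds are all assumptions of mine to be replaced with whatever your SIEM or log review process really uses.

```python
from collections import Counter

OFFICE_HOURS = range(7, 19)   # assumed 07:00-18:59 local time; adjust to your policy
FAILED_LOGON_THRESHOLD = 10   # assumed per-user threshold for the review period

def suspicious_events(events, known_sources_by_user):
    """events: iterable of dicts with keys 'user', 'time' (a datetime), 'result'
    ('success' or 'failure') and 'source'; known_sources_by_user maps each user to
    the set of systems they normally log on from.  Both are hypothetical formats."""
    failures = Counter()
    alerts = []
    for event in events:
        user = event["user"]
        if event["result"] == "failure":
            failures[user] += 1
            if failures[user] == FAILED_LOGON_THRESHOLD:
                alerts.append((user, "excessive failed logons"))
        elif event["time"].hour not in OFFICE_HOURS:
            alerts.append((user, "logon outside office hours"))
        elif event["source"] not in known_sources_by_user.get(user, set()):
            alerts.append((user, "logon from an unexpected system"))
    return alerts
```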

Similar Level of Defense and Be Commensurate

At this point, I think we have covered these two requirements for a CCW with our discussions about above and beyond and intent and rigor.

Where Are We With The CCW Controls?

Based on our discussion, here is what I think section 4 of the CCW would now have to look like.

  • All passwords are required to be [value of 12+] characters or greater in length.
  • When passwords are modified, they are assessed against [name of credential verification source/service]
  • Passwords are monitored for excessive logons, excessive failed logon attempts, logons when the user is out of the office and logons that occur from systems outside of the user’s area or building to provide an indication of credential compromise.
  • When passwords are modified, [name of dictionary word list/source used], repetitive or sequential characters and context specific words are checked, and the password is rejected if any of these characteristics are found. The dictionary is updated every [month/quarter/six months] and reviewed [semi-annually/annually] to ensure the dictionary contains an appropriate list of words.
  • [Name of password vault solution] is used to securely store and generate strong passwords that meet the aforementioned criteria.
  • A password strength meter is provided to assess the password against these aforementioned criteria to indicate to the user when they have met all of the criteria.

After looking at these controls, I would venture to say it is simpler and easier to meet the PCI DSS requirements than to implement these controls and make them work consistently and effectively.  Because remember, this is just section 4 of the CCW.  For section 5, you have to produce evidence that all of these controls are in place and working as designed.  Never mind section 6 where you explain how you maintain all of these controls.

So for those of you bent on using NIST, there you have it but I doubt it is worth the effort you think it is.  And this does not address the CCWs you will also need to write for 8.2.3 because you no longer enforce complexity and 8.2.5 because you no longer track the last four passwords used.  But those could be another post.  Yeah, I do not think so.  Not worth the effort because those CCWs will revolve around the controls in this one.

As I said in my original post, it might be better to wait for the Council to issue their guidance in v4 of the PCI DSS.

UPDATE: The PCI Council has created an FAQ to address this situation. https://pcissc.secure.force.com/faq/articles/Frequently_Asked_Question/Can-organizations-use-alternative-password-management-methods-to-meet-PCI-DSS-Requirement-8

11 Mar 2019

The New NIST Password Guidance

In June 2017, the US National Institute of Standards and Technology (NIST) issued new guidance on authentication in the form of four Special Publications (SP).

  • SP 800-63 is an overview of digital identity and the other three publications in the series.
  • SP 800-63A discusses digital enrollment and identity proofing.
  • SP 800-63B discusses authentication and lifecycle management.
  • SP 800-63C discusses federation and assertions.

This post is about SP 800-63B which covers the new password guidance from NIST.  In the vernacular of NIST, a password/passphrase is referred to as ‘Memorized Secret Authenticator’.  Here are the key attributes offered by this new NIST guidance:

  • A Memorized Secret Authenticator must be a minimum of eight characters in length and should allow for at least 64 characters.
  • All printable ASCII characters should be allowed for comprising a Memorized Secret Authenticator.
  • A replacement Memorized Secret Authenticator used to reset a forgotten/corrupted/compromised Memorized Secret Authenticator must be at least six characters long.
  • A Memorized Secret Authenticator can be checked against a blacklist of prohibited words/phrases to protect against guessing or brute force attacks.
  • No hints are allowed to be provided to unauthenticated users.
  • When changing a Memorized Secret Authenticator, the provider should ensure that the new Memorized Secret Authenticator is not known to be compromised, a known word or expected value such as ‘12345678password’ or similar.
  • The Memorized Secret Authenticator can be displayed if it cannot be readily observed by others.
  • Verifiers should not impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets.
  • Verifiers should not require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers shall force a change if there is evidence of compromise of the authenticator.
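
The “not known to be compromised” check in the list above is the one that trips people up most, so here is a minimal sketch of how it is commonly implemented against a public corpus of breached passwords.  It uses the Have I Been Pwned ‘Pwned Passwords’ range API purely as one example of such a source (only the first five characters of the password’s SHA-1 hash ever leave your system); neither NIST nor the PCI DSS mandates any particular service.

```python
import hashlib
import urllib.request

def appears_in_breach_corpus(password: str) -> bool:
    """k-anonymity range query: only the first 5 hex characters of the SHA-1 hash are sent."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<remaining hash characters>:<occurrence count>"
    return any(line.split(":")[0] == suffix for line in body.splitlines())
```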

A lot of clients are pushing hard to use these new NIST rules in place of the PCI DSS requirements.  As a reminder, the PCI DSS requires the following when it comes to passwords.

  • 8.2.3 Passwords/passphrases must meet the following: require a minimum length of at least seven characters, contain both numeric and alphabetic characters. Alternatively, the passwords/ passphrases must have complexity and strength at least equivalent to the parameters specified above.
  • 8.2.4 Change user passwords/passphrases at least once every 90 days.
  • 8.2.5 Do not allow an individual to submit a new password/passphrase that is the same as any of the last four passwords/passphrases he or she has used.
  • 8.2.6 Set passwords/passphrases for first-time use and upon reset to a unique value for each user, and change immediately after the first use.

So where are we in regard to NIST versus PCI DSS?

On length, we are good.  NIST requires a minimum of eight characters, which more than satisfies the PCI DSS minimum of seven.

Complexity is a sticking point, as the PCI DSS imposes complexity rules on the composition of passwords whereas NIST states that the authentication system should not impose such composition rules.

NIST is more restrictive on the checking of password/passphrase changes to include ensuring that they have not been used somewhere else that was compromised.  In addition, it also requires that if believed to be compromised, the authentication system should force a change.

However, it is when we get to changing passwords/passphrases on a specific interval that we run into trouble.  NIST advises that arbitrary changing of passwords/passphrases is not required, whereas the PCI DSS states that passwords/passphrases must be changed every 90 days.  NIST relies on its requirement to monitor that credentials have not been compromised to support not requiring an arbitrary change of passwords.

The first thing that comes to peoples’ mind is the guidance to requirement 8.2.3 which states:

“Strong passwords/passphrases are the first line of defense into a network since a malicious individual will often first try to find accounts with weak or non-existent passwords. If passwords are short or simple to guess, it is relatively easy for a malicious individual to find these weak accounts and compromise a network under the guise of a valid user ID.

This requirement specifies that a minimum of seven characters and both numeric and alphabetic characters should be used for passwords/ passphrases. For cases where this minimum cannot be met due to technical limitations, entities can use “equivalent strength” to evaluate their alternative. For information on variability and equivalency of password strength (also referred to as entropy) for passwords/passphrases of different formats, refer to industry standards (e.g., the current version of NIST SP 800-63.)

Note: Testing Procedure 8.2.3.b is an additional procedure that only applies if the entity being assessed is a service provider.”

What people focus on is the last sentence before that note that states:

“For information on variability and equivalency of password strength (also referred to as entropy) for passwords/passphrases of different formats, refer to industry standards (e.g., the current version of NIST SP 800-63.)”

They then refer the QSA to that statement and say that gives them license to apply that guidance to requirement 8.2.4 and the other password related requirements.  Unfortunately, that guidance only applies to 8.2.3 as it clearly references “password strength” and nothing about change interval or anything else related to password attributes.

Another key point is that the guidance for 8.2.4 makes no reference to SP 800-63.  The Council will tell you that if SP 800-63 applied to 8.2.4, they would have included the same sort of reference in the guidance for 8.2.4 as they did in 8.2.3.  Without that reference, a QSA should not be using the new NIST guidance to replace the requirements specified in 8.2.4.

So, with that path ruled out, the second thing that comes to peoples’ mind is, we will write a compensating control for following the NIST guidance.

There is only one thing wrong with the compensating control approach and that is that a compensating control must go “above and beyond” the PCI DSS requirement.  Above and beyond 90 days would be a value less than 90, not more than 90.  The test is very specific that the change interval must be no more than 90 days.  As a result, there is no compensating control that will get you above and beyond the intent of a 90-day change interval.

That is not to say that you and your QSA cannot write such a compensating control.  The question then becomes if you can get your acquiring bank to sign off on such a compensating control?  There are a number of banks that are not so diligent with their reviews of PCI ROC filings and such a compensating control would sail under the radar. But that is no guarantee.

However, such a compensating control puts your QSAC at risk of remediation if the PCI ROC is selected as part of the Council’s Assessor Quality Management (AQM) review.  Such a compensating control would not be viewed favorably by the Council because it flagrantly violates the rules of a compensating control.  Remediation, while not a death knell to a QSAC, does adversely impact sales of PCI assessments and services and makes current clients uncomfortable, so going into remediation is avoided by QSACs like the plague.

The bottom line is that until the Council makes a change to the PCI DSS (i.e., v4), you are stuck with its password/passphrase requirements regardless of what other standards setting bodies state.

UPDATE – June 28, 2019

This is from the PCI SSC June 2019 Assessor Newsletter.

“PCI DSS v3.2.1 to NIST Cybersecurity Framework Mapping
In July, the PCI Security Standards Council (PCI SSC) will be releasing new resources that show how the PCI Data Security Standard (PCI DSS) maps to the NIST Cybersecurity Framework. PCI DSS and the NIST Cybersecurity Framework share the common goal of securing data. The Mapping of PCI DSS to the NIST Cybersecurity Framework will provide a resource for stakeholders to use in understanding how to align security efforts to meet objectives in both PCI DSS and the NIST Cybersecurity Framework.

Details about the Mapping of PCI DSS to the NIST Cybersecurity Framework will be coming soon to the PCI SSC website.”

03 Nov 2018

Open Source

One of the questions we received at the last PCI Dream Team session was:

“What about open source for 6.5?”

I am sure the person asking wanted to know whether open source payment solutions must comply with the PCI DSS requirements in 6.5.x.

The quick and simple answer is of course, ‘Yes’!  Why would it not?  It is source code after all, so it must comply with the requirements in 6.5.x (as well as other requirements in section 6 and throughout the PCI DSS).  The PCI DSS does not differentiate between different sources of application code.  For PCI compliance purposes, code is code is code, regardless of the source.

Now what does come into play is whether or not the PA-DSS validation standard applies to an application.  As PA-DSS relates to open source, I wrote about that over eight years ago, but it is still relevant today.  For the purposes of this post, I am not talking about PA-DSS validated applications.

The next question a QSA typically gets is, “Well 6.5 only applies to internet-facing payment applications, right?”

Wrong!  Any payment application needs to meet the requirements in 6.5.x whether it is internet-facing or internal facing.  Also, it does not matter whether a browser is involved or not although a significant number of the requirements in 6.5.x are related to browser-based applications.

But ensuring open source is PCI compliant goes beyond just 6.5.x.  There are other requirements that, at a minimum, must be applied as well.  Not every requirement in a section or group of requirements may apply, but some will need to be covered depending on how the application works.

  • Section 3 related to encryption of stored data and encryption key management;
  • Section 4 related to encryption of communications;
  • Requirements 6.1 and 6.2 for patching and vulnerability management. This can become problematic for open source because as time goes on applications can develop vulnerabilities that the developer community does not address.  This is most likely because the community moved on and your application became an orphan;
  • Requirements 6.4 for application development. Remember, just because your organization did not develop the application, if it is not PA-DSS validated, then it is your responsibility to ensure the code securely processes, stores or transmits sensitive authentication data and/or cardholder data;
  • Requirement 6.6 is also in play regardless of whether or not the application is browser-based. At a minimum, code reviews must be performed.  If the application is browser-based, then you can add in a Web application firewall (WAF) for additional security;
  • Sections 7 and 8 related to access control and user management; and
  • Section 10 related to application log data.

Remember, every time a new release of your open source solution becomes available, you have to go through all of this all over again if you intend to use the new release.

So those of you thinking that you can somehow leverage open source to reduce your PCI compliance footprint, think again.  All you have done is outsourced the development of your solution.  The rest is still on you.  In the end, it is really not much of a savings.

08 Jul 2018

Can I Use SSAE 18 SOC 2 Reports? Part 2

In the last post I discussed what the SOC reports are and what, in general, to look for in a SOC 2/3 report.  Now I want to take you through the more detailed analysis of the SOC reporting so that you can understand why it might not give you the result you desire and how to address that fact.

How Do I Analyze The SOC Report?

Based on the testing in the ‘Information Provided by Independent Service Auditor’ section, you are going to need to map that testing into the PCI ROC tests, if they even fit.  I typically use the Prioritized Approach spreadsheet to do this as it provides a way of documenting the requirements covered and a quick dashboard regarding what is covered.

As you reviewed the domains listed under the SOC 3 report, I am sure you thought, “What is not to like?  It looks like most of what I need for PCI is covered.”  But you would be wrong.  You will find, after you map the controls from any SOC 2 report that covers all the TSP domains into the Prioritized Approach, that the report will likely only cover around 20% to 25% of the PCI DSS requirements.  That is because the SOC tests are just not as detailed as those in the PCI DSS.  As a result, SOC 2 reporting does not provide the kind of reliance you need to respond to all of the relevant PCI DSS requirements.

For example, while a SOC report will likely test that password controls are in place, you will be unable to ascertain whether the organization enforces password lengths of seven characters or greater, password complexity, or password changes every 90 days or less, let alone whether the special requirements for vendor password management are enforced.  It is these missing details that create the coverage problems with using the SOC reporting results.

The same can be said for change management.  When tested, the SOC report will likely call out a lot about change management, but not at the level of detail required in the PCI DSS for requirements under 6.4.  You will also find that coverage in requirements 1 and 2 regarding network and server configurations will be lacking in specificity to meet the PCI DSS testing.

Now as a QSA, you have a decision to make.  Can you accept only 20% to 25% of coverage of PCI DSS requirements as being PCI compliant?  I know I cannot.  I need much more to work with before I can get comfortable that a SOC report provides the necessary coverage for PCI compliance.

Now What?

You and your client have expended all this effort and are no closer to the result desired than when this process started.

So, what to do next?

Work with your service providers that provide you SOC reports to include testing that adds the PCI DSS details that are missing.  There will likely be a bit of push back from these service providers because adding testing to their SOC reports will cause the cost of their SOC reports to increase, sometimes significantly.  So be prepared for it.

What you need to do is to have their auditors add the necessary testing details to the description of controls and then have them test that they are in place.  Examples include:

  • Password length, complexity, change frequency and the procedures followed to perform a password reset.
  • Details surrounding privileged and general user management including provisioning, management approvals, users are implemented with least privilege and users are disabled or removed when terminated.
  • Changes tested for segregation of duties between developers and operations, segregation of test, QA and production environments, production data not used for testing, developers do not have unrestricted access to production, test data and accounts removed before applications are promoted to production, changes document impact, they are appropriately authorized, they have been tested, they have been vulnerability assessed and they document backout procedures.
  • If encryption is used to protect data, document the algorithms used, are key custodian agreements in place, are split key processes in place if performing manual key management, indicate if a hardware security module (HSM) is used and are keys changed when their crypto-periods expire or they are believed to be compromised.
  • Document the configuration standards that are followed by device classes such as firewalls, switches, servers and test that they have been implemented.
  • Document that anti-virus is implemented on systems commonly affected by viruses and malware, what anti-virus solution is implemented, that the anti-virus solution cannot be disabled and that the anti-virus solution is actively running on all systems on which it is installed.
  • Document that vulnerability scanning is performed, how often scanning is performed and that vulnerabilities are remediated.
  • Document that penetration testing is performed, how often penetration testing is performed and that findings are remediated.
  • Document that log data is collected from all devices, it is reviewed at least daily and that it contains a date/time stamp, device name, type of log entry and other relevant information.

There are a lot of other areas that could be added to the SOC report, but these are, in my opinion, the bare minimum that need to be added to make the SOC report more relevant for PCI.  I am trying to balance the amount of additional information needed versus the cost of providing it in the SOC report.

Will adding all of this cover all of the gaps between SOC and PCI?  No.  But it should give your QSA significantly more comfort that the controls required for PCI are in place than what is currently being provided by CPAs.

04 Jul 2018

Can I Use SSAE 18 SOC 2 Reports? Part 1

This is a common question that QSAs encounter from clients.  The client has an SSAE 18 Controls at a Service Organization (SOC) report from one of their service providers and they want to know if they can use it to satisfy any or all of the requirements in 12.8, 12.9 and 12.11 related to vendor management?

The biggest caveat in this discussion is that the PCI SSC does not sanction the use of any report other than a PCI Attestation Of Compliance (AOC), a PCI Report On Compliance (ROC) or other official PCI reports.  The Council has repeatedly stated that if a QSA chooses to rely on an SSAE 18 SOC 2 report (or any other compliance report for that matter) to fulfill PCI ROC requirements, the QSAC and their client accept the risk that the SSAE 18 SOC 2 does not cover what the QSA claims it covers.  As a result, most QSAs will not accept an SSAE 18 SOC 2 report (or any other non-PCI compliance report) for any reason.

For those of us “recovering” certified public accountant (CPA) types that have conducted SSAE 18 audits, we know how to read and interpret these reports.  As a result, when we are asked about SSAE 18 SOC 2 reports being relevant, our answer is, “It depends on what the SOC 2 covers and how it was tested.”

Before we get too deep into this discussion though, we need to define the terminology surrounding this topic.  The first thing is that SSAE 18 replaced SSAE 16 as of 2017 even though nothing else appears to have changed.  The next key thing anyone needs to know about SSAE 18 is that there are three reports that can come from this reporting series: SOC 1, SOC 2 and SOC 3.

The first, SOC 1, is for financial auditors only.  Years ago it was called a SAS 70.  It is a report focused on the financial controls that an external auditor needs to ensure that the financial numbers coming from the third party can be relied upon in the annual audit of their client.  Yes, these SOC 1 reports can cover security controls, but only in regard to financial systems, not necessarily the third party’s entire environment.  In addition, the control coverage is typically not as deep as required for PCI compliance.  The bottom line is that reliance on a SOC 1 report outside of financial systems should never be assumed.

I am going to cover the SOC 3 report next because it covers all of the security domains.  The SOC 3 report (also sometimes referred to as the ‘SysTrust’ report) covers the following domains:

  • Organization and Management – The criteria relevant to how the organization is structured and the processes the organization has implemented to manage and support people within its operating units.
  • Communications – The criteria relevant to how the organization communicates its policies, processes, procedures, commitments, and requirements to authorized users and other parties of the system and the obligations of those parties and users to the effective operation of the system.
  • Risk Management and Design and Implementation of Controls – The criteria relevant to how the entity (i) identifies potential risks that would affect the entity’s ability to achieve its objectives, (ii) analyzes those risks, (iii) develops responses to those risks including the design and implementation of controls and other risk mitigating actions, and (iv) conducts ongoing monitoring of risks and the risk management process.
  • Monitoring of Controls – The criteria relevant to how the entity monitors the system, including the suitability, and design and operating effectiveness of the controls, and takes action to address deficiencies identified.
  • Logical and Physical Access Controls – The criteria relevant to how the organization restricts logical and physical access to the system, provides and removes that access, and prevents unauthorized access to meet the criteria for the principle(s) addressed in the engagement.
  • System Operations – The criteria relevant to how the organization manages the execution of system procedures and detects and mitigates processing deviations, including logical and physical security deviations, to meet the objective(s) of the principle(s) addressed in the engagement.
  • Change Management – The criteria relevant to how the organization identifies the need for changes to the system, makes the changes following a controlled change management process, and prevents unauthorized changes from being made to meet the criteria for the principle(s) addressed in the engagement.

There are also some additional considerations related to Confidentiality specified in the Trust Services Principles and Criteria (TSP), but those are not required to be covered in the SOC 3 report.

Finally, there is the SOC 2 report.  The SOC 2 report uses the same TSP as the SOC 3 but with a twist.  The third party can select any or all of the seven domains to be assessed.  Think of it as a “cafeteria style” assessment.  With the SOC 2, the AICPA does not require that all domains be covered (as with the SOC 3), the assessed entity can select only those domains they wish audited.  As a result, a third party could select only the ‘Organization and Management’ domain to be assessed and nothing else in their SOC 2 report.  Therefore, just because you have a SOC 2 does not mean it covers the domains necessary for your PCI assessment.  Like the SOC 3, in addition to the seven domains, the SOC 2 can also cover none, any or all of the additional considerations documented in the TSP.

Within each of these SOC reports there is a Type I and a Type II report.  A Type I report is basically worthless from a reliance perspective because no testing of the controls is ever performed.  With a Type I report, the auditor is signing off on the fact that the third party has controls defined and formally documented.  But without testing, there really is no point to this report.  Yet every now and then, I encounter a Type I report that an organization has relied upon for years.

The only report worth anything is a Type II report which tests the control environment to ensure that the controls are functioning as designed.  So, when you get that SOC 2 report, you need to make sure you have a Type II report where testing has been performed by the auditor.  Even then though, the report might not be as useful as you might think.

I Have A SOC 2 Type II Report From A Service Provider

While you want to read the whole report in detail, when I am pressed for time and cannot read it in its entirety, here is where I focus so that I can get a quick view of what I have.  Some CPA firms provide a one-page Executive Summary that gives the reader a quick overview of the report, provides the timeframe the report covers, opinion, exceptions and other useful information.  But that is not required by the AICPA so you cannot always rely on such an overview being in every report you receive.  When they are available, they can help you focus your quick review efforts even better.

The first thing to do is to read the auditor’s opinion which should be the first section of the report.  It is in the form of a letter on the auditor’s letterhead and signed by the auditing firm.  The opinion the auditor provides will be either:

  • Unqualified – no material control weaknesses or failures were identified.
  • Qualified – some material control weaknesses or failures were identified.
  • Adverse – significant control weaknesses or failures were identified.

An unqualified opinion is what all organizations desire and what most reports document.  But do not be fooled by an unqualified opinion.  There still could have been control weaknesses or failures identified but they did not rise to the level of being considered “material”.  I have seen some unqualified reports with control weaknesses that I would have considered material as their auditor, so you might still want to contact the organization to get clarification on any weaknesses identified.

A report with a qualified opinion is not the end of the world, but that will all depend upon what control weaknesses or failures created the qualification.  Someone misusing their access can be minor compared to not performing backups of servers for months.  As a result, you need to read each control weakness to determine the criticality of the control failure as well as review management’s responses to how they addressed or will address the failure.  Again, you may find yourself contacting the organization to clarify weaknesses documented.

In my experience, reports with an adverse opinion never get issued to the public.  Management sees all of the control failures and weaknesses and then embarks on the long arduous task of cleaning up their control environment.

The next section to look at is the one labeled ‘Information Provided by Independent Service Auditor’ or similar.  This is the section that will contain the testing results and will define which of the domains were covered as well as the timeframe the report covers.  Most organizations issue SOC reports annually, so you always want to make sure that you have the most current report.  If the coverage end date is within three months of being a year old, or older, you should contact the third party and ask them when the next report will be issued.  They should inform you that the new report is in progress and give you an estimated date the report will be issued.  If they do not give you a succinct answer, I would be concerned.

You need to go through this section looking at a couple of things.  The first is to determine which of the domains were covered.  While documenting those domains, you also need to review the testing that was performed and at what level of detail those tests were conducted.  For example, it is not unusual to see tests for change control cover five random changes but not test those changes for having appropriate documentation, backout instructions and testing, only that the changes were approved.  At some point you will need to read this section carefully to determine what, if anything, will cover the testing required by the PCI DSS.  But a quick perusal will usually give you an idea of what you are likely going to get out of the SOC 2 for PCI compliance, if you are going to get anything at all.

This leads to the next section of the report you should read.  The last section of all SOC reports is usually titled ‘Supplemental Information Provided By [Organization Name]’.  This section contains information that was provided by the entity being audited but is not covered by the auditor’s opinion.  There can be all sorts of information presented here but the important point to remember is that the auditor did not test or assess the accuracy of that information.  So, you need to take any information provided in this section with a bit of skepticism.

It is in the Supplemental Information section that you want to look for a sub-section titled ‘Management’s Response to Control Exceptions’ or similar.  Even when an organization has an unqualified opinion, there can still be items listed in this section.  If there are items listed, you want to carefully read what those items were and how management addressed or corrected the condition.  If you find any control issues and responses that concern you, you should contact the entity and get those discussed so that you are comfortable with the situation.  If you cannot get comfortable with the situation, then you may want to consider additional controls at your end to compensate for the control weakness with the third party.

In the next post I will take you through a more thorough review of the SOC report.

17 Mar 2018

Can Every Requirement Be Met With A Compensating Control?

“In theory, theory works.” – Jeff Hall

Some years back, the PCI SSC came out at the Community Meeting and stated that every PCI DSS requirement could be addressed by a compensating control worksheet (CCW).  A rather broad statement but it started a bunch of us in the PCI community thinking, “Is that really the case?”

Before reading this post, I highly recommend reading my post on writing CCWs so that you can fully appreciate why not every requirement can be met by a CCW.

That said, it turns out that there are a lot of requirements where there is no way to develop a CCW.  Here are just a few examples.

1.1.2 – Network Diagram(s) and 1.1.3 – Data Flow Diagram(s)

What would be the mitigating controls here?  There are none because diagrams are diagrams.  There is nothing you can do to compensate for these being missing other than to provide them.

1.1.6 – Firewall Rules

As with 1.1.2 and 1.1.3, what could possibly serve as a mitigating control?  If the firewall rules are not able to be reviewed, there is nothing you can rely upon to go above and beyond the control.

I have had people suggest that the QSA could rely on Nmap and vulnerability scans of the firewalls.  But that does not necessarily confirm all of the ports/services that are configured for the firewall nor does it necessarily confirm that the devices using those ports are the same ones that are in scope for PCI compliance.

1.2.3 – Wireless Networking

QSAs have repeatedly been told that this requirement can never be marked as ‘Not Applicable’.  The QSA is required to explain how they confirmed that wireless was either in or out of scope.  But can you create a CCW for these requirements?

The controls that you need to assess to meet these requirements are the same controls you have to use in the CCW for mitigation.  So, if you have to document and evaluate the controls regardless, why would you bother to write a CCW?  You would not.  You would document and meet the requirements and move on.

3.2 – No Storage of SAD

This is the requirement that started the whole CCW debate.  When the PCI DSS was originally issued, QSAs were trained that this requirement could NEVER, EVER have a compensating control.  But that changed when the Council issued their proclamation a few years back.  But is that really the case?

Remember, a CCW must go above and beyond the intent of the original requirement.  3.2 also states in a note that SAD cannot be stored even if encrypted.  Encryption would be the only mitigating control available to an organization that wants to store SAD.  So what replaces encryption if that cannot be used?  Tokenization by a third party would be an option, but if you go that route, you are not storing the SAD, so the discussion becomes moot.

8.3 – Multifactor Authentication

Some form of multifactor authentication (MFA) is required for non-console administrative access to cardholder data environment (CDE) systems and remote access to an in-scope network.  Since the Council has clearly defined MFA and also knocked down multiple logons with different credentials, what is left?  In the end, there is no way around meeting this requirement other than doing what the requirement states.

10.1 – 10.3 and 10.6 – Log Data

Here is another example of where there really is no way to write a CCW.  You are either gathering log data (centrally or on individual systems) or you are not.  You are either reviewing the log data daily or you are not.  Then there is the requirement of sending log data from internet facing devices to an internal device.  No matter how creative you think you are, there are no controls that will mitigate this situation and also go above and beyond.

As I said at the beginning of this post, these are just some of the examples where a CCW is just not going to make it.  So, the next time you think about meeting a PCI DSS requirement by using a CCW, make sure you understand the requirement and that there are controls that will mitigate the risk and go above and beyond the original intent of the requirement.  You will save yourself and your QSA a lot of time and consternation.

10 Apr 2017

MFA – It Is All In The Implementation

I have been challenged over the last few weeks over requirement 8.3.1 along with the implications of the Council’s latest Information Supplement on multi-factor authentication (MFA).  Requirement 8.3.1 does not go into effect until February 1, 2018, but there are a lot of organizations trying to get a jump on it.  As a result, I am hearing from QSAs that they are getting more and more questions and scenarios to see if they are PCI compliant.

As a reminder, requirement 8.3.1 states:

“Incorporate multi-factor authentication for all non-console access into the CDE for personnel with administrative access.”

The most common and biggest challenge has come from organizations that have implemented MFA across their entire network and therefore believe that they are automatically in compliance with 8.3.1.

Not so fast.  The guidance for 8.3.1 states:

“If the CDE is segmented from the rest of the entity’s network, an administrator would need to use multi-factor authentication when connecting to a CDE system from a non-CDE network. Multi-factor authentication can be implemented at network level or at system/application level; it does not have to be both. If the administrator uses MFA when logging into the CDE network, they do not also need to use MFA to log into a particular system or application within the CDE.”

According to this guidance, it is the cardholder data environment (CDE) that is the border for the MFA, not the network as a whole.  So while an organization might have implemented MFA as part of their general security, having MFA for the entire network does not meet the requirement of 8.3.1.

We need to remember what drove the development of requirement 8.3.1 was a lesson learned from the Target and similar breaches.  In all of these breaches, system administrators were spear phished allowing the attackers to access the CDE in one way or another.  Requirement 8.3.1 minimizes this threat by requiring MFA to gain access to the CDE.  So even if an attacker obtains an administrator’s credentials or compromises an administrator’s system, that fact in and of itself would not compromise the CDE.

This is why the guidance for 8.3.1 puts the MFA border at the CDE.  If you have MFA implemented only to gain access to your network as a whole, how does that stop the threat of phishing?  It does not.  A spear phishing attack defeats such an implementation because the MFA has already been satisfied by the time the administrator’s system is compromised; the MFA in that scenario does nothing to stop access to the CDE.

But keep in mind, MFA only minimizes the risk to administrators.  You still need to be vigilant in ensuring that administrator systems remain secure and free of viruses and malware.  As such, it is not unusual to find organizations taking a more active approach to securing administrator systems by adding technologies such as file integrity monitoring, whitelisting and/or blacklisting in addition to anti-virus.
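As an illustration of the file integrity monitoring idea, here is a minimal Python sketch that baselines file hashes on an administrator workstation and flags changes on later runs.  The watched paths and baseline file name are made up for the example; commercial FIM tools obviously do far more (real-time hooks, tamper-resistant reporting, alerting and so on).

```
import hashlib
import json
import pathlib

# Hypothetical baseline location and watched directories for the sketch;
# adjust to the directories that matter on your admin workstations.
BASELINE = pathlib.Path("fim_baseline.json")
WATCHED = [pathlib.Path("C:/Windows/System32/drivers/etc"),
           pathlib.Path("C:/Admin/Tools")]

def hash_file(path: pathlib.Path) -> str:
    """SHA-256 of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot() -> dict:
    """Hash every file under the watched directories."""
    files = {}
    for root in WATCHED:
        for p in root.rglob("*"):
            if p.is_file():
                files[str(p)] = hash_file(p)
    return files

def compare() -> None:
    """Report new, changed and removed files versus the saved baseline."""
    baseline = json.loads(BASELINE.read_text())
    current = snapshot()
    for path, digest in current.items():
        if path not in baseline:
            print(f"NEW FILE: {path}")
        elif baseline[path] != digest:
            print(f"CHANGED:  {path}")
    for path in baseline:
        if path not in current:
            print(f"REMOVED:  {path}")

if __name__ == "__main__":
    if BASELINE.exists():
        compare()
    else:
        # First run: record the baseline for future comparisons.
        BASELINE.write_text(json.dumps(snapshot(), indent=2))
```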

But it is not just administrators you need to worry about.  Anyone that has access to bulk cardholder data (CHD) that is stored is also at risk.  As a result, we are starting to see organizations also requiring these users to use MFA to access the CDE as well as having their systems implement enhanced security to ensure they remain uncompromised.

Just some things to think about as you go through your MFA discussions.

11
Feb
17

The Council Gets A Clue

Late this week the PCI Security Standards Council issued a new information supplement titled ‘Multi-Factor Authentication’ after the brouhaha that occurred last fall at the Community Meeting in Las Vegas.  For once, the Council has issued an excellent reference regarding the issues of multi-factor authentication (MFA).  I still have a couple of minor bones to pick with this document, but more on that later.

If you understand the concepts of MFA, you can skip to the end of the document, where the Council presents four scenarios of good and bad MFA.  These are well documented and explain the thought process behind why each scenario works or does not work as MFA.  The key takeaway of all of this is the independence of the MFA solution from the logon process.  The Council is getting in front of the curve here and stopping people from creating insecure situations where they believe they are using MFA that minimizes or stops breaches through administrators or users with access to bulk card data, when in fact it does not.

Now for a few things that I do not necessarily agree with in this document.

The first involves the Council’s continued belief that hardware security modules (HSM) are actually only hardware.  On page four, the following statement is made.

“Hardware cryptographic modules are preferred over software due to their immutability, smaller attack surfaces, and more reliable behavior; as such, they can provide a higher degree of assurance that they can be relied upon to perform their trusted function or functions.”

The Council has made similar statements over the years under the mistaken assumption that HSMs are only hardware.  HSMs are hardware devices that use software to manage keys.  There are standards that are followed (e.g., FIPS 140) to ensure that an HSM remains secure, but these devices are predominantly software driven.  That is not to say that just any device can serve as an HSM, but a lot of us in the security community are concerned that the Council continues to perpetuate the myth that HSMs are only hardware, which is patently false.

My other issue comes on page six as part of the discussion regarding the use of SMS for MFA.

“PCI DSS relies on industry standards—such as NIST, ISO, and ANSI—that cover all industries, not just the payments industry. While NIST currently permits the use of SMS, they have advised that out-of-band authentication using SMS or voice has been deprecated and may be removed from future releases of their publication.”

While everything in this statement is accurate, it gives the uninitiated the impression that SMS or voice is no longer a valid MFA solution.  I know this to be true because I have fielded questions from clients and prospects on this subject, particularly about SMS.  The key is that this is not like SSL and early TLS, where NIST called them out as insecure and said they should no longer be used.  This is a “heads up” from NIST to everyone that there is an issue that makes SMS and voice not secure enough for MFA.

But while there is a risk, a lot of us in the security community question how that risk applies to a merchant versus a bank or a government agency.  While I would not want any bank or government agency to use SMS or voice for MFA, a small business may not have a choice given their solution.  The reason is that attacking SMS or voice takes enough effort that only a high-value target such as a bank or government agency would be worth it.  In my very humble opinion, while a total ban is the easy solution, this is an instance where the Council should take a more nuanced approach toward the use of SMS and voice for MFA.  The bottom line to me is that a small merchant using any MFA solution, even a flawed one, is better off than one using no MFA solution at all.

I would recommend the following approach to manage this risk.

  • Level 4 merchants can be allowed to use SMS or voice for MFA.
  • Level 1, 2 and 3 merchants would be allowed to transition away from SMS and voice to a more secure MFA solution within one year of NIST stating that they are no longer acceptable.
  • All service providers would not be allowed to use SMS or voice for MFA once NIST states that both are no longer acceptable. This means service providers should start transitioning now if they use either.

Those are my thoughts on the subject.  I look forward to the comments I am sure to receive.

07
Jan
17

Getting Ready For 8.3.1

I have had some interesting meetings with clients lately regarding PCI DSS requirement 8.3.1 and multi-factor authentication (MFA).  Requirement 8.3.1 is a best practice until January 31, 2018, but for once organizations are trying to get a jump on it.  As a refresher, the requirement states:

“Incorporate multi-factor authentication for all non-console access into the CDE for personnel with administrative access.”

But these calls have little to do with discussing MFA.  People seem to have that finally down cold.  What is becoming painfully obvious and somewhat disturbing from these calls is the realization that a lot of organizations have no defined cardholder data environment (CDE).

Honestly, we have been discussing scope and definition of the CDE for over a decade now.  Yet people still are having problems defining their CDE.  It makes you start to wonder what these folks have been doing for the last 10 years that they still do not have a defined CDE.

I refer a lot of these clients to the Guidance offered in the PCI DSS as a start to gaining an understanding.  That guidance says:

“This requirement is intended to apply to all personnel with administrative access to the CDE. This requirement applies only to personnel with administrative access and only for non-console access to the CDE; it does not apply to application or system accounts performing automated functions.

If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication either when logging onto the CDE network or when logging onto a system.

If the CDE is segmented from the rest of the entity’s network, an administrator would need to use multi-factor authentication when connecting to a CDE system from a non-CDE network. Multi-factor authentication can be implemented at network level or at system/application level; it does not have to be both. If the administrator uses MFA when logging into the CDE network, they do not also need to use MFA to log into a particular system or application within the CDE.”

What the Council is preventing with 8.3.1 is all of the successful spear phishing attacks against system administrators that were the ultimate root cause of the Target, Home Depot and other breaches.  The reason is that when people with administrative privileges are breached, it is game over.  Requiring MFA should prevent that from happening.

While the Council explicitly calls out administrators, I also explain to my clients that it is not just administrators you need to worry about.  Anyone that has access to bulk data inside the CDE should also be using MFA to gain access.  I have seen people in accounting and customer service roles who can access and decrypt stored CHD, and they are just as much at risk from phishing and similar attacks.  So it is a good idea that anyone with access to the CDE and bulk data also use MFA.  The last thing you want, even if the data remains encrypted, is to have entire databases exfiltrated out of your CDE because not everyone with CDE access was using MFA.

But what is absolutely fascinating and scary is that the struggle on these calls continues to surround defining CDE systems.

The first problem I seem to encounter in these meetings revolves around the difference between systems/devices that process, store or transmit cardholder data (aka Category 1 systems/devices) and systems/devices that are connected to those Category 1 systems/devices (aka Category 2 systems/devices).  The guidance I give my clients here is that if a Category 2 system has the ability to affect the security of a Category 1 system (i.e., the Category 2 system/device has inbound access to the CDE system(s)/device(s)), then administrators should also use MFA to access those Category 2 systems/devices.
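One way to sanity-check that Category 2 determination is to see whether the system in question can actually open connections to administrative ports on CDE systems.  Here is a rough Python sketch of that check; the host addresses and port list are placeholders I made up, and a real scoping exercise would lean on firewall rule reviews and network documentation rather than ad hoc probing.

```
import socket

# Hypothetical inventory: CDE (Category 1) hosts and the administrative
# ports a Category 2 system might be able to reach on them.
CDE_HOSTS = ["10.10.1.20", "10.10.1.21"]
ADMIN_PORTS = [22, 3389]

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Any hit below suggests the system this runs on has inbound access to
# the CDE, so its administrators should also be behind MFA.
for host in CDE_HOSTS:
    for port in ADMIN_PORTS:
        if can_reach(host, port):
            print(f"inbound path to CDE: {host}:{port}")
```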

The second most common problem that comes up is network segmentation.  Lately, more and more of these meetings involve a lack of network segmentation.  In those cases the Council’s Guidance column provides your answer.  The Guidance states in the second paragraph that:

“If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication either when logging onto the CDE network or when logging onto a system.”

What?  As usual, the Council has totally messed up the wording here so it is no wonder people have questions.  What the Council should have said in that second paragraph was:

“If the entity does not use segmentation to separate the CDE from the rest of their network, an administrator could use multi-factor authentication when logging onto a CDE system.”

The key point in the second paragraph is that there is NO network segmentation, so there is NO separate CDE network.  How an administrator would use MFA to log on to a separate network that does not exist is beyond me.  The Council really needs to fix that second paragraph.

In situations where the CDE is not explicitly defined, organizations are going to have to implement MFA on a device by device basis, not on a network segment basis.  While this can be done, it is a pain in the “you know what” and another reason for segmenting your network to get an explicit CDE.

The final most common issue that comes up with 8.3.1 regards a separate CDE and how to control access to it.  The most common way to access a CDE, particularly for administrators, is through separate out-of-band (OOB) administrative systems, also referred to as “jump boxes” (there should always be more than one for redundancy).

These OOB systems have two network interface cards (NICs): one connecting to the CDE network segment and one connecting to a network outside of the CDE.  The external-facing NIC connects to a firewall that manages and monitors the network segmentation.  This is because there are typically (or should be) far fewer ports required to be open between the OOB and the firewall than from the OOB into the CDE.  Access control to the OOB is typically managed through Active Directory, RADIUS or some other directory system.  I have also seen it managed locally on the OOB itself, but I would not recommend that practice.
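For a sense of what directory-backed access control on the jump box might look like, here is a rough sketch using the third-party ldap3 package to attempt a simple bind as the connecting administrator.  The server name, DN format and organizational units are placeholders I invented for the example; an Active Directory or RADIUS implementation would differ in the details.

```
# A minimal sketch of directory-backed logon for a jump box, using the
# third-party ldap3 package (pip install ldap3).  All names below are
# placeholders for illustration only.
from ldap3 import Server, Connection

def authenticate(username: str, password: str) -> bool:
    """Attempt a simple bind against the directory as the given user."""
    server = Server("ldaps://dc.cde-admin.example", use_ssl=True)
    user_dn = f"uid={username},ou=admins,dc=example,dc=com"
    conn = Connection(server, user=user_dn, password=password)
    try:
        # bind() returns True only if the directory accepts the credentials.
        return conn.bind()
    finally:
        conn.unbind()
```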

The OOB also needs to be fully instrumented, meaning that every keyboard entry and mouse click is recorded.  All of that is sent to a separate logging system so that, in the event an issue occurs, the actions of the OOB’s users can be reviewed and a determination made as to how to correct the issue.

The next most common way organizations control access to the CDE is through a virtual desktop infrastructure (VDI) type of solution, because they already have that technology and want to use it to connect to their CDE.  The belief is that this will reduce their scope to only the VDIs that connect to the CDE.  But to their chagrin, they quickly find out that it does not reduce scope the way they think.

The first question about using VDI regards scope: what exactly is in scope if we implement VDI?  While the VDI is obviously in scope, I get a lot of questions and arguments regarding what else is in scope in addition to the VDI.  The Council’s previous pronouncements regarding virtualization make it clear that the virtualization solution, such as Citrix or VMware, is definitely in scope in addition to the VDI.

But also in scope are the devices that access the OOB/VDI, particularly if they can be used to view or enter sensitive authentication data (SAD) or cardholder data (CHD).  This is because, even with OOB/VDI, these secondary devices can still have SAD/CHD in their memory that could be accessed by malware.  That does not mean that the full PCI DSS requirements need to be applied to these secondary devices, nor do they necessarily need to be in a separate network segment.  But it does mean that appropriate controls need to be applied to protect these secondary devices from the risks they present to SAD/CHD.

I have a lot of clients that try to get these secondary devices out of scope by using virtual keyboards on the OOB/VDI.  The thinking is that if the keyboard is not used, then the secondary device cannot be in scope.  However, there is still a mouse being used to drive the virtual keyboard, and that still puts the secondary device in scope because the mouse clicks can be recorded by malware and the data retrieved.  The only sure way I have seen secondary devices put out of scope is when an encrypted keypad such as a MagTek DynaPro or IDTECH M100 is used for data entry of SAD/CHD.

The second question I get typically revolves around whether administrators should use MFA before or after logging on to the OOB/VDI solution.  Either method will meet the requirement, but implementing MFA as part of the OOB/VDI logon process (i.e., before) is going to be much easier than implementing it afterward, when you would have to implement it for each system/device in the CDE.
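If the MFA step is built into the OOB/VDI logon (the “before” option), the second-factor check itself can be quite simple.  Here is a minimal sketch of RFC 6238 time-based one-time password (TOTP) verification using only the Python standard library; secret enrollment, secure secret storage, clock-drift windows and rate limiting that a production deployment needs are deliberately left out.

```
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_b32: str, submitted: str) -> bool:
    """Accept the code for the current time window only."""
    return hmac.compare_digest(totp(secret_b32), submitted)
```

Whether this check lives on the jump box itself or in the VDI broker, the point is the same: the second factor is verified before a session into the CDE is ever established.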

Hopefully we now all understand 8.3.1.




