Archive for the 'PA-DSS' Category

31 Jul 2021

PCI Dream Team LIVE! Is Coming In October

The PCI Dream Team will be appearing LIVE at the (ISC)2 Security Congress in Orlando this Fall, Monday, October 18 through Wednesday, October 20, 2021.   Our session is scheduled for Tuesday, October 19, at 11:45 AM ET/ 1545 UTC.

While we will be live at the conference, you can also attend the conference and our session virtually.  So other than training budget limitations, there is no good reason you cannot join us.

As usual, we will be taking questions live and via email at pcidreamteam AT gmail DOT com.  We also monitor Twitter if you use #pcidreamteam.

We are expecting our usual lively discussion of all topics PCI and other security standards if time allows.

We really are looking forward to physically seeing people at the conference.

20 May 2020

DevOps And PCI – Part 2

In the first post on this topic, we discussed the terminology of DevOps and how segregation of duties can get complicated in a DevOps environment.  In this post, we will continue to investigate DevOps and discuss the issues you can encounter with change control, documentation and PCI scope.

Change Control

These days it is not unusual to hear DevOps people boast of hundreds or even thousands of implementations or deployments per day.  That is, until someone like a PCI assessor starts inquiring about what, if anything, is done to formally approve all those deployments.  The conversation with developers typically begins to deteriorate as you discuss requirement 6.4.5.2, which states:

“Documented change approval by authorized parties.”

The normal response is that the approval is provided in Jira, ServiceNow or whatever change management tool is being used.  That leads to a discussion of the guidance for requirement 6.4.5.2 which states:

“Approval by authorized parties indicates that the change is a legitimate and approved change sanctioned by the organization.”

Given the rapidity and volume of changes, the next question asked is: how can an authorized party assess that a change is legitimate and sanctioned if they never actually see and review the change that is deployed?

This leads to a discussion of how Jira, Jenkins, Puppet or whatever CI/CD toolsets are in use work, along with the automation involved in the change process and the “controls” embedded in the workflows of those tools.  The bottom line is usually that the only potential human intervention in the process occurs if the code needs a manual code review.

Requirement 6.4.5.2 is about ensuring that the change process involves human intervention: management approves what is being put into production, segregation of duties is maintained, and no fraud or other illegal activity has been introduced into the process.  The reason is that we are talking about code that processes, stores or transmits sensitive authentication data (SAD) or cardholder data (CHD).  The potential for implementing code that skims that information or performs other nefarious actions is too great to simply trust a fully automated process with no human intervention.  This risk can be driven home with a discussion of the 2013 Target breach, where the CI/CD process was compromised to repeatedly push malware to thousands of point of sale devices.
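To make that human gate concrete, here is a minimal sketch (in Python) of a pre-deployment check that refuses to deploy unless the change ticket carries a documented approval by someone other than the change’s author.  The ticket fields and the fetch_ticket() lookup are hypothetical stand-ins; a real implementation would call whatever your change management tool actually exposes.

# Minimal sketch of a pre-deployment approval gate.
# The ticket fields and the fetch_ticket() lookup are hypothetical;
# substitute whatever your change management tool actually provides.

def fetch_ticket(ticket_id: str) -> dict:
    """Stand-in for a call to the change management system."""
    return {
        "id": ticket_id,
        "author": "dev.alice",
        "approver": "mgr.bob",      # documented approval by an authorized party
        "status": "Approved",
        "target": "payment-service v2.4.1",
    }

def approved_for_deployment(ticket_id: str) -> bool:
    ticket = fetch_ticket(ticket_id)
    # Require an explicit approval status...
    if ticket.get("status") != "Approved":
        return False
    # ...recorded by someone other than the person who made the change,
    # which is the segregation-of-duties point behind requirement 6.4.5.2.
    approver = ticket.get("approver")
    return bool(approver) and approver != ticket.get("author")

if __name__ == "__main__":
    ticket_id = "CHG-12345"
    if approved_for_deployment(ticket_id):
        print(f"{ticket_id}: approval on file, deployment may proceed")
    else:
        raise SystemExit(f"{ticket_id}: no valid approval, blocking deployment")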

While I am only talking about PCI in-scope code in this case, fair warning: HIPAA, SOX, GDPR and other regulations are going to require a similar control.  Sensitive information is worth too much today, and there is just too much risk that people will take any opportunity to siphon off sensitive data any way they can if appropriate controls are not in place.  If your process is totally automated and cannot detect such fraudulent activities, it becomes far too easy to slip who knows what into the code.  The last thing any organization wants is to be breached and then try to defend itself when it had poor or no internal controls to prevent the breach.

The bottom line here is that in our haste to push out software we have compromised the controls in that process.  Those controls need to be put back into place to minimize the risk presented by pushing malicious software into applications without a thorough vetting by management.

Documentation

Another area where compliance falters with DevOps is documentation.  Confluence, SharePoint Wiki or a similar tool will be used for documentation and that is where most assessors/auditors will be pointed for their requests for formal documentation.

The first problem arises when you are an outside assessor/auditor, because you do not have access to the internally used tool.  That can be remedied several ways, but it is always a hurdle because insiders are so used to the fact that everyone they typically work with has access.

Once the assessor/auditor has access, the next problem for all assessors/auditors is finding what they need for their assessment/audit.  Regardless of whether the assessor/auditor gets PDFs or has online access, the most common reason for this issue is terminology.  A lot of times what an assessor/auditor is trying to find will be referred to by the organization in terms that are not consistent with industry or technology accepted terminology.  While all of these documentation tools have search capabilities, searching the document trove for what an assessor/auditor needs for evidence can be highly problematic.  Never mind the fact that clients get frustrated as well because the evidence exists, but the assessor/auditor cannot find it.

Related to these documentation systems is the fact that it can be difficult, if not impossible, for the assessor/auditor to get hardcopy or even usable PDFs of the documentation.  Let us face it, screen shots, while readable, can cut off sentences at the bottom of the screen, so content gets missed altogether.  As a result, obtaining usable and legible evidence for the assessor’s/auditor’s work papers is not readily possible, let alone having it searchable.  The fix is to use a browser extension or add-on that will create a PDF or image of an entire page.  But that too can run into issues if the organization has locked down its browsers and does not allow such installations.

Regardless of Agile/Scrum or Waterfall, the next problem with documentation is the fact that the documentation is limited or simply does not exist.  I have encountered more and more organizations that again point to The Agile Manifesto, Scrum and the like and state that none of these approaches specify that documentation is required.  It seems that the age-old adage of “if it was hard to develop, it should be hard to understand” is back in vogue.  Never mind the fact that with hundreds or thousands of deployments a day, keeping up with documentation can be impossible.

Consistent use of a change management ticketing system such as Jira or ServiceNow can also be an issue.  It seems that some organizations have exceptions that do not require every change to their environment to be entered into their change management solution.  Worse, the criteria used to determine what is and what is not entered are not consistently applied because the criteria were never officially documented nor formally approved.  As a result, there is no way to rely upon the information contained in the change management system to determine that change management is performed as required.

As a result, I am never surprised to see organizations scrambling to develop even basic business and IT policies, standards and procedures, in addition to network diagrams, data flow diagrams, application documentation, database schemas, operations documentation and a whole host of other missing or incomplete documentation.

PCI Scope Implications

Lastly, there is the scoping issue related to the DevOps infrastructure.  Not all of it is usually in scope, but that all depends on how it has been implemented.

At the very least, the Jenkins, Puppet or Ansible portion of the infrastructure is going to be in scope for PCI.  The reason is that those components feed the application updates into the cardholder data environment (CDE).  So those are considered “Connected To” systems and must be properly configured and secured to be PCI compliant.

Because these CI/CD solutions are “Connected To” systems, who has access to Jenkins, Puppet, et al. becomes problematic.  As I spoke of earlier, because of poor segregation of roles in the Active Directory system, it can turn out that developers have access to these systems and therefore come into scope for PCI compliance as well.  As a result, the whole concept of development separate from production required by requirement 6.4.1 does not exist.

Obviously, this segregation of development and production problem only gets worse if you drag even more of the development infrastructure into scope.  Therefore, you want to ensure that only the Jenkins, Puppet, Ansible portion of CI/CD is in scope.

This will mean moving Jenkins, Puppet, Ansible, etc. into your “Connected To” or “Shared Services” network segment.  This can create some issues with the rest of the development environment because of firewall rules and access to it through a Jump Server.  So simply moving that solution into the new network segment may not be as simple as it appears.

Development Metadata

Before we go, there is one more topic that needs to be discussed and that is the metadata in all these development solutions.

We have touched on the controls surrounding the development toolsets, but we have not discussed securing these toolsets themselves and the risks they present.  This may seem a bit odd; after all, when have we ever worried about the security of Visual Studio or other integrated development environments (IDE)?  However, with the implementation of CI/CD solutions, all these tools become interlinked and integrated.  Essentially, all these tools make up an automated assembly line for building applications.

But even more importantly, for these tools to work together seamlessly, they need to share metadata about what they are doing.  This metadata might seem like it is benign information, but it is particularly important and controls how the applications are built, tested and deployed.  Essentially the metadata is the “secret sauce” that makes your application work as an application within your organization.

We have already discussed the security controls that will be required around the deployment toolset.  But the rest of the development toolset is also going to require security and controls to ensure that your software factory’s assembly line does not become a huge risk for attacks.  Attacks that could range from maliciously modifying your applications to stopping the assembly line altogether.

Before you think that this is unrealistic, I would again point you to the infamous 2013 Target breach.  I wrote about the breach at the time and walked people through how what was then known would have made the breach possible.  The success of that attack came from compromising the CI/CD process to implement malware that skimmed cards in the point of sale system.  So, the idea that a development environment is a viable attack point is not out of the realm of possibility.  And it gets even worse when you add contract workers into the development process.

So, what should an organization do to address these risks?  I would recommend securing the entire application development environment to PCI configuration standards so that security monitoring of the entire environment can be performed.  That does not mean that all of the environment needs to reside in your “Connected To” or “Shared Services” DMZ with the CI/CD solution.  But I would create another DMZ to contain the rest of the toolset that feeds the CI/CD solution.  Servers should be properly security hardened and monitored as though they are in scope for PCI compliance even though they are not.

There you have it.  The basics of how Agile and PCI can coexist.

17 May 2020

DevOps And PCI – Part 1

DevOps is all the rage in organizations that develop applications.  The move to become “Agile” through the implementation of methodologies such as Scrum to replace the traditional waterfall SDLC is ongoing in most organizations.  But these changes can create compliance issues with the PCI standards regarding software development.

Understanding The Terminology

First and foremost, we need to address the terminology surrounding DevOps.

But before we talk about those specific terms, we need to address the elephant in the room which is “Agile”.  The Agile approach to development traces its history back to early 2001 when a group of developers met at a Utah ski resort.  The result of that meeting was ‘The Agile Manifesto’.  However, the roots for Agile were sown even earlier as application development became unable to keep pace with business changes starting in the late 1980s.

The important thing to remember about Agile is that it is not a methodology.  It is merely a set of values (4) and principles (12) related to the development of software.  The Agile Manifesto never describes a roadmap or steps to follow as to how those values and principles should be used.  So, to refer to Agile as a methodology is a misnomer, but you will constantly encounter it being referred to as though it were one.

Interestingly enough, the methodologies used with the Agile approach were actually developed before Agile.  Of the number of them that sprang up in the 1990s, Scrum seems to have won out.  Scrum was one of many methodologies, such as Kanban, Crystal Clear, Extreme Programming, Feature Driven Development and Dynamic Systems Development Method, that came out at that time to address the delivery of software solutions in a timelier manner.  But while Scrum is the most followed methodology, in practice it is often combined with elements of these other methodologies, such as Extreme Programming (XP).

Scrum involves three types of roles.

  • Product Owner: The Product Owner needs to be a person with vision, authority, and availability because they are responsible for continuously communicating the vision and priorities to the development team.
  • Scrum Master: The Scrum Master is not a project manager.  The Scrum Master’s primary responsibility is to remove any impediments that are obstructing the team from achieving its sprint goals.  The Scrum Master also is the primary contact with the Product Owner.
  • Team: The Scrum team is responsible for completing the work.  For application development, a Scrum team can contain anywhere from three to nine members.  For software projects, a typical team includes a mix of software engineers, architects, information security personnel, programmers, analysts, QA experts, testers, and UI designers.  The team is responsible for determining how it will accomplish the work to be completed.

The final term from Scrum that needs to be defined is Sprint.  A Sprint is a project of one month or less in duration that results in a releasable increment of a product, in this case, an application or application enhancements.  When a Sprint’s horizon is too long, the definition of what is being built may change, complexity may change, and risk may change.  The concept of Sprints is to enable predictability by ensuring inspection and adaptation of progress toward a Sprint Goal at least every calendar month.  Another benefit is that Sprints limit risk to one calendar month of cost.

Once defined, some of the key characteristics of Sprints are:

  • No changes are made that would endanger the Sprint Goal;
  • Quality goals do not decrease; and,
  • Scope may be clarified and re-negotiated between the “Product Owner” and “Team” as more is learned.

With these terms behind us, let us now turn to DevOps.

DevOps is a merging of development and operations staff to work together to develop and implement solutions that will essentially run 24x7x365 with (hopefully) minimal operational interaction.  DevSecOps merely formally adds in the collaboration of information security into that mix even though information security should be included in DevOps as well.

The final topic in our discussion of terminology regards the tools used by DevOps.  While there are a number of vendors in this space, the “Big Dogs” at the moment are Atlassian, with its tools Confluence and Jira; Microsoft’s GitHub; and Jenkins, the open source tool commercially backed by CloudBees.

  • Confluence is used as a documentation repository for such items as policies, standards and procedures as well as business, application, network and other important documentation.
  • Jira is used as a project and change management ticketing system.
  • GitHub is used to manage the versions of applications.
  • Jenkins is used for automating the build, testing and deployment of applications into production.

All of these tools have competitors from vendors such as ServiceNow, Puppet, Ansible, Chef, Google, and other commercial and open source development and operations tool vendors.  Regardless of vendor, all solutions seem to have these three basic components of documentation repository, project/change management and deployment automation.  It is also not unusual to find multiple tools in place particularly with Jenkins, Ansible, Puppet and Chef.

Segregation Of Duties

The first and most contentious issue that comes up with DevOps is the segregation of duties.  This is typically one of the biggest discussions/arguments an assessor/auditor will get into regarding DevOps: Agile fans argue that segregation of duties is inconsistent with Scrum, Agile and DevOps.  Their primary reason will be to point to the fact that nowhere in any of the documentation regarding these topics is the term ‘segregation of duties’ or the requirement to ensure segregation of duties.  They would be correct in that regard.

Unfortunately, corporate life is not driven by Scrum, Agile or DevOps in a vacuum.  Corporations are still required to comply with laws, contracts and regulations promulgated on them by government entities, business partners, financial institutions and other parties regardless of what is in their methodologies and approaches.  So, while the argument can be made that the methods and approaches do not state anything on the subject, there are other documents, contracts and requirements that do state it is required.

Whether we are discussing PCI, NIST, SOC, COBIT or any other recognized audit or compliance program, segregation of duties between roles is, and always has been, required.  It is one of the key principles for ensuring that people do not have the ability to corrupt a process because they have too much control over it.  Time and again, the failure to segregate duties and roles, allowing one person too much control over a process, proves to be one of, if not the, primary root causes of such illicit activities.  The concept behind segregation of duties is that the more individuals involved in a process, the less likely the process can or will be abused.

In DevOps, the issue of segregation of duties gets complicated because it gets extended into the tools used in the process.  The concept of continuous integration (CI)/continuous deployment (CD) relies heavily on tools such as Jira and Jenkins to enable such an approach.  This means that the assessor/auditor needs to look into who has access to these tools and what rights they have to influence the workflows that exist in these tools.

This gets even more complicated by the fact that this requires analysis of user and access control information from tools such as Active Directory, RADIUS and even the tools themselves.  In my experience, it is not unusual to peel the onion on these access controls and reveal that segregation of duties does not really exist as thought, because all roles are granted to everyone in DevOps and the organization is relying on individuals’ honesty to ensure compliance.
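To make the “peeling the onion” exercise concrete, here is a minimal sketch of the kind of cross-check an assessor/auditor ends up doing: take an export of role or group memberships and flag anyone who holds both a development role and a production deployment role.  The role names and membership data are illustrative assumptions only.

# Sketch of a segregation-of-duties check against an access export.
# The role names and the membership data are illustrative only; a real
# review would work from Active Directory, RADIUS and tool-level exports.

DEV_ROLES = {"Developers", "Jenkins-Job-Editors"}
PROD_ROLES = {"Prod-Deployers", "Prod-Admins"}

memberships = {
    "dev.alice": {"Developers", "Jenkins-Job-Editors"},
    "ops.carol": {"Prod-Deployers"},
    "dev.dave":  {"Developers", "Prod-Deployers"},   # conflict
}

def conflicting_users(members: dict[str, set[str]]) -> list[str]:
    """Return users who can both write code and push it to production."""
    return sorted(
        user for user, roles in members.items()
        if roles & DEV_ROLES and roles & PROD_ROLES
    )

if __name__ == "__main__":
    for user in conflicting_users(memberships):
        print(f"Segregation of duties conflict: {user}")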

DevOps also can suffer from a lack of segregation between production, quality assurance (QA), test and development environments.  This is because a lot of organizations that move to DevOps have the mistaken belief that the “Operations” and “Security” components become part of the development group.  The argument will be made that Agile is all about “breaking down silos”.  While that is true, the mistake they make is that Agile and Scrum were not a call to abrogate the knowledge and controls that all of the players involved bring to the table as separate disciplines.  The goal is to make the disciplines work together better to achieve a common goal in a Sprint.

Where this manifests itself most often is that developers have unfettered access to the production environment.  In a DevOps environment, it is not unusual to find developers scattered all throughout the environment.  They are developing code, they are operating production, they are diagnosing bugs, they are everywhere with no delineation of roles and responsibilities.  It is essentially a free for all.  Everyone pitches in where they need to be involved.

This organized chaos is supposedly “controlled” by Jira through its ticketing.  Agile advocates will claim that since everything has a ticket (not always a true statement), they maintain segregation through Jira.  They will show the tickets to the assessor/auditor and point out that there are different names on the ticket for the developer, the QA person, the people who approved promotion, etc.  While this is true, as I described earlier, the access controls will show that virtually everyone offered as evidence of segregation can fulfill any of those roles whenever they so choose.  By definition, that is not segregation of duties because there are no actual controls in place to stop someone from running the whole process.

The bottom line in this discussion is that the segregation of duties controls in an Agile environment are usually illusory.  As such, it is management’s responsibility to periodically ensure that segregation of duties controls are truly implemented and testable.

In the next post we will discuss documentation, change control and PCI scope in an Agile environment.

11 Mar 2020

Remote Assessment Guidance Issued

The PCI SSC has issued guidance in response to the Covid-19 pandemic regarding the conduct of on-site fieldwork for PCI assessments.  Their blog post can be found here.

Given that governments around the world are saying that this pandemic could be ongoing until the summer, I would suspect that the Council will have to issue better guidance than what is in their latest blog post.  So I would expect more to come on this topic in the coming weeks.

03/19/2020 UPDATES: The Council has set up a Web page to track any Covid-19 updates.  Also, remote assessment guidance has been provided and remote assessments are allowed given the current pandemic conditions.  The key is to discuss a remote assessment with the banks and/or brands involved.

22 Apr 2019

More On The NIST Password Standard

Apparently, I touched a nerve with my post on the National Institute of Standards and Technology (NIST) password standards discussed in Special Publication (SP) 800-63B.  As a result, I thought I would walk you through my logic by using a compensating control worksheet (CCW) approach, since this is what you will have to do for your PCI assessment if you choose to rely on the NIST guidance.

[SPOILER ALERT: It is possible, but I doubt it is worth all the effort.]

First, let us review what a CCW needs in order to comply with the Council’s requirements, from Appendix B of the Report On Compliance (ROC) Reporting Template.

“Compensating controls must satisfy the following criteria:

  1. Meet the intent and rigor of the original PCI DSS requirement.

  2. Provide a similar level of defense as the original PCI DSS requirement, such that the compensating control sufficiently offsets the risk that the original PCI DSS requirement was designed to defend against. (See Guidance Column for the intent of each PCI DSS requirement.)

  3. Be “above and beyond” other PCI DSS requirements. (Simply being in compliance with other PCI DSS requirements is not a compensating control.)

  4. Be commensurate with the additional risk imposed by not adhering to the PCI DSS requirement.”

QSAs can get stuck on the third point, and the Council also seems to focus on that point in their Assessor Quality Management (AQM) reviews because QSAs miss it so often.  However, the other three criteria are also very important to apply to the compensating controls being discussed.

Now let us focus on section 4 of the CCW, where the organization being assessed is required to describe the controls they have in place that go “above and beyond” the requirement being compensated for, which in this case is requirement 8.2.4, requiring password changes every 90 days or less.  I picked that requirement because it is the one most often cited by clients as the reason they want to use the NIST standard.  Most want to go to a 12-month password change interval.  These controls are going to come from pages 13 through 15 of SP 800-63B.

  • All passwords are required to be [value greater than eight] characters or greater in length.
  • When passwords are modified, they are assessed against [name of credential verification source/service] and [name of dictionary word list used]; repetitive or sequential characters and context-specific words are also checked, and the password is rejected if any are found.
  • Authentication is only conducted using [encrypted authentication protocol(s)].
  • Passwords are hashed and salted for storage using [hash algorithm and appropriate salting technique].
  • [Name of password vault solution] is used to securely store and generate strong passwords that meet the aforementioned criteria.
  • A password strength meter is provided to assess the password against these aforementioned criteria to indicate to the user when they have met all of the criteria.

To comply with the NIST guidelines for passwords an organization needs to implement all of these controls.
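As a rough illustration only, the sketch below shows what the password-change checks in that list might look like in code.  The banned word list, the context-specific terms and the breached-password lookup are placeholder assumptions; a real implementation would use the dictionary and credential breach service named in the CCW.

# Sketch of NIST SP 800-63B style checks applied when a password is changed.
# The word lists and the breached-password lookup are placeholders.

BANNED_WORDS = {"password", "qwerty", "letmein"}      # stand-in dictionary
CONTEXT_WORDS = {"acme", "acmecorp", "pciguru"}       # org/user-specific terms

def is_breached(candidate: str) -> bool:
    """Placeholder for a lookup against a credential breach source/service."""
    return candidate.lower() in {"monkey123456", "welcome12345"}

def has_run(candidate: str, length: int = 4) -> bool:
    """Detect repetitive (aaaa) or sequential (abcd, 1234) runs."""
    for i in range(len(candidate) - length + 1):
        chunk = candidate[i:i + length]
        if len(set(chunk)) == 1:
            return True
        deltas = {ord(b) - ord(a) for a, b in zip(chunk, chunk[1:])}
        if deltas in ({1}, {-1}):
            return True
    return False

def check_password(candidate: str) -> list[str]:
    problems = []
    if len(candidate) < 12:
        problems.append("shorter than 12 characters")
    lowered = candidate.lower()
    if any(word in lowered for word in BANNED_WORDS | CONTEXT_WORDS):
        problems.append("contains dictionary or context-specific word")
    if has_run(candidate):
        problems.append("contains repetitive or sequential characters")
    if is_breached(candidate):
        problems.append("appears in a known credential breach")
    return problems

if __name__ == "__main__":
    for pwd in ("Summer1234!", "correct-horse-battery-staple"):
        issues = check_password(pwd)
        print(pwd, "->", issues or "accepted")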

So how do they match up with the four criteria for a CCW?

Above and Beyond

This is the easiest one to tackle because almost all of the controls are above and beyond.  What?  Almost?

There are a couple of controls that do not meet the above and beyond test.

The first is the easiest to discuss and that is “Authentication is only conducted using [encrypted authentication protocol(s)].”  That control does not pass above and beyond because it is already required by requirement 8.2.1, which requires that credentials be transmitted using strong cryptography.  As such, that control cannot be relied upon in the CCW and must be removed.

The second one is the “Passwords are hashed and salted for storage using [hash algorithm and appropriate salting technique]” control.  This discussion gets sticky because requirement 8.2.1 also states that storage of credentials must use strong cryptography, which is not very specific.  I would argue that any reasonable implementation here is already required by requirement 8.2.1 and therefore this control is also ineligible to be used.
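For what it is worth, here is a minimal sketch of the kind of salted, iterated hashing such a control describes, using PBKDF2 from Python’s standard library.  The iteration count is an assumption; pick a value based on current guidance and your own risk assessment.

# Sketch of salted, iterated password hashing for storage (PBKDF2-HMAC-SHA256).
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumption; tune to current guidance and your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

if __name__ == "__main__":
    salt, stored = hash_password("correct-horse-battery-staple")
    print(verify_password("correct-horse-battery-staple", salt, stored))  # True
    print(verify_password("wrong-guess", salt, stored))                   # False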

Only the password length is specified by the PCI DSS and as long as a value greater than eight is picked, that meets above and beyond.  However, we need to discuss this value further under intent and rigor.

All of the remaining controls are not specified in the PCI DSS, so those are all considered above and beyond.

Intent and Rigor

For intent and rigor, we need to look to the guidance provided for requirement 8.2.4.

“Passwords/passphrases that are valid for a long time without a change provide malicious individuals with more time to work on breaking the password/phrase.”

Remember, we are looking at a 12 month password change interval, so we need to consider intent and rigor in the context of controls that will allow a password to remain unchanged for 12 months.

So let us look at the length attribute again.  Nine characters in today’s world without any complexity requirements can result in passwords able to be cracked in minutes.  Ten characters can be done in hours.  Only when we get to 12 characters and above does the time to crack stretch to 12 months or greater.  As such, I would argue that you need passwords of 12 characters or longer to pass the rigor requirement for justifying a 12 month change interval.
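The back-of-the-envelope arithmetic behind those length claims looks something like the sketch below.  The character set and guess rate are assumptions (no complexity rules, and a well-resourced offline attacker against a fast hash); real figures vary enormously with the hashing algorithm and hardware.

# Back-of-the-envelope keyspace arithmetic for lowercase-plus-digits passwords.
# The guess rate is an assumption for a fast offline attack; real rates vary
# enormously with the hash algorithm and the attacker's hardware.

CHARSET = 36                 # lowercase letters + digits, no complexity rules
GUESSES_PER_SECOND = 1e11    # assumed offline guess rate against a fast hash

for length in (9, 10, 12):
    keyspace = CHARSET ** length
    seconds = keyspace / GUESSES_PER_SECOND
    print(f"{length:2d} chars: keyspace {keyspace:.2e}, "
          f"~{seconds / 3600:,.1f} hours (~{seconds / (365 * 86_400):.2f} years) to exhaust")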

Passwords are assessed against a dictionary word list, context specific words and repetitive/sequential characters.  The key to this part of the second bullet is the extent of the dictionary word list.  The dictionary needs to be sufficiently large to provide the control that NIST desires.  The QSA is going to need to know how large the dictionary is, what is used as a reference to ensure that the dictionary has the appropriate words in its list and how often the dictionary is updated.  That would likely mean that these controls would need to be separated from the credential breach service control so that those additional controls can be documented in the CCW.  This would all have to be backed up by a proper risk assessment documenting that the review and update intervals of the dictionary are appropriate and mitigate the risks.

Assessing passwords against a credential breach source/service introduces an interesting twist to ensuring the security of a password.  But it also introduces an interesting discussion of the intent of requirement 8.2.4, which is to ensure the security of credentials.  NIST only requires that credentials be tested at the point they are changed.  But what happens if, sometime during the 12 month interval, those credentials are compromised?  The intent of requiring a 90 day change interval was to reduce the risk of credentials remaining compromised for an extended length of time by changing one of those credentials at least every 90 days.

But NIST does not require monitoring of the credentials other than when they change.  Without constant monitoring of the credentials from a compromise service, how do you know when they need to be changed which is the intent of the change interval?

The PCI DSS does provide a bit of guidance on how the Council would likely approach this issue.  For reference I point you to requirement 3.6.5 which discusses this in regard to encryption keys that are suspected to have been compromised.  The reason I believe this is relevant here is that the PCI DSS does not require specific change intervals for encryption keys.  I would argue that the PCI DSS would view passwords changing at long intervals as requiring the same sort of control.  If the credentials are ever suspected of being compromised, then they should be changed.

Which brings up an interesting dilemma.  How do you monitor something that you have hashed and cannot recover?  Do we really want to have encrypted passwords in our authentication systems so that we can monitor them for compromise?  I seriously doubt that would be a good practice.

So with that said, we would need some sort of monitoring and alerting capability to warn if credentials do appear to be compromised, such as monitoring for excessive logons, logons when the user is out of the office, logons from systems outside of the user’s area or building, or other characteristics that would indicate credential compromise.  These controls would have to be added to the monitoring of the credential breach source to show that credentials are changed when suspected of being compromised.
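A minimal sketch of that kind of logon anomaly monitoring is shown below.  The log record layout, office hours, network prefix and thresholds are all illustrative assumptions; in practice this analysis would come out of your SIEM with far richer context.

# Sketch of a simple logon-anomaly check over authentication log records.
# The record layout, office hours and thresholds are illustrative assumptions.
from collections import Counter

OFFICE_HOURS = range(7, 19)      # assumed 07:00-18:59 local time
KNOWN_NETWORK = "10.20."         # assumed prefix for the user's usual subnet
FAILED_THRESHOLD = 5

events = [  # illustrative records: (user, hour, source_ip, success)
    ("alice", 9, "10.20.1.15", True),
    ("alice", 2, "203.0.113.7", True),    # off-hours, unfamiliar network
    ("bob", 10, "10.20.1.22", False),
    ("bob", 10, "10.20.1.22", False),
]

failed = Counter(user for user, _, _, ok in events if not ok)

for user, hour, ip, ok in events:
    if ok and hour not in OFFICE_HOURS:
        print(f"ALERT: {user} logged on at {hour:02d}:00, outside office hours")
    if ok and not ip.startswith(KNOWN_NETWORK):
        print(f"ALERT: {user} logged on from unfamiliar address {ip}")

for user, count in failed.items():
    if count >= FAILED_THRESHOLD:
        print(f"ALERT: {user} has {count} failed logons")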

Similar Level of Defense and Be Commensurate

At this point, I think we have covered these two requirements for a CCW with our discussions about above and beyond and intent and rigor.

Where Are We With The CCW Controls?

Based on our discussion, here is what I think section 4 of the CCW would now have to look like.

  • All passwords are required to be [value of 12+] characters or greater in length.
  • When passwords are modified, they are assessed against [name of credential verification source/service]
  • Passwords are monitored for excessive logons, excessive failed logon attempts, logons when the user is out of the office and logons that occur from systems outside of the user’s area or building to provide an indication of credential compromise.
  • When passwords are modified, [name of dictionary word list/source used], repetitive or sequential characters and context specific words are checked, and the password is rejected if any of these characteristics are found. The dictionary is updated every [month/quarter/six months] and reviewed [semi-annually/annually] to ensure the dictionary contains an appropriate list of words.
  • [Name of password vault solution] is used to securely store and generate strong passwords that meet the aforementioned criteria.
  • A password strength meter is provided to assess the password against these aforementioned criteria to indicate to the user when they have met all of the criteria.

After looking at these controls, I would venture to say it is simpler and easier to meet the PCI DSS requirements than to implement these controls and make them work consistently and effectively.  Because remember, this is just section 4 of the CCW.  For section 5, you have to produce evidence that all of these controls are in place and working as designed.  Never mind section 6 where you explain how you maintain all of these controls.

So for those of you bent on using NIST, there you have it but I doubt it is worth the effort you think it is.  And this does not address the CCWs you will also need to write for 8.2.3 because you no longer enforce complexity and 8.2.5 because you no longer track the last four passwords used.  But those could be another post.  Yeah, I do not think so.  Not worth the effort because those CCWs will revolve around the controls in this one.

As I said in my original post, it might be better to wait for the Council to issue their guidance in v4 of the PCI DSS.

UPDATE: The PCI Council has created an FAQ to address this situation. https://pcissc.secure.force.com/faq/articles/Frequently_Asked_Question/Can-organizations-use-alternative-password-management-methods-to-meet-PCI-DSS-Requirement-8

01 Mar 2019

Will The Council Kill Off TLS?

On February 6, 2019, a technical paper was published describing a newly identified attack on TLS 1.2 and 1.3.  Of course, the first thing that a lot of us wondered was, “Will the PCI SSC now kill off TLS 1.2 and 1.3?”

Before panic sets in, I am guessing that TLS 1.2/1.3 will not go away like SSL v3 and TLS 1.0/1.1 did before.  The reason is that this is just another variation of the Bleichenbacher attacks that seem to crop up every so often regarding SSL and TLS.  What is different about this attack is the new side-channel leak approach that was used.

The risk in this attack is best described from the researchers’ technical paper.

“… even though the use of RSA in secure connections is diminishing (only ≈6% of TLS connections currently use RSA [1, 51]), this fraction is still too high to allow vendors to drop this mode. Yet, as we show in Section VI, supporting this small fraction of users puts everyone at risk, as it allows the attacker to perform a downgrade attack by specifying RSA as the only public key algorithm supported by the server.”

The problem is all related to the use of RSA PKCS#1 v1.5 in TLS.  The rest of the protocol is just fine.  So, at worst, I could see the Council recommending that RSA PKCS#1 v1.5 no longer be allowed to be used.
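If an organization did decide to shut off RSA key exchange ahead of any formal guidance, the change is mostly server configuration.  As a rough sketch, Python’s ssl module can build a server context that only offers forward-secret (ECDHE) cipher suites for TLS 1.2, so the RSA PKCS#1 v1.5 key-exchange path is never negotiated; the certificate and key paths are placeholders.

# Sketch: build a server-side TLS context that only offers ECDHE key exchange,
# so the RSA (PKCS#1 v1.5) key-exchange path targeted by these attacks is
# never negotiated. Certificate/key paths are placeholders.
import ssl

def make_server_context(certfile: str, keyfile: str) -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # SSLv3 / TLS 1.0 / 1.1 already banned
    # Offer only forward-secret (ECDHE) suites for TLS 1.2. TLS 1.3 suites
    # never use RSA key exchange, so they are unaffected.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!eNULL")
    ctx.load_cert_chain(certfile, keyfile)
    return ctx

# Usage (paths are placeholders):
# ctx = make_server_context("server.crt", "server.key")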

Which reminds me of years ago when the US banking regulators came out and stated that by a certain date, Internet Explorer 6 would no longer be allowed to be used for internet banking.  According to the banks at the time, such a move by the regulators would create a support nightmare or, even worse, kill off internet banking.  However, the date came, the banks turned off IE6 and little happened.  Yes, there were a few days of higher than normal support calls about customers not being able to get into their accounts, but those quickly died off.

The issue with RSA PKCS#1 v1.5 is similar to the banking story.  At what point do we draw the line on these sorts of problems?  10% of users?  2% of users?  1% of users?  In this case, 6% of internet users are putting the remaining 94% at risk.  Is it worth it?  Each organization will have to determine if that risk is acceptable and justify why.

03 Nov 2018

Open Source

One of the questions we received at the last PCI Dream Team session was:

“What about open source for 6.5?”

I am sure the person asking wanted to know whether open source payment solutions must comply with the PCI DSS requirements in 6.5.x.

The quick and simple answer is of course, ‘Yes’!  Why would it not?  It is source code after all, so therefore it must comply with the requirements in 6.5.x (as well as other requirements in section 6 and throughout the PCI DSS).  The PCI DSS does not differentiate between different sources of application code.  For PCI compliance purposes, code is code is code, regardless of the source.

Now what does come into play is whether or not the PA-DSS validation standard applies to an application.  As PA-DSS relates to open source, I wrote about that over eight years ago, but it is still relevant today.  For the purposes of this post, I am not talking about PA-DSS validated applications.

The next question a QSA typically gets is, “Well 6.5 only applies to internet-facing payment applications, right?”

Wrong!  Any payment application needs to meet the requirements in 6.5.x whether it is internet-facing or internal-facing.  It also does not matter whether a browser is involved or not, although a significant number of the requirements in 6.5.x are related to browser-based applications.

But ensuring open source is PCI compliant goes beyond just 6.5.x.  There are other requirements that, at a minimum, must be applied as well.  Not every requirement in a section or group of requirements may apply, but some will need to be covered depending on how the application works.

  • Section 3 related to encryption of stored data and encryption key management;
  • Section 4 related to encryption of communications;
  • Requirements 6.1 and 6.2 for patching and vulnerability management. This can become problematic for open source because as time goes on applications can develop vulnerabilities that the developer community does not address.  This is most likely because the community moved on and your application became an orphan;
  • Requirements 6.4 for application development. Remember, just because your organization did not develop the application, if it is not PA-DSS validated, then it is your responsibility to ensure the code securely processes, stores or transmits sensitive authentication data and/or cardholder data;
  • Requirement 6.6 is also in play regardless of whether or not the application is browser-based. At a minimum, code reviews must be performed.  If the application is browser-based, then you can add in a Web application firewall (WAF) for additional security;
  • Sections 7 and 8 related to access control and user management; and
  • Section 10 related to application log data.

Remember, every time a new release of your open source solution becomes available, you have to go through all of this all over again if you intend to use the new release.

So those of you thinking that you can somehow leverage open source to reduce your PCI compliance footprint, think again.  All you have done is outsourced the development of your solution.  The rest is still on you.  In the end, it is really not much of a savings.

08 Oct 2018

2018 North American PCI Community Meeting Thoughts

It was an interesting time in Las Vegas this year.  Part of that is simply due to being in Las Vegas.  But part of it was that the Community Meeting seemed to be devoid of the usual anticipation and expected pronouncements.  While there were announcements for various standard updates, these were well anticipated and were not a surprise.  Some of the slide decks have been released, but others will not be available until the European Community Meeting is held in a few weeks.

While there were a number of good presentations this year, in my very humble opinion, the best session was the Assessor Session at the end of the meeting.  The good news this year was that a lot of QSAs and ISAs made sure to stick around for this session.  There were a number of good questions asked after the Council’s presentation, but I will wait for the Council’s transcript to be published before weighing in on those.

As in years past, the Council had a presentation at the start.  The following are highlights from that presentation.

AQM Program Highlights

As usual, the AQM team did a bang-up job pointing out common issues found in the various assessment types they review.

On the PA-DSS side of the ledger, a lot of PA-QSAs are having issues with requirement 5.1.6.b regarding application least privilege.  The Council clarified that what they are looking for in this requirement is proof that the application does not run as ‘root’, ‘administrator’ or some other default privileged account in order to run properly.

For P2PE assessments, there have been issues regarding when a double length 3DES key can be used.  The Council explained that a double length 3DES key is only allowed when using derived unique key per transaction (DUKPT).  All other uses must be triple length keys to be in compliance with P2PE.

Apparently, QSAs and their QA minders are totally missing what is meant by “describe how”.  When describing “how”, a QSA must describe all of the procedures used to determine the requirement was satisfied, as well as how those procedures prove the requirement was met.

QSAC QA manuals still are not covering topics such as evidence retention and destruction, security incident response plans and code of conduct policy.  The Council reminded everyone to make sure all topics in the QSA Qualifications Requirements document are covered.

Compensating controls were a continuing problem area and that should not be a surprise.  I am constantly fascinated when I receive a ROC for proof of PCI compliance performed by another QSAC and get to see what passes for a valid compensating control worksheet (CCW) at other firms.  Apparently ‘intent and rigor’ of the requirement and ‘above and beyond’ are foreign phrases to a lot of QSAs.  Never mind the fact that the controls used, tested and maintained are usually vague in description.  The Council pointed people to their Portal for remedial training of QSAs that cannot comprehend writing a CCW.  I have written a number of posts on compensating controls.  If you want to write good CCWs, start here for the most current post and it will point you to prior posts.

The Council got some interesting questions from QSAs over the year.  The first one is one that a lot of clients ask us, “Do you really have to come onsite?”  Yes, an onsite visit by the QSA is actually required.  However, how long a QSA needs to be onsite can vary from as little as a couple of days for a long-time client to a week or more for a new client.  Onsite visits can be supplemented by video meetings when needed.  Not unusual these days when a client has worldwide operations and not everyone is located at headquarters or will not be available when the QSA is onsite.

The other question was regarding ROC and AOC dates.  How people keep messing these up is beyond me, but as with the CCWs, I see a lot of ROCs and AOCs out of other firms where the dates on the documents are not consistent.  Basically, the last step in any QSAC’s document finalization process should be to set all of the dates in the ROC and AOC so that they match.  That way you will avoid this problem.

There was a brief discussion of the Software Security Standard (S3) that will replace the PA-DSS.  Most of the discussion revolved around the proposed timeline.  The standards themselves will be published sometime before year end.  Reporting materials will be published around mid-2019 with training commencing in the Fall of 2019.  The big deadline is that PA-DSS Reports On Validation (ROV) will only be accepted through mid-2020 requiring all reports going forward to be under the S3.  That will mean that by mid-2022, all PA-DSS validated applications will move to “Acceptable for Pre-Existing Deployments”.

Finally, SSL and early TLS got a discussion.  Somehow the word has not gotten around that if a company still uses SSL and/or early TLS, there must be a compensating control developed for the relevant requirements since Appendix A2 no longer exists in v3.2.1 of the DSS.  They also reminded everyone that having SSL or early TLS is NOT an automatic fail.  However, vulnerability scans will have to have explanations developed to justify the use of the protocols, as well as what is done to mitigate their use.

Card Production Security Assessor Program

If you were not aware, the PCI SSC took over the various card brands’ card production programs and created a single common program similar to what the Council did with the Data Security Standard back in 2006.

In response the Council is creating a new assessor program in 2019.  Card Production Assessor Companies (CPAC) will not need to be existing QSACs nor will assessors need to be QSAs.  The new assessor training program will be rolled out next year for this standard.  The Council did note that existing card production assessors will be somehow recognized by the new program but did not specify how that recognition would be implemented.

As with QSACs and QSAs, the Council will maintain a database of CPACs and qualified card production assessors.

PIN Assessor Program

As with card production, the Council has also been responsible for PIN standards for a few years now.  As a result, the Council is developing a program for creating PIN Assessor Companies and PIN Assessors.

There will be no need for the PIN Assessor Company to be a QSAC nor will assessors be required to be QSAs.  This program will also start in 2019.

Global Executive Assessor Roundtable (GEAR)

This is a new group that was established this year.  Its role is to provide a direct communication channel between the PCI SSC and 20 qualified security assessor companies’ (QSAC) senior executive leadership.  This group met for the first time a few days before the start of the Community Meeting.  Each member of GEAR serves for a two-year term.

The 20 QSACs on the GEAR are:

  • @sec
  • Advantio
  • Coalfire
  • Control Case
  • Foregenix
  • IBM Security
  • isec
  • K3DES
  • nccgroup
  • Protiviti
  • PSC
  • RSM
  • Security Metrics
  • Shellman
  • SISA
  • Sysnet
  • Trustwave
  • UL
  • usd
  • Verizon

As usual, it was great catching up with everyone and meeting new Guru fans.  I really appreciate all of the great comments about the blog.  Even though I see the statistics for the blog, it still amazes me how many people read it and appreciate it particularly when you meet so many of them in person.  It is very humbling.

Hopefully I will see you all next year in Vancouver.

27 Jan 2018

Pre-Authorization And Post-Authorization (Part 1)

Welcome to a new year.  I have had a number of interactions with a variety of people over the previous year and it has become obvious that the concepts of pre-authorization and post-authorization data are not clear to a lot of people.  These two concepts are a key part of understanding PCI compliance.  I will start with pre-authorization in this post and have a separate post for a discussion of post-authorization.

Pre-Authorization

Where pre-authorization (aka “pre-auth”) typically comes up is when someone asks, “How does [pick your online merchant] store a customer’s payment data and still be PCI compliant?”

Before we get to that question, we need to define what we mean by “pre-authorization”.  Pre-authorization is that time when a merchant has a customer’s sensitive authentication data (SAD) or cardholder data (CHD) but has not yet processed it for payment.

For most merchants, that time between collecting the SAD/CHD and processing it is measured in seconds.  For card present (CP) transactions, the SAD can be in the form of chip or magnetic stripe data.  For card not present (CNP) transactions, it typically includes the cardholder name, primary account number (PAN), expiration date and CVV/CVC/CID.  Regardless of transaction type, the data is sent off to either be approved or declined in seconds.

However, there are situations where that does not always happen that quickly.  Mail order telephone order (MOTO) and facsimile orders are the most obvious examples that can extend the time between receipt of the CHD and processing to minutes, hours or even days and weeks.

But there are some not necessarily obvious situations where processing delays occur.

My first example of delay is when you go to fill your car with fuel.  When you swipe your card to pump the fuel, the system that manages the payment process will pre-authorize the purchase and then temporarily store the SAD until you finish pumping and hang up the hose to complete the transaction.  When you complete the transaction at the pump, the system sends through the actual charge and securely deletes your SAD from the system.  Depending on the size of your vehicle’s fuel tank and how close to empty you were, the system could have your SAD for quite a few minutes.

Another example is for the hospitality industry.  In the hospitality industry, a reservation typically does not cause a charge until a customer checks out even though they are required to have a card on file to hold the reservation.  When a customer checks into the property, the hotel’s billing system records the SAD and may also pre-authorize charges, but the actual card transaction is not processed until the customer checks out.  As a result, hotels can have SAD on file for the length of a traveler’s stay.  In fact, I have encountered SAD in hospitality systems that have been stored for more than a year due to reservations for special occasions such as graduations, birthdays, family reunions and anniversaries.

But getting back to the original question, the example that usually draws the most questions is when you, as a customer, store your card information with a merchant for future purchases.  These entities store your payment information (pre-authorization) in their applications so that you or they can quickly pay for your purchases without constantly re-entering your payment information.  These applications are not always part of a payment application, so they may or may not be PA-DSS validated.  However, when encountering them, I use the PA-DSS standard to ensure they process, store and transmit the SAD/CHD securely.  In addition, as a customer, you should have explicitly approved of the merchant storing your payment data and know how they will use that data.

Last, but not least, another great example of pre-authorization data is eWallet applications such as Google Pay and Apple Pay.  eWallets are just an electronic version of a consumer’s physical wallet.  eWallets are not regulated by the PCI standards or the card brands, nor are they required to be PA-DSS validated.  That is not to say these eWallet applications are not secure; it is just that no one is independently validating that they are secure.  That said, I always instruct developers of eWallet applications (or any pre-authorization applications) to follow the PA-DSS for developing a secure eWallet application.

The most confusion I encounter over pre-authorization data typically occurs regarding SAD/CHD that an organization receives via email or instant messaging.  A lot of QSAs get their undies in a bunch when this happens and point to requirement 4.2 as the reason why this is unacceptable.  As a refresher, requirement 4.2 states:

“Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, chat, etc.).”

The operative word in 4.2 is “send”.  Requirement 4.2 says nothing about receiving PANs by these methods.  That does not mean that the Council recommends receiving PANs via email, IM or similar methods.  It is only recognition of what goes on in the real world.  There will always be a small percentage of people that will send their cardholder data insecurely and there is little an organization can do to stop it.

Yes, you can put a data loss prevention (DLP) solution in the middle of all of these messaging technologies and catch the bulk of the offenders.  But then what?

I have some clients who have taken this approach: the DLP securely deletes the message and triggers a reply to the sender stating that they do not accept payment card information via this communication channel, then explains all of the appropriate and approved ways a customer can communicate SAD/CHD.

I have other clients that use the DLP but do not delete the message.  They explain that in this one instance, they will process the transaction because they are all about the customer experience.  They have a process that they follow to handle the message and then securely delete it.

To keep your email, IM and other messaging systems out of scope, the Council has told QSAs that organizations must have a policy in place that says they never encourage customers to use these messaging channels for communicating SAD/CHD and to make sure that organizations have a process to remove the SAD/CHD as soon as possible from those systems.  That typically involves the printing of the message, deleting the message from the system(s) and then securely destroying the printed message once the transaction is processed.  This is all considered “incidental contact” in the eyes of the Council and the QSA can then consider the system out of scope as long as they can satisfy themselves that the manual process is reliable.
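For reference, the detection step those DLP solutions perform is conceptually simple: find card-number-shaped digit strings and confirm them with the Luhn check.  A minimal sketch, not a substitute for a real DLP product:

# Minimal sketch of how a DLP-style filter spots likely PANs in message text:
# find 13-19 digit candidates (allowing spaces/dashes) and confirm with Luhn.
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_ok(digits: str) -> bool:
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text: str) -> list[str]:
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits)
    return hits

if __name__ == "__main__":
    msg = "Hi, please charge my card 4111 1111 1111 1111 for the order."
    print(find_pans(msg))   # ['4111111111111111'] - a common test PAN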

The bottom line is that all of these situations involve pre-authorization data and pre-authorization data can include everything recorded on a card’s track or chip.  If a merchant does store the pre-authorization data for the convenience of their customers, they are obligated under the PCI DSS to store it separately, away from post-authorization data and to protect it with the same rigor as post-authorization data, i.e., encrypted, extremely limited access, logging, monitoring, etc. 

That is a key point that is often missed.  Pre-authorization data must be stored separately and away from any storage of post-authorization data.  That means that separate instances of databases need to be used on separate servers.  The rationale for this is no different than keeping key encrypting keys (KEK) away from data encrypting keys (DEK).  It is to ensure that in the event of a breach of post-authorization data, it does not readily lead to a breach of pre-authorization data.  It also allows for more rigorous controls over the pre-authorization data.

One final point regarding pre-authorization data that I made earlier, but it needs to be reiterated.  If a merchant intends to store pre-authorization data, I highly recommend that you have a legal agreement in place between your organization and your customers that explains why your organization is retaining this information and the business purpose(s) for which the information will be used.  That can be similar to a license agreement that the user either signs or clicks “Okay” online to acknowledge their approval.

In a future post I will discuss the world of post-authorization where the PCI standards were originally focused.

29 Sep 2017

What Are You Really Interested In?

As QSAs, we hear this comment all of the time.

“PCI is all about compliance, not security.”

The implication being that the person talking is interested in actually securing their environment not just being PCI compliant.

Yet as the conversation goes on, we get into esoteric discussions regarding scope and how scope can be minimized.  Not necessarily a bad thing, but as these discussions continue, an underlying theme becomes apparent.

This conversation eventually leads to the QSA asking, “What are your drivers that are making you so concerned about minimizing scope?”

The inevitable answer is, “Because, we want to minimize the cost of and/or difficulty in implementing (in no particular order) information security, increasing information security personnel, how many devices we vulnerability scan and penetration test, critical file management tools, anti-virus licenses, devices needing log aggregation and analysis, [insert your security tool/product/device/appliance/widget here].”

It is at that point it becomes painfully obvious that the organization is not at all interested in security.  In fact, they do not give a damn about security.  Their only interest is in checking off the PCI compliance box and moving on to the next annoying compliance checkbox on their list.

I am sure a lot of you are questioning, “How can you be saying this?”

Because, if the organization were truly interested in security, all of the things they mention in their minimization discussion would already be installed in their production environment, if not QA and test environments.  That is right.  They would already be installed and not just on the PCI in-scope stuff.  It would already be installed everywhere in those environments.

Why?

Because all of these security tools and methods are part and parcel of a basic information security program that follows information security “best practices”.  They are not special to PCI, they are required for any successful information security program such as HIPAA, FFIEC, FISMA, HITRUST, etc.

People seem to think that the PCI SSC and the card brands came up with the PCI DSS requirements by arbitrarily pulling the requirements out of thin air.  In fact, I have had people insinuate that the PCI standards are just there for the banks to be mean to merchants and extract more money from them.

But in actuality, the PCI standards come from a lot of recognized sources including the US National Institute of Standards and Technology (NIST) security standards and guidance, US Department of Defense (DoD) security standards and guidance, as well as “lessons learned” from the card brands’ cardholder data breach forensic examinations and working with information security professionals sharing their knowledge of what are the minimum, basic “best practices” required to secure data.

But the key words here are ‘minimum’ and ‘basic’.

Because guess what?  If you want true security (remember that thing you supposedly wanted when we started), then you have to go beyond the PCI DSS requirements.  Hear that people?  If you want true security, your organization must go BEYOND the PCI DSS requirements.  Organizations are complaining about doing the basics.  Imagine what their complaints would be like if they had to do true security?  They would be throwing a tantrum that would be easily heard around the world.

Want actual proof that organizations are not doing the basics?

Read the Verizon Data Breach Investigation Report (DBIR) or any of the dozens of data breach reports issued annually by forensic analysis firms.  They all read the same; year after year after nauseating year.  Organizations cannot consistently execute even the basic security requirements specified in any security standard.  Even more disheartening is the fact that it is the same vulnerabilities and mistakes that are the root cause of the vast majority of breaches.

QSAs still get complaints from organizations about the PCI DSS being too difficult and costly to implement and maintain.  Yet these same organizations have the gall to say that PCI is NOT about security.

So, before you go and tell your QSA that PCI is all about compliance, think long and hard about that remark and why you are saying it.  Odds are you are saying it to look good, make a good impression with your QSA, show them that you are a true security professional and that your organization wants to be secure.

Think again.  The truth will eventually come out.  One way or another.




