Archive for the 'Requirement 7 – Test payment applications to address vulnerabilities' Category

May 20, 2020

DevOps And PCI – Part 2

In the first post on this topic we discussed DevOps terminology and how segregation of duties can get complicated in a DevOps environment.  In this post we will continue the discussion and cover the issues you can encounter with change control, documentation and PCI scope.

Change Control

These days it is not unusual to hear DevOps people boast of hundreds or even thousands of implementations or deployments per day.  That is, until someone like a PCI assessor starts inquiring about what, if anything, is done to formally approve all those deployments.  The conversation with developers typically begins to deteriorate as you discuss requirement 6.4.5.2, which states:

“Documented change approval by authorized parties.”

The normal response is that the approval is provided in Jira, ServiceNow or whatever change management tool is being used.  That leads to a discussion of the guidance for requirement 6.4.5.2 which states:

“Approval by authorized parties indicates that the change is a legitimate and approved change sanctioned by the organization.”

Given the rapidity and volume of changes, the next question is: how can an authorized party assess whether a change is legitimate and sanctioned if they never actually see and review the change that is deployed?

This leads to a discussion of how Jira, Jenkins, Puppet or whatever CI/CD toolsets are in use actually work, the automation involved in the change process and the “controls” embedded in the workflows of those tools.  The bottom line is usually that the only potential human intervention in the process occurs if the code needs a manual code review.

Requirement 6.4.5.2 is about ensuring that the change process involves human intervention: management approves what is being put into production, segregation of duties is maintained, and no fraud or other illegal activity has been introduced into the process.  The reason is that we are talking about code that processes, stores or transmits sensitive authentication data (SAD) or cardholder data (CHD).  The potential for implementing code that skims that information or performs other nefarious actions is too great to simply trust a fully automated process with no human intervention.  The risk is driven home by the 2013 Target breach, where the CI/CD process was compromised to repeatedly push malware to thousands of point of sale devices.
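One way to satisfy both the pipeline and the assessor is a deployment gate that refuses to promote a build unless a documented approval already exists in the ticketing system.  Below is a minimal sketch of such a gate against Jira’s REST API; the instance URL, workflow status name and credentials are assumptions that would need to match your own environment.

```python
# Minimal sketch of a pre-deployment approval gate against Jira.
# Assumptions: Jira REST API v2 is reachable, the change workflow includes an
# "Approved for Production" status, and the pipeline has API credentials.
import sys
import requests

JIRA_BASE = "https://jira.example.com"          # hypothetical instance
APPROVED_STATUS = "Approved for Production"     # hypothetical workflow status

def change_is_approved(ticket_key: str, user: str, token: str) -> bool:
    """Return True only if the change ticket sits in the approved status."""
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/issue/{ticket_key}",
        auth=(user, token),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["fields"]["status"]["name"] == APPROVED_STATUS

if __name__ == "__main__":
    ticket = sys.argv[1]  # e.g. "CHG-1234", passed in by the deployment job
    if not change_is_approved(ticket, "deploy-bot", "api-token"):
        print(f"{ticket} has no documented approval - aborting deployment")
        sys.exit(1)
    print(f"{ticket} is approved - deployment may proceed")
```

Run as the first stage of the deployment job, a check like this turns “approval happens in Jira” from a claim into an enforceable control, which is exactly the evidence requirement 6.4.5.2 is asking for.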

While I am only talking about PCI in-scope code in this case, fair warning: HIPAA, SOX, GDPR and other regulations are going to require a similar control.  Sensitive information is worth too much today, and there is just too much risk that people will take any opportunity to siphon off sensitive data any way they can if appropriate controls are not in place.  If your process is totally automated and cannot detect such fraudulent activity, it becomes far too easy to slip who knows what into the code.  The last thing any organization wants is to be breached and then try to defend itself when it had poor or no internal controls to prevent the breach.

The bottom line here is that in our haste to push out software we have compromised the controls in that process.  Those controls need to be put back into place to minimize the risk presented by pushing malicious software into applications without a thorough vetting by management.

Documentation

Another area where compliance falters with DevOps is documentation.  Confluence, SharePoint Wiki or a similar tool will be used for documentation and that is where most assessors/auditors will be pointed for their requests for formal documentation.

The first problem arises when you are an outside assessor/auditor, because you do not have access to the internal tool.  That can be remedied several ways, but it is always a hurdle because insiders are so used to everyone they work with having access.

Once the assessor/auditor has access, the next problem for all assessors/auditors is finding what they need for their assessment/audit.  Regardless of whether the assessor/auditor gets PDFs or has online access, the most common reason for this issue is terminology.  A lot of times what an assessor/auditor is trying to find will be referred to by the organization in terms that are not consistent with industry or technology accepted terminology.  While all of these documentation tools have search capabilities, searching the document trove for what an assessor/auditor needs for evidence can be highly problematic.  Never mind the fact that clients get frustrated as well because the evidence exists, but the assessor/auditor cannot find it.

Related to these documentation systems is the fact that it can be difficult, if not impossible, for the assessor/auditor to get hardcopy or even usable PDFs of the documentation.  Let us face it, a screen shot may be readable, but it can cut off content at the bottom of the screen, and that content is then missed altogether.  As a result, obtaining usable and legible evidence for the assessor’s/auditor’s work papers is not readily possible, let alone having it searchable.  The fix is to use a browser extension or add-on that will create a PDF or image of an entire page.  But that too can run into issues if the organization has locked down its browsers and does not allow such installations.
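Where the browser cannot be modified, another option is to generate the page captures outside the browser entirely.  The following is an illustrative sketch using Playwright’s Python bindings (full-page PDF export requires its Chromium engine); the wiki URL is hypothetical, and you would still need to handle authentication to the documentation tool.

```python
# Illustrative sketch: capture a full wiki page as a PDF for work papers.
# Assumes Playwright is installed (pip install playwright; playwright install chromium)
# and that the page is reachable with whatever authentication the site requires.
from playwright.sync_api import sync_playwright

def save_page_as_pdf(url: str, output_path: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()   # PDF export is a Chromium-only feature
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        page.pdf(path=output_path, format="A4", print_background=True)
        browser.close()

if __name__ == "__main__":
    # Hypothetical Confluence page, used purely for illustration.
    save_page_as_pdf("https://wiki.example.com/display/SEC/Change+Control",
                     "change-control.pdf")
```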

Regardless of Agile/Scrum or Waterfall, the next problem with documentation is that the documentation is limited or simply does not exist.  I have encountered more and more organizations that again point to The Agile Manifesto, Scrum and the like and state that none of these approaches specify that documentation is required.  It seems that the age-old adage of “if it was hard to develop, it should be hard to understand” is back in vogue.  Never mind the fact that, with hundreds or thousands of deployments a day, keeping documentation current can be nearly impossible.

Consistent use of a change management ticketing system such as Jira or ServiceNow can also be an issue.  It seems that some organizations have exceptions that do not require every change to their environment to be entered into their change management solution.  Worse, the criteria used to determine what is and is not entered are not consistently applied because they were never officially documented nor formally approved.  As a result, there is no way to rely upon the information contained in the change management system to determine that change management is performed as required.
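One way to test whether the change management system can be relied upon is to reconcile it against the actual deployment record.  A minimal sketch, assuming you can export deployments and change tickets to CSV (the file names and column layouts here are hypothetical):

```python
# Reconciliation sketch: every production deployment should map to an
# approved change ticket.  File names and column layouts are assumptions.
import csv

def load_approved_tickets(path: str) -> set[str]:
    with open(path, newline="") as f:
        return {row["ticket_id"] for row in csv.DictReader(f)
                if row["status"].lower() == "approved"}

def unapproved_deployments(deploy_log: str, tickets_csv: str) -> list[dict]:
    approved = load_approved_tickets(tickets_csv)
    with open(deploy_log, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row.get("ticket_id", "") not in approved]

if __name__ == "__main__":
    exceptions = unapproved_deployments("deployments.csv", "change_tickets.csv")
    print(f"{len(exceptions)} deployment(s) without an approved change ticket")
    for row in exceptions:
        print(row["timestamp"], row["application"], row.get("ticket_id", "<none>"))
```

A non-trivial exception count is usually the clearest evidence that the documented change control process and reality have drifted apart.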

As a result, I am never surprised to see organizations scrambling to develop even basic business and IT policies, standards and procedures, in addition to network diagrams, data flow diagrams, application documentation, database schemas, operations documentation and a whole host of other missing or incomplete documentation.

PCI Scope Implications

Lastly, there is the scoping issue related to the DevOps infrastructure.  Not all of it is usually in scope, but that all depends on how it has been implemented.

At the very least, the Jenkins, Puppet or Ansible portion of the infrastructure is going to be in scope for PCI.  The reason is that those components feed application updates into the cardholder data environment (CDE).  So those are considered “Connected To” systems and must be properly configured and secured to be PCI compliant.

Because these CI/CD solutions are “Connected To” systems, who has access to Jenkins, Puppet, et al. becomes problematic.  As I discussed earlier, poor segregation of roles in Active Directory can mean that developers have access to these systems and therefore come into scope for PCI compliance as well.  As a result, the separation of development from production required by requirement 6.4.1 effectively does not exist.

Obviously, this segregation of development and production problem only gets worse if you drag even more of the development infrastructure into scope.  Therefore, you want to ensure that only the Jenkins, Puppet, Ansible portion of CI/CD is in scope.

This will mean moving Jenkins, Puppet, Ansible, etc. into your “Connected To” or “Shared Services” network segment.  That can create some issues with the rest of the development environment because of the firewall rules involved and the need to reach the segment through a jump server.  So moving the solution into the new network segment may not be as simple as it appears.

Development Metadata

Before we go, there is one more topic that needs to be discussed and that is the metadata in all these development solutions.

We have touched on the controls surrounding the development toolsets, but we have not discussed securing these toolsets and the risks they present.  This may seem a bit odd; after all, since when have we worried about the security of Visual Studio or other integrated development environments (IDEs)?  However, with the implementation of CI/CD solutions, all these tools become interlinked and integrated.  Essentially, they make up an automated assembly line for building applications.

But even more importantly, for these tools to work together seamlessly, they need to share metadata about what they are doing.  This metadata might seem benign, but it is particularly important because it controls how the applications are built, tested and deployed.  Essentially, the metadata is the “secret sauce” that makes your application work as an application within your organization.

We have already discussed the security controls that will be required around the deployment toolset.  But the rest of the development toolset is also going to require security and controls to ensure that your software factory’s assembly line does not itself become a huge attack target, with attacks that could range from maliciously modifying your applications to stopping the assembly line altogether.
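A modest but concrete control here is to baseline the pipeline definitions and build metadata (Jenkinsfiles, Puppet manifests, deployment descriptors and the like) and alert when they change outside an approved change.  The sketch below assumes the definitions live in a directory you can read and that a baseline file has been recorded previously; the paths and file patterns are hypothetical.

```python
# Sketch: detect unauthorized changes to pipeline definitions by comparing
# SHA-256 hashes against a previously recorded baseline.  Paths/patterns are
# assumptions for illustration.
import hashlib
import json
from pathlib import Path

def hash_tree(root: str, patterns=("Jenkinsfile", "*.groovy", "*.pp", "*.yml")) -> dict:
    digests = {}
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def compare_to_baseline(root: str, baseline_file: str = "pipeline_baseline.json") -> None:
    current = hash_tree(root)
    baseline = json.loads(Path(baseline_file).read_text())
    for path, digest in current.items():
        if path not in baseline:
            print("NEW file (not in baseline):", path)
        elif baseline[path] != digest:
            print("MODIFIED since baseline:", path)
    for path in set(baseline) - set(current):
        print("REMOVED since baseline:", path)

if __name__ == "__main__":
    compare_to_baseline("/opt/cicd/definitions")
```

Any hit should tie back to an approved change ticket; if it does not, you are looking at exactly the kind of assembly line tampering described above.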

Before you think that this is unrealistic, I would again point you to the infamous 2013 Target breach.  I wrote about the breach at the time and walked readers through how, based on what was then known, the breach was likely carried out.  That attack succeeded by compromising the CI/CD process to deploy malware that skimmed cards in the point of sale system.  So the idea that a development environment is a viable attack point is hardly out of the realm of possibility.  And it gets even worse when you add contract workers into the development process.

So, what should an organization do to address these risks?  I would recommend securing the entire application development environment to PCI configuration standards so that security monitoring of the entire environment can be performed.  That does not mean the whole environment needs to reside in your “Connected To” or “Shared Services” DMZ with the CI/CD solution.  Rather, I would create another DMZ to contain the rest of the toolset that feeds the CI/CD solution.  Those servers should be security hardened and monitored as though they were in scope for PCI compliance even though they are not.

There you have it.  The basics of how Agile and PCI can coexist.

May 17, 2020

DevOps And PCI – Part 1

DevOps is all the rage in organizations that develop applications.  The move to become “Agile” through the implementation of methodologies such as Scrum to replace the traditional waterfall SDLC is ongoing in most organizations.  But these changes can create compliance issues with the PCI standards regarding software development.

Understanding The Terminology

First and foremost, we need to address the terminology surrounding DevOps.

But before we talk about those specific terms, we need to address the elephant in the room which is “Agile”.  The Agile approach to development traces its history back to early 2001 when a group of developers met at a Utah ski resort.  The result of that meeting was ‘The Agile Manifesto’.  However, the roots for Agile were sown even earlier as application development became unable to keep pace with business changes starting in the late 1980s.

The important thing to remember about Agile is that it is not a methodology.  It is merely a set of values (4) and principles (12) related to the development of software.  The Agile Manifesto never describes a roadmap or steps to follow as to how those values and principles should be used.  So, to refer to Agile as a methodology is a misnomer but you will constantly encounter it being referred to as though it were a methodology.

Interestingly enough, the methodologies used with the Agile approach were actually developed before Agile.  Scrum was one of many methodologies, along with Kanban, Crystal Clear, Extreme Programming, Feature Driven Development and Dynamic Systems Development Method, that sprang up in the 1990s to deliver software solutions in a timelier manner, and it seems to have won out as the methodology of choice.  But while Scrum is the most followed methodology, in practice it is often combined with elements of these other methodologies, Extreme Programming (XP) for example.

Scrum involves three types of roles.

  • Product Owner: The Product Owner needs to be a person with vision, authority, and availability because they are responsible for continuously communicating the vision and priorities to the development team.
  • Scrum Master: The Scrum Master is not a project manager.  The Scrum Master’s primary responsibility is to remove any impediments that are obstructing the team from achieving its sprint goals.  The Scrum Master also is the primary contact with the Product Owner.
  • Team: The Scrum team is responsible for completing the work.  For application development, a Scrum team can contain anywhere from three to nine members.  For software projects, a typical team includes a mix of software engineers, architects, information security personnel, programmers, analysts, QA experts, testers, and UI designers.  The team is responsible for determining how it will accomplish the work to be completed.

The final term from Scrum that needs to be defined is Sprint.  A Sprint is a time-boxed effort of one month or less that results in a releasable increment of a product, in this case an application or application enhancements.  When a Sprint’s horizon is too long, the definition of what is being built may change, complexity may increase and risk may grow.  Sprints enable predictability by ensuring inspection and adaptation of progress toward a Sprint Goal at least every calendar month.  Another benefit is that Sprints limit risk to one calendar month of cost.

Once defined, some of the key characteristics of Sprints are:

  • No changes are made that would endanger the Sprint Goal;
  • Quality goals do not decrease; and,
  • Scope may be clarified and re-negotiated between the “Product Owner” and “Team” as more is learned.

With those terms behind us, let us now turn to DevOps itself.

DevOps is a merging of development and operations staff to work together to develop and implement solutions that will essentially run 24x7x365 with (hopefully) minimal operational interaction.  DevSecOps merely formally adds in the collaboration of information security into that mix even though information security should be included in DevOps as well.

The final topic in our discussion of terminology is the tools used in DevOps.  While there are a number of vendors in this space, the “Big Dog” at the moment is Atlassian with its tools Confluence and Jira.  Another “Big Dog” is Microsoft’s GitHub, and the other “Big Dog” in the DevOps tool world is Jenkins, an open source project commercially backed by CloudBees.

  • Confluence is used as a documentation repository for such items as policies, standards and procedures as well as business, application, network and other important documentation.
  • Jira is used as a project and change management ticketing system.
  • GitHub is used to manage the versions of applications.
  • Jenkins is used for automating the build, testing and deployment of applications into production.

All of these tools have competitors from vendors such as ServiceNow, Puppet, Ansible, Chef, Google and other commercial and open source development and operations tool vendors.  Regardless of vendor, all solutions seem to have the same three basic components: a documentation repository, project/change management and deployment automation.  It is also not unusual to find multiple tools in place, particularly with Jenkins, Ansible, Puppet and Chef.

Segregation Of Duties

The first and most contentious issue that comes up with DevOps is segregation of duties.  This is typically one of the biggest discussions/arguments an assessor/auditor will get into regarding DevOps: Agile fans argue that segregation of duties is inconsistent with Scrum, Agile and DevOps.  Their primary argument is that nowhere in any of the documentation on these topics does the term ‘segregation of duties’ appear, nor any requirement to ensure it.  They are correct in that regard.

Unfortunately, corporate life is not driven by Scrum, Agile or DevOps in a vacuum.  Corporations are still required to comply with laws, contracts and regulations imposed on them by government entities, business partners, financial institutions and other parties, regardless of what is in their methodologies and approaches.  So, while the argument can be made that the methods and approaches say nothing on the subject, there are other documents, contracts and requirements that do state it is required.

Whether we are discussing PCI, NIST, SOC, COBIT or any other recognized audit or compliance program, segregation of duties between roles is and always has been required.  It is one of the key principles for ensuring that people cannot corrupt a process because they have too much control over it.  Time and again, one of, if not the, primary root cause of such illicit activity is the failure to segregate duties and roles, allowing one person too much control over a process.  The underlying concept is that the more individuals involved in a process, the less likely it is that the process can or will be abused.

In DevOps, the issue of segregation of duties gets complicated because it extends into the tools used in the process.  The concept of continuous integration (CI)/continuous deployment (CD) relies heavily on tools such as Jira and Jenkins to enable such an approach.  This means that the assessor/auditor needs to look into who has access to these tools and what rights they have to influence the workflows that exist in them.

This gets even more complicated because it requires analysis of user and access control information from sources such as Active Directory, RADIUS and the tools themselves.  In my experience, it is not unusual to peel the onion on these access controls and find that segregation of duties does not actually exist as believed, because all roles are granted to everyone in DevOps and the organization is relying on individuals’ honesty to ensure compliance.
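Peeling that onion does not have to be entirely manual.  The sketch below uses the ldap3 library to pull the membership of a development group and a production deployment group from Active Directory and report the overlap; the server, base DN, credentials and group names are assumptions that would need to match your own directory.

```python
# Sketch: find accounts that hold both development and production-deployment
# roles in Active Directory.  Server, base DN and group names are hypothetical.
from ldap3 import Server, Connection, ALL

def group_members(conn: Connection, base_dn: str, group_cn: str) -> set[str]:
    conn.search(base_dn, f"(&(objectClass=group)(cn={group_cn}))", attributes=["member"])
    members = set()
    for entry in conn.entries:
        members.update(str(dn) for dn in entry.member.values)
    return members

if __name__ == "__main__":
    server = Server("ldaps://dc01.example.com", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\svc-audit", password="***", auto_bind=True)
    base = "DC=example,DC=com"
    developers = group_members(conn, base, "DevOps-Developers")
    deployers = group_members(conn, base, "Prod-Deployers")
    for account in sorted(developers & deployers):
        print("Segregation of duties exception:", account)
```

The same idea applies to the role assignments inside Jira, Jenkins and the rest of the toolchain, which typically have their own user and permission exports that can be reviewed the same way.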

DevOps can also suffer from a lack of segregation between production, quality assurance (QA), test and development environments.  This is because a lot of organizations that move to DevOps have the mistaken belief that the “Operations” and “Security” components become part of the development group.  The argument will be made that Agile is all about “breaking down silos”.  While that is true, the mistake they make is that Agile and Scrum were never a call to abandon the knowledge and controls that all of the players involved bring to the table as separate disciplines.  The goal is to make the disciplines work better together to achieve a common goal in a Sprint.

Where this manifests itself most often is that developers have unfettered access to the production environment.  In a DevOps environment, it is not unusual to find developers scattered all throughout the environment.  They are developing code, they are operating production, they are diagnosing bugs, they are everywhere with no delineation of roles and responsibilities.  It is essentially a free for all.  Everyone pitches in where they need to be involved.

This organized chaos is supposedly “controlled” by Jira through its ticketing.  Agile advocates will claim that since everything has a ticket (not always a true statement) they maintain segregation through Jira.  They will show tickets to the assessor/auditor to demonstrate that there are different names on the ticket for the developer, the QA person, the people who approved promotion and so on.  While this is true, as I described earlier, the access controls will show that virtually everyone offered as evidence of segregation can fulfill any of those roles whenever they so choose.  By definition, that is not segregation of duties, because there are no actual controls in place to stop one person from running the whole process.

The bottom line in this discussion is that segregation of duties controls in an Agile environment are usually illusory.  As such, it is management’s responsibility to periodically ensure that segregation of duties controls are truly implemented and testable.

In the next post we will discuss documentation, change control and PCI scope in an Agile environment.

September 29, 2016

Microsoft Changes Their Patching Strategy

Back in May 2016, Microsoft issued a blog entry on TechNet giving the world insight into its new patching strategy: a monthly “rollup” patch, or what a lot of people are calling a “mega-patch”.  In August, another blog entry further detailed this strategy and explained that, from October 2016 forward, this is how Microsoft would patch Windows.

But there is even more to it.  For WSUS and SCCM users, security patches will be separated from the Monthly Rollup into their own Security mega-patch.  The idea behind separating the security patches into their own mega-patch is to allow organizations to at least stay current on security.  However, there is a twist on this approach as well.  Organizations such as small businesses that do not use WSUS or SCCM will only get a single mega-patch through Windows or Microsoft Update, containing the Monthly Rollup and Security mega-patches combined.

So what could go wrong you might be asking?

The biggest drawback to this scheme is that, should you have any issue with a mega-patch, you must back out the whole patch, not just the item that is creating the issue.  That means instead of having just one potential issue to mitigate, you could have as many issues to mitigate as the patch contains fixes.  From a PCI compliance perspective, that could mean lots of missing patches in your Windows systems if your systems run into an issue with a mega-patch.  This gets doubly bad for organizations not using WSUS or SCCM because they will be backing out security patches as well as application patches.
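From a compliance standpoint, the practical countermeasure is to keep an independent record of what is actually installed, so that a backed-out rollup shows up as a documented patching gap rather than going unnoticed.  A minimal sketch, assuming the legacy wmic utility is still available on the host (newer Windows builds may require PowerShell’s Get-HotFix instead):

```python
# Sketch: snapshot installed Windows hotfixes so a backed-out rollup is
# visible as a patching gap.  Assumes the legacy "wmic" utility is present.
import subprocess
from datetime import date

def installed_hotfixes() -> list[str]:
    result = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID,InstalledOn"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    snapshot = installed_hotfixes()
    with open(f"hotfix-snapshot-{date.today()}.txt", "w") as f:
        f.write("\n".join(snapshot))
    print(f"Recorded {len(snapshot) - 1} installed hotfix entries")
```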

But it can get even worse.  These mega-patches are cumulative, meaning that every month Microsoft rolls the previous mega-patch into the new month’s mega-patch.  Say you apply the monthly mega-patch, your point of sale (POS) application fails to work with it and you must back it out.  If that issue lingers because of your vendor, you will not be able to patch your POS systems at all until the compatibility issue is resolved, because month after month the mega-patches remain cumulative.

But I foresee small businesses running into the worst issue with this new approach.  Since small organizations likely will not be using WSUS or SCCM, they will not get a separate Security mega-patch; they will only get the single mega-patch that combines the Monthly Rollup and Security patches.  If any issue occurs with that single mega-patch, these businesses will not even get their security patches.  That creates a situation where the organization must figure out how to mitigate its inability to secure its systems, and it could mean months of security exposure until the original compatibility issue is resolved.

But to add insult to injury, I can also see situations where a vendor has issues resolving a compatibility problem with a mega-patch and finally gets it fixed only to encounter a new compatibility issue with the latest mega-patch.  Based on how Microsoft is running these mega-patches, there appears to be no way to go back to a compatible and useable mega-patch.  This could result in organizations being unable to patch at all due to ongoing compatibility issues.

At a minimum, I think Microsoft will need to make the Security mega-patch separate from the Monthly Rollup for all organizations, not just those using WSUS or SCCM.  At least then, all organizations can apply security patches independent of the Monthly Rollup which would be more likely to be the one that would create compatibility issues.

It will be interesting to see how this new patching strategy plays out.  Hopefully it does not create even more risk for users of Windows.  If it does, I would not be surprised if the PCI SSC invokes significant new controls on Windows-based solutions.  That could be the final straw in using Windows for a lot of merchants.  Time will tell.

July 24, 2011

End-To-End Encryption – The Rest Of The Story

Step right up folks.  I have something that will cure all of your problems with credit card processing.  It is called end-to-end encryption.  Yes, folks, it is the be all, to end all in security.  It will cure all that ails you, particularly those nasty data breaches.  Don’t be shy, just step right up and get your own version while supplies last.

Gee, when end-to-end encryption (E2EE) is put that way, it sounds great, almost too good to be true.  And you would be right; it is too good to be true.  But if you listen to the statements of the proponents of E2EE, they make it sound like once E2EE is in place, it is like the Ronco Showtime Oven, “Just set it and forget it.”

Now, do not get me wrong.  E2EE is not a bad thing, but it does have its own set of risks.  And it is those risks that do not get discussed that concern me.  The reason for my concern is that if you discuss E2EE with any merchant, most see it as this panacea, something that will get them out of the PCI compliance game altogether.  However, nothing could be further from the truth.  If anything, E2EE may make PCI compliance even more daunting than it is today.

The first thing everyone seems to forget is that E2EE only removes from scope those systems and networks that sit between the endpoints.  That is because the data stream between the endpoints is encrypted and, therefore, out of scope for PCI compliance.  However, for a merchant, that means the device that accepts the credit card is still in scope for PCI compliance.  Bring this fact up to most merchants and they start complaining like there is no tomorrow.

That device might be as “simple” as a credit card terminal or as complex as an integrated point-of-sale (POS) workstation on a network.  However, since this device is an endpoint, the merchant or the merchant’s QSA needs to ensure that it is properly secured and cannot end up being a breach point.  Depending on the complexity of the device, that assessment might be very straightforward or very time consuming.  The reason the endpoint needs to be assessed is that security is only as good as its weakest link.  In the case of E2EE, the weakest links are the endpoints at which the data is encrypted and decrypted.

The next thing that seems to slip people’s minds is the fact that since the merchant has an endpoint, that endpoint is still a target.  Worse yet, because it is an endpoint, the level of sophistication required to compromise it goes up exponentially, meaning that any successful attack will likely be beyond the average merchant’s capability to readily detect.  The PCI DSS addresses this threat fairly well by requiring network monitoring, daily log reviews, anti-virus, anti-malware, firewalls and the like.  However, I can tell you from personal experience that your average merchant is not going to be equipped to deal with this new threat.

And what is the new threat?  The new threat is tampered-with hardware and software.  If you think this is farfetched, think again.  It has already happened on a limited scale.  The doctoring of hardware is fairly straightforward both to accomplish and to detect.  Detection only takes looking inside the device and noticing something that does not belong.  However, doctored software is another story.  The concept of doctored software has been a concern in the health care industry since computerization was first used in heart pacemakers.  While the health care industry has developed rigorous testing and certification procedures, the rest of the software industry has said there is no need.  That is, until now.  As the world further automates, the need for reliable, safe and secure software only increases because of the reliance people and organizations place on that software.

So what can an organization do to stem this new threat after implementing E2EE?  Here are some thoughts.

  • Purchase your credit card processing equipment only from your acquiring bank or reputable vendor.  This is not a perfect solution to the problem, but doing this should be better than buying a used unit off of eBay or from Joe’s Guaranteed Card Equipment.  Yes, you may save a few bucks, but is that worth having every one of your customers that uses a credit card being compromised?  Probably not.
  • Ask your supplier of terminals or POS workstations about what they do to test these systems to ensure that they operate as expected and are not routing cardholder data to Timbuktu as well as your bank.  Ask them to provide those procedures in writing and review them to ensure they appear adequate.
  • Use serialized tamperproof tape on the seams and doors of your terminals and POS workstations.  Require that at every manager shift change the incoming manager log their review of the devices, inventory them and note whether any appear to have been tampered with.  If a device does appear to have been tampered with, it should be taken out of service until a new, secure device can replace it.
  • If using self-checkout systems, make sure to have those systems under both video and employee monitoring.
  • Upgrade your card processing devices to the latest devices.  Over the last few years, some of these devices have seen significant changes in their design that improves their tamper resistance.  This is particularly true of fuel pumps and certain types of terminals.
  • Review video monitoring whenever a manager notes that a device may have been tampered with, to determine whether you can identify possible suspects.
  • Patch your devices as soon as possible to minimize their susceptibility to attack or compromise.
  • If the vendor of the equipment will perform updates, make sure that you or someone in your organization schedules the updates.  If anyone shows up at a location to “update” your equipment and it was not scheduled by your organization, contact law enforcement.
  • If updates will be done by the vendor remotely, make sure that someone from your organization initiates the remote access and they observe the remote update process.  At the end of the update process, the person should terminate the remote session of the vendor.

Even implementing these processes will not remove all of the risk, particularly the risk of having modified software introduced into your environment.  However, these processes will show a court that you conducted due diligence and tried to keep your equipment secure.
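One partial mitigation for that remaining software risk is routine integrity checking of the payment application’s files against hashes the vendor publishes for the release you run.  The sketch below is illustrative only; the manifest file name and “path then SHA-256” format are assumptions, and in practice a commercial file integrity monitoring tool does the same job.

```python
# Sketch: verify installed POS application files against a vendor-supplied
# hash manifest.  Manifest format assumed: "relative/path  sha256hex" per line.
import hashlib
from pathlib import Path

def verify_install(install_dir: str, manifest_path: str) -> list[str]:
    findings = []
    for line in Path(manifest_path).read_text().splitlines():
        if not line.strip():
            continue
        rel_path, expected = line.split()
        target = Path(install_dir) / rel_path
        if not target.exists():
            findings.append(f"MISSING: {rel_path}")
            continue
        actual = hashlib.sha256(target.read_bytes()).hexdigest()
        if actual != expected.lower():
            findings.append(f"ALTERED: {rel_path}")
    return findings

if __name__ == "__main__":
    for finding in verify_install(r"C:\POS\app", "vendor_manifest.txt"):
        print(finding)
```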

March 11, 2011

PCI and SOX, HIPAA, GLBA, et al.

Just got a call regarding PCI and Sarbanes-Oxley (SOX) compliance.  Whether it is SOX, the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA) or some other regulation, organizations that also have to comply with the PCI standards want to maximize the work effort and avoid redundancy.  After all, all of these assessments take a lot of effort to gather the documentation and other supporting materials, as well as the interviews and the like involved in the various assessments.  It is not that this cannot be done, but it can get complicated to ensure efforts are coordinated properly and that assessment work done by one party is acceptable to the QSA and vice versa.  It has been my experience that, properly planned, a lot of these other assessment programs can be aligned to minimize the effort required to go through a PCI assessment.  However, be advised that there may still be a significant amount of effort on your QSA’s part as well as your own organization’s because of the testing required by the PCI DSS.

For example, since the release of SOX there have been a lot of changes, particularly to section 404, where most public companies think they will gain leverage.  What has happened is that, with the changes to 404, the number of applications in scope for SOX has been greatly reduced, as have the testing requirements.  Therefore, what is in scope for SOX is usually a very small subset of what is in scope for PCI, or may not even be relevant to an organization’s PCI compliance.  I know this will seem hard to believe, but we have publicly held clients where their point of sale (POS) is not in scope for SOX.  As a result, leverage between SOX and PCI efforts is not always possible.

But that is not to say there are not areas where leverage can be obtained.  One place where we typically get leverage is with the assessment of logical access controls, PCI DSS requirements 7 and 8.  Since most companies have a central directory for managing users, such as Microsoft Active Directory, logical access controls get fairly well covered under SOX, HIPAA or GLBA.  As such, an organization’s internal audit function and/or external auditors have covered how users are added/changed/disabled/deleted, password aging, password strength, etc.  There may still be areas that were not covered, such as password resets, but there is no reason to go back over what has already been assessed if the parameters of that assessment meet or exceed the PCI DSS requirements.  In most cases, we find that the assessment by the other party goes above and beyond what the PCI DSS requires.
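Where a gap does remain, a quick spot check of the directory’s own password parameters is often enough to close it.  Below is a minimal sketch with the ldap3 library, comparing a few Active Directory domain settings against the long-standing PCI DSS password parameters (at least seven characters and a four-password history); the server, credentials and base DN are placeholders.

```python
# Sketch: spot-check Active Directory domain password parameters against
# the PCI DSS password requirements.  Server and base DN are hypothetical.
from ldap3 import Server, Connection, ALL

PCI_MIN_LENGTH = 7   # PCI DSS minimum password length
PCI_HISTORY = 4      # PCI DSS password history

if __name__ == "__main__":
    server = Server("ldaps://dc01.example.com", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\svc-audit", password="***", auto_bind=True)
    conn.search("DC=example,DC=com", "(objectClass=domainDNS)",
                attributes=["minPwdLength", "pwdHistoryLength", "maxPwdAge"])
    policy = conn.entries[0]
    print("minPwdLength    :", policy.minPwdLength.value,
          "(PCI requires >=", PCI_MIN_LENGTH, ")")
    print("pwdHistoryLength:", policy.pwdHistoryLength.value,
          "(PCI requires >=", PCI_HISTORY, ")")
    # maxPwdAge is stored as an interval; convert it before comparing to the
    # 90-day expiration requirement.
    print("maxPwdAge (raw) :", policy.maxPwdAge.value)
```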

Another area where leverage can be gained is with application development standards, PCI DSS requirement 6.3, and change control, PCI DSS requirement 6.4.  Both of these areas get quite a bit of scrutiny under SOX, HIPAA and Federal Financial Institutions Examination Council (FFIEC) regulations, as well as most organizations’ internal audit work programs.  Granted, these various assessment efforts will not cover every application or device in scope for PCI.  However, most organizations have common policies, standards and procedures for application development and change control, and those will have been assessed and tested under other assessments.  These assessments should be able to be leveraged, limiting a QSA’s testing to in-scope PCI items that were not tested as part of other assessment efforts.  Again, in most cases, we find that the assessments of these areas go above and beyond what the PCI DSS requires.

Caveats

Now before everyone runs joyously off to limit their QSA’s review activities, there are some caveats that need to be considered in order for this to be effective.

First and foremost of these caveats is that it is up to the QSA to be willing to accept another party’s work in assessing the requirement in question.  The PCI SSC has made it clear that a QSA is under no obligation to accept any third party’s assessment efforts, even another QSA’s.  As a result, just because you have these other assessments does not mean that you will gain anything in your PCI assessment.  Which leads me to recommend that, when you are evaluating QSAs for your PCI compliance work, you ask them whether they will accept other third parties’ assessment work.  You cannot leverage the results of these other assessments if the QSA will not cooperate.

Another caveat is that the assessment efforts from any third party must have occurred within the PCI assessment period.  No one should expect a QSA to rely on a work product that falls outside that period.  I have run into situations where, because of timing, the assessment of logical access occurred a few months prior to the PCI assessment period and would not be re-assessed by the organization until a month after the PCI assessment period.  In those cases, we had to conduct our own testing of logical access controls and could not leverage the other auditor’s work.  I have also run into instances where the assessment of a particular control occurred right at the start of the PCI assessment period.  Because almost a year had passed, we opted to conduct some limited testing of the control to ensure it was still functioning as designed, but did not conduct our full testing, given the results of the prior testing.

A third caveat is that the QSA needs to be careful in determining what was covered by the third party in their assessment.  Not that we have had organizations trying to put one over on us, but as a QSA you really need to read the third party’s report and, if necessary, ask questions about the scope of the assessment.  We have found in a number of instances that what was represented by our client and the work performed by the third party were not the same.  The reason this occurs is that what we (the QSA) asked for, what our client contact then asked for, and what the person who holds the report supplied do not always jibe with what we were expecting or requested.  As a result, it is not unusual to go through a couple of iterations to get what we need.  And even then we may find that what we can get will not meet our needs.

The final caveat is that the third party must be qualified to conduct the testing you are going to rely upon.  This is similar to the qualification expectations the PCI SSC sets for internal vulnerability scanning and penetration testing.  Per my earlier example, if an organization’s external auditor has tested how users are established and assigned access to the network and applications, there is little to be gained from a QSA going through the same exhaustive testing.  As an employee of a public accounting firm, I can tell you that SOX testing is not a small effort with regard to logical access.  So unless the logical access testing totally missed the cardholder data environment, I would typically have no trouble relying on the external auditor’s assessment.

So there are ways to leverage other compliance efforts to reduce the impact of PCI compliance work.  In order to leverage all of these efforts, quite a bit of planning and coordination is required.  And just because you can do this does not automatically mean that the leverage gained outweighs the effort required to make it happen.  So keep in mind that such efforts may be for naught.



