Archive for the 'Requirement 11 – Regularly test security systems and processes' Category

18 Sep 22

I Just Could Not Keep My Mouth Shut

I will give Hoyt Ketterson credit for inspiring my question to the Council in the Assessor Session at the end of the 2022 North American PCI Community Meeting in Toronto.

There I was, just minding my own business with a list of questions I wanted to ask the Council. Hoyt asked about compensating control worksheets (CCW) and used the example of a merchant that had missed one Approved Scanning Vendor (ASV) scan, which forced him to create a CCW for that issue.  That triggered a flashback to a client I had dealt with a couple of months earlier who had 12 monthly external scanning reports from an ASV but only two “ASV Certified Scans”. Because an information security employee had left, no one remembered to click the button in the scanning portal to mark the other two quarterly scans as “ASV Certified”. So much for the other questions on my list.

Before discussing my question and the rationale behind it, a quick bit of history as to how we got the ASV program in the first place.

Back before we had the PCI SSC, we had the various Card Brand compliance programs. Mastercard had its Site Data Protection (SDP) program, which focused on eCommerce site security.  [That program still exists by the way, but now it supplements the PCI DSS.]  As part of that program, Mastercard operated a security lab in the EU that prospective ASV organizations were required to scan and then produce a report of all of the vulnerabilities they found. Representatives of Mastercard would review those scanning results and would certify the organization as an ASV if it passed the test.

Back then, consulting firms that were Qualified Security Assessor Companies (QSAC) were typically ASVs as well [you do NOT want a client going elsewhere for services you can also provide] and were using tools like Internet Security Scanner and Nessus to conduct scans.  Getting vulnerability scans run was a very manual and, at times, time-consuming process.  It was an art form to properly configure these tools to get accurate results.  That is why Mastercard set up a testing lab: to ensure that ASVs were providing accurate results.  ASVs were required to test against the lab and recertify annually to ensure that their scans remained accurate. [Today, that still occurs, but the vendor instructs all ASVs how to properly configure their specific scanner to pass the PCI SSC scanning test.]

With that background, here is my question.

“What is the point of the ASV program today?”, I asked the round table participants.

Fast forward to today, and most ASVs are just rebranding and reselling ASV portals from Qualys, Tenable and Rapid7.  Given where we are now, here is the rationale behind my question.

  • The ASV of today is nothing like the ASV of yesteryear when things started with Mastercard’s SDP program. The process is no longer manual; it is totally automated.
  • A person no longer manually configures, initiates or monitors the scanning process. In 99% of cases, the only time an ASV is involved is when the merchant or service provider needs to dispute false positive results and have them removed from the report.
  • An ASV scan today uses the same scanner and settings as any other external vulnerability scan. The only thing that makes a scan an “ASV Certified Scan” is that the end user clicks a button or check box to mark it as such.
  • The assessed entity is the one that initiates the scans, not the ASV. That really makes you wonder about the requirement for ensuring that the person running the scan is qualified. What qualifications does anyone need to click a button that starts a preset, preconfigured scan?
  • Nowadays, vulnerability scans are scheduled, so no human being initiates a scan. Which makes you wonder why someone has to check a box or click a button to initiate an ASV Certified Scan. Why is that not automated as well?
  • 99% of ASVs use a portal operated by one of the known vulnerability scanner vendors, unlike the good old days when each ASV configured and operated one of many vulnerability scanners. This can lead to some frustration for the ASVs that are not also the vendor of the scanner. I have been personally involved in situations where the vendor made a change to their scanner and, while they passed their ASV test, my organization did not, forcing me to work with the vendor (also an ASV competitor) to tweak the configuration of their scanner so that my organization could also pass.

The bottom line is that the current ASV scanning process is nothing like the processes that began the ASV certification process almost 20 years ago.

The Council has agreed that further discussion on the subject is needed to understand today’s external vulnerability scanning processes and has promised to initiate those discussions. So stay tuned as change may be coming.

09 Jan 22

Penetration Testing – Yes, It Is Still Misunderstood

Remind me again how far down the road we are with the current practices of information security? Has it really been almost three decades?

Yes, it really has been about that long.  Yet it continues to fascinate me how misunderstood those practices are in the information security community, particularly when a significant portion of that community holds one or more certifications on the topic.  In my very humble opinion, we should not still be having such basic conversations, and yet these topics keep coming back up.

One of those heavily misunderstood practices is penetration testing.  I have written numerous posts on the subject, and yet the understanding of penetration testing continues to be a challenge.  This is particularly true when it comes to PCI compliance.  As a result, I decided I would share some important points about PCI compliance and penetration testing as required under requirement 11.3.

A lot of these misunderstandings are clarified in the PCI SSC’s Information Supplement on Penetration Testing.  If you have not read this document, I highly recommend it as it explains a lot about the process and why it is important.

Yet, sadly, it too has some flaws that have created more problems than the document solved.  The biggest is on page 7, where it discusses the scope of a penetration test.  The only thing those of us in the profession can figure is that the group that developed the information supplement assumed that the cardholder data environment (CDE) is implicitly secured by the measures taken to meet the PCI DSS requirements and that, therefore, the actual CDE is not required to be tested.  How any information security professional could think this is a good practice is beyond a lot of us, but the information supplement only says that the CDE MAY BE tested.

But then on page 8, comes this quote.

“When access to the CDE is obtained as a result of the testing, the scope of the penetration test may allow the tester to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

“May allow” further exploration?  Wait!  What?  Since when does an attacker stop looking around?  Are you kidding us?

In my very humble opinion (and the opinion of a LOT of others in the profession), everything that is in scope for PCI compliance needs to be penetration tested.  No exceptions!  But all QSAs and penetration testers that take this approach have received push back (sometimes significant) from clients, and then we usually back off and do exactly as the information supplement states.  I would like to tell you that it ends well, but I have had numerous clients come back later (sometimes years later) and complain (some very loudly) that I should have stuck to my guns and tested everything, holding me responsible for the fact that they got hacked.

The next big misunderstanding is what is considered a “passing” or “clean” penetration test.  Section 4.1.6 on page 16 of the Penetration Testing information supplement discusses what constitutes a successful penetration test.

“Defining the success criteria for the penetration test allows the entity to set limits on the depth of the penetration test. Without agreeing upon the point at which the penetration test is complete, there is a possibility of the tester exceeding the boundaries and expectations of the target entity. This should be documented in the rules of engagement.”

Sorry, but this is not an excuse for the assessed entity to avoid penetration testing by setting the scope of the test too small.  It is up to the client and the penetration tester to agree on the scope based on network and dataflow diagrams.  Going back to my earlier statement, a penetration test should test every device/system that is in scope for PCI compliance.  When I say everything, that does not mean that every “Connected To” system needs to be individually penetration tested.  Domain Controllers and similar devices can be sampled where the penetration tester can confirm that standardized configuration practices are in place and provably consistent.  Nor does it mean that containers or servers spawned to increase capacity in a virtual or cloud environment need to be individually tested, as long as the penetration tester can document that all instances are the same.

Another misunderstanding, when exploitable issues are found, is that only critical, high or severe exploits need to be addressed in order to get a “passing” test.  Sorry to rain on your parade, but penetration testing is not like vulnerability scanning, where critical, high or severe vulnerabilities need to be fixed within 30 days and the others can be addressed within 90 days.  An exploit found by a penetration test MUST BE remediated or mitigated and then retested to get a “passing” test.  All someone needs to do is read the PCI DSS requirements under 11.3 and see that nowhere in those requirements is there ever a reference to critical, high or severe exploits, nor any other remediation criteria.  ALL documented exploits must be addressed, regardless of any documented criticality, and either be remediated or mitigated and then retested to get a “passing” test.

A key point about retesting is that the penetration tester does not have to conduct a complete penetration test as a retest.  The penetration tester only needs to retest those exploits that were found in the original testing exercise.

Another critical point concerns exploits that are mitigated rather than remediated.  When mitigated, that means the organization is relying on a variety of controls to detect and alert on the exploit being used.  When testing an exploit that is being mitigated, the penetration tester needs a clear understanding of those mitigating controls so that, when they review their testing results, they can attest that all of the mitigating controls functioned to identify, alert and then allow personnel to stop an attack.  If those mitigating controls cannot be confirmed, then the mitigation is not considered successful.

Another point of confusion regarding penetration testing is network segmentation testing.  Why segmentation testing is bundled with penetration testing is unknown, but it has always been that way.  A lot of us would prefer it were a separate requirement in section 11, since it does not require a penetration tester to conduct this testing, which is a surprise to most.  The person conducting the segmentation testing only needs to be deemed “qualified” to conduct the testing.

People are also typically surprised that segmentation testing does not require anything more than Nmap installed on a laptop.  The key to a successful segmentation test is that every segment deemed “out of scope” must be tested to ensure that it has no direct communication capability with systems contained in the CDE – inbound to the CDE OR outbound from the CDE.  This testing requires having Nmap test all 65,535 TCP and UDP ports to/from every network segment, which takes time – a lot of time, depending on the number of active IP addresses in the network segment.
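For what it is worth, here is a minimal sketch of how that full-port testing can be driven.  The segment list and output file names are hypothetical, Nmap is assumed to be installed, and the UDP scan generally requires root privileges:

```python
import subprocess

# Hypothetical inventory of out-of-scope segments to test from inside the CDE.
OUT_OF_SCOPE_SEGMENTS = ["10.20.0.0/24", "10.21.0.0/24"]

for segment in OUT_OF_SCOPE_SEGMENTS:
    # -p- covers ports 1 through 65535; -sT (TCP connect) and -sU (UDP)
    # together test both protocols. -oG writes grepable output so the
    # results can be compared against approved rules afterwards.
    outfile = "scan_" + segment.replace("/", "_") + ".txt"
    subprocess.run(
        ["nmap", "-p-", "-sT", "-sU", "-oG", outfile, segment],
        check=True,
    )
```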

For large organizations with tens of thousands of network segments, the idea of testing EVERY segment that is out of scope against the CDE is not realistic.  There are two options to address this situation.  The first option is to use a tool such as AlgoSec, FireMon, Tufin or the like, which most large organizations already have installed.  Using the tool’s database, queries can be run against the CDE and all segments to find any “holes” in the rules.  To use this approach, though, the entire network needs to be in the tool’s database because you are going to test ALL segments.  Sadly, most large organizations do NOT have all of their network segments in the tool, so it is not usable for segmentation testing.

The second option requires an analysis of the firewall and routing rules to determine if there are ways to “sample” network segments that are covered by the same rules in the firewalls and routers.  If the segmentation tester can document that, say, one thousand network segments are all governed by the same set of firewall/router rules, then five randomly selected segments of those thousand can be tested and, if they all produce the same results, those results can be extended to the remaining 995 segments.  But the key to this second option is making sure that the rules really are exactly the same for all one thousand segments, which is sometimes not as easy as it sounds.
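Here is a trivial sketch of the sampling step.  The segment inventory is hypothetical, and the hard part – proving the rules really are identical – still has to be done against the actual firewall and router configurations:

```python
import random

# Hypothetical: 1,000 segments documented as governed by one identical ruleset.
segments = ["10.%d.%d.0/24" % (i // 256, i % 256) for i in range(1000)]

# Pick five segments to actually test; a fixed seed keeps the selection
# reproducible so the QSA can see exactly how the sample was chosen.
random.seed(2022)
sample = random.sample(segments, 5)
print("Segments selected for segmentation testing:", sample)
```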

Hopefully these clarifications will assist you in conducting and evaluating penetration testing.

13 Dec 20

Network Segmentation Testing

NOTE: If you have not read the PCI SSC Information Supplement – Guidance for PCI DSS Scoping and Network Segmentation you must do so before using the procedures documented in this post.

How something so simple became something so complicated (or at least believed to be complicated), I just will never understand.  The only thing I can point to is the fact that network segmentation testing falls within the requirements of penetration testing.  Because of that, I think people therefore believe there is something “special” about how segmentation testing must be performed.  Never mind the fact that there is the even more basic issue of how to approach network segmentation testing.

Here is the network segmentation testing methodology for traditional IP networks.

  • Gather an inventory of all of the network segments.  Label each network segment as Cardholder Data Environment (CDE), Connected To or Out of Scope based on the definitions from the Scoping Information Supplement.
  • Make sure you have Nmap installed on a portable computer.  This needs to be portable because you will likely have to move around your facilities in order to complete all of the testing.  It is also not unusual to use diagnostic systems in the data center for this effort (they may already have Nmap installed) or to create VMs for the testing and then remote into those systems.  The important thing is to have access to every network segment in your environment so that you can conduct this testing.
  • Connect your scanner to every CDE network segment and attempt to reach all of the Out of Scope network segments from the CDE.  You will want to run an Nmap scan that scans all TCP and UDP ports (i.e., 1 through 65535) against all IP addresses in a given out of scope network segment.  This likely sounds extreme, but to prove segmentation you must test all 65,535 TCP/UDP ports against all IP addresses to make sure that no traffic “leaks” to your out of scope networks.  If you do find a port open in one of your out of scope networks, you will have to track down where that leak occurs.  Example: nmap -p- -sT -sU 10.10.0.0/16
  • While in each CDE, test connections out to your Connected To network segments, testing all TCP and UDP ports against all IP addresses in your Connected To network segments.  Since communication between the CDE and Connected To segments is allowed, you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no undocumented, unapproved ports are open (see the comparison sketch after this list).
  • Finally, you will need to test that your CDE can only reach the internet through the ports and IP addresses you have specified.  Obviously, you are not going to test every internet address, as that would take forever.  What I tell my clients to do is use every external IP address they have for business partners or other third parties they are connected to.  Again, you are going to test all TCP and UDP ports against those addresses.  If you get any unexpected results back, you will have to resolve those issues, as there should be no connectivity beyond what you have explicitly specified.
  • Connect to every Connected To network segment and conduct testing into the CDE for all TCP and UDP ports against all IP addresses in the CDE network segment.  Again, since communication is allowed between these network segments you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • While in the Connected To network segments, conduct testing to all Out of Scope network segments.  Since communication is allowed between these network segments you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • Connect to every Out of Scope network segment and run an Nmap scan into each CDE network segment for every TCP and UDP port against all IP addresses in the CDE.  This should return no results if the network is truly out of scope.  If it does return results, you will have to figure out why and block that traffic into the CDE.
  • Save all of your results and comparisons so that you have a record of your testing.  If you found issues, make sure you document in detail what was done to resolve those issues and conduct new scans to prove that those issues were remediated.
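To make the comparison steps above concrete, here is a minimal sketch that parses Nmap’s grepable (-oG) output and diffs the open ports against a documented, approved list.  The segment, file name and approved ports are hypothetical placeholders:

```python
# Hypothetical approved rules: (segment, "port/protocol") pairs taken from
# the documented and approved firewall rules for one Connected To segment.
APPROVED = {("10.10.5.0/24", "443/tcp"), ("10.10.5.0/24", "1433/tcp")}

def open_ports(grepable_file):
    """Pull the open ports out of Nmap grepable (-oG) output."""
    found = set()
    with open(grepable_file) as fh:
        for line in fh:
            if "Ports:" not in line:
                continue
            for entry in line.split("Ports:")[1].split(","):
                fields = entry.strip().split("/")
                # Each entry looks like: port/state/protocol/owner/service/...
                if len(fields) >= 3 and fields[1] == "open":
                    found.add(fields[0] + "/" + fields[2])
    return found

SEGMENT = "10.10.5.0/24"  # hypothetical segment that was scanned
unexpected = {p for p in open_ports("scan_10.10.5.0_24.txt")
              if (SEGMENT, p) not in APPROVED}
if unexpected:
    print("Undocumented open ports to investigate:", sorted(unexpected))
```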

When you bring in newer solutions such as the Cloud, containers, serverless, microsegmentation and the like, the traditional method of network segmentation testing cannot completely cover the environment.  You can conduct all of the tests documented above from outside of the environment looking into your cloud environment, but you cannot look from inside the cloud out.  That must be done manually by examining the cloud configuration information and ensuring that networks are properly segmented.

If you are like me, you are looking for a better way to deal with the Cloud as well as large networks.  There are network tools from vendors such as FireMon, AlgoSec, Skybox and Tufin that have capabilities to take the Cloud configuration information as well as firewall, router, switch and other network infrastructure configurations and provide analytical capabilities to simulate the testing above from both internal and external perspectives.  The downside of these tools of course is that they are not inexpensive and can require significant horsepower to operate.  However, they can be worth their weight in gold for their ability to analyze and understand your networks, find misconfigurations and find issues where attacks can potentially succeed.

There is no reason to pay your penetration tester to conduct network segmentation testing unless you are uncertain as to how to analyze the information from the Cloud.

25 Feb 19

Network Segmentation Testing

As part of penetration testing, merchants and service providers are required to test that their network segmentation is properly implemented and functioning.  Sounds like a simple enough task, but you would be amazed at the bizarre and complicated discussions that QSAs encounter when segmentation testing comes up.

As a reminder, requirement 11.3.4 states:

“If segmentation is used to isolate the CDE from other networks, perform penetration tests at least annually and after any changes to segmentation controls/methods to verify that the segmentation methods are operational and effective, and isolate all out-of-scope systems from systems in the CDE.”

For service providers, requirement 11.3.4.1 adds the requirement to test at least every six months and after any changes to segmentation controls/methods, not just after “significant changes”.

Regardless of whether you are a merchant or a service provider, how segmentation testing is performed is the same.

So why all of the issues?

First, the PCI DSS does us no favors with the “guidance” for requirement 11.3.4 which states:

“Penetration testing is an important tool to confirm that any segmentation in place to isolate the CDE from other networks is effective. The penetration testing should focus on the segmentation controls, both from outside the entity’s network and from inside the network but outside of the CDE, to confirm that they are not able to get through the segmentation controls to access the CDE. For example, network testing and/or scanning for open ports, to verify no connectivity between in-scope and out-of-scope networks.”

The first point of confusion typically relates to the phrase “penetration testing”, as though segmentation testing somehow requires the use of a penetration testing tool such as Metasploit or similar.  Nothing could be further from the truth.  But the terminology of “penetration testing” clouds the task.

The second point that seems to confuse is the last sentence, which starts out with “For example …”.  People seem to miss the start of the sentence and take it that all they have to do is make sure that out of scope devices cannot get to the CDE, and that is it.  While network segmentation testing is simple, it is not quite that simple.

What Is Segmentation Testing?

After going through the debunking of all of the mythology and rumors surrounding network segmentation testing, this is the first question asked.  I always take people back to what the purpose of network segmentation testing is – to prove network segmentation is implemented and is functioning as designed to keep the various networks logically separated.

When I say, “various networks”, I am referring to the network segments defined in the information supplement “Guidance for PCI DSS Scoping and Network Segmentation” issued in May 2017.  In that document, the following terminology is used.

  • CDE Systems – any systems/devices that directly process, store or transmit sensitive authentication data (SAD) or cardholder data (CHD) or are directly connected to such systems/devices. These systems/devices are also sometimes referred to as Tier 1 or Category 1.
  • Connected To or Security Impacting Systems – are systems that provide services to the CDE or have connections to systems/devices in the CDE that could adversely affect the security of the systems/devices in the CDE. These systems/devices can also be referred to as “Shared Services”, Tier 2 or Category 2.
  • Out of Scope Systems – are systems that cannot connect to the CDE, also referred to as Tier 3 or Category 3.

For PCI compliance, all CDE Systems (Category 1) and Connected To (Category 2) systems are always in scope.  However, for network segmentation testing, Category 3 systems/devices are also included because the testing must prove that Category 3 cannot get to Category 1 and vice versa.  That is typically where network segmentation testing goes wrong: it only proves that Category 3 cannot get to Category 1 and then stops.  The guidance for requirement 11.3.4 provides some clarity in its second sentence, which states:

“The penetration testing should focus on the segmentation controls, both from outside the entity’s network and from inside the network but outside of the CDE, to confirm that they are not able to get through the segmentation controls to access the CDE.”

The Council has advised that what they want is testing from inside and outside the CDE as well as from other network segments including the internet if applicable.  The idea is to further support the analysis and findings from a QSA’s review of the firewall rules from the requirements in 1.3.x of the PCI DSS.  The reason for this is that with some breaches and the advent of “next generation” firewalls and more sophisticated security technologies, the Council felt that assessed organizations and QSAs were not necessarily proving that network segmentation was truly in place and wanted some additional testing and confirmation.

How Do I Test?

First and foremost, the timing of the testing is very important.  For merchants, it should be conducted as close to annually as possible.  For service providers, the tests are required to be conducted as close to every six months as possible.  But you also need to consider the concept of “significant change”.  If there have been significant changes that affected network segmentation, then the network segmentation testing must be done as soon as possible (the Council typically recommends a maximum of 30 days) after the significant change has been implemented.

While the tool used to conduct the test can be as simple as Nmap or the like, the testing itself can be complicated depending on how your network is segmented.  I have clients with hundreds of segments, which results in a very time-consuming amount of testing.  The key here is to be thorough, but not insanely thorough.

I have no problem with network segmentation testing including a review of firewall and ACL rules, and with using that information to test from one representative network segment where the rules are the same for all of the segments governed by a particular rule.  The key is to be able to justify why you picked one segment over another and not to repeatedly test from only one segment for every test.  Provide the rules with an explanation of your justification for what you did.  This will allow the QSA to understand how you worked and why.

But Nmap is not the only tool that can be used.  There are a number of network management/modelling/monitoring tools such as FireMon, Tufin and RedSeal that can also be used to prove out network segmentation.  In fact, these tools can provide ways to perform the network segmentation testing that do not involve scanning the network at all, merely running reports against the databases these tools create.

Regardless of the tool used, be careful.  I have seen too many reports where the tool never actually reached the devices within the network segment, and the results did not prove segmentation was in place and functioning because, when matched up against the server configurations, other forms of communication showed up.

Segmentation Testing Reporting Requirements

Once you have completed your network segmentation testing, you need to create a proper report of those results.  At a minimum, a network segmentation testing report should have the following sections.

  • A one to two page (at most) Executive Summary of the network segmentation test, the date the testing was started, the date when testing was completed, the results (i.e., pass or fail) and a summary of all findings and recommendations.
  • Document who conducted the test including a bit of background as to why they are considered capable of conducting the test by including any information security certifications they hold and other relevant information security experience.
  • Provide the reader a frame of reference for the testing performed. At a minimum, this should include a high-level diagram of the various segments (i.e., CDE, Connected To and Out of Scope) and an overview of the IP addressing within each of those segments.
  • Document and discuss any significant changes that occurred since the last network segmentation test and what was done to determine whether significant changes did or did not occur. This is necessary to confirm to the QSA and other readers that you are not just following some predefined schedule (i.e., annually or semi-annually) but are also ensuring that significant changes potentially drive segmentation testing, as required by the PCI DSS.
  • Document the methodology that was followed and the tools that were used to prove out network segmentation. What is needed in this section is specificity.  Document step by step, in enough detail that someone else could conduct the testing, what you did to prove network segmentation was in place and functioning as expected.
  • Document any findings and recommendations that result from the network segmentation testing particularly those findings that prove the network is not segmented as expected resulting in a failed test. If segmentation is not in place, then you will need to remediate those findings and retest to prove that the remediation was successful.  If retesting is required, you need to keep all reports so that you have a record of everything that has been tested.

08 Jul 18

Can I Use SSAE 18 SOC 2 Reports? Part 2

In the last post I discussed what the SOC reports are and what, in general, to look for in a SOC 2/3 report.  Now I want to take you through the more detailed analysis of the SOC reporting so that you can understand why it might not give you the result you desire and how to address that fact.

How Do I Analyze The SOC Report?

Based on the testing in the ‘Information Provided by Independent Service Auditor’ section, you are going to need to map that testing into the PCI ROC tests, if they even fit.  I typically use the Prioritized Approach spreadsheet to do this as it provides a way of documenting the requirements covered and a quick dashboard regarding what is covered.

As you reviewed the domains listed under the SOC 3 report, I am sure you thought, “What is not to like?  It looks like most of what I need for PCI is covered.”  But you would be wrong.  You will find, after you map the controls from any SOC 2 report that covers all the TSP domains into the Prioritized Approach, that the report will likely only cover around 20% to 25% of the PCI DSS requirements.  That is because the SOC tests are simply not as detailed as those in the PCI DSS.  As a result, SOC 2 reporting does not provide the kind of reliance you need to respond to all of the relevant PCI DSS requirements.

For example, while a SOC report will likely test that password controls are in place, you will be unable to ascertain whether the organization enforces password lengths of seven characters or greater, whether it enforces password complexity, or whether it requires passwords to be changed every 90 days or less.  Let alone whether the special requirements for vendor password management are enforced.  It is these missing details that create the coverage problems with using the SOC reporting results.

The same can be said for change management.  When tested, the SOC report will likely call out a lot about change management, but not at the level of detail required in the PCI DSS for requirements under 6.4.  You will also find that coverage in requirements 1 and 2 regarding network and server configurations will be lacking in specificity to meet the PCI DSS testing.

Now as a QSA, you have a decision to make.  Can you accept only 20% to 25% of coverage of PCI DSS requirements as being PCI compliant?  I know I cannot.  I need much more to work with before I can get comfortable that a SOC report provides the necessary coverage for PCI compliance.

Now What?

You and your client have expended all this effort and are no closer to the result desired than when this process started.

So, what to do next?

Work with the service providers that give you SOC reports to have them include testing that adds the PCI DSS details that are missing.  There will likely be a bit of push back from these service providers because adding testing to their SOC reports will cause the cost of those reports to increase, sometimes significantly.  So be prepared for it.

What you need to do is to have their auditors add the necessary testing details to the description of controls and then have them test that they are in place.  Examples include:

  • Password length, complexity, change frequency and the procedures followed to perform a password reset.
  • Details surrounding privileged and general user management including provisioning, management approvals, users are implemented with least privilege and users are disabled or removed when terminated.
  • Changes tested for segregation of duties between developers and operations, segregation of test, QA and production environments, production data not used for testing, developers do not have unrestricted access to production, test data and accounts removed before applications are promoted to production, changes document impact, they are appropriately authorized, they have been tested, they have been vulnerability assessed and they document backout procedures.
  • If encryption is used to protect data, document the algorithms used, are key custodian agreements in place, are split key processes in place if performing manual key management, indicate if a hardware security module (HSM) is used and are keys changed when their crypto-periods expire or they are believed to be compromised.
  • Document the configuration standards that are followed by device classes such as firewalls, switches, servers and test that they have been implemented.
  • Document that anti-virus is implemented on systems commonly affected by viruses and malware, what the implemented anti-virus solution is, that the anti-virus solution cannot be disabled and that the anti-virus solution is actively running on all systems where it is installed.
  • Document that vulnerability scanning is performed, how often scanning is performed and that vulnerabilities are remediated.
  • Document that penetration testing is performed, how often penetration testing is performed and that findings are remediated.
  • Document that log data is collected from all devices, it is reviewed at least daily and that it contains a date/time stamp, device name, type of log entry and other relevant information.

There are a lot of other areas that could be added to the SOC report, but these are, in my opinion, the bare minimum that need to be added to make the SOC report more relevant for PCI.  I am trying to balance the amount of additional information needed versus the cost of providing it in the SOC report.

Will adding all of this cover all of the gaps between SOC and PCI?  No.  But it should give your QSA significantly more comfort that the controls in place meet PCI requirements than what is currently being provided by CPAs.

04 Jul 18

Can I Use SSAE 18 SOC 2 Reports? Part 1

This is a common question that QSAs encounter from clients.  The client has an SSAE 18 Service Organization Control (SOC) report from one of their service providers, and they want to know if they can use it to satisfy any or all of the requirements in 12.8, 12.9 and 12.11 related to vendor management.

The biggest caveat in this discussion is that the PCI SSC does not sanction the use of any report other than a PCI Attestation Of Compliance (AOC), a PCI Report On Compliance (ROC) or the other official PCI reports.  The Council has repeatedly stated that if a QSA chooses to rely on an SSAE 18 SOC 2 report (or any other compliance report, for that matter) to fulfill PCI ROC requirements, the QSAC and their client accept the risk that the report does not cover what the QSA claims it covers.  As a result, most QSAs will not accept an SSAE 18 SOC 2 report (or any other non-PCI compliance report) for any reason.

For those of us “recovering” certified public accountant (CPA) types that have conducted SSAE 18 audits, we know how to read and interpret these reports.  As a result, when we are asked about SSAE 18 SOC 2 reports being relevant, our answer is, “It depends on what the SOC 2 covers and how it was tested.”

Before we get too deep into this discussion though, we need to define the terminology surrounding this topic.  The first thing is that SSAE 18 replaced SSAE 16 as of 2017 even though nothing else appears to have changed.  The next key thing anyone needs to know about SSAE 18 is that there are three reports that can come from this reporting series: SOC 1, SOC 2 and SOC 3.

The first, SOC 1, is for financial auditors only.  Years ago, it was called a SAS 70.  It is a report focused on the financial controls that an external auditor needs to ensure that the financial numbers coming from the third party can be relied upon in the annual audit of their client.  Yes, these SOC 1 reports can cover security controls, but only in regard to financial systems, not necessarily the third party’s entire environment.  In addition, the control coverage is typically not as deep as required for PCI compliance.  The bottom line is that reliance on a SOC 1 report outside of financial systems should never be assumed.

I am going to cover the SOC 3 report next because it covers all of the security domains.  The SOC 3 report (also sometimes referred to as the ‘SysTrust’ report) covers the following domains:

  • Organization and Management – The criteria relevant to how the organization is structured and the processes the organization has implemented to manage and support people within its operating units.
  • Communications – The criteria relevant to how the organization communicates its policies, processes, procedures, commitments, and requirements to authorized users and other parties of the system and the obligations of those parties and users to the effective operation of the system.
  • Risk Management and Design and Implementation of Controls – The criteria relevant to how the entity (i) identifies potential risks that would affect the entity’s ability to achieve its objectives, (ii) analyzes those risks, (iii) develops responses to those risks including the design and implementation of controls and other risk mitigating actions, and (iv) conducts ongoing monitoring of risks and the risk management process.
  • Monitoring of Controls – The criteria relevant to how the entity monitors the system, including the suitability, and design and operating effectiveness of the controls, and takes action to address deficiencies identified.
  • Logical and Physical Access Controls – The criteria relevant to how the organization restricts logical and physical access to the system, provides and removes that access, and prevents unauthorized access to meet the criteria for the principle(s) addressed in the engagement.
  • System Operations – The criteria relevant to how the organization manages the execution of system procedures and detects and mitigates processing deviations, including logical and physical security deviations, to meet the objective(s) of the principle(s) addressed in the engagement.
  • Change Management – The criteria relevant to how the organization identifies the need for changes to the system, makes the changes following a controlled change management process, and prevents unauthorized changes from being made to meet the criteria for the principle(s) addressed in the engagement.

There are also some additional considerations related to Confidentiality specified in the Trust Services Principles and Criteria (TSP), but those are not required to be covered in the SOC 3 report.

Finally, there is the SOC 2 report.  The SOC 2 report uses the same TSP as the SOC 3 but with a twist: the third party can select any or all of the seven domains to be assessed.  Think of it as a “cafeteria style” assessment.  With the SOC 2, the AICPA does not require that all domains be covered (as with the SOC 3); the assessed entity can select only those domains they wish audited.  As a result, a third party could select only the ‘Organization and Management’ domain to be assessed and nothing else in their SOC 2 report.  Therefore, just because you have a SOC 2 does not mean it covers the domains necessary for your PCI assessment.  Like the SOC 3, in addition to the seven domains, the SOC 2 can also cover none, any or all of the additional considerations documented in the TSP.

Within each of these SOC reports there is a Type I and a Type II report.  A Type I report is basically worthless from a reliance perspective because no testing of the controls is ever performed.  With a Type I report, the auditor is signing off on the fact that the third party has controls defined and formally documented.  But without testing, there really is no point to this report.  Yet every now and then, I encounter a Type I report that an organization has relied upon for years.

The only report worth anything is a Type II report which tests the control environment to ensure that the controls are functioning as designed.  So, when you get that SOC 2 report, you need to make sure you have a Type II report where testing has been performed by the auditor.  Even then though, the report might not be as useful as you might think.

I Have A SOC 2 Type II Report From A Service Provider

While you want to read the whole report in detail, when I am pressed for time and cannot read it in its entirety, here is where I focus so that I can get a quick view of what I have.  Some CPA firms provide a one-page Executive Summary that gives the reader a quick overview of the report, provides the timeframe the report covers, opinion, exceptions and other useful information.  But that is not required by the AICPA so you cannot always rely on such an overview being in every report you receive.  When they are available, they can help you focus your quick review efforts even better.

The first thing to do is to read the auditor’s opinion which should be the first section of the report.  It is in the form of a letter on the auditor’s letterhead and signed by the auditing firm.  The opinion the auditor provides will be either:

  • Unqualified – no material control weaknesses or failures were identified.
  • Qualified – some material control weaknesses or failures were identified.
  • Adverse – significant control weaknesses or failures were identified.

An unqualified opinion is what all organizations desire and what most reports document.  But do not be fooled by an unqualified opinion.  There still could have been control weaknesses or failures identified but they did not rise to the level of being considered “material”.  I have seen some unqualified reports with control weaknesses that I would have considered material as their auditor, so you might still want to contact the organization to get clarification on any weaknesses identified.

A report with a qualified opinion is not the end of the world, but that will all depend upon what control weaknesses or failures created the qualification.  Someone misusing their access can be minor compared to not performing backups of servers for months.  As a result, you need to read each control weakness to determine the criticality of the control failure as well as review management’s responses to how they addressed or will address the failure.  Again, you may find yourself contacting the organization to clarify weaknesses documented.

In my experience, reports with an adverse opinion never get issued to the public.  Management sees all of the control failures and weaknesses and then embarks on the long arduous task of cleaning up their control environment.

The next section to look at is the one labeled ‘Information Provided by Independent Service Auditor’ or similar.  This is the section that will contain the testing results and will define which of the domains were covered as well as the timeframe the report covers.  Most organizations issue SOC reports annually, so you always want to make sure that you have the most current report.  If the report’s coverage end date is within three months of being a year old, or older, you should contact the third party and ask when the next report will be issued.  They should inform you that the new report is in progress and give you an estimated issue date.  If they do not give you a succinct answer, I would be concerned.

You need to go through this section looking at a couple of things.  The first is to determine which of the domains were covered.  While documenting those domains, you also need to review the testing that was performed and at what level of detail those tests were conducted.  For example, it is not unusual to see tests for change control cover five random changes but not test those changes for having appropriate documentation, backout instructions and testing, only that the changes were approved.  At some point you will need to read this section carefully to determine what, if anything, will cover the testing required by the PCI DSS.  But a quick perusal will usually give you an idea of what you are likely going to get out of the SOC 2 for PCI compliance, if you are going to get anything at all.

This leads to the next section of the report you should read.  The last section of all SOC reports is usually titled ‘Supplemental Information Provided By [Organization Name]’.  This section contains information that was provided by the entity being audited but is not covered by the auditor’s opinion.  There can be all sorts of information presented here but the important point to remember is that the auditor did not test or assess the accuracy of that information.  So, you need to take any information provided in this section with a bit of skepticism.

It is in the Supplemental Information section that you want to look for a sub-section titled ‘Management’s Response to Control Exceptions’ or similar.  Even when an organization has an unqualified opinion, there can still be items listed in this section.  If there are items listed, you want to carefully read what those items were and how management addressed or corrected the condition.  If you find any control issues and responses that concern you, you should contact the entity and get those discussed so that you are comfortable with the situation.  If you cannot get comfortable with the situation, then you may want to consider additional controls at your end to compensate for the control weakness with the third party.

In the next post I will take you through a more thorough review of the SOC report.

18 Jun 16

iFrame Hack Reported

This week brought news of an inline frame (iFrame) payment solution that was hacked in the UK.  For all of you merchants that use an iFrame solution because you were told it reduced your PCI scope, you may want to rethink your security strategy.  For all of you hosting companies that offer these iFrame solutions because of their scope reduction value, you too may want to rethink your security strategy.

For those of us that are not Web developers, an iFrame is:

“An HTML document embedded inside another HTML document on a website. The iFrame HTML element is often used to insert content from another source, such as a payment page or advertisement, into a merchant’s Web page.”

For merchants using an iFrame for handling payments, the PCI SSC has ruled that the iFrame takes the merchant’s Web site out of scope because the iFrame is managed by the payment provider, not the merchant.  Thus, merchants using an iFrame or a redirect are allowed to fill out SAQ A.  However, because of the increased risks to merchant Web sites using iFrames and redirects, the Council has updated SAQ A in response to those risks.

But there has always been a risk that iFrames and redirects could be manipulated.  The attack used in the article was fairly sophisticated in that it required a lot of knowledge about how that particular iFrame worked and then used a man in the middle (MITM) approach to intercept the invocation of the payment processor’s iFrame and insert their own iFrame.  Not easy, but definitely effective.

The easier approach is for an attacker to change the script/executable that invokes the iFrame/redirect so that it invokes a malicious iFrame/redirect.  A merchant would be alerted to such a change if critical file monitoring were required, but SAQ A does not require critical file monitoring.
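For illustration, here is a minimal sketch of the kind of critical file monitoring that would catch such a change.  The file path and baseline hash are hypothetical placeholders:

```python
import hashlib

# Hypothetical: the merchant page that invokes the payment iFrame, plus a
# known-good SHA-256 hash recorded when the page was last legitimately changed.
PAYMENT_PAGE = "/var/www/html/checkout.html"
KNOWN_GOOD_SHA256 = "replace-with-recorded-baseline-hash"

def file_hash(path):
    # Hash the file contents so any modification, however small, is detected.
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

if file_hash(PAYMENT_PAGE) != KNOWN_GOOD_SHA256:
    # A real deployment would raise an alert to personnel, not just print.
    print("ALERT: %s has changed - investigate immediately" % PAYMENT_PAGE)
```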

This is why a lot of QSAs have told their clients that only fools believe that the requirements in SAQ A will keep their Web sites secure.  At a minimum, merchants using iFrame/redirect solutions should implement critical file monitoring and logging and conduct quarterly vulnerability scanning so that they can secure their Web sites and alert on any changes or suspicious activity.

16 Apr 16

PCI DSS v3.2 Draft Released

On Friday, April 15, 2016, while a lot of you were probably getting your US income taxes done, the PCI SSC decided to release the draft of v3.2 of the PCI DSS.  I know the announcement message from the Council ended up in my company’s spam filter, so you may want to check there if you did not receive a message.  I was lucky enough that a colleague forwarded his copy along to me.  However, to get the draft itself, you need access to the PCI Portal to obtain the draft PCI DSS v3.2 and the requisite change log.

These are some of the more notable changes in the new PCI DSS version.

  • The draft provides an official sunset date for v3.1 of the PCI DSS. Regardless of the date in April that v3.2 is released, v3.1 will be withdrawn on October 31, 2016.  So any assessments done after that date will need to comply with and use v3.2.
  • Two new sections to Appendix A have been added. In addition to the Appendix for shared hosting providers (now marked A.1), we get Appendices A.2 and A.3.  A.2 covers SSL and early TLS for those of you that will miss the June 30, 2016 date.  For those of you that thought 2018 was the deadline and missed the discussions on the Webinar about the SSL/early TLS deadline: while the deadline was extended to June 30, 2018, any organization missing the June 30, 2016 date must fill out Appendix A.2.  A.3 is where the Council added the designated entities supplemental validation (DESV) requirements.
  • There are a number of new requirements for service providers that are best practices until February 1, 2018. Those new requirements include: (1) maintain a documented description of the cryptographic architecture, (2) detect and report on failures of critical security control systems, (3) perform penetration testing on segmentation controls at least every six months, (4) executive management to establish responsibilities for the protection of cardholder data and a PCI DSS compliance program, and (5) perform reviews at least quarterly, to confirm personnel are following security policies and operational procedures.  I would bet that numbers three and five will likely create a lot of contention with service providers.  But you have until February 1, 2018 to get those in place.  However, if experience teaches us anything, service providers had better start now getting these new requirements in place and operating.
  • All organizations picked up the following new requirements that are best practices until February 1, 2018: (1) change control processes to include verification of PCI DSS requirements impacted by a change, and (2) multi-factor authentication for all personnel with non-console administrative access to the CDE. As with the aforementioned new requirements for service providers, these will also require a lot of organizations to get started now to ensure these new requirements are in place and operating.
  • The Council clarified requirement 8.1.5 to show that it is intended for all third parties with remote access, rather than only vendors. While most organizations understood the intent of this requirement, there were a few that played “legal eagle” and refused to require compliance for non-vendors.
  • Requirement 6.5 has been clarified to state that developers must go through secure coding training at least annually. This change will likely create some consternation for organizations that are developing their own software that is in scope for PCI compliance.
  • Clarified 11.5.a by removing “within the cardholder data environment” from the testing procedure for consistency with requirement, as the requirement may apply to critical systems located outside the designated CDE. This will likely expand the number of systems that require critical file monitoring.
  • Clarified 12.8.1 by saying that the list of service providers must now include a description of the service(s) provided.
  • Clarified 12.8.2 by adding guidance that service provider responsibility will depend on the particular service(s) being provided and the agreement between the two parties.
  • One of my pet peeves has finally been addressed. I have always had an issue with requirement 1.1.6 and the use of the terminology “insecure protocols”.  The reason is that in one way or another, all protocols have their insecurities whether they are known or not.  In v3.2, the Council has finally removed the “insecure” designation as, in their words, “these may change in accordance with industry standards.”  It is those small battles at times that make your day.

There are other clarifications and edits that have been made to the new version.

For all of us QSAs, we await the Reporting Template, which will detail the actual testing to be performed and allow us to assess the real impact on the effort required to conduct an assessment.  As a result, there could still be some surprises with this new version of the PCI DSS.  So stay tuned.

24 Nov 15

Information Supplements Versus The PCI DSS

At various times over the years, the Council has repeatedly told QSAs, Participating Organizations (PO) and anyone else that has asked questions about statements in the Information Supplements the following.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

So what, then, is the point of the Information Supplements?

Boy is that a good question. As a QSA, I often ask myself that very question after some of the inane conversations with clients and prospective clients regarding Information Supplements and their supposed “guidance”.

The first thing everyone should remember about Information Supplements is that they are developed and written by a committee at the suggestion of the Council, POs or as part of special interest work groups. These committees are made up of personnel from interested POs, QSAs, ISAs, vendors and anyone else willing to participate in their development. They are edited by a representative from the Council and reviewed by the Committee and are then submitted to all POs, QSAs and ISAs for review and comment. Similar in concept to the development and review of RFCs by the IETF.

The other key point about Information Supplements is that they are developed to give QSAs, ISAs and organizations ideas and guidance on how best to meet the requirements of the PCI DSS and the Reporting Template testing. Again, as the Council has repeatedly stated, the Information Supplements do not replace the explicit guidance and testing requirements in the PCI DSS and the Reporting Template. They are merely suggestions on an approach.

Yet time and again, QSAs and ISAs get these priceless documents tossed in our faces and are told we do not know what we are talking about. “The Information Supplement says …” is always put out there as the justification as to why an organization is doing something it should not be doing or as the rationale for why the organization is not in compliance with the PCI DSS. And we again are forced to explain that the Council never has said that an Information Supplement replaces the guidance and testing in the PCI DSS or the Reporting Template.

The first question anyone, and I do mean anyone, should ask about any statement in an Information Supplement is, “Do the PCI DSS and/or the Reporting Template explicitly say the same thing?” Those are the only two documents that matter and the only documents that your organization will be assessed against. If something is not explicitly called out in either of those documents, then it is not authoritative and does not reflect the compliance requirements.

As an example, I was on a conference call recently regarding the Council’s Information Supplement on penetration testing. This supplement was issued in March 2015 and is possibly one of the most confusing and contradictory pieces of “guidance” we have ever encountered. In fact, it has created more confusion than it has clarified. In my very humble opinion, the Council would be better off taking it out of circulation because of all of the trouble it creates for QSAs, penetration testers, ASVs and clients. It is possibly one of the worst written of the Information Supplements and, while people both on the Committee that developed it and external reviewers supplied the Council with numerous suggestions for changes, those changes were not incorporated into the document. Why they were not incorporated is anyone’s guess. But we in the PCI community ended up with possibly the most poorly expressed and misunderstood guidance available.

As usual, the client was arguing over the scope of their penetration testing. I get the fact that organizations want to minimize costs and scope as much as possible. However, when you listen to some security professionals’ arguments on this topic, you just wonder how they got to their positions, as they argue against testing systems and devices that are painfully obviously in scope.

And, as is also usual, the first piece of confusion regarding scope is in Section 2, page 5, in the first paragraph after the bullets, which states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

One would think that to any reasonably intelligent information security professional, the first part of the sentence, “It is not a requirement to test from within the CDE to the servers inside the CDE;” would be considered a pure line of garbage. Never mind that none of the recognized penetration testing methodologies ever suggests such an approach. But the people arguing never consider that fact. Nope. They are so focused on cutting their PCI compliance bill that it does not matter that the statement is pure, unsupported garbage. It is treated as gospel truth. Otherwise, why would the Council allow such a statement? Good question. We have asked the Council that question and the answer back is? You guessed it.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

Again, never mind that it is in no way supported by the guidance the PCI DSS provides for requirement 11.3, which says:

“The intent of a penetration test is to simulate a real-world attack situation with a goal of identifying how far an attacker would be able to penetrate into an environment. This allows an entity to gain a better understanding of their potential exposure and develop a strategy to defend against attacks.”

But argue it they do, even when you point out that this position amounts to claiming that any attacker would stop at the perimeter of the CDE and go no further.

Seriously? If you believe that, you must also believe in Santa Claus, the Easter Bunny, the Tooth Fairy and any of the multitude of other mythical creatures. Or you are just lying to yourself and are in serious denial about your organization’s security posture. But argue on they do.

Then pair that with the second part of that first sentence, which says, “… and testing exclusively from within the CDE perimeter will not satisfy the requirement.” It just adds fuel to the out of scope argument.

As I point out when bitch-slapped with this terrible writing, if you go back and carefully re-read the second part of the first sentence, what it says is that penetration testing only from inside the CDE is not sufficient to meet the penetration testing requirements of PCI DSS requirement 11.3. In no way does that sentence say or even imply that the CDE is out of scope. It is actually saying that penetration testing should be done from within the CDE, but that testing only inside the CDE does not, by itself, satisfy 11.3. Yet people will still argue that the CDE is out of scope.

That the CDE is in scope is further supported by the definition of “critical systems” in section 2.2.1 of the document, which makes clear that not only are systems within the CDE in scope, but so are systems outside the CDE that could affect the security of systems inside the CDE (i.e., what the Council and the Open PCI DSS Scoping Toolkit refer to as “connected to” systems). However, people arguing over scope rarely, if ever, tie these two sections together, and then claim that because they are in separate sections they cannot possibly be related, even though the entire document is about only one subject: penetration testing and requirement 11.3 of the PCI DSS.

So before you go off telling your QSA or ISA that the Information Supplement says something, think about what the Information Supplement actually says. Is the guidance from the Information Supplement even implied in the PCI DSS? Read the guidance in the PCI DSS and the testing procedures from the Reporting Template. If the PCI DSS and the Reporting Template do not explicitly contain the same language as the Information Supplement, then the Information Supplement is merely a suggestion.

And if the guidance from the Information Supplement does not make sense, pull your head out of your posterior and use some God-given common sense. Ask your QSA or ISA to explain it before going off half-cocked and assuming that such statements must make sense.

But again, why would the Council allow such statements? Good question. We have asked the Council that question and the answer back is? You guessed it.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

Clear as mud? You bet.

But what did you expect? It is PCI.

For all of you in the United States, have a happy and safe Thanksgiving holiday.

25
Jul
15

Compensating Control Refresher

From time to time, organizations find themselves in the predicament of not being able to meet a PCI DSS requirement due to business or technical constraints. To address that situation, the PCI SSC has provided the compensating control worksheet (CCW) as a way to work around requirements that cannot be met directly as stated in the PCI DSS. When the CCW was updated back in 2010 for v1.2, I wrote about those changes and how to write a CCW. However, here we are at v3.1, five years down the road, and I still see a lot of poorly and improperly written CCWs. As a result, I think it is time to take people through a refresher on the CCW.

First and foremost, the writing of any CCW is your organization’s responsibility. Your QSA can provide input and guidance, but the origination of the CCW is up to the organization. Once developed, your QSA can review it and make suggestions to enhance and improve the CCW. Once that has been completed, you will then want your acquiring bank to review it to ensure that they will accept it as part of your self-assessment questionnaire (SAQ) or Report On Compliance (ROC) filing.

Secondly, the format of the CCW is dictated by the Council, and that format is provided in Appendix B of the SAQ D or in Appendix C of the ROC. Failure to use the proper format will create issues with your QSA, your bank and the Council, particularly if you are doing a ROC. So please use the Council-supplied format and do not develop something of your own.

Finally, the PCI SSC has stated that any requirement can have a CCW. In the past, the Council instructed QSAs and ISAs that requirement 3.2 [Do not store sensitive authentication data after authorization (even if encrypted). If sensitive authentication data is received, render all data unrecoverable upon completion of the authorization process] was not allowed to have a CCW. At the 2014 Community Meeting, the Council backed away from that restriction and said that any requirement can have a CCW with no restrictions. However, as a QSA I would have a serious problem accepting a CCW for requirement 3.2, because stopping the storage of sensitive authentication data (SAD) is the very reason the PCI DSS was created.

To remind everyone, the CCW is broken into seven sections.

  • Identification of the PCI DSS requirement(s) being compensated.
  • The constraint or business justification for needing the CCW.
  • The original objective of the requirement(s) being compensated.
  • Identification of any additional risks because of the CCW.
  • The compensating controls.
  • The procedures your QSA/ISA followed to confirm that the compensating controls are in place and functioning.
  • The procedures followed by your organization to maintain the compensating controls.
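
For organizations that end up managing several CCWs, it can help to keep them structurally consistent. Purely as an illustrative sketch, and not a substitute for the Council-dictated worksheet format in the appendices mentioned above, those seven sections map naturally onto a simple record like this:

```python
# Illustrative sketch only: a record mirroring the seven CCW sections.
# The Council-dictated worksheet format is what must actually be filed.
from dataclasses import dataclass

@dataclass
class CompensatingControlWorksheet:
    requirements: list[str]            # requirement number(s) plus the Requirements column text
    justification: str                 # the business or technical constraint
    original_objective: str            # taken from the PCI DSS "Guidance" column
    additional_risk: str               # honest assessment of any risk the CCW introduces
    compensating_controls: list[str]   # one entry per control, "above and beyond" the DSS
    validation_procedures: list[str]   # how the QSA/ISA confirmed each control operates
    maintenance_procedures: list[str]  # how the organization keeps each control working
```

Keeping the controls, validation items and maintenance items as parallel lists also makes it easy to spot a control that was never validated or that has no maintenance procedure behind it.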

While the Council tells everyone to have an individual compensating control for each requirement, there are some places where a compensating control is the same for a number of requirements. This most often occurs for the various user management requirements in section 8, or for requirements 6.1, 2.2 and 11.2 and the vulnerability management process. I would highly recommend using one CCW per requirement, but I can understand why you might combine some. Just be judicious in combining them. Also, list not only the number of the requirement(s), but also the text of the requirement from the Requirements column in the PCI DSS. While your QSA might have memorized the PCI DSS requirements, bankers and others who will read the CCW typically have not, and including the text will help them with the context of the CCW.

The business justification needs to be more than just “we don’t want to” or “it was too hard”. Believe it or not, I have had a lot of organizations provide just such simplistic and silly reasons for justifying a CCW. Proper justifications can involve budgetary constraints, timing (e.g., not enough time to complete remediation by the end of the assessment period), application requirements (e.g., the application requires XP to run) and/or vendor requirements (e.g., the vendor requires a hardware upgrade to correct the issue). If you do have a target date for addressing the CCW, this is where you want to provide that information so that readers know that the CCW has some time limit.

The original objective is the easiest part of the CCW to develop. The Council has provided the “Guidance” column in the PCI DSS for each requirement, and it is the verbiage in that Guidance column that you should use to explain the original objective of the requirement. If you are using the CCW for multiple requirements, this section can get rather lengthy, and I would recommend pairing each piece of Guidance with its requirement to aid understanding.

The next section can sometimes be the toughest to develop, and that is the identification of any additional risks created by using a CCW. In some cases, there may actually be no additional risk from using a CCW. One such example is when organizations have a separate system management VLAN from which network and system administrators can use telnet, SNMPv2 and other “unsecure” protocols in addition to SSH, RDP and other secure protocols to manage devices/systems. These system management VLANs typically require the use of an out-of-band (OOB) network to gain access, administrator credentials different from the administrator’s normal user credentials, and two-factor authentication, to name just a few of the controls you see in this example. Such management/administrative VLANs are no more risky than using only secure protocols.
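
If you document a control like that, be prepared to demonstrate it. Below is a minimal sketch, in Python, of the kind of spot check that could back up such a CCW: run from a host outside the management VLAN, it confirms that the insecure management ports are not reachable from that vantage point. The device addresses and port list are hypothetical placeholders, not anything prescribed by the PCI DSS.

```python
# Hypothetical spot check: run from a host OUTSIDE the management VLAN to
# confirm that insecure management ports are blocked from that vantage point.
import socket

DEVICES = ["10.20.30.1", "10.20.30.2"]       # hypothetical managed devices
INSECURE_PORTS = {23: "telnet", 80: "http"}  # ports that should be unreachable

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in DEVICES:
    for port, name in INSECURE_PORTS.items():
        if port_open(host, port):
            print(f"FAIL: {name} ({port}/tcp) on {host} reachable outside the management VLAN")
        else:
            print(f"OK:   {name} ({port}/tcp) on {host} is blocked")
```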

However, if you are compensating for having to keep Windows XP running, that will likely be a very different story, and depending on the compensating controls put in place, the risk could be moderately higher than not having XP around at all. The key here is that the risk should be assessed and then honestly discussed in the CCW. If you think you are going to say that having XP does not increase the risk to your cardholder data environment (CDE), I would seriously think again, regardless of the compensating controls in place, because any outdated Windows implementation is a security problem waiting to happen no matter how well you think you have mitigated the risk.

The compensating controls section is where the rubber finally meets the road. It is here that you document each individual control that compensates for your organization’s inability to meet the requirement(s) in question. I recommend that people either bullet point or number list each individual control. The reason is that in the next two sections, you need to tie the validation and maintenance items to the controls in this section and doing some sort of list makes it easy for people to ensure they have covered all controls in each section.

The most common mistake made in this section is that organizations state they have a project to remediate the issue(s). Sorry, but a project is NOT a control. It is nice information, but it is not a control that can be relied upon, and QSAs never want to see statements about future projects in this section. This section is all about what you are doing, from a controls perspective, to manage the fact that you cannot meet the requirement(s).

Valid controls in this section must also go “above and beyond” what is required by the PCI DSS. Examples of “above and beyond” include:

  • Reviewing log data in real time for a particular condition that would indicate an out of compliance condition on a control. This is above and beyond because log data only needs to be reviewed daily for such conditions.
  • Using whitelisting to identify applications that do not belong on a PC and generating an alert in real time if such applications are found. Such technology is above and beyond because it is not currently required by the PCI DSS.
  • Using critical file monitoring to identify rogue applications that do not belong on a PC and generating alerts in real time if found. Critical file monitoring is a PCI requirement, but this goes above and beyond because monitoring is only required on a weekly basis.

The list here can go on and on, but hopefully I have given you some ideas of how to create compensating controls that can actually compensate for your inability to comply with the requirement(s).
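
To make the first bullet concrete, here is a minimal sketch of what near real time log review can look like: a process tails a log file and raises an alert the moment a condition of interest appears. The file path and condition string are hypothetical placeholders, and in practice this logic would live in your SIEM or log management tooling rather than a standalone script.

```python
# Minimal sketch: tail a log file and alert in near real time when a
# particular condition appears. Path and condition are hypothetical.
import time

LOG_FILE = "/var/log/secure"          # hypothetical log to watch
CONDITION = "authentication failure"  # hypothetical out-of-compliance marker

def follow(path):
    """Yield lines appended to the file, tail -f style."""
    with open(path, "r") as fh:
        fh.seek(0, 2)  # start at the current end of the file
        while True:
            line = fh.readline()
            if not line:
                time.sleep(0.5)  # wait for new data
                continue
            yield line

for line in follow(LOG_FILE):
    if CONDITION in line:
        # A real control would page someone or open a ticket here;
        # printing stands in for the alerting mechanism.
        print(f"ALERT: {line.strip()}")
```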

One key point though is that you cannot use a requirement in the same requirement group to compensate for a different requirement in that group. For example, requirement 6.4 has numerous sub-requirements under it. You cannot write a compensating control for one sub-requirement of 6.4 and then use a different sub-requirement under 6.4 as one of your compensating controls, regardless of whether it is above and beyond.

The next section will list how the controls were assessed by your QSA/ISA to prove they have been implemented. So using our previous bullet list, here is what the control validation bullets would look like.

  • Observed the security information and event management (SIEM) solution and verified that alerts are generated in near real time for [control failure condition] and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.
  • Observed the [whitelisting solution name] and verified that if rogue applications are loaded on a workstation a near real time alert is generated back to the [whitelisting solution name] master console and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.
  • Observed the [critical file monitoring solution name] and verified that if rogue applications are loaded on a workstation a near real time alert is generated back to the [critical file monitoring solution name] master console and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.

Finally, you need to document what your organization will do to ensure that the controls remain implemented and effective. This is where most compensating controls fall apart. The organization gets through their assessment and then neglects to keep the compensating controls working. Using our list from the compensating controls section, the maintenance controls would look something like this.

  • [Organization name] reviews the SIEM on a [weekly/monthly/quarterly] basis and tests that the alerts for the [control failure condition] are still functioning as designed.
  • [Organization name] reviews the [whitelisting solution name] on a [weekly/monthly/quarterly] basis and tests that the alerts for rogue applications are still functioning as designed.
  • [Organization name] reviews the [critical file monitoring solution name] on a [weekly/monthly/quarterly] basis and tests that the alerts for rogue applications are still functioning as designed.

A good idea in the maintenance section is to set timeframes for remediating any control testing failures.
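
What might such a periodic test look like in practice? Pairing with the earlier log monitoring sketch, the idea is simply to inject a synthetic event and confirm that an alert actually results. The paths, marker string and timeout below are hypothetical, and any real SIEM or monitoring product will have its own supported way of doing this.

```python
# Hypothetical periodic test: write a synthetic event into the watched log
# and confirm the monitoring pipeline records an alert for it.
import time
from pathlib import Path

LOG_FILE = Path("/var/log/secure")               # log the monitor watches
ALERT_LOG = Path("/var/log/ccw_alerts")          # where alerts are recorded
TEST_MARKER = "CCW-TEST authentication failure"  # synthetic test event

def run_alert_test(timeout: float = 30.0) -> bool:
    """Inject a test event, then wait for it to appear in the alert log."""
    with LOG_FILE.open("a") as fh:
        fh.write(TEST_MARKER + "\n")
    deadline = time.time() + timeout
    while time.time() < deadline:
        if ALERT_LOG.exists() and TEST_MARKER in ALERT_LOG.read_text():
            return True
        time.sleep(1.0)
    return False

if __name__ == "__main__":
    if run_alert_test():
        print("Alerting pipeline verified: record the test result as evidence.")
    else:
        print("Alerting pipeline FAILED: remediate within the agreed timeframe.")
```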

One other important item of note about the controls, validation and maintenance lists: notice that there are no “forward looking” statements, such as someone “will” perform or “will” review something. CCWs must be shown to be in place and operating. A promise to implement a control is NOT a control either. The control must be shown to be operating and maintained. That is an important point a lot of organizations miss. It means that CCWs cannot be created at the last minute, just ahead of the filing of your SAQ or ROC. If you are going to have to use a CCW, you need to identify the situation early and then get the compensating controls implemented, validated and through at least one maintenance cycle before the CCW can be accepted.

CCWs can buy organizations time while they address issues that will take longer to fix than their PCI assessment period allows. Unfortunately, some organizations see the CCW as a way to be judged PCI compliant without addressing their serious security shortcomings. It is not unusual for large organizations to have a number of CCWs, particularly if they have legacy applications and hardware. However, I would highly recommend that organizations rely on CCWs only when there are no other options for achieving PCI compliance.



