Archive for the 'Requirement 11 – Regularly test security systems and processes' Category

13 Dec 20

Network Segmentation Testing

NOTE: If you have not read the PCI SSC Information Supplement – Guidance for PCI DSS Scoping and Network Segmentation you must do so before using the procedures documented in this post.

How something so simple became something so complicated (or at least believed to be complicated), I will just never understand.  The only thing I can point to is the fact that network segmentation testing falls within the penetration testing requirements, so people believe there must be something “special” about how segmentation testing is performed.  Never mind that there is the even more basic issue of how to approach network segmentation testing in the first place.

Here is the network segmentation testing methodology for traditional IP networks.

  • Gather an inventory of all of the network segments.  Label each network segment as Cardholder Data Environment (CDE), Connected To or Out of Scope based on the definitions from the Scoping Information Supplement.
  • Make sure you have Nmap installed on a portable computer.  This needs to be portable because you will likely have to move around your facilities in order to complete all of the testing.  It is also not unusual to use diagnostic systems in the data center for this effort (they may already have Nmap installed) or to create VMs for this testing and then remote into those systems.  The important thing is to have access to every network segment in your environment so that you can conduct this testing.
  • Connect your scanner to every CDE network segment and attempt to reach all of the Out of Scope network segments from the CDE.  You will want to run an Nmap scan covering all TCP and UDP ports (i.e., 1 through 65535) against all IP addresses in a given out of scope network segment.  This likely sounds extreme, but to prove segmentation you must test all 65,535 TCP and UDP ports against all IP addresses to make sure that no traffic “leaks” into your out of scope networks.  If you do find an open port in one of your out of scope networks, you will have to track down where that leak occurs.  Example: nmap -p- -sT -sU 10.10.0.0/16 (a fuller sketch follows this list).
  • While in each CDE, test connections out to your Connected To network segments, testing all TCP and UDP ports against all IP addresses in your Connected To network segments.  Since communication between the CDE and Connected To segments is allowed, you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • Finally, you will need to test that your CDE can only reach the internet through the ports and IP addresses you have specified.  Obviously, you are not going to test every internet address as that would take forever.  What I tell my clients to do is to use every external IP address they have for business partners or other third parties they are connected to.  Again, you are going to test all TCP and UDP ports against those addresses.  If you get any unexpected results back, you will have to resolve those issues, as there should be no external connectivity beyond what you have documented and approved.
  • Connect to every Connected To network segment and conduct testing into the CDE for all TCP and UDP ports against all IP addresses in the CDE network segment.  Again, since communication is allowed between these network segments you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • While in the Connected To network segments, conduct testing to all Out of Scope network segments.  Since communication is allowed between these network segments you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • Connect to every Out of Scope network segment and run an Nmap scan into each CDE network segment for every TCP and UDP port for all IP addresses in the CDE.  This should return no results if the network is truly out of scope.  If it does return results, you will have to figure out why and block that traffic into the CDE.
  • Save all of your results and comparisons so that you have a record of your testing.  If you found issues, make sure you document in detail what was done to resolve those issues and conduct new scans to prove that those issues were remediated.
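
To make the scanning steps above concrete, here is a minimal sketch of the scans using Nmap.  It is a sketch only: the addresses, ranges and file names are hypothetical placeholders, UDP scanning requires root privileges, and full TCP and UDP sweeps of large ranges can take days, so plan the runtime accordingly.

    # Run from a scanner attached to the CDE segment being tested.
    # -p- scans all 65,535 ports; -sT is a TCP connect scan; -sU is a UDP scan.

    # 1. CDE -> Out of Scope: expect NO open ports at all.
    nmap -p- -sT -sU -oG cde_to_oos.gnmap 10.10.0.0/16

    # 2. CDE -> Connected To: open ports must match approved firewall rules.
    nmap -p- -sT -sU -oG cde_to_connected.gnmap 10.20.0.0/16

    # 3. CDE -> external: only the business partner addresses you maintain.
    nmap -p- -sT -sU -oG cde_to_partners.gnmap -iL partner_ips.txt

    # Pull any open ports out of the results for review.
    grep -h "/open/" cde_to_*.gnmap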

When you bring in newer solutions such as the Cloud, containers, serverless, microsegmentation and the like, the traditional method of network segmentation testing cannot completely test the environment.  You can conduct all of the tests documented above from outside looking into your cloud environment, but you cannot look from inside the cloud out.  That must be done manually by examining the cloud configuration information and ensuring that networks are properly segmented.

If you are like me, you are looking for a better way to deal with the Cloud as well as large networks.  There are network tools from vendors such as FireMon, AlgoSec, Skybox and Tufin that have capabilities to take the Cloud configuration information as well as firewall, router, switch and other network infrastructure configurations and provide analytical capabilities to simulate the testing above from both internal and external perspectives.  The downside of these tools of course is that they are not inexpensive and can require significant horsepower to operate.  However, they can be worth their weight in gold for their ability to analyze and understand your networks, find misconfigurations and find issues where attacks can potentially succeed.

There is no reason to pay your penetration tester to conduct network segmentation testing unless you are uncertain as to how to analyze the information from the Cloud.

25 Feb 19

Network Segmentation Testing

As part of penetration testing, merchants and service providers are required to test that their network segmentation is properly implemented and functioning.  Sounds like a simple enough task, but you would be amazed at the bizarre and complicated discussions that QSAs encounter when segmentation testing comes up.

As a reminder, requirement 11.3.4 states:

“If segmentation is used to isolate the CDE from other networks, perform penetration tests at least annually and after any changes to segmentation controls/methods to verify that the segmentation methods are operational and effective, and isolate all out-of-scope systems from systems in the CDE.”

For service providers, requirement 11.3.4.1 adds the requirement to test at least every six months and after any changes to segmentation controls/methods, not just after “significant changes”.

Regardless of whether you are a merchant or a service provider, how segmentation testing is performed is the same.

So why all of the issues?

First, the PCI DSS does us no favors with the “guidance” for requirement 11.3.4 which states:

“Penetration testing is an important tool to confirm that any segmentation in place to isolate the CDE from other networks is effective. The penetration testing should focus on the segmentation controls, both from outside the entity’s network and from inside the network but outside of the CDE, to confirm that they are not able to get through the segmentation controls to access the CDE. For example, network testing and/or scanning for open ports, to verify no connectivity between in-scope and out-of-scope networks.”

The first point of confusion typically relates to the phrase “penetration testing”, as though segmentation testing somehow requires the use of a penetration testing tool such as Metasploit or similar.  Nothing could be further from the truth.  But the terminology of “penetration testing” clouds the task.

The second point that seems to confuse is the last sentence, which starts out with “For example …”.  People seem to miss the start of the sentence and take it that all they have to do is make sure that out of scope devices cannot get to the CDE and that is it.  While network segmentation testing is simple, it is not quite that simple.

What Is Segmentation Testing?

Once the mythology and rumors surrounding network segmentation testing have been debunked, this is the first question asked.  I always take people back to the purpose of network segmentation testing – to prove that network segmentation is implemented and functioning as designed to keep the various networks logically separated.

When I say, “various networks”, I am referring to the network segments defined in the information supplement “Guidance for PCI DSS Scoping and Network Segmentation” issued in May 2017.  In that document, the following terminology is used.

  • CDE Systems – any systems/devices that directly process, store or transmit sensitive authentication data (SAD) or cardholder data (CHD) or are directly connected to such systems/devices. These systems/devices are also sometimes referred to as Tier 1 or Category 1.
  • Connected To or Security Impacting Systems – are systems that provide services to the CDE or have connections to systems/devices in the CDE that could adversely affect the security of the systems/devices in the CDE. These systems/devices can also be referred to as “Shared Services”, Tier 2 or Category 2.
  • Out of Scope Systems – are systems that cannot connect to the CDE, also referred to as Tier 3 or Category 3.

For PCI compliance, all CDE Systems (Category 1) and Connected To (Category 2) systems are always in scope.  However, for network segmentation testing, Category 3 systems/devices are also included because the testing must prove that Category 3 cannot get to Category 1 and vice versa.  That is typically where network segmentation testing goes wrong: it only proves that Category 3 cannot get to Category 1 and then stops.  The guidance for requirement 11.3.4 provides some clarity in its second sentence, which states:

“The penetration testing should focus on the segmentation controls, both from outside the entity’s network and from inside the network but outside of the CDE, to confirm that they are not able to get through the segmentation controls to access the CDE.”

The Council has advised that what they want is testing from inside and outside the CDE as well as from other network segments including the internet if applicable.  The idea is to further support the analysis and findings from a QSA’s review of the firewall rules from the requirements in 1.3.x of the PCI DSS.  The reason for this is that with some breaches and the advent of “next generation” firewalls and more sophisticated security technologies, the Council felt that assessed organizations and QSAs were not necessarily proving that network segmentation was truly in place and wanted some additional testing and confirmation.

How Do I Test?

First and foremost, timing of the testing is very important.  For merchants, it should be conducted as close to annually as possible.  For service providers, it must be conducted as close to every six months as possible.  But you also need to consider the concept of “significant change”.  If there have been significant changes that affected network segmentation, then the network segmentation testing must be done as soon as possible (the Council typically recommends a maximum of 30 days) after the significant change has been implemented.

While the tool used to conduct the test can be as simple as Nmap or the like, the testing itself can be complicated depending on how your network is segmented.  I have clients with hundreds of segments, which results in a very time-consuming amount of testing.  The key here is to be thorough, but not insanely thorough.

I have no problem with network segmentation testing including a review of firewall and ACL rules, and with using that information to test, for example, from one representative network segment into another when the rules are the same for all of the network segments covered by a particular rule.  The key is to be able to justify why you picked one segment over another and not to repeatedly test from only one segment for every test.  Provide the rules with an explanation of your justification for what you did.  This will allow the QSA to understand how you worked and why.
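
The comparison of scan results against your documented, approved ports does not have to be manual.  Here is a minimal sketch using standard Unix tools; the file names and the approved-ports format (one “port/protocol” entry per line, e.g. 443/tcp) are assumptions for illustration:

    # Extract open ports from a grepable Nmap scan of a Connected To segment.
    grep -oE '[0-9]+/open/(tcp|udp)' connected_scan.gnmap \
      | sed 's#/open/#/#' | sort -u > observed_ports.txt

    sort -u approved_ports.txt > approved_sorted.txt

    # Ports observed on the wire but not approved - each one needs investigation.
    comm -23 observed_ports.txt approved_sorted.txt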

But Nmap is not the only tool that can be used.  There are a number of network management/modelling/monitoring tools such as FireMon, Tufin and RedSeal that can also be used to prove out network segmentation.  In fact, these tools can provide ways to perform the network segmentation testing that do not involve scanning the network at all, merely running reports against the databases these tools create.

Regardless of the tool used, be careful.  I have seen too many reports where the tool never actually reached the devices within a network segment, and the results did not prove segmentation was in place and functioning because, when matched up against the server configurations, other forms of communication showed up.

Segmentation Testing Reporting Requirements

Once you have completed your network segmentation testing, you need to create a proper report of those results.  At a minimum, a network segmentation testing report should have the following sections.

  • A one to two page (at most) Executive Summary of the network segmentation test, the date the testing was started, the date when testing was completed, the results (i.e., pass or fail) and a summary of all findings and recommendations.
  • Document who conducted the test, including a bit of background as to why they are considered capable of conducting it, such as any information security certifications they hold and other relevant information security experience.
  • Provide the reader a frame of reference for the testing performed. At a minimum, this should include a high-level diagram of the various segments (i.e., CDE, Connected To and Out of Scope) and an overview of the IP addressing within each of those segments.
  • Document and discuss any significant changes that occurred since the last network segmentation test and what was done to prove that significant changes did or did not occur since the last segmentation test. This is necessary to confirm to the QSA and other readers that you are not just following some predefined schedule (i.e., annually or semi-annually) but are also ensuring that significant changes potentially drive segmentation testing as required by the PCI DSS.
  • Document the methodology that was followed and the tools that were used to prove out network segmentation. What is needed in this section is specificity.  Document step by step, in enough detail that someone else could conduct the testing, what you did to prove network segmentation was in place and functioning as expected.
  • Document any findings and recommendations that result from the network segmentation testing particularly those findings that prove the network is not segmented as expected resulting in a failed test. If segmentation is not in place, then you will need to remediate those findings and retest to prove that the remediation was successful.  If retesting is required, you need to keep all reports so that you have a record of everything that has been tested.

08 Jul 18

Can I Use SSAE 18 SOC 2 Reports? Part 2

In the last post I discussed what the SOC reports are and what, in general, to look for in a SOC 2/3 report.  Now I want to take you through the more detailed analysis of the SOC reporting so that you can understand why it might not give you the result you desire and how to address that fact.

How Do I Analyze The SOC Report?

Based on the testing in the ‘Information Provided by Independent Service Auditor’ section, you are going to need to map that testing into the PCI ROC tests, if they even fit.  I typically use the Prioritized Approach spreadsheet to do this as it provides a way of documenting the requirements covered and a quick dashboard regarding what is covered.
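
If you want the quick coverage number without opening the spreadsheet, the arithmetic is trivial to script.  As a purely hypothetical sketch, assuming you export the mapping to a two-column CSV (requirement, covered yes/no, no header row):

    # Hypothetical CSV format: requirement,covered  (e.g. "8.2.3,yes")
    awk -F, '$2=="yes"{c++} END{printf "%d of %d requirements covered (%.0f%%)\n", c, NR, 100*c/NR}' soc_to_pci_map.csv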

As you reviewed the domains listed under the SOC 3 report, I am sure you thought, “What is not to like?  It looks like most of what I need for PCI is covered.”  But you would be wrong.  You will find after you map the controls from any SOC 2 report that covers all the TSP domains into the Prioritized Approach that the report will likely only cover around 20% to 25% of the PCI DSS requirements.  That is because the SOC tests are simply not as detailed as those in the PCI DSS.  As a result, SOC 2 reporting does not provide the kind of reliance you need to respond to all of the relevant PCI DSS requirements.

For example, while a SOC report will likely test that password controls are in place, you will be unable to ascertain whether the organization enforces seven-character or greater password lengths, password complexity, or whether passwords must be changed every 90 days or less.  Let alone whether the special requirements for vendor password management are enforced.  It is these missing details that create the coverage problems with using the SOC reporting results.
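
Those details have to be tested directly.  As a minimal sketch, this is how you could spot-check the password length and expiry settings on a Linux host; the file paths vary by distribution, and pam_pwquality being the module in use is an assumption:

    # Maximum password age should be 90 days or less (PCI DSS 8.2.4).
    grep -E '^PASS_MAX_DAYS' /etc/login.defs

    # Minimum length should be 7 or more (PCI DSS 8.2.3).
    grep -rE 'minlen' /etc/security/pwquality.conf /etc/pam.d/ 2>/dev/null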

The same can be said for change management.  When tested, the SOC report will likely call out a lot about change management, but not at the level of detail required in the PCI DSS for requirements under 6.4.  You will also find that coverage in requirements 1 and 2 regarding network and server configurations will be lacking in specificity to meet the PCI DSS testing.

Now as a QSA, you have a decision to make.  Can you accept only 20% to 25% of coverage of PCI DSS requirements as being PCI compliant?  I know I cannot.  I need much more to work with before I can get comfortable that a SOC report provides the necessary coverage for PCI compliance.

Now What?

You and your client have expended all this effort and are no closer to the result desired than when this process started.

So, what to do next?

Work with your service providers that provide you SOC reports to include testing that adds the PCI DSS details that are missing.  There will likely be a bit of push back from these service providers because adding testing to their SOC reports will cause the cost of their SOC reports to increase, sometimes significantly.  So be prepared for it.

What you need to do is to have their auditors add the necessary testing details to the description of controls and then have them test that they are in place.  Examples include:

  • Password length, complexity, change frequency and the procedures followed to perform a password reset.
  • Details surrounding privileged and general user management including provisioning, management approvals, users are implemented with least privilege and users are disabled or removed when terminated.
  • Changes tested for: segregation of duties between developers and operations; segregation of test, QA and production environments; production data not used for testing; developers not having unrestricted access to production; test data and accounts removed before applications are promoted to production; and changes documenting their impact, appropriate authorization, testing, vulnerability assessment and backout procedures.
  • If encryption is used to protect data, document the algorithms used, are key custodian agreements in place, are split key processes in place if performing manual key management, indicate if a hardware security module (HSM) is used and are keys changed when their crypto-periods expire or they are believed to be compromised.
  • Document the configuration standards that are followed by device classes such as firewalls, switches, servers and test that they have been implemented.
  • Document that anti-virus is implemented on systems commonly affected by viruses and malware, what anti-virus solution is implemented, that the anti-virus solution cannot be disabled and that the anti-virus solution is actively running on all systems where it is installed.
  • Document that vulnerability scanning is performed, how often scanning is performed and that vulnerabilities are remediated.
  • Document that penetration testing is performed, how often penetration testing is performed and that findings are remediated.
  • Document that log data is collected from all devices, it is reviewed at least daily and that it contains a date/time stamp, device name, type of log entry and other relevant information.

There are a lot of other areas that could be added to the SOC report, but these are, in my opinion, the bare minimum that need to be added to make the SOC report more relevant for PCI.  I am trying to balance the amount of additional information needed versus the cost of providing it in the SOC report.

Will adding all of this cover all of the gaps between SOC and PCI?  No.  But it should give your QSA significantly more comfort that the controls in place meet PCI than what is currently being provided by CPAs.

04 Jul 18

Can I Use SSAE 18 SOC 2 Reports? Part 1

This is a common question that QSAs encounter from clients.  The client has an SSAE 18 System and Organization Controls (SOC) report from one of their service providers, and they want to know if they can use it to satisfy any or all of the requirements in 12.8, 12.9 and 12.11 related to vendor management.

The biggest caveat in this discussion is that the PCI SSC does not sanction the use of any report other than a PCI Attestation Of Compliance (AOC), a PCI Report On Compliance (ROC) or the Council’s other reporting documents.  The Council has repeatedly stated that if a QSA chooses to rely on an SSAE 18 SOC 2 report (or any other compliance report for that matter), the QSAC and their client accept the risk if the SSAE 18 SOC 2 does not cover what the QSA claims it covers and relies upon it to fulfill PCI ROC requirements.  As a result, most QSAs will not accept an SSAE 18 SOC 2 report (or any other non-PCI compliance report) for any reason.

For those of us “recovering” certified public accountant (CPA) types that have conducted SSAE 18 audits, we know how to read and interpret these reports.  As a result, when we are asked about SSAE 18 SOC 2 reports being relevant, our answer is, “It depends on what the SOC 2 covers and how it was tested.”

Before we get too deep into this discussion though, we need to define the terminology surrounding this topic.  The first thing is that SSAE 18 replaced SSAE 16 as of 2017 even though nothing else appears to have changed.  The next key thing anyone needs to know about SSAE 18 is that there are three reports that can come from this reporting series: SOC 1, SOC 2 and SOC 3.

The first, SOC 1, is for financial auditors only.  Years ago it was called a SAS 70.  It is a report focused on the financial controls that an external auditor needs to ensure that the financial numbers coming from the third party can be relied upon in the annual audit of their client.  Yes, these SOC 1 reports can cover security controls, but only in regard to financial systems, not necessarily the third party’s entire environment.  In addition, the control coverage is typically not as deep as required for PCI compliance.  The bottom line is that reliance on a SOC 1 report outside of financial systems should never be assumed.

I am going to cover the SOC 3 report next because it covers all of the security domains.  The SOC 3 report (also sometimes referred to as the ‘SysTrust’ report) covers the following domains:

  • Organization and Management – The criteria relevant to how the organization is structured and the processes the organization has implemented to manage and support people within its operating units.
  • Communications – The criteria relevant to how the organization communicates its policies, processes, procedures, commitments, and requirements to authorized users and other parties of the system and the obligations of those parties and users to the effective operation of the system.
  • Risk Management and Design and Implementation of Controls – The criteria relevant to how the entity (i) identifies potential risks that would affect the entity’s ability to achieve its objectives, (ii) analyzes those risks, (iii) develops responses to those risks including the design and implementation of controls and other risk mitigating actions, and (iv) conducts ongoing monitoring of risks and the risk management process.
  • Monitoring of Controls – The criteria relevant to how the entity monitors the system, including the suitability, and design and operating effectiveness of the controls, and takes action to address deficiencies identified.
  • Logical and Physical Access Controls – The criteria relevant to how the organization restricts logical and physical access to the system, provides and removes that access, and prevents unauthorized access to meet the criteria for the principle(s) addressed in the engagement.
  • System Operations – The criteria relevant to how the organization manages the execution of system procedures and detects and mitigates processing deviations, including logical and physical security deviations, to meet the objective(s) of the principle(s) addressed in the engagement.
  • Change Management – The criteria relevant to how the organization identifies the need for changes to the system, makes the changes following a controlled change management process, and prevents unauthorized changes from being made to meet the criteria for the principle(s) addressed in the engagement.

There are also some additional considerations related to Confidentiality specified in the Trust Services Principles and Criteria (TSP), but those are not required to be covered in the SOC 3 report.

Finally, there is the SOC 2 report.  The SOC 2 report uses the same TSP as the SOC 3 but with a twist.  The third party can select any or all of the seven domains to be assessed.  Think of it as a “cafeteria style” assessment.  With the SOC 2, the AICPA does not require that all domains be covered (as it does with the SOC 3); the assessed entity can select only those domains they wish audited.  As a result, a third party could select only the ‘Organization and Management’ domain to be assessed and nothing else in their SOC 2 report.  Therefore, just because you have a SOC 2 does not mean it covers the domains necessary for your PCI assessment.  Like the SOC 3, in addition to the seven domains, the SOC 2 can also cover none, any or all of the additional considerations documented in the TSP.

Within each of these SOC reports there is a Type I and a Type II report.  A Type I report is basically worthless from a reliance perspective because no testing of the controls is ever performed.  With a Type I report, the auditor is signing off on the fact that the third party has controls defined and formally documented.  But without testing, there really is no point to this report.  Yet every now and then, I encounter a Type I report that an organization has relied upon for years.

The only report worth anything is a Type II report which tests the control environment to ensure that the controls are functioning as designed.  So, when you get that SOC 2 report, you need to make sure you have a Type II report where testing has been performed by the auditor.  Even then though, the report might not be as useful as you might think.

I Have A SOC 2 Type II Report From A Service Provider

While you want to read the whole report in detail, when I am pressed for time and cannot read it in its entirety, here is where I focus so that I can get a quick view of what I have.  Some CPA firms provide a one-page Executive Summary that gives the reader a quick overview of the report, provides the timeframe the report covers, opinion, exceptions and other useful information.  But that is not required by the AICPA so you cannot always rely on such an overview being in every report you receive.  When they are available, they can help you focus your quick review efforts even better.

The first thing to do is to read the auditor’s opinion which should be the first section of the report.  It is in the form of a letter on the auditor’s letterhead and signed by the auditing firm.  The opinion the auditor provides will be either:

  • Unqualified – no material control weaknesses or failures were identified.
  • Qualified – some material control weaknesses or failures were identified.
  • Adverse – significant control weaknesses or failures were identified.

An unqualified opinion is what all organizations desire and what most reports document.  But do not be fooled by an unqualified opinion.  There still could have been control weaknesses or failures identified but they did not rise to the level of being considered “material”.  I have seen some unqualified reports with control weaknesses that I would have considered material as their auditor, so you might still want to contact the organization to get clarification on any weaknesses identified.

A report with a qualified opinion is not the end of the world, but that will all depend upon what control weaknesses or failures created the qualification.  Someone misusing their access can be minor compared to not performing backups of servers for months.  As a result, you need to read each control weakness to determine the criticality of the control failure as well as review management’s responses to how they addressed or will address the failure.  Again, you may find yourself contacting the organization to clarify weaknesses documented.

In my experience, reports with an adverse opinion never get issued to the public.  Management sees all of the control failures and weaknesses and then embarks on the long arduous task of cleaning up their control environment.

The next section to look at is the one labeled ‘Information Provided by Independent Service Auditor’ or similar.  This is the section that will contain the testing results and will define which of the domains were covered as well as the timeframe the report covers.  Most organizations issue SOC reports annually, so you always want to make sure that you have the most current report.  If the report’s coverage end date is approaching a year old or more, you should contact the third party and ask them when the next report will be issued.  They should inform you that the new report is in progress and give you an estimated date the report will be issued.  If they do not give you a succinct answer, I would be concerned.

You need to go through this section looking at a couple of things.  The first is to determine which of the domains were covered.  While documenting those domains, you also need to review the testing that was performed and at what level of detail those tests were conducted.  For example, it is not unusual to see tests for change control cover five random changes but not test those changes for having appropriate documentation, backout instructions and testing, only that the changes were approved.  At some point you will need to read this section carefully to determine what, if anything, will cover the testing required by the PCI DSS.  But a quick perusal will usually give you an idea of what you are likely going to get out of the SOC 2 for PCI compliance, if you are going to get anything at all.

This leads to the next section of the report you should read.  The last section of all SOC reports is usually titled ‘Supplemental Information Provided By [Organization Name]’.  This section contains information that was provided by the entity being audited but is not covered by the auditor’s opinion.  There can be all sorts of information presented here but the important point to remember is that the auditor did not test or assess the accuracy of that information.  So, you need to take any information provided in this section with a bit of skepticism.

It is in the Supplemental Information section that you want to look for a sub-section titled ‘Management’s Response to Control Exceptions’ or similar.  Even when an organization has an unqualified opinion, there can still be items listed in this section.  If there are items listed, you want to carefully read what those items were and how management addressed or corrected the condition.  If you find any control issues and responses that concern you, you should contact the entity and get those discussed so that you are comfortable with the situation.  If you cannot get comfortable with the situation, then you may want to consider additional controls at your end to compensate for the control weakness with the third party.

In the next post I will take you through a more thorough review of the SOC report.

18 Jun 16

iFrame Hack Reported

This week brought news of an inline frame (iFrame) payment solution that was hacked in the UK.  For all of you merchants that use an iFrame solution because you were told it reduced your PCI scope, you may want to rethink your security strategy.  For all of you hosting companies that offer these iFrame solutions because of their scope reduction value, you too may want to rethink your security strategy.

For those of us that are not Web developers, an iFrame is:

“An HTML document embedded inside another HTML document on a website. The iFrame HTML element is often used to insert content from another source, such as a payment page or advertisement, into a merchant’s Web page.”

For merchants using an iFrame for handling payments, the PCI DSS treats the merchant’s Web site as out of scope because the iFrame is managed by the payment provider, not the merchant.  Thus merchants using an iFrame or a redirect are allowed to fill out an SAQ A.  However, because of the increased risks to merchant Web sites using iFrames and redirects, the Council has updated SAQ A in response to those risks.

But there has always been a risk that iFrames and redirects could be manipulated.  The attack used in the article was fairly sophisticated in that it required a lot of knowledge about how that particular iFrame worked and then used a man in the middle (MITM) approach to intercept the invocation of the payment processor’s iFrame and insert their own iFrame.  Not easy, but definitely effective.

The easier approach is for an attacker to change the script/executable that invokes the iFrame/redirect so that it invokes a malicious iFrame/redirect instead.  A merchant would be alerted to such a change if critical file monitoring were required, but SAQ A does not require critical file monitoring.
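
Critical file monitoring for this case does not have to be a commercial product.  Here is a minimal, hypothetical sketch that baselines the page invoking the iFrame/redirect and alerts when it changes; the paths, the cron schedule and the mail command are all assumptions:

    #!/bin/sh
    # Detect changes to the page that invokes the payment iFrame/redirect.
    # Run regularly from cron; paths below are placeholders.
    PAGE=/var/www/html/payment.html
    BASELINE=/var/lib/fim/payment.sha256

    # Create the baseline once, after verifying the page is clean:
    #   sha256sum "$PAGE" > "$BASELINE"

    if ! sha256sum -c "$BASELINE" >/dev/null 2>&1; then
        echo "ALERT: $PAGE changed - possible iFrame/redirect tampering" \
            | mail -s "Payment page integrity alert" security@example.com
    fi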

This is why a lot of QSAs have told their clients that only fools believe the requirements in SAQ A will keep their Web sites secure.  At a minimum, merchants using iFrame/redirect solutions should implement critical file monitoring and logging and conduct quarterly vulnerability scanning so that they can secure their Web sites and alert on any changes or suspicious activity.

16 Apr 16

PCI DSS v3.2 Draft Released

On Friday, April 15, 2016, while a lot of you were probably getting your US income taxes done, the PCI SSC decided to release the draft of v3.2 of the PCI DSS.  I know the announcement message from the Council ended up in my company’s spam filter, so you may want to check there if you did not receive it.  Luckily, a colleague forwarded his copy along to me.  However, to get the draft PCI DSS v3.2 and the requisite change log, you need access to the PCI Portal.

These are some of the more notable changes in the new PCI DSS version.

  • The draft provides an official sunset date for v3.1 of the PCI DSS. Regardless of the date in April that v3.2 is released, v3.1 will be withdrawn on October 31, 2016.  So any assessments done after that date will need to comply with and use v3.2.
  • Two new sections to Appendix A have been added. In addition to the Appendix for shared hosting providers (now marked A.1), we get Appendices A.2 and A.3.  A.2 covers SSL and early TLS for those of you that will miss the June 30, 2016 date.  For those of you that thought 2018 was the deadline and missed the Webinar discussions about the SSL/early TLS deadline: while the deadline was extended to June 30, 2018, any organization missing the June 30, 2016 date must fill out Appendix A.2.  A.3 is where the Council added the designated entities supplemental validation (DESV) requirements.
  • There are a number of new requirements for service providers that are best practices until February 1, 2018. Those new requirements include: (1) maintain a documented description of the cryptographic architecture, (2) detect and report on failures of critical security control systems, (3) perform penetration testing on segmentation controls at least every six months, (4) executive management to establish responsibilities for the protection of cardholder data and a PCI DSS compliance program, and (5) perform reviews at least quarterly, to confirm personnel are following security policies and operational procedures.  I would bet that numbers three and five will likely create a lot of contention with service providers.  But you have until February 1, 2018 to get those in place.  However, if experience teaches us anything, service providers had better start now getting these new requirements in place and operating.
  • All organizations picked up the following new requirements that are best practices until February 1, 2018: (1) change control processes to include verification of PCI DSS requirements impacted by a change, and (2) multi-factor authentication for all personnel with non-console administrative access to the CDE. As with the aforementioned new requirements for service providers, these will also require a lot of organizations to get started now to ensure these new requirements are in place and operating.
  • The Council clarified requirement 8.1.5 to show that it is intended for all third parties with remote access, rather than only vendors. While most organizations understood the intent of this requirement, there were a few that played “legal eagle” and refused to require compliance for non-vendors.
  • Requirement 6.5 has been clarified that developers must go through secure coding training at least annually. This change will likely create some consternation for some organizations that are developing their own software that is in-scope for PCI compliance.
  • Clarified 11.5.a by removing “within the cardholder data environment” from the testing procedure for consistency with requirement, as the requirement may apply to critical systems located outside the designated CDE. This will likely expand the number of systems that require critical file monitoring.
  • Clarified 12.8.1 by saying that the list of service providers must now include a description of the service(s) provided.
  • Clarified 12.8.2 by adding guidance that service provider responsibility will depend on the particular service(s) being provided and the agreement between the two parties.
  • One of my pet peeves has finally been addressed. I have always had an issue with requirement 1.1.6 and the use of the terminology “insecure protocols”.  The reason is that in one way or another, all protocols have their insecurities whether they are known or not.  In v3.2, the Council has finally removed the “insecure” designation as, in their words, “these may change in accordance with industry standards.”  It is those small battles at times that make your day.

There are other clarifications and edits that have been made to the new version.

For all of us QSAs, we await the Reporting Template, which will detail the actual testing to be performed and allow us to assess the real impact on the effort required to conduct an assessment.  As a result, there could still be some surprises with this new version of the PCI DSS.  So stay tuned.

24 Nov 15

Information Supplements Versus The PCI DSS

At various times over the years, the Council has repeatedly told QSAs, Participating Organizations (PO) and anyone else that has asked questions about statements in the Information Supplements the following.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

So what, then, is the point of the Information Supplements?

Boy is that a good question. As a QSA, I often ask myself that very question after some of the inane conversations with clients and prospective clients regarding Information Supplements and their supposed “guidance”.

The first thing everyone should remember about Information Supplements is that they are developed and written by a committee at the suggestion of the Council, POs or as part of special interest work groups. These committees are made up of personnel from interested POs, QSAs, ISAs, vendors and anyone else willing to participate in their development. They are edited by a representative from the Council and reviewed by the Committee and are then submitted to all POs, QSAs and ISAs for review and comment. Similar in concept to the development and review of RFCs by the IETF.

The other key point about Information Supplements is that they are developed to give QSAs, ISAs and organizations ideas and guidance on how best to appropriately meet the requirements of the PCI DSS and the Reporting Template testing. Again, as the Council has repeatedly stated, the Information Supplements do not replace the explicit guidance and testing requirements in the PCI DSS and the Reporting Template. They are merely suggestions on an approach.

Yet time and again, QSAs and ISAs get these priceless documents tossed in our faces and are told we do not know what we are talking about. “The Information Supplement says …” is always put out there as the justification as to why an organization is doing something it should not be doing or as the rationale for why the organization is not in compliance with the PCI DSS. And we again are forced to explain that the Council never has said that an Information Supplement replaces the guidance and testing in the PCI DSS or the Reporting Template.

The first question anyone, and I do mean anyone, should ask about any statement in an Information Supplement is, “Does the PCI DSS and/or the Reporting Template explicitly say the same thing?” Those are the only two documents that matter and the only documents that your organization will be assessed against. If it is not explicitly called out in either of those documents, then it is not accurate and does not reflect the compliance requirements.

As an example, I was on a conference call recently regarding the Council’s Information Supplement on penetration testing. This supplement was issued in March 2015 and is possibly one of the most confusing and contradictory pieces of “guidance” we have ever encountered. In fact, it has created more confusion than it has clarified. In my very humble opinion, the Council would be better off taking it out of circulation because of all of the trouble it creates for QSAs, penetration testers, ASVs and clients. It is possibly one of the worst written of the Information Supplements and, while people both on the Committee that developed it and externally supplied the Council with numerous suggestions for changes, those changes were not incorporated into the document. Why those changes were not incorporated is anyone’s guess. But we in the PCI community ended up with possibly the most poorly expressed and misunderstood guidance available.

As usual, the client was arguing over the scope of their penetration testing. I get the fact that organizations want to minimize costs and scope as much as possible. However, when you listen to some security professionals’ arguments on this topic, you just wonder how they got to their positions, as they argue over not testing systems and devices that are painfully obvious to be in scope.

And, as is also usual, the first piece of confusion regarding scope is in Section 2, page 5, the first paragraph after the bullets, which states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

One would think that to any reasonably intelligent information security professional, the first part of the sentence, “It is not a requirement to test from within the CDE to the servers inside the CDE;” would be considered a pure line of garbage. Never mind that none of the recognized penetration testing methodologies ever suggest such an approach. But people arguing never consider that fact. Nope. The people arguing are so focused on cutting their PCI compliance bill that it does not matter that the statement is pure and unsupported garbage. It is considered the gospel truth. Otherwise, why would the Council allow such a statement? Good question. We have asked the Council that question and the answer back is? You guessed it.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

Again, never mind it is in no way supported by the guidance provided by the PCI DSS for requirement 11.3 which says:

“The intent of a penetration test is to simulate a real-world attack situation with a goal of identifying how far an attacker would be able to penetrate into an environment. This allows an entity to gain a better understanding of their potential exposure and develop a strategy to defend against attacks.”

But argue that point they do even when you point out that arguing this point is basically arguing that any attacker would stop at the perimeter of the CDE and would go no further.

Seriously? If you believe that fact, you must also believe in Santa Claus, the Easter Bunny, the Tooth Fairy and any other of the multitude of mythical fictional creatures. Or you are just lying to yourself and are in serious denial about your organization’s security posture. But argue on they do.

Then you pair that with the second part of that first sentence, which says, “… and testing exclusively from within the CDE perimeter will not satisfy the requirement.” That just adds to the out of scope argument.

As I point out when bitch slapped with this terrible writing, if you go back and carefully re-read the second part of the first sentence, what it points out is that penetration testing from only inside the CDE is not sufficient to meet the penetration testing requirements of the PCI DSS requirement 11.3. In no way does that sentence say or even further imply that the CDE is out of scope. It is actually saying that penetration testing should be done from within the CDE, but that penetration testing only inside the CDE does not meet 11.3. But people will still argue that the CDE is out of scope.

That the CDE is in scope is further supported by the definition of “critical systems” in section 2.2.1 of the document, which states that not only are systems within the CDE in scope, but so are those that are outside the CDE but could affect the security of the systems inside the CDE (i.e., what the Council and the Open PCI DSS Scoping Toolkit refer to as “connected to” systems). However, people arguing over scope rarely, if ever, tie these two sections together, and then argue that because they are in separate sections they cannot possibly be related, even though the entire document is about only one subject: penetration testing and the requirements in 11.3 of the PCI DSS.

So before you go off telling your QSA or ISA that the Information Supplement says something, think about what the Information Supplement says. Is the guidance from the Information Supplement even implied in the PCI DSS? Read the guidance in the PCI DSS and the testing procedures from the Reporting Template. If the PCI DSS or the Reporting Template do not explicitly have the same language in them that the Information Supplement has, then the Information Supplement is merely a suggestion.

And if the guidance from the Information Supplement does not make sense, pull your head out of your posterior and use some God given common sense. Ask your QSA or ISA to explain it before going off half-cocked and thinking that someone could actually think such things made sense.

But again, why would the Council allow such statements? Good question. We have asked the Council that question and the answer back is? You guessed it.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

Clear as mud? You bet.

But what did you expect? It is PCI.

For all of you in the United States, have a happy and safe Thanksgiving holiday.

25 Jul 15

Compensating Control Refresher

From time to time, organizations find themselves in the predicament of not being able to meet a PCI DSS requirement due to business or technical constraints. To address that situation, the PCI SSC has provided the compensating control worksheet (CCW) as a way to work around those requirements that cannot be met directly as stated in the PCI DSS. When the CCW was updated back in 2010 for v1.2, I wrote about those changes and how to write a CCW. However, here we are at v3.1, five years down the road and I still see a lot of poorly and improperly written CCWs. As a result, I think it is time to take people through a refresher on the CCW.

First and foremost, the writing of any CCW is your organization’s responsibility. Your QSA can provide input and guidance, but the origination of the CCW is up to the organization. Once developed, your QSA can review it and make suggestions to enhance and improve the CCW. Once that has been completed, you will then want your acquiring bank to review it to ensure that they will accept it as part of your self-assessment questionnaire (SAQ) or Report On Compliance (ROC) filing.

Secondly, the format of the CCW is dictated by the Council, and that format is provided in Appendix B of the SAQ D or in Appendix C of the ROC. Failure to use the proper format will create issues with your QSA, your bank and with the Council, particularly if you are doing a ROC. So please use the Council supplied format and do not develop something on your own.

Finally, the PCI SSC has stated that any requirement can have a CCW. In the past, the Council instructed QSAs and ISAs that requirement 3.2 [Do not store sensitive authentication data after authorization (even if encrypted). If sensitive authentication data is received, render all data unrecoverable upon completion of the authorization process] was not allowed to have a CCW. At the 2014 Community Meeting, the Council backed away from that restriction and said that any requirement can have a CCW with no restrictions. However, as a QSA I would have a serious problem accepting a CCW for requirement 3.2 because stopping the storage of sensitive authentication data (SAD) is the whole reason the PCI DSS was created.

To remind everyone, the CCW is broken into seven sections.

  • Identification of the PCI DSS requirement(s) being compensated.
  • The constraint or business justification for needing the CCW.
  • The original objective of the requirement(s) being compensated.
  • Identification of any additional risks because of the CCW.
  • The compensating controls.
  • The procedures your QSA/ISA followed to confirm that the compensating controls are in place and functioning.
  • The procedures followed by your organization to maintain the compensating controls.

While the Council tells everyone to have an individual compensating control for each requirement, there are some places where a compensating control is the same for a number of requirements. This most often occurs for requirements in section 8 around the various user management requirements, or for 6.1, 2.2 and 11.2 and the processes of vulnerability management. I would highly recommend using one CCW per requirement, but I can understand why you might combine some. Just be judicious in combining them. Also, list not only the number of the requirement(s), but also the text of the requirement from the Requirements column in the PCI DSS. While your QSA might have memorized the PCI DSS requirements, bankers and others that will read the CCW have typically not committed them to memory at that level of detail, and it will help them with the context of the CCW.

The business justification needs to be more than just “we don’t want to” or “it was too hard”. Believe it or not, I have had a lot of organizations provide just such simplistic and silly reasons for justifying a CCW. Proper justifications can involve budgetary constraints, timing (e.g., not enough time to complete remediation by the end of the assessment period), application requirements (e.g., the application requires XP to run) and/or vendor requirements (e.g., the vendor requires a hardware upgrade to correct the issue). If you do have a target date for addressing the CCW, this is where you want to provide that information so that readers know that the CCW has some time limit.

The original objective is the easiest part of the CCW to develop. The Council has provided the “Guidance” column in the PCI DSS for each requirement, and it is the verbiage in that Guidance column that you should use to explain the original objective of the requirement. If you are using the CCW for multiple requirements, this section can get rather lengthy, and I would recommend labeling each piece of Guidance information with its requirement to aid understanding.

The next section can sometimes be the toughest to develop, and that is the identification of any additional risks because you are using a CCW. In some cases, there may actually be no additional risk perceived by using a CCW. One such example is when organizations have a separate system management VLAN where network and system administrators can use telnet, SNMPv2 and other “unsecure” protocols in addition to SSH, RDP and other secure protocols to manage devices/systems. These system management VLANs typically require the use of an out of band (OOB) connection to gain access, administrator credentials different from the administrator’s user credentials and two factor authentication, to name just a few of the controls you see in this example. These management/administrative VLANs are no more risky than using only secure protocols.

However, if you are compensating for having to keep Windows XP running, that will likely be a very different story, and depending on the compensating controls put in place, the risk could be moderately higher than not having XP around. The key here is that the risk should be assessed and then honestly discussed in the CCW. If you think you are going to say that having XP does not increase risk to your cardholder data environment (CDE), I would seriously think again regardless of your compensating controls in place, because any outdated Windows implementation is a security problem waiting to happen regardless of how you think you have mitigated the risk.

The compensating controls section is where the rubber finally meets the road. It is here that you document each individual control that compensates for your organization’s inability to meet the requirement(s) in question. I recommend that people either bullet point or number list each individual control. The reason is that in the next two sections, you need to tie the validation and maintenance items to the controls in this section and doing some sort of list makes it easy for people to ensure they have covered all controls in each section.

The most common mistake made in this section is organizations stating that they have a project to remediate the issue(s). Sorry, but that is NOT a control. It is nice information, but it is not a control that can be relied upon. QSAs never want to see statements about future projects in this section. This section is all about what you are doing from a controls perspective to manage the fact that you cannot meet the requirement(s).

Valid controls in this section must also go “above and beyond” what is required by the PCI DSS. Examples of “above and beyond” include:

  • Reviewing log data in real time for a particular condition that would indicate an out of compliance condition on a control (see the sketch after this list). This is above and beyond because log data only needs to be reviewed daily for such conditions.
  • Using whitelisting to identify applications that do not belong on a PC and generating an alert in real time if such applications are found. Such technology is above and beyond because it is not currently required by the PCI DSS.
  • Using critical file monitoring to identify rogue applications that do not belong on a PC and generating alerts in real time if found. Critical file monitoring is a PCI requirement, but this goes above and beyond because monitoring is only required on a weekly basis.
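
To make the first bullet concrete, here is a minimal sketch of such a real-time watcher, assuming a Unix-like system. The log path, the matched string and the alert address are all hypothetical placeholders; in practice this would more likely be a correlation rule in your SIEM, but the idea is the same.

  #!/bin/sh
  # Minimal sketch: watch a log in real time for a condition indicating
  # a compensating control has failed. Path, pattern and address are
  # hypothetical placeholders.
  tail -F /var/log/payment-app/audit.log \
    | grep --line-buffered "INSECURE PROTOCOL LOGIN" \
    | while read -r line; do
        # Alert immediately instead of waiting for the daily log review.
        echo "$line" | mail -s "CCW control failure alert" secanalyst@example.com
      done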

The list here can go on and on, but hopefully I have given you some ideas of how to create compensating controls that can actually compensate for your inability to comply with the requirement(s).

One key point though: you cannot use a requirement in the same requirement group to compensate for a different requirement in that group. For example, requirement 6.4 has bunches of sub-requirements under it. You cannot write a compensating control for one sub-requirement in 6.4 and then use a different sub-requirement under 6.4 as one of your compensating controls, regardless of whether it is above and beyond.

The next section will list how the controls were assessed by your QSA/ISA to prove they have been implemented. So using our previous bullet list, here is what the control validation bullets would look like.

  • Observed the security information and event management (SIEM) solution and verified that alerts are generated in near real time for [control failure condition] and that each alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.
  • Observed the [whitelisting solution name] and verified that if rogue applications are loaded on a workstation a near real time alert is generated back to the [whitelisting solution name] master console and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.
  • Observed the [critical file monitoring solution name] and verified that if rogue applications are loaded on a workstation a near real time alert is generated back to the [critical file monitoring solution name] master console and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.

Finally, you need to document what your organization will do to ensure that the controls remain implemented and effective. This is where most compensating controls fall apart. The organization gets through their assessment and then neglects to keep the compensating controls working. Using our list from the compensating controls section, the maintenance controls would look something like this.

  • [Organization name] reviews the SIEM on a [weekly/monthly/quarterly] basis and tests that the alerts for the [control failure condition] are still functioning as designed.
  • [Organization name] reviews the [whitelisting solution name] on a [weekly/monthly/quarterly] basis and tests that the alerts for rogue applications are still functioning as designed.
  • [Organization name] reviews the [critical file monitoring solution name] on a [weekly/monthly/quarterly] basis and tests that the alerts for rogue applications are still functioning as designed.

A good idea in the maintenance section is to set timeframes for remediating any control testing failures.
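
To keep those periodic reviews from slipping, schedule them like any other operational task. A minimal sketch as crontab entries, assuming quarterly reviews; the script names are hypothetical, and each script would inject a synthetic failure (e.g., a benign rogue file) and confirm the corresponding alert actually fires.

  # Hypothetical quarterly maintenance tests of the compensating controls.
  # min hour dom month dow  command
  0 6 1 */3 *  /usr/local/bin/test-siem-alert.sh       # synthetic control failure event
  0 6 1 */3 *  /usr/local/bin/test-whitelist-alert.sh  # benign rogue application drop
  0 6 1 */3 *  /usr/local/bin/test-fim-alert.sh        # benign critical file change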

One other important note about the controls, validation and maintenance lists: notice that there are no "forward looking" statements, such as someone "will" perform or "will" review something. CCWs must be shown to be in place and operating; a promise to implement a control is NOT a control either. That is an important point a lot of organizations miss. It means a CCW cannot be created at the last minute, with controls that only become operational after your SAQ or ROC is filed. If you are going to use a CCW, you need to identify the situation early and get the compensating controls implemented, validated and through at least one maintenance cycle before the CCW can be accepted.

CCWs can buy organizations time while they address issues that will take longer to remediate than their PCI assessment period allows. Unfortunately, some organizations see the CCW as a way to be judged PCI compliant without addressing their serious security shortcomings. It is not unusual for large organizations to have a number of CCWs, particularly if they have legacy applications and hardware. However, I would highly recommend that organizations rely on CCWs only when there is no other option for achieving PCI compliance.

03
Apr
15

PCI Issues Penetration Testing Information Supplement

On Thursday, March 26, the PCI SSC issued their latest information supplement. This one is focused on the topic of penetration testing. Not surprisingly, I have a number of comments based on what I have read. There are some good things that have come out of this effort. But in my opinion, the committee missed the boat in a few areas and those will lead to even more confusion between penetration testers, QSAs and clients.

Scoping

Talk about making a mess of a mess, I believe the writers accomplished that amazingly well in their discussion on scoping. This is probably the area that creates the most confusion and will likely create the most problems.

The first bullet on page 5 is a prime example. It took me a number of readings just to get what I think they are trying to communicate.

“The scope of the internal penetration test is the internal perimeter of the CDE from the perspective of any out-of-scope LAN segment that has access to a unique type of attack on the CDE perimeter. Critical systems or those systems that may impact the security of the CDE should also be included in the scope. Testing must include both application-layer and network-layer assessments.”

Out of scope LAN segments are out of scope because there are no rules that allow them access to the CDE. I agree that you need to test that these LAN segments are truly out of scope, but if there is a unique type of attack available from such a LAN segment into the CDE, that would seem to imply that the LAN segment was never out of scope in the first place. Confusing, and likely to cause a lot of pen testers to perform unnecessary testing, fishing for bizarre issues on out of scope network segments.

The next sentence paired with the first seems to just add to the confusion because it implies again that the out of scope LANs are not out of scope. These should have been two separate bullet points in my opinion.

In the end, what I think the writers are saying is that all LANs and VLANs, even those deemed out of scope, need to be confirmed to be out of scope. However, the way they have said it, in my opinion, leads to a lot of confusion.

If the last one did not cause your head to spin around, this one probably will. It is the third bullet at the top of page 5.

“To be considered out of scope for PCI DSS, a system component must be isolated (segmented) from the CDE, such that even if the out-of-scope system component was compromised it could not impact the security of the CDE. Therefore, the penetration test may include systems not directly related to the processing, transmission or storage of cardholder data to ensure these assets, if compromised, could not impact the security of the CDE.”

Correct me if I’m wrong, but this seems to imply that everything is in scope for penetration testing. I know the writers of this supplement have used the word ‘MAY’, but we also know how ‘MAY’ tends to be interpreted by the Council (i.e., MUST) and how it will be interpreted by a lot of our clients (i.e., do not do it). Until a few penetration testing reports are reviewed by the Council, the jury will be out on the correct approach. However, given the theme created by these bullets, I am guessing that the committee could not agree on how to express the scope of the penetration test.

But if those bullets were not confusing enough on scoping, the writers add insult to injury. After the bulleted list at the top of page 5, the first paragraph states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

An internal pen test does not require testing from within the cardholder data environment (CDE)? This sounds like the “Three Hop Rule” all over again. And then the second sentence twists things further in an attempt to justify the first.

All this diatribe does is further obfuscate scoping. What should simply have been said is that everything in scope for PCI compliance is to be penetration tested. Instead, we got a lot of words that, in my very humble opinion, do nothing to clarify scoping.

PA-DSS

On page 6 under 2.3.2 PA-DSS Compliant Applications we get the following guidance.

“If a payment application has been PA-DSS validated, the application’s functionality does not need to be tested as part of the entity’s PCI DSS compliance validation. However, the implementation of the application does need to be tested. This includes both the operating system and any exposed services, but not the payment application’s functionality (e.g., authentication, key management, transaction processing, etc.) since this was validated as part of the PA-DSS application validation.”

Under the PCI DSS, even PA-DSS validated applications must still be tested to prove that they were implemented per the vendor’s PA-DSS implementation guide. Yet for penetration testing, a PA-DSS validated application’s functionality gets a blanket exemption?

I hope the acquiring banks are geared up for the fights this is going to cause between the penetration testers and their clients. For example, in the restaurant and hospitality industries, I can see the situation where penetration testing only occurs on firewalls, routers and switches because everything else is PA-DSS validated.

I do not believe this is what the Council wanted, but that is what this supplement is stating. If we are truly talking about security of our networks and systems, this is not how you achieve it.

Social Engineering

I was pleasantly surprised that this was included in the supplement. However, as I read through the section, I grew more concerned that it had even been included.

For section 2.5 Social Engineering on page 7, we encounter the following.

“Social engineering is the attempt to gain information, access, or introduce unauthorized software into the environment through the manipulation of end users. PCI DSS v3.0 reconfirms testing by requiring industry accepted penetration testing approaches (many of which include social engineering as part of their approach) and to have an approach to penetration testing that “considers the threats and vulnerabilities experienced by merchants in the last 12 months.” This may include social-engineering attacks as a method used for introducing malware into the environment.”

This makes social engineering testing legitimate if the client has been attacked by such methods in the past year. But does this include physical as well as logical social engineering? Since it is not explicitly called out, I would assume so, but it should have been defined. Even so, this is a great thing to see in this document.

On page 8, the last paragraph of the discussion on social engineering states:

“Social-engineering testing may not be appropriate or provide a meaningful result for all organizations. Although social-engineering testing is not a requirement of PCI DSS, an organization may consider documenting the reason(s) for foregoing social-engineering testing and include applicable documentation with the internal and external penetration test reports, particularly if social-engineering attacks were encountered in the last 12 months.”

What started with such promise ends on a very sour note. This last paragraph gives organizations an out. So I guess we can look forward to a lot of organizations issuing inventive explanations to avoid social engineering testing.

Some Miscellaneous Issues

On page 2, in the seventh bullet of the Terminology section, we have the following.

“National Vulnerability Database (NVD): The U.S. government repository of standards based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance (e.g., FISMA).”

I know they just grabbed the description from the National Vulnerability Database (NVD) Web site, but that does not mean that it is appropriate for use when discussing penetration testing. The Common Vulnerabilities and Exposures (CVE) database maintained by Mitre Corporation would have been the better reference since the NVD uses the CVE as the catalog of vulnerabilities.

I have no idea how the Federal Information Security Management Act (FISMA) ties into the NVD. If they had truly pulled the content exactly from the NVD Web site, they would have grabbed the fact that it ties to the Security Content Automation Protocol (SCAP), not FISMA.

My last comment on this topic regards the claim that the NVD (or the CVE) enables automation of vulnerability management. Had they brought in the reference to SCAP, I might have had an easier time going along with that claim. The NVD and CVE do not contain actual exploits; they are merely catalogs of vulnerabilities, and the catalog is the value. Calling either one an enabler of automated vulnerability management is a stretch: they are a foundation on which automation can be built, but they do not in and of themselves automate anything.
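
That said, the catalog is certainly consumable programmatically, which is what makes it a foundation. As a minimal sketch, here is how you could pull the cataloged vulnerabilities for a product from the NVD’s current REST API (which postdates this supplement; at the time, the NVD published downloadable data feeds instead). The CPE name is illustrative, and mapping the results to your own assets is still up to you.

  # Sketch: pull cataloged CVEs for a product from the NVD REST API.
  # The catalog alone automates nothing - correlating the results with
  # your environment is still your job.
  curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?cpeName=cpe:2.3:o:microsoft:windows_xp:-:*:*:*:*:*:*:*" \
    | python -m json.tool | head -40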

Under 2.3.3 Web Applications on page 6, we have the following.

“It is common for an environment to host a web application that was not specifically coded for the organization such as commercial, off-the-shelf web-mail interfaces, document-sharing tools, file-transfer services, network-device administrative interfaces, etc. In these instances, the web application does not typically need an application-layer penetration test as the entity is not responsible for the source code of this type of software. Instead, the tester should perform a network-layer test and ensure the software was implemented, configured, and is currently being maintained in a secure manner (disabling or uninstalling unused services, blocking unused ports, applying current updates, etc.).”

I can see it now: organizations arguing that IBM WebSphere, Oracle eCommerce or similar Web application frameworks are exempt from application-layer testing because they are a “package”. These are frameworks on which you then write your own application, but penetration testers and their clients will get into arguments over that distinction.
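
For reference, the network-layer test the supplement describes for such hosts is straightforward. A minimal sketch follows; the target address is a placeholder, and the results would be compared against the vendor’s hardening guidance and your documented configuration standards.

  # Sketch of a network-layer test of a host running an off-the-shelf
  # web application: enumerate exposed services and their versions on
  # all ports. Target address is a placeholder.
  nmap -sV -p- 10.10.20.30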

A Few Good Things

Yes, I have complaints, but there are some good things that are contained in this supplement.

Probably the best thing about this information supplement is the checklist at the end of the document for evaluating penetration testing reports. The Council should have a similar item for vulnerability scanning reports as well.

The discussion of the penetration testing methodology in section 4 of the document is a good read for both penetration testers and the people who contract for penetration testing services. But keep in mind it is a framework, not the entire rule/procedure set for conducting a proper penetration test. Yes, there are some good details in areas, but it is not the detailed roadmap that some people will portray it as.

Section 3 has a good discussion on how to evaluate the qualifications of a penetration tester. It was nice to see that, while they document the common certifications a penetration tester may hold, they caution readers that certifications alone do not make a penetration tester qualified.

The case studies in section 6 probably do more to clarify than the previous five sections. They are good examples of properly conducted penetration tests.

Finally, there is a good discussion on the definition of a “significant change”. However, I think this topic, along with the words “periodic” and “periodically”, belongs in a discussion of risk management and assessment rather than here. I wrote a post on this subject a while back to provide some guidance, and it was nice to see that my thoughts jibe with what they wrote in the supplement.

What Was Missed

The biggest item I think that was missed was how to handle the penetration testing of huge environments. Case study 6.3 discusses the concept of sampling somewhat, but it deals with a retailer with only six locations, not thousands of locations. There are also veiled references to sampling in section 2.3.4 in discussing a separate testing environment. But there is no deliberate and complete discussion on this very important topic.

Retailers such as Wal-Mart, Target and Home Depot have more than a thousand locations with 15+ checkout lanes each, meaning there are, at a minimum, 15,000 devices out in the field that technically need to be penetration tested. How does an organization penetration test such a large number of devices annually? The answer is that they do not test them all; it is physically impossible. However, this information supplement does not address that situation or provide guidance as to what the Council deems acceptable.

While I use some of the largest retailers in the world as my example, mid-sized retailers and hospitality organizations face this same issue. I had hoped these situations would be explicitly addressed in greater detail, but they were not.

The writers also missed a huge topic regarding social engineering testing. Unlike vulnerability scanning and penetration testing, in my experience it is virtually impossible for any organization to “pass” social engineering testing. So what would be considered “passing” in a social engineering test? That is not discussed, but should have been. Since this document broaches the subject, it is probably a good topic to propose as a SIG in this year’s SIG recommendations so that social engineering gets covered in its own information supplement rather than as a sidebar to penetration testing.

It is not that this information supplement is an entirely bad document. But the advice it provides needs to be taken with a bit of caution. As usual, the shortcomings will be leveraged by all of the “check boxers” to do as little as possible so that you get that box checked for 11.3.

01
Jan
15

The Three Hop Rule

At the 2014 Community Meeting, the PCI SSC responded to a question about network segmentation with what has come to be termed the “Three Hop Rule”.  The statement was made that if a device/system was “three hops or more” away from the cardholder data environment (CDE), then it was out of scope.  A lot of us in the room were taken aback by this statement.  And based on some questions of late regarding this subject, there is a lot of confusion out there regarding what the Council was trying to say.

First, the term “hop” is not a network security term nor does it even have any security implications.  The term “hop” is defined as:

“Data packets pass through routers and gateways on the way.  Each time packets are passed to the next device, a hop occurs.”

The count of three therefore is the number of hops or “hop count” between devices.  Hop count is defined as:

“Each router along the data path constitutes a hop, as the data is moved from one Layer 3 network to another.  Hop count is therefore a basic measurement of distance in a network.”
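
To make the definition concrete, you can count the hops between a workstation and a host in the CDE yourself; the target address below is a placeholder.

  # Each numbered router in the output is one hop on the path to the target.
  # -n skips DNS resolution so you see raw addresses.
  traceroute -n 10.10.10.20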

Nowhere in these definitions is there any statement about hops, the number of hops between devices and any correlation of hops and hop count as some form of security.  Hence why a lot of us were really concerned about this statement and likely why there is so much confusion and discussion resulting from the comment.

What we believe the Council was getting at was the number of network segments there are between a device/system and the CDE.  However, having three network layers between the CDE and devices/systems is also no guarantee of security.

What provides security at Layer 3 are the access control lists (ACLs) or rules that allow or deny packets to traverse particular paths of the network.  ACLs can be implemented to control which devices, ports and services can communicate between various networks.  But even with ACLs implemented at each hop, the number of hops between devices is still no guarantee that the devices are secure.
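
For illustration, here is a Cisco-style extended ACL of the sort a QSA would expect to review at each of those Layer 3 boundaries. The addresses, port and name are assumptions; the point is that the narrowness of the permit and the trailing deny, not the hop count, provide the security.

  ! Hypothetical example: permit only a documented SQL Server connection
  ! from a Connected To segment into the CDE and deny (and log) everything else.
  ip access-list extended CONNECTED-TO-CDE
   permit tcp 10.20.0.0 0.0.255.255 host 10.10.10.20 eq 1433
   deny   ip any any log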

This is why requirement 1 of the PCI DSS requires that the QSA review all relevant ACLs to ensure that the network is truly segmented.  It is also why, in v3, requirement 11.3 requires that penetration testing prove that the network is truly segmented.  As a result, the number of hops between the CDE and a device should never be considered a guarantee that the device is out of scope.

The bottom line is that, in order to be truly out of scope, there needs to be ZERO hops between a device and the CDE.



