Archive for the 'PCI DSS' Category

09
Jan
22

Penetration Testing – Yes, It Is Still Misunderstood

Remind me again how far down the road we are with the current practices of information security? Has it really been almost three decades?

Yes, it really has been about that long.  Yet, it continues to fascinate me how those practices are so misunderstood in the information security community, particularly when a significant portion of that community holds one or more certifications in the topic.  In my very humble opinion, we should not still be having such basic conversations, and yet these topics keep coming back up.

One of those practices that is heavily misunderstood is penetration testing.  I have written numerous posts on the subject and yet the understanding of penetration testing continues to be a challenge.  This is particularly true when it comes to PCI compliance.  As a result, I decided I would share some important points about PCI compliance and penetration testing as required under requirement 11.3.

A lot of these misunderstandings are clarified in the PCI SSC’s Information Supplement on Penetration Testing.  If you have not read this document, I highly recommend it as it explains a lot about the process and why it is important.

Yet, sadly, it too has some flaws that have created more problems than the document solved.  The biggest is on page 7, where it discusses the scope of a penetration test.  The only thing those of us in the profession can figure is that the group that developed the information supplement assumed that the cardholder data environment (CDE) is implicitly secured by the security measures taken because of the PCI DSS requirements, and therefore the actual CDE is not required to be tested.  How any information security professional could think this is a good practice is beyond a lot of us, but the information supplement only says that the CDE MAY BE tested.

But then on page 8, comes this quote.

“When access to the CDE is obtained as a result of the testing, the scope of the penetration test may allow the tester to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

“May allow” further exploration?  Wait!  What?  Since when does an attacker stop looking around?  Are you kidding us?

In my very humble opinion (and the opinion of a LOT of others in the profession), everything that is in scope for PCI compliance needs to be penetration tested.  No exceptions!  But every QSA and penetration tester that takes this approach has received push back (sometimes significant) from clients, and then we usually back off and do exactly as the information supplement states.  I would like to tell you that it ends well, but I have had numerous clients come back later (sometimes years later) and complain (some very loudly) that I should have stuck to my guns and tested everything, as they hold me responsible for why they got hacked.

The next big misunderstanding is what is considered a “passing” or “clean” penetration test.  Section 4.1.6 on page 16 of the Penetration Testing information supplement discusses what constitutes a successful penetration test.

“Defining the success criteria for the penetration test allows the entity to set limits on the depth of the penetration test. Without agreeing upon the point at which the penetration test is complete, there is a possibility of the tester exceeding the boundaries and expectations of the target entity. This should be documented in the rules of engagement.”

Sorry, but this is not an excuse for the assessed entity to avoid penetration testing by setting the scope of the test too small.  It is up to the client and the penetration tester to agree on the scope based on network and dataflow diagrams.  Going back to my earlier statement, a penetration test should test every device/system that is in scope for PCI compliance.  When I say everything, that does not mean that every “Connected To” system needs to be penetration tested, such as Domain Controllers or other devices where the penetration tester can confirm that standardized configuration practices are in place.  It also does not mean that containers or servers spawned to increase capacity in a virtual or cloud environment need to be individually tested, as long as the penetration tester can document that all instances are the same.

Another misunderstanding, when exploitable issues are found, is the belief that only critical, high or severe exploits need to be addressed in order to get a “passing” test.  Sorry to rain on your parade, but penetration testing is not like vulnerability testing, where critical, high or severe vulnerabilities need to be fixed within 30 days and the others can be addressed within 90 days.  An exploit found by a penetration test MUST BE remediated or mitigated and then retested to get a “passing” test.  All someone needs to do is read the PCI DSS requirements under 11.3 and see that nowhere in those requirements is there ever a reference to critical, high or severe exploits, nor any other remediation criteria.  ALL documented exploits, regardless of criticality, must either be remediated or mitigated and then retested to get a “passing” test.

A key point about retesting is that the penetration tester does not have to conduct a complete penetration test as a retest.  The penetration tester only needs to retest those exploits that were found in the original testing exercise.

Another critical point is when an exploit is mitigated and not remediated.  When mitigated, that means that the organization is relying on a variety of controls to detect and alert on the exploit being used.  When testing an exploit that is being mitigated, the penetration tester needs to have a clear understanding of those mitigating controls so that when they review their testing results they can attest to the fact that all of the mitigating controls functioned to identify, alert and then allow personnel to stop an attack.  If those mitigating controls cannot be confirmed then the mitigation is not considered successful.

Another point of confusion regarding penetration testing is network segmentation testing.  Why segmentation testing is bundled with penetration testing is unknown, but it has always been that way.  A lot of us would prefer it were a separate requirement in section 11 since, to the surprise of most, it does not require a penetration tester to conduct this testing.  The person conducting the segmentation testing only needs to be deemed “qualified” to conduct the testing.

People are also typically surprised that segmentation testing does not require anything more than Nmap installed on a laptop.  The key though to a successful segmentation test is that every segment deemed “out of scope” must be tested to ensure that it has no direct communication capability with systems contained in the CDE – inbound to the CDE OR outbound from the CDE.  This testing requires having Nmap test all 65,535 TCP and UDP ports to/from every network segment, which takes time – a lot of time, depending on the number of active IP addresses in the network segment.
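Since the full-port sweep is the part people most often get wrong, here is a minimal sketch of generating the Nmap command for each out-of-scope segment.  The segment list is a made-up example; substitute your own inventory.

```python
# Build the full TCP/UDP port-sweep Nmap command for each
# out-of-scope segment. The segment list is hypothetical.
OUT_OF_SCOPE_SEGMENTS = ["10.20.0.0/24", "10.21.0.0/24"]

def nmap_command(segment: str) -> str:
    # -p- scans all 65,535 ports; -sT (TCP connect) plus -sU (UDP)
    # covers both protocols. UDP scanning requires root privileges.
    return f"nmap -p- -sT -sU {segment}"

for segment in OUT_OF_SCOPE_SEGMENTS:
    print(nmap_command(segment))
```

Run each generated command from a scanner connected to the CDE; any open port found is a “leak” to track down.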

For large organizations with tens of thousands of network segments, the idea of testing EVERY segment that is out of scope against the CDE is not realistic.  There are two options to address this situation.  The first option is to use a tool such as AlgoSec, FireMon, Tufin or the like.  Most large organizations have such a tool installed.  Using the tool’s database, queries can be run against the CDE and all segments to find any “holes” in those rules.  To use this approach, though, the entire network needs to be in the tool’s database, because you are going to test ALL segments.  Sadly, most large organizations do NOT have all their network segments in the tool, so it is not usable for segmentation testing.

The second option requires an analysis of the firewall and routing rules to determine if there are ways to “sample” network segments that are covered by the same rules in the firewalls and routers.  If the segmentation tester can document that, say, one thousand network segments are all governed by the same set of firewall/router rules, then by testing five randomly selected segments of those thousand and getting the same results, those results can be extended to the remaining 995 segments.  But the key to this second option is making sure that the rules are exactly the same for all one thousand segments, which is sometimes not as easy as it sounds.
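The sampling approach can be sketched as follows.  The segments and rule strings are hypothetical, and the sketch assumes you have already extracted the effective firewall/router rules for each segment:

```python
import random
from collections import defaultdict

# Hypothetical mapping of network segments to the effective
# firewall/router rules that govern them.
segment_rules = {
    "10.1.0.0/24": frozenset({"deny ip any cde", "permit udp any dns"}),
    "10.2.0.0/24": frozenset({"deny ip any cde", "permit udp any dns"}),
    "10.3.0.0/24": frozenset({"deny ip any cde", "permit udp any ntp"}),
}

# Group segments that share an IDENTICAL rule set; sample results
# extend only within a group, never across groups.
groups = defaultdict(list)
for segment, rules in segment_rules.items():
    groups[rules].append(segment)

SAMPLE_SIZE = 5
for rules, segments in groups.items():
    sample = random.sample(segments, min(SAMPLE_SIZE, len(segments)))
    print(f"Group of {len(segments)} segments -> test {sorted(sample)}")
```

The frozenset comparison is the whole point: one differing rule puts a segment in its own group, so it must be tested on its own.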

Hopefully these clarifications will assist you in conducting and evaluating penetration testing.

02
Jan
22

PCI DSS v4

I wrote this for the blog of my new employer, Truvantis, so I figured why not point you to it here. Just my thoughts on the subject all of you are concerned about: what is in the upcoming release of the PCI DSS?

https://www.truvantis.com/blog/pci-dss-v4-2022-how-to-be-prepared

Enjoy!

19
Dec
21

Updated PAN Truncation FAQ

As part of the holiday giving tradition, the PCI SSC has given us an updated FAQ (#1091) on the subject of PAN truncation and it will likely go down as the most confusing FAQ ever.

The FAQ starts out simple enough with the statement:

“A maximum of the first 6 and last 4 digits of the PAN is the starting baseline for entities to retain after truncation, considering the business needs and purposes for which the PAN is used.”

But it is the table that follows that gets messy.

It seems that each of the card brands has their own take on PAN truncation based on PAN length and other factors. Only American Express has stayed the course.

Based on the guidance for UnionPay, Visa, Mastercard, JCB and Discover, the idea of first six/eight and ANY OTHER four is a bit bizarre not to mention risky.

Never mind the obvious warning note at the end of the FAQ that states:

“Access to different truncation formats of the same PAN greatly increases the ability to reconstruct full PAN, and the security value provided by an individual truncated PAN is significantly reduced. If the same PAN is truncated using more than one truncation format (for example, different truncation formats are used on different systems), additional controls should be in place to ensure that the truncated versions cannot be correlated to reconstruct additional digits of the original PAN.”

Personally, I would stick with the good old first six, last four and avoid any of these other formats as you are likely setting yourself up for problems and PCI non-compliance.
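As a simple illustration of that good old first six/last four format, here is a sketch (the function name is mine, and the sample PAN is a well-known fake test number):

```python
def truncate_pan(pan: str, first: int = 6, last: int = 4) -> str:
    # Keep the first six and last four digits and mask the rest,
    # the traditional truncation format recommended above.
    if len(pan) <= first + last:
        raise ValueError("PAN too short to truncate safely")
    return pan[:first] + "X" * (len(pan) - first - last) + pan[-last:]

# 4111111111111111 is a fake test PAN, used here for illustration.
print(truncate_pan("4111111111111111"))  # 411111XXXXXX1111
```

Note the warning from the FAQ still applies: storing the same PAN in more than one truncation format can allow digits to be correlated and reconstructed.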

Happy holidays to all!

24
Oct
21

Remote PCI Assessment Guidance Issued

At the end of September 2021, the PCI Council released a Guidelines and Procedures document on conducting Remote Assessments for PCI and card brand assessments.  Most of this document is a rehash of previous Council statements and guidance.  However, there is one new element in this document that all QSAs will need to read and comply with and that is the requirement of documenting a feasibility analysis to justify conducting a remote assessment.

Some of the examples the Council gives as valid reasons that an on-site assessment may not be feasible includes:

  • Restrictions on the ability to travel or meet in person due to health and safety concerns or government advisories.  We are all familiar with the COVID-19 pandemic and its impact on travel, particularly international travel.  However, I encountered this a while back due to a volcanic eruption in Iceland that cancelled my trip to Europe.  Since we had no way of knowing how long the eruption would cause travel disruptions and we were on a tight timeline, we conducted video conferences rather than travel.
  • Geographic locations that are physically inaccessible or difficult to reach.  I personally ran into this situation once several years ago when a data center in Europe that was supposed to be decommissioned before the next assessment remained operational.  The company I worked for had shut down their EU operations and there was no way to justify 16 hours of flight time for a two-hour data center walk through.  We held meetings with the data center operator via video conference and did a virtual walk through.
  • Testing required at a location is limited to documentation and interviews and no observations of processes, systems or physical environment apply.
  • The entity operates a virtual environment without physical premises or facilities.  This has become more and more common with entities that operate in The Cloud.  Why rent expensive office space when there is no need for it?  This situation only got more prevalent with the pandemic and will likely only increase in the future.

As the Council states in their guidance,

“For many assessments, a combination of onsite and remote testing may provide a suitable balance, as it allows for increased efficiencies in the assessment process while enabling an appropriate level of assurance to be achieved in the assessment result.  For example, documentation reviews can often be performed remotely without significant loss of assurance, whereas observations of processes and environmental characteristics will generally require an onsite review.”

Regardless of whether the assessment fits into one of the bullets above, the Council wants QSAs to formally document their analyses of why the onsite assessment cannot be performed and the risks that may present to meeting the assessment objectives.  This analysis needs to be completed prior to starting any testing and is supposed to be a joint effort between the assessor and the client.

Topics that the Council recommends be addressed include, but are not limited to:

  • Confidentiality, security, and data protection requirements.
  • Availability and effectiveness of the remote assessment technologies.
  • Effects on entity’s personnel.
  • Effects on operation support.
  • Assessment scope and completeness.
  • Quality and reliability of digital evidence.

The Council further states:

“During the analysis, the entity and assessor should identify any challenges and potential risks associated with the remote testing and determine whether it is feasible for testing to be thoroughly completed to produce a high level of confidence in the assessment results.

The results of the feasibility analysis—including the risks and challenges associated with use of the remote testing methods, and any mitigating controls for overcoming the risks and challenges—should be documented and agreed upon by both the entity and assessor. A copy of the feasibility analysis results should be included with the applicable ROC/ROV. Entities and assessors may be required to produce the analysis upon request by the PCI SSC or applicable compliance-accepting entity.”

The key points from that statement above are that: (1) the feasibility analysis needs to be submitted with the ROC/ROV and, (2) if requested by the PCI SSC or compliance accepting entity (i.e., Brand or bank), the QSA is required to produce the analysis.  As a result, this is a non-optional exercise.

The feasibility analysis must document which of the following applies:

  • The assessment is feasible to be fully completed at this time using onsite methods, remote methods, or a combination of onsite and remote methods.
  • The assessment is only feasible to be partially completed at this time.
  • The assessment is not feasible currently.

According to the guidance, it is only those assessments that are completely feasible that can be conducted.

The Council includes a very important note regarding the analyses.

“The feasibility analysis determines whether the use of remote testing methods is feasible for a particular assessment.  Determining that a remote testing method is feasible does not guarantee that use of the testing method will produce the level of assurance needed for the assessor to reach a finding; this will depend on how the remote testing method is implemented and used, whether the testing can be completed for all applicable components and areas, and whether sufficient evidence is provided for the assessor to make a determination.  Assessors and entities should continue to monitor and evaluate the effectiveness of the remote testing methods throughout the assessment to confirm whether the testing methods are performing as intended and whether additional testing may be needed.”

This concept of “assurance” appears to be all in the eye of the beholder.  Meaning, if the Council, Brands or Banks determine, in their opinion, that the remote methods are not providing appropriate levels of assurance, the ROC/ROV can be rejected.  Not that a lot of banks are going to reject ROCs/ROVs on this, but I can see the Council’s AQM reviews and the Card Brands rejecting ROCs/ROVs over analyses that they deem flawed or incomplete.  The AQM process is the most concerning because a QSAC could end up in remediation due to a failure to appropriately document the remote assessment feasibility.

As with most edicts issued by the Council, they should have produced a form for this feasibility analysis so that everyone understands what is required.  Can the feasibility analysis be documented in section 1.2 of the reporting template, or is a separate document required?  I would recommend the former for the obvious remote assessments due to COVID or entities operating entirely in The Cloud, and a separate document for feasibility analyses that require longer discussion.

Sadly, I foresee a lot of confusion and heartache in the QSAC community as we move through this new requirement.  That is because I see a lot of assessments that are blocked due to COVID travel restrictions, or where the assessed entity has no physical offices, being rejected for “flawed” feasibility analyses when they should just be allowed with no further documentation or discussion.

It will take time to see how this shakes out.

UPDATE 11/29/2021 – I received a comment on this post (see below) and the confusion is beginning. A service provider has had one of their customers request the documentation regarding what is provided in Appendix A of the remote assessment guidance document as well as the remote assessment feasibility study. Since these are ROC documents, there is no requirement from the Council that requires any organization to turn over their ROC to any third party other than their acquiring bank or the card brands. The AOC is the communication document to third parties. If an organization wishes to turn over Appendix A from the guidance, that is the organization’s decision, but it is NOT mandatory nor is it required by the Council.

17
Sep
21

2021 Government IT Symposium

I am honored to have been granted the privilege to speak at the 2021 Government IT Symposium this coming November.

I will be speaking (virtually) on Tuesday, November 16, at 1:45PM CT/1945 UTC.  My presentation is titled ‘PCI Compliance – Yes, That Includes Governments’.  The reason for my session is that while the PCI DSS has been around for over 15 years, government entities still question how it applies to them and why.  In my years doing assessments for government entities, I have found there are a number of unique situations that complicate their assessments.  In my session I will cover the basics of the PCI DSS and provide a walk through of the potential traps that tend to trip up government entities.

If you want to attend this symposium, go here to register.

I look forward to seeing you there.

31
Jul
21

PCI Dream Team LIVE! Is Coming In October

The PCI Dream Team will be appearing LIVE at the (ISC)2 Security Congress in Orlando this Fall, Monday, October 18 through Wednesday, October 20, 2021.   Our session is scheduled for Tuesday, October 19, at 11:45 AM ET/ 1545 UTC.

While we will be live at the conference, you can also attend the conference and our session virtually.  So other than training budget limitations, there is no other good reason you cannot join us.

As usual, we will be taking questions live and via email at pcidreamteam AT gmail DOT com.  We also monitor Twitter if you use #pcidreamteam.

We are expecting our usual lively discussion of all topics PCI and other security standards if time allows.

We really are looking forward to physically seeing people at the conference.

14
Jun
21

Last PCI DSS v4 Request For Comments Period

According to an email I received today, the draft validation documents (I am assuming that means the ROC Reporting Template and AOC) will be released on Monday, June 28, on the PCI Portal for QSAs, ISAs and POs to review and comment.

The comment period will be open for 30 days from that date.

Make sure you get your copy, review the documents and generate comments as this is your chance to have input on the PCI DSS.

01
Mar
21

Quick Update on PCI DSS v4

In the February 2021 Assessor newsletter, the Council announced the following.

“Because of the broad impact PCI DSS has on the payment community, the Council is seeking additional feedback into the PCI DSS v4.0 validation documents. As a result of expanding stakeholder feedback opportunities to include these supporting documents, the Council is now targeting a Q4 2021 completion date for PCI DSS v4.0. The publication and availability of PCI DSS v4.0 is still being determined. The Council will communicate the targeted publication date in the coming months.”

So we will apparently see one more iteration of v4 before it is released. According to their blog post, the comment period will start around June 2021.

See their blog post for more information.

One other important item from the newsletter for all QSAs, do not forget to register for the next All Assessor Webcast on March 18, 2021.

13
Dec
20

Network Segmentation Testing

NOTE: If you have not read the PCI SSC Information Supplement – Guidance for PCI DSS Scoping and Network Segmentation you must do so before using the procedures documented in this post.

How something so simple became something so complicated (or at least believed to be complicated), I just will never understand.  The only thing I can point to is the fact that network segmentation testing falls within the requirements of penetration testing.  Because of that, I think people therefore believe there is something “special” about how segmentation testing must be performed.  Never mind the fact that there is the even more basic issue of how to approach network segmentation testing.

Here is the network segmentation testing methodology for traditional IP networks.

  • Gather an inventory of all of the network segments.  Label each network segment as Cardholder Data Environment (CDE), Connected To or Out of Scope based on the definitions from the Scoping Information Supplement.
  • Make sure you have Nmap installed on a portable computer.  The reason this needs to be portable is because you will likely have to move around your facilities in order to complete all of the testing.  It is also not unusual to use diagnostic systems in the data center to accomplish this effort (they may already have Nmap installed) as well as creating VMs for this testing and then remoting into those systems.  The important thing is to have access to every network segment in your environment so that you can conduct this testing.
  • Connect your scanner to every CDE network segment and attempt to reach all of the Out of Scope network segments from the CDE.  You will want to run an Nmap scan that scans all TCP and UDP ports (i.e., 1 through 65535) against all IP addresses in a given out of scope network segment.  This likely sounds extreme but to prove segmentation you must test all 65,535 TCP/UDP ports against all IP addresses to make sure that no traffic “leaks” to your out of scope networks.  If you do find a port open in one of your out of scope networks, you will have to track down where that leak occurs.  Example: nmap -p- -sT -sU 10.10.0.0/16
  • While in each CDE, test connections out to your Connected To network segments testing all TCP and UDP ports against all IP addresses in your Connected To network segments.  Since communication between the CDE and Connected To segments is allowed, you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • Finally, you will need to test that your CDE can only reach the internet through ports and IP addresses you have specified.  Obviously, you are not going to test every internet address as that would take forever.  However, what I tell my clients to do is to use every external IP address they have for business partners or other third parties they are connected to.  Again, you are going to test all TCP and UDP ports against those addresses. If you get any unexpected results back, you are going to have to resolve those issues as there should be no external connectivity.
  • Connect to every Connected To network segment and conduct testing into the CDE for all TCP and UDP ports against all IP addresses in the CDE network segment.  Again, since communication is allowed between these network segments you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • While in the Connected To network segments, conduct testing to all Out of Scope network segments.  Since communication is allowed between these network segments you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • Connect to every Out of Scope network segment and run an Nmap scan into each CDE network segment for every TCP and UDP port for all IP addresses in the CDE.  This should return no results if the network is truly out of scope.  If it does return results, you will have to figure out why and block that traffic into the CDE.
  • Save all of your results and comparisons so that you have a record of your testing.  If you found issues, make sure you document in detail what was done to resolve those issues and conduct new scans to prove that those issues were remediated.
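The comparison step in the bullets above (scan results versus your documented, approved ports) can be sketched like this.  The port sets are hypothetical, and the sketch assumes you have already parsed the Nmap output into (protocol, port) pairs:

```python
# Compare observed open ports (parsed from an Nmap scan of a
# Connected To segment) against the documented, approved ports.
# Both sets below are hypothetical illustrations.
approved_ports = {("tcp", 443), ("tcp", 22)}
observed_ports = {("tcp", 443), ("tcp", 22), ("udp", 161)}

# Anything observed but not approved must be investigated.
undocumented = observed_ports - approved_ports
# Anything approved but never observed may indicate a stale rule.
stale_candidates = approved_ports - observed_ports

for proto, port in sorted(undocumented):
    print(f"Undocumented open port: {proto}/{port}")
for proto, port in sorted(stale_candidates):
    print(f"Approved but not observed: {proto}/{port}")
```

Two set differences do the whole job: the first is the compliance finding, the second is a housekeeping lead for firewall rule cleanup.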

When you bring in newer solutions such as the Cloud, containers, serverless, microsegmentation and the like, traditional network segmentation testing cannot be performed completely.  You can conduct all of the tests documented above from outside the environment looking into your cloud environment, but you cannot look from inside the cloud out.  That must be done manually by examining the cloud configuration information and ensuring that networks are properly segmented.

If you are like me, you are looking for a better way to deal with the Cloud as well as large networks.  There are network tools from vendors such as FireMon, AlgoSec, Skybox and Tufin that have capabilities to take the Cloud configuration information as well as firewall, router, switch and other network infrastructure configurations and provide analytical capabilities to simulate the testing above from both internal and external perspectives.  The downside of these tools of course is that they are not inexpensive and can require significant horsepower to operate.  However, they can be worth their weight in gold for their ability to analyze and understand your networks, find misconfigurations and find issues where attacks can potentially succeed.

There is no reason to pay your penetration tester to conduct network segmentation testing unless you are uncertain as to how to analyze the information from the Cloud.

12
Dec
20

The PCI DSS Is Not The Only Relevant Payment Security Standard

One of the more lively discussions at our past PCI Dream Team session involved requirement 12.8 and third party management (i.e., service providers).  What got the discussion started was when Art (Coop) Cooper made the comment that only SAQ A states that all third parties must be PCI compliant.  None of the other SAQs, nor even the ROC, states that third parties need to be PCI compliant.

All of this is very true and has been this way since the beginning of the PCI DSS.

But …  That is not the whole story.

In this instance, the PCI DSS is not the only game in town.

People forget that Visa, Mastercard, Discover, American Express and JCB (aka “The Brands”) still have their own security programs and requirements in addition to the PCI DSS.  Some of these requirements are in their Operating Rules or similar documents.  In this case, Visa, Mastercard and Discover all require that service providers be PCI compliant as defined on their respective Web sites.  In the case of Visa and Mastercard, they maintain lists of PCI compliant service providers.  That said, those lists are marketing ploys that generate revenue for Visa and Mastercard as those service providers listed pay them to be on those lists. 

While Coop’s statement is accurate that the PCI DSS does not require service providers to be PCI compliant, it is shortsighted.  The Brands do require service providers to be PCI compliant and will enforce it through the merchant agreement/contract all organizations sign in order to accept those cards for payment.

The bottom line is that, if any service provider can provide you a current PCI Service Provider Attestation Of Compliance (AOC), you can use their services and comply with the Visa, Mastercard and Discover contracts.

Coop also stated that he has never seen the Brands enforce the contractual obligation when reviewing organizations’ ROCs and SAQs.  That is also a true statement but again not the complete story.  Based on what I have been told by lawyers that have been involved in breach litigation, it is the merchant agreement/contract that is used to hold breached merchants legally responsible and enforce fines, not PCI compliance or what is in any PCI document.  The PCI documents are used to influence fines and penalties, but the actual enforcement is through the contracts with the Brands.  If it is found that an organization was using non-PCI compliant service providers that just adds fuel to the fire.

As famous radio personality Paul Harvey used to say, “And that, is the rest of the story.”



