Archive for the 'PCI DSS' Category

17 Sep 2021

2021 Government IT Symposium

I am honored to have been granted the privilege to speak at the 2021 Government IT Symposium this coming November.

I will be speaking (virtually) on Tuesday, November 16, at 1:45 PM CT / 1945 UTC.  My presentation is titled ‘PCI Compliance – Yes, That Includes Governments’.  The reason for my session is that, while the PCI DSS has been around for over 15 years, government entities still question how it applies to them and why.  In my years doing assessments for government entities, I have found a number of unique situations that complicate their assessments.  In my session I will cover the basics of the PCI DSS and walk through the potential traps that tend to trip up government entities.

If you want to attend the symposium, you can register here.

I look forward to seeing you there.

31 Jul 2021

PCI Dream Team LIVE! Is Coming In October

The PCI Dream Team will be appearing LIVE at the (ISC)2 Security Congress in Orlando this fall, Monday, October 18 through Wednesday, October 20, 2021.  Our session is scheduled for Tuesday, October 19, at 11:45 AM ET / 1545 UTC.

While we will be live at the conference, you can also attend the conference and our session virtually.  So, other than training budget limitations, there is no good reason you cannot join us.

As usual, we will be taking questions live and via email at pcidreamteam AT gmail DOT com.  We also monitor Twitter if you use #pcidreamteam.

We are expecting our usual lively discussion of all topics PCI and, if time allows, other security standards.

We really are looking forward to seeing people in person at the conference.

14 Jun 2021

Last PCI DSS v4 Request For Comments Period

According to an email I received today, the draft validation documents (I am assuming that means the ROC Reporting Template and AOC) will be released on Monday, June 28, on the PCI Portal for QSAs, ISAs and POs to review and comment.

The comment period will be open for 30 days from that date.

Make sure you get your copy, review the documents and submit comments, as this is your chance to have input into the PCI DSS.

01 Mar 2021

Quick Update on PCI DSS v4

In the February 2021 Assessor newsletter, the Council announced the following:

“Because of the broad impact PCI DSS has on the payment community, the Council is seeking additional feedback into the PCI DSS v4.0 validation documents. As a result of expanding stakeholder feedback opportunities to include these supporting documents, the Council is now targeting a Q4 2021 completion date for PCI DSS v4.0. The publication and availability of PCI DSS v4.0 is still being determined. The Council will communicate the targeted publication date in the coming months.”

So we will apparently see one more iteration of v4 before it is released. According to their blog post, the comment period will start around June 2021.

See their blog post for more information.

One other important item from the newsletter for all QSAs: do not forget to register for the next All Assessor Webcast on March 18, 2021.

13 Dec 2020

Network Segmentation Testing

NOTE: If you have not read the PCI SSC Information Supplement – Guidance for PCI DSS Scoping and Network Segmentation, you must do so before using the procedures documented in this post.

How something so simple became something so complicated (or at least believed to be complicated), I just will never understand.  The only thing I can point to is the fact that network segmentation testing falls within the requirements of penetration testing.  Because of that, I think people therefore believe there is something “special” about how segmentation testing must be performed.  Never mind the fact that there is the even more basic issue of how to approach network segmentation testing.

Here is the network segmentation testing methodology for traditional IP networks.

  • Gather an inventory of all of the network segments.  Label each network segment as Cardholder Data Environment (CDE), Connected To or Out of Scope based on the definitions from the Scoping Information Supplement.
  • Make sure you have Nmap installed on a portable computer.  The reason this needs to be portable is that you will likely have to move around your facilities in order to complete all of the testing.  It is also not unusual to use diagnostic systems in the data center for this effort (they may already have Nmap installed), or to create VMs for this testing and then remote into those systems.  The important thing is to have access to every network segment in your environment so that you can conduct this testing.
  • Connect your scanner to every CDE network segment and attempt to reach all of the Out of Scope network segments from the CDE.  You will want to run an Nmap scan that covers all TCP and UDP ports (i.e., 1 through 65535) against all IP addresses in a given out of scope network segment.  This likely sounds extreme, but to prove segmentation you must test all 65,535 TCP and UDP ports against all IP addresses to make sure that no traffic “leaks” to your out of scope networks.  If you do find a port open in one of your out of scope networks, you will have to track down where that leak occurs.  Example: nmap -p- -sT -sU 10.10.0.0/16 (a scripted version of this testing appears after this list)
  • While in each CDE, test connections out to your Connected To network segments testing all TCP and UDP ports against all IP addresses in your Connected To network segments.  Since communication between the CDE and Connected To segments is allowed, you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • Finally, you will need to test that your CDE can only reach the internet through ports and IP addresses you have specified.  Obviously, you are not going to test every internet address as that would take forever.  However, what I tell my clients to do is to use every external IP address they have for business partners or other third parties they are connected to.  Again, you are going to test all TCP and UDP ports against those addresses. If you get any unexpected results back, you are going to have to resolve those issues as there should be no external connectivity.
  • Connect to every Connected To network segment and conduct testing into the CDE for all TCP and UDP ports against all IP addresses in the CDE network segment.  Again, since communication is allowed between these network segments you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • While in the Connected To network segments, conduct testing to all Out of Scope network segments.  Since communication is allowed between these network segments you will need to compare the results of the Nmap scan to your documented, approved ports and firewall rules to confirm that no ports are open that are not documented and approved.
  • Connect to every Out of Scope network segment and run an Nmap scan into each CDE network segment for every TCP and UDP port for all IP addresses in the CDE.  This should return no results if the network is truly out of scope.  If it does return results, you will have to figure out why and block that traffic into the CDE.
  • Save all of your results and comparisons so that you have a record of your testing.  If you found issues, make sure you document in detail what was done to resolve those issues and conduct new scans to prove that those issues were remediated.
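
If you would rather script this testing than run every scan by hand, here is a minimal sketch of how the scanning and record keeping could be automated.  It is a starting point, not a definitive implementation: it assumes Nmap is installed on the scanning host, it must be run from a host attached to the source segment being tested, and the segment labels and CIDR ranges are placeholders for your own inventory.

# Minimal sketch of scripted segmentation testing with Nmap (Python).
# Assumptions: Nmap is installed and on the PATH, this host is attached to the
# source segment being tested, and the labels/CIDR ranges below are placeholders
# for your own segment inventory.
import subprocess
from datetime import datetime
from pathlib import Path

# Placeholder inventory - replace with your own segment labels and CIDR ranges.
TARGET_SEGMENTS = {
    "out-of-scope-lan": "10.20.0.0/16",
    "connected-to-shared-services": "10.30.0.0/24",
}

RESULTS_DIR = Path("segmentation-results")
RESULTS_DIR.mkdir(exist_ok=True)

def scan_segment(source_label, target_label, cidr):
    """Scan all TCP and UDP ports (1-65535) in the target segment and save the output."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    outfile = RESULTS_DIR / f"{source_label}_to_{target_label}_{stamp}.txt"
    cmd = [
        "nmap",
        "-p-",                 # all 65,535 ports
        "-sT",                 # TCP connect scan
        "-sU",                 # UDP scan (requires root/administrator privileges)
        "-oN", str(outfile),   # save normal output for your work papers
        cidr,
    ]
    subprocess.run(cmd, check=True)
    return outfile

if __name__ == "__main__":
    # Run one copy of this from a host inside each source segment (e.g., each CDE segment).
    for label, cidr in TARGET_SEGMENTS.items():
        result = scan_segment("cde-segment-1", label, cidr)
        print(f"Scan of {label} ({cidr}) saved to {result}")

Run a copy from a host attached to each source segment (each CDE segment, each Connected To segment and so on) and then compare the saved output against your documented and approved firewall rules exactly as described in the steps above.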

When you bring in newer solutions such as the cloud, containers, serverless and microsegmentation, the traditional method of network segmentation testing cannot be performed completely.  You can conduct all of the tests documented above from outside the environment looking into your cloud environment, but you cannot look from inside the cloud out.  That part must be done manually by examining the cloud configuration information and ensuring that networks are properly segmented.
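
As an illustration of what that cloud configuration review can look like when scripted, here is a rough sketch that pulls AWS security group rules with boto3 and flags any inbound rule that allows traffic into a CDE security group from a range you have labeled out of scope.  This is an example under assumed conditions only: an AWS environment, boto3 with credentials already configured, CDE security groups tagged “Scope=CDE”, and placeholder out of scope CIDR ranges.  Other cloud platforms have equivalent configuration APIs.

# Rough sketch: review AWS security group rules that allow traffic into the CDE.
# Assumptions: AWS credentials are configured for boto3, CDE security groups are
# tagged "Scope=CDE", and OUT_OF_SCOPE_CIDRS is a placeholder for your own list.
import ipaddress
import boto3

OUT_OF_SCOPE_CIDRS = ["10.50.0.0/16", "10.60.0.0/16"]  # placeholder ranges

ec2 = boto3.client("ec2")

def overlaps_out_of_scope(cidr):
    """Return True if a rule's source CIDR overlaps any range labeled out of scope."""
    rule_net = ipaddress.ip_network(cidr, strict=False)
    return any(rule_net.overlaps(ipaddress.ip_network(oos)) for oos in OUT_OF_SCOPE_CIDRS)

# Pull only the security groups tagged as belonging to the CDE.
response = ec2.describe_security_groups(
    Filters=[{"Name": "tag:Scope", "Values": ["CDE"]}]
)

for sg in response["SecurityGroups"]:
    for rule in sg["IpPermissions"]:  # inbound rules
        for ip_range in rule.get("IpRanges", []):
            if overlaps_out_of_scope(ip_range["CidrIp"]):
                print(
                    f"Review: {sg['GroupId']} allows {rule.get('IpProtocol')} "
                    f"ports {rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')} "
                    f"from out of scope range {ip_range['CidrIp']}"
                )

Anything a check like this flags still has to be investigated and either documented and approved or removed, just like an unexpected open port in your Nmap results.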

If you are like me, you are looking for a better way to deal with the Cloud as well as large networks.  There are network tools from vendors such as FireMon, AlgoSec, Skybox and Tufin that have capabilities to take the Cloud configuration information as well as firewall, router, switch and other network infrastructure configurations and provide analytical capabilities to simulate the testing above from both internal and external perspectives.  The downside of these tools of course is that they are not inexpensive and can require significant horsepower to operate.  However, they can be worth their weight in gold for their ability to analyze and understand your networks, find misconfigurations and find issues where attacks can potentially succeed.

There is no reason to pay your penetration tester to conduct network segmentation testing unless you are uncertain as to how to analyze the information from the Cloud.

12 Dec 2020

The PCI DSS Is Not The Only Relevant Payment Security Standard

One of the more lively discussions at our past PCI Dream Team session involved requirement 12.8 and third party management (i.e., service providers).  What got the discussion started was Art (Coop) Cooper’s comment that only SAQ A states that all third parties must be PCI compliant.  Neither the other SAQs nor the ROC states that third parties need to be PCI compliant.

All of this is very true and has been this way since the beginning of the PCI DSS.

But …  That is not the whole story.

In this instance, the PCI DSS is not the only game in town.

People forget that Visa, Mastercard, Discover, American Express and JCB (aka “The Brands”) still have their own security programs and requirements in addition to the PCI DSS.  Some of these requirements are in their Operating Rules or similar documents.  In this case, Visa, Mastercard and Discover all require that service providers be PCI compliant as defined on their respective Web sites.  In the case of Visa and Mastercard, they maintain lists of PCI compliant service providers.  That said, those lists are marketing ploys that generate revenue for Visa and Mastercard as those service providers listed pay them to be on those lists. 

While Coop’s statement is accurate that the PCI DSS does not require service providers to be PCI compliant, it is shortsighted.  The Brands do require service providers to be PCI compliant and will enforce it through the merchant agreement/contract all organizations sign in order to accept those cards for payment.

The bottom line is that if a service provider can provide you a current PCI Service Provider Attestation Of Compliance (AOC), you can use their services and comply with the Visa, Mastercard and Discover contracts.

Coop also stated that he has never seen the Brands enforce the contractual obligation when reviewing organizations’ ROCs and SAQs.  That is also a true statement, but again not the complete story.  Based on what I have been told by lawyers who have been involved in breach litigation, it is the merchant agreement/contract that is used to hold breached merchants legally responsible and enforce fines, not PCI compliance or what is in any PCI document.  The PCI documents are used to influence fines and penalties, but the actual enforcement is through the contracts with the Brands.  If it is found that an organization was using non-PCI compliant service providers, that just adds fuel to the fire.

As famous radio personality Paul Harvey used to say, “And that is the rest of the story.”

30 Sep 2020

The Second Draft of PCI DSS v4 Has Been Released

Not that anyone can discuss it, since we are all under a non-disclosure agreement (NDA).  However, the Council Web site quietly announced the release of the new version on September 23.  The comment period is open until November 13.  You can read more about it here.

If you are a QSAC or a Participating Organization, your Point of Contact for the Council can download your copy of the new PCI DSS at the PCI Portal as usual.

For the rest of you, you will continue to stay in the dark until the final version is released.  That said, as far as I am aware there have been no changes to the working dates for release, which still point to a 2022 rollout.

11 Aug 2020

Join Me On September 3

I am speaking at the Toronto ISACA Lunch & Learn session on PCI and the Cloud on Thursday, September 3, at Noon ET. You can go here to register.

I look forward to “seeing” you at this event.

07 Jul 2020

The Security/Compliance Disconnect

I was speaking with someone recently and they tossed out one of the most despised phrases I know:

“Compliance is NOT security!”

I told them to stop right there and take it back or our discussion was over.  Since they really wanted my opinion on the actual topic at hand, we continued.  But I felt the need to explain why I find this statement so repulsive.  Which, by the way, has nothing to do with being an auditor.

The first point I make when discussing this phrase is that security frameworks are merely the foundation for a good security program, not the whole enchilada.  They are only the starting point, and great security programs must go well beyond these frameworks.  The bottom line is that achieving compliance with any security framework means your organization can execute the basics consistently.

The next important point I like to make to people who spew this trope is this: if you read any of the data breach or security reports from the likes of Verizon, Trustwave, Security Metrics or any other recognized security company, what do you see?  You see that the breached organizations could not comply with any of the recognized security frameworks, be it PCI DSS, COBIT, NIST, HIPAA, pick your poison.  Unfortunately, as these reports point out in annoying detail, organizations rarely execute the basics consistently; if they did, they would likely not have been breached.  Which really punches a huge hole in the whole “compliance does not equal security” argument.

Another point about this statement is that organizations high five over being compliant with a security framework when it really means that they are mediocre at best.  Yet time and again I hear back after PCI assessments that management is so proud that they were assessed compliant.  “Yay, we achieved mediocrity!”

Finally, how do you measure how well your security program is operating?  You must have a “yardstick” of some sort, and for that you need one of the security frameworks.  Given that these frameworks are only the basics, you need to add in all the additional controls your organization has in place that go beyond the framework.  That activity typically identifies a huge gap in the security program – there are few, if any, additional controls.  So, there you sit with, say, the PCI DSS as your “yardstick”, and your organization cannot consistently execute the basic security controls in that framework.

Yeah, that is it!  It is the yardstick’s fault!

20 May 2020

DevOps And PCI – Part 2

In the first post on this topic we discussed DevOps terminology and how segregation of duties can get complicated with DevOps.  In this post we will continue investigating DevOps and discuss the issues you can encounter with change control, documentation and PCI scope.

Change Control

These days it is not unusual to hear DevOps people be proud of hundreds or even thousands of implementations or deployments per day.  That is, until someone like a PCI assessor starts inquiring about what, if anything, is done to formally approve all those deployments.  The conversation with developers typically begins to deteriorate as you discuss requirement 6.4.5.2, which states:

“Documented change approval by authorized parties.”

The normal response is that the approval is provided in Jira, ServiceNow or whatever change management tool is being used.  That leads to a discussion of the guidance for requirement 6.4.5.2 which states:

“Approval by authorized parties indicates that the change is a legitimate and approved change sanctioned by the organization.”

Given the rapidity and volume of changes, the next question asked is: how can an authorized party assess that a change is legitimate and sanctioned if they never actually see and review the change that is deployed?

This leads to a discussion of how Jira, Jenkins, Puppet or whatever CI/CD toolsets are in use work, along with the automation involved in the change process and the “controls” embedded in the workflow of those tools.  The bottom line is usually that the only potential human intervention in the process might occur if the code needs a manual code review.

Requirement 6.4.5.2 is about ensuring that the change process involves human intervention: management approves what is being put into production, segregation of duties is maintained, and no fraud or other illegal activity is introduced into the process.  The reason is that we are talking about code that processes, stores or transmits sensitive authentication data (SAD) or cardholder data (CHD).  The potential for implementing code that skims that information or performs other nefarious actions is too great to simply trust a fully automated process with no human intervention.  This risk can be driven home with a discussion of the 2013 Target breach, where the CI/CD process was compromised to repeatedly push malware to thousands of point of sale devices.

While I am only talking about PCI in-scope code in this case, fair warning: HIPAA, SOX, GDPR and other regulations are going to require a similar control.  Sensitive information is worth too much today, and there is just too much risk that people will take any opportunity to siphon off sensitive data any way they can if appropriate controls are not in place.  If your process is totally automated and cannot detect such fraudulent activities, it becomes far too easy to put who knows what into the code.  The last thing any organization wants is to be breached and then try to defend itself when it had poor or no internal controls to prevent the breach.

The bottom line here is that in our haste to push out software we have compromised the controls in that process.  Those controls need to be put back into place to minimize the risk presented by pushing malicious software into applications without a thorough vetting by management.
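
One practical way to demonstrate that the approval control is actually operating is to periodically reconcile what was deployed to production against what was formally approved.  The following is an illustrative sketch only: it assumes you can export a list of deployments from your CI/CD tool and a list of change tickets from your change management tool as CSV files, and the file and column names used here (deployments.csv, approved_changes.csv, change_id, status, approved_by) are placeholders for whatever your tools actually produce.

# Illustrative sketch: reconcile production deployments against approved changes.
# Assumptions: the CSV exports and their column names are placeholders for
# whatever your CI/CD and change management tools actually produce.
import csv

def load_rows(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

deployments = load_rows("deployments.csv")       # export from the CI/CD tool
changes = load_rows("approved_changes.csv")      # export from the change management tool

# Index approved change tickets by their change ID.
approved = {
    row["change_id"]: row
    for row in changes
    if row["status"].lower() == "approved" and row["approved_by"]
}

# Flag any deployment that does not tie back to a documented, approved change.
unapproved = [d for d in deployments if d.get("change_id") not in approved]

if unapproved:
    print(f"{len(unapproved)} deployment(s) have no documented, approved change:")
    for d in unapproved:
        print(f"  {d.get('deployed_at', '?')}  {d.get('application', '?')}  change_id={d.get('change_id', 'MISSING')}")
else:
    print("All deployments reconcile to an approved change record.")

A report like this does not replace the documented approval itself, but it gives management and the assessor evidence that deployments are tied back to approved changes and highlights the ones that are not.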

Documentation

Another area where compliance falters with DevOps is documentation.  Confluence, SharePoint Wiki or a similar tool will be used for documentation and that is where most assessors/auditors will be pointed for their requests for formal documentation.

The first problem arises when you are an outside assessor/auditor, because you do not have access to the internally used tool.  That can be remedied several ways, but it is always a hurdle because insiders are so used to the fact that everyone they typically work with has access.

Once the assessor/auditor has access, the next problem for all assessors/auditors is finding what they need for their assessment/audit.  Regardless of whether the assessor/auditor gets PDFs or has online access, the most common reason for this issue is terminology.  A lot of times, what an assessor/auditor is trying to find will be referred to by the organization in terms that are not consistent with accepted industry or technology terminology.  While all of these documentation tools have search capabilities, searching the document trove for what an assessor/auditor needs for evidence can be highly problematic.  Never mind the fact that clients get frustrated as well because the evidence exists, but the assessor/auditor cannot find it.

Related to these documentation systems is the fact that it can be difficult, if not impossible, for the assessor/auditor to get hardcopy or even usable PDFs of the documentation.  Let us face it, screen shots, while readable, can cut off sentences at the bottom of a screen, so content gets missed altogether.  As a result, obtaining usable and legible evidence for the assessor’s/auditor’s work papers is not readily possible, let alone having it searchable.  The fix is to use a browser extension or add-on that will create a PDF or image of an entire page.  But that too can run into issues if the organization has locked down its browsers and does not allow such installations.

Regardless of Agile/Scrum or Waterfall, the next problem with documentation is the fact that the documentation is limited or simply does not exist.  I have encountered more and more organizations that again point to The Agile Manifesto, Scrum and the like and state that none of these approaches specify that documentation is required.  It seems that the age-old adage of “if it was hard to develop, it should be hard to understand” is back in vogue.  Never mind the fact that with hundreds or thousands of deployments a day, keeping up with documentation can be impossible.

Consistent use of a change management ticketing system such as Jira or ServiceNow can also be an issue.  It seems that some organizations have exceptions that do not require every change to their environment to be entered into their change management solution.  Worse, the criteria used to determine what is and what is not entered are not consistently applied because the criteria were never officially documented nor formally approved.  As a result, there is no way to rely upon the information contained in the change management system to determine that change management is performed as required.

As a result, I am never surprised to see organizations scrambling to develop even basic business and IT policies, standards and procedures, in addition to network diagrams, data flow diagrams, application documentation, database schemas, operations documentation and a whole host of other missing or incomplete documentation.

PCI Scope Implications

Lastly, there is the scoping issue related to the DevOps infrastructure.  Not all of it is usually in scope, but that all depends on how it has been implemented.

At the very least, the Jenkins, Puppet or Ansible portion of the infrastructure is going to be in scope for PCI.  The reason is that those components feed the application updates into the cardholder data environment (CDE).  So those are considered “Connected To” systems and must be properly configured and secured to be PCI compliant.

Because these CI/CD solutions are “Connected To”, this can become problematic because of who has access to Jenkins, Puppet, et al.  As I spoke of earlier, because of poor segregation of roles in the Active Directory system, it can turn out that developers have access to these systems and therefore come into scope for PCI compliance as well.  As a result, the whole concept of keeping development separate from production, as required by requirement 6.4.1, does not exist.

Obviously, this segregation of development and production problem only gets worse if you drag even more of the development infrastructure into scope.  Therefore, you want to ensure that only the Jenkins, Puppet, Ansible portion of CI/CD is in scope.

This will mean moving Jenkins, Puppet, Ansible, etc. into your “Connected To” or “Shared Services” network segment.  This can create some issues with the rest of the development environment because of firewall rules and access through a jump server.  So moving that solution into the new network segment may not be as simple as it appears.

Development Metadata

Before we go, there is one more topic that needs to be discussed and that is the metadata in all these development solutions.

We have touched on the controls surrounding the development toolsets, but we have not discussed securing these toolsets and the risks they present.  This may seem a bit odd; since when have we worried about the security of Visual Studio or other integrated development environments (IDEs)?  However, with the implementation of CI/CD solutions, all these tools become interlinked and integrated.  Essentially, all these tools make up an automated assembly line for building applications.

But even more importantly, for these tools to work together seamlessly, they need to share metadata about what they are doing.  This metadata might seem like it is benign information, but it is particularly important and controls how the applications are built, tested and deployed.  Essentially the metadata is the “secret sauce” that makes your application work as an application within your organization.

We have already discussed the security controls that will be required around the deployment toolset.  But the rest of the development toolset is also going to require security and controls to be implemented to ensure that your software factory’s assembly line does not become a huge risk for attacks.  Attacks that could range from maliciously modifying your applications to stopping the assembly line altogether.

Before you think that this is unrealistic, I would again point you to the infamous 2013 Target breach.  I wrote about the breach at the time and walked people through how what was then known about the breach would have made it possible.  The success of that attack came from compromising the CI/CD process to implement malware that skimmed cards in the point of sale system.  So, the idea that development environments are a viable attack point is not out of the realm of possibility.  And it gets even worse when you add contract workers to the development process.

So, what should an organization do to address these risks?  I would recommend securing the entire application development environment to PCI configuration standards so that security monitoring of the entire environment can be performed.  That does not mean that the whole environment needs to reside in your “Connected To” or “Shared Services” DMZ with the CI/CD solution.  But I would create another DMZ to contain the rest of the toolset that feeds the CI/CD solution.  Servers should be properly security hardened and monitored as though they are in scope for PCI compliance even though they are not.

There you have it.  The basics of how Agile and PCI can coexist.



