10 Nov 20

The PCI Dream Team Rides Again

Please join us on Thursday, December 10, at Noon ET/1700 UTC as the PCI Dream Team discusses all things PCI EXCEPT PCI DSS v4. LOL!

You can register here for this free one-hour session.

As usual, if you wish to submit questions before the session, please send them to our email box at pcidreamteam AT gmail DOT com.

We look forward to all of you attending this session.

30 Sep 20

The Second Draft of PCI DSS v4 Has Been Released

Not that anyone can discuss it, since we are all under a non-disclosure agreement (NDA).  However, the Council Web site quietly announced the release of the new version on September 23.  The comment period is open until November 13.  You can read more about it here.

If you are a QSAC or a Participating Organization, your Point of Contact for the Council can download your copy of the new PCI DSS at the PCI Portal as usual.

For the rest of you, you will continue to stay in the dark until the final version is released.  That said, as far as I am aware there have been no changes in the working dates for release, which still point to a 2022 rollout.

11 Aug 20

Join Me On September 3

I am speaking at the Toronto ISACA Lunch & Learn session on PCI and the Cloud on Thursday, September 3, at Noon ET. You can go here to register.

I look forward to “seeing” you at this event.

22 Jul 20

PCI Dream Team Is Back On BrightTalk

The subject is unsupported software and devices and how to handle them.  But of course, any PCI or security question is welcome.  Join us on BrightTalk on Tuesday, July 28, at Noon ET, 5PM BST.  You can register here; after the session, the recording will be available at the same link.

As usual, you can submit questions live during the session, as well as any time before or after the session, by sending them to ‘pcidreamteam AT gmail DOT com’.

We look forward to “seeing” you all next week.

07 Jul 20

The Security/Compliance Disconnect

I was speaking with someone recently and they tossed out one of the most despised phrases I know:

“Compliance is NOT security!”

I told them to stop right there and take it back or our discussion was over.  Since they really wanted my opinion on the actual topic at hand, we continued.  But I felt the need to explain why I find this statement so repulsive.  Which, by the way, has nothing to do with being an auditor.

The first point I make when discussing this phrase is that security frameworks are merely the foundation for a good security program, not the whole enchilada.  They are only the starting point, and great security programs must go well beyond these frameworks.  The bottom line is that achieving compliance with any security framework merely means your organization can execute the basics consistently.

The next important point I like to make to people who spew this trope is: if you read any of the data breach or security reports from the likes of Verizon, Trustwave, Security Metrics or any other recognized security company, what do you see?  That the organizations breached could not comply with any of the recognized security frameworks, be it PCI DSS, COBIT, NIST, HIPAA, pick your poison.  Unfortunately, as these reports point out in annoying detail, organizations rarely execute the basics consistently, because if they did, they would likely not have been breached.  Which really punches a huge hole in the whole “compliance does not equal security” argument.

Another point about this statement is that organizations high five over being compliant with a security framework when it really means that they are mediocre at best.  Yet time and again I hear back after PCI assessments that management is so proud that they were assessed compliant.  “Yay, we achieved mediocrity!”

Finally, there is the question of how you measure how well your security program is operating.  You must have a “yardstick” of some sort, and for that you need one of the security frameworks.  Given that these frameworks are only the basics, you then need to add in all the additional controls your organization has in place that go beyond the framework.  That activity typically identifies a huge gap in the security program: there are few, if any, additional controls.  So, there you sit with, say, the PCI DSS as your “yardstick”, and your organization cannot consistently execute the basic security controls in that framework.

Yeah, that is it!  It is the yardstick’s fault!

26 Jun 20

The 2020 PCI Community Meetings Go Virtual

A lot of us remember when the 2017 NACM in Orlando was cancelled due to Hurricane Irma.

Troy Leach announced on Twitter yesterday that the 2020 NACM would be virtual as would all of the other Community Meetings.

It will not be the same, but at least we will be virtually together.

20 May 20

DevOps And PCI – Part 2

In the first post on this topic we discussed the terminology of DevOps and how segregation of duties can get complicated with DevOps.  In this post we will continue to investigate DevOps and discuss the issues you can encounter with change control, documentation and PCI scope.

Change Control

These days it is not unusual to hear DevOps people boast of hundreds or even thousands of implementations or deployments per day.  That is, until someone like a PCI assessor starts inquiring about what, if anything, is done to formally approve all those deployments.  The conversation with developers typically begins to deteriorate as you discuss requirement 6.4.5.2, which states:

“Documented change approval by authorized parties.”

The normal response is that the approval is provided in Jira, ServiceNow or whatever change management tool is being used.  That leads to a discussion of the guidance for requirement 6.4.5.2 which states:

“Approval by authorized parties indicates that the change is a legitimate and approved change sanctioned by the organization.”

With the rapidity and the volume of changes, the next question asked is: how can an authorized party assess that a change is legitimate and sanctioned if they never actually see and review the change that is deployed?

This leads to a discussion of how Jira, Jenkins, Puppet or whatever CI/CD toolset is in use works, along with the automation involved in the change process and the “controls” embedded in the workflow of those tools.  The bottom line is usually that the only potential human intervention in the process might occur if the code needs a manual code review.

What requirement 6.4.5.2 is about is ensuring that the change process involves human intervention: management approves what is being put into production, segregation of duties has been maintained, and no fraud or other illegal activity has been introduced into the process.  The reason is that we are talking about code that processes, stores or transmits sensitive authentication data (SAD) or cardholder data (CHD).  The potential for implementing code that skims that information or does other nefarious things is too great to just trust a fully automated process with no human intervention.  This risk can be driven home with a discussion of the 2013 Target breach, where the CI/CD process was compromised to repeatedly push malware to thousands of point of sale devices.
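
To make the point concrete, here is a minimal sketch of what such a human gate might look like in front of a deployment job.  It assumes a hypothetical change_tickets.json export from whatever ticketing tool is in use; the file name and field names are invented for illustration, not any vendor’s actual API.

    import json
    import sys

    def change_is_approved(ticket: dict) -> bool:
        # Requirement 6.4.5.2 wants documented approval by an authorized
        # party, so insist on a named approver, not just a status flag,
        # and require that the approver is not the developer
        # (segregation of duties).
        return (
            ticket.get("status") == "Approved"
            and bool(ticket.get("approved_by"))
            and ticket.get("approved_by") != ticket.get("developer")
        )

    def deployment_gate(change_id: str, tickets_file: str = "change_tickets.json") -> None:
        with open(tickets_file) as f:
            tickets = {t["id"]: t for t in json.load(f)}
        ticket = tickets.get(change_id)
        if ticket is None or not change_is_approved(ticket):
            sys.exit(f"Deployment blocked: change {change_id} lacks documented approval.")
        print(f"Change {change_id} approved by {ticket['approved_by']}; deploying.")

    if __name__ == "__main__":
        deployment_gate(sys.argv[1])

The code itself is trivial; the control is that the pipeline stops until a human other than the developer has signed off.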

While I am only talking about PCI in-scope code in this case, fair warning: HIPAA, SOX, GDPR and other regulations are going to require similar controls.  Sensitive information is worth too much today, and there is just too much risk that people will take any opportunity to siphon off sensitive data any way they can if appropriate controls are not in place.  If your process is totally automated and cannot detect such fraudulent activities, putting who knows what into the code is too easy.  The last thing any organization wants is to be breached and then have to defend itself when it had poor or no internal controls to prevent the breach.

The bottom line here is that in our haste to push out software we have compromised the controls in that process.  Those controls need to be put back into place to minimize the risk presented by pushing malicious software into applications without a thorough vetting by management.

Documentation

Another area where compliance falters with DevOps is documentation.  Confluence, SharePoint Wiki or a similar tool will be used for documentation and that is where most assessors/auditors will be pointed for their requests for formal documentation.

The first problem arises when you are an outside assessor/auditor, because you do not have access to the internally used tool.  That can be remedied several ways, but it is always a hurdle, because insiders are so used to the fact that everyone they typically work with has access.

Once the assessor/auditor has access, the next problem for all assessors/auditors is finding what they need for their assessment/audit.  Regardless of whether the assessor/auditor gets PDFs or has online access, the most common reason for this issue is terminology.  A lot of the time, what an assessor/auditor is trying to find will be referred to by the organization in terms that are not consistent with accepted industry or technology terminology.  While all of these documentation tools have search capabilities, searching the document trove for the evidence an assessor/auditor needs can be highly problematic.  Never mind the fact that clients get frustrated as well, because the evidence exists but the assessor/auditor cannot find it.

Related to these documentation systems is the fact that it can be difficult, if not impossible, for the assessor/auditor to get hardcopy or even usable PDFs of the documentation.  Let us face it: screen shots, while readable, can cut off sentences at the bottom of the screen, so content gets missed altogether.  As a result, obtaining usable and legible evidence for the assessor’s/auditor’s work papers is not readily possible, let alone having it searchable.  The fix is to use a browser extension or add-on that will create a PDF or image of an entire page.  But that too can run into issues if the organization has locked down its browsers and does not allow such installations.

Regardless of Agile/Scrum or Waterfall, the next problem with documentation is the fact that the documentation is limited or simply does not exist.  I have encountered more and more organizations that again point to The Agile Manifesto, Scrum and the like and state that none of these approaches specify that documentation is required.  It seems that the age-old adage of “if it was hard to develop, it should be hard to understand” is back in vogue.  Never mind the fact that, with hundreds or thousands of deployments a day, keeping up with documentation can be impossible.

Consistent use of a change management ticketing system such as Jira or ServiceNow can also be an issue.  It seems that some organizations have exceptions that do not require every change to their environment to be entered into their change management solution.  Worse, the criteria used to determine what is and is not entered are not consistently applied, because the criteria were never officially documented nor formally approved.  As a result, there is no way to rely upon the information contained in the change management system to determine that change management is performed as required.
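
One quick test of whether a ticketing system can be relied upon is to reconcile what was deployed against what was ticketed.  A sketch, assuming two hypothetical CSV exports (deployments.csv with a change_id column and tickets.csv with an id column; both names are invented):

    import csv

    def load_column(path: str, column: str) -> set:
        with open(path, newline="") as f:
            return {row[column] for row in csv.DictReader(f)}

    # Every deployment should trace back to a change ticket; anything
    # left over is a change that bypassed change management.
    deployed = load_column("deployments.csv", "change_id")
    ticketed = load_column("tickets.csv", "id")

    unticketed = deployed - ticketed
    if unticketed:
        print(f"{len(unticketed)} deployments have no ticket: {sorted(unticketed)}")
    else:
        print("All deployments trace back to a change ticket.")

If that first branch ever prints, the change management system cannot be relied upon as evidence.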

As a result, I am never surprised to see organizations scrambling to develop even basic business and IT policies, standards and procedures, in addition to network diagrams, data flow diagrams, application documentation, database schemas, operations documentation and a whole host of other missing or incomplete documentation.

PCI Scope Implications

Lastly, there is the scoping issue related to the DevOps infrastructure.  Not all of it is usually in scope, but that all depends on how it has been implemented.

At the very least, the Jenkins, Puppet or Ansible portion of the infrastructure is going to be in scope for PCI.  The reason is that those components feed application updates into the cardholder data environment (CDE).  So those are considered “Connected To” systems and must be properly configured and secured to be PCI compliant.

Because these CI/CD solutions are “Connected To”, this can become problematic because of who has access to Jenkins, Puppet, et al.  As I spoke of earlier, because of poor segregation of roles in Active Directory, it can turn out that developers have access to these systems, and those developers therefore come into scope for PCI compliance as well.  As a result, the whole concept of development separate from production required by requirement 6.4.1 does not exist.

Obviously, this segregation of development and production problem only gets worse if you drag even more of the development infrastructure into scope.  Therefore, you want to ensure that only the Jenkins, Puppet, Ansible portion of CI/CD is in scope.

This will mean moving Jenkins, Puppet, Ansible, etc. into your “Connected To” or “Shared Services” network segment.  This can create some issues with the rest of the development environment because of firewall rules and access through a Jump Server.  So moving that solution into the new network segment may not be as simple as it appears.
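
As a rough illustration of the target state, the rules into the CDE should reduce to something this small.  The segment names and addresses below are invented; the point is that only the Shared Services segment should be able to initiate connections into the CDE:

    import ipaddress

    # Invented segment addressing for illustration only.
    SHARED_SERVICES = ipaddress.ip_network("10.20.0.0/24")  # Jenkins, Puppet, Ansible
    CDE = ipaddress.ip_network("10.10.0.0/24")

    def rule_is_acceptable(src: str, dst: str) -> bool:
        """Only Shared Services may initiate connections into the CDE."""
        src_ip = ipaddress.ip_address(src)
        dst_ip = ipaddress.ip_address(dst)
        if dst_ip in CDE:
            return src_ip in SHARED_SERVICES
        return True

    print(rule_is_acceptable("10.20.0.5", "10.10.0.9"))  # True: CI/CD into the CDE
    print(rule_is_acceptable("10.30.1.7", "10.10.0.9"))  # False: a dev workstation into the CDE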

Development Metadata

Before we go, there is one more topic that needs to be discussed and that is the metadata in all these development solutions.

We have touched on the controls surrounding the development toolsets, but we have not discussed securing these toolsets and the risks they present.  This may seem a bit odd: since when have we worried about the security of Visual Studio or other integrated development environments (IDEs)?  However, with the implementation of CI/CD solutions, all these tools become interlinked and integrated.  Essentially, they make up an automated assembly line for building applications.

But even more importantly, for these tools to work together seamlessly, they need to share metadata about what they are doing.  This metadata might seem like it is benign information, but it is particularly important and controls how the applications are built, tested and deployed.  Essentially the metadata is the “secret sauce” that makes your application work as an application within your organization.

We have already discussed the security controls that will be required around the deployment toolset.  But the rest of the development toolset is also going to require security and controls to ensure that your software factory’s assembly line does not become a huge avenue for attacks.  Attacks that could range from maliciously modifying your applications to stopping the assembly line altogether.

Before you think that this is unrealistic, I would again point you to the infamous 2013 Target breach.  I wrote about the breach at the time and walked people through how what was known then would have made the breach possible.  The success of that attack came from compromising the CI/CD process to implement malware that skimmed cards in the point of sale system.  So, the development environment as a viable attack point is not out of the realm of possibility.  And it gets even worse when you add contract workers into the development process.

So, what should an organization do to address these risks?  I would recommend securing the entire application development environment to PCI configuration standards so that security monitoring of the entire environment can be performed.  That does not mean that the whole environment needs to reside in your “Connected To” or “Shared Services” DMZ with the CI/CD solution.  But I would create another DMZ to contain the rest of the toolset that feeds the CI/CD solution.  Servers should be properly hardened and monitored as though they are in scope for PCI compliance, even though they are not.

There you have it.  The basics of how Agile and PCI can coexist.

17 May 20

DevOps And PCI – Part 1

DevOps is all the rage in organizations that develop applications.  The move to become “Agile” through the implementation of methodologies such as Scrum to replace the traditional waterfall SDLC is ongoing in most organizations.  But these changes can create compliance issues with the PCI standards regarding software development.

Understanding The Terminology

First and foremost, we need to address the terminology surrounding DevOps.

But before we talk about those specific terms, we need to address the elephant in the room which is “Agile”.  The Agile approach to development traces its history back to early 2001 when a group of developers met at a Utah ski resort.  The result of that meeting was ‘The Agile Manifesto’.  However, the roots for Agile were sown even earlier as application development became unable to keep pace with business changes starting in the late 1980s.

The important thing to remember about Agile is that it is not a methodology.  It is merely a set of values (4) and principles (12) related to the development of software.  The Agile Manifesto never describes a roadmap or steps to follow as to how those values and principles should be used.  So, to refer to Agile as a methodology is a misnomer, but you will constantly encounter it being treated as one.

Interestingly enough, the methodologies used with the Agile approach were actually developed before Agile.  Scrum was one of many methodologies, such as Kanban, Crystal Clear, Extreme Programming, Feature Driven Development and Dynamic Systems Development Method, that sprang up in the 1990s to address the delivery of software solutions in a timelier manner, and it seems to have won out.  But while Scrum is the most followed methodology, it is often combined with some of these others, such as Extreme Programming (XP).

Scrum involves three types of roles.

  • Product Owner: The Product Owner needs to be a person with vision, authority, and availability because they are responsible for continuously communicating the vision and priorities to the development team.
  • Scrum Master: The Scrum Master is not a project manager.  The Scrum Master’s primary responsibility is to remove any impediments that are obstructing the team from achieving its sprint goals.  The Scrum Master also is the primary contact with the Product Owner.
  • Team: The Scrum team is responsible for completing the work.  For application development, a Scrum team can contain anywhere from three to nine members.  For software projects, a typical team includes a mix of software engineers, architects, information security personnel, programmers, analysts, QA experts, testers, and UI designers.  The team is responsible for determining how it will accomplish the work to be completed.

The final term from Scrum that needs to be defined is Sprint.  A Sprint is a project of one month or less in duration that results in a releasable increment of a product, in this case an application or application enhancements.  When a Sprint’s horizon is too long, the definition of what is being built may change, complexity may change, and risk may change.  The concept of Sprints is to enable predictability by ensuring inspection and adaptation of progress toward a Sprint Goal at least every calendar month.  Another benefit is that Sprints limit risk to one calendar month of cost.

Once defined, some of the key characteristics of Sprints are:

  • No changes are made that would endanger the Sprint Goal;
  • Quality goals do not decrease; and,
  • Scope may be clarified and re-negotiated between the “Product Owner” and “Team” as more is learned.

With these behind us, let us now turn to the term DevOps.

DevOps is a merging of development and operations staff to work together to develop and implement solutions that will essentially run 24x7x365 with (hopefully) minimal operational interaction.  DevSecOps merely formally adds in the collaboration of information security into that mix even though information security should be included in DevOps as well.

The final topic in our discussion of terminology regards the tools used by DevOps.  While there are a number of vendors in this space, the “Big Dogs” at the moment are Atlassian with their tools Confluence and Jira, Microsoft’s GitHub, and Jenkins, the open source project commercially backed by CloudBees.

  • Confluence is used as a documentation repository for such items as policies, standards and procedures as well as business, application, network and other important documentation.
  • Jira is used as a project and change management ticketing system.
  • GitHub is used to manage the versions of applications.
  • Jenkins is used for automating the build, testing and deployment of applications into production.

All of these tools have competitors from vendors such as ServiceNow, Puppet, Ansible, Chef, Google, and other commercial and open source development and operations tool vendors.  Regardless of vendor, all solutions seem to have these three basic components of documentation repository, project/change management and deployment automation.  It is also not unusual to find multiple tools in place particularly with Jenkins, Ansible, Puppet and Chef.

Segregation Of Duties

The first and most contentious issue that comes up with DevOps is segregation of duties.  One of the biggest discussions/arguments an assessor/auditor will get into regarding DevOps is when Agile fans argue that segregation of duties is inconsistent with Scrum, Agile and DevOps.  Their primary reason will be to point to the fact that nowhere in any of the documentation regarding these topics is the term ‘segregation of duties’ mentioned, or the requirement to ensure it.  They would be correct in that regard.

Unfortunately, corporate life is not driven by Scrum, Agile or DevOps in a vacuum.  Corporations are still required to comply with laws, contracts and regulations promulgated on them by government entities, business partners, financial institutions and other parties regardless of what is in their methodologies and approaches.  So, while the argument can be made that the methods and approaches do not state anything on the subject, there are other documents, contracts and requirements that do state it is required.

Whether we are discussing PCI, NIST, SOC, COBIT or any other recognized audit or compliance program, segregation of duties between roles is and always has been required.  It is one of the key principles to ensure that people do not have the ability to corrupt a process because they have too much control over that process.  Time and again, one of, if not the, primary root causes of illicit activity is the failure to segregate duties and roles, allowing one person too much control over a process.  The concept behind segregation of duties is that the more individuals involved in a process, the less likely the process can or will be abused.

In DevOps, the issue of segregation of duties gets complicated because it extends into the tools used in the process.  The concept of continuous integration (CI)/continuous deployment (CD) relies heavily on tools such as Jira and Jenkins to enable such an approach.  This means that the assessor/auditor needs to look into who has access to these tools and what rights they have to influence the workflows that exist in those tools.

This gets even more complicated by the fact that this requires analysis of user and access control information from tools such as Active Directory, RADIUS and even the tools themselves.  In my experience, it is not unusual to peel the onion on these access controls and reveal that segregation of duties really does not exist as thought, because all roles are granted to everyone in DevOps and the organization is relying on individuals’ honesty to ensure compliance.
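
This analysis lends itself to a simple script.  A sketch, assuming a consolidated export of user-to-group assignments from Active Directory and the DevOps tools; the users and group names are invented for illustration:

    # Pairs of role sets that one person should never hold simultaneously.
    TOXIC_COMBINATIONS = [
        ({"Developers"}, {"Prod-Deployers"}),
        ({"Developers"}, {"Change-Approvers"}),
    ]

    # Hypothetical consolidated export: user -> groups across AD and tools.
    assignments = {
        "alice": {"Developers", "Prod-Deployers"},
        "bob": {"Developers"},
        "carol": {"Change-Approvers"},
    }

    for user, groups in sorted(assignments.items()):
        for left, right in TOXIC_COMBINATIONS:
            if groups & left and groups & right:
                print(f"SoD violation: {user} holds {sorted(groups & (left | right))}")

In my experience, running something like this against a real export is what turns “we have segregation because Jira shows different names” into an actual finding.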

DevOps also can suffer from a lack of segregation between production, quality assurance (QA), test and development environments.  This is because a lot of organizations that move to DevOps have the mistaken belief that the “Operations” and “Security” components become part of the development group.  The argument will be made that Agile is all about “breaking down silos”.  While that is true, the mistake they make is that Agile and Scrum were not a call to abrogate the knowledge and controls that all of the players bring to the table as separate disciplines.  The goal is to make the disciplines work together better to achieve a common goal in a Sprint.

Where this manifests itself most often is that developers have unfettered access to the production environment.  In a DevOps environment, it is not unusual to find developers scattered all throughout the environment.  They are developing code, they are operating production, they are diagnosing bugs, they are everywhere with no delineation of roles and responsibilities.  It is essentially a free for all.  Everyone pitches in where they need to be involved.

This organized chaos is supposedly “controlled” by Jira through its ticketing.  Agile advocates will claim that, since everything has a ticket (not always a true statement), they maintain segregation through Jira.  They will show the tickets to the assessor/auditor and point out that there are different names on the ticket for the developer, the QA person, the people who approved promotion, etc.  While this is true, as I described earlier, the access controls will show that virtually everyone given as evidence of segregation can fulfill any of those roles whenever they so choose.  By definition, that is not segregation of duties, because there are no actual controls in place to stop someone from running the whole process.

The bottom line in this discussion is that segregation of duty controls in an Agile environment are usually illusory.  As such, it is management’s responsibility to periodically ensure that segregation of duty controls are truly implemented and testable.

In the next post we will discuss documentation, change control and PCI scope in an Agile environment.

30 Apr 20

The Last (Hopefully) Scoping Discussion

Back in May 2017, the Council finally issued their long-awaited Information Supplement on Scoping and Network Segmentation.  Based on some questions I have received since then, there are apparently a lot of people who still have not read the official information supplement.

So, I am invoking “RTFM”, which means the first order of business is to get everyone to read the information supplement before asking questions.  The second order of business is to forget everything that was discussed in the Open PCI Scoping Toolkit, as the Council will tell you it does not apply and never did, even though they never offered any alternative until the publication of the aforementioned information supplement.  So, throw away all your copies of the Open PCI Scoping Toolkit, as it is not usable anymore.

With the Council’s information supplement, there was a change in terminology in how we refer to the various network segments and what is in scope.  As you will see, the Council’s approach has simplified the scoping classifications.  Because of the pervasiveness of the Open PCI Scoping Toolkit, I have included some references to the categories used in the Toolkit to clarify the Council’s terminology.

  • Cardholder Data Environment (CDE) Systems – These systems are always in scope for PCI compliance.  These are systems that either: (1) directly process, store or transmit cardholder data (CHD) or sensitive authentication data (SAD), OR (2) are on the same network segment (i.e., same network subnet or VLAN) as a system component that directly processes, stores or transmits CHD/SAD.  In the Open PCI Scoping Toolkit, these were the ‘Category 1A/1B’ systems.
  • “Connected To” or “Security-Impacting” Systems – These systems are also always in scope for PCI compliance.  These are basically systems that directly connect to systems in the CDE or could influence the security of the systems or data in the CDE.  In the Open PCI Scoping Toolkit, these were the ‘Category 2A/2B/2C/2D’ systems.  Unlike the Open PCI Scoping Toolkit, the Council chose to simplify things and have only one category versus the “shades of gray” approach.  That said, there are more detailed criteria defined on page 10 of the information supplement that define these systems.  Examples include, but are not limited to, Active Directory (AD) servers, RADIUS servers, TACACS+ servers, Security Information and Event Management (SIEM) solutions, Network Time Protocol (NTP) servers, Domain Name System (DNS) servers and Dynamic Host Configuration Protocol (DHCP) servers.  These systems and devices can also be considered “Shared Services” because they provide services not only to the CDE but also to out of scope systems.
  • Out of Scope Systems – There are four criteria for these systems: (1) the system must NOT process, store or transmit CHD/SAD, AND (2) the system cannot be on the same network segment or subnet as the CDE, AND (3) the system cannot directly connect to any other system or component in the CDE, AND (4) the system does not meet ANY of the criteria described for “Connected To” systems.  If all of these criteria are met, then the system is out of scope.  In the Open PCI Scoping Toolkit, these were the ‘Category 3’ systems.  The decision logic for all three categories is sketched in code below.
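
Taken together, the three categories reduce to a simple decision procedure.  Here is a minimal sketch in Python; the attribute names are my paraphrases of the criteria above, not language from the information supplement:

    from dataclasses import dataclass

    @dataclass
    class System:
        name: str
        handles_chd_sad: bool       # processes, stores or transmits CHD/SAD
        on_cde_segment: bool        # same subnet/VLAN as a system that does
        connects_to_cde: bool       # direct connectivity into the CDE
        impacts_cde_security: bool  # e.g., AD, DNS, NTP, SIEM serving the CDE

    def pci_scope(s: System) -> str:
        if s.handles_chd_sad or s.on_cde_segment:
            return "CDE"
        if s.connects_to_cde or s.impacts_cde_security:
            return "Connected To"
        return "Out of Scope"

    print(pci_scope(System("payment-app", True, True, True, True)))       # CDE
    print(pci_scope(System("ad-controller", False, False, True, True)))   # Connected To
    print(pci_scope(System("marketing-web", False, False, False, False))) # Out of Scope

Run against a complete inventory, a function like this forces the scoping conversation to happen attribute by attribute instead of by gut feel.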

As we have found out at the Community Meetings since the publication of the information supplement, the Council will demand you use their scoping terminology.  If you use the Open PCI Scoping Toolkit scoping categories, you will be asked to restate your questions or comments using their terminology.  So please from here on out use the Council’s terminology whenever discussing scoping categories.

Why Is Scoping A Problem?

Scoping is a problem because organizations think it is the QSA’s problem.  However, the PCI DSS states on page 10:

“At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data, and identify all systems that are connected to or, if compromised, could impact the CDE (for example, authentication servers) to ensure they are included in the PCI DSS scope.”

Eight times out of ten, it falls into the QSA’s lap to determine and confirm PCI scope, even though it is the assessed entity’s responsibility to define scope and the QSA’s role is merely to confirm that analysis.  This is why arguments over scope happen.  QSAs get into trouble because they follow the processes defined below and determine that the scope is not correct.  Had the assessed entity done its work, the argument likely would not have happened, or at least would not have been as big as it became.

The purpose of this post is to explain what your QSA is doing when they ask for all that documentation, and what your organization should be doing before the QSA even shows up.  For QSAs, this is what you should be doing to ensure that the scope of your engagement is correct.

Follow The Data

The first thing that people seem to get wrong about scope is fixating on the storage of CHD and ignoring the processing and transmitting of CHD/SAD.  This is a big reason why voice over IP (VoIP) gets missed.  VoIP typically never stores CHD/SAD.  But when customers are making payments over the telephone, CHD/SAD is being discussed and that is what makes the telephone system a CDE and therefore in scope for the PCI assessment.

The key to resolving this is to follow the CHD/SAD through your networks.  When he was a Council trainer, Art (“Coop”) Cooper was famous for constantly telling his classes to “follow the data.”  That is why data flow diagrams overlaid on your network diagrams are so important in determining PCI scope.  Done properly, these diagrams allow you to understand where the CHD/SAD flows through your organization (i.e., transmission), where it is processed, and where it ends up stored.

From that analysis, you can then document where, if anywhere, the CHD/SAD is encrypted and who manages the encryption keys.  If your organization manages the encryption keys, then you will need to prove and document that those intermediate devices between the encryption endpoints cannot decrypt the CHD/SAD in order to keep them out of scope.  If an outside third party manages the keys, then scope is reduced to where the encryption endpoint is in your environment.  For more about encryption and scope, see my Encryption Series of posts listed on the Post Series References page.

Once you have completed this activity, you have defined your CDE, and likely you will have more than one.  It is not unusual for organizations to have their VoIP network and solution as one CDE and another for their eCommerce or brick & mortar retail.  But there could be even more CDEs depending on your environment.

One other caveat on scoping the CDE.  Devices that are in the CDE but do not process, store or transmit CHD/SAD are still in scope for PCI compliance.  These include devices and systems such as jump servers, switches, routers, Active Directory domain controllers, DHCP servers, DNS servers and firewalls.

And that is the rub in this process.  It is not unusual for a client to determine that their CDE is larger than they originally believed.  This is particularly true in environments that are rapidly changing.  The reason is that changes occur that involve the processing or transmission of CHD/SAD, and people forget that those are also in scope because of their fixation on the storage of CHD.  So do not be surprised when this analysis turns up in-scope devices that were not believed to be in scope.
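
Following the data can even be tabletop-scripted.  In this sketch the flow records are invented; any segment that processes, stores or transmits CHD/SAD in any form becomes part of a CDE, which is how VoIP gets caught even though nothing is stored:

    # Invented data flow records: (source segment, destination segment, data type).
    flows = [
        ("call-center-voip", "payment-gateway", "SAD"),  # phone payments
        ("ecommerce-web", "payment-gateway", "CHD"),
        ("ecommerce-web", "marketing-db", "none"),
    ]

    # Any segment touched by CHD/SAD is in a CDE, whether or not the
    # data is ever written to disk there.
    cde_segments = {seg
                    for src, dst, data in flows if data in ("CHD", "SAD")
                    for seg in (src, dst)}
    print("CDE segments:", sorted(cde_segments))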

Connected To Systems

With the CDE(s) defined, we now need to define all the systems that connect to the CDE(s), hence the “Connected To” designation by the Council.  The reason Connected To systems are in scope is that they can influence the security of the systems and devices inside the CDE.  The term you will hear some people use is that Connected To systems can be “infectious” to systems in the CDE.

The first place to start is by reviewing the firewall rules or access control lists (ACLs) that segment your CDEs from the rest of your network segments.  You will likely find specific IP addresses for devices such as Active Directory domain controllers, security information and event management (SIEM) solutions, FTP, DNS, DHCP, RADIUS, TACACS+ and similar services.  It is also not unusual to see entire subnets of application and database servers allowed through.

The second place to investigate is the organization’s most recent network segmentation penetration testing results.  It still amazes me how, even after a detailed examination of the firewall rules and ACLs, devices still turn up with connectivity into the CDE because of human error in examining the rules and ACLs.  So use the network segmentation testing to double check your review of the firewall rules and ACLs.

Once you have identified all these networks you then need to make sure that you have an accurate inventory of all the systems and devices on these networks.  I typically ask for Nmap scans of the network subnets to make sure the inventory is complete.  I take the Nmap results and compare those to the organization’s configuration management database (CMDB) or whatever they use to track their system/device inventory.
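
That comparison is easy to script.  A sketch, assuming the scans were saved with Nmap’s -oG (grepable) output and the CMDB export is a one-IP-per-line text file (both file names are invented):

    import re

    def hosts_from_nmap_grepable(path: str) -> set:
        """Pull the addresses of 'up' hosts out of an Nmap -oG output file."""
        hosts = set()
        with open(path) as f:
            for line in f:
                m = re.match(r"Host: (\S+).*Status: Up", line)
                if m:
                    hosts.add(m.group(1))
        return hosts

    def hosts_from_cmdb(path: str) -> set:
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    scanned = hosts_from_nmap_grepable("subnet_scan.gnmap")
    inventoried = hosts_from_cmdb("cmdb_export.txt")

    print("On the network but not in the CMDB:", sorted(scanned - inventoried))
    print("In the CMDB but not responding:", sorted(inventoried - scanned))

Hosts in the first list are the ones that tend to start scope arguments.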

I also make sure that all the devices and systems found in this process are contained in the internal vulnerability scanning.  Again, it is not unusual to find that devices and systems are not being scanned quarterly as PCI requires, which is why this check is important.

Now We Have PCI Scope

With all of this done, we now know the scope of the environment and what must be assessed.  But, remember, while you are done for the current assessment, this all needs to be performed again next year.

23 Apr 20

Upcoming PCI Dream Team Events

On Tuesday, May 5, at 11AM ET (3PM UTC), the Dream Team will be doing a virtual session for Secure360. Go here to register for the Secure360 Conference.

Then, on Wednesday, May 13, we are holding a GDPR Birthday Party on BrightTalk to celebrate the second birthday of GDPR.  While we will be taking PCI questions, we will also be entertaining questions on GDPR.  To register for the BrightTalk session, go here.

We look forward to your attendance at both of these events.  As always, if you cannot attend either of these sessions, you are more than welcome to submit questions at pcidreamteam AT gmail DOT com.




