The Dilemma Of PCI Scoping – Part 1

Based on the email comments of late, there are apparently a lot of you out there who really do not like the Open PCI Scoping Toolkit.  I am not sure which post mentioning the Toolkit got you all wound up, but I have definitely hit a nerve.  From the comments in these messages, it is painfully obvious that the reason the SIG failed is that none of us agree about how much risk we are willing to accept.  And that is why no two PCI assessments are ever the same: organizations, and even QSAs from the same QSAC, can have different levels of risk tolerance.

I, too, have to admit that the Toolkit needs some work, but it is the best framework we have to start a discussion on the topic.  And that is exactly the problem: the topic.  Until the Toolkit appeared, scoping discussions had no real framework and everyone had their own definitions.  And as I have pointed out before, while there are a lot of people out there who might not know the nuances of the PCI DSS, it seems that everyone “knows” what is in scope and what is out of scope.

As a result, QSAs have found out through the “School of Hard Knocks” that everyone has their own view of scope and that there was no good guide to explain how or why to draw the line, let alone discuss the topic civilly in some cases.  I view the Toolkit as the bare minimum.  If an organization wants to get even more restrictive and have more categories, great, that is their prerogative.  However, if they want to go less than the Toolkit, in my very humble opinion, they can do it without me.  The bottom line is, regardless of whether you are using the Toolkit or your own approach, document the definitions of your categories and provide examples so that everyone can understand your rationale, and then discuss the impacts on your organization’s PCI scope.  Without such a document, we are not going to have productive discussions on scope.  That is why I lean toward the Toolkit: it gives me a starting point for a productive discussion.

We seem to all be able to agree on the Category 1 and 3 systems, because those are clear and easy to identify.  Category 1 systems are always in the cardholder data environment (CDE) because they directly process, store or transmit cardholder data or define the CDE and are therefore always in-scope.  Category 3 systems never, ever process, store or transmit cardholder data and are therefore always out of scope.
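The three categories can be sketched as a tiny decision rule.  This is my own illustration, not something from the Toolkit itself; the field names and the `classify()` helper are hypothetical:

```python
# Illustrative sketch of the three top-level categories.
# The System fields and classify() helper are made up for this example.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    handles_chd: bool      # stores, processes, or transmits cardholder data
    defines_cde: bool      # e.g. a firewall that segments off the CDE
    connects_to_cde: bool  # any connectivity to/from the CDE

def classify(s: System) -> int:
    if s.handles_chd or s.defines_cde:
        return 1  # always in scope -- it IS the CDE
    if s.connects_to_cde:
        return 2  # "connected to" -- the contentious middle ground
    return 3      # never touches CHD -- out of scope

print(classify(System("pos-server", True, False, True)))          # 1
print(classify(System("domain-controller", False, False, True)))  # 2
print(classify(System("marketing-wiki", False, False, False)))    # 3
```

The hard part, as the rest of the post argues, is everything hiding behind that single `return 2`.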

It’s those pesky Category 2 systems (the ones that connect in some way to/from the CDE) that get everyone’s undies in a bunch.  The group that developed the Toolkit did their best to break them out in ways that made sense but were still understandable and easy to use.  The more that I have thought about it, the more I think they came up with the best compromise. In my opinion, if you start adding any more categories or sub-categories to the existing definitions you will lose almost everyone due to complexity, including security people.  However, I also don’t believe that simplifying Category 2 is an answer either.

But if the discussion about Category 2 is tough, the fact that the Toolkit allows for Category 3 systems to exist on networks with Category 2 systems sends some security purists right over a cliff.  Their rationale is that Category 3 systems could be easily attacked and therefore provide a beachhead for compromising Category 2 systems.  While this is true, the idea of totally isolating Category 2 systems is not realistic for most organizations because of the ramifications of such a decision.

Why Isolation Is Not An Option

Security purists seem to think isolation of the CDE is the answer.  From an outsourcing perspective, that would provide isolation.  But in my experience, even outsourcing is not as isolated as one would think.  Here is why I think that isolation does not work whether doing it internally or through outsourcing.

Isolation means physically and logically separate directory systems with no trust relationships between the isolated environment and normal production.  I have seen all sorts of technical gymnastics intended to secure directory systems inside the CDE, yet they still leave too many holes in firewalls so that a trust relationship can exist.  If you are truly serious about isolation, you need truly separate directory systems, which also means duplicating credential management and introduces the possibility of errors when provisioning accounts.

The idea of leveraging your existing solutions for network and application management must be rethought as well.  This means separate security information and event management (SIEM) solutions, separate network management and monitoring, separate application management and monitoring, etc.  I think you get the idea.

Of course separate firewalls, routers, switches, intrusion detection/prevention, load balancers and other infrastructure are also going to be needed.  If you use RADIUS or TACACS+ for authentication, you will have to have separate systems for authentication to the infrastructure as well.  You will also need separate DNS and DHCP servers if you intend to provide those services inside the CDE.  Of course all of this duplicated infrastructure adds to the likelihood that mistakes will be made in configuration changes that could result in a breach of that isolation.

There is no “out of band” or virtual terminal access into your pristine isolated environment.  So you will need to provide operations and network personnel with separate PCs for access to the isolated environment, plus another physically separate system for your organization’s normal work environment.  Internal users with access to cardholder data (CHD) will also be required to have physically separate PCs for accessing the CDE.  This also means ensuring the security of network switches inside the CDE by using MAC filtering or “sticky” MAC approaches, so that only the PCs that should have access to the CDE actually do.  And of course wireless networking is totally out of the question.

But wait, you will also have to invest in some sort of portable media solution so that you can get data from the isolated environment to the “normal” production environment and vice versa.  No connected databases or application integration because that will require holes into and out of the isolated environment.  This is where outsourcing for isolation also comes up short.  But without application and data integration, the economies of scale shrink almost exponentially as more and more data must be manually moved between environments.  This drives the cost of isolation almost through the roof and typically makes isolation too expensive for most organizations.

Various government entities have all tried this approach with mixed results as far as breaches are concerned.  So in practice, the isolation approach will still leave your organization with some amount of risk that must be managed.

So if isolation is not the answer what is the answer?  In Part 2 I’ll discuss what I think works.


19 Responses to “The Dilemma Of PCI Scoping – Part 1”

  1. KenB
    June 5, 2015 at 12:31 PM

    The scoping for PCI is straightforward. It’s outlined in the DSS in the “Scope of PCI DSS Requirements” section, in the first sentence: “The PCI DSS security requirements apply to all system components included in or connected to the cardholder data environment.”

    Additionally, requirements 1.2.3 and 11.3.4 outline the steps organizations need to take to segment wireless and prove effectiveness of any claimed segmentation.

    If the systems are on the same LAN or have access into the CDE, they are required to comply with PCI DSS requirements. If a QSA accepts anything less, it shows a lack of integrity of the QSA.

    • June 5, 2015 at 3:09 PM

      While you might think it’s that black and white, there are varying shades of gray involved as to just how much in scope systems are or are not. Not all “connected to” systems are created equal because not all present the same level of risk. For example, there is a lot more risk presented by Windows domain controllers than there are by data entry workstations through a Citrix VDI solution. That is where the discussion on scoping always goes sideways because everyone brings their own risk acceptance model to the table and all of those need to be reconciled.

      • KenB
        June 5, 2015 at 5:48 PM

        I enjoy your blog and agree with you the majority of the time but have to disagree on this one. As a QSA, I don’t see risk as a part of the judgement for scoping, as it’s not outlined as a metric for scoping determination in the DSS. If a system component stores, processes, transmits or is connected to the CDE, it’s in scope.

        As for your example, VDI is not network segmentation but application control. I’ve had too many clients attempt to claim VDI or domain controls are segmentation; they’re not. This is one of the reasons I was glad the Council instituted 11.3.4 with the v3 DSS.

      • June 6, 2015 at 7:04 AM

        Maybe I really need to write a post on the subject of VDI and its need for firewalls to control access. I agree, VDI alone is not sufficient. Unfortunately, I have assumed that people understood that a firewall is required, but your comment makes it clear that it is not clear.

  2. Richard U
    August 1, 2014 at 12:22 PM

    Excellent post.

    PCI practitioners always have to look at the big picture when advising a client on network segmentation; there really isn’t such a thing as “going by the book” 100% with this one.

    What I would like clarified, however, are the various segmentation techniques (VLANs, routers, firewalls) that are permissible to the Council, and I think this should be stated clearly in the next PCI standard.

    • August 2, 2014 at 6:33 AM

      Wouldn’t we all like the Council to give more specifics?  But that is the dilemma with standards. In order to keep them applicable and relevant, you need to forgo specificity and arrive at a solution that can be applied to all technologies, not just one in particular.

      Such it is with network segmentation. The concept of segmentation can be accomplished through a variety of networking tools and techniques. The key is that it truly segregates one network from another and limits traffic between segments. Whether you accomplish that through VLANs with ACLs, firewalls and rules or routers with firewall feature sets is up to you.
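To illustrate, whatever tool you use, segmentation ultimately boils down to an explicit, reviewable statement of what traffic is allowed between segments.  Here is a minimal sketch of that idea; the segment names, ports and rules are made up for illustration and are not from any standard:

```python
# Hedged sketch: expressing "traffic between segments is limited" as data
# you can review. Segments, ports, and rules are illustrative assumptions.

ALLOWED = {
    # (source_segment, dest_segment): set of permitted TCP ports
    ("corp", "cde"): {443},   # e.g. TLS only into the CDE
    ("cde", "corp"): set(),   # nothing initiated outbound from the CDE
}

def permitted(src: str, dst: str, port: int) -> bool:
    # default-deny: anything not explicitly listed is blocked
    return port in ALLOWED.get((src, dst), set())

print(permitted("corp", "cde", 443))   # True
print(permitted("corp", "cde", 3389))  # False -- RDP into the CDE denied
```

Whether that table is realized as VLAN ACLs, firewall rules, or a router feature set is the implementation detail the Council deliberately leaves open.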

      Trust me, the last thing you want is some outside standards body dictating only one or two particular ways of doing something. Because when the new thing comes along such as software defined networks (SDN) for example, they will either have to change the standard or tell people that SDN is not acceptable.

      How do you think such a thing would go over? Yeah, probably not very well.

      If you are still having issues with network segmentation, see my Post Series Reference page and read all of my posts on the subject.

      • August 4, 2014 at 8:20 AM

        Is there such a thing as minimum requirements when thinking of segmentation using VLAN technologies or firewalls? Any specifics on types of firewalls? Can you use SOHO firewalls for SMB? Rules and policies?

        Thanks for all the support


      • August 5, 2014 at 4:50 AM

        There are no minimum requirements for network segmentation. The key is that network traffic is controlled to/from the cardholder data environment (CDE) with the minimum number of ports.

        Where some SOHO firewalls can come up short is with logging and monitoring of traffic. In addition, some SOHO firewalls are no more than port blockers and do not perform stateful packet inspection. Stateful packet inspection is a PCI DSS requirement (1.3.6).

  3. Robert
    July 31, 2014 at 4:47 PM

    Grateful for this post, as we are just beginning our journey into PCI Compliance.

  4. Noor
    July 29, 2014 at 2:35 PM

    I love your posts and have a question; we are arguing about Category 3 systems. Let’s take an example:
    Category 1 (CDE): Unix based;
    Category 2: Windows based (AD, logging, etc.);
    Category 3: let’s take the example of a Symantec/Altiris server which impacts just the Category 2 systems and has no connection to the CDE.
    If I follow the Toolkit, my Symantec server is out of my PCI scope; however, it does impact the security of the Category 2 systems. Shouldn’t the Symantec server be in scope even though there is no direct connection with the CDE? Some of us said you need only assess the controls at the Category 2 systems. Others argue that since it does impact the security of the CDE, it should be in scope!
    Your clarification is very welcome

    • July 31, 2014 at 4:00 AM

      If any device/system connects directly to Category 2 systems, then it must be in scope as well. Examples would include directory servers, DNS and DHCP servers, anti-virus servers, network management/monitoring servers, administrative workstations, etc. The rationale is that such devices/systems could become compromised and cause the other Category 2 devices/systems to be compromised as well.
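That chain of “connected to” can be pictured as a simple reachability walk outward from the CDE.  This sketch is mine, with made-up system names, just to show how scope propagates:

```python
# Hedged sketch of the "connected-to" chain described above: systems that
# reach the CDE are Category 2, and systems that manage those Category 2
# systems get pulled in too. All names and links are made up.

connections = {
    "cde-db": {"domain-controller", "siem"},  # Category 2: connect to the CDE
    "domain-controller": {"altiris"},         # Altiris manages a Cat 2 box
    "siem": set(),
    "altiris": set(),
    "marketing-wiki": set(),                  # no path to the CDE at all
}

def in_scope(start: str = "cde-db") -> set:
    # walk outward from the CDE across connectivity/management links
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(connections.get(node, ()))
    return seen

print(sorted(in_scope()))  # "marketing-wiki" never appears
```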

      That said, if your Altiris server is compromised, it probably doesn’t matter because the attacker would have administrator privileges and could therefore do whatever they wanted to whatever devices/systems they wanted.

  5. July 28, 2014 at 6:12 AM

    First, appreciate all you’re doing for the PCI community. Newbies and “PROs” alike learn from each others’ information and opinions. As to the Open PCI Scoping Toolkit, I believe it to be a very good starting point on what to consider when scoping. As an ISA for our organization, I usually get into discussions with our system owners and infrastructure folks in determining what is and is not in scope for “PCI” systems hosted by our commercial network. I’d like to see more straightforward implementation decisions and something that the community and PCI can agree to endorse and implement as a standard.

    • July 29, 2014 at 1:17 PM

      The problem is that too many people want a completely “black and white” solution and there is none for scoping. It’s the same issue we face with network segmentation. In the end, an organization needs to document their rationale and assumptions on scoping and then have them tested by a third party (QSA) to determine if what the organization is claiming is valid.

  6. July 27, 2014 at 2:42 PM

    Once again guru, love the article and think you’re right on with your opinion. As you know, I have been working the other side of the PCI wall, managing PCI programs, and obviously scoping is the foundation of the whole program. I think the Toolkit did a great job of creating a high- and medium-level tier system to organize the scope. After 8 years of managing PCI scopes, my opinion is they got it right and you got it right, meaning you have to find a compromise with the T2 systems but give yourself the flexibility to move the line back or forward depending on where you want to draw your risk line.

    your friend


    “If it’s too loud, you’re too old”

  7. Terry
    July 27, 2014 at 12:40 PM


    I totally agree; it’s the best thing we have out there, in my opinion, that can give an entity something tangible they can get their teeth stuck into, and also something bringing consistency between QSAs, especially those working for the same QSAC (those can be very awkward conversations to have at the best of times, even without them being about QSAs from the same company). Yes, it doesn’t fit all situations, but as long as the decisions you make are rational and defendable, as the post suggests, you can’t go far wrong. As to Cat 2s, you just make a decision based on the role each plays and your understanding of the whole environment. For example, take two devices with differing risk/controls but both classed as Cat 2b going through a firewall.

    Example 1: The device is only allowed to communicate over ICMP (ping) to a network device categorised as Cat 1b for availability tracking.

    Example 2: The device is allowed to communicate over numerous legacy ports to a database device categorised as Cat 1a for MI stats.

    As stated, they are both Cat 2b, but due to the role/risk and nature of the second, I would delve deeper and apply more controls to it than to the first.
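Terry’s two examples can be expressed as data to show why they deserve different levels of scrutiny.  The ports and the scoring heuristic below are purely illustrative assumptions, not anything from the Toolkit:

```python
# Sketch of the comparison above: two Cat 2b devices with very different
# footprints into the CDE. Ports and the weighting are illustrative only.

devices = {
    "uptime-monitor": [],                 # Example 1: ICMP ping only
    "mi-stats-client": [1433, 135, 445],  # Example 2: legacy ports to a DB
}

def review_weight(open_ports: list) -> int:
    # crude heuristic: every TCP port opened into the CDE earns extra scrutiny
    return 1 + len(open_ports) * 10

# list the devices from most to least review effort
for name in sorted(devices, key=lambda n: review_weight(devices[n]), reverse=True):
    print(name, review_weight(devices[name]))
```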

    • July 27, 2014 at 3:10 PM

      Your examples are great. A device allowed to ping something in the CDE is a lot less risk than a device communicating with another device in the CDE.

      However, when I get done, we’ll see if we’re still in agreement. LOL!

  8. July 27, 2014 at 7:27 AM

    Excellent post once again. We utilize the Open PCI Scoping Toolkit and have found it to be the most intuitive tool produced to date for PCI scoping.
