Posts Tagged ‘scoping’

08 Aug 15

The Third Party Dilemma

I am starting to see more and more of this situation with my mid-size and larger clients: a third party using the client’s network to process and transmit cardholder data (CHD).

Where I consistently encounter this is at internal cafeterias, where a third party operates the cafeteria and provides its own point of sale (POS) solution to process card transactions. Another common example is the mailroom operated by a third party, where employees can buy stamps and ship personal packages and the third party takes cards for payment. Health care facilities, particularly hospitals, are another common case: the cafeteria is operated by one third party, the gift shop by another, the pharmacy by yet another, and so on. Going forward, I expect this situation to become even more commonplace as organizations outsource more back office functions to third parties and focus on their core business.

A lot of these third parties have ended up on clients’ networks. They may or may not have been segmented away from the rest of the client’s network, but they typically sit behind the client’s firewalls and other security measures. With the focus on requirements 12.8 and 12.9 regarding the management of third parties, these outsourcing environments are receiving new scrutiny as clients reassess how these third parties are provided network and Internet access, as well as the PCI compliance, contract and other regulatory and legal issues involved.

So what are your options if you are involved in such arrangements? Here are some thoughts.

  • Ignore the problem and hope it goes away. Believe it or not, a lot of organizations that have discovered they are chock full of such situations have just thrown up their hands and put off addressing the issue. Unfortunately, if the organization is required to perform a PCI assessment, this is not an option and they end up addressing it as part of their own assessment. Nor does the problem simply go away, because the third parties will ask the organization for an Attestation of Compliance (AOC) covering the network services being provided to them.
  • Wide Area Ethernet. In this scenario, your organization becomes a telecommunications carrier providing Internet access to any third party over a separate WAN. This requires Ethernet WAN or Metro Ethernet equipment that supports WAN grade service versus LAN grade service. Third parties are provided access to the Internet but must provide their own infrastructure such as firewalls and switches. The bottom line is that your organization becomes no different from any other carrier such as Verizon or AT&T and will be out of scope.
  • Wide Area Wi-Fi. Similar to Wide Area Ethernet, using the same WAN infrastructure equipment but using Wi-Fi (802.11a/b/g/n/ac) to deliver network access. While this avoids installing wiring infrastructure, it means operating a secured Wi-Fi network separate from your existing Wi-Fi. In addition, depending on how it is engineered, it could suffer from device overload if all of your third parties are in the same general area of your facility. But as with Wide Area Ethernet, your organization is considered a carrier and out of scope.
  • Cellular. Another wireless alternative is putting your third parties on cellular connections. Where this can be problematic is in facilities that have poor cellular connectivity. In those situations, the organization may have installed cellular repeaters for the carriers throughout the facility to improve cellular signals. However, not all of the facility may have repeater coverage where the third parties are located, so there could be additional costs involved to get the coverage needed. Like Wi-Fi, cellular repeaters have limitations on the total number of connected devices, so areas where employees and the third parties congregate, such as cafeterias, could have issues with cellular access at breakfast, lunch and dinner times. This can be mitigated, but heavy usage times could still create service issues for all users.
  • P2PE or E2EE. Use of either of these solutions depends on your third parties’ ability to use such a solution with their POS. With these solutions, you can create a separate VLAN for your third parties, they can all attach their points of interaction (POI, aka card terminals) to that VLAN, and the traffic will be encrypted out to their respective processors. Where this solution does not work is when a third party uses a POS solution that does not support P2PE/E2EE. If not all of your third parties support P2PE/E2EE, you may need a second solution for the rest, so it may be simpler to use one of the other solutions for consistency.
  • Physically Separate Third Party Network. This is a feasible option if you want to avoid the Wide Area equipment costs and requirements. However, the equipment used must be physically separate from your existing LAN equipment so that your organization qualifies as a carrier rather than a service provider. As with the Wide Area solutions, you will not be providing firewalls or any other security services, just access to the Internet. Any security measures on this network would be the responsibility of each third party.
  • Separate Third Party VLAN. This is the option I typically encounter in most organizations. The organization’s network has a VLAN separate from its other networks but still relying on the organization’s infrastructure. The problem here is that this is not a carrier network because it is not physically separate from the internal network. Yes, there are ACLs in place that isolate the VLAN from the others, but the infrastructure is shared and could come into scope if changes break that isolation. This can still be acceptable if all third party traffic is encrypted, such as with a VPN or P2PE/E2EE. Where this solution gets into trouble is when the organization providing the VLAN is also doing the encryption on behalf of the third parties. In the end, a VLAN solution will have to be assessed as a service provider arrangement because the organization is providing network access as a service, not as a carrier.
  • Contract with their own carriers. This is an option, but potentially a rather messy one, because your third parties will need to contract with their own carriers, which could create a wiring nightmare in your facilities, particularly when new third parties come in or change carriers. There are ways to manage this, but it requires planning and working with your third parties to make the effort successful.

These approaches all have their pluses and minuses, but hopefully you now have some ideas as to how to handle this issue.

20 Mar 15

You Make The Rules

At last year’s Community Meeting, there were a couple of instances where members of the PCI SSC reminded organizations that it is up to them to set the parameters of their PCI assessment. A lot of people in attendance took those comments to mean it is up to merchants and service providers to define the scope of their assessment, but it goes further than that.

For years organizations have complained that they receive varying advice from different QSAs even when the QSAs are from the same firm. Obviously this situation is frustrating for not only merchants and service providers, but for the QSAs as well.

To address this situation, the Council is telling all PCI stakeholders that it is up to the organizations being assessed to define the rules of the assessment. Not just the scope, but also the level of risk the organization is willing to accept. So what does that mean? I intend to clarify that in this post.

And to be extra clear, this is not some excuse to create a set of rules that allow you to skate by. You must show your work and document your rationale. If your QSA has honest concerns about your work, then expect some push back and the possibility of bringing your acquiring bank into the discussion. If your acquiring bank agrees with your rules, then you need to get that in writing from them and everyone should move on. But if your acquiring bank agrees with your QSA, then expect to make changes to your rules.

Scoping

Scoping is the responsibility of the organization being assessed, not your bank’s or your QSA’s responsibility. This requirement is even explicitly called out in the PCI DSS on page 10, second paragraph, second sentence.

“At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

The first step in defining scope is to define the rules of how to scope. This is the toughest part of scoping.

Complain about it all you want, but the Open PCI Scoping Toolkit is a good framework to start the discussion about how to scope a PCI assessment based on the risk presented by devices. My first recommendation is to stick with the three categories. In my experience, organizations that create more categories just end up creating more confusion and consternation. However, there is no way to go with fewer categories without putting your entire network in scope.

Categories 1 and 3 are not the ones in question. My personal opinion is that having two sub-categories in category 1 is silly. Devices and systems that directly process, store or transmit cardholder data (CHD), or that are in use within or define the cardholder data environment (CDE), are category 1 regardless. Category 3 devices/systems are those that never, ever come into contact with the CDE. In my opinion, these two categories are clear cut and pretty straightforward.

Where the arguments occur, or will occur, most often is over the category 2 devices and systems. You can accept the four sub-categories that are defined in the toolkit or come up with your own. If you are going to define your own sub-categories for category 2, the key point to remember is that category 2 devices/systems have direct or indirect influence over the category 1 devices/systems because they have access in some way to the CDE. The sub-categories are used to define the level of risk or threat these category 2 devices/systems represent to the category 1 devices/systems based on the type of access the category 2 devices/systems have to the CDE.

The difficulty with setting the category 2 sub-categories is that everyone has their own risk tolerance. It is those differences in tolerance that create the problems. Security personnel tend to be more conservative because it is their butt on the line if something bad happens. The further people get away from security, the more risk tolerant they seem to become because they do not have a complete understanding or appreciation of the minutiae.

Experience says that however you define category 2 devices/systems, do it as simply and clearly as possible. I would highly recommend you keep the number of sub-categories to a minimum and that you use examples for each sub-category so that readers understand where devices/systems fall. The key outcome of this effort is a formal document, similar to the Open PCI Scoping Toolkit, that defines your categories, gives the rationale for those categories, provides examples for each category and is approved by executive management.

Once you have defined your scoping categorization rules, then it becomes an exercise in categorizing your inventory of networks, devices and systems based on your criteria. Do not be surprised if during this process networks, devices and systems you thought were out of scope suddenly come into scope. This is not unusual because prior to this point you were just taking a scientific wild ass guess (SWAG) as to what was in-scope.

Risk Assessment

Once you have your scoping categories set, you need to roll that methodology into your risk assessment so that you can properly assess your risk. By doing so most organizations find that their risk assessment shows more devices/systems at higher risk because they are now in-scope for PCI compliance.

Take your scoping categories and convert them to weights for evaluating risks. For example, category 1 devices/systems would carry the highest risk weighting available. Category 3 devices/systems would carry the lowest risk weighting allowed. Category 2 systems would carry risk weights somewhere between the highest and the lowest based on how you have defined your category 2 sub-categories.
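To make that concrete, here is a minimal sketch of such a weighting scheme in Python. The category labels follow the Open PCI Scoping Toolkit, but the weights, names and scoring are purely my own illustration, not anything prescribed by the Toolkit or the Council.

    # Illustrative only: the weights are assumptions, not PCI SSC or Toolkit guidance.
    CATEGORY_WEIGHTS = {
        "1":  1.00,  # in, or defines, the CDE - highest weighting
        "2a": 0.80,  # provides security services to the CDE
        "2b": 0.70,  # can initiate connections into the CDE
        "2c": 0.50,  # only receives connections from the CDE
        "2x": 0.40,  # indirect administrative access via another category 2 system
        "3":  0.10,  # no connectivity to the CDE - lowest weighting
    }

    def weighted_risk(base_risk_score: float, category: str) -> float:
        """Scale a system's base risk score by its PCI scoping category weight."""
        return base_risk_score * CATEGORY_WEIGHTS[category]

    # Example: a backup server rated 8 out of 10 that provides services to the CDE (2a)
    print(weighted_risk(8.0, "2a"))  # 6.4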

Define Your Terminology

This is very straightforward, but most organizations miss it. Organizations need to define their terminology, in particular what the organization means by ‘significant change’, ‘period’ and ‘periodically’. I wrote a post on this very topic a while back, so I will not bore you here with that discussion. These are very important definitions that must be set.

However, there are likely other terms that should be defined for your QSA and anyone else without intimate knowledge of your technology environment. My personal pet peeve is the lack of definitions for acronyms that are commonly used by your organization but might be easily misunderstood by anyone else. It never ceases to amaze me when people inadvertently treat an outsider as an “idiot” by speaking in acronyms the outsider has no way of understanding because they are not insiders.

My favorite example of this was a person who kept referring to the ‘HSM’ during our interview. Given this was a PCI assessment, my assumption was that they were referring to a hardware security module; however, the way they used ‘HSM’ seemed to be in the wrong context. So I asked, and they confirmed my suspicion: they had been referring to a custom application called high-level system messaging. Had it been in a mainframe environment, HSM could have been a reference to hierarchical storage management. This is why a glossary of terms and acronyms is a good thing to build, not just for PCI but for any newcomer to your environment.

Discuss This With Your QSA

Finally, do not keep your QSA in the dark as you work through this process. As you create your documentation and classify your devices and systems, run this all by your QSA to get their buy in before they start your assessment. Most organizations will not run into too much push back from their QSA unless they are trying to set the bar too low in a vain attempt to make the PCI compliance process too easy, i.e., checking a box. The last thing you should do is spring this on your QSA the day they start your assessment. And that includes if you update or change your rules between assessments.

All of this effort should result in a much more straight forward assessment because you have defined the rules and criteria to which you are to be assessed.

08 Oct 14

Do Not Jump To Conclusions

A QSA apparently posed a question to the Council regarding the scope of wireless headsets used in a client’s call centers.  In this case, the headsets rely on DECT technology.  The response from the Council was as follows:

“Although DECT is not specifically referenced in PCI DSS v3, it is a digital wireless telephone technology and given the scenario you are describing, PCI DSS requirement 4.1 and 4.1.1 would apply.”

The resulting LinkedIn discussion centered on whether the DECT headsets are in-scope, which, of course, they are.  However, the implication of the discussion was that, if in-scope, could the DECT headsets be considered PCI compliant?  Let us walk through the issue and develop a position on whether DECT headsets are a risk and whether they can be considered PCI compliant.

For those of us who do not have the PCI DSS memorized, requirement 4.1 states:

“Use strong cryptography and security protocols (for example, SSL/TLS, IPSEC, SSH, etc.) to safeguard sensitive cardholder data during transmission over open, public networks, including the following:

– Only trusted keys and certificates are accepted.

– The protocol in use only supports secure versions or configurations.

– The encryption strength is appropriate for the encryption methodology in use.”

Requirement 4.1.1 states:

“Ensure wireless networks transmitting cardholder data or connected to the cardholder data environment, use industry best practices (for example, IEEE 802.11i) to implement strong encryption for authentication and transmission.”

For those of you not up on DECT, it does not rely on strong encryption as defined by NIST and other recognized sources.  The encryption used is 64-bit, almost as lame as DES.  But it gets worse; the protocol does not require the use of a secure authentication method to pair devices to their base station.  As a result, it is relatively easy to force authentication to a rogue base station.  To add to the threat, the theoretical transmission distance is 500m or around a third of a mile.  So it has the capability of transmitting fairly long distances.

Sounds like a PCI and general security train wreck, does it not?

Now before we all go off and tell every one of our call center clients that DECT is no longer allowed, let us all take a big deep breath and look at this issue clearly.

The first question that should always be asked is what the real world likelihood of such an attack is.  In this case, would an attack on 20, 50, 100 or more DECT headsets make sense?  Probably not and here is why I believe that to be the case.

You would need as many rogue devices as actual headsets to surreptitiously pair with each individual headset in order to get the conversations.  This would require a large van with racks of notebooks in order to accomplish such an attack.  And that assumes that the transmission distance quoted in the standard can be relied upon.  However, based on the use of my own DECT phones at my home, I can tell you that my phones have issues 30’ away from my house, let alone a third of a mile away.

If that isn’t enough, the DECT cards required are no longer manufactured.  If you are lucky, you may be able to get them on eBay from Europe for about €25 or $30 USD.  I would take this as a good indication that DECT hacking was never a big thing.  But it gets worse; the cards use the PCMCIA interface (superseded in 2003) and, according to the limited number of eBay sellers, do not work reliably for hacking DECT when using the requisite adapter cables to connect them to modern computers via USB.  As a result, the hack would also require a large number of old notebooks to execute.

The final nail in this coffin is that the known software exploit, ‘deDECTed’, appears to have languished in development (most likely because of the situation with the PCMCIA cards) and was only included in one distribution of BackTrack, now Kali Linux.  You can still download it, but without the requisite hardware, you are pretty much at a standstill.

While all of the tools exist, is this threat realistic?  Why would someone go through all of this effort when it would probably be a thousand times easier to hack the call recording system?  Hacking the call recording system would skip all of the rigmarole of surreptitiously going after the headsets and go straight to searching the recordings.

In my opinion, while there is a threat, the risk of that threat occurring is low.  Based on this analysis, I would feel comfortable judging these DECT headsets as being PCI compliant and would provide this analysis in my work papers so that reviewers could understand my rationale.

However, this is me talking from my willingness to accept this risk.  Other people and organizations might not be quite so willing and may decide to not allow DECT headsets or phones.  That is their decision but it should be made with information and discussion such as was provided here and not in a vacuum as a “knee jerk” response.

By the way, this technique of capturing people’s conversations is much easier to do with Bluetooth and such tools exist in Kali Linux to accomplish that attack.  However, the same issue of one rogue device to one Bluetooth device still exists.  Good news there, Kali Linux is available for smartphones, so you only need a lot of smartphones to execute the attack.  That is mitigated by the fact that the distance for Bluetooth is only 30’ or 9m.  So as long as a call center enforces a policy of no personal or foreign technology on the call center floor, then any headsets should be safe.

The take away from this post is to think through the implications of the Council’s directives before you go off advising organizations that certain technologies are not PCI compliant.  While I agree with the Council’s answer to the question, it did not immediately mean that the technology was now verboten just because the technology’s basic characteristics appeared to make it non-compliant.  QSAs and organizations need to assess the threat, the risk of the threat occurring and then make a decision as to whether or not that threat is something to be managed or avoided.

01 Aug 14

The Dilemma Of PCI Scoping – Part 2

In Part 1 I discussed how isolation is an answer, but probably not a viable answer for all but the most security-conscious of organizations, such as the military, defense contractors or those that can afford that sort of painstaking luxury.  So unless you consider everything in scope for PCI compliance, is there a viable way to reduce scope?

Before we get to that though, we need a quick discussion on risk management, as the solution is all predicated on the identification and management of risk.  Because if you cannot do an appropriate risk assessment, then the only choice you really have is to consider everything in scope, and I know the vast majority of you do not like that approach.

Assessing Risk

In order for my proposed solution to have a chance at working properly, an organization needs to understand its risks, what risks will be accepted and managed, and what it will take to mitigate the residual risks.  Doing a risk assessment is the way to do that, but most organizations avoid such an assessment for a variety of reasons.  The most common reasons I have heard are:

  • The risk assessment framework is too complex,
  • We tried this once before and never got any meaningful results,
  • We were never able to agree on the risks and their scores, or my personal favorite,
  • This spreadsheet is our risk assessment (it is not, but see Tom Benhave’s blog post on the topic as he does a much better job than I could explaining this topic).

The reason risk is such a tough topic is that everyone has their own perspective on risk: good, bad or otherwise.  There are numerous articles and presentations on this phenomenon, but my favorite is from Lance Spitzner of SANS, who wrapped his presentation around security awareness training and discusses risk at the start of it, describing why people are such poor judges of risk.  He uses various statistics regarding events that can happen in people’s lives to illustrate this fact.  My personal favorite example of just such a statistic is that people have a greater chance of dating a supermodel than of winning the PowerBall lottery.  Granted, both are extremely long odds, but dating a supermodel is still significantly more likely than winning the PowerBall.

The bottom line is that, without a decent risk assessment, an organization has no way to know the amount of risk it is willing to accept and how it will manage that risk.  The Council has repeatedly said that PCI compliance is supposed to consider risk and take a “risk-based” approach.  However, the problem is that we each have our own opinion of risk and what risks we are willing to take on.  But at the end of the day, no matter what an organization does, there is going to be risk.  The question is, “Are these risks ones my organization is willing to take on?”  That question can only be answered by a risk assessment and an understanding of how risks can be managed and mitigated.

Which risks your organization chooses to accept and how it manages the remaining risks are decisions only your organization can make.  This is why the PCI DSS and all other security frameworks require an annual risk assessment to be performed.  The risk assessment process provides a framework for an organization to document its risks, understand those risks (size, frequency of occurrence, costs, etc.) and how they can be managed or mitigated, and then agree to which risks it will take on and how it will manage and/or mitigate them.

From here on we will assume that the organization has a valid risk assessment and that they are willing to take on the risks presented by the example I will discuss.

Managing Risk

Today’s integrated and connected world just does not lend itself to an isolationist approach due to the volume of information involved, business efficiencies lost and/or the operational costs such an approach incurs.  As a result, organizations need to take a hybrid approach of heavily protecting some components and taking on and managing the risks inherent to such an approach.

When it comes to the IT side of risk management and mitigation, most organizations rely on some form of near real time monitoring, through collected system/event log data and other sources, to monitor their environment(s).  Unfortunately, where this approach comes up short is that there are too many alerts to follow up on, so alerts go unaddressed.  Almost every QSA can tell you about a discussion with operations personnel where the statement, “Oh, that’s a false positive alert, so I don’t have to worry about it” has been made.

This is the first problem you must address: make sure that this attitude never creeps back into the people who monitor your alerts.  Anyone in operations who “knows” an alert is a false positive either: (1) needs re-education, or (2) is telling you that your organization needs to seriously re-tune its alerting mechanism(s).  All you have to do is read the Target and Neiman Marcus press reports if you need examples of how bad things can get when personnel blow off alerts because they believe they are not accurate.

In my experience, a lot of these problems are the result of bad or incomplete implementations of these systems.  Unfortunately, there are a lot of people out there that think that these solutions are more like a Ronco Rotisserie Oven where, as they famously say in the ads, “you can set it and forget it.”  Yes these solutions may be “appliances”, but that is where the comparison ends.

Security information and event management (SIEM) systems require fairly constant tuning and tweaking, beyond their own software and signature updates, to minimize false positive alerts as an organization’s networks and systems change.  Yet time and again, I encounter monitoring and alerting systems that were put in place years ago (typically to meet PCI compliance) and have not been adjusted or changed since, while all around them changes have been occurring that affect their operation.

When interviewing the people responsible for these systems, I hear statements such as, “Yeah, that alert started to appear when we implemented [name of change].  We were told to just ignore it.”  When asked why they have not tuned it out of the SIEM, the answer is either that they do not have time, they do not know how, they do not have the rights to do that or, my personal favorite, that the head of security or the security committee will not let them change it.

The reason this issue does not get addressed is that it has no visibility since alerts are tucked into the various monitoring tools.  So, the best way to address this situation is to give it visibility by automatically feeding all alerts into an organization’s help desk system.  This gives all alerts immediate visibility by putting them in an automated tracking and escalation process.  It also allows for triage and investigation activities to be documented and, based on the results of those triage and investigation activities, having the alert assigned to the right people/groups to address the alerts.
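For those who like to see what that looks like in practice, here is a bare-bones sketch of an alert handler that opens a ticket for every alert it receives.  The help desk URL, field names and severity mapping are all assumptions for illustration; in reality most SIEM and help desk products have their own connectors for this.

    import json
    import urllib.request

    # Hypothetical endpoint; real help desk products have their own APIs and connectors.
    HELPDESK_URL = "https://helpdesk.example.com/api/tickets"

    def open_ticket_for_alert(alert: dict) -> None:
        """Create a help desk ticket for a SIEM alert so it enters tracking and escalation."""
        ticket = {
            "title": "SIEM alert: " + alert.get("rule", "unknown rule"),
            "description": json.dumps(alert, indent=2),
            "priority": "high" if alert.get("severity", 0) >= 7 else "normal",
            "queue": "security-triage",  # triage decides who ultimately owns the alert
        }
        request = urllib.request.Request(
            HELPDESK_URL,
            data=json.dumps(ticket).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            print("Ticket created, HTTP status", response.status)

    # Example alert as it might arrive from a SIEM webhook
    open_ticket_for_alert({"rule": "Repeated failed logins", "severity": 8, "host": "pos-gw-01"})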

“Whoa, let’s not get crazy here,” I am sure some of you are YELLING at the screen.  There is no doubt this is a very brave step to take because it will potentially uncover something you probably did not want to advertise given the state of your existing alerting.  But that is typically only a short-term problem.  Unfortunately, it may be the only way to get the underlying problem of tuning and tweaking of the alerting systems completed and constantly addressed.

But taking such a step is not entirely a bad thing, at least in the long run.  A side benefit of doing such a thing is that it will focus an organization on triage activities for classifying the urgency of the alerts.  Not all alerts need immediate action, but a lot of them require immediate investigation and can then be put on a back burner.  It will also give visibility to the number of alerts being dealt with on a daily basis after triage.  That typically helps identify and justify why more staff are required to deal with the onslaught of alerts that need to be researched.

Another change that organizations should make is adding a box to their change control form that indicates what impact a change will have on the SIEM environment.  At a minimum, these three basic questions need to be answered in regard to the SIEM or other monitoring systems (a minimal sketch of such form fields follows the list).

  • Do new alerts need to be added and, if so, what do they need to monitor and what are the alerting thresholds?
  • Do existing alerts need to be modified and, if so, what modifications are needed?
  • Are there alerts that are no longer needed?
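To show how little is involved, here is a minimal sketch of those three questions captured as structured fields on a change request.  The field and class names are mine, not from any particular change management product.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SiemImpact:
        """SIEM/monitoring impact section of a change request (illustrative field names)."""
        new_alerts: List[str] = field(default_factory=list)       # what to monitor and at what thresholds
        modified_alerts: List[str] = field(default_factory=list)  # existing alerts needing changes
        retired_alerts: List[str] = field(default_factory=list)   # alerts no longer needed

    @dataclass
    class ChangeRequest:
        summary: str
        siem_impact: SiemImpact = field(default_factory=SiemImpact)

    # Example: a firewall change that needs one new alert and retires an old one
    change = ChangeRequest(
        summary="Open TCP/443 from VLAN 120 to the payment gateway",
        siem_impact=SiemImpact(
            new_alerts=["Alert on denied traffic from VLAN 120 to the payment gateway"],
            retired_alerts=["Old VLAN 110 to payment gateway deny alert"],
        ),
    )
    print(change.siem_impact.new_alerts)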

If you address these areas, you should have monitoring and alerting taken care of with a built in feedback loop to keep it that way.

In Part 3, I am going to wrap up my discussion on PCI scoping with a discussion of Category 2 and 3 systems.

27 Jul 14

The Dilemma Of PCI Scoping – Part 1

Based on the email comments of late, there are apparently a lot of you out there that really do not like the Open PCI Scoping Toolkit.  I am not sure exactly which post mentioning the Toolkit got you all wound up, but I have definitely hit a nerve.  From the comments in these messages, it is painfully obvious that the reason the SIG failed was that none of us are in agreement about how much risk we are willing to accept.  And that is why no PCI assessment is ever the same: organizations, and even QSAs from the same QSAC, can have different levels of risk tolerance.

I, too, have to admit that I think the Toolkit needs some work, but it is the best framework we have to start a discussion on the topic.  And that is the problem, the topic.  Until the Toolkit appeared, scoping discussions had no real framework and everyone had their own definitions.  And as I have pointed out before, while there are a lot of people out there that might not know the nuances of the PCI DSS, it seems that everyone “knows” what is in scope and what is out of scope.

As a result, QSAs have found out through the “School of Hard Knocks” that everyone has their own view of scope and there was no good guide to explain how or why to draw the line let alone discuss the topic civilly in some cases.  I view the Toolkit as the bare minimum.  If an organization wants to get even more restrictive and have more categories, great, that is their prerogative.  However, if they want to go less than the Toolkit, in my very humble opinion, they can do it without me.  The bottom line is, regardless of whether you are using the Toolkit or have your own approach, document the definitions of your categories and provide examples so that everyone can understand your rationale and then discuss the impacts on your organization’s PCI scope.  Without such a document, we are not going to have productive discussions on scope.  That is why I lean toward the Toolkit, it gives me a starting point to get a productive discussion started.

We seem to all be able to agree on the Category 1 and 3 systems, because those are clear and easy to identify.  Category 1 systems are always in the cardholder data environment (CDE) because they directly process, store or transmit cardholder data or define the CDE and are therefore always in-scope.  Category 3 systems never, ever process, store or transmit cardholder data and are therefore always out of scope.

It’s those pesky Category 2 systems (the ones that connect in some way to/from the CDE) that get everyone’s undies in a bunch.  The group that developed the Toolkit did their best to break them out in ways that made sense but were still understandable and easy to use.  The more that I have thought about it, the more I think they came up with the best compromise. In my opinion, if you start adding any more categories or sub-categories to the existing definitions you will lose almost everyone due to complexity, including security people.  However, I also don’t believe that simplifying Category 2 is an answer either.

But if the discussion about Category 2 is tough, the fact that the Toolkit allows Category 3 systems to exist on networks with Category 2 systems sends some security purists right over a cliff.  Their rationale is that Category 3 systems could be easily attacked and therefore provide a beachhead for compromising Category 2 systems.  While this is true, the idea of totally isolating Category 2 systems is not realistic for most organizations because of the ramifications of such a decision.

Why Isolation Is Not An Option

Security purists seem to think isolation of the CDE is the answer.  From an outsourcing perspective, that would provide isolation.  But in my experience, even outsourcing is not as isolated as one would think.  Here is why I think isolation does not work, whether you do it internally or through outsourcing.

Isolation means physically and logically separate directory systems with no trust relationships between the isolated environment and normal production.  I have seen all sorts of technical gymnastics to secure directory systems inside the CDE that can still leave too many holes in firewalls so that the trust relationship can exist.  If you are truly serious about isolation, then you need to have true isolation and that means physically and logically separate directory systems.  This also means duplication of credential management and introducing the possibility of errors when provisioning accounts.

The idea of leveraging your existing solutions for network and application management must be rethought as well.  This means separate security information and event management (SIEM) solutions, separate network management and monitoring, separate application management and monitoring, etc.  I think you get the idea.

Of course separate firewalls, routers, switches, intrusion detection/prevention, load balancers and other infrastructure are also going to be needed.  If you use RADIUS or TACACS+ for authentication, you will have to have separate systems for authentication to the infrastructure as well.  You will also need separate DNS and DHCP servers if you intend to provide those services inside the CDE.  Of course all of this duplicated infrastructure adds to the likelihood that mistakes will be made in configuration changes that could result in a breach of that isolation.

There is no “out of band” or virtual terminal access into your pristine isolated environment.  So you will need to provide separate PCs for operations and network personnel so that they have access to the isolated environment, plus other physically separate systems for your organization’s normal work environment.  Internal users with access to cardholder data (CHD) will also be required to have physically separate PCs for accessing the CDE.  This will also mean ensuring the security of network switches inside the CDE by using MAC filtering or “sticky” MAC approaches to ensure that only the PCs that should have access to the CDE do have access.  And of course wireless networking is totally out of the question.

But wait, you will also have to invest in some sort of portable media solution so that you can get data from the isolated environment to the “normal” production environment and vice versa.  No connected databases or application integration because that will require holes into and out of the isolated environment.  This is where outsourcing for isolation also comes up short.  But without application and data integration, the economies of scale shrink almost exponentially as more and more data must be manually moved between environments.  This drives the cost of isolation almost through the roof and typically makes isolation too expensive for most organizations.

Various government entities have all tried this approach with mixed results as far as breaches are concerned.  So in practice, the isolation approach will still leave your organization with some amount of risk that must be managed.

So if isolation is not the answer what is the answer?  In Part 2 I’ll discuss what I think works.

09 Nov 13

Removing The Drama Of A PCI DSS Assessment

I had to prepare a presentation for a client a while back giving them some tips on how to prepare for and get through a PCI assessment as easily as possible.  I thought it might be good to share those thoughts.

Trust But Verify

This famous quote from US President Ronald Reagan is the mantra of a PCI assessment.

The PCI DSS is based on the “trust” that organizations are complying with the PCI DSS.  However, self-assessment processes and QSAs are used to “verify” that the organization is, in fact, complying with the PCI DSS.  As a result, the organization being assessed not only has to produce documentation to that effect, but the QSA must also observe that the PCI DSS requirements are being followed.

The net is that, even when you state something as fact, your QSA must substantiate your statements before they, too, can treat them as fact.  If you remember nothing else but this simple truth, you will understand why a QSA must do what they do.

Scope

If PCI assessments go wrong for any reason, this is probably the primary reason.  It fascinates me that people often profess ignorance of the PCI DSS, yet somehow become experts on the subject when it comes to scoping.

Remember point number one, trust but verify.  Under that premise, the PCI SSC makes a QSA’s primary responsibility to confirm the scope of the PCI assessment as they verify the facts.  As a result, in order to confirm that scope, the QSA must look at everything and then, through investigation and evaluation, determine that the areas you deem out of scope are, in fact, truly out of scope.

Let your QSA ask their questions and conduct their observations without arguing with them about scope.  They are only doing this because they are required to confirm the facts, and your fighting with them about scope is only going to make them wonder what you are trying to hide.  The bottom line is that arguing with your QSA about scope only makes your assessment all the more painful and time consuming.

If you truly want to avoid arguing over scoping, get a copy of the Open PCI DSS Scoping Toolkit.  Go through your environment and determine the categories of all of your systems and networks.  This is a good annual exercise because you need to prove your scope every year.

Applicability

According to the PCI SSC, there are five PCI DSS requirements that can never, ever be marked as ‘Not Applicable’: 1.2.3, 3.2.1, 3.2.2, 3.2.3 and 11.1.  I have discussed these all before but they deserve another quick discussion here.

Clients will argue ad nauseam that wireless is not implemented or is out of scope and therefore refuse to discuss wireless.  For requirement 1.2.3, a QSA is required to document the procedures they followed to rule wireless in or out of scope.  That of course means the QSA must investigate any wireless networks and evaluate if the controls are rigorous enough to keep wireless out of scope.  For requirement 11.1, the QSA must investigate and evaluate if the organization’s controls surrounding the detection of rogue wireless are appropriate regardless of whether or not the organization has implemented wireless networking.

3.2.1, 3.2.2 and 3.2.3 are all related to the securing of cardholder data when it is stored.  Even if an organization is not storing cardholder data on their systems, a QSA must document the procedures they used to confirm that cardholder data is not stored on the organization’s systems.  This usually involves a review of flat files and database schemas and the running of utilities and queries against those systems and databases looking for cardholder data.

The bottom line is do not argue about something being ‘Not Applicable’ and then hinder the QSA’s investigation to prove it is ‘Not Applicable’.  Do not get me wrong, you need to keep your QSA on point, but remember that QSAs are required to evaluate the situation and then document the process used to determine that a particular requirement is ‘Not Applicable’.  All you do by complicating that investigation is add more time to your assessment and, potentially, cause a requirement to be marked as ‘Not In Place’ instead of ‘Not Applicable’.

Yes, I Did Kind Of Ask That Earlier

Like security, the PCI DSS also works from a ‘defense in depth’ approach.  A lot of the questions QSAs ask are very similar just asked from a different perspective.  The people that develop assessment and audit programs will tell you that this is the most effective way to uncover the level of compliance with a given program.  The reason is that organizations who have not integrated a compliance program into their day-to-day operations will typically provide inconsistent or confusing answers to the similar questions.  Not that this is a perfect technique mind you, but it does work the majority of the time.

Please be patient with your QSA.  They did not write these procedures, but they are required to execute them.

Answer The Question

Most people suck when being questioned, particularly in a legal proceeding, including yours truly.  Lawyers always instruct anyone that will be called to testify in a legal proceeding to take their time, focus on the question being asked and only answer the question being asked.  Never, ever, ever provide any information outside of the question, i.e., do not elaborate.  The trouble is that lawyers know that silence is a vacuum and it is human nature to fill that vacuum with extraneous information.  Hence why they typically have long pauses between questions.

QSAs and auditors tend to operate under the same principle as a lawyer.  People get into trouble when they start talking about things that are outside of the question, out of scope or not relevant to the assessment.  Such responses will at first confuse the QSA for a moment as they try to reconcile your remarks.  But then, the QSA may question whether they truly understand the environment and, possibly, the scope of the assessment.  It is then that they may start quizzing you and your staff as they go back and reconfirm their understanding of the environment.  All of this takes time, time away from the assessment process as you cover old ground while the QSA re-verifies the facts.

The lesson to be learned here is that there is nothing wrong with saying, “I do not know.”  Or “I will have to look into that question and get back to you.”  The worst thing you can do is try and “tap dance” around the question or never really answer the question.  If you do not have the answer, then find out who does have the answer and point the QSA to that person.

Prepare

And finally, the best thing you can do to avoid all of these issues is to walk through the PCI assessment process and requirements with those of your staff that will be interviewed/observed and make sure they understand the questions to be asked and how they should be answered.

If you really want to know what the QSA will ask, why they will ask and the evidence they will require, get a copy of the PCI DSS ROC Reporting Instructions from the PCI SSC Document Library.  The Reporting Instructions document is the “Bible” for QSAs as it documents how they will be assessed in a PCI SSC Quality Assurance review.  Reviewing and understanding this document will go a long way to minimizing the “What do you need that for?” questions that all QSAs encounter.

For each requirement’s tests, the Reporting Instructions will tell you:

  • What observations, if any, need to be performed and documented.
  • What documents, if any, need to be collected and reviewed and what information needs to be identified in those documents.
  • What people, if any, need to be interviewed and about what topic(s).
  • What processes, actions taken or states of equipment, if any, need to be observed and documented.
  • Whether or not sampling can be used.

Using the Reporting Instructions, you can also gather a lot of the observations ahead of time.  Your QSA will still have to conduct some observations, such as that default passwords are not used, that timeouts occur, that change management operates and the like.  But gathering screen shots and documenting what you used as testing conditions will go a long way to making your assessment go much more smoothly and quickly.

Hopefully this discussion will help you get through your next PCI assessment without all of the associated drama that can come from such an exercise.

26 May 13

PCI Scoping Tool

UPDATE: In May 2017 the Council issued an Information Supplement on Scoping and Network Segmentation that supersedes the Open Scoping Toolkit and introduces a Council approved set of terms. That is the official document to be used when discussing scoping and network segmentation issues.

Since September 2012, the Open Scoping Framework Group has been providing, free of charge, the Open PCI DSS Scoping Toolkit.  If you are a QSA, ISA or someone responsible for PCI compliance and you have not gotten a copy of this fantastic document, you need to get one.  It is easy to get: just go to the IT Revolution Web site, register and you will be allowed to get your copy.

Never heard of the Open Scoping Framework Group (OSFG)?  Neither had I until late last fall, when a friend told me to go check out their PCI scoping methodology.  The OSFG was started by Gene Kim, whose name should be familiar to almost everyone as the founder of Tripwire.  He got his DevOps group together to tackle the issues faced by all of us trying to scope the cardholder data environment (CDE), and the result was the toolkit.

The toolkit defines three categories of systems.

  • Category 1 – System components that process, store or transmit cardholder data or are not isolated or restricted through controlled access from other Category 1 system components.
  • Category 2 – System components that have controlled access to a Category 1 system component.
  • Category 3 – System components that are isolated from all Category 1 system components.

People always understand why Category 1 devices are in scope: they are contained in the CDE.  One would think that Category 3 components would be just as easy to categorize as Category 1, but that is not necessarily the case.  The key is that Category 3 systems cannot have any access to Category 1 components.  While pinging Category 1 components from a Category 3 component could be used as a test, a better test is to use Nmap or a similar port scanner from a sample of Category 3 components to scan the CDE IP address range and determine whether any ports are open.
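For those who want to see how simple such a test can be, here is a bare-bones TCP connect check in Python.  Nmap run from a sample of Category 3 hosts is the better tool; this is just a sketch, and the CDE addresses and ports are made up for illustration.

    import socket

    # Illustrative only: substitute your actual CDE address range and ports of interest.
    CDE_HOSTS = ["10.10.50.10", "10.10.50.11"]
    PORTS = [22, 443, 1433, 3389]

    def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Run from a sample of Category 3 hosts; any success means the host is not isolated from the CDE.
    for host in CDE_HOSTS:
        open_ports = [port for port in PORTS if reachable(host, port)]
        if open_ports:
            print("NOT isolated:", host, "reachable on", open_ports)
        else:
            print("No connectivity to", host, "on the tested ports")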

While Category 3 components can be troublesome, it is the Category 2 devices that usually give everyone a problem including, at times, yours truly.  The reason is the connectivity issue as it can be very unclear at times whether or not a device actually has connectivity to the CDE.

To assist in identifying connected systems, the toolkit breaks Category 2 systems into four sub-categories.

  • Category 2a – System components which, through controlled access, provide security services (e.g., authentication) to a Category 1 device.
  • Category 2b – System components which, through controlled access, can initiate an inbound connection to a Category 1 device.
  • Category 2c – System components which, through controlled access, can only receive a connection from a Category 1 device (i.e., cannot initiate a connection).
  • Category 2x – System components which, through indirect and controlled access, have the ability to administer Category 1 devices. Note: Category 2x devices have no direct access to/from Category 1 devices.
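For what it is worth, those four definitions reduce to a very small decision function.  The attribute names below are my own shorthand for the Toolkit’s criteria, not official terminology.

    def category2_subcategory(provides_security_services: bool,
                              initiates_to_category1: bool,
                              receives_from_category1: bool,
                              administers_category1_indirectly: bool) -> str:
        """Rough mapping of a connected system's attributes to the Toolkit's Category 2 sub-categories."""
        if provides_security_services:
            return "2a"
        if initiates_to_category1:
            return "2b"
        if receives_from_category1:
            return "2c"
        if administers_category1_indirectly:
            return "2x"  # no direct access to/from Category 1 devices
        raise ValueError("No access to Category 1 at all - likely Category 3, not Category 2")

    # Example: a management console that only administers a log server that talks to the CDE
    print(category2_subcategory(False, False, False, True))  # 2x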

The first thing we need to clarify is what is meant by “controlled access.”  Controlled access means that the system components: (1) limit network traffic to only that which is required for business operations; and (2) have that traffic business justified and documented.

It is the first point of controlled access that typically gives people the most trouble.  The concept of “limited” network traffic just does not come across the same for everyone.  I have seen people try to justify limited traffic when just about every port north of 31000 is open for dynamic use.  Do not get me wrong, there are instances where a significant number of ports need to be open (look at Windows 2003 Active Directory as a prime example), but when the answer used to justify the large number of open ports is, “We did it to be safe and avoid any problems,” that says to an assessor that you did not want to research the exact ports you needed open or, worse yet, that you have no idea which ports need to be open.

The sub-category most people will struggle with is 2x.  2x components are those that do not have direct access to the CDE but have access to 2a, 2b and/or 2c components that do have direct access to the CDE.  The best example of this sort of situation would be a system management console for a log management server.  The console has access to the log management server which does have access to the CDE, but the console should not have access to the CDE.  Since the console could be compromised and it has access to a component that has direct access to the CDE, it needs to be included in the scope of the PCI assessment.

The final pieces of the Open PCI DSS Scoping Toolkit I really like are the decision tree and the scenarios provided.  If these do not help explain how to scope your PCI assessment, nothing will.

Again, if you do not yet have a copy of the Open PCI DSS Scoping Toolkit, hopefully this post will entice you to get a copy.

19 Feb 13

Scoping Clarification

At the 2012 PCI Community Meetings, the PCI SSC made a presentation titled ‘PCI Standards Updates and Future Insights’.  Embedded in that presentation were a series of slides titled ‘Scoping & Segmentation Clarified’.  A number of writers have given reference to this clarification, but I have yet to see a discussion regarding the content of these slides.  So I felt someone should share with the world the content of these slides so that we are all on the same page.

“PCI DSS requirements apply to all system components, defined as any network component, server, or application that is included in or connected to the CDE [cardholder data environment]”

The first thing discussed is the set of misconceptions about PCI DSS scoping and what “connected to” really means.  The examples of misconceptions pointed out as FALSE included:

  • Encrypted cardholder data (CHD) is always out of scope
  • “Connected to connected to” systems are not in-scope
  • Only systems that connect directly to the cardholder data environment are in-scope
  • Only inbound connections are in-scope
  • Directly connected to systems are not in-scope if they pass through an in-scope firewall

Encrypted CHD Is Out Of Scope

The only way encrypted cardholder data is ever considered out of scope is if, and only if, the organization being assessed does not have the ability to decrypt the data.  That said, it is up to the organization or their QSA to prove that the organization does not have access to the keys by documenting the assessment procedures that were performed to determine that the encryption keys could not be accessed and the cardholder data could not be decrypted.  So even though it may eventually be judged out of scope, there must be some form of investigation that proves that fact.

“Connected to connected to” Systems Are Not In-Scope, et al.

The remaining four examples given as false are all related.  The guidance being provided by the PCI SSC should have been common sense to any security professional and QSA, but apparently was not as clear as we all thought.  As a result, the PCI SSC documented their thought process and provided this “guidance.”

“If a system can impact the security of the CDE (whether it is directly or indirectly connected), it is in-scope.

To be considered entirely out of scope, a network or system must be isolated from, with no connectivity into, the CDE (such that even if the system is compromised, it cannot impact the security of the CDE).

Restricting access by IP or port may limit the exposure of the CDE but does not automatically remove systems/networks from scope since there is still connectivity.

If connections are limited to specific ports or services, those systems are included in the PCI DSS scope to verify applicable controls are in place.”

Talk about opening a can of worms.  To a lot of people, this definition sounded a lot like Stalin’s “Doctor’s Plot.”  Exactly where do you draw the line on a network and what is connected to what?  To a lot of hard-line QSAs and some participating organizations (POs), this clarification essentially put everything back in-scope for PCI compliance because, in theory, any device on the internal network can be used to ultimately compromise the CDE.  To all of you out there that think this, take a pill and chill.  That is not what the clarification is saying unless there are no systems on your network that are truly isolated from the CDE.

The PCI SSC has left this decision to your QSA/ISA to determine where the boundaries of your CDE get drawn.  And that is based on the risk presented and how you have controlled access for the outliers on your network.  So, while in theory any device sitting on a network could be used as a staging point or beachhead to ultimately compromise the CDE, it all depends on the controls in place to minimize that potential risk.

As an easy example of what the PCI SSC is getting at with this clarification, any systems with access to the CDE are in-scope for PCI compliance.  According to the PCI SSC, a lot of QSAs were ruling systems out of scope such as backup systems, domain controllers, DHCP and DNS servers, management consoles and anything else used to manage, monitor or control devices inside the CDE.  The bottom line is that should any of these systems be compromised, the CDE is also likely compromised.  Limited access or not, these systems have access to the CDE and are therefore in-scope for the assessment.

The other part of the clarification is that just because a system has access to the CDE does not mean that all PCI requirements apply to it.  There are big differences between the access a backup system may have to the CDE and the access a call center operator’s or an accountant’s PC may have.  As a result, it is up to the QSA/ISA to determine what PCI requirements are relevant based on the risk presented by the system and how it accesses the CDE.  At a bare minimum, any PC that has access to the CDE needs to be properly configured and security hardened, patched current, and have anti-virus and anti-malware software and a firewall implemented.  If the PC has direct access to the CDE through something other than HTTPS, as with a backup server or domain controller, then you should treat that device no differently than a device inside the CDE.  Whether additional requirements apply will depend on the assessment of the PC and how it accesses the CDE.

Given this clarification, what should you be doing to determine the boundaries of the CDE?  Here are some of my ideas.

  • Scan your network for cardholder data (CHD) using OpenDLP, custom queries of databases and similar tools.  Document all systems that are found with actual CHD.  Make sure to check that the data found is actually CHD and not just 15 – 16 digit numbers that happen to pass a Luhn check (a minimal Luhn check sketch appears after this list).  We have encountered a lot of random log data from firewalls and intrusion detection/prevention solutions over the years that upon further inspection turned out to not be CHD.  The purpose of doing this scanning is so that you are reasonably certain that CHD does not exist on systems you did not know about.
  • Review your virtual LANs (VLAN) and related access control lists (ACL) and document what controls are in place to isolate systems from the CDE.  If ports are open between the CDE and a network segment, document why they are open.  A lot of times this sort of review results in discovery of systems that do not need CDE access but were granted access just in case.  The purpose of this review is to make sure that the CDE is truly isolated from the rest of the networks.  If not isolated, it may be possible to move systems into a few VLAN segments and isolate those segments to minimize your total CDE.
  • If possible, locate critical servers such as domain controllers, backup servers, etc. inside the CDE.  A lot of organizations have located one or two domain controllers inside their CDE and then limited communications to/from those domain controllers and domain controllers outside the CDE.  While the domain controllers outside the CDE are still in-scope if they communicate with the CDE domain controllers, such a move puts the majority of the risk on the CDE domain controllers.  In today’s SAN world, putting a backup server with a fiber channel connection back to the SAN used for virtual tape allows you to isolate your CDE backup process.
  • For those organizations that have virtual desktop technology implemented, consider creating virtual systems dedicated to providing access to the CDE.  A lot of organizations with virtual desktop technology have segregated this technology onto servers that only provide access to the CDE, thereby limiting which virtual servers are in-scope.  The controls over the virtual environment will determine how much of that environment needs to be included in the assessment.  These virtual desktops should be built from a standard image and be strictly locked down so that users cannot install new software, and they should be configured to log all user actions.  When using this approach, the user must have authentication credentials (i.e., user identifier and password) different from their other credentials.  You are also going to want some form of firewall between these virtual systems and the rest of the network, granting access only to those systems that require it.
  • Another opportunity to minimize the CDE is the use of “jump boxes” on the network.  Jump boxes are accessed via remote desktop technologies to then gain access to the CDE and are typically used for network and server administration and management.  The jump box is configured so that any user’s activities are recorded and logged for later review.  Jump boxes are no different than the virtual desktop solution other than that jump boxes are typically physical devices rather than virtual devices.  The reason for using a physical device is that the jump box can be physically accessed in emergencies.  As with the virtual desktop solution, users of jump boxes must have user identifiers and passwords different from their other credentials, and you will also need a firewall protecting the jump box.
  • Systems that access the virtual desktops or jump boxes should still be security hardened, have anti-virus and anti-malware with current signatures, and be patched in a timely manner.
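To make the first bullet above more concrete, here is a minimal sketch in Python of the kind of first-pass scan I am describing.  It assumes flat text files and only flags 15 – 16 digit strings that pass a Luhn check; anything it flags still has to be manually verified as actual CHD, for exactly the false positive reasons mentioned above.  The file paths and output format are purely illustrative.

import re
import sys

# Candidate PANs: 15 or 16 contiguous digits (Amex is 15, most other brands 16).
PAN_CANDIDATE = re.compile(r"\b\d{15,16}\b")

def luhn_valid(number):
    """Return True if the digit string passes the Luhn (mod 10) check."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def scan_file(path):
    """Print masked, Luhn-valid candidates found in a flat text file."""
    with open(path, errors="ignore") as handle:
        for line_no, line in enumerate(handle, 1):
            for match in PAN_CANDIDATE.findall(line):
                if luhn_valid(match):
                    # Still only a candidate -- verify it is a real PAN and not
                    # random log data that happens to pass the Luhn check.
                    print("%s:%d: possible PAN %s...%s" % (path, line_no, match[:6], match[-4:]))

if __name__ == "__main__":
    for target in sys.argv[1:]:
        scan_file(target)

The dedicated tools discussed later do essentially this, just with far better file format handling and network reach.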

A lot of people would think a virtual private network (VPN) over SSL, TLS, SSH or IPsec solution would also meet the requirement of isolation.  The problem with the VPN solution is that there is no isolation or gap between the system used to access the general network and the CDE.  The VPN does provide isolation of data transmissions but, if the system is compromised, the CDE is also likely compromised.  With the other solutions there are multiple ways that the CDE systems are isolated from the rest of the network.

Now we should all be on the same page.

10
May
12

More On Scoping

Everyone that is going through the PCI compliance process tries to get systems, processes, whatever, out of scope.  And while getting things out of scope is a good thing, it does not mean that they do not need to be assessed.  And this is one of the most contentious points of a PCI compliance assessment.

One of the biggest misconceptions about the PCI compliance assessment process is that declaring something out of scope means it does not have to be examined.  The PCI compliance assessment process is all about “trust, but verify.”  So, when an organization says that a particular element is out of scope, it is up to their QSA to confirm that the item is, in fact, out of scope.

Take for example network segmentation that is used to delineate an organization’s cardholder data environment (CDE).  A QSA is required to confirm that the network segmentation implemented does in fact keep the CDE logically or physically separated from the rest of the organization.  That confirmation will likely involve reviewing firewall rules, access control lists and other network controls to prove that the CDE is segregated.  Going through these items can require a lot of QSA effort, particularly as network complexity increases.
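As an illustration of the sort of spot check that backs up a firewall rule review, the following Python sketch attempts TCP connections from a supposedly out-of-scope segment to a handful of CDE hosts and ports.  The addresses and ports are hypothetical; substitute the ones from your own network documentation.  It is not a substitute for reviewing the rule base, but any connection that succeeds is a segmentation question that needs an answer.

import socket

# Hypothetical CDE hosts and ports to probe from an out-of-scope segment;
# substitute the addresses and services from your own network documentation.
CDE_TARGETS = [
    ("10.10.50.10", 443),   # e.g., CDE web front end
    ("10.10.50.20", 1433),  # e.g., CDE database
    ("10.10.50.30", 3389),  # e.g., CDE jump box
]

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in CDE_TARGETS:
        status = "OPEN - investigate" if port_reachable(host, port) else "blocked"
        print("%s:%d -> %s" % (host, port, status))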

Another area where the out of scope effort can be messy is applications and whether they process, store or transmit cardholder data.  Proving that an application does not store cardholder data is typically fairly straight forward.  The QSA just examines the data schemas for files and databases looking for fields named credit card number or any 16 character fields.  A QSA will also typically run queries against the database looking for 16 digit numbers that start with known BINs.  I have been involved in a number of assessments where we have found cardholder data being stored in text and comment fields through our queries.  Determining whether an application is processing or transmitting cardholder data is more complicated and problematic.  It can take quite a lot of effort to determine using an organization’s Quality Assurance or Testing facilities, but it can be accomplished.
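For the database side, here is a sketch of one way to run that kind of query from Python.  The database, table and column names are hypothetical, and sqlite3 is used only as a stand-in driver; the regular expression keys off the well known leading digits of the major card brands (Visa, MasterCard, American Express and Discover) rather than a full BIN table.

import re
import sqlite3  # stand-in driver; swap in the driver for your own DBMS (pyodbc, etc.)

# Well known leading digits: Visa (4), MasterCard (51-55), Amex (34/37), Discover (6011/65).
PAN_PATTERN = re.compile(r"\b(?:4\d{15}|5[1-5]\d{14}|3[47]\d{13}|6011\d{12}|65\d{14})\b")

def scan_column(db_path, table, key_col, text_col):
    """Yield (key, masked match) pairs for PAN-looking values in a free-text column.

    The database, table and column names are hypothetical -- point this at the
    comment and note fields identified during your data schema review.
    """
    conn = sqlite3.connect(db_path)
    try:
        for key, value in conn.execute("SELECT %s, %s FROM %s" % (key_col, text_col, table)):
            for match in PAN_PATTERN.findall(str(value or "")):
                yield key, "%s...%s" % (match[:6], match[-4:])
    finally:
        conn.close()

if __name__ == "__main__":
    for row_id, masked in scan_column("orders.db", "order_comments", "order_id", "comment_text"):
        print("row %s: possible PAN %s" % (row_id, masked))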

The biggest clarification in v2.0 of the PCI DSS is that it is the responsibility of the organization being assessed to prove that its CDE definition is in fact accurate.  This had always been the implicit case, but with v2.0 of the PCI DSS, the PCI SSC has explicitly stated this fact.  Page 11 of the PCI DSS states:

“At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

As a result, the organization being assessed should provide proof to their QSA that they have examined all of their processes, automated and manual, and have determined what is in-scope and out of scope.  The results of this self examination are used by the QSA to confirm that the CDE definition, as documented by the organization, is accurate.

This clarification has resulted in a lot of questions.  The primary one is along the lines of, “How am I supposed to prove that I have assessed my entire environment and made sure the CDE is the only place where cardholder data exists?”  While the implications of this question are obvious for the Wal*Marts and Best Buys of the world, even small and midsized merchants can have difficulties meeting this requirement.  And I can assure you that even the “big boys” with their data loss prevention and other solutions are not thrilled about scanning every server and workstation they have for cardholder data (CHD).

For determining whether or not CHD is present in flat files on computers, there are a number of open source (i.e., “free”) solutions.  At the simplest are the following tools.

  • ccsrch – (http://ccsrch.sourceforge.net/) – If this is not the original credit card search utility, it should be.  ccsrch identifies unencrypted and numerically contiguous primary account numbers (PAN) and credit card track data on Windows or UNIX operating systems.  One of the biggest shortcomings of ccsrch is that it will not run over a network, so scanning multiple computers is a chore.  The other big shortcoming of ccsrch is that unless the data is in clear text in the file, ccsrch will not identify it.  As a result, file formats such as PDF, Word and Excel could contain CHD and may not necessarily be recognized.  It has been my experience that ccsrch tosses back a high number of false positives due to its file format limitations, flagging data that is not a PAN as a PAN.
  • Find_SSNs – (http://security.vt.edu/resources_and_information/find_ssns.html) – While the file name seems to imply it only searches for social security numbers, it also searches for PANs and will do so for a variety of file formats such as Word, Excel, PDFs, etc.  Find_SSNs runs on a variety of Windows and UNIX platforms, but as with ccsrch, it does not run over a network; it must be run machine by machine.  Find_SSNs seems to have a very low false positive rate.
  • SENF – (https://senf.security.utexas.edu/) – Sensitive Number Finder (SENF) is a Java application developed at the University of Texas.  If a computer runs Java, it will run SENF so it is relatively platform independent and supports many file formats similar to Find_SSNs.  That said, as with the previous tools, SENF will not run over a network, it must run on each individual machine.  I have found SENF to have a much lower false positive rate than ccsrch, but not as low as either Find_SSNs or Spider.
  • Spider – (http://www2.cit.cornell.edu/security/tools/) – This used to be my favorite utility for finding PANs.  Spider will scan multiple computers over a network, albeit slowly, and it has a propensity for crashing when run over the network.  However, it also seems to have a low false positive rate that is comparable to Find_SSNs.

I still use Spider and Find_SSNs for scanning log and debug files for PANs as I have yet to find anything as simple, fast and accurate when dealing with flat text files.  And yes, I use both as checks against each other for further reducing the false positive rate.  It amazes me, as well as my clients, the amount of incidental and occasional CHD that we find in log and debug files due to mis-configurations of applications and vendors who forget to turn off debugging mode after researching problems.

But I am sure a lot of you are saying, “Flat files?  Who stores anything in flat files these days?”  And that is the biggest issue with the aforementioned open source solutions; none of them will scan a database from a table schema perspective.  If the database data store happens to store clear text PANs as legible text, the aforementioned utilities will find them, but that is pretty rare due to data compression, indexing and other issues with some database management systems.  As such, if you wanted to stay with open source, you had to be willing to use their code as a base and adapt it to scan a particular database and table schema.  That is, until OpenDLP (http://code.google.com/p/opendlp/).

OpenDLP is my personal open source favorite now for a number of reasons.  First, it uses Regular Expressions (RegEx) so you can use it to look not only for PANs, but a whole host of other information as long as it conforms to something that can be described programmatically such as social security numbers, driver’s license numbers, account numbers, etc.  Secondly, it will also scan Microsoft SQL Server and MySQL databases.  And finally, it will scan reliably over the network without an agent on Windows (over SMB) and UNIX systems (over SSH using sshfs).
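To illustrate the regular expression driven approach, here is a simplified sketch of the concept; these are not OpenDLP’s actual patterns or configuration, and the /srv/fileshare path is illustrative only (think of it as an SMB or sshfs mount point).

import os
import re

# Simplified stand-ins for the regular expressions a tool like OpenDLP would use;
# these are illustrative only, not OpenDLP's shipped patterns.
PATTERNS = {
    "PAN": re.compile(r"\b\d{15,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def walk_and_scan(root):
    """Apply each pattern to every readable file under root, yielding hits."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue
            for label, pattern in PATTERNS.items():
                for match in pattern.findall(text):
                    yield path, label, match[-4:]

if __name__ == "__main__":
    # /srv/fileshare is an illustrative mount point, e.g., an SMB or sshfs mount.
    for path, label, last_four in walk_and_scan("/srv/fileshare"):
        print("%s: %s candidate ending in %s" % (path, label, last_four))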

At least I have gotten fewer client complaints over OpenDLP than I have for Spider for network scanning.  That said, OpenDLP can still tie up a server or workstation while it scans it remotely and it will really tie up a server running SQL Server or MySQL.  As such, you really need to plan ahead for scanning so that it is done overnight, after backups, etc.  And do not expect to scan everything all at once unless you have only a few systems to scan.  It can take a week or more for even small organizations.

But what if you have Oracle, DB/2, Sybase or some other database management system?  Unless you are willing to take the OpenDLP source code and modify it for your particular database management system, I am afraid you are left with commercial solutions such as Application Security Inc.’s DbProtect, Identity Finder DLP, ControlCase Data Discovery, Orbium Software’s Schema Detective or Symantec Data Loss Prevention.  Not that these solutions handle every database management system, but they do handle more than one database vendor and some handle most of them.

You should now have some ideas of how to scope your CDE so that you are prepared for your next PCI assessment.




