Archive for March, 2010


The Purpose Of Penetration Testing

I have received a number of questions regarding my penetration testing post.  All of the questions seem to concern why penetration testing is required.  Penetration testing has always been a bone of contention within the network security community.  So it should be expected that questions would arise regarding why it is required under the PCI DSS and why the requirement was expanded to internal testing under v1.2.

As I point out in my original post, penetration testing can leave the target compromised if testing is not conducted properly.  Conducting penetration testing properly usually involves using non-production systems configured the same as the production systems.  Unfortunately, a lot of organizations, including some large ones, do not have such environments, so penetration testing must be performed against the production environment.  Either way, once the penetration testing is completed, the penetration tester needs to “clean up,” if possible, by removing any exploits from the devices tested.  Unfortunately, a lot of tools do not allow for satisfactory clean up, and the only alternative is to rebuild the device that was compromised.  It is because of this that a lot of IT professionals do not like penetration testing.

Unfortunately, penetration testing is a necessary evil to ensure that an organization’s network infrastructure and servers are secure.  If all you ever conduct is vulnerability scanning, how do you know that the vulnerabilities identified cannot be used to further someone’s attack against your infrastructure?  And that is the problem.  Without conducting a penetration test to attempt to leverage the vulnerabilities discovered, you have no way of knowing the true risk presented to your network by those vulnerabilities.

The worst problem with vulnerability scanning is that non-security professionals assume that because a vulnerability is rated medium or low, it is no big deal.  They get a false sense of security because the vulnerabilities are not considered serious.  Without a penetration test, there is no way to show them that low and/or medium risk vulnerabilities can be leveraged to compromise their network and potentially gain access to their cardholder data environment.  I have seen many an example where a network was compromised through a supposedly low risk vulnerability that ultimately gave the penetration tester a beachhead from which to launch even more sophisticated attacks from inside the network.  And if you think such attacks are unrealistic against a properly secured network, there are people successfully attacking just such networks.

With v1.2 of the PCI DSS, the penetration testing requirement was expanded to include the internal cardholder data environment.  The reason for expanding penetration testing to the internal network was in response to the breaches that have occurred.  Verizon Business Services and Trustwave analyzed the breaches they had forensically investigated and came to the conclusion that around 70% of those breaches were the result of insider human error.  And while the initial compromise did not necessarily directly involve the cardholder data environment, once the attacker was inside, there were limited barriers to compromising the cardholder data environment.  As a result, the PCI SSC revised the PCI DSS requirements and mandated penetration testing as well as quarterly vulnerability scanning for external and internal assets in the cardholder data environment.


Conducting Your Own PCI Report On Compliance Assessment

We are starting to hear rumblings within some organizations regarding conducting their own PCI Report On Compliance (ROC) assessment.  While as a QSA I have a vested interest in not seeing this become a common practice, I can understand why organizations would want to at least examine this option.  Particularly since the latest study from The Ponemon Institute says that the cost of a PCI ROC is between $225,000 and $500,000.

Remember, a ROC is only required when an organization is conducting a minimum of six million Visa, MasterCard or Discover transactions, two and a half million American Express transactions or one million JCB transactions.  We are not talking about your Mom & Pop store around the corner or even your local chain.  We are talking about organizations such as Wal*Mart and Exxon/Mobil.  The obvious reason organizations want to conduct their own PCI ROC is the belief that in-sourcing the assessment will save them money.  However, as I will discuss later, there will likely be few if any cost savings.

The main benefit of conducting your own ROC assessment is that you use your own personnel.  While the PCI DSS is silent on this subject, MasterCard and Visa Southeast Asia require the use of internal audit personnel.  The use of internal audit personnel would seem to be the obvious choice because of their independence, their familiarity with assessing business processes against a standard, and their experience organizing and retaining the necessary documentation.  Most major merchants already have an internal audit function, so there is no need to necessarily increase headcount.  On the face of things, conducting a ROC internally sounds fairly straightforward.

As usual, before you can save any money, there are some issues with conducting your own PCI ROC assessment.

  • Most internal audit personnel are financial auditors, not IT auditors, and, as such, they have limited technology skills.  Even in organizations that have an IT audit capability, the level of technical skill required by the PCI ROC often exceeds the skills available.  The PCI ROC requires a significant amount of technical background to be conducted properly.  Skills such as analyzing firewall, router and switch configurations, understanding access control mechanisms, and conducting vulnerability scans and penetration tests are required for an assessment to be properly performed.  As a result, your internal audit group will likely lack the necessary technical skills to conduct the assessment.  This means investing in either training your existing audit staff or contracting for the skills necessary, neither of which is cheap.
  • Until recently, there was a limited amount of PCI compliance training available for internal audit personnel.  With MasterCard’s requirement that internal audit personnel be trained by July 2011 if an organization desires to conduct its own assessment, the PCI SSC has developed training for non-QSAs that will be available in 2010.  However, such training is not inexpensive.  In addition, because training is required annually to stay on top of the PCI DSS, it is an ongoing cost.
  • Since the ROC assessment process is an annual occurrence, trained internal audit personnel will likely not retain all of the knowledge necessary from year to year because they do not conduct ROCs all year long like a QSA does.  This means ramp-up time each year as well as the potential that the assessment will miss compliance issues or misinterpret requirements.

I have personally seen the results of a number of internally conducted PCI ROCs.  None of them properly interpreted the PCI DSS requirements.  In all cases the internal assessment judged the organization as being in full PCI compliance.  However, a review of the work showed that none of these organizations were PCI compliant.  In some cases, the evidence collected in no way met the documentation requirements of the PCI SSC, let alone documented compliance with the PCI ROC requirements.  And worst of all, these PCI ROCs were all signed off by the organization’s Chief Financial Officer, representing that the PCI ROC accurately reflected their PCI compliance.

In the end, I am not convinced that organizations that desire to conduct their own PCI ROC will achieve the cost savings they believe they should.  While on the surface it may appear cheaper to do the assessment internally, the cost of acquiring the skills and the high level of training required to produce a proper PCI ROC will likely consume any savings.  Finally, because your internal auditors are not conducting PCI ROCs every day, there is a higher risk that your assessment may miss potential threats to your cardholder data environment.

I am not saying that you should not consider conducting your own PCI ROC.  I am just pointing out that it is not a simple internal IT audit.  It is also not as inexpensive as you might think.  So please think about this before you go down the internal assessment road and get your organization in trouble.

UPDATE: The PCI SSC now offers the Internal Security Assessor (ISA) certification program for internal personnel.  It is supposedly equivalent to their QSA certification program.  Like the QSA program, ISAs are required to re-certify annually, so it is not a one-shot deal.  It is also not cheap at around $2,500 per person, not including expenses.  MasterCard Level 2 merchants that want to do their own assessment after June 30, 2011 will be required to either hire a QSA or have internal assessors attend and pass the ISA certification.


Security Is Not Perfect

I am really getting tired of people complaining about data breaches, the lack of security and how it is all the fault of security initiatives such as the PCI DSS, the QSA involved or the carelessness of the organization that was breached.  It is not that QSAs, the PCI DSS and the like do not have a responsibility in all of this, but that responsibility seems to be over-allocated to QSAs and the PCI DSS rather than focused on the real cause.  According to a recent Verizon Business Services report, the real cause is usually human error.

Worse still are the comments from credit card company executives that seem to portray the PCI standards as ‘THE’ standards for securing cardholder data.  What these people fail to either understand or acknowledge is that even if every security measure in the PCI DSS were put in place and worked exactly as it should, breaches would still occur, albeit at a much slower pace and with less information released.  Why?  Because security is not perfect.  It never was and never will be.

But, but, but …

If you have told people that your security systems are absolutely, positively protecting your organization’s information assets and that nothing can go wrong, shame on you.  You need to better manage people’s expectations regarding security.  The expectation that security is perfect needs to be debunked immediately.  Yes, the security measures put in place are protecting the organization’s information assets, but those measures are not perfect, never will be perfect and at some point will break down, resulting in some sort of incident.  Your job is to make sure that when an incident occurs, it is as small as possible.

But, but, but …

No ‘buts’ about it.  Any security professional knows that security only reduces or minimizes risks; it does not remove them all.  Yes, some risks may be eliminated when proper security practices are implemented.  Nevertheless, for the most part, some security risks will always remain, regardless of the security measures put in place.  Proper management of these remaining security risks should minimize them as much as possible.  However, minimized risk does not imply that the threat or vulnerability cannot be leveraged.  Minimized risk means that controls are in place to make the likelihood of a compromise using any remaining vulnerability as low as possible.  As a result, security programs such as the PCI DSS are focused on keeping all but the most dedicated attackers at bay.  It is what I call the “99-1 rule.”  Security focuses on protecting your organization from 99% of attackers; keeping out the remaining 1% would cost you too much time, resources and effort.  Your hope is to at least be notified when that 1% attacks.

A prime example of security not being perfect is the banking industry.  Banks have invested heavily in a variety of security measures including vaults, silent alarms, video recording, dye packs and other deterrents.  Vaults have definitely reduced the amount of money lost in a robbery.  However, one could argue that silent alarms, video recording and dye packs have not had a significant impact on the number of bank robberies.  Where silent alarms, video recording and dye packs come into play is in reducing the number of successful robberies.

That is what the PCI DSS and similar security standards are all about.  They were developed to make a successful breach as difficult to accomplish as possible.

However, there are people out there who are dedicated to breaching security for whatever reason.  Because you never know whether you are in their sights, you rely on the control triad and defense in depth, the idea being that these will give you a “heads up” if your organization is being attacked.  But be careful and do not become complacent.  Dedicated attackers are like hunters.  They research their prey so that they know the defenses of their target, and they develop sophisticated plans to defeat those defenses or at least keep them at bay.  These are people skilled in their craft.  They will take a part-time job on the cleaning staff at a building where their prey is located so that they can scope out their quarry and determine where the weaknesses are.  If they need other expertise, they will either acquire it through training or team with someone who has it.  In the end, if there is a way, these people will find it and exploit it.

If you want to see these concepts in action, watch any of the Ocean’s 11, 12 or 13 movies.  They may be campy, but the planning concepts used in these movies mimic just how an attacker goes about planning to obtain your databases or other information.  And if you do not believe that such people or threats really exist, read my post regarding the Advanced Persistent Threat (APT).

The key to remember is that you are never going to remove all of the risk, you are only going to manage it.  With proper controls in place, threats can be managed and the risk to your assets minimized.  However, do not let your guard down because that is when attackers will take advantage.


Network Segmentation – Take 2

I have had a couple of discussions recently regarding what constitutes good network segmentation.  Apparently, my original post was just too cryptic, so I’m going to use some examples in this post to hopefully clarify where people are going wrong.

The PCI DSS gives very little guidance on network segmentation.  In fact, the only statement approaching a definition says, “Network segmentation can be achieved through internal network firewalls, routers with strong access control lists or other technology that restricts access to a particular segment of a network.”  But those are the mechanics of network segmentation.  The definition does not specify or illuminate the additional controls required to ensure segmentation, which is why I wrote the original post.

In my first example, the network in question is segmented by VLANs.  The cardholder data environment (CDE) is contained in one VLAN and there are another eight VLANs defined.  All VLANs are internal and none face the Internet.  Access control lists (ACL) have been implemented to control the communications between the various VLANs and the CDE VLAN.  Monitoring of all of the VLANs has been implemented through a variety of methods including network monitors, host monitors and system/event log monitoring and analysis.  Network administrators monitor consoles that bring up any critical alerts that could indicate a potential attack or compromise.  This sounds pretty good, does it not?  The problem is that it is all in the details, and the details tell a different story.

In reviewing the VLANs’ ACLs, we determined that two of the VLANs have TCP and UDP ports 1 through 65535 open to the CDE VLAN.  Whoa!  Every port is open to the CDE VLAN from these two VLANs?  Yes, that is correct.  This is not what the PCI SSC meant by “strong access control lists.”  Digging further, we inquired as to why this condition exists.  We were told, “We were unable to determine what the applications needed to have open between these VLANs, so rather than break anything, we just opened everything to be safe.”  To be safe?  ‘Safe’ means something different to every person who uses the word.  In this case, because the two VLANs were internal, the other VLANs were apparently considered ‘safe’ as well.

But a lot of network administrators would point to monitoring as the way they control things.  Are you serious?  I do not care how much monitoring you do.  With every port open, that monitoring is likely to generate enough false positives to make identifying real threats like finding a needle in a haystack.  And this was confirmed later on when we observed the network administrators who monitor the network.  They were ignoring almost everything that came up on their screens.  When we questioned them about this, they said, “We have tried to tune the alerts, but have not been able to significantly reduce the false positives.  We get around 10,000 to 25,000 alerts a day.  So we do the best we can to find the real threats.”  The best we can?  Security is not forgiving, especially of people doing ‘the best they can’.

The moral of this example is that if you have every port or close to every port open, you cannot consider your network properly segmented.  I do not care what the other controls are that you believe are in place.  You have to be realistic.  And justifying having all of those ports open has to be more than implying you were too lazy and did not want to make the effort to find the real answers.
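To make the point concrete, here is a minimal sketch of how a reviewer could mechanically flag overly broad rules into a CDE VLAN.  The rule representation, VLAN names and 50-port threshold are all invented for illustration and are not any vendor’s actual ACL syntax:

```python
# Hypothetical, vendor-neutral representation of rules governing traffic
# into the CDE VLAN. Field names and values are illustrative only.
rules = [
    {"src": "VLAN20", "dst": "CDE", "proto": "tcp", "ports": (1, 65535)},   # every port open
    {"src": "VLAN30", "dst": "CDE", "proto": "tcp", "ports": (1433, 1433)}, # one database port
]

def is_overly_permissive(rule, max_ports=50):
    """Flag rules that open more than max_ports ports into the CDE."""
    low, high = rule["ports"]
    return rule["dst"] == "CDE" and (high - low + 1) > max_ports

flagged = [r for r in rules if is_overly_permissive(r)]
# The all-ports VLAN20 rule is flagged; the single-port VLAN30 rule is not.
```

The exact threshold is a judgment call, but any rule opening an entire port range deserves a documented business justification, not a shrug.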

My other example involves a network that does have a limited number of ports open between the CDE VLAN and the other VLANs, albeit quite a few.  They also have monitoring in place, and their network administrators are very diligent in ensuring that alerts are addressed as quickly as possible.  Unlike my first example, these folks are seeing around 300 to 500 alerts a day, of which only 10% to 15% are false positives.  The problem is with their documentation.  In reviewing the firewall rules that segment the VLANs, we documented all of the ports open to/from the CDE VLAN.  We interviewed the Manager of their network management and administration department and inquired as to the business reason for each of the open ports.  Of the 100 or so ports defined in the ACLs, they could only give us business reasons for about 20% of them.  Heaven forbid they should document the reason in the configuration file, and there was no other documentation available.  The Manager even tried to find documentation in the help desk system where they log all of their changes, but even after refining the search criteria, there were just too many records to sift through in our one-hour meeting to find what we needed.  There was not even proof that management knew these ports were open, understood the risks involved in having them open, or had approved opening them.

The moral here is that documentation is the foundation from which you build.  If you have a shaky foundation, you will have shaky security and are likely a candidate for a compromise and breach.  This is why documentation is important.  If you cannot remember why ports were opened, users were allowed access to data and other security relevant issues, how can you even think you are secure?  The answer is you cannot be secure if you cannot answer basic questions.

But it gets better.  This same individual earlier in our meeting had confirmed that they were the one that reviewed the firewall rules quarterly and showed us emails to prove that fact.  Then as we are going through the CDE ACLs, they say, “Oh, that rule should be removed.  It was for a business partner that we have not done business with in more than four years.”  Now, do you think I seriously believe that you are really reviewing these firewall rules quarterly when you admit that a given rule should have been removed four years ago?  We document four more firewall rules that should have been changed or removed.  It is situations like this that cause a QSA to shudder and then wonder what other ugly things are under the rocks and just how far you need or want to dig to find them.

The moral here is: do not tell the QSA what they want to hear when you know you will have to contradict yourself later on.  All it does is make you look incompetent.  But this situation also illustrates an important point regarding the duties of a QSA in conducting their assessment.  QSAs not only rely on interviews and documentation, they also rely on observations to ensure that organizations not only talk the talk but also walk the walk.

So what then is proper network segmentation?  A properly segmented network is much more than just technology.

The foundation of a properly segmented network starts with the control triad of preventative, detective and corrective controls.  Preventative network controls are going to be firewall rules and VLAN ACLs and any other controls that prevent or control access.  Detective network controls are going to be related to the monitoring you implement.  Monitoring can be real time and/or log analysis after the fact, but it should not be limited to just access to/from the CDE.  Monitoring also needs to include monitoring the network traffic for anomalous traffic.  Finally, you need corrective controls to ensure that any issues discovered with the preventative and detective controls are addressed as soon as possible.  Corrective controls are usually generated as action items created from such things as the lessons learned from an incident response plan or findings from an audit.

Once you have decided on the controls you will implement, you then need to create documentation that supports those controls.  For networks, the key documentation covers every port that is open inbound to or outbound from the CDE.  Each of those ports must be formally approved by management along with a risk analysis of having the port open.  That analysis needs to consider not just the port in question but any other relevant ports, as certain combinations of ports may increase or decrease the risk.  This risk analysis is important for a number of reasons.  First, it records the rationale for the decision at the time it was made.  That documentation can also save you if a breach occurs, as you can understand what people were thinking when they originally opened the port and identify potential methods that might have been used to cause the breach.  It is also important for the quarterly reviews, as you can use it to refresh your memory and to make changes to the rules if business conditions change.  Yes, I know firsthand that documentation is the last thing anyone wants to do.  But without it, I guarantee you will not remember six months or more down the road why you did what you did and for whom.  And in the security business, that sort of knowledge can mean the difference between being secure and being a target.
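Such a port register does not need to be fancy.  The sketch below shows one lightweight way to keep it honest; the fields, names, dates and 90-day review window are all invented for illustration:

```python
from datetime import date

# Hypothetical register of ports opened into the CDE. Every entry carries
# a business reason, an approver and the date it was last reviewed.
port_register = [
    {"port": 443,  "reason": "HTTPS to payment gateway", "approved_by": "J. Smith",
     "last_reviewed": date(2010, 1, 15)},
    {"port": 8080, "reason": "",                         "approved_by": "",
     "last_reviewed": date(2006, 2, 1)},  # stale, undocumented rule
]

def needs_attention(entry, as_of, max_age_days=90):
    """Flag entries missing a reason or approver, or overdue for quarterly review."""
    overdue = (as_of - entry["last_reviewed"]).days > max_age_days
    return not entry["reason"] or not entry["approved_by"] or overdue

review_queue = [e["port"] for e in port_register
                if needs_attention(e, as_of=date(2010, 3, 1))]
# Port 8080 lands in the review queue; port 443 does not.
```

Running something like this before each quarterly review would have surfaced the four-year-old business partner rule from my second example long ago.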

The next item that needs to be documented is the users, programs, services and organizations that have access to the CDE.  In the case of programs and services, this should be tied to the aforementioned list of ports open.  In a breach, this documentation will reduce the number of likely suspects of where the breach came from.  As a result, you can see why it is important to limit the number of people, programs and organizations that have access to the CDE.

The final piece of documentation that needs to exist is what should be done in the event a problem or an alert is generated.  If people do not know what their responsibilities are in regards to providing feedback, then alerts will be missed or ignored and problems may not be addressed as quickly as they should be.  Responses to problems or alerts should include detail regarding the conditions that created the problem or alert, the steps taken to address it and any issues that may have resulted from addressing it.  If a problem or alert is not addressed in the timeframe required, there needs to be an escalation process so that it receives the necessary visibility from management should it go unaddressed.
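An escalation process can be as simple as a table mapping how long an alert has gone unaddressed to who gets notified next.  The tiers and timings below are purely illustrative, not a recommendation for any particular organization:

```python
# Illustrative escalation policy: who is notified as an alert ages
# unaddressed. Thresholds are in minutes and are invented for this sketch.
ESCALATION_TIERS = [
    (0,   "network administrator on duty"),
    (30,  "network operations manager"),
    (120, "CISO / senior management"),
]

def escalation_target(minutes_unaddressed):
    """Return the highest tier whose threshold the alert's age has reached."""
    target = ESCALATION_TIERS[0][1]
    for threshold, who in ESCALATION_TIERS:
        if minutes_unaddressed >= threshold:
            target = who
    return target
```

The point is not the specific numbers but that the policy is written down and mechanical, so an ignored alert cannot quietly stay ignored.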

I hope these additional examples clarify what proper network segmentation requires.

