Intent Of Requirements – 11.2

As I discussed in my earlier post on the intent of requirement 6.1, requirement 11.2 is another requirement with a hard and fast metric: four “clean” quarterly vulnerability scans.  Nice idea, but tough to execute.

To remind everyone, the intent of vulnerability scanning is to: (a) ensure that if a patch has been applied, it is in fact in place and the vulnerability has been addressed; and (b) identify any new vulnerabilities.  Requirement 11.2 responds to at least two issues.  The first is that organizations that do not perform vulnerability scanning have no way to assess the larger risk their technology infrastructure poses to the potential loss of data.  The second is to ensure that updates and patches are truly correcting vulnerabilities.  I have personally seen situations where patches have removed other patches, leaving a system at risk of compromise.  Granted, these situations are rare, but without vulnerability scanning, the removal of those patches would never have been known.

Requirement 11.2 requires that vulnerability scanning be performed at least quarterly.  However, given the 30-day patching rule, the fact that most vendors release patches monthly, and the fact that scanning must also be performed after all “significant” changes, an organization really needs to conduct monthly vulnerability scanning at a minimum just to stay compliant.  As a result, if an organization is doing anything less than monthly scanning, you really need to question whether they understand the vulnerability scanning requirement.  From a ROC reporting perspective, however, you only need to report the results of four quarterly scans even though 12 or more scans are available.
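To make the cadence argument concrete, here is a minimal sketch of the kind of check a QSA might do against a list of scan dates. This is illustrative only, not PCI tooling; the function name and the roughly-one-quarter 92-day gap threshold are my own assumptions, not anything defined by the PCI SSC.

```python
from datetime import date

def meets_quarterly_cadence(scan_dates, max_gap_days=92):
    """Check that scans occurred at least quarterly: four or more scans,
    with no gap between consecutive scans exceeding roughly one quarter.
    The 92-day threshold is an illustrative assumption, not a PCI rule."""
    if len(scan_dates) < 4:
        return False
    ordered = sorted(scan_dates)
    gaps = [(later - earlier).days for earlier, later in zip(ordered, ordered[1:])]
    return all(gap <= max_gap_days for gap in gaps)

# Monthly scanning easily satisfies the quarterly minimum.
monthly = [date(2010, month, 15) for month in range(1, 13)]
print(meets_quarterly_cadence(monthly))  # True

# Three scans in a year cannot.
sparse = [date(2010, 1, 15), date(2010, 6, 15), date(2010, 11, 15)]
print(meets_quarterly_cadence(sparse))  # False
```

Note that a monthly schedule passes with plenty of slack, which is exactly why monthly scanning is the practical minimum: it also absorbs the rescans required after “significant” changes.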

Then there is the problem of the definition of a “clean” scan.  According to the PCI SSC, a “clean” scan is one that contains no severe, high or critical vulnerabilities.  You hear complaints about this time and again from organizations: “How do you ever get a ‘clean’ scan when new vulnerabilities are introduced almost every day?”  This is a very fair question, since most security professionals know the odds of obtaining a “clean” scan are against them because 94% of all vulnerabilities are rated severe, high or critical.  The fact that tests for vulnerabilities can show up in scanning engines long before the vendor has a patch only exacerbates the situation.  Is it any wonder security professionals are frustrated with this requirement?
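The strict PCI SSC definition amounts to a simple filter over the scan report: one finding at a failing severity and the whole scan fails. A minimal sketch, assuming a list of findings with a `severity` field (the function name and data shape are my own, not from any scanner's API):

```python
def is_clean_strict(findings):
    """The PCI SSC's strict definition: a scan is 'clean' only if it
    contains no severe, high or critical vulnerabilities at all."""
    failing_severities = {"severe", "high", "critical"}
    return not any(f["severity"].lower() in failing_severities for f in findings)

scan = [
    {"id": "CVE-2010-1234", "severity": "critical"},
    {"id": "CVE-2010-5678", "severity": "low"},
]
print(is_clean_strict(scan))  # False -- one critical finding fails the scan
```

With 94% of vulnerabilities landing in those failing severities, this filter explains why a large environment essentially never passes.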

This is an area where the PCI SSC has really failed to bring some reality to the requirement.  For a lot of large organizations, vulnerability scans with no severe, high or critical vulnerabilities documented cannot physically be achieved without some rare alignment of the planets, stars and moons.  They could scan 24×7 and still not have a complete, truly “clean” scan of their environment.  As a result, a “clean” scan is usually in the eye of the beholder.

But that is not the entire problem.  A lot of organizations run packages such as Oracle Commerce, IBM WebSphere or MICROS Simphony.  Oracle, IBM and MICROS typically have issues with some OS or system software vendor patches, such as those for Windows, Linux or Apache, and usually issue advisories to their customers indicating that those patches cannot be installed.  In these cases, severe, high or critical vulnerabilities may not get patched for months while these vendors develop an update to their solution and package the other vendors’ patches.  As a result, organizations running these solutions can have severe, critical or high vulnerabilities in their scans for quite a while.  Here, a QSA needs to determine whether the organization has followed the vendors’ recommendations, or used any other recommendations, for mitigating the risks of having these vulnerabilities.  QSAs also need to see proof that once Oracle, IBM or MICROS issue their fixes, these organizations implement them in a timely manner.

As a result of these constraints, I have developed my own definition of a “clean” scan.  To be clear, I am not advocating that vulnerability scanning need not produce “clean” results; I am advocating that QSAs bring reality to the process.  In my world, a “clean” scan is one in which any severe, critical or high vulnerabilities it reports were not documented in any of the previous scans.  If there are repeated severe, critical or high vulnerabilities, those vulnerabilities should only be the result of vendor patching issues, and the organization must have documentation proving that management understands they exist, what measures the organization has taken to mitigate them, and when the vendor anticipates having them addressed.
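My pragmatic definition boils down to a set comparison: take the severe/high/critical findings in the current scan, subtract anything never seen before (new findings are expected), and require that any repeats be covered by a documented vendor-patching exception. A minimal sketch under those assumptions; the function and field names are my own illustration, not any scanner's output format:

```python
def is_clean_pragmatic(current, previous_scans, documented_exceptions=frozenset()):
    """A scan passes if every severe/high/critical finding is either
    (a) new, i.e. not reported in any previous scan, or
    (b) a repeat covered by a documented vendor-patching exception
        with a mitigation plan on file."""
    failing_severities = {"severe", "high", "critical"}

    def failing_ids(findings):
        return {f["id"] for f in findings if f["severity"].lower() in failing_severities}

    prior_ids = set()
    for scan in previous_scans:
        prior_ids |= failing_ids(scan)

    repeats = failing_ids(current) & prior_ids
    return repeats <= set(documented_exceptions)

# A repeated critical finding fails unless it is a documented exception.
previous = [[{"id": "CVE-2010-0001", "severity": "high"}]]
current = [
    {"id": "CVE-2010-0001", "severity": "high"},      # repeat
    {"id": "CVE-2010-9999", "severity": "critical"},  # new -- allowed
]
print(is_clean_pragmatic(current, previous))                    # False
print(is_clean_pragmatic(current, previous, {"CVE-2010-0001"})) # True
```

The key design point is that newly disclosed vulnerabilities never count against the organization; only ignored repeats do, which is exactly the behavior the strict PCI SSC definition fails to capture.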

Here is my rationale for this definition.  There are just too many factors working against getting what the PCI SSC defines as a “clean” vulnerability scan.  Real-world experience says that getting a “clean” scan is nearly impossible, particularly in large environments.  I can count on one hand the number of organizations I have worked with that truly have four quarters of “clean” scans, and none of them are very large.  Therefore, as long as I can prove that severe, high or critical vulnerabilities are being addressed reliably and in as timely a manner as possible, then I consider the scans clean.

All of you anal-retentive types, I am sure, are going to flame me for not following the standard.  However, what is a standard worth if it cannot be applied equitably to all?


5 Responses to “Intent Of Requirements – 11.2”

  1. February 3, 2011 at 2:01 PM

    I agree fully with your post. I’ve long had an issue with this item in the PCI DSS only because it makes things sound so deceptively easy. But really, I’ve yet to run a vulnerability scan that came back as clean, even with a small scope of Windows systems. You just outright fail. And depending on the scanner, some of the items are unpatchable, or zero-days, or weaknesses that Microsoft isn’t fixing, and such. Unfortunately, managers expect and want a scanner that will scan and come back clean, without staff hours spent interpreting the results. I also have yet to see that ever happen.

    So that means you need a tool that will preserve scanning integrity (i.e. not turn off pieces of the scan) as well as past work in accepting/ignoring/explaining historical items. And the local expert who can interpret the scan results and do something about them when needed. Time, time, time! Only shops with existing security teams will ever get that done. No one else does it, if you ask me. They’re lucky to even have a scan run!

    I truly don’t see how anyone ever meets this requirement without heavy interpretation from the QSA on what they want or don’t want. And personal experience shows some of them just pass this with minimal effort/value… (And in a way, I don’t blame them!)

  2. February 3, 2011 at 11:29 AM

    What about using intrusion prevention products to compensate for severe vulnerabilities of which you have no control? A good network-based Intrusion Prevention System (IPS) will have vulnerability-based signatures which will block attempts to exploit known vulnerabilities.

    Also a good host-based database IPS would protect against database vulnerabilities which have not yet been patched.

    Your thoughts?

    • February 3, 2011 at 8:22 PM

      Either network- and/or host-based IDS/IPS and/or Web Application Firewalls (WAF) can go a long way in further protecting devices and applications from attacks. However, the problem is that some organizations begin to rely too much on those approaches, do little monitoring and also do little or nothing to improve and protect the actual application, thus resulting in a breach.

  3. February 3, 2011 at 11:26 AM

    At the PCI Summit Meeting this last year, this question was addressed by the council. They pretty much had the same take on this as you mention here, so anyone flaming you needs to check themselves. The PCI council mentioned that as long as measures were being taken to address vulnerabilities as they arise, rather than not doing anything at all about them, not having four clean scans shouldn’t keep you from being compliant. As long as you show you have made an effort to remediate the PCI-affecting vulnerabilities, your QSA should not count it against you.

    • February 3, 2011 at 8:15 PM

      Unfortunately, I was unable to attend either the meeting in Orlando or the one in Barcelona, so this is the first I’m hearing that the Council addressed this point. I am glad that they agree with my approach. That said, I continue to hear from clients and friends that acquiring banks are rejecting their vulnerability scans because they are not “clean.” So the word is not getting out that this approach is appropriate and acceptable.


