
03 Feb 11

Intent Of Requirements – 11.2

As I discussed in my earlier post on the intent of requirement 6.1, requirement 11.2 is another requirement with a hard and fast metric: four “clean” quarterly vulnerability scans.  Nice idea, but tough to execute.

To remind everyone, the intent of vulnerability scanning is to: (a) ensure that if a patch has been applied, it is in fact in place and the vulnerability has been addressed; and (b) identify any new vulnerabilities.  Requirement 11.2 is a response to at least two issues.  The first is that organizations that do not perform vulnerability scanning have no way to assess the larger risk their technology infrastructure poses to the potential loss of data.  The second is to ensure that updates and patches are in fact truly correcting vulnerabilities.  I have personally seen situations where patches have removed other patches, thus leaving a system at risk of compromise.  Granted, these situations are rare, but without vulnerability scanning, the removal of those patches would never have been known.

Requirement 11.2 requires that vulnerability scanning be performed at least quarterly.  However, given the 30-day patching rule, the fact that most vendors release patches monthly, and the fact that scanning must also be performed after all “significant” changes, an organization really needs to conduct vulnerability scanning at least monthly just to stay compliant.  As a result, if an organization is doing anything less than monthly scanning, you really need to question whether they understand the vulnerability scanning requirement.  From a ROC reporting perspective, however, you only need to report the results of four quarterly scans even though 12 or more scans may be available.
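To make that arithmetic concrete, here is a minimal sketch in Python, using hypothetical scan dates standing in for a scanner’s history.  It checks whether any calendar quarter is missing a scan (the bare minimum 11.2 asks for) and flags gaps between scans longer than a roughly monthly cadence:

```python
from datetime import date

# Hypothetical scan completion dates pulled from a scanning tool's history.
scan_dates = sorted([
    date(2010, 1, 15), date(2010, 2, 12), date(2010, 3, 16),
    date(2010, 4, 14), date(2010, 5, 12), date(2010, 7, 20),
    date(2010, 10, 13), date(2010, 11, 10), date(2010, 12, 15),
])

# Requirement 11.2 minimum: at least one scan in every calendar quarter.
quarters_scanned = {(d.year, (d.month - 1) // 3 + 1) for d in scan_dates}
for year in {d.year for d in scan_dates}:
    for q in range(1, 5):
        if (year, q) not in quarters_scanned:
            print(f"Missing scan in Q{q} {year}")

# The 30-day patching rule and monthly vendor patch releases effectively
# push the cadence to monthly: flag any gap between scans over ~35 days.
for prev, curr in zip(scan_dates, scan_dates[1:]):
    gap = (curr - prev).days
    if gap > 35:
        print(f"{gap}-day gap between {prev} and {curr} - more than a monthly cadence")
```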

Then there is the problem of the definition of a “clean” scan.  According to the PCI SSC, a “clean” scan is one that contains no severe, high or critical vulnerabilities.  You hear the complaint about this time and again from organizations: “How do you ever get a ‘clean’ scan when new vulnerabilities are introduced almost every day?”  It is a very fair question, and most security professionals know the odds of obtaining a “clean” scan are against them because 94% of all vulnerabilities are rated severe, high or critical.  The fact that tests for vulnerabilities can show up in scanning engines long before the vendor has a patch only exacerbates the situation.  Is it any wonder security professionals are frustrated with this requirement?
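Part of why the PCI SSC definition bites so hard is how blunt it is.  The sketch below shows the whole test, using a made-up findings structure rather than any particular scanner’s export format:

```python
# Hypothetical, simplified scan findings - real scanner exports vary widely.
findings = [
    {"host": "10.0.1.21", "plugin": "OpenSSL update", "severity": "high"},
    {"host": "10.0.1.35", "plugin": "Weak SSL cipher", "severity": "medium"},
    {"host": "10.0.2.14", "plugin": "Apache httpd update", "severity": "critical"},
]

# The PCI SSC definition of a "clean" scan: nothing severe, high or critical.
DISQUALIFYING = {"severe", "high", "critical"}

blockers = [f for f in findings if f["severity"].lower() in DISQUALIFYING]
print("Clean scan" if not blockers else f"Not clean: {len(blockers)} disqualifying finding(s)")
for f in blockers:
    print(f"  {f['host']}: {f['plugin']} ({f['severity']})")
```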

This is an area where the PCI SSC has really failed to bring some reality to the requirement.  For a lot of large organizations, vulnerability scans that document no severe, high or critical vulnerabilities cannot physically be achieved without some rare alignment of the planets, stars and moons.  They could be scanning 24×7 and still not have a complete, truly “clean” scan of their environment.  As a result, a “clean” scan is usually in the eye of the beholder.

But that is not the entire problem.  A lot of organizations run packages such as Oracle Commerce, IBM WebSphere or MICROS Simphony.  Oracle, IBM and MICROS typically have issues with certain OS or system software vendor patches, such as those for Windows, Linux or Apache, and they usually issue advisories to their customers indicating that those patches cannot be installed.  In these cases, severe, high or critical vulnerabilities may not get patched for months while these vendors develop an update to their solution and package the other vendors’ patches.  As a result, organizations running these solutions can have severe, critical or high vulnerabilities in their scans for quite a while.  In these cases, a QSA needs to determine whether the organization has followed the vendors’ recommendations, or any other recommendations, for mitigating the risks of having these vulnerabilities.  QSAs also need to see proof that once Oracle, IBM or MICROS issue their fixes, the organization implements them in a timely manner.
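One way a QSA might organize that evidence is sketched below.  The record layout and the 30-day implementation window are my own assumptions for illustration, not anything the PCI SSC prescribes:

```python
from datetime import date

# Hypothetical exception record for a vendor-restricted patch.
exception = {
    "vulnerability": "Apache httpd security update",
    "vendor_advisory": "Vendor advisory: do not apply until certified release",
    "advisory_date": date(2010, 9, 1),
    "mitigations": ["WAF rule in place", "host firewalled to application tier only"],
    "vendor_fix_released": date(2010, 12, 10),
    "fix_implemented": date(2011, 1, 25),
}

# Evidence a QSA would look for: management knows the vulnerability exists,
# mitigations are documented, and the vendor's fix was applied promptly once released.
if not exception["mitigations"]:
    print("No documented mitigating controls - exception is not defensible")

if exception["vendor_fix_released"] and exception["fix_implemented"]:
    lag = (exception["fix_implemented"] - exception["vendor_fix_released"]).days
    # 30 days is an assumed yardstick here, mirroring the general patching rule.
    print(f"Vendor fix implemented {lag} days after release"
          + (" - within 30 days" if lag <= 30 else " - longer than 30 days, question it"))
```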

As a result of these constraints, I have developed my own definition of a “clean” scan.  Again, I am not advocating that vulnerability scanning does not have to result in “clean” results.  I am advocating that QSAs bring reality to the process.  In my world, a “clean” scan is one that results in no severe, critical or high vulnerabilities that were not documented in any of the previous scans.  If there are repeated severe, critical or high vulnerabilities, those vulnerabilities should only be the result of vendor patching issues, and the organization must have documentation that proves management understands they exist, what measures the organization has taken to mitigate them, and when the vendor anticipates having them addressed.
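To make that definition concrete, here is a minimal sketch of the comparison I have in mind.  The data structures and the exception list are hypothetical; the point is the logic: new severe, critical or high findings fail the scan, and repeats pass only when they trace back to a documented vendor patching exception:

```python
HIGH_SEVERITY = {"severe", "high", "critical"}

# Hypothetical findings, keyed so the same issue can be matched scan to scan.
previous = {("10.0.2.14", "Apache httpd update"), ("10.0.1.21", "OpenSSL update")}
current = [
    {"host": "10.0.2.14", "plugin": "Apache httpd update", "severity": "critical"},
    {"host": "10.0.3.77", "plugin": "Windows SMB update", "severity": "high"},
]

# Repeats are tolerable only if they map to a documented vendor patching issue
# (advisory on file, mitigation in place, expected fix date from the vendor).
documented_exceptions = {("10.0.2.14", "Apache httpd update")}

clean = True
for f in current:
    if f["severity"].lower() not in HIGH_SEVERITY:
        continue
    key = (f["host"], f["plugin"])
    if key not in previous:
        print(f"New high-severity finding: {key} - scan is not 'clean'")
        clean = False
    elif key not in documented_exceptions:
        print(f"Repeat finding without documented vendor exception: {key}")
        clean = False
print("'Clean' under this definition" if clean else "Not 'clean' under this definition")
```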

Here is my rationale for this definition.  There are just too many factors that work against getting what the PCI SSC defines as a “clean” vulnerability scan.  Real-world experience says that getting a “clean” scan is nearly impossible, particularly in large environments.  I can count on one hand the number of organizations I have worked with that truly have four quarters of “clean” scans, and none of them are very large.  Therefore, as long as I can prove that severe, high or critical vulnerabilities are being addressed reliably and in as timely a manner as possible, the scans are “clean.”

I am sure all of you anal-retentive types are going to flame me for not following the standard.  However, what is a standard worth if it cannot be applied equitably to all?
