On July 1, 2012, requirement 6.2.a went from a “best practice” to an official requirement. Since v2.0 of the PCI DSS was issued, there has been a very active discussion regarding what the PCI SSC was trying to get at with this revision. But it is not just requirement 6.2 that is involved: requirements 2.2, 6.5.6, 10.4, 11.2.1 and 11.2.3 also reference 6.2 and, together with requirement 6.1, comprise the PCI DSS vulnerability management program.
To refresh everyone’s memory, requirement 6.2 states:
“Establish a process to identify and assign a risk ranking to newly discovered security vulnerabilities.”
Requirement 6.2 includes a note clarifying how risk rankings should be set.
“Risk rankings should be based on industry best practices. For example, criteria for ranking “High” risk vulnerabilities may include a CVSS base score of 4.0 or above, and/or a vendor-supplied patch classified by the vendor as “critical,” and/or a vulnerability affecting a critical system component.”
As simple as this requirement is, it is amazing how complicated organizations attempt to make it. All the PCI DSS asks is that there be a documented process that identifies newly discovered vulnerabilities, assigns each a risk ranking, and documents any change the organization makes to that ranking.
The purpose of the revision to requirement 6.2 is to give organizations the ability to determine what they patch and when they patch, based on the risk a vulnerability presents in the organization’s environment. The Participating Organizations (POs) had pushed for this change in PCI DSS v2.0 because, according to the POs, QSAs were demanding patches for software that either did not exist in the environment or existed only on systems that were out of scope.
I can appreciate the POs’ situation, as I have encountered numerous patching horror stories over the years. In a number of cases, POs running Apache for Windows were told to patch Microsoft Internet Information Server (IIS) even though it was not installed, because versions of vulnerability scanners and patching programs incorrectly reported that IIS was unpatched. In another situation, a QSA was requiring Web server patches on IBM iSeries systems that were unrelated to IBM WebSphere.
The key clarification is that the PCI SSC does not require an organization to reinvent the wheel and create a unique ranking system. If an organization wants to use the CVSS ranking approach, that is perfectly fine. What the revision to requirement 6.2 allows is that if a vulnerability has a CVSS score of, say, 4.2, the organization can revise it to a value below 4.0 as long as it documents the process for that downgrade.
To perform this calculation, I recommend using the National Institute of Standards and Technology’s (NIST) Common Vulnerability Scoring System (CVSS) calculator. When you do not have systems running the vulnerable software, the key metric to adjust is the ‘Percentage of vulnerable systems (TargetDistribution)’ value under the Environmental Score Metrics. Setting this value to ‘None’ or ‘Low’ will drop the overall CVSS score well below 4.0.
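To see why this works, here is a minimal sketch of just the final environmental-score step of the CVSS v2 equations (the AdjustedTemporal input and the multiplier tables come from the CVSS v2 specification; the function name and structure are my own, and this is an illustration, not a replacement for the NIST calculator):

```python
# TargetDistribution (TD) and CollateralDamagePotential (CDP)
# multipliers, per the CVSS v2 specification.
TARGET_DISTRIBUTION = {"None": 0.0, "Low": 0.25, "Medium": 0.75, "High": 1.0}
COLLATERAL_DAMAGE = {"None": 0.0, "Low": 0.1, "Low-Medium": 0.3,
                     "Medium-High": 0.4, "High": 0.5}

def environmental_score(adjusted_temporal: float,
                        cdp: str = "None",
                        td: str = "High") -> float:
    """Final step of the CVSS v2 environmental equation:
    (AdjustedTemporal + (10 - AdjustedTemporal) * CDP) * TD,
    rounded to one decimal place."""
    score = adjusted_temporal + (10.0 - adjusted_temporal) * COLLATERAL_DAMAGE[cdp]
    return round(score * TARGET_DISTRIBUTION[td], 1)

# A vulnerability scoring 4.2 where few or no in-scope systems run the
# affected software drops well below the 4.0 "High" threshold:
print(environmental_score(4.2, td="Low"))   # 1.1
print(environmental_score(4.2, td="None"))  # 0.0
```

With TargetDistribution left at ‘High’ the score passes through unchanged, which is why the downgrade only happens when you deliberately set that metric to reflect your environment.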
The next piece of formal documentation you need is the explanation of why you changed the CVSS score. Without that documentation, you are not allowed to change the CVSS value. The documentation does not need to be extensive; it just needs to justify the changes you made to the variables used to compute the score.
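One lightweight way to capture that justification is a record per downgraded vulnerability. The field names and the CVE identifier below are illustrative, not mandated by the PCI DSS; this is simply a sketch of the minimum a QSA would want to see:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ScoreAdjustment:
    """One record justifying a change to a vulnerability's CVSS score."""
    cve_id: str
    original_score: float
    adjusted_score: float
    changed_metrics: dict   # which CVSS variables were changed, and to what
    justification: str      # the written rationale requirement 6.2 expects
    adjusted_on: date = field(default_factory=date.today)

# Hypothetical example entry:
adjustment = ScoreAdjustment(
    cve_id="CVE-2012-0000",  # placeholder identifier
    original_score=4.2,
    adjusted_score=1.1,
    changed_metrics={"TargetDistribution": "Low"},
    justification="Affected software is not installed on any in-scope system.",
)
```

A spreadsheet with the same columns works just as well; the point is that the downgrade, the metric you changed, and the reason are all written down.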
The other stumbling block for organizations is requirement 6.1 which states:
“Ensure that all system components and software are protected from known vulnerabilities by having the latest vendor-supplied security patches installed. Install critical security patches within one month of release.”
It is the final sentence in requirement 6.1 that creates the most consternation, “Install critical security patches within one month of release.” But that is not the only deadline. The note for 6.1 also has a deadline.
“An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months.”
If a risk-based approach is followed and systems are prioritized, then critical infrastructure needs to be patched within one month and the remaining systems within three months. While this gives a bit more leeway, organizations can still have trouble getting patches implemented within one month, or even within three. For merchants with simple environments and low device counts, complying with requirement 6.1 is annoying but achievable.
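Under a risk-based approach, the two deadlines reduce to a simple lookup. Reading the note’s “one month” and “three months” as 30 and 90 days is my assumption, and the function below is an illustration, not anything the standard prescribes:

```python
from datetime import date, timedelta

# Approximate the note's "one month" / "three months" as 30 / 90 days.
PATCH_WINDOWS = {
    "critical": timedelta(days=30),      # public-facing devices, databases, ...
    "non-critical": timedelta(days=90),  # less-critical internal devices
}

def patch_deadline(release_date: date, criticality: str) -> date:
    """Latest compliant install date for a patch released on release_date."""
    return release_date + PATCH_WINDOWS[criticality]

print(patch_deadline(date(2012, 7, 1), "critical"))      # 2012-07-31
print(patch_deadline(date(2012, 7, 1), "non-critical"))  # 2012-09-29
```

Feeding each released patch through a check like this, and flagging anything past its deadline, is the kind of tracking that demonstrates the prioritization is actually being followed.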
But for organizations with large device counts and documented change control processes, getting patches deployed in a 30-day cycle is typically not possible. Why, you might ask? Because by the time the vendor releases the patch, the organization obtains it, tests it against its various applications’ environments, puts it through quality assurance and regression testing for those environments, and then implements it in production, the month is gone. The quickest some of these organizations can move a patch from release to production is 45 days, and 60 days is more likely. Because of this long patch cycle, these organizations scan more often, monitor in real time and implement other mitigating controls to manage the risks from vulnerabilities they cannot patch within the 30-day window.
Another problem outside organizations’ control is application vendors. Many point-of-sale (POS) and e-Commerce software vendors issue updates on quarterly, semi-annual or even annual schedules. These vendors explicitly state in their contracts that organizations may not independently patch their systems, as doing so voids the vendor’s support agreement. As a result, an organization can be technically out of compliance with requirement 6.1 for months, and mitigations can only do so much.
This has been discussed at length during QSA and open sessions at the PCI Community Meetings. POs and QSAs argued that the 30 day deadline was not realistic and explained why. Finally, QSAs were told to use their judgment and evaluate the organization’s vulnerability management process and determine if vulnerabilities could “fall through the cracks.” If the vulnerability process is considered rigorous and vulnerabilities were believed to be processed without being lost, then the vulnerability management process could be allowed to meet the requirements of 6.1 regardless of the timeline in which vulnerabilities were actually patched.
So what are the lessons to be learned?
- The 30 and 90 day patch timeframes are goals to shoot for, and you should always try to meet them. But as long as your organization can prove it has a rigorous vulnerability management program and has documentation showing that the program works reliably, it is in compliance with requirement 6.1.
- You do not need to reinvent the wheel and come up with a new vulnerability ranking system. Use the CVSS and modify the inputs as necessary to reflect your organization’s particular environment.
- Any changes you make to a CVSS score for a vulnerability need to be justified and documented.
- Document your vulnerability management policies, standards and procedures and live by those documents. If you cannot prove your process works, then you are not in compliance with the PCI DSS.