The next reason I believe the process is broken is the automated scanning process itself. It does not seem to accurately assess the security of Web servers, firewalls, routers and other externally facing devices that form an organization’s perimeter. In my opinion, in our drive to bring down cost, we have created a false sense of security with the concept of a “passing” scan and a requirement to scan only quarterly.
It is hard to believe that external vulnerability scanning came into vogue more than a decade ago. There were no automated solutions. No popping out to a Web site, entering IP addresses and scheduling a scan. It was all manual. A person was required to configure the scanning solution and then launch the scanner.
There was a proven methodology for approaching vulnerability scanning, and it involved approaching the organization as an attacker would. The security professional would go out to domain registrars and other official sources to obtain potential IP address ranges, registered domains and other public information. Then the professional would develop a profile of the organization’s external security posture using Nmap and similar tools, as well as running non-intrusive scans to identify systems and potential entry points. But even more insidious, the security professional would do what came to be known as “Google Hacking” to find out what information, if any, was available on the Internet that might facilitate an attack. Log data, firewall configurations and even authentication credentials could be obtained through Google Hacking. Even today, while Google does its best to minimize the sensitive information it indexes, it is amazing how much can still be obtained with specially crafted Google searches.
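To make the “Google Hacking” step concrete, here is a minimal sketch of how such reconnaissance queries (often called “dorks”) might be assembled. The domain and file types are hypothetical examples, not drawn from any real engagement.

```python
# A minimal sketch of "Google Hacking" reconnaissance: building search
# queries that surface files an organization never meant to expose.
# The target domain and file types below are hypothetical examples.

def build_dorks(domain: str) -> list[str]:
    """Return example search queries targeting common accidental exposures."""
    return [
        # Log files indexed under the target's domain
        f"site:{domain} filetype:log",
        # Configuration files that sometimes contain credentials
        f"site:{domain} filetype:conf OR filetype:cfg",
        # Directory listings left enabled on a web server
        f'site:{domain} intitle:"index of"',
    ]

for query in build_dorks("example.com"):
    print(query)
```

Each query narrows results to the target’s domain and then hunts for one class of accidental exposure, which is exactly how log data and configurations ended up in an attacker’s hands.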
Once the preliminaries were done, it was time to actually scan. The most accepted approach at the time was to run two scans: one fully external with security measures enabled, and one from inside the DMZ. The scan with security measures enabled confirmed that an organization’s firewalls, load balancers and other front ends were functioning as configured. Essentially, it showed what an attacker would discover when scanning the organization’s external presence. The second scan had two purposes. The first was to provide an inventory of the vulnerabilities being obfuscated by the security measures so that they could be addressed or mitigated.
The second purpose of the DMZ scan was what some today would call a form of “threat intelligence”. It provided a view of what an attacker would encounter if they were able to circumvent the organization’s external security. It answered the question of how large the attack surface in the DMZ would be should it be compromised, and what else could be subverted and used to further an attacker’s goal of getting even deeper into the network. The point of this part of the exercise was to give the organization an idea of what to look for should anomalies begin to appear in log data.
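The two-scan approach can be sketched as a pair of Nmap invocations. The target addresses and port choices below are illustrative placeholders; in practice the first scan runs from the Internet and the second from a host inside the DMZ.

```python
import shlex

# Documentation-range addresses standing in for real in-scope targets.
TARGETS = ["203.0.113.10", "203.0.113.11"]

# Scan 1: fully external, through the firewall and load balancer, showing
# what an attacker actually reaches from the Internet.
external = ["nmap", "-Pn", "-sS", "-sV", "-oX", "external.xml"] + TARGETS

# Scan 2: launched from inside the DMZ with all 65535 ports checked, to
# inventory what the perimeter security measures are obfuscating.
dmz = ["nmap", "-Pn", "-sS", "-sV", "-p-", "-oX", "dmz.xml"] + TARGETS

print(shlex.join(external))
print(shlex.join(dmz))
```

The difference between the two result sets is the inventory of issues the perimeter is hiding, which feeds directly into the “threat intelligence” purpose described above.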
Throughout this scanning process, separate scans were run for firewalls, routers, load balancers, Windows devices, Linux devices and so on. The rationale was to obtain clean results based on the type of device. There was no reason to toss a battery of Windows vulnerability checks at a Cisco firewall or F5 load balancer and then have to sift through the results to remove the false positives relevant only to the Windows devices they protect.
All of this information would be collated and analyzed, and an actionable report produced for management and security professionals so they could absorb and understand the organization’s true external security posture. Once the draft report was delivered, there would be one or more calls or meetings with the organization’s security personnel to refine the message of the results and develop action plans to address them, followed by a final meeting with management and security personnel to deliver the final report. At the end of this process, management and security personnel knew exactly where they stood from a security perspective and what needed to be done to remain secure going forward.
Obviously, such an approach is fairly comprehensive, and that comprehensiveness costs money. It was cost that drove the automation of the process to make it less labor intensive.
In the mad dash to the lowest possible cost, here is what typically happens today.
Organizations subscribe to an ASV vulnerability scanning service. If the organization is lucky, it goes through a scoping call with a human being at the ASV to confirm the IP addresses that are in scope for PCI compliance. Having been on a number of these calls with ASVs, I can say it is more of a questionnaire, and very little is done by the ASV to actually confirm the scope of the scanning. It is more or less a conversation explaining how to enter the IP addresses into the scanning tool’s Web interface and schedule a scan, not a diligent discussion of whether a given device is in scope. There is typically a brief discussion of how to use the Web site to dispute results that caused a non-passing scan. In total, the call takes 20 to 30 minutes, at which point customers are essentially on their own.
Scans are run quarterly. They may be rerun if vulnerabilities are identified that result in a non-passing scan. Once a passing scan is obtained, a passing certificate is generated and the organization moves on to the next quarter. Every now and then a result may be disputed, but those situations are rare. The bottom line is that the ASV process is largely automated, with very little, if any, human intervention.
From a tool perspective, the vulnerability scanner tosses every vulnerability check at every IP address it is given. Firewalls, load balancers, switches and routers are tested for Windows and Linux vulnerabilities. False positives can be rampant, although most scanners weed out the obvious ones based on OS and device signatures. But even if a few false positives remain, as long as they have a CVSS score of less than 4.0, no one cares. And the scanner is configured to run as quickly as possible, because it needs to be available for the next customer so that the ASV does not have to maintain an individual scanner for every customer.
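The pass/fail logic described here boils down to a single threshold. The sketch below shows how an ASV scan is typically decided: any finding scoring 4.0 or higher fails the scan, and everything below the cut-off is ignored. The hosts and findings are hypothetical.

```python
# A minimal sketch of the ASV pass/fail decision: findings with a CVSS
# score of 4.0 or higher fail the scan; everything below is disregarded.
# The hosts and findings below are hypothetical.

PASSING_THRESHOLD = 4.0  # the PCI ASV pass/fail cut-off

findings = [
    {"host": "203.0.113.10", "issue": "TLS weak cipher", "cvss": 2.6},
    {"host": "203.0.113.10", "issue": "Outdated web server", "cvss": 7.5},
    {"host": "203.0.113.11", "issue": "Banner discloses version", "cvss": 2.1},
]

failing = [f for f in findings if f["cvss"] >= PASSING_THRESHOLD]
passing_scan = not failing

print(f"Scan passes: {passing_scan}")
print(f"Findings ignored as 'trivial': {len(findings) - len(failing)}")
```

Note that two of the three findings simply vanish from consideration, which is exactly the behavior that feeds the “trivial vulnerability” problem discussed below.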
All of this is driven by customers’ desire to minimize cost as much as possible. Thus, in order to stay in business, the ASV scanning service providers have done an admirable job of driving down cost. But at what other costs?
While I think we can all agree that the current approach is flawed, the Council would point out that the quarterly ASV scan is only a minimum and that organizations should be doing their own external vulnerability scanning more often than quarterly. What a lot of organizations miss is that they need to do more than what the PCI DSS requires to be secure. No matter how loudly that message is repeated, it seems to get missed over and over again.
However, a lot of organizations do not do anything more than the PCI DSS requires. We have this “bare minimum” mentality: if it is not mandated in the standard, then doing only what is required must mean we are secure. The result is the false sense of security organizations have when they do only what the PCI DSS, or any other security framework for that matter, mandates.
In addition, I cannot tell you how many organizations totally write off vulnerabilities with a CVSS score of less than 4.0 as “trivial”, representing no threat. There is a mistaken belief that, because the CVSS score does not warrant immediate patching, the vulnerability is not a threat. However, when they exist in relative abundance and are of certain types, those “trivial” vulnerabilities can be chained together in a particular sequence and used to compromise a network. To the shock and horror of a lot of organizations, their security is breached in moments during a penetration test using exactly those “trivial” vulnerabilities.
Then we have the limitations of the tools themselves. Their vulnerability checks are written for a general audience, not a specific network. As a result, depending on security measures and configurations, a particular vulnerability check for, say, HTTP or FTP might not register because of a load balancer or other device in the communication path. Yet with a minor tweak, the same vulnerability could actually be exploited. That was another point of doing that second scan inside the DMZ back in the “old” days. Comparing the two scans allowed you to see that, while HTTP or FTP was open to the Internet, the service was in fact vulnerable, because the vulnerability showed up on the internal DMZ scan. As a result, you could take additional steps to ensure that the vulnerability could not be exploited from the Internet.
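The correlation between the two scans can be sketched as a simple set difference: findings that appear only in the DMZ scan are the ones the perimeter is masking, and the ones a minor tweak by an attacker might still reach. The findings below are hypothetical.

```python
# A sketch of correlating external and DMZ scan results. Findings visible
# only from inside the DMZ are being masked by perimeter controls but are
# still present on the hosts. All data below is hypothetical.

external_findings = {
    ("203.0.113.10", "HTTP", "Outdated web server"),
}
dmz_findings = {
    ("203.0.113.10", "HTTP", "Outdated web server"),
    ("203.0.113.10", "FTP", "Anonymous login enabled"),
    ("203.0.113.11", "HTTP", "Directory traversal"),
}

# Present on the host but hidden from the Internet-facing scan.
shielded = dmz_findings - external_findings
for host, service, issue in sorted(shielded):
    print(f"{host} {service}: {issue} (masked by perimeter controls)")
```

Everything in `shielded` deserves follow-up: the perimeter is the only thing standing between those vulnerabilities and the Internet.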
The bottom line in all of this is that ASV scans have become a pointless piece of the compliance assessment process. That is not because the ASVs are bad; it is because the process they are forced to follow no longer does what it did at the beginning. It does not help that merchants and service providers treat ASV scans as a “check the box” activity and believe that, because they do something quarterly, they are secure.
If the process is broken, what should be done to fix it? That is the subject of my next post.