Posts Tagged ‘changing standard’

31 Oct 15

SSL Is Not Going To Go Quietly

A lot of organizations are finding out that simply turning off SSL is not an option. This is particularly true of merchants running eCommerce sites predominantly used by mobile customers or customers running older operating systems. To the surprise of a lot of IT people, it turns out that most mobile browsers do not support TLS. And while most Western PC users have reasonably current browser software, the rest of the world does not, and turning off SSL will remove a significant portion of some merchants’ customer base. As a result, for some organizations going “cold turkey” on SSL is just not an option without suffering significant consequences.

But there is a larger problem with SSL lurking inside almost every data center: appliances and data center management software that have SSL baked into their Web-based management interfaces. A lot of vendors availed themselves of OpenSSL and other open source SSL solutions to secure communications with their appliances and solutions. To remediate these solutions, an organization might be lucky enough to be able to upgrade the firmware/software. Unfortunately, a lot of organizations are finding that replacement is the only option offered by vendors to address these situations.

The bottom line is that, because of these situations, SSL and early TLS will not be addressed by just disabling them and moving on. When the PCI SSC asked Qualified Security Assessor Companies and Participating Organizations what it would take to address the SSL/early TLS situation, they were told about these issues and therefore set a deadline of June 30, 2016 to provide time to address them.

While organizations have until June 30, 2016 to address SSL and early TLS, that does not mean an organization can just sit by and do nothing until that deadline. Here are some things your organization should be doing to address SSL and early TLS if you are unable to just turn them off.

  • Get a copy of NIST Special Publication 800-52 Revision 1 titled ‘Guidelines for the Selection, Configuration and Use of Transport Layer Security (TLS) Implementations’. This publication is the Bible for how to minimize and mitigate the risks of SSL and early TLS.
  • Identify all instances where SSL or TLS is used and the versions supported (see the example after this list). It is not just the instances that need to be remediated, but all instances. The reason is that TLS v1.3 is in draft specification and its release is likely just around the corner in 2016. That is why a complete inventory is needed so that when TLS v1.3 is available you will know what remaining instances will potentially need to be updated, upgraded or possibly even replaced.
  • Implement TLS_FALLBACK_SCSV to minimize the chance of SSL/TLS downgrade. This option was developed to address the issue created by POODLE. However, be aware that only certain versions of browsers support this option, so it is not a perfect solution.
  • Monitor your external Web sites for SSL and early TLS usage. Track how many sessions are using SSL or early TLS so that you know the actual impact of any decision regarding those protocols. These statistics will also allow you to know when you might be able to pull the plug on SSL and early TLS with minimal impact.
  • Modify any external Web sites to present a message to anyone using SSL or early TLS warning them that you will no longer support SSL/early TLS as of whatever date your organization chooses to drop that support.
  • Where possible, configure the Web site to use SSL or early TLS only as the absolute last resort. Unfortunately, a lot of vendors modified their SSL solution to not allow this sort of change, so do not be surprised if that is not an option.
  • Develop a migration plan for your remaining instances where SSL or early TLS are used. Contact vendors involved and document what their plans are for dropping SSL and early TLS.
  • Be prepared to create compensating controls for the SSL and early TLS instances that you will not be able to remediate by the deadline. Unfortunately, I have a sneaking suspicion that some vendors will miss the June 30, 2016 deadline, just as some merchants will be unable to turn off SSL by the deadline. As a result, those organizations will have to put compensating controls in place to maintain PCI compliance. These compensating controls will likely be messy and complex, as enhanced monitoring will likely be the only control available.
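To get a sense of what building that inventory could look like, here is a minimal sketch, assuming Python 3.7+ and a host name of your own choosing, that probes an endpoint to see which protocol versions it will still negotiate. Whether SSLv3 or TLS 1.0 can even be offered depends on how the local OpenSSL library was built, so treat the results as a starting point, not a definitive audit.

    # Sketch only: probe which SSL/TLS protocol versions an endpoint accepts.
    import socket
    import ssl

    VERSIONS = [
        ("SSLv3", ssl.TLSVersion.SSLv3),
        ("TLS 1.0", ssl.TLSVersion.TLSv1),
        ("TLS 1.1", ssl.TLSVersion.TLSv1_1),
        ("TLS 1.2", ssl.TLSVersion.TLSv1_2),
    ]

    def probe(host, port=443, timeout=5):
        results = {}
        for name, version in VERSIONS:
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False          # we only care about the handshake
            ctx.verify_mode = ssl.CERT_NONE
            try:
                ctx.minimum_version = version   # pin the handshake to one version
                ctx.maximum_version = version
            except (ValueError, ssl.SSLError):
                results[name] = "could not test (local OpenSSL does not offer it)"
                continue
            try:
                with socket.create_connection((host, port), timeout=timeout) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host):
                        results[name] = "accepted"
            except (ssl.SSLError, OSError):
                results[name] = "rejected"
        return results

    if __name__ == "__main__":
        for name, outcome in probe("www.example.com").items():  # illustrative host
            print(f"{name}: {outcome}")

Run against each in-scope host on a schedule, output like this feeds both the inventory and the monitoring items above.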
20 Apr 15

Why Requirement 5 Must Change

This issue came to a head recently when a colleague of mine attended an ISSA chapter meeting where someone from a US government intelligence operation gave a session on anti-virus. I had entirely forgotten about this topic until my colleague brought it back up. The issue is the ineffectiveness of anti-virus solutions and why they are ineffective.

Most of us have seen the anti-virus testing results that are periodically pumped out by the various trade journals. They all point out that anti-virus is only around 30% to 40% effective in detecting malware. But what never seems to get brought up and clearly discussed is why anti-virus solutions are so bad at their job.

The reason is that anti-virus solution providers have taken a page out of the United States Centers for Disease Control (CDC) influenza playbook. The proof is in the statistics that the speaker shared.

  • For every current piece of original malware, there are around 400,000 variants of that malware making the rounds on the Internet. Variants are easy to make which is why there end up being so many so quickly.
  • To scan a computer for every piece of malware developed since day one, including variants, would take around 40,000 hours (more than four years) to complete. And that assumes you dedicate one processor core to that scan as well as another core to scan everything coming at you.
  • The signature files required to track all malware and their variants from day one would take up a significant portion of your hard drive.

Like the CDC does a scientific wild-ass guess (SWAG) to figure out what influenza vaccine to make every spring, anti-virus vendors do the same thing with their signature files every day. What anti-virus vendors do is select the most likely malware and variants your computer will encounter and that is what your anti-virus signature file will contain. The idea is that their heuristic engines and firewalls will hopefully detect the malware not included in the signature file.

Getting back to the PCI DSS, requirement 5.1.1 states that anti-virus solutions:

“Detect all known types of malicious software, remove all known types of malicious software, and protect against all known types of malicious software.”

Guess what?

Given the aforementioned revelations that signature files are incomplete, there is no anti-virus solution available today that meets those requirements of detecting and protecting against “all known types of malicious software”. All of us have, unknowingly or not, been “checking the box” on this requirement.

I, along with a number of other security professionals, have stated for years that anti-virus alone has never been adequate for protecting systems as portrayed in the PCI DSS, by the PCI SSC and by the card brands. If you truly want to protect systems from “all” malware as specified in the requirement, you need to use anti-virus in conjunction with a whitelisting/blacklisting and/or file change detection solution. Anti-virus alone is just not enough, as the repeated tests of these solutions have pointed out over the years.
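For readers unfamiliar with file change detection, here is a minimal sketch of the idea in Python; the directory and baseline file name are illustrative, and a commercial file integrity monitoring product adds real-time hooks, alerting and tamper protection on top of this.

    # Sketch only: hash a directory tree, keep a baseline, and report anything
    # added, removed or modified on the next run. Paths are illustrative.
    import hashlib
    import json
    import os

    def snapshot(root):
        """Return {relative_path: sha256_hex} for every file under root."""
        hashes = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                digest = hashlib.sha256()
                with open(path, "rb") as fh:
                    for chunk in iter(lambda: fh.read(65536), b""):
                        digest.update(chunk)
                hashes[os.path.relpath(path, root)] = digest.hexdigest()
        return hashes

    def compare(baseline, current):
        added = sorted(set(current) - set(baseline))
        removed = sorted(set(baseline) - set(current))
        changed = sorted(p for p in current if p in baseline and current[p] != baseline[p])
        return added, removed, changed

    if __name__ == "__main__":
        root, baseline_file = "/var/www", "fim_baseline.json"  # illustrative paths
        current = snapshot(root)
        if os.path.exists(baseline_file):
            with open(baseline_file) as fh:
                added, removed, changed = compare(json.load(fh), current)
            for label, paths in (("added", added), ("removed", removed), ("changed", changed)):
                for p in paths:
                    print(f"{label}: {p}")
        with open(baseline_file, "w") as fh:
            json.dump(current, fh, indent=2)

The point is simply that changes to monitored files get surfaced for review, which is exactly the signal anti-virus alone does not give you.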

The reason you still need to keep anti-virus is that these solutions do what the others do not – quarantine or remove the malware. Quarantining or removing malware is truly an art form and has become even more so as operating systems have become more sophisticated in how they manage applications. The reason for this is that, while it is easy to install software, it has become very tricky to uninstall it, if you can even uninstall it at all.

Anti-virus vendors spend the bulk of their research and development time and money in determining the best way of quarantining and/or removing malware. While a lot of whitelisting/blacklisting vendors have promised to add the ability to quarantine and remove malware, most have come to the realization that providing such features is beyond their current capabilities and not as simple as they have portrayed it in their sales meetings. As a result, I would expect it will take these whitelisting/blacklisting vendors years to have this capability, if they even bother to develop it.

So what should the PCI SSC do?

The Council needs to require additional malware detection measures in requirement 5 so that organizations are truly protecting their systems against malware. In the immortal words of Bruce Schneier, what we have now is “security theater” – the appearance of security without security. Anti-virus alone is not cutting it, so it is time to enhance that capability by requiring more than just anti-virus.

The Council should also work with the anti-virus, whitelisting/blacklisting and file monitoring vendors and demand that they provide some sort of integration between their respective products. That way, when the whitelisting/blacklisting or file monitoring solutions detect an issue, the anti-virus solution can quarantine or remove the suspected malware, which is something it is typically very good at.
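As a rough illustration of the kind of integration being asked for, here is a sketch in which a file monitoring hook hands a flagged file to an anti-virus engine; the “av-scan” command, its flags and its exit codes are entirely hypothetical placeholders, since real products each expose their own interfaces.

    # Sketch only: "av-scan" and its behavior are hypothetical placeholders.
    import subprocess

    def on_file_flagged(path):
        """Called by the file monitoring side when an unexpected change is detected."""
        try:
            result = subprocess.run(
                ["av-scan", "--quarantine-on-detect", path],  # hypothetical CLI
                capture_output=True,
                text=True,
            )
        except FileNotFoundError:
            print("no AV scanner CLI found; this is only a sketch")
            return
        # Here we assume a non-zero exit code means something was detected and
        # quarantined; a real integration would parse the product's own output.
        if result.returncode != 0:
            print(f"AV detected and quarantined malware in {path}")
        else:
            print(f"AV found nothing in {path}; escalate for manual review")

    on_file_flagged("/var/www/uploads/invoice.exe")  # illustrative path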

Is this going to detect every piece of malware?

Sorry, but some will still get through (remember, security is not perfect). But the amount that gets through should be significantly less than with just anti-virus alone.

How much gets through will depend on how the tools are configured. As a lot of you have found out, just installing file monitoring software does not detect all file changes. That is because the installation does not get tweaked to protect everything it should. That takes time and effort that a lot of people do not provide because they have other things to get done. The better you implement the other tools, the fewer pieces of malware will get through.

Reach out to the Council and let them know that you also think that requirement 5 needs improvement.

18 Feb 15

Council Surveys QSAs On SSL

This message popped into my inbox late yesterday.

[Image: PCI SSC email message, 17 February 2015]

The survey in question contains the following questions.

[Image: PCI SSC survey questions, 17 February 2015]

All of my clients have gotten rid of SSL on their public facing Web sites.

The dilemma we have is that while SSL is dead, it is baked into so many products and appliances.  My clients are therefore stuck with appliances and software products that have SSL hard coded into them.  As a result, they will be dependent on their vendors to convert to TLS.

That said, what is the risk of using SSL internally?  Not a good practice, but truthfully, what is the risk?

In my opinion, using SSL internally for the next 12 to 24 months would not be the end of the world as long as it does not become a significant attack vector.

It will be interesting to hear the results of this survey.

07 Feb 15

SSL Is Officially Declared Dead

On January 30, 2015, QSAs received the latest edition of the Council’s Assessor Newsletter.  Buried in that edition was the following statement.

Notice: PCI DSS and PA-DSS v3.1 Revisions Coming

In order to address a few minor updates and clarifications and one impacting change, there will be a revision for PCI DSS and PA-DSS v3.0 in the very near future. The impacting change is related to several vulnerabilities in the SSL protocol. Because of this, no version of SSL meets PCI SSC’s definition of “strong cryptography,” and updates to the standards are needed to address this issue. (Highlighting emphasis added by the PCI Guru)

We are working with industry stakeholders to determine the impact and the best way to address the issue. While we do not have the final publication date, our goal is to keep you apprised of the progress and to provide you with advanced notification for these pending changes. We are also preparing several FAQs that will accompany release of the revised standards.

Should you have any questions, please contact your Program Manager.

Because the announcement was titled as being about the coming v3.1 revisions to the PCI DSS and PA-DSS standards, I am sure a lot of QSAs missed this pronouncement.

Not that this should be a surprise to any QSA as the POODLE vulnerability effectively killed SSL.  The Council has now officially announced that SSL is no longer deemed to be strong cryptography.

Therefore, those of you still using SSL to secure transmissions containing cardholder data (CHD) need to stop that practice as soon as possible and convert to TLS or IPSec.

UPDATE: On February 13, 2015, the PCI SSC issued an update to their original announcement in the Assessor Newsletter.

07 Jan 15

SAQ A And SAQ A-EP Clarification

With the advent of SAQ A and A-EP, there seems to be confusion as to what qualifies a merchant for each SAQ.  I thought I covered this rather well in my post titled ‘Of Redirects And Reposts’.  But apparently that was not clear enough.

For outsourced eCommerce solutions, the criteria from SAQ A states it can be used if and only if:

“The entirety of all payment pages delivered to the consumer’s browser originates directly from a third-party PCI DSS validated service provider(s).”

For some service providers there seems to be a lot of misunderstanding as to what constitutes “originates directly from a third party”.  A lot of processors believe that if their customers’ Web sites are not storing cardholder data (CHD) or sensitive authentication data (SAD) then they are out of scope regardless of the method used to process a customer’s payment.  What they seem to forget is that applications that process and/or transmit CHD/SAD are in scope for PCI compliance just as much as those that store CHD/SAD.

For SAQ A and A-EP, the Council took their lead from Visa Europe as to what is meant by “originates directly”.  Visa Europe’s Processing eCommerce Payments guide has a great matrix that explains the difference between SAQ A and A-EP by payment processing type and merchant level.

[Image: Visa Europe matrix of SAQ A, SAQ A-EP and ROC by payment processing type and merchant level]

With redirects and iFrames, the merchant’s Web server never comes into contact with the CHD or SAD because the customer is communicating directly with the transaction processor’s server.  PayPal is a prime example of a redirect and meets the criteria of SAQ A.  With a direct post, JavaScript, XML or any other technique, the merchant’s eCommerce server is at least processing and/or transmitting the CHD/SAD to the processor’s servers.  That is because there is some form of code/executable/script/etc. running on the merchant’s eCommerce server, thus placing it directly in scope.
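To make the distinction concrete, here is a minimal sketch using Python and Flask; the processor URL and form field name are hypothetical, and real hosted-payment integrations have their own parameters, but the contrast in where the cardholder data flows is the point.

    # Sketch only: hosted redirect versus direct post, from the merchant side.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route("/pay/redirect")
    def pay_redirect():
        # SAQ A style: the browser is sent to the processor's hosted payment
        # page, so card data goes straight from the customer to the processor.
        # This server never processes, transmits or stores it.
        return redirect("https://payments.example-processor.com/hosted?order=123")

    @app.route("/pay/direct-post", methods=["POST"])
    def pay_direct_post():
        # SAQ A-EP (or worse) style: the card number arrives at the merchant's
        # server even if it is immediately forwarded and never stored. Merely
        # receiving it places this server in scope.
        pan = request.form["card_number"]
        # ... forward pan to the processor's API and discard it ...
        return "payment submitted"

    if __name__ == "__main__":
        app.run()

In the first route the merchant’s server only hands the browser a location; in the second, the PAN actually transits the merchant’s server, which is what pushes it from SAQ A toward SAQ A-EP or a full assessment.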

Where things seem to get confusing is with processors that offer multiple methods of completing payments.  Unfortunately, it appears to be just as confusing to the processors’ sales personnel.  We have encountered numerous instances where the processor’s sales people believe all of their solutions take the merchant out of scope when only the redirect/iFrame solution they offer provides such a scope reduction.  All of their other solutions place the merchant directly in scope.

The bottom line is that it is extremely important to get the transaction processor to explain how a payment is processed to determine whether your server is or is not out of scope.  Even if the sales person says the solution is an iFrame or a redirect, make sure to quiz them enough to ensure that they truly are delivering you an iFrame or redirect solution.

But a word to the wise.  Security professionals will question a merchant’s decision to not worry about the security of their eCommerce Web server because there still is a risk even with the redirect or iFrame approaches.  That risk is that the code/executable/script/etc. that invokes the redirect or iFrame on the merchant’s server gets tampered with or changed and now invokes a Web site that is not the transaction processor’s Web site.  As a result, a merchant’s customers’ CHD/SAD could be sent to Timbuktu and no one would be the wiser until goods/services are not provided due to non-payment.

As a result, security conscious merchants will, at a minimum, ensure their eCommerce servers are properly security hardened and kept current on patches, and will monitor the code/executable/script/etc. for changes.  Should a change be detected, the server would then be taken offline and fixed to ensure that transactions are properly processed.

Hopefully this provides everyone with clarity on how to use these SAQs properly.

One additional thing I would like to point out.  If you look at the Level 1 merchant line of the Visa Europe matrix, it shows ROC subscripted with either an ‘A’ or an ‘A-EP’.  I point this out because if you meet the criteria of either of the SAQs but are a Level 1 merchant, you can mark all of the ROC requirements not in the respective SAQ as ‘Not Applicable’ and only provide testing evidence for those requirements in the relevant SAQ.

26 Dec 14

PCI Compliance Is Getting More Rigorous

When Visa and MasterCard trotted out their security standards back in 2002 and 2003, the large eCommerce merchants that got to see them complained that they were too much.  Fast forward more than a decade and we still hear complaints that the PCI standards are too much.  Well if you are still complaining, things are about to get worse with version 3.  And the ever more consistent rumor is that business as usual (BAU) will be coming in v4.  If that comes to pass, I know some people that will likely jump out of windows as they did in the 1929 stock market crash.

So how is the PCI DSS getting more rigorous?

I spent some time analyzing the PCI DSS v3 as I did with v2.  From a comparison of v3 to v2, here are some of my findings.

  • There is an overall 11% increase in the number of tests in v3 versus v2.
  • Tests requiring some form of documentation have increased a whopping 83%. Not that 83% more documents will be required, just that there are 83% more tests where documentation is reviewed.  I will have more on this later in the post.
  • The number of tests requiring interviews is up 48%. Again, not necessarily involving more people, just more questions to be asked and answered.
  • Tests requiring an observation of a process or activity are up 31%. As with the others, this is not a wholesale jump in new observations, but more an increase in things that must be observed.
  • Tests involving sampling are up 33%. This actually is an increase in the number of things sampled, but not all of the 33% increase represents new samples.  This increase is the result of more clarifications from the Council to have QSAs explain what was sampled; that explanation was implied in v2, but not explicitly requested.

Speaking of sampling, not only are the number of tests involving sampling increasing but the PCI SSC has told all of the QSAs that the days of “poor” or “inappropriate” sampling are over.  I have seen Reports On Compliance where QSAs have literally used a sample of one out of thousands under the rationale of “they are all configured the same”.  If you only tested one, how can you even draw the conclusion that the remaining thousands truly are the same?  You cannot and that is a big reason why the Council is getting picky on sampling.

The Council is also tired of incomplete samples.  The example most often quoted is there are 100 servers, half Windows-based and half Red Hat Linux.  A lot of QSAs were stopping there, sampling say five of each and calling their work complete.  Wrong!

What the Council is pointing out is that the QSA must go deeper in some cases when choosing their samples.  In the example above, the QSA needs to know the function of those servers so that they sample them based on their function such as database server, directory server, application server, etc.  In addition, the Council is also saying that it may be necessary to consider the applications involved as well to ensure that sampling provides a more complete picture of the environment.  In an assessment involving multiple applications, it might be necessary to sample database and application servers used by each application and not just a random sample of servers.
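As a rough illustration of that deeper sampling, here is a sketch in Python that stratifies an inventory by platform and function before drawing the sample; the inventory records and the per-stratum count are illustrative only.

    # Sketch only: group the inventory by platform and function, then sample
    # from every group so no stratum goes untested.
    import random
    from collections import defaultdict

    inventory = [
        {"name": "db01", "platform": "Windows", "function": "database"},
        {"name": "db02", "platform": "Red Hat", "function": "database"},
        {"name": "app01", "platform": "Red Hat", "function": "application"},
        {"name": "app02", "platform": "Windows", "function": "application"},
        {"name": "dir01", "platform": "Windows", "function": "directory"},
        # ... the rest of the in-scope servers ...
    ]

    def stratified_sample(servers, per_stratum=2):
        strata = defaultdict(list)
        for server in servers:
            strata[(server["platform"], server["function"])].append(server)
        sample = []
        for members in strata.values():
            sample.extend(random.sample(members, min(per_stratum, len(members))))
        return sample

    for server in stratified_sample(inventory):
        print(server["name"], server["platform"], server["function"])

The same idea extends to stratifying by application, as described above.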

Finally, sampling might be higher for an entity’s first assessment or for the first assessment by a QSA following a prior QSA.  A higher sample size is warranted because things might not be as they are represented, and minimal sampling would likely not reveal any issues.  This is common in the financial audit industry in situations where a new auditor is coming into the organization or the operations of the organization have been under increased scrutiny by regulators, banks or prior auditors.

I stated earlier that documentation testing was up 83% and that this was related to more testing of the same documents already being collected.  That is not to say that the amount of documentation is not increasing.  Regarding the amount of documentation required for v3 versus v2, I am estimating a conservative increase of around 100%.  I have been hearing horror stories regarding the amount of documentation being requested for v3.  I would not be shocked if the amount of documentation a QSA requires is up by 150% to 200% in some instances, particularly in those situations where the QSA was not necessarily collecting all of the relevant documentation they should have been collecting.  A lot of this increase is because document counts now include observations, which were considered separately in v2.

Based on this information, you should not be shocked if your QSAC increases the fees they charge for assessing your PCI compliance under v3.  Someone has to conduct all of those tests and review all of the extra documentation generated.  Even QSACs that have been doing the right thing all along are seeing impacts from the increases in testing required by v3.  But it has definitely been worse for those QSACs that were doing as little as possible to get an assessment done.  They are seeing the most impact from these changes, will likely find them highly onerous and will have difficulty justifying the huge increases in professional fees required to cover their higher costs.  As a result, I would not be surprised if a number of QSACs stop doing PCI assessments because of the new requirements put on them.

But why are the changes occurring?

The primary reason is to minimize the “wiggle room” QSAs have in their testing so that assessments from one QSA to another are more consistent.  There has to be flexibility given to a QSA because no two organizations are alike.  In addition, what is compliant to one QSA can be non-compliant to another, even within the same QSAC.  That occurs because every individual has their own sense of risk acceptance and avoidance.  This issue can be taken out of the equation through discussion with the QSA and their superiors and, if necessary, development of mitigation strategies.

Under v2, a QSA that had a high risk tolerance could deem an organization compliant when the evidence would indicate that the organization is not compliant.  Or a QSA with a low risk tolerance could say one or more requirements are not in place in the same situation.  The new Reporting Template is an attempt to take the extremes out and reduce the wide swings in what is and is not compliant.  However, the new version of the PCI DSS does still allow some wiggle room for QSA/ISA judgment.

In addition to taking extremes in risk acceptance out of the assessment process, the Council is also trying to address the issue of QSAs judging organizations as PCI compliant when the QSA’s documentation does not support such a claim.  While the majority of QSAs thought this issue was addressed with the Reporting Instructions in v2, based on what the Council is telling us, it apparently was not.  So the Council is getting stricter and stricter in its guidance as to what is acceptable, through the language in the Reporting Template/Instructions as well as through QSA training.

Another reason for the rigor is the breaches that keep occurring.  Each breach supplies information that might need to be incorporated into the PCI DSS.  One of the best examples of this is requirement 8.5.1:

“Service providers with remote access to customer premises (for example, for support of POS systems or servers) must use a unique authentication credential (such as a password/phrase) for each customer.”

This new requirement is in response to the significant number of breaches where the attacker gained access to a merchant’s cardholder data by knowing the remote access credentials of a vendor that is supporting the merchant such as those vendors that support point of sale (POS) solutions or card transaction processing.

Finally, the changes are also an attempt to circumvent some of the “legal” arguments that occur between the QSA and their client.  I am not the only QSA that has encountered clients that come up with very legal-like arguments and interpretations of what a particular test requires.  As a result, the Council has attempted to use wording in the tests and related testing guidance that reduces or even eliminates such interpretation arguments.  However, in my experience, clients that take this “legal” approach to their assessment are not going to stop.  They are not interested in security, they are interested in “checking a box”.  But the Council does no one any favors by only allowing QSAs and ISAs to read and have copies of the Reporting Template/Instructions, which means a client does not see the testing until they go through their first PCI assessment under it.  The Reporting Template should be a public document, not one to which only QSAs and ISAs have access.

06 Nov 14

The ASV Process Is Broken – Part 3

So what are my ideas on fixing the ASV process?

Modify The ASV Program

The conditions that originally drove the ASV process made sense.  Vulnerability scanning tools were predominately open source, anyone could do vulnerability scanning and just about anyone was doing it.  The results produced by the open source tools could be highly questionable at best, and the reporting was haphazard and at times about as trustworthy as a three dollar bill.  Even in large organizations, the people doing the vulnerability scanning did not necessarily have networking, security or even IT backgrounds.  Then there was the tremendously high false positive rate of the open source tools.  As a result, most organizations ignored the results they received because they found that they could not be trusted.

The purpose of the ASV program was to bring some sanity and professionalism to the vulnerability scanning process.  MasterCard invented the ASV program (it was not called ASV then) back in 2005.  A test network was built and prospective ASVs were required to run their vulnerability scanners against this network and produce results which were then reviewed by MasterCard.  It was as much a test of the vulnerability scanning tool as it was of the person running the tool.  When the program transitioned to the PCI SSC, the Council added a multiple choice test to the process, but the virtual network testing and report review is still part of the process.

The trouble with this process is that the vulnerability scanning tool is no longer the problem.  Every ASV uses a commercial vulnerability scanning tool from Tenable, Qualys, Saint, Tripwire or a similar commercial tool vendor these days because they cannot afford to do otherwise.  Since these tool vendors are also ASVs, requiring a vulnerability scan for ASV certification has become a truly pointless exercise.  Other than not properly entering the IP addresses to be scanned or running the wrong scanning policy, there really is very little that someone can screw up with a scanning tool.

The skill in vulnerability scanning today is in reviewing the results, dealing with false positive results, working to address results with compensating controls and, with the Council’s new edict on combining reports, working to get passing quarterly scans.

Therefore, in my opinion, training and testing of ASVs should be focused on the following.

  • Determining the scope of vulnerability scanning.
  • Vulnerability scanning methodology.
  • Interpreting vulnerability scanning reports to confirm knowledge of the process and the meaning of the results.
  • What constitutes a false positive result and how to document a false positive result.
  • Development and documentation of an appropriate compensating control for a vulnerability.
  • Process for how to produce an acceptable passing scanning report from multiple reports.

And let us not limit ASV certification to just independent consulting firms.  As with the internal security assessor (ISA) program, open the ASV program to internal personnel as well.  Most large companies have independent vulnerability scanning teams that are as capable as, if not more capable than, their ASV brethren.  There is no longer any reason that these internal people cannot do the ASV scans, particularly if they meet the same standards and qualifications.

Approved Vulnerability Scanning Tools

I am not suggesting that the Council needs to develop a certification process for these tools as there are already plenty of sources that assess such tools.

The Council would publish a list based on the criteria developed by one or more independent tool assessment sources.  This list would define those tools acceptable to use for ASV vulnerability scanning.  The PCI DSS should then require that the QSA confirm that the vulnerability scanner used by the ASV is on the list in addition to confirming scope and the scanning policy used.

Require A Vulnerability Scanning Methodology

With the PCI DSS v3, the Council now requires penetration testers to use a documented and industry accepted penetration testing methodology.  Yet, there is no such requirement for vulnerability testing.

Most vulnerability scanning is done using what I call the “toss it against the wall and see what sticks” approach.  Basically, every possible vulnerability is run against every device.  Most commercial vulnerability scanners interpret banners, signatures and other markers to trim the list of vulnerabilities to be tested based on what they believe the target to be.  However, when you are scanning an external network blind, scanners cannot always properly interpret what device an IP address resolves to because of the mix of responses that they receive.  As a result, scanners either do not trim tests, increasing false positive results, or they trim them too much and the testing is not complete.

Then there is the automated nature of today’s vulnerability scanning.  While I understand the desire to reduce costs of vulnerability scanning, the “point and click” nature of today’s ASV scanning has made it flawed.  And it gets worse as organizations get passing scans.  As a QSA, I cannot tell you how many passing scans I have reviewed where an organization could be hacked six ways to Sunday with the remaining vulnerabilities.  As a security professional, it scares me to death.  But as a QSA, while I can bring these up, they get no play because they do not have a CVSS of 4.0 or greater.  You hope that these vulnerabilities get picked up in an organization’s penetration test.
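For readers who have not lived with ASV reports, here is a minimal sketch of that passing logic; the findings are made-up examples, but the 4.0 CVSS threshold is the point: anything below it never forces remediation, no matter how useful it is to an attacker.

    # Sketch only: ASV pass/fail logic with illustrative findings.
    FAILING_THRESHOLD = 4.0

    findings = [
        {"host": "203.0.113.10", "title": "TLS weak cipher suites offered", "cvss": 2.6},
        {"host": "203.0.113.10", "title": "Web server version disclosure", "cvss": 2.1},
        {"host": "203.0.113.22", "title": "Outdated OpenSSH version", "cvss": 5.3},
    ]

    failing = [f for f in findings if f["cvss"] >= FAILING_THRESHOLD]
    ignored = [f for f in findings if f["cvss"] < FAILING_THRESHOLD]

    print("scan result:", "FAIL" if failing else "PASS")
    print(f"{len(ignored)} finding(s) below {FAILING_THRESHOLD} that nobody is forced to fix:")
    for f in ignored:
        print(f"  {f['host']}: {f['title']} (CVSS {f['cvss']})")

Again, the hope is that the penetration test eventually catches what this filter lets through.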

But there is no guarantee of that happening because the penetration tester’s vulnerability scanner may or may not pick up the same vulnerabilities.  As a result, part of the penetration testing methodology should include a review of all vulnerabilities found since the last penetration test and those should be tested for in the current penetration test to ensure they have been addressed.

Obviously, I have a preference for the methodology I discussed back in Part 2.  But there are a number of methodologies posted on the Internet from a variety of good sources.  All I ask is that the vulnerability scanning methodology be integrated with the penetration testing methodology so that there are no gaps in coverage.

Require Monthly External Vulnerability Scanning

Before everyone panics, I am not asking that ASV scans be run monthly.  Although if the ASV program is modified, for organizations with internal ASVs that is a possibility.  I would still require the quarterly ASV scan, but I would add in monthly scans run by anyone deemed qualified as is allowed for internal vulnerability scans.

My primary rationale for this recommendation is driven by this simple fact.  When the dominant solution vendor releases patches on the second Tuesday of every month and the vast majority of those fixes have a CVSS score of 4.0 or greater, anyone that thinks quarterly scanning keeps them secure is seriously kidding themselves.  Not that a lot of security professionals bought into the quarterly vulnerability scanning requirement, even as a bare minimum.  But without the standard requiring it, a QSA has no leg to stand on other than to intimidate and shame people into doing monthly scanning.

Even if you are not Microsoft centric in your external environment, with the breaches that have occurred and the revelations of Shellshock and POODLE, it is painfully obvious that the quarterly requirement is not going to keep organizations secure.  I got a lot of calls after both of these vulnerabilities were announced with clients asking if their passing scans were no longer valid.  I was a bit schizophrenic in my thoughts.  On the one hand, I was glad they were at least thinking about the security implications of these vulnerabilities.  But their concern about their passing scans just highlights the importance placed on meeting a PCI requirement and passing the PCI assessment versus being secure.  Because, while I only got a few calls, you know that there are too many people congratulating themselves on dodging the bullets of Shellshock and POODLE because of the fortuitous timing of their quarterly scans and the fact that they got an additional 30, 60 or even 90 days to address them.

Then there are those organizations that run solutions such as IBM’s WebSphere or Oracle’s eCommerce suites.  These vendors not only patch their own application frameworks, but they also release the underlying operating system patches that are compatible with their application frameworks.  Worse, these vendors do not release patches monthly; they do patch releases on a quarterly, semi-annual or even annual basis.  As a result, there is a high likelihood that some operating system patches could be left out of these releases due to compatibility or timing issues.  The workaround is to mitigate any remaining vulnerabilities through additional logging, additional monitoring, changes in firewall rules, changes in IDS/IPS rules, etc.  The additional vulnerability scanning could help organizations identify these issues and address them more quickly than a quarterly cycle allows.

A side benefit of monthly scanning will be improving the ability of organizations and their QSAs to determine if an organization’s patching and mitigation processes are working according to requirement 6.1.  Quarterly scans typically document a lot of vulnerabilities, mostly those with a CVSS score under 4.0.  As a result, determining whether or not an organization is properly managing its environment can be very difficult and time consuming, leading to missed items that should be addressed.  Having reports more often can facilitate getting these issues addressed sooner rather than later and keeping the volume lower and less daunting.

The bottom line in all of this is that monthly scanning is required to even have a chance at being secure these days.  Yet the vast majority of organizations are only doing quarterly scans and thinking they are secure.  That practice must change.

So there we have it.  My thoughts on the ASV process and how I would go about fixing it.

01 Nov 14

The ASV Process Is Broken – Part 2

The next reason I believe the process is broken is with the automated scanning processes.  They do not seem to be accurately assessing the security of Web servers, firewalls, routers and other externally facing devices that form an organization’s perimeter.  In my opinion, it seems that in our drive to bring down the cost, we have created a false sense of security by having the concept of a “passing” scan and only requiring scanning on a quarterly basis.

It is hard to believe that external vulnerability scanning came into vogue more than a decade ago.  There were no automated solutions.  No popping out to a Web site, entering IP addresses and scheduling a scan.  It was all manual.  A person was required to configure the scanning solution and then launch the scanner.

There was a proven methodology used for approaching vulnerability scanning.  That methodology involved approaching the organization as an attacker would.  The security professional would go out to domain registrars and other official sources to obtain potential IP address ranges, registered domains and other public information.  Then the professional would develop a profile of the organization’s external security posture using Nmap and similar tools as well as running non-intrusive scans to identify systems and potential entry points.  But even more insidious, the security professional would do what came to be known as “Google Hacking” to find out what, if any, information was available on the Internet that might facilitate the attack.  Log data, firewall configurations and even authentication credentials could be obtained through Google Hacking.  Even today, while Google does its best to minimize the information it collects, it is amazing how much information can still be obtained with specially crafted Google searches.

Once the preliminaries were done, it was time to actually scan.  The most accepted approach used at that time was to run two scans, one fully external with security measures enabled and one with testing done inside the DMZ.  The scan done with the security measures enabled was to confirm that an organization’s firewalls, load balancers or other front ends were functioning as configured.  Essentially this scan showed what an attacker would discover in their scanning of the organization’s external presence.  There were two purposes of the second scan.  The first was to provide an inventory of vulnerabilities that were being obfuscated by the security measures so that they could be addressed or mitigated.

The second purpose of the DMZ scan was for what some today would call a form of “threat intelligence”.  The second scan provided a view of what an attacker would encounter if they were able to circumvent the organization’s external security.  It answered the question of how large the attack surface in the DMZ would be should it be compromised.  What else could be subverted and used to further an attacker’s goal of getting even deeper into a network?  The point of this part of the exercise was to provide the organization with an idea of what to look for should anomalies begin to appear in log data.
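Here is a minimal sketch of the kind of correlation those two scans made possible; the findings are illustrative, and a real comparison would key on ports, services and scanner plugin IDs rather than simple titles.

    # Sketch only: compare the external scan (through the perimeter controls)
    # with the scan run inside the DMZ. Findings are illustrative placeholders.
    external_scan = {
        ("203.0.113.10", "OpenSSL heartbeat information disclosure"),
        ("203.0.113.10", "Web server directory listing enabled"),
    }
    dmz_scan = {
        ("203.0.113.10", "OpenSSL heartbeat information disclosure"),
        ("203.0.113.10", "Web server directory listing enabled"),
        ("203.0.113.10", "FTP service allows anonymous login"),
        ("203.0.113.11", "Outdated Apache Struts component"),
    }

    exposed = external_scan & dmz_scan      # visible to an attacker today
    obfuscated = dmz_scan - external_scan   # hidden by the perimeter, but still
                                            # there if the perimeter is breached
    for host, title in sorted(exposed):
        print(f"EXPOSED    {host}: {title}")
    for host, title in sorted(obfuscated):
        print(f"OBFUSCATED {host}: {title}")

The obfuscated list is the inventory of what to fix, mitigate or watch for should the perimeter be circumvented, which is exactly the point made above.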

Throughout this scanning process, separate scans were run for firewalls, routers, load balancers, Windows devices, Linux devices, etc.  The rationale for that was to obtain clear results based on the type of device.  No reason to toss a bunch of Windows vulnerabilities against a Cisco firewall or F5 load balancer and then have to sift through the results to remove the false positive entries generated by the Windows devices they protect.

All of this information would be collated, analyzed and an actionable report produced for management and security professionals to absorb and understand the organization’s true external security posture.  Once the draft report was delivered there would be one or more calls or meetings with the organization’s security personnel to tweak the message of the results, develop action plans to address the results and then a final meeting with management and security personnel to deliver the final report.  At the end of this process, management and security personnel knew exactly where they stood from a security perspective and what needed to be done to ensure that they remained secure going forward.

Obviously, such an approach is fairly comprehensive and that costs money to produce.  And it was the cost that drove the automation of the process so that it was less labor intensive.

In the mad dash to the lowest possible cost, the following is what typically happens today.

Organizations subscribe to an ASV vulnerability scanning service.  If the organization is lucky, they go through a scoping call with a human being at the ASV service to confirm the IP addresses that are in scope for PCI compliance.  Having been on a number of these calls with ASVs, I can tell you it is more of a questionnaire approach, and there is very little done by the ASV to actually confirm the scope of the scanning.  It is more or less a conversation that explains how to enter the IP addresses into the scanning tool’s Web interface and how to schedule a scan, not a diligent conversation on whether or not a device is in scope.  There is typically a brief discussion on how to use the Web site to dispute scanning results that caused a non-passing scan.  In total, the call takes 20 to 30 minutes, at which point customers are essentially on their own.

Scans are run quarterly.  Scans may be rerun if vulnerabilities are identified that result in a non-passing scan.  Once a passing scan is obtained, a passing certificate is generated and the organization moves on to the next quarter.  Every now and then a result may be disputed, but those situations are typically rare.  The bottom line is that the ASV process is fairly automated with very little, if any, human intervention.

From a tool perspective, the vulnerability scanner is tossing every vulnerability at every IP address they are given.  Firewalls, load balancers, switches and routers are tested for Windows and Linux vulnerabilities.  False positive results can be rampant but most scanners weed out the obvious false positive results based on OS and device signatures.  But even if a few false positives remain, as long as they have a CVSS score of less than 4.0, no one cares.  And the scanner is configured to run as quickly as possible because it needs to be available for the next customer so that the ASV does not have to have an individual scanner for every customer.

All of this is driven by customers’ desire to minimize cost as much as possible.  Thus, in order to stay in business, the ASV scanning service providers have done an admirable job of driving down cost.  But at what other costs?

While I think we can all agree that the current approach is flawed, the Council would point to the fact that the quarterly ASV scanning requirement is only a minimum and that organizations should be doing their own external vulnerability scanning more often than quarterly.  And that is what a lot of organizations miss: they need to do more than what the PCI DSS requires to be secure.  No matter how loudly that message is repeated, it seems to get missed over and over again.

However, a lot of organizations do not do anything more than the PCI DSS requires.  We have this “bare minimum” mentality: if it is not mandated in the standard, then doing only what is required must mean we are secure.  This results in the false sense of security that organizations have when they only do what the PCI DSS, or any other security framework for that matter, mandates.

In addition, I cannot tell you how many organizations totally write off vulnerabilities that have a CVSS of less than 4.0 as “trivial” representing no threat.  There is this mistaken belief that, because the CVSS score does not warrant immediate patching, it is not a threat.  However, when you have them in relative abundance and of certain types, those “trivial” vulnerabilities can be put together in a particular sequence and used to compromise a network.  To the shock and horror of a lot of organizations, their security is breached in moments during their penetration test using those “trivial” vulnerabilities.

Then we have the limitations of the tools themselves.  Their vulnerability checks are constructed for a general audience, not a specific network.  As a result, depending on security measures and configurations, a particular vulnerability for, say, HTTP or FTP might not register due to a load balancer or other devices in the communication path.  Yet with a minor tweak, the same vulnerability could actually work.  And that was another point of doing that second scan inside the DMZ back in the “old” days.  Comparing the two scans allowed you to correlate that, while HTTP or FTP appeared clean from the outside, the vulnerability was still there because it showed up on the internal DMZ scan.  As a result, you could take additional steps to ensure that the vulnerability could not be exploited from the Internet.

The bottom line in all of this is that ASV scans have become a pointless piece of the compliance assessment process.  It is not that the ASVs are bad; it is that the process they are forced to follow no longer does what it did at the beginning.  It does not help that merchants and service providers treat ASV scans as a “check the box” activity and believe that because they do something quarterly they are secure.

If the process is broken, what should be done to fix it?  That is the subject of my next post.

17 Sep 14

How Many Auditors Does It Take …

The title of this post sounds like the start of one of those bad jokes involving the changing of light bulbs.  But this is a serious issue for all organizations because, in today’s regulatory environment, it can be a free for all of audit after audit after assessment after assessment.  A never ending cascade of interruptions to their business operations as they go through audits and assessments, all in the name of ensuring controls are designed and functioning properly.

But another reason I have written this post is because of all of the comments that I have received that seem to paint my position as a reason why QSAs are not needed to conduct PCI DSS assessments.  I wanted to clarify for everyone my rationale for my position.

Besides those reasons, the larger reason this issue needs to be brought up and discussed is that the PCI SSC is pushing for organizations to adopt business as usual (BAU).  For those of you that did not read the preamble of the PCI DSS v3, BAU is the integration of relevant portions of the PCI DSS into an organization’s everyday activities.  While a rather noble goal and only a recommendation at this time, one has to believe that BAU will at some point become part of the PCI DSS in a future version.

Any organization that takes the time to implement BAU is going to want to assess their implementation of BAU.  They will do this through internal/external audit activities, automated real-time monitoring via dashboards and other internal assessment processes.  Why bother with BAU if you are not going to use it to spot control issues before they become major problems?  That is, after all, the whole point of BAU.

Which brings me back to this year’s Community Meeting and the question I asked about reliance on other auditors’/assessors’ work.  The reason for the question is to minimize, as best we can, the disruptive effects of the myriad of audits/assessments that some organizations are required to submit to.  The answer provided by the Council was an emphatic “NO!”, followed by some backtracking after the audience apparently showed the Council members on stage its displeasure with their take-it-or-leave-it answer.

The audience’s displeasure, though, is genuine.  A lot of organizations question how many times user management controls such as identification of generic UIDs, last password change date, last logon date and the like need to be reviewed before such activities are deemed adequate.  How many times do facilities people need to be interrupted to prove that video monitoring is performed and the video is retained?  How many times do facilities have to be visited and reviewed for physical access controls?  There are numerous areas in all control assessment programs where those programs cover the same ground in varying levels of detail and focus.  It is these areas of commonality where the most pain is felt and we hear the lament, “Why do I have to keep covering this ground over and over with every new auditor that comes through?”

It is not like the PCI DSS cornered the market on control assessments.  Organizations have to comply with ISO, HIPAA, GLBA, FISMA, NIST and a whole host of other security and privacy control audits or assessments.  All of these audits/assessments share certain common controls for user management, physical security, facilities management, etc.  What differentiates the programs is the focus of what they are trying to protect.

One easy approach to address this situation is to combine audit/assessment meetings with personnel in physical security, facilities management, user management and the like.  Each auditor/assessor can ask their specific questions and gather evidence and conduct testing as they need.  Unfortunately, due to timing of reporting requirements, having common meetings might not always be possible.

But another approach would be to use internal auditors performing testing monthly, quarterly, etc. and then the QSA reviewing those results during their annual PCI assessment process.  There might be some independent testing required by the QSA for areas such as device configurations, change control and application development changes, but the sample sizes of any testing could be greatly reduced because of the testing done throughout the year due to the implementation of BAU.

If we as QSAs work with other auditors/assessors and agree to common criteria in our respective work programs that satisfy our common controls then we will not have to interrupt an organization to ask the same questions and alienate people as we do today.

Success of compliance programs is the result of making them as unintrusive and automatic as possible.  BAU is a great idea, but it will only succeed if the Council understands how BAU will be implemented in the real world and then adjusts their compliance programs and assessment approach to take BAU into account.  The quickest way to kill BAU is to make it painful and cumbersome which the Council is doing very effectively at the moment.

11 Sep 14

2014 North American PCI Community Meeting

Another year has come and gone and so has another PCI Community Meeting.  There were a number of interesting events at this year’s meeting.  Some I will cover here and some I still have to digest and determine what they really mean.

Good Bye Bob

This year’s meeting is the last one for the PCI SSC’s current General Manager, Bob Russo.  Over the years, Bob has been a good sport, dressing up as a cowboy and other characters.  This year’s community meeting was no exception.  At Wednesday night’s networking event, Bob showed up as Gene Simmons’ brother, decked out in silver colored platform boots, black tights, a leopard spotted top and long black hair, doing his best to show off his tongue.

A lot of us over the years have pilloried Bob for various edicts and clarifications since he was the leader of the Council.  However, if we step back, Bob got the PCI SSC off the ground and took on the thankless task of combining the disparate security standards of the five card brands into the common set of standards we have today, and then asked us to do our best to ensure that those standards were followed.

Even though I have been critical at times of Bob, he has always been pleasant and cheerful to me and others at the community meetings and other events where he was present.  Bob recognized that there are always some of us in the crowd that are very passionate about security and tried to assist us in channeling that passion.

Bob stated that he will be doing a “Goodbye Tour” to the other community meetings this year, so make sure to thank him for his efforts, shake his hand and say your goodbyes at whatever meeting you are able to attend.

P2PE v2

The first versions of P2PE were lambasted for being pointless, and the number of solutions certified, now at six, has somewhat proven that the newest of the PCI standards needed some work.  As a result, in November 2014 we will receive version 2 of the P2PE standard.  According to people I spoke with at the meeting who have seen the new version, the new standard should be much better.  Is it perfect?  No.  But it supposedly is a better version than the originals.

The most notable change to the standard is the approach the Council has taken.  Based on the presentation made, they seem to be abandoning the complete end-to-end model and are moving to a component approach based on how the solution will be implemented.

But the huge change to the standard is that a certified P2PE solution can be managed by a merchant without a third party.  That is, merchants can manage the encryption keys.

It will be interesting to see just how much the standard has changed since its last iteration only a year ago.  But most of all, it will be interesting to see how the new implementation approaches will work.

SAQs

The biggest clarification to come out of the community meeting on SAQs is the Council’s and card brands’ endorsement of using multiple SAQs for documenting compliance with the PCI standard versus doing an SAQ D.

This situation occurs when a merchant has multiple payment channels such as with merchants that have retail stores using traditional card terminals (SAQ B or B-IP) and an eCommerce presence that is outsourced (SAQ A or A-EP).

The other area of discussion that seemed to cause a bit of a stir was related to Web sites that use redirects or iFrames for payment processing.  The contention is the result of past claims by vendors of these sorts of payment solutions that their solutions placed merchants out of scope for PCI as it related to their eCommerce operations.

Ever since the issuance of the eCommerce information supplement in January 2013 and with the recent issuance by Visa of their eCommerce guidance, the outsourcing world has been buzzing about the implications.  Merchants of course have been going back to their eCommerce outsourcers and complaining about the fact that their eCommerce is no longer out of scope.

Reliance On Other’s Work

My final comment will be related to a question I asked at the Open Forum session on Wednesday.  We have been getting push back from our larger clients on our limited use of their internal audit work, SSAE 16 reports, ISO 27K audits and similar work, if we used it at all.  The driver is that clients want to minimize the amount of disruption to their personnel by all of the audits and assessments that are occurring these days.  This prompted me to ask the question at the Open Forum as to the Council’s advice on reliance on other auditor’s work to reduce sampling.

The answer I received was, “No, absolutely not.”  Quickly followed by, “Of course, I mean other auditors, not other QSAs and PA-QSAs.”

This blunt answer apparently shocked the audience, and the people on stage reacted to that shock as well.  They then backed off, saying that the Council would have to take the issue back and discuss it.

After asking this question I was approached by a number of people thanking me for bringing up the topic.  The bottom line is that organizations are audited and assessed out.  Most feel like one audit/assessment ends and another one begins.  But the truly annoying thing is that there are certain portions of all of these audits/assessment that cover the same ground over and over and over again such as with physical security, access controls and end user management.  Handled properly, it would not eliminate all testing, but it would definitely reduce the amount of testing and also reduce sample sizes.

But a very telling comment came from a member of the American Institute of Certified Public Accountants (AICPA) who told me that the AICPA has repeatedly tried to meet with members of the PCI SSC to discuss the SSAE 16 standard and how it could be used to reduce a QSA’s work only to be rebuffed by the Council.

Organizations would be more willing to go through PCI assessments if work done by their internal auditors as well as outside auditors could be leveraged to simplify their lives, not complicate them.  This will only become more important as the Council pushes organizations to adopt business as usual (BAU).

If I had one important takeaway for the Council to work on, it would be to engage with other standards bodies such as the AICPA, ISO, FFIEC and the like, and work toward providing guidance to organizations on how to use internal and external audit reports.




