25 Apr 15

PCI Survey

A gentleman working on his Master's thesis at the University of East London approached me a month ago for my take on PCI, and his questions resulted in the post PCI SWOT Analysis.

Now this gentleman is looking for further feedback via a survey he has constructed. I would like to encourage my readers to take his survey to further his research, as well as to provide all of us with feedback on PCI.

The survey can be taken here: https://docs.google.com/forms/d/1ua_KAPvBu0Ziu1GQ18vZj6qq1selSMMnLtVD-D1NBJo/viewform

Thank you.

20 Apr 15

Why Requirement 5 Must Change

This issue came to a head recently when a colleague of mine attended an ISSA chapter meeting where there was a session given on anti-virus by someone from a US government intelligence operation. I had entirely forgotten about this until they brought it back up. The issue is the ineffectiveness of anti-virus solutions and why they are ineffective.

Most of us have seen the anti-virus testing results that are periodically pumped out by the various trade journals. They all point out that anti-virus is only around 30% to 40% effective in detecting malware. But what never seems to get brought up and clearly discussed is why anti-virus solutions are so bad at their job.

The reason is that anti-virus solution providers have taken a page out of the United States Centers for Disease Control (CDC) influenza playbook. Consider the statistics that the speaker shared.

  • For every current piece of original malware, there are around 400,000 variants of that malware making the rounds on the Internet. Variants are easy to make, which is why so many appear so quickly.
  • To scan a computer for every piece of malware developed since day one, including variants, would take around 40,000 minutes (almost a month) to complete. And that assumes you dedicate one core for that scan to run on, as well as a core to scan everything coming at you.
  • The signature files required to track all malware and their variants from day one would take up a significant portion of your hard drive.

Just as the CDC makes a scientific wild-ass guess (SWAG) every spring about which influenza vaccine to make, anti-virus vendors do the same thing with their signature files every day. Anti-virus vendors select the most likely malware and variants your computer will encounter, and that is what your anti-virus signature file will contain. The idea is that their heuristic engines and firewalls will hopefully detect the malware not included in the signature file.
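The signature-file tradeoff can be sketched in a few lines. This is a deliberately naive illustration, not how any commercial engine actually works (real signatures use byte patterns and heuristics, not whole-file hashes), and the sample hash set is entirely hypothetical. The point is simply that detection is a set-membership test: anything the vendor left out of the shipped set is invisible to the scanner.

```python
import hashlib

# Hypothetical signature "file": SHA-256 digests of known-bad content.
# A vendor must pick which of the millions of variants to include;
# whatever is left out can only be caught by heuristics or a firewall.
SIGNATURES = {
    hashlib.sha256(b"malicious-payload-example").hexdigest(),
}

def is_flagged(blob: bytes) -> bool:
    """Flag content whose digest appears in the shipped signature set."""
    return hashlib.sha256(blob).hexdigest() in SIGNATURES
```

Any content not represented in `SIGNATURES` sails straight through, which is exactly the gap the statistics above describe.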

Getting back to the PCI DSS, requirement 5.1.1 states that anti-virus solutions:

“Detect all known types of malicious software, remove all known types of malicious software, and protect against all known types of malicious software.”

Guess what?

Given the aforementioned revelations that signature files are incomplete, there is no anti-virus solution available today that meets those requirements of detecting and protecting against “all known types of malicious software”. All of us have, knowingly or not, been “checking the box” on this requirement.

I, along with a number of other security professionals, have stated for years that anti-virus alone has never been adequate for protecting systems as portrayed in the PCI DSS, by the PCI SSC and by the card brands. If you truly want to protect systems from “all” malware as specified in the requirement, you need to use anti-virus in conjunction with a whitelisting/blacklisting and/or file change detection solution. Anti-virus alone is just not enough, as the repeated tests of these solutions have pointed out over the years.

The reason you still need to keep anti-virus is that these solutions do what the others do not – quarantine or remove the malware. Quarantining or removing malware is truly an art form, and it has become even more so as operating systems have grown more sophisticated in how they manage applications. While it is easy to install software, it has become very tricky to uninstall it, if it can be uninstalled at all.

Anti-virus vendors spend the bulk of their research and development time and money determining the best way to quarantine and/or remove malware. While a lot of whitelisting/blacklisting vendors have promised to add the ability to quarantine and remove malware, most have come to the realization that providing such features is beyond their current capabilities and not as simple as they portrayed it in their sales meetings. As a result, I would expect it will take these whitelisting/blacklisting vendors years to deliver this capability, if they even bother to develop it.

So what should the PCI SSC do?

The Council needs to require additional malware detection measures in requirement 5 so that organizations are truly protecting their systems against malware. In the immortal words of Bruce Schneier, what we have now is “security theater” – the appearance of security without actual security. Anti-virus alone is not cutting it, so it is time to enhance that capability by requiring more than just anti-virus.

The Council should also work with, and demand that, the anti-virus, whitelisting/blacklisting and file monitoring vendors provide some sort of integration between their respective products. That way, when the whitelisting/blacklisting or file monitoring solutions detect an issue, the anti-virus solution can perform the quarantine or removal of the suspected malware, which is what it is typically very good at.

Is this going to detect every piece of malware?

Sorry, but some will still get through (remember, security is not perfect). But the amount that gets through should be significantly less than with just anti-virus alone.

How much gets through will depend on how the tools are configured. As a lot of you have found out, just installing file monitoring software does not detect all file changes. That is because the installation never gets tweaked to protect everything it should, which takes time and effort that a lot of people do not invest because they have other things to get done. The better you implement the other tools, the less malware will get through.
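To see why "just installing" file monitoring is not enough, here is a minimal sketch of what such a tool does at its core, assuming a hypothetical two-step baseline/compare design. Everything a real deployment adds beyond this – choosing which paths to watch, ignoring noisy files, alerting – is precisely the tweaking that takes the time and effort described above.

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map every file under root to the SHA-256 of its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(baseline: dict, current: dict) -> dict:
    """Report files added, removed or modified since the baseline."""
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }
```

Note that this sketch only reports that a file changed; deciding which changes matter, and cleaning up after a bad one, is the part that still needs anti-virus and human judgment.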

Reach out to the Council and let them know that you also think that requirement 5 needs improvement.

16 Apr 15

ASV Guidance For SSL/TLS Vulnerabilities

Hidden by all of the news about v3.1 of the PCI DSS being published is a notice that was sent to all PCI approved scanning vendors (ASVs) from the PCI SSC regarding how to handle SSL and “early TLS” vulnerabilities.

Regarding the “early TLS” comment, the Council did define the term by referring everyone to NIST SP800-52 rev1. That NIST document essentially tells the reader that while TLS 1.1 is allowed, TLS 1.2 should be used whenever possible. In fact, NIST highly recommends that all government entities move to TLS 1.2 by January 1, 2016.

FYI, TLS 1.3 is in a draft specification at the IETF as we speak. I would expect that we will see TLS 1.3 released by the time the PCI SSC’s June 30, 2016 deadline arrives.

With that covered, what is an ASV to do with a scanning customer’s SSL and TLS 1.0/1.1 issues?

According to the letter sent to the ASVs:

“Prior to 30 June 2016: Entities that have not completed their migration should provide the ASV with documented confirmation that they have implemented a risk mitigation and migration plan and are working to complete their migration by the required date. Receipt of this confirmation should be documented by the ASV as an exception under “Exceptions, False Positives, or Compensating Controls” in the ASV Scan Report Executive Summary and the ASV may issue a result of “Pass” for that scan component or host, if the host meets all applicable scan requirements.”

The key here is that you must be mitigating the vulnerability and working to migrate to TLS 1.2.

So what would a mitigation plan look like? Most likely, you would monitor for the use of SSL and TLS 1.0/1.1 connections to those of your devices that still only support SSL and TLS 1.0/1.1.
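One way to start that monitoring is simply to probe your own endpoints and record which protocol versions they will still negotiate. Here is a rough sketch using Python's standard ssl module; the host and port are placeholders, and note that modern OpenSSL builds often refuse to offer SSLv3 at all, so the oldest protocols may not be testable this way.

```python
import socket
import ssl

def negotiated_version(host: str, port: int, version: ssl.TLSVersion):
    """Try to negotiate exactly one protocol version with a server.

    Returns the negotiated version string (e.g. "TLSv1.2"), or None if
    the server refuses that version or the connection fails.
    """
    try:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # we are testing protocol support,
        ctx.verify_mode = ssl.CERT_NONE  # not certificate validity
        ctx.minimum_version = version
        ctx.maximum_version = version
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()
    except (ssl.SSLError, OSError, ValueError):
        return None
```

For example, if `negotiated_version("pos.example.com", 443, ssl.TLSVersion.TLSv1)` returns a version string, that device still accepts early TLS and belongs on your migration list.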

For those of you that are not going to be able to migrate to TLS 1.2, the Council gives ASVs guidance there as well.

“After 30 June 2016: Entities that have not completely migrated away from SSL/early TLS will need to follow the Addressing Vulnerabilities with Compensating Controls process to verify the affected system is not susceptible to the particular vulnerabilities. For example, where SSL/early TLS is present but is not being used as a security control (e.g. is not being used to protect confidentiality of the communication).”

The reason the Council has to provide a path past June 30, 2016 is that, as I understand it, a lot of comments were received about “baked in” SSL that would require wholesale replacement of devices to correct the problem. A lot of those devices are IP-based point of interaction (POI) devices. ASVs have been instructed on the process to use to reduce the CVSS score so that the vulnerability is no longer considered “high”.

If you have any further questions regarding this announcement, I would discuss it with your ASV. As with all things PCI, every ASV will interpret this pronouncement with some variation based on its own risk aversion.

14 Apr 15

SSL Update

“The report of my death was an exaggeration.” – Mark Twain

Today the PCI SSC announced that SSL and “early TLS” (whatever that means) will not truly die until July 1, 2016. This will allow a transition period for all of you stuck with vendor baked-in SSL, as well as the procrastinators amongst us, to get converted to TLS 1.2. I say TLS 1.2 because if you are going to convert, you really should go to the most current version and not just whatever will get you by.

The complete summary of PCI DSS v3.1 changes can be found here.

UPDATE: Late on Wednesday, April 15, the PCI SSC released v3.1 of the PCI DSS. The Council uses NIST SP800-52 rev1 as the definition of “early TLS”. You can get a copy of the new version of the PCI DSS here.

03 Apr 15

PCI Issues Penetration Testing Information Supplement

On Thursday, March 26, the PCI SSC issued their latest information supplement. This one is focused on the topic of penetration testing. Not surprisingly, I have a number of comments based on what I have read. There are some good things that have come out of this effort. But in my opinion, the committee missed the boat in a few areas and those will lead to even more confusion between penetration testers, QSAs and clients.

Scoping

Talk about making a mess of a mess, I believe the writers accomplished that amazingly well in their discussion on scoping. This is probably the area that creates the most confusion and will likely create the most problems.

The first bullet on page 5 is a case in point. It took me a number of readings to even get what I think they are trying to communicate.

“The scope of the internal penetration test is the internal perimeter of the CDE from the perspective of any out-of-scope LAN segment that has access to a unique type of attack on the CDE perimeter. Critical systems or those systems that may impact the security of the CDE should also be included in the scope. Testing must include both application-layer and network-layer assessments.”

Out-of-scope LAN segments are out of scope because there are no rules that allow them access to the CDE. I agree that you need to test that these LAN segments are truly out of scope, but if there is a unique type of attack available from such a LAN segment into the CDE, that would seem to imply the LAN segment was never out of scope in the first place. This is confusing and likely to cause a lot of pen testers to force a lot of unnecessary testing and to go fishing for bizarre issues on out-of-scope network segments.

The next sentence paired with the first seems to just add to the confusion because it implies again that the out of scope LANs are not out of scope. These should have been two separate bullet points in my opinion.

In the end, what I think the writers are saying is that even if deemed out of scope, all LANs and VLANs need to be confirmed to be out of scope. However, how they have said that in my opinion leads to a lot of confusion.

If the last one did not cause your head to spin around, this one probably will. It is the third bullet at the top of page 5.

“To be considered out of scope for PCI DSS, a system component must be isolated (segmented) from the CDE, such that even if the out-of-scope system component was compromised it could not impact the security of the CDE. Therefore, the penetration test may include systems not directly related to the processing, transmission or storage of cardholder data to ensure these assets, if compromised, could not impact the security of the CDE.”

Correct me if I’m wrong, but this seems to imply that everything is in scope for penetration testing. I know the writers of this supplement used the word ‘MAY’, but we also know how ‘MAY’ tends to be interpreted by the Council (i.e., MUST) and how it will be interpreted by a lot of our clients (i.e., do not do it). Until a few penetration testing reports are reviewed by the Council, the jury will be out on the correct approach. However, coupled with the theme created by these bullets, I am guessing that the committee could not agree on how to express how to scope the penetration test.

But if those bullets were not confusing enough on scoping, the writers add insult to injury. After the bulleted list at the top of page 5, the first paragraph states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

An internal pen test does not require testing from within the cardholder data environment (CDE)? This sounds like the “Three Hop Rule” all over again. But then the writers twist it further, somehow justifying the first sentence.

All this diatribe does is further obfuscate scoping. What should simply have been said is that everything in scope for PCI compliance is to be penetration tested. Instead, we got a lot of words that, in my very humble opinion, do nothing to clarify scoping.

PA-DSS

On page 6 under 2.3.2 PA-DSS Compliant Applications we get the following guidance.

“If a payment application has been PA-DSS validated, the application’s functionality does not need to be tested as part of the entity’s PCI DSS compliance validation. However, the implementation of the application does need to be tested. This includes both the operating system and any exposed services, but not the payment application’s functionality (e.g., authentication, key management, transaction processing, etc.) since this was validated as part of the PA-DSS application validation.”

Under the PCI DSS, even PA-DSS validated applications still must be tested to prove that their implementations were completed per the vendor’s PA-DSS implementation guide. Yet for penetration testing, PA-DSS validated applications, their operating systems and exposed services are exempt?

I hope the acquiring banks are geared up for the fights this is going to cause between the penetration testers and their clients. For example, in the restaurant and hospitality industries, I can see the situation where penetration testing only occurs on firewalls, routers and switches because everything else is PA-DSS validated.

I do not believe this is what the Council wanted, but that is what this supplement is stating. If we are truly talking about security of our networks and systems, this is not how you achieve it.

Social Engineering

I was pleasantly surprised that this was included in this supplement. However, as I read through this section, I got more concerned that it was even included.

For section 2.5 Social Engineering on page 7, we encounter the following.

“Social engineering is the attempt to gain information, access, or introduce unauthorized software into the environment through the manipulation of end users. PCI DSS v3.0 reconfirms testing by requiring industry accepted penetration testing approaches (many of which include social engineering as part of their approach) and to have an approach to penetration testing that “considers the threats and vulnerabilities experienced by merchants in the last 12 months.” This may include social-engineering attacks as a method used for introducing malware into the environment.”

This makes social engineering attacks a legitimate test method if the client has been breached by such methods in the past year. But does this include physical as well as logical social engineering? Since it is not explicitly called out, I would assume so, but it should have been defined. Regardless, this is a great thing to see in this document.

On page 8, the last paragraph of the discussion on social engineering states:

“Social-engineering testing may not be appropriate or provide a meaningful result for all organizations. Although social-engineering testing is not a requirement of PCI DSS, an organization may consider documenting the reason(s) for foregoing social-engineering testing and include applicable documentation with the internal and external penetration test reports, particularly if social-engineering attacks were encountered in the last 12 months.”

What started with such promise ends on a very sour note. This last paragraph gives organizations an out. So I guess we can look forward to a lot of organizations issuing inventive explanations to avoid social engineering testing.

Some Miscellaneous Issues

On page 2, in the seventh bullet down in the Terminology section, we have the following.

National Vulnerability Database (NVD): “The U.S. government repository of standards based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance (e.g., FISMA).”

I know they just grabbed the description from the National Vulnerability Database (NVD) Web site, but that does not mean it is appropriate for a discussion of penetration testing. The Common Vulnerabilities and Exposures (CVE) database maintained by the MITRE Corporation would have been the better reference, since the NVD uses the CVE as its catalog of vulnerabilities.

I have no idea how the Federal Information Security Management Act (FISMA) ties into the NVD. If they had truly pulled the content exactly from the NVD Web site, they would have grabbed the fact that it ties to the Security Content Automation Protocol (SCAP), not FISMA.

My last comment on this topic regards the claim that the NVD (or the CVE) enables automation of vulnerability management. Had they brought in the reference to SCAP, I might have had an easier time going along with things. The NVD and CVE do not contain actual exploits; they are merely catalogs of vulnerabilities. As a result, I question the value of either the NVD or the CVE in automating vulnerability management. Yes, the CVE catalogs all of the vulnerabilities, but that is the extent of its value. In my opinion, claiming that either of these automates the vulnerability management process is a stretch. They are a foundation that allows for automation, but they do not in and of themselves actually enable it.

Under 2.3.3 Web Applications on page 6, we have the following.

“It is common for an environment to host a web application that was not specifically coded for the organization such as commercial, off-the-shelf web-mail interfaces, document-sharing tools, file-transfer services, network-device administrative interfaces, etc. In these instances, the web application does not typically need an application-layer penetration test as the entity is not responsible for the source code of this type of software. Instead, the tester should perform a network-layer test and ensure the software was implemented, configured, and is currently being maintained in a secure manner (disabling or uninstalling unused services, blocking unused ports, applying current updates, etc.).”

I can see it now: organizations arguing over the pen testing of IBM WebSphere, Oracle eCommerce or similar Web application platforms because they are a “package”. Each is a framework for which you must then write an application, but penetration testers will get into arguments over that distinction.

A Few Good Things

Yes, I have complaints, but there are some good things that are contained in this supplement.

Probably the best thing about this information supplement is the checklist at the end of the document for evaluating penetration testing reports. The Council should have a similar item for vulnerability scanning reports as well.

The discussion of the penetration testing methodology in section 4 of the document is a good read for both penetration testers and the people who contract for penetration testing services. But keep in mind it is a framework and not the entire rule/procedure set for conducting a proper penetration test. Yes, there are some good details in areas, but it is not the detailed roadmap that some people will portray it as.

Section 3 has a good discussion on how to evaluate the qualifications of a penetration tester. It was nice to see that while they document the common certifications a penetration tester may hold, they caution readers that certifications alone do not make a penetration tester qualified.

The case studies in section 6 probably do more to clarify than the previous five sections. They are good examples of properly conducted penetration tests.

Finally, there is a good discussion on the definition of a “significant change”. However, I think this discussion, as well as a discussion of the words “periodic” and “periodically”, should be placed in a discussion of risk management and assessment rather than brought up here. I wrote a post on this topic a while back to provide some guidance, and it was nice to see that my thoughts jibe with what they wrote in the supplement.

What Was Missed

The biggest item I think that was missed was how to handle the penetration testing of huge environments. Case study 6.3 discusses the concept of sampling somewhat, but it deals with a retailer with only six locations, not thousands of locations. There are also veiled references to sampling in section 2.3.4 in discussing a separate testing environment. But there is no deliberate and complete discussion on this very important topic.

Retailers such as Wal-Mart, Target and Home Depot have more than a thousand locations, each with 15 or more checkout lanes, meaning there are, at a minimum, 15,000 devices out in the field that technically need to be penetration tested. How does an organization penetration test such a large number of devices annually? The answer is that they do not test them all. It is physically impossible. However, this information supplement does not address that situation or provide guidance as to what the Council deems acceptable.

While I use some of the largest retailers in the world as my example, it is not just them as there are mid-sized retailers as well as hospitality organizations that face this same issue. I had hoped that how to approach these situations would have been explicitly addressed in greater detail but it was not.
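Absent any sanctioned method from the Council, organizations facing thousands of devices are left to invent their own sampling approach. A hypothetical, reproducible starting point is sketched below; the 5% fraction and the idea of seeding the selection for auditability are my assumptions, not anything the supplement endorses.

```python
import random

def sample_devices(device_ids, fraction=0.05, seed=None):
    """Pick a reproducible random sample of POI devices to pen test.

    Seeding makes the selection repeatable, so an assessor can verify
    that the sample was not cherry-picked after the fact.
    """
    rng = random.Random(seed)
    k = max(1, round(len(device_ids) * fraction))
    # Sort first so the result depends only on the IDs and the seed,
    # not on the order the inventory system happened to return them.
    return rng.sample(sorted(device_ids), k)
```

Whether any particular fraction would satisfy an acquiring bank or QSA is exactly the question the supplement leaves unanswered.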

The writers also missed a huge topic regarding social engineering testing. Unlike vulnerability scanning and penetration testing, in my experience, it is virtually impossible for any organization to “pass” social engineering testing. So what would be considered “passing” in a social engineering test? That is not discussed but should have been. Since this document broaches the subject, it is probably a good topic to propose for a SIG for this year’s SIG recommendations so we get social engineering covered in an information supplement rather than as a sidebar to penetration testing.

It is not that this information supplement is an entirely bad document. But the advice it provides needs to be taken with a bit of caution. As usual, the shortcomings will be leveraged by all of the “check boxers” to do as little as possible so that you get that box checked for 11.3.

31 Mar 15

PCI Council Calls It Quits

Citing that the payment channels are now secure, the PCI Security Standards Council announced this morning in a press release that they were closing down effective June 30, 2015.

20150401-PCISSCDisbanding-AprilFoolLetter

A large sigh of relief was heard worldwide upon the announcement.

Have a happy April 1st otherwise known as April Fools’ Day.

27 Mar 15

PCI SWOT Analysis

SWOT – strengths, weaknesses, opportunities and threats

I had someone ask me about my thoughts on this sort of analysis of the PCI DSS. While these comments are PCI focused, I found that they actually apply to all security frameworks.

Strengths

The biggest strength in any security framework, PCI DSS included, is they are all based on the “best practices” from a wide variety of leading experts and organizations. Essentially, security frameworks are the shared knowledge base of what it takes to have basic security. We talk today about sharing breach information better and potentially in near real time, but security frameworks are the original method of sharing such information.

Weaknesses

Unfortunately, I see a number of weaknesses with security frameworks.

The largest weakness with security frameworks I see is that most people, including a lot of security professionals, seem to believe that complying with the framework is all it takes to be secure. With the PCI DSS, a lot of this misinformation can be laid at the feet of the card brands. It was the card brands that originally marketed the PCI DSS as the “be-all, end-all” for securing the payment process.

The unfortunate fact of life for security frameworks is that they only minimize and manage security risks, they rarely ever eliminate them. Therefore, even following the PCI DSS to the letter is no guarantee that an organization could not be breached. Yet this concept of risk minimization, risk management and the fact that security is not perfect consistently gets missed by executives. So when the inevitable breach occurs, executives go after the security people for supposedly misleading them.

Another area of weakness is the time it takes to make an update to the framework. In October 2014, the National Institute of Standards and Technology (NIST) issued a bulletin on secure sockets layer (SSL) indicating that they had found a flaw in the protocol and no longer considered it secure. A few weeks later, the Internet was introduced to POODLE, and SSL was declared insecure. It took the PCI SSC a few months to react and officially declare that SSL was no longer to be relied upon for secure communications. It took vulnerability scanners almost a month to begin flagging SSL implementations as high vulnerabilities because the CVE had not yet been updated. And we were recently informed that it will be April at the earliest before we get the latest version of the PCI DSS. In the meantime, all of this administrivia did not stop attackers from using POODLE to their advantage.

The final weakness I see with security frameworks is that organizations find it impossible to execute them consistently at near 100%, 24×7. In theory, the PCI DSS will provide reasonable security against all but the most dedicated attacks, such as an advanced persistent threat (APT). For an organization to achieve basic security, it would have to execute the requirements of the PCI DSS at 95%+ and remediate any issues within a few days. Unfortunately, as we have seen in the recently released Merchant Acquirer Committee study, merchants are typically compliant with the PCI DSS only between 39% and 64% of the time – far from 95%+. Verizon’s recently released PCI report backs this up with its findings. The bottom line is that most organizations lack the discipline to execute any security framework consistently enough to achieve basic information security.

Opportunities

The biggest opportunity I see for the PCI DSS is it gives organizations the impetus to simplify their environments. The biggest reason for the failure to execute the PCI DSS consistently is because a lot of organizations have technology environments that mimic a Rube Goldberg cartoon. Only by simplifying that environment will an organization have a reasonable chance of securing it.

Another opportunity this gives organizations is a reason to enhance their security operations. Most organizations run bare bones security operations no different than other areas. However, what PCI compliance assessments typically point out is that those security operations are grossly understaffed and not capable of ensuring an organization maintains its compliance at that 95%+ level.

Related to these two opportunities is what the PCI SSC calls business as usual (BAU). BAU is the embedding of the relevant PCI requirements into an organization’s business processes to make it easier to identify non-compliance as soon as possible so that the non-compliance situation can be rectified. BAU is primarily designed to address the execution weakness but can also have a significant effect on the other weaknesses.

Finally, the last opportunity is to address the failings of an organization’s security awareness program. Organizations finally come to the realization that all it takes to defeat all of their expensive security technology is human error. The only way to address human error is extensive security awareness training. No one likes this, but in the end it is the only thing that remains when you have implemented all of the requisite security technology.

Threats

The obvious threat that will never go away is the attackers. As long as we have our interconnected and networked world, attackers will continue their attacks.

The final threat is complacency. A lot of organizations think that once they achieve PCI compliance that their work is done and that could not be further from the truth. Security is a journey not something you achieve and then move on to the next issue. The reason is that no organization is static. Therefore security must constantly evolve and change to address organizational change.

There are likely even more items that could be added to each of these categories. However, in my humble opinion, these are the key points.




