Posts Tagged ‘PCI SSC’

10 Nov 21

PCI Dream Team Holiday Event

On Wednesday, December 8, at 1PM ET/1800 UTC the PCI Dream Team will host its first ever holiday event as our “gift” to the PCI community.

To join us at our first holiday event, you can register here.

As with all of our sessions, please be prepared with your hardest PCI questions and concerns to stump the Dream Team. If you are unable to attend, you can always submit questions to pcidreamteam AT gmail DOT com and then review the recording of the session at TrustedSec.

So, hang the Mistletoe and let the Eggnog flow.

Happy holidays from the Dream Team (Ben, Coop, David and Jeff) and we look forward to “seeing” you at this holiday session.

24 Oct 21

Remote PCI Assessment Guidance Issued

At the end of September 2021, the PCI Council released a Guidelines and Procedures document on conducting Remote Assessments for PCI and card brand assessments.  Most of this document is a rehash of previous Council statements and guidance.  However, there is one new element that all QSAs will need to read and comply with: the requirement to document a feasibility analysis justifying a remote assessment.

Some of the examples the Council gives as valid reasons that an on-site assessment may not be feasible include:

  • Restrictions on the ability to travel or meet in person due to health and safety concerns or government advisories.  We are all familiar with the COVID-19 pandemic and its impact on travel, particularly international travel.  However, I encountered this a while back due to a volcanic eruption in Iceland that cancelled my trip to Europe.  Since we had no way of knowing how long the eruption would cause travel disruptions and we were on a tight timeline, we conducted video conferences rather than travel.
  • Geographic locations that are physically inaccessible or difficult to reach.  I personally ran into this situation several years ago when a data center in Europe that was supposed to be decommissioned before the next assessment remained operational.  The company I worked for had shut down their EU operations and there was no way to justify 16 hours of flight time for a two-hour data center walkthrough.  We held meetings with the data center operator via video conference and did a virtual walkthrough.
  • Testing required at a location is limited to documentation and interviews and no observations of processes, systems or physical environment apply.
  • The entity operates a virtual environment without physical premises or facilities.  This has become more and more common with entities that operate in The Cloud.  Why rent expensive office space when there is no need for it?  This situation only became more prevalent with the pandemic and will likely only increase in the future.

As the Council states in their guidance,

“For many assessments, a combination of onsite and remote testing may provide a suitable balance, as it allows for increased efficiencies in the assessment process while enabling an appropriate level of assurance to be achieved in the assessment result.  For example, documentation reviews can often be performed remotely without significant loss of assurance, whereas observations of processes and environmental characteristics will generally require an onsite review.”

Regardless of whether the assessment fits into one of the bullets above, the Council wants QSAs to formally document their analysis of why the onsite assessment cannot be performed and the risks that presents to meeting the assessment objectives.  This analysis needs to be completed prior to starting any testing and is supposed to be a joint effort between the assessor and the client.

Topics that the Council recommends be addressed include, but are not limited to:

  • Confidentiality, security, and data protection requirements.
  • Availability and effectiveness of the remote assessment technologies.
  • Effects on entity’s personnel.
  • Effects on operation support.
  • Assessment scope and completeness.
  • Quality and reliability of digital evidence.

The Council further states:

“During the analysis, the entity and assessor should identify any challenges and potential risks associated with the remote testing and determine whether it is feasible for testing to be thoroughly completed to produce a high level of confidence in the assessment results.

The results of the feasibility analysis—including the risks and challenges associated with use of the remote testing methods, and any mitigating controls for overcoming the risks and challenges—should be documented and agreed upon by both the entity and assessor. A copy of the feasibility analysis results should be included with the applicable ROC/ROV. Entities and assessors may be required to produce the analysis upon request by the PCI SSC or applicable compliance-accepting entity.

The key points from that statement are: (1) the feasibility analysis needs to be submitted with the ROC/ROV and, (2) if requested by the PCI SSC or compliance accepting entity (i.e., brand or bank), the QSA is required to produce the analysis.  As a result, this is a non-optional exercise.

The feasibility analysis must document whether:

  • The assessment is feasible to be fully completed at this time using onsite methods, remote methods, or a combination of onsite and remote methods.
  • The assessment is only feasible to be partially completed at this time.
  • The assessment is not feasible currently.

According to the guidance, only assessments that are fully feasible can actually be conducted.
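The Council does not prescribe a format for the feasibility analysis, so the outline below is purely illustrative. It is a minimal sketch, expressed as a Python structure for convenience, that captures the topics listed above and one of the three possible conclusions; every value shown is made up.

```python
# Purely illustrative structure for a remote assessment feasibility analysis.
# The Council prescribes no format; topic names mirror the guidance discussed above.
feasibility_analysis = {
    "assessment": "Example Corp 2022 ROC",
    "reason_onsite_not_feasible": "No physical premises; all systems hosted in the cloud",
    "topics": {
        "confidentiality_security_data_protection": "Evidence exchanged via the assessor's secure portal",
        "remote_assessment_technologies": "Video conferencing with screen sharing for interviews and observations",
        "effects_on_entity_personnel": "Interviews scheduled across time zones; no travel required",
        "effects_on_operation_support": "No change to production support coverage",
        "scope_and_completeness": "All in-scope locations and systems reachable remotely",
        "quality_of_digital_evidence": "Screen recordings and exported configurations retained",
    },
    # Exactly one of the three outcomes from the guidance applies.
    "conclusion": "fully_feasible",   # or "partially_feasible" / "not_feasible"
    "agreed_by": ["entity", "assessor"],
}
```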

The Council includes a very important note regarding the analyses.

“The feasibility analysis determines whether the use of remote testing methods is feasible for a particular assessment.  Determining that a remote testing method is feasible does not guarantee that use of the testing method will produce the level of assurance needed for the assessor to reach a finding; this will depend on how the remote testing method is implemented and used, whether the testing can be completed for all applicable components and areas, and whether sufficient evidence is provided for the assessor to make a determination.  Assessors and entities should continue to monitor and evaluate the effectiveness of the remote testing methods throughout the assessment to confirm whether the testing methods are performing as intended and whether additional testing may be needed.”

This concept of “assurance” appears to be entirely in the eye of the beholder.  Meaning, if the Council, brands or banks determine, in their opinion, that the remote methods are not providing appropriate levels of assurance, the ROC/ROV can be rejected.  Not that a lot of banks are going to reject ROCs/ROVs on this, but I can see the Council’s AQM reviews and the card brands rejecting ROCs/ROVs on analyses that they deem flawed or incomplete.  The AQM process is the most concerning because a QSAC could end up in remediation due to a failure to appropriately document the remote assessment feasibility.

As with most edicts issued by the Council, it should have produced a form for this feasibility analysis so that everyone understands what is required.  Can the feasibility analysis be documented in section 1.2 of the reporting template, or is a separate document required?  I would recommend section 1.2 for the obvious remote assessments such as COVID travel restrictions and entities operating entirely in The Cloud, and a separate document for feasibility analyses that require a longer discussion.

Sadly, I foresee a lot of confusion and heartache in the QSAC community as we move through this new requirement.  That is because I see a lot of assessments that are blocked due to COVID travel restrictions, or where the assessed entity has no physical offices, being rejected for “flawed” feasibility analyses when they should just be allowed with no further documentation or discussion.

It will take time to see how this shakes out.

UPDATE 11/29/2021 – I received a comment on this post (see below) and the confusion is beginning. A service provider has had one of their customers request the documentation described in Appendix A of the remote assessment guidance document as well as the remote assessment feasibility study. Since these are ROC documents, there is no requirement from the Council that any organization turn over their ROC to any third party other than their acquiring bank or the card brands. The AOC is the communication document for third parties. If an organization wishes to turn over Appendix A from the guidance, that is the organization’s decision, but it is NOT mandatory nor is it required by the Council.

24 Nov 15

Information Supplements Versus The PCI DSS

Over the years, the Council has repeatedly told QSAs, Participating Organizations (POs) and anyone else that has asked questions about statements in the Information Supplements the following.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

So what, then, is the point of the Information Supplements?

Boy is that a good question. As a QSA, I often ask myself that very question after some of the inane conversations with clients and prospective clients regarding Information Supplements and their supposed “guidance”.

The first thing everyone should remember about Information Supplements is that they are developed and written by a committee at the suggestion of the Council, POs or as part of special interest work groups. These committees are made up of personnel from interested POs, QSAs, ISAs, vendors and anyone else willing to participate in their development. They are edited by a representative from the Council and reviewed by the Committee and are then submitted to all POs, QSAs and ISAs for review and comment. Similar in concept to the development and review of RFCs by the IETF.

The other key point about Information Supplements is that they are developed to give QSAs, ISAs and organizations ideas and guidance on how best to appropriately meet the requirements of the PCI DSS and the Reporting Template testing. Again, as the Council has repeatedly stated, the Information Supplements do not replace the explicit guidance and testing requirements in the PCI DSS and the Reporting Template. They are merely suggestions on an approach.

Yet time and again, QSAs and ISAs get these priceless documents tossed in our faces and are told we do not know what we are talking about. “The Information Supplement says …” is always put out there as the justification as to why an organization is doing something it should not be doing or as the rationale for why the organization is not in compliance with the PCI DSS. And we again are forced to explain that the Council never has said that an Information Supplement replaces the guidance and testing in the PCI DSS or the Reporting Template.

The first question anyone, and I do mean anyone, should ask about any statement in an Information Supplement is, “Does the PCI DSS and/or the Reporting Template explicitly say the same thing?” Those are the only two documents that matter and the only documents that your organization will be assessed against. If something is not explicitly called out in either of those documents, then it does not reflect the compliance requirements.

As an example, I was on a conference call recently regarding the Council’s Information Supplement on penetration testing. This supplement was issued in March 2015 and is possibly one of the most confusing and contradictory pieces of “guidance” we have ever encountered. In fact, it has created more confusion than it has clarified. In my very humble opinion, the Council would be better off taking it out of circulation because of all of the trouble it creates for QSAs, penetration testers, ASVs and clients. It is possibly one of the worst written of the Information Supplements and, while people both on the committee that developed it and outside it supplied the Council with numerous suggestions for changes, those changes were not incorporated into the document. Why those changes were not incorporated is anyone’s guess. But we in the PCI community ended up with possibly the most poorly expressed and misunderstood guidance available.

As usual, the client was arguing over the scope of their penetration testing. I get the fact that organizations want to minimize costs and scope as much as possible. However, when you listen to some security professionals’ arguments on this topic, you just wonder how they got to their positions, as they argue against testing systems and devices that are painfully obviously in scope.

And, as is also usual, the first piece of confusion regarding scope is in Section 2, page 5, in the first paragraph after the bullets, which states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

One would think that to any reasonably intelligent information security professional, the first part of the sentence, “It is not a requirement to test from within the CDE to the servers inside the CDE;” would be considered a pure line of garbage. Never mind that none of the recognized penetration testing methodologies ever suggest such an approach. But people arguing never consider that fact. Nope. The people arguing are so focused on cutting their PCI compliance bill that it does not matter that the statement is pure and unsupported garbage. It is considered the gospel truth. Otherwise, why would the Council allow such a statement? Good question. We have asked the Council that question and the answer back is? You guessed it.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

Again, never mind it is in no way supported by the guidance provided by the PCI DSS for requirement 11.3 which says:

“The intent of a penetration test is to simulate a real-world attack situation with a goal of identifying how far an attacker would be able to penetrate into an environment. This allows an entity to gain a better understanding of their potential exposure and develop a strategy to defend against attacks.”

But argue that point they do, even when you point out that arguing it amounts to arguing that any attacker would stop at the perimeter of the CDE and go no further.

Seriously? If you believe that, you must also believe in Santa Claus, the Easter Bunny, the Tooth Fairy and any of the other multitude of mythical creatures. Or you are just lying to yourself and are in serious denial about your organization’s security posture. But argue on they do.

Then you pair that with the second part of that first sentence, which says, “… and testing exclusively from within the CDE perimeter will not satisfy the requirement.” That just adds to the out-of-scope argument.

As I point out when bitch slapped with this terrible writing, if you go back and carefully re-read the second part of the first sentence, what it points out is that penetration testing from only inside the CDE is not sufficient to meet the penetration testing requirements of the PCI DSS requirement 11.3. In no way does that sentence say or even further imply that the CDE is out of scope. It is actually saying that penetration testing should be done from within the CDE, but that penetration testing only inside the CDE does not meet 11.3. But people will still argue that the CDE is out of scope.

That the CDE is in scope is further supported by the definition of “critical systems” in section 2.2.1 of the document, which states that not only are systems within the CDE in scope, but so are those outside the CDE that could affect the security of systems inside the CDE (i.e., what the Council and the Open PCI DSS Scoping Toolkit refer to as “connected to” systems). However, people arguing over scope rarely, if ever, tie these two sections together, and then argue that because they are in separate sections they cannot possibly be related, even though the entire document is about only one subject: penetration testing and requirement 11.3 of the PCI DSS.

So before you go off telling your QSA or ISA that an Information Supplement says something, think about what the Information Supplement actually says. Is the guidance from the Information Supplement even implied in the PCI DSS? Read the guidance in the PCI DSS and the testing procedures in the Reporting Template. If the PCI DSS or the Reporting Template do not explicitly contain the same language as the Information Supplement, then the Information Supplement is merely a suggestion.

And if the guidance from the Information Supplement does not make sense, pull your head out of your posterior and use some God-given common sense. Ask your QSA or ISA to explain it before going off half-cocked and thinking that someone could actually think such things made sense.

But again, why would the Council allow such statements? Good question. We have asked the Council that question and the answer back is? You guessed it.

“Information Supplements only offer guidance to organizations and do not replace or supplant anything stated in the PCI DSS.”

Clear as mud? You bet.

But what did you expect? It is PCI.

For all of you in the United States, have a happy and safe Thanksgiving holiday.

01 Jun 15

Supplemental Validation Procedures Coming

In the April 2015 Assessor Newsletter (received just last week) from the PCI SSC was the following announcement.

Coming Soon – Supplemental Validation procedures for Designated Entities

The analysis of PCI DSS compliance trends as well as the recent data breaches involving cardholder data has revealed that many organizations continue to view PCI DSS compliance as a periodic exercise only, and fail to implement processes to ensure that their PCI DSS controls are continuously enforced. This approach has been shown to result in a lapse in security controls between validation assessments. Organizations must remember that security is an ongoing process that must be incorporated into an entity’s overall strategy and PCI DSS security controls must be maintained on a continual basis.

In response to these trends, the PCI SSC is planning to issue additional validation procedures that are designed to help organizations illustrate how they are maintaining PCI DSS security controls on an ongoing basis. These supplemental validation procedures are due to be published in the upcoming weeks, along with guidance for understanding how and to whom these procedures may apply. Stay tuned!

Could it be that business as usual (BAU) is coming before v4 of the PCI DSS is released?

Who are these “designated entities”?

As the newsletter says, “Stay Tuned!”

UPDATE: On Friday, June 5, the Council issued the ‘PCI DSS Designated Entities Supplemental Validation’ standard. It can be downloaded from the Council’s Web site.  The document gives the following as examples of entities to which these supplemental procedures may apply: entities that (1) store, process, and/or transmit large volumes of cardholder data, (2) provide aggregation points for cardholder data, (3) have suffered significant or repeated breaches of cardholder data, or (4) are otherwise designated by the card brands as needing to go through this process.

20 Apr 15

Why Requirement 5 Must Change

This issue came to a head recently when a colleague of mine attended an ISSA chapter meeting where there was a session given on anti-virus by someone from a US government intelligence operation. I had entirely forgotten about this until they brought it back up. The issue is the ineffectiveness of anti-virus solutions and why they are ineffective.

Most of us have seen the anti-virus testing results that are periodically pumped out by the various trade journals. They all point out that anti-virus is only around 30% to 40% effective in detecting malware. But what never seems to get brought up and clearly discussed is why anti-virus solutions are so bad at their job.

The reason is that anti-virus solution providers have taken a page out of the United States Centers for Disease Control and Prevention (CDC) influenza playbook. That became clear from the statistics the speaker shared.

  • For every current piece of original malware, there are around 400,000 variants of that malware making the rounds on the Internet. Variants are easy to make, which is why there end up being so many so quickly.
  • To scan a computer for every piece of malware developed since day one, including variants, would take around 40,000 hours (more than four years; see the arithmetic sketch after this list) to complete. And that is if you dedicate a core for the scan to run on as well as a core to scan everything coming at you.
  • The signature files required to track all malware and their variants from day one would take up a significant portion of your hard drive.
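To put the speaker’s figures in perspective, here is a back-of-the-envelope calculation. It is just arithmetic over the numbers quoted above, which are the speaker’s claims rather than vendor-published data.

```python
# Rough arithmetic using the figures quoted above (the speaker's claims, not
# vendor-published numbers): scanning against every signature ever written is impractical.
SCAN_HOURS = 40_000            # claimed time to scan one computer against all historical malware
VARIANTS_PER_SAMPLE = 400_000  # claimed variants circulating per original malware sample

years = SCAN_HOURS / 24 / 365
print(f"A {SCAN_HOURS:,}-hour scan is roughly {years:.1f} years of continuous scanning.")
print(f"Each original sample implies about {VARIANTS_PER_SAMPLE:,} signatures to track.")
```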

Like the CDC does a scientific wild-ass guess (SWAG) to figure out what influenza vaccine to make every spring, anti-virus vendors do the same thing with their signature files every day. What anti-virus vendors do is select the most likely malware and variants your computer will encounter and that is what your anti-virus signature file will contain. The idea is that their heuristic engines and firewalls will hopefully detect the malware not included in the signature file.

Getting back to the PCI DSS, requirement 5.1.1 states that anti-virus solutions:

“Detect all known types of malicious software, remove all known types of malicious software, and protect against all known types of malicious software.”

Guess what?

Given the aforementioned revelations that signature files are incomplete, there is no anti-virus solution available today that meets those requirements of detecting and protecting against “all known types of malicious software”. All of us have, unknowingly or not, been “checking the box” on this requirement.

I along with a number of other security professionals have stated for years that anti-virus alone has never been adequate for protecting systems as portrayed in the PCI DSS, by the PCI SSC and by the card brands. If you truly want to protect systems from “all” malware as specified in the requirement, you need to use anti-virus in conjunction with a whitelisting/blacklisting and/or file change detection solution. Anti-virus alone is just not enough as the repeated tests of these solutions have pointed out over the years.

The reason you still need to keep anti-virus is that these solutions do what the others do not – quarantine or remove the malware. Quarantining or removing malware is truly an art form and has become even more so as operating systems have become more sophisticated in how they manage applications. The reason for this is that, while it is easy to install software, it has become very tricky to uninstall it, if you can uninstall it at all.

Anti-virus vendors spend the bulk of their research and development time and money in determining the best way to quarantine and/or remove malware. While a lot of whitelisting/blacklisting vendors have promised to add the ability to quarantine and remove malware, most have come to the realization that providing such features is beyond their current capabilities and not as simple as they portrayed it in their sales meetings. As a result, I would expect it will take these whitelisting/blacklisting vendors years to have this capability, if they even bother to develop it.

So what should the PCI SSC do?

The Council needs to require additional malware detection measures in requirement 5 so that organizations are truly protecting their systems against malware. In the immortal words of Bruce Schneier, what we have now is “security theater” – the appearance of security without security. Anti-virus alone is not cutting it, so it is time to enhance that capability by requiring more than just anti-virus.

The Council should also work with and demand that the anti-virus, whitelisting/blacklisting and file monitoring vendors provide some sort of integration between their respective products. That way, when the whitelisting/blacklisting or file monitoring solutions detect an issue, the anti-virus solution can do the quarantine or removal of the suspected malware, which is something it is typically very good at.

Is this going to detect every piece of malware?

Sorry, but some will still get through (remember, security is not perfect). But the amount that gets through should be significantly less than with just anti-virus alone.

How much gets through will depend on how the tools are configured. As a lot of you have found out, just installing file monitoring software does not detect all file changes. That is because the installation does not get tweaked to protect everything it should. That takes time and effort that a lot of people do not put in because they have other things to get done. The better you implement the other tools, the fewer pieces of malware will get through.
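For readers who have never looked inside a file change detection tool, the sketch below shows the core idea: hash a baseline of monitored files, re-hash later, and report anything that differs. It is a minimal illustration, not a product recommendation; the monitored paths are placeholders, and a real deployment needs far broader coverage, tuning and error handling, which is exactly the tweaking described above.

```python
import hashlib
from pathlib import Path

# Placeholder directories; a real deployment monitors far more and needs tuning.
MONITORED = [Path("/etc"), Path("/usr/local/bin")]

def snapshot(paths):
    """Return a {file_path: sha256} map for every regular file under the given directories."""
    result = {}
    for root in paths:
        for f in root.rglob("*"):
            if f.is_file():
                result[str(f)] = hashlib.sha256(f.read_bytes()).hexdigest()
    return result

def changed_files(baseline, current):
    """Files added, removed, or modified since the baseline was taken."""
    return {
        path
        for path in baseline.keys() | current.keys()
        if baseline.get(path) != current.get(path)
    }

# Typical use: take a baseline once, then compare periodically.
# baseline = snapshot(MONITORED)
# ...later...
# print(changed_files(baseline, snapshot(MONITORED)))
```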

Reach out to the Council and let them know that you also think that requirement 5 needs improvement.

14 Apr 15

SSL Update

“The report of my death was an exaggeration.” – Mark Twain

Today the PCI SSC announced that SSL and “early TLS” (whatever that means) will not truly die until July 1, 2016. This will allow a transition period for all of you stuck with vendor baked-in SSL, as well as the procrastinators amongst us, to get converted to TLS 1.2. I say TLS 1.2 because if you are going to convert, you really should go to the most current version and not just to whatever will get you by.
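For code you control, disabling SSL and early TLS can be as simple as setting a floor on the negotiated protocol version. The sketch below assumes a Python 3.7+ client; appliances and vendor-embedded SSL need equivalent settings from the vendor, which is exactly why the transition period exists.

```python
import http.client
import ssl

# Build a TLS context that refuses SSLv3, TLS 1.0 and TLS 1.1.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any connection made with this context negotiates TLS 1.2 or better, or fails.
conn = http.client.HTTPSConnection("example.com", context=context)
```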

The complete summary of PCI DSS v3.1 changes can be found here.

UPDATE: Late on Wednesday, April 15, the PCI SSC released v3.1 of the PCI DSS. The Council uses NIST SP800-52 rev1 as the definition of “early TLS”. You can get a copy of the new version of the PCI DSS here.

03 Apr 15

PCI Issues Penetration Testing Information Supplement

On Thursday, March 26, the PCI SSC issued their latest information supplement. This one is focused on the topic of penetration testing. Not surprisingly, I have a number of comments based on what I have read. There are some good things that have come out of this effort. But in my opinion, the committee missed the boat in a few areas and those will lead to even more confusion between penetration testers, QSAs and clients.

Scoping

Talk about making a mess of a mess, I believe the writers accomplished that amazingly well in their discussion on scoping. This is probably the area that creates the most confusion and will likely create the most problems.

On page 5, in the first bullet, talk about confusing. It took me a number of readings to even get what I think they are trying to communicate.

“The scope of the internal penetration test is the internal perimeter of the CDE from the perspective of any out-of-scope LAN segment that has access to a unique type of attack on the CDE perimeter. Critical systems or those systems that may impact the security of the CDE should also be included in the scope. Testing must include both application-layer and network-layer assessments.”

Out of scope LAN segments are out of scope because there are no rules that allow them access to the CDE. I agree that you need to test that these LAN segments are truly out of scope, but if there is a unique type of attack available from such a LAN segment into the CDE, that would seem to imply that the LAN segment was not out of scope. Confusing and likely to cause a lot of pen testers to force a lot of unnecessary testing and fishing for bizarre issues on out of scope network segments.

The next sentence paired with the first seems to just add to the confusion because it implies again that the out of scope LANs are not out of scope. These should have been two separate bullet points in my opinion.

In the end, what I think the writers are saying is that even if deemed out of scope, all LANs and VLANs need to be confirmed to be out of scope. However, how they have said that in my opinion leads to a lot of confusion.
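Confirming that a supposedly out-of-scope segment really is out of scope usually comes down to proving that nothing on it can reach the CDE. The sketch below is a simplified illustration of that idea, run from a host on the out-of-scope segment; the addresses and ports are placeholders, and a real segmentation test covers all protocols and is performed as part of the penetration test, not as a substitute for it.

```python
import socket

# Hypothetical CDE hosts and services; every connection attempt from an
# out-of-scope segment should fail if segmentation is effective.
CDE_TARGETS = [("10.10.1.5", 443), ("10.10.1.6", 1433), ("10.10.1.7", 22)]

for host, port in CDE_TARGETS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"FAIL: reached {host}:{port} from an out-of-scope segment")
    except OSError:
        print(f"OK: {host}:{port} is unreachable")
```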

If the last one did not cause your head to spin around, this one probably will. It is the third bullet at the top of page 5.

“To be considered out of scope for PCI DSS, a system component must be isolated (segmented) from the CDE, such that even if the out-of-scope system component was compromised it could not impact the security of the CDE. Therefore, the penetration test may include systems not directly related to the processing, transmission or storage of cardholder data to ensure these assets, if compromised, could not impact the security of the CDE.”

Correct me if I’m wrong, but this seems to imply that everything is in scope for penetration testing. I know the writers of this supplement have used the word ‘MAY’, but we also know how ‘MAY’ tends to be interpreted by the Council (i.e., MUST) and how it will be interpreted by a lot of our clients (i.e., do not do it). Until a few penetration testing reports are reviewed by the Council, the verdict will be out on the correct approach. However, coupled with the theme created by these bullets, I am guessing that the committee could not agree on how to express how to scope the penetration test.

But if those bullets were not confusing enough on scoping, the writers add insult to injury. After the bulleted list at the top of page 5, the first paragraph states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

An internal pen test does not require testing from within the cardholder data environment (CDE)? This sounds like the “Three Hop Rule” all over again. But then the second sentence twists it further, somehow justifying the first.

All this diatribe does is further obfuscate scoping. What simply should have been said is that everything in scope for PCI compliance is to be penetration tested. Instead, we got a lot of words that, in my very humble opinion, do nothing to clarify scoping.

PA-DSS

On page 6 under 2.3.2 PA-DSS Compliant Applications we get the following guidance.

“If a payment application has been PA-DSS validated, the application’s functionality does not need to be tested as part of the entity’s PCI DSS compliance validation. However, the implementation of the application does need to be tested. This includes both the operating system and any exposed services, but not the payment application’s functionality (e.g., authentication, key management, transaction processing, etc.) since this was validated as part of the PA-DSS application validation.”

Under the PCI DSS, even PA-DSS validated applications still must be tested to prove that their implementations were completed per the vendor’s PA-DSS implementation guide. Yet for penetration testing, PA-DSS validated applications, their operating systems and exposed services are exempt?

I hope the acquiring banks are geared up for the fights this is going to cause between the penetration testers and their clients. For example, in the restaurant and hospitality industries, I can see the situation where penetration testing only occurs on firewalls, routers and switches because everything else is PA-DSS validated.

I do not believe this is what the Council wanted, but that is what this supplement is stating. If we are truly talking about security of our networks and systems, this is not how you achieve it.

Social Engineering

I was pleasantly surprised that this was included in this supplement. However, as I read through this section, I got more concerned that it was even included.

For section 2.5 Social Engineering on page 7, we encounter the following.

“Social engineering is the attempt to gain information, access, or introduce unauthorized software into the environment through the manipulation of end users. PCI DSS v3.0 reconfirms testing by requiring industry accepted penetration testing approaches (many of which include social engineering as part of their approach) and to have an approach to penetration testing that “considers the threats and vulnerabilities experienced by merchants in the last 12 months.” This may include social-engineering attacks as a method used for introducing malware into the environment.”

This makes social engineering attacks legitimate if the client has been breached by such methods in the past year. But does this include physical as well as logical social engineering? Since it is not explicitly called out, I would assume so, but it should have been defined. Regardless of it not being explicitly called out, this is a great thing to see in this document.

On page 8, the last paragraph of the discussion on social engineering states:

“Social-engineering testing may not be appropriate or provide a meaningful result for all organizations. Although social-engineering testing is not a requirement of PCI DSS, an organization may consider documenting the reason(s) for foregoing social-engineering testing and include applicable documentation with the internal and external penetration test reports, particularly if social-engineering attacks were encountered in the last 12 months.”

What started with such promise ends on a very sour note. This last paragraph gives organizations an out. So I guess we can look forward to a lot of organizations issuing inventive explanations to avoid social engineering testing.

Some Miscellaneous Issues

On page 2, in the seventh bullet down in the Terminology section, we have the following.

National Vulnerability Database (NVD): The U.S. government repository of standards based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance (e.g., FISMA).”

I know they just grabbed the description from the National Vulnerability Database (NVD) Web site, but that does not mean that it is appropriate for use when discussing penetration testing. The Common Vulnerabilities and Exposures (CVE) database maintained by the MITRE Corporation would have been the better reference, since the NVD uses the CVE as its catalog of vulnerabilities.

I have no idea how the Federal Information Security Management Act (FISMA) ties into the NVD. If they had truly pulled the content exactly from the NVD Web site, they would have grabbed the fact that it ties to the Security Content Automation Protocol (SCAP), not FISMA.

My last comment on this topic is regarding the claim that the NVD (or the CVE) enables automation of vulnerability management. Had they brought in the reference to SCAP, I might have had an easier time going along with things. The NVD and CVE do not contain the actual exploits; they are merely a catalog of the vulnerabilities. As a result, I question the value of either the NVD or the CVE in automating vulnerability management. Yes, the CVE catalogs all of the vulnerabilities, but that cataloging is its value. In my opinion, crediting either of these with automating the vulnerability management process is a stretch. They are a foundation that allows for automation, but they do not in and of themselves actually enable automation.

Under 2.3.3 Web Applications on page 6, we have the following.

“It is common for an environment to host a web application that was not specifically coded for the organization such as commercial, off-the-shelf web-mail interfaces, document-sharing tools, file-transfer services, network-device administrative interfaces, etc. In these instances, the web application does not typically need an application-layer penetration test as the entity is not responsible for the source code of this type of software. Instead, the tester should perform a network-layer test and ensure the software was implemented, configured, and is currently being maintained in a secure manner (disabling or uninstalling unused services, blocking unused ports, applying current updates, etc.).”

I can see it now: organizations arguing over the pen testing of IBM WebSphere, Oracle eCommerce or similar Web application frameworks because they are a package. These are frameworks on which you must still write your own application, but penetration testers will get into arguments over that aspect.

A Few Good Things

Yes, I have complaints, but there are some good things that are contained in this supplement.

Probably the best thing about this information supplement is the checklist at the end of the document for evaluating penetration testing reports. The Council should have a similar item for vulnerability scanning reports as well.

The discussion of the penetration testing methodology in section 4 of the document is a good read for both penetration testers and the people that contract for penetration testing services. But keep in mind it is a framework and not the entire rule/procedure set for conducting a proper penetration test. Yes, there are some good details in areas, but it is not the detailed roadmap that some people will portray it as.

Section 3 has a good discussion on how to evaluate the qualifications of a penetration tester. It was nice to see that while they document the common certifications a penetration tester may hold, they caution readers that certifications alone do not make a penetration tester qualified.

The case studies in section 6 probably do more to clarify than the previous five sections. They are good examples of properly conducted penetration tests.

Finally, there is a good discussion on the definition of a “significant change”. However, I think this discussion, as well as a discussion about the words “periodic” and “periodically”, should be placed in a discussion about risk management and assessment rather than brought up here. I wrote a post on this topic a while back to provide some guidance to people, and it was nice to see that my thoughts jibe with what they wrote in the supplement.

What Was Missed

The biggest item I think that was missed was how to handle the penetration testing of huge environments. Case study 6.3 discusses the concept of sampling somewhat, but it deals with a retailer with only six locations, not thousands of locations. There are also veiled references to sampling in section 2.3.4 in discussing a separate testing environment. But there is no deliberate and complete discussion on this very important topic.

Retailers such as Wal-Mart, Target and Home Depot have more than a thousand locations with 15 or more checkout lanes each, meaning there are, at a minimum, 15,000 devices out in the field that technically need to be penetration tested. How does an organization penetration test such a large number of devices annually? The answer is that they do not test them all. It is physically impossible. However, this information supplement does not address that situation or provide guidance as to what the Council deems acceptable.

While I use some of the largest retailers in the world as my example, it is not just them as there are mid-sized retailers as well as hospitality organizations that face this same issue. I had hoped that how to approach these situations would have been explicitly addressed in greater detail but it was not.

The writers also missed a huge topic regarding social engineering testing. Unlike vulnerability scanning and penetration testing, in my experience, it is virtually impossible for any organization to “pass” social engineering testing. So what would be considered “passing” in a social engineering test? That is not discussed but should have been. Since this document broaches the subject, it is probably a good topic to propose for a SIG for this year’s SIG recommendations so we get social engineering covered in an information supplement rather than as a sidebar to penetration testing.

It is not that this information supplement is an entirely bad document. But the advice it provides needs to be taken with a bit of caution. As usual, the shortcomings will be leveraged by all of the “check boxers” to do as little as possible so that you get that box checked for 11.3.

31 Mar 15

PCI Council Calls It Quits

Citing that the payment channels are now secure, the PCI Security Standards Council announced this morning in a press release that they were closing down effective June 30, 2015.

20150401-PCISSCDisbanding-AprilFoolLetter

A large sigh of relief was heard worldwide upon the announcement.

Have a happy April 1st otherwise known as April Fools’ Day.

14 Mar 15

The 2015 Verizon PCI Report

A lot has been written about this year’s Verizon PCI Compliance Report particularly about how 80% of organizations cannot maintain their compliance. And at the very end of the report are a number of issues raised by Verizon regarding why maintaining compliance is so difficult for most organizations. It is those issues that I would like to discuss.

Scale and Complexity of Requirements

“I just don’t understand why this ERP upgrade is going to take 18 months to complete. Can’t we just put the DVD in the drive and upgrade it like Microsoft Office?” – Anonymous Executive to IT Management

The same could be said about any security framework. If organizations are struggling with PCI compliance, imagine how they are struggling with HIPAA, FISMA or ISO 27K compliance. Compliance with any of the security frameworks is not easy.

I disagree with Verizon’s claim that it is related to the fact that most organizations do not know the PCI DSS. After six years and three versions, I rarely run into an organization today that does not have a basic, overall understanding of the PCI DSS. These organizations may have some interesting ideas on what sections and requirements of the DSS mean, but they have definitely studied it and read about it. Therefore the idea that organizations are ignorant on the subject is far from the truth in my experience.

In my opinion, where the problem lies is that most organizations have not truly managed their technology environments, thanks to the disruptions of mergers and acquisitions, partially implemented applications, bring your own device (BYOD), the Cloud and the plethora of other changes that complicate organizations. Today, IT is a very important part of any organization, but it is not managed like it was in the “good old days”. There are too many stakeholders, and the consumerization of technology has not helped the situation by making everyone an IT “expert”.

Most organization’s IT operations these days are a hodge-podge of technologies, applications and networks. I would equate it to the technological equivalent of a house’s attic and garage combined. We all know we should clean and straighten them out, but that project always sits on the back burner as there are other, more important or fun things to do.

As a result, for most organizations, there is just no easy way to simplify, segregate and isolate cardholder data (CHD) and comply with the PCI DSS without making the environment even more complex. Starting over is not an option for a lot of organizations.

That said, I have encountered a few very brave organizations that have done just that: started over. Management at these organizations came to the realization that fixing the problem was too complex and expensive and that starting over was the cheaper, safer and easier way to go.

Uncertainty about Scope and Impact

“I don’t know much about PCI, but I do know my scope.” – Anonymous Manager to QSA

When application developers cannot explain how their applications work on a technical level. When anyone in any department can be in the IT business. When security personnel are order takers for firewall configuration changes reviewed and approved by management that have no clue as to the implications of those changes. When network people are providing a communications utility for communications traffic but have no idea how that traffic traverses the network.

Is it any wonder we have no idea how to scope a PCI assessment?

But there are larger problems as to why scoping is difficult. The root cause of why scoping is such an issue is that everyone’s risk tolerance is different. I drive race cars at very obscene speeds on race tracks (mostly) that I am sure a lot of people would view as insane. However, I think that people that skydive and do rock climbing are the insane ones. All of this points to everyone’s acceptance and avoidance of risk based on their own views.

There is a sidebar in the Verizon report calling on the PCI SSC to provide guidance about scoping. Good luck with that. The Council had a scoping SIG a number of years ago that imploded due to the aforementioned issues with everyone’s risk tolerance. The result was a small band of people from the SIG publishing the Open PCI DSS Scoping Toolkit. The Open PCI DSS Scoping Toolkit is not perfect, but it provides a framework to have an intelligent discussion about how to go about scoping and determine what is in scope and why.

The key to solving the scoping issue resides with the organization, not their QSA, acquiring bank or any other external entity. Organizations need to use the Open PCI DSS Scoping Toolkit to come up with their scoping framework and definitions. Once that has been agreed, the organization needs to map out their applications and networks to determine their true scope. This is where tools from vendors such as Tufin, FireMon, SolarWinds and the like can provide assistance by documenting the network and then simulating data flows over it.

With that approach, it is incumbent on QSAs and other auditors to accept these definitions for their assessment unless there is some significant or gross error in the organization’s definitions. This will address the complaint that organizations have with QSAs. How often have we heard something such as, “The last QSA told us this was compliant.” If we all play by the same risk definitions the client has provided, then statements like that should go away.

Once an organization truly understands and has defined its scope, it can then understand the impact of existing operations and any changes.

The Compliance Cycle

This is what the Council is attempting to address with business as usual (BAU). The idea is that with security practices and monitoring embedded within an organization’s operations, security issues can be quickly identified and addressed before they become serious.

However, for this to work, organizations need to know their scope as well as how their IT environment actually works. Without that knowledge, embedding the PCI DSS into the organization is a futile exercise.

Lack of Resources

Every organization is running “lean and mean” these days. Cost control is king. As a result, resources are stretched, sometimes to the point that any additional activities just cannot be accommodated without hiring someone. And hiring is not allowed. So implementing BAU is not going to go well if it goes at all.

On the information security front, finding qualified people is nearly impossible, even for consultancies. Organizations are finding that most information security professionals are heading to consultancies because the pay is better. Since security is hard on both the mind and the body, most people want to be compensated as much as possible for their efforts. As a result, most organizations cannot pay for in-house security resources. And then, even if they do ante up, typically the person that takes the position either gets bored once they fix everything, or gets frustrated when the organization refuses to make the changes required to ensure or enhance security.

Enter the managed security services provider or MSSP. The concept is that the MSSP provides the security talent at a more reasonable price yet organizations get the quality personnel needed to enhance and stabilize their security.

Where this goes wrong is that the MSSP and the customer are not on the same page as to each other’s responsibilities. This comes from a mixture of salespeople overpromising as well as prospective customers hearing what they want to hear. Never mind that it is all documented in a contract.

To address this situation, the PCI SSC has come up with a new requirement, 12.8.5, which states:

“Verify the entity maintains information about which PCI DSS requirements are managed by each service provider, and which are managed by the entity.”

Under the v3 Attestation of Compliance (AOC) form, this will not be as big a problem for an organization to maintain. However, if an organization has a lot of service providers and/or the service providers have v2 AOCs, this could be a very daunting task.
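One way to satisfy 12.8.5 is to keep a simple responsibility matrix per service provider. The sketch below is hypothetical: the provider names, requirement assignments and AOC details are made up, and the point is only to show the kind of record an organization might maintain and hand to its QSA.

```python
# Hypothetical responsibility matrix for requirement 12.8.5. All names, requirement
# assignments and AOC details are illustrative only.
responsibility_matrix = {
    "Hosting Provider LLC": {
        "managed_by_provider": ["9.x physical security", "12.10 incident response (facility)"],
        "managed_by_entity": ["1.x firewall rules", "10.x logging and monitoring"],
        "aoc_version": "v3",  # v3 AOCs break out responsibilities, so this is easier to maintain
    },
    "Managed Security Services Inc.": {
        "managed_by_provider": ["10.6 log review", "11.4 IDS/IPS monitoring"],
        "managed_by_entity": ["10.5 log retention"],
        "aoc_version": "v2",  # v2 AOCs do not break out responsibilities, hence the extra work
    },
}
```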

Lack of Insight in Existing Business Processes

“I’ve only been in this position for [2, 3 or 4] months. So I’m not fully up to speed on everything we do.” – Anonymous Manager to QSA

“I’d give you an organization chart, but it would be out of date by the time I printed it.” – Anonymous Human Resources Manager to QSA

In today’s fast changing business world, people get shuffled out of departments and divisions faster than people can manage the changes. As a result, finding anyone with any sort of insight into an organization’s business processes can be extremely difficult, if not impossible.

Then we go back to my earlier comment about the lack of IT management. With the advent of the Cloud, some business divisions and departments have totally sidestepped the formal IT organization and set up their own operations in the Cloud. Did they know what they were doing? No! But that was beside the point; they at least now have IT solutions, never mind whether they are secure or implemented properly. The only way to find these rogue operations is to quiz everyone in the organization about how they operate and what they use to operate.

Even then, I have run into situations where a new payment channel pops out of the woodwork at the last moment. It becomes next year’s assessment issue; otherwise we would never get the one we are currently doing out the door.

Misplaced Confidence in Existing Information Security Maturity

A lot of organizations that have been doing IT for years and years get caught in this trap. Just because you have been doing IT for an eternity does not mean that you have been doing it right for the same amount of time or that you are doing it correctly now.

In a lot of IT organizations it is an unfortunate fact of life that areas such as special projects, business continuity planning or information security were used as those “safe” places to put the former IT Vice President or Manager out to pasture so they could retire. It did not matter if the individual could handle the job; it was a place to park someone and provide a gentle way out of the organization.

A rare few individuals made the transition and actually took up the challenge of mastering their new responsibilities. However, the vast majority just checked out, collected their pay check and then retired. This left the organization with a very immature security operation compared to the rest of IT’s operations. Add into the mix the changing landscape of IT with business divisions and departments doing their own thing unbeknownst to anyone and you can see how the maturity of information security could be easily misunderstood.

Then along comes the QSA to do the PCI gap analysis and it all comes to a head as the organization comes to the rude awakening that all is not as good as they thought and that significant gaps exist. To add insult to injury, the organization finds that fixing the gaps is going to take a lot longer than the 90 days they had set aside for that activity so that they could get their Report On Compliance (ROC) done in the same year.

The Verizon report is a great read and provides a lot of insights. Everyone should get a copy and read it, take it to heart and address your organization’s security shortcomings.

21 Feb 15

Incidental Contact

I have had a number of questions recently regarding how to deal with the occasional customer that sends cardholder data (CHD) or sensitive authentication data (SAD) to the merchant via email or instant messaging in blatant disregard for security.

Most people point to requirement 4.2 in the PCI DSS v3 and say it is not allowed for PCI compliance.  However, that is wrong.  Requirement 4.2 states:

“Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, chat, etc.).”

The operative word is “send”.  Requirement 4.2 does not say a merchant or service provider cannot receive PANs by end-user messaging technologies, only that they cannot send them by those same messaging technologies.

The Council has always recognized that there will always be a small percentage of people who ignore security and send their CHD/SAD via any number of insecure methods, all in the name of expediency or convenience.  As a result, the PCI DSS has been structured to allow for those occurrences, something a lot of QSAs refer to as “incidental contact”.  What is important to a QSA is how you handle incidental contact.

The first important point to make is that once CHD/SAD is received via an end-user messaging technology, the merchant or service provider cannot then forward the information on using email or similar technologies.  The merchant or service provider must break the chain of that communication as soon as possible.

Security purists will point to the fact that deleting such messages from their sources is not secure.  In some cases a message could sit overnight and therefore end up on the backup tapes of some technologies.  While this is all true, we are not talking about a consistent flow of CHD/SAD; we are talking about an occasional occurrence.  Organizations will have to accept the risk that their end-user messaging systems will have some CHD/SAD in them but that the amount is trivial because of how they deal with such occurrences.  If your organization is not willing to accept this risk, then you will have to come up with an approach that allows you to stop such occurrences.

The other key point to make is that incidental contact does not necessarily bring the end-user messaging technology into scope for PCI compliance.  In my opinion, what a merchant or service provider needs to prove to their QSA is that such occurrences are not condoned by the organization (i.e., by policy, such exchanges are not recommended), that employees are trained to handle such exchanges securely, and that the exchanges occur only occasionally.  The term “occasionally” is the tough one and is up to the organization to define for the QSA.  I have dealt with large organizations that could receive around 50 such messages a day on bad days, but the annual total of incidental contact was well below 1% of the total number of transactions.  The rule of thumb that I use is that as long as the volume of transactions received over end-user messaging never exceeds 1% of the total, I consider it incidental contact.  However, I could see acceptable arguments for a 2% threshold based on the type of customers the organization has.  Going higher than that value would, in my opinion, be too much.
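The rule of thumb above is easy to check mechanically. The snippet below simply expresses that judgment call in code; the 1% figure is my own threshold, not a Council-defined number, and the volumes in the example are made up.

```python
def is_incidental(messages_with_chd_per_year, total_transactions_per_year, threshold=0.01):
    """True if end-user messaging receipts of CHD/SAD stay under the chosen threshold."""
    return messages_with_chd_per_year / total_transactions_per_year <= threshold

# Example: roughly 50 bad-day messages over 200 business days against 5 million transactions.
print(is_incidental(50 * 200, 5_000_000))   # True: 0.2% is well under the 1% rule of thumb
```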

With that stated, what is an organization to do with such messages?

Some organizations prefer to not act on any end-user messaging that contains CHD/SAD.  They prefer to record the sender’s communication account information, delete the message and then send a message back to the sender explaining that they cannot accept CHD/SAD through the communication method and tell the sender to use one of their approved methods for communicating CHD/SAD.

Other organizations are all about customer service and will reluctantly accept such communications.  They will print out the communication and delete the original message.  Once they have processed the transaction, they redact the CHD/SAD, take a copy of the redacted original and then securely destroy the original.  I recommend redaction using a Sharpie marker or similar.  The reason for retaining only a copy of the redacted original is that, when the redacted original is held up to a light, the redacted digits can still be determined, which is not the case with the copy.
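For organizations that handle this digitally rather than on paper, the same redaction idea can be sketched in code: find candidate card numbers in the received message, confirm them with a Luhn check, and keep only the last four digits before anything is stored or forwarded. This is a simplified illustration; the regular expression and masking rules are assumptions, not a complete data loss prevention solution.

```python
import re

# Candidate PANs: 13 to 19 digits, optionally separated by spaces or hyphens.
CANDIDATE_PAN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn check over a string of digits."""
    digits = [int(d) for d in number[::-1]]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def redact_pans(text: str) -> str:
    """Mask every Luhn-valid candidate PAN, keeping only the last four digits."""
    def mask(match):
        digits = re.sub(r"\D", "", match.group())
        if luhn_valid(digits):
            return "*" * (len(digits) - 4) + digits[-4:]
        return match.group()   # not a card number; leave the text untouched
    return CANDIDATE_PAN.sub(mask, text)

print(redact_pans("My card is 4111 1111 1111 1111, exp 12/25"))
```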

Some organizations will use the transaction confirmation process as an opportunity to remind their customer that the sending of CHD/SAD via the end-user messaging technology should be avoided in the future.

We live in an imperfect world where people are not necessarily as security conscious as the world sometimes demands.  As a result, merchants and service providers need to be flexible in how they approach situations where their customers communicate with them through insecure channels.  Hopefully I have given you some ideas as to how to approach these situations and deal with them in as secure a manner as possible.




