Posts Tagged ‘PCI DSS’



27 Mar 15

PCI SWOT Analysis

SWOT – strengths, weaknesses, opportunities and threats

I had someone ask me about my thoughts on this sort of analysis of the PCI DSS. While these comments are PCI focused, I found that they actually apply to all security frameworks.

Strengths

The biggest strength of any security framework, PCI DSS included, is that they are all based on the “best practices” of a wide variety of leading experts and organizations. Essentially, security frameworks are the shared knowledge base of what it takes to have basic security. We talk today about sharing breach information better, potentially in near real time, but security frameworks are the original method of sharing such information.

Weaknesses

Unfortunately, I see a number of weaknesses with security frameworks.

The largest weakness I see with security frameworks is that most people, including a lot of security professionals, seem to believe that complying with the framework is all it takes to be secure. With the PCI DSS, a lot of this misinformation can be laid at the feet of the card brands. It was the card brands that originally marketed the PCI DSS as the “be all, end all” for securing the payment process.

The unfortunate fact of life for security frameworks is that they only minimize and manage security risks; they rarely eliminate them. Therefore, even following the PCI DSS to the letter is no guarantee that an organization cannot be breached. Yet this concept of risk minimization and risk management, and the fact that security is never perfect, consistently gets missed by executives. So when the inevitable breach occurs, executives go after the security people for supposedly misleading them.

Another area of weakness is the time it takes to update a framework. In October 2014, the National Institute of Standards and Technology (NIST) issued a bulletin on secure sockets layer (SSL) indicating that they had found a flaw in the protocol and no longer considered the protocol secure. A few weeks later the Internet was introduced to POODLE and SSL was declared insecure. It took a few months for the PCI SSC to react to this and officially declare that SSL was no longer to be relied upon for secure communications. It took vulnerability scanners almost a month to begin flagging SSL implementations as high vulnerabilities because the CVE had not yet been updated. And we were recently informed that it will be April at the earliest before we get the latest version of the PCI DSS. In the meantime, all of this administrivia did not stop attackers from using POODLE to their advantage.

The final weakness I see with security frameworks is that organizations find it impossible to execute them consistently at near 100%, 24×7. In theory, the PCI DSS will provide reasonable security against all but the most dedicated attacks, such as advanced persistent threats (APT). For an organization to achieve basic security, it would have to execute the requirements of the PCI DSS at 95%+ and would have to remediate any issues within a few days. Unfortunately, as we have seen in the recently released Merchant Acquirers’ Committee study, merchants are typically only compliant with the PCI DSS between 39% and 64% of the time – far from 95%+. Verizon’s recently released PCI report backs this up with its findings. The bottom line is that most organizations lack the discipline to execute any security framework consistently enough to achieve basic information security.

Opportunities

The biggest opportunity I see for the PCI DSS is that it gives organizations the impetus to simplify their environments. The biggest reason for the failure to execute the PCI DSS consistently is that a lot of organizations have technology environments that mimic a Rube Goldberg cartoon. Only by simplifying that environment will an organization have a reasonable chance of securing it.

Another opportunity this gives organizations is a reason to enhance their security operations. Most organizations run bare-bones security operations, no differently than other areas of the business. However, what PCI compliance assessments typically point out is that those security operations are grossly understaffed and not capable of ensuring an organization maintains its compliance at that 95%+ level.

Related to these two opportunities is what the PCI SSC calls business as usual (BAU). BAU is the embedding of the relevant PCI requirements into an organization’s business processes so that non-compliance is identified and rectified as soon as possible. BAU is primarily designed to address the execution weakness but can also have a significant effect on the other weaknesses.

Finally, the last opportunity is to address the failings of an organization’s security awareness program. Organizations eventually come to the realization that all it takes to defeat all of their expensive security technology is human error. The only way to address human error is extensive security awareness training. No one likes this, but in the end it is the only control that remains once you have implemented all of the requisite security technology.

Threats

The obvious threat that will never go away is the attackers. As long as we have our interconnected and networked world, attackers will continue their attacks.

The final threat is complacency. A lot of organizations think that once they achieve PCI compliance their work is done, and that could not be further from the truth. Security is a journey, not something you achieve and then move on from to the next issue. The reason is that no organization is static. Therefore security must constantly evolve and change to address organizational change.

There are likely even more items that could be added to each of these categories. However, in my humble opinion, these are the key points.

21 Feb 15

Incidental Contact

I have had a number of questions recently regarding how to deal with the occasional customer that sends cardholder data (CHD) or sensitive authentication data (SAD) to the merchant via email or instant messaging in blatant disregard to security.

Most people point to requirement 4.2 in the PCI DSS v3 and say such contact is not allowed for PCI compliance.  However, that is wrong.  Requirement 4.2 states:

“Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, chat, etc.).”

The operative word is “send”.  Requirement 4.2 does not say a merchant or service provider cannot receive PANs by end-user messaging technologies, only that they cannot send them by those same messaging technologies.
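If you want to enforce the “send” side of requirement 4.2 yourself, a DLP-style filter on outbound messages is one approach.  Here is a minimal sketch in Python; the regex, the contains_pan() helper and the 13 to 19 digit range are my own illustrative choices, not anything the PCI DSS prescribes.

```python
import re

# Candidate PANs: 13-19 digits, possibly separated by spaces or dashes.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn check digit validation - the standard test for a plausible PAN."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def contains_pan(text: str) -> bool:
    """True if the message body appears to contain a primary account number."""
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            return True
    return False

# Example: quarantine outbound mail that trips the filter.
body = "Sure, just charge it to 4111 1111 1111 1111. Thanks!"
if contains_pan(body):
    print("Message blocked: possible PAN detected")
```

A filter like this on the outbound mail gateway keeps your own staff from replying to a customer with the PAN still quoted in the message body, which is the easiest way to violate requirement 4.2 after incidental contact.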

The Council has always recognized that there will always be a small percentage of people who ignore security and send their CHD/SAD via any number of insecure methods, all in the name of expediency or convenience.  As a result, the PCI DSS has been structured to allow for those occurrences, something a lot of QSAs refer to as “incidental contact”.  What is important to a QSA is how you handle incidental contact.

The first important point to make is that once CHD/SAD is received via an end-user messaging technology, the merchant or service provider cannot then forward the information on using email or similar technologies.  The merchant or service provider must break the chain of that communication as soon as possible.

Security purists will point to the fact that deleting such messages from their sources is not secure.  In some cases a message could exist overnight and therefore end up on the backup tapes of some technologies.  While this is all true, we are not talking about a consistent flow of CHD/SAD, we are talking about an occasional occurrence.  Organizations will have to accept the risk that their end-user messaging systems will have some CHD/SAD in them but that the amount is trivial because of how they deal with such occurrences.  If your organization is not willing to accept this risk, then you will have to come up with an approach that allows you to stop such occurrences.

The other key point to make is that incidental contact does not necessarily bring the end-user messaging technology into scope for PCI compliance.  In my opinion, what a merchant or service provider needs to prove to their QSA is that such occurrences are not condoned by the organization (i.e., by policy, such exchanges are discouraged), that employees are trained to handle such exchanges securely, and that the exchanges occur only occasionally.  The term “occasionally” is the tough one and is up to the organization to define for the QSA.  I have dealt with large organizations that could receive around 50 such messages a day on bad days, but the annual total of incidental contact was well below 1% of the total number of transactions.  The rule of thumb I use is that as long as the volume of transactions received over end-user messaging never exceeds 1% of the total, I consider that incidental contact.  I could see acceptable arguments for a 2% threshold based on the type of customers of the organization.  However, going higher than that would, in my opinion, be too great.
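To make that rule of thumb concrete, the arithmetic is trivial.  A sketch, where the 1% default is just my own threshold from the paragraph above, not a Council-defined number:

```python
def incidental_contact_ok(messaging_txns: int, total_txns: int,
                          threshold: float = 0.01) -> bool:
    """True if CHD/SAD received over end-user messaging stays under the
    rule-of-thumb share of all transactions (1% by default)."""
    return messaging_txns / total_txns < threshold

# Even ~50 bad-day messages can be incidental for a high-volume merchant:
print(incidental_contact_ok(messaging_txns=12_000, total_txns=2_500_000))  # True
```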

With that stated, what is an organization to do with such messages?

Some organizations prefer to not act on any end-user messaging that contains CHD/SAD.  They prefer to record the sender’s communication account information, delete the message and then send a message back to the sender explaining that they cannot accept CHD/SAD through the communication method and tell the sender to use one of their approved methods for communicating CHD/SAD.

Other organizations are all about customer service and will reluctantly accept such communications.  They will print out the communication and delete the original message.  Once they have processed the transaction, they redact the CHD/SAD, take a copy of the redacted original and then securely destroy the original.  I recommend redaction using a Sharpie marker or similar.  The reason for retaining a copy rather than the redacted original is that, when the redacted original is held up to a light, the redacted digits can still be determined; a copy does not have that problem.

Some organizations will use the transaction confirmation process as an opportunity to remind their customer that the sending of CHD/SAD via the end-user messaging technology should be avoided in the future.

We live in an imperfect world where people are not necessarily as security conscious as the world sometimes demands.  As a result, merchants and service providers need to be flexible in how they approach situations where their customers communicate with them through insecure channels.  Hopefully I have given you some ideas as to how to approach these situations and deal with them in as secure a manner as possible.

18 Feb 15

Council Surveys QSAs On SSL

This message popped into my inbox late yesterday.

[Screenshot: PCI SSC email message]

The survey in question contains the following questions.

[Screenshot: PCI SSC survey questions]

All of my clients have gotten rid of SSL on their public-facing Web sites.

The dilemma we have is that while SSL is dead, it is baked into so many products and appliances.  My clients are therefore stuck with appliances and software products that have SSL hard coded into them.  As a result, they will be dependent on their vendors to convert to TLS.

That said, what is the risk of using SSL internally?  Not a good practice, but truthfully, what is the risk?

In my opinion, using SSL internally for the next 12 to 24 months would not be the end of the world as long as it does not become a significant attack vector.

It will be interesting to hear the results of this survey.

15 Feb 15

New PCI Compliance Study

Dr. Branden Williams and the Merchant Acquirers’ Committee (MAC) have issued a new report on PCI compliance and the impact of breaches on merchants and MAC members.  I had the pleasure of getting a preview of the survey results from Dr. Williams a few weeks before publication.  Based on some of the online chatter I have seen, the study is being both applauded and chastised for its results.

First, who is the MAC?

“The MAC community includes acquirers/merchant banks, processors, independent sales organizations (ISOs), and others. MAC membership exceeds 500 firms.”

What was the response rate for the study?

“Approximately 20% of MAC members participated in the survey (although not all survey responses could be used in the analysis due to incomplete responses).”

While 20% might seem an awfully low response rate for a survey, for those of us that conduct surveys, 20% is actually quite good.

One set of facts missing from the survey that I felt was important: how many merchants do the 100+ survey respondents cover, and what is their breakdown by merchant level?  Branden very kindly ran a query and sent me back the following.

Level 1 Merchants:          73
Level 2 Merchants:         153
Level 3 Merchants:       3,832
Level 4 Merchants:   1,140,623
Total:               1,144,681

Based on this information, I would say that it reasonably represents the breakdown of merchant levels out in the real world.

The biggest finding of the study and what most people are pointing to is the low compliance percentages across the MAC members’ merchants.  Level 1, 2 and 3 merchants are only compliant around 67% to 69% of the time during their assessments.  However, most troubling is that Level 4 merchants are only 39% compliant.
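Weighting those compliance rates by the merchant counts above shows just how much the Level 4 population dominates.  A back-of-the-envelope calculation, assuming the 68% midpoint for Levels 1 through 3:

```python
# Merchant counts from the MAC study, paired with reported compliance rates.
levels = {
    "Level 1": (73, 0.68),         # study reports ~67-69%; midpoint assumed
    "Level 2": (153, 0.68),
    "Level 3": (3_832, 0.68),
    "Level 4": (1_140_623, 0.39),  # the troubling number
}

total = sum(count for count, _ in levels.values())
compliant = sum(count * rate for count, rate in levels.values())
print(f"Overall: {compliant / total:.1%} of merchants compliant")  # ~39.1%
```

In other words, because Level 4 merchants make up 99.6% of the population, the overall compliance picture is essentially the Level 4 number.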

Depending on the merchant level, these figures are not even close to what Visa last reported back in 2011.  Back then, Visa was stating that 98% of Level 1 merchants were reported as compliant.  Level 2 merchants were reported to be at 91% compliance.  Level 3 merchants were reported at 57% compliance.  As is Visa’s practice, it only reported that Level 4 merchants were at a “moderate” level of compliance.

So how do we square the difference in compliance percentages between the MAC and Visa numbers?  We do not, because doing so is comparing apples to oranges.

The purpose of the study was to examine breaches and their impact on merchants.  As such, the study’s numbers indicate not only PCI compliance but also the number of organizations breached that were deemed PCI compliant, hence the much lower PCI compliance rates.

Visa’s numbers are based on filings of PCI Attestation Of Compliance (AOC) forms with processors and acquiring banks, who then report those statistics up to Visa.  Visa, or any card brand for that matter, has never shared the complete picture of how many merchants were breached despite having filed an AOC indicating they were PCI compliant.  As a result, the figures posted by Visa are not representative of the study’s results and vice versa.

I think this study provides a much better look into PCI compliance than we have had from the card brands.  It shows that merchants have a significant amount of work to do maintaining PCI compliance.  I would highly recommend you download a copy of the report and share it with your management.

07 Feb 15

SSL Is Officially Declared Dead

On January 30, 2015, QSAs received the latest edition of the Council’s Assessor Newsletter.  Buried in that edition was the following statement.

“Notice: PCI DSS and PA-DSS v3.1 Revisions Coming

In order to address a few minor updates and clarifications and one impacting change, there will be a revision for PCI DSS and PA-DSS v3.0 in the very near future. The impacting change is related to several vulnerabilities in the SSL protocol. Because of this, no version of SSL meets PCI SSC’s definition of “strong cryptography,” and updates to the standards are needed to address this issue. (Highlighting emphasis added by the PCI Guru)

We are working with industry stakeholders to determine the impact and the best way to address the issue. While we do not have the final publication date, our goal is to keep you apprised of the progress and to provide you with advanced notification for these pending changes. We are also preparing several FAQs that will accompany release of the revised standards.

Should you have any questions, please contact your Program Manager.”

Because the announcement was titled as being about the coming v3.1 revisions to the PCI DSS and PA-DSS standards, I am sure a lot of QSAs missed this pronouncement.

Not that this should be a surprise to any QSA as the POODLE vulnerability effectively killed SSL.  The Council has now officially announced that SSL is no longer deemed to be strong cryptography.

Therefore, those of you still using SSL to secure transmissions containing cardholder data (CHD) need to stop that practice as soon as possible and convert to TLS or IPSec.
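If you want to verify that a given service really has stopped accepting SSL, a quick handshake probe is one way to check.  A sketch, assuming your local openssl binary was built with SSLv3 support (newer builds often drop the -ssl3 flag entirely), so treat it as illustrative:

```python
import subprocess

def accepts_sslv3(host: str, port: int = 443) -> bool:
    """Attempt an SSLv3-only handshake; openssl exits non-zero if it fails.

    Assumes an openssl binary compiled with SSLv3 support on the PATH.
    """
    result = subprocess.run(
        ["openssl", "s_client", "-connect", f"{host}:{port}", "-ssl3"],
        input=b"", capture_output=True, timeout=10,
    )
    return result.returncode == 0

# payments.example.com is a placeholder; point this at your own endpoints.
if accepts_sslv3("payments.example.com"):
    print("SSLv3 still accepted - remediate or press the vendor for a fix")
```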

UPDATE: On February 13, 2015, the PCI SSC issued an update to their original announcement in the Assessor Newsletter.

31 Jan 15

Merchant, Service Provider Or Both?

Apparently there are a lot of newcomers to the PCI compliance business who are asking bizarre questions regarding PCI.  One of the most common is whether their organization is a merchant, a service provider or both.

Merchant

According to the PCI DSS v3 Glossary, a merchant is defined as:

“For the purposes of the PCI DSS, a merchant is defined as any entity that accepts payment cards bearing the logos of any of the five members of PCI SSC (American Express, Discover, JCB, MasterCard or Visa) as payment for goods and/or services. Note that a merchant that accepts payment cards as payment for goods and/or services can also be a service provider, if the services sold result in storing, processing, or transmitting cardholder data on behalf of other merchants or service providers. For example, an ISP is a merchant that accepts payment cards for monthly billing, but also is a service provider if it hosts merchants as customers.”

One of the points that creates the most confusion is the note at the end of the merchant definition that it is possible for a merchant to also be a service provider.  A lot of people think that this is a black or white, either/or type of situation, which it is not.

The key to determining if your organization is a merchant is whether your organization signed a merchant agreement with a bank and has a merchant account with that bank.  If your organization did, then you are definitely a merchant.

Service Provider

Now let us talk about service providers.  In the same document, a service provider is defined as:

“Business entity that is not a payment brand, directly involved in the processing, storage, or transmission of cardholder data on behalf of another entity. This also includes companies that provide services that control or could impact the security of cardholder data. Examples include managed service providers that provide managed firewalls, IDS and other services as well as hosting providers and other entities. If an entity provides a service that involves only the provision of public network access—such as a telecommunications company providing just the communication link—the entity would not be considered a service provider for that service (although they may be considered a service provider for other services).”

The first thing to remember about service providers is that you can be tagged as a service provider without directly processing, storing or transmitting cardholder data (CHD) or sensitive authentication data (SAD).  We see this most often with organizations that provide managed security services (MSS).  In most cases, these organizations manage/monitor the devices that provide and/or secure the communications links.  As a result, these MSS providers can have access to unencrypted CHD/SAD whether they realize it or not.  If the MSS provider could come into contact with unencrypted CHD/SAD via the devices they manage, then they are in-scope for PCI compliance.

I can tell you from personal experience that service providers that are not directly processing, storing or transmitting CHD/SAD will push back and fight very hard to be ruled out of scope for PCI compliance.  It has gotten to the point that I have seen and heard of service providers taking customers to court for misrepresenting their business and to force the customer out of their service contract.  In the majority of the cases I am aware of, it was shown that the service provider was negligent in not explicitly asking whether or not the customer required PCI compliance.  So if you need to be PCI compliant, it is very important to make that clear to any service provider you are evaluating, just in case one or more of their services could come into contact with CHD/SAD.

Another way an organization can become a service provider is when they conduct card transactions on behalf of a third party.  The best example of this situation is with outsourced call centers.  While the call center might be conducting the card transactions on your systems, they are a third party that is processing and transmitting CHD/SAD through their workstations for your organization.  As a result, the call center is a service provider and is in-scope for PCI compliance.

Yet another way an organization can become a service provider is by conducting transactions through its own systems using a third party’s merchant account.  I have encountered this with call centers where the call center uses its own applications, but the merchant account used to process payments is not the call center’s; it is the merchant account of the call center’s customer.

Both?

Finally, there is the example from the merchant definition where the organization is both a merchant and a service provider.  As pointed out in the definition, this most commonly occurs with Internet service providers (ISP) and shared hosting providers that not only provide services for hosting a customer’s IT environment, but then accept cards as payment for those hosting services.  From the hosting perspective, these organizations are service providers and must comply with the PCI DSS for those services provided to their customers.  However, these organizations are also merchants because their customers can pay using a credit/debit card.

Some Closing Comments

Before I finish this post, I also want to add some comments regarding compliance reporting for service providers.

The first comment I would like to make is regarding reporting and compliance testing.  If you are a service provider, you only have the choice of a Self-Assessment Questionnaire (SAQ) D or a Report On Compliance (ROC).  If your organization processes, stores or transmits fewer than 300,000 card transactions, then you can complete either an SAQ D or a ROC.  If your organization processes, stores or transmits 300,000 or more card transactions, then you are required to do a ROC.

If you are an ISP, MSS or similar service provider that does not process, store or transmit CHD/SAD, then you will not have a transaction count and therefore will fall under the sub-300,000 transaction rule.

Why would an organization that can do an SAQ D do a ROC instead?  If an organization desires to be listed on the Visa Global Registry of Service Providers or the MasterCard PCI Compliant Service Provider list, then the service provider must do a ROC.  There are rules and fees for inclusion on these lists that each card brand’s Web site documents.  A knowledgeable QSA can help facilitate your listing on these sites as well as conduct the requisite ROC assessment.

A quick side note regarding Visa and service providers.  Visa is conducting a separate service provider inventory program that is outside of its Global Registry program.  This new inventory process has confused a lot of service providers and QSAs alike, including yours truly.  For about the last year or so, Visa has been “registering” all service providers in an attempt to create a complete inventory of them.  This inventory program has nothing to do with the Visa Global Registry, and being processed through it does not put an organization on the Visa Global Registry.

It is very important for service providers to know that the service provider Attestation Of Compliance (AOC) form is very different from the merchant version of the AOC.  The service provider AOC lists the services that were assessed.  This information is necessary for customers to know whether all of their services were assessed for PCI compliance.  If a service was missed, then the merchant is responsible for assessing that service for PCI compliance.  So it is very important that you ensure that all services provided to your customers that require PCI compliance are assessed for PCI compliance.

Then there are the numerous times I have received an AOC from a service provider only to find that it is a merchant AOC, not a service provider AOC.  With v3 of the PCI DSS, the Council has created separate SAQ D forms for merchants and service providers, which will hopefully cure some of this issue.  It is incumbent on service providers to make sure that when they sign the AOC it is a service provider AOC and all of the services are listed.  If not, then you need to go back to your QSA and get the right AOC form with the right information created.

And finally, my biggest pet peeve with service provider AOCs.  Some QSACs create these wonderful “Certificates Of PCI Compliance” that, while they look really nice, have no meaning to your customers and their QSAs.  No matter how many times the PCI SSC has stated that the only officially recognized document out of a PCI assessment is the AOC, I still encounter these certificates as “proof” of PCI compliance.  When asked to provide the AOC, I then get the indignant response that I should have everything I need.  In one case, I was even told I could not possibly be a QSA because I did not recognize the certificate as proof of compliance.

As I stated earlier, the service provider AOC is required to ensure that all services provided were assessed, and QSAs are required to have copies of all service provider AOCs in order to show that all third parties have been officially assessed for PCI compliance.  No AOC means that the service provider is not considered PCI compliant and must be assessed as part of the customer’s PCI assessment.

I hope we are all now on the same page regarding the concepts of a merchant and a service provider.

26 Dec 14

PCI Compliance Is Getting More Rigorous

When Visa and MasterCard trotted out their security standards back in 2002 and 2003, the large eCommerce merchants that got to see them complained that they were too much.  Fast forward more than a decade and we still hear complaints that the PCI standards are too much.  Well if you are still complaining, things are about to get worse with version 3.  And the ever more consistent rumor is that business as usual (BAU) will be coming in v4.  If that comes to pass, I know some people that will likely jump out of windows as they did in the 1929 stock market crash.

So how is the PCI DSS getting more rigorous?

I spent some time analyzing the PCI DSS v3 as I did with v2.  From comparing v3 to v2, here are some of my findings.

  • There is an overall 11% increase in the number of tests in v3 versus v2.
  • Tests requiring some form of documentation have increased a whopping 83%. Not that 83% more documents will be required, just that there are 83% more tests where documentation is reviewed.  I will have more on this later in the post.
  • The number of tests requiring interviews is up 48%. Again, not necessarily involving more people, just more questions to be asked and answered.
  • Tests requiring an observation of a process or activity are up 31%. As with the others, this is not a wholesale jump in new observations, but more an increase in things that must be observed.
  • Tests involving sampling are up 33%. This actually is an increase in the number of things sampled, but not all of the 33% increase is new samples.  Much of this increase is the result of clarifications from the Council requiring QSAs to explain what was sampled, something that was implied in v2 but not explicitly requested.

Speaking of sampling, not only are the number of tests involving sampling increasing but the PCI SSC has told all of the QSAs that the days of “poor” or “inappropriate” sampling are over.  I have seen Reports On Compliance where QSAs have literally used a sample of one out of thousands under the rationale of “they are all configured the same”.  If you only tested one, how can you even draw the conclusion that the remaining thousands truly are the same?  You cannot and that is a big reason why the Council is getting picky on sampling.

The Council is also tired of incomplete samples.  The example most often quoted is an environment with 100 servers, half Windows-based and half Red Hat Linux.  A lot of QSAs were stopping there, sampling say five of each, and calling their work complete.  Wrong!

What the Council is pointing out is that the QSA must go deeper in some cases when choosing their samples.  In the example above, the QSA needs to know the function of those servers so that they sample them based on their function such as database server, directory server, application server, etc.  In addition, the Council is also saying that it may be necessary to consider the applications involved as well to ensure that sampling provides a more complete picture of the environment.  In an assessment involving multiple applications, it might be necessary to sample database and application servers used by each application and not just a random sample of servers.
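In sampling terms, what the Council is describing is stratification: group the population by every attribute that matters (OS, function, application) and draw from each stratum.  A minimal sketch, with a made-up inventory layout:

```python
import random
from collections import defaultdict

# Hypothetical inventory records: (hostname, os, function, application).
inventory = [
    ("db01", "RHEL", "database", "ecommerce"),
    ("db02", "RHEL", "database", "call center"),
    ("app01", "Windows", "application", "ecommerce"),
    ("app02", "Windows", "application", "call center"),
    ("dir01", "Windows", "directory", "shared"),
    # ...the rest of the estate
]

def stratified_sample(servers, per_stratum=2):
    """Sample from every (os, function, application) stratum so that no
    combination in the environment goes untested."""
    strata = defaultdict(list)
    for server in servers:
        strata[server[1:]].append(server)  # key on (os, function, application)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

for host, *attributes in stratified_sample(inventory):
    print(host, attributes)
```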

Finally, sample sizes might be higher for an entity’s first assessment or for the first assessment by a QSA following a prior QSA.  A higher sample size is warranted because all might not be as it is represented, and minimal sampling would likely not reveal any issues.  This is common in the financial audit industry in situations where a new auditor is coming into the organization or the operations of the organization have been under increased scrutiny by regulators, banks or their prior auditors.

I earlier stated that documentation testing was up 83% and that was related to more testing of the same documents already being collected.  That is not to say that the amount of documentation is not increasing.  Regarding the amount of documentation required for v3 versus v2, I am estimating a conservative increase of around 100%.  I have been hearing horror stories regarding the amount of documentation being requested for v3.  I would not be shocked if the amount of documentation a QSA requires is up by 150% to 200% in some instances, particularly those situations where the QSA was not necessarily collecting all of the relevant documentation they should have been collecting.  A lot of this increase is that document counts now include observations which were considered separately in v2.

Based on this information, you should not be shocked if your QSAC increases the fees it charges for assessing your PCI compliance under v3.  Someone has to conduct all of those tests and review all of the extra documentation generated.  Even QSACs that have been doing the right thing all along are seeing impacts from the increases in testing required by v3.  But it is definitely worse for those QSACs that were doing as little as possible to get an assessment done.  They are seeing the most impact from these changes, will likely find them highly onerous, and will have difficulty justifying the huge increases in professional fees required to cover their higher costs.  As a result, I would not be surprised if a number of QSACs stop doing PCI assessments because of the new requirements put on them.

But why are the changes occurring?

The primary reason is to minimize the “wiggle room” QSAs have in their testing so that assessments from one QSA to another are more consistent.  There has to be flexibility given to a QSA because no two organizations are alike.  In addition, what is compliant to one QSA can be non-compliant to another, even within the same QSAC.  That occurs because every individual has their own sense of risk acceptance and avoidance.  This issue can usually be taken out of the equation through discussion with the QSA and their superiors and, if necessary, development of mitigation strategies.

Under v2, a QSA that had a high risk tolerance could deem an organization compliant when the evidence would indicate that the organization is not compliant.  Or a QSA with a low risk tolerance could say one or more requirements are not in place in the same situation.  The new Reporting Template is an attempt to take the extremes out and reduce the wide swings in what is and is not compliant.  However, the new version of the PCI DSS does still allow some wiggle room for QSA/ISA judgment.

In addition to taking extremes in risk acceptance out of the assessment process, the Council is also trying to address the issue of QSAs judging organizations as PCI compliant when the QSA’s documentation does not support such a claim.  While the majority of QSAs thought this issue was addressed with the Reporting Instructions in v2, based on what the Council is telling us, it apparently was not.  So the Council is getting stricter and stricter in its guidance as to what is acceptable, through the language in the Reporting Template/Instructions as well as through QSA training.

Another reason for the rigor is the breaches that keep occurring.  Each breach supplies information that might need to be incorporated into the PCI DSS.  One of the best examples of this is requirement 8.5.1:

“Service providers with remote access to customer premises (for example, for support of POS systems or servers) must use a unique authentication credential (such as a password/phrase) for each customer.”

This new requirement is in response to the significant number of breaches where the attacker gained access to a merchant’s cardholder data by knowing the remote access credentials of a vendor that is supporting the merchant, such as those vendors that support point of sale (POS) solutions or card transaction processing.

Finally, the changes are also an attempt to circumvent some of the “legal” arguments that occur between the QSA and their client.  I am not the only QSA that has encountered clients that come up with very legal-like arguments and interpretations of what a particular test requires.  As a result, the Council has attempted to use wording in the tests and related testing guidance that reduces or even eliminates such interpretation arguments.  However, in my experience, clients that take this “legal” approach to their assessment are not going to stop.  They are not interested in security, they are interested in “checking a box”.  But the Council does no one any favors by only allowing QSAs and ISAs to read and have copies of the Reporting Template/Instructions, leaving clients in the dark until they go through their first PCI assessment under the new testing.  The Reporting Template should be a public document, not one to which only QSAs and ISAs have access.

21 Dec 14

Forensic Examinations And Facts

I am watching the news reports on the Sony breach and laughing at all of the “facts” that are being bandied about.  I want to use the Sony breach as a teachable moment and explain that the “facts” may not be as factual as represented by the media, forensic examiners or even the FBI.  I have done a number of forensic investigations and from my own experience there is a lot of effort required to prove conclusively that a particular device or actor is the actual attacker.

So let us take a look at the “evidence” we have at this point and see if the conclusions drawn should be treated as facts.

My first issue is how quickly the FBI and Mandiant have come out with the “fact” that North Korea is behind the attack.  According to the timelines I have seen, it was on November 21, 2014 when Sony was told by the attackers, the GOP (Guardians of Peace), that Sony had been hacked.  So in around three weeks the FBI and Mandiant have figured out, definitively, that it was North Korea behind the attack.  Granted, Mandiant and the Bureau could have been investigating this long before, but given the way the news reports were written, I have to believe that Sony had no idea anything was wrong until November 21.

Why do I find this timeline spurious?  It took Mandiant over three years to trace things back to the Chinese for their APT1 report last year, and we are to believe that the FBI has the skill and manpower to trace a “sophisticated attack” (Kevin Mandia’s words to Sony) back to North Korea?  I find that hard to believe.  Not because the Bureau and Mandiant are not skilled, but because it is just impossible to cram a year’s worth of investigation into a few weeks, regardless of the manpower tossed at the investigation.

In my own experience, I typically had ideas as to what happened and how within a few weeks, but then the difficult work of determining exactly how things went down began.  It can take months or even years to figure out an attack, if it is ever figured out.  It is why NTSB investigations of airplane crashes take at least a year before a report is issued.  Any attack may not be as simple or uncomplicated as you initially think.

“Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed. For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.”

We do know for a fact that hackers reuse other attackers’ code.  Why reinvent the wheel if you do not need to?  Hence the variants of all of the attack code to not only evade anti-virus but to also enhance or improve techniques and methods.  Just because there are similarities in some lines of code, algorithms, methods, etc., does not mean that it was the North Koreans that were the actual actors.  It just means that the attackers used code attributed to North Korea.  Key word, “attributed”.  To me, a far better piece of evidence would have been if the code had been written in Korean or a North Korean dialect.

“The FBI also observed significant overlap between the infrastructure used in this attack and other malicious cyber activity the U.S. government has previously linked directly to North Korea. For example, the FBI discovered that several Internet protocol (IP) addresses associated with known North Korean infrastructure communicated with IP addresses that were hardcoded into the data deletion malware used in this attack.”

Hard coded IP addresses are evidence?  So does that mean that everyone is guilty if I write their telephone number on a napkin and that turns up as evidence?  No.  A better piece of evidence would have been log data that actually ties those IP addresses to the data that was exfiltrated out of Sony.  Just because an IP address is hardcoded into an application does not necessarily mean that the endpoint was in fact involved; hackers regularly own other organizations’ and governments’ servers to obfuscate their actual location.  A hardcoded IP address just means that a device could be involved.

“Separately, the tools used in the SPE attack have similarities to a cyber attack in March of last year against South Korean banks and media outlets, which was carried out by North Korea.”

The attack on certain South Korean banks and TV stations in 2013 was never definitively pinned on North Korea; it was only suspected.  The prime piece of evidence was a Chinese IP address that was assumed to implicate North Korea.  So using the South Korean attack as though it was definitively proved to be done by North Korea is not a fact.

While I had some issues with the Mandiant report on China and their investigation methods, the information being offered as “facts” that North Korea is behind the Sony breach is positively appalling.  People want an answer immediately and so one is given, regardless of accuracy or even believability.  However, this is a technology issue, and so it is easy to feed the public supposed “facts” since only the true technology people in the world will know the difference.

Unfortunately a breach such as the one at Sony will take time, probably a lot of time, to unravel.  I would not be surprised if we end up with a lot of “suspicions” and “assumptions” when a final analysis is done and released, if we ever get a definitive answer.  The reason I believe that is that I do not think Sony had the kind of security implemented and working, given the amount of information that has supposedly been gathered by the attackers.  The other clue is that it was the attackers, not Sony’s own monitoring, that notified Sony on November 21 that it had been breached.

The key take away here is that forensic examinations very rarely prove WHO the bad actor was that caused the breach.  This is particularly true when the attacker is outside the organization.  There are just too many ways that an attacker can obfuscate their actual identity/location.

What forensic examinations do provide is a road map of improvements and enhancements in an organization’s security measures and procedures to minimize future attacks.  Note that I did not say “prevent” future attacks.  I use minimize because security is never an absolute.  Anyone with an extreme desire to attack an organization will do so regardless of how well your security program is constructed and executed.

Bruce Schneier points out this very fact about determined attackers in his post on the Sony breach.  I have always referred to this as the ‘98-2 Rule’.  Properly implemented and managed information security keeps 98% of attackers out.  However, it is the remaining 2% that are determined enough to figure out how to work around even the best security.  All any organization can do about that remaining 2% is to put controls in place so that when the 2% get through, they are detected as soon as possible and their impact minimized.  This is why security frameworks are so important: they provide organizations with guidance as to what it takes to have only the 2% to worry about.

Given the limited evidence provided thus far, could it be that this is all a sophisticated marketing ruse that went sideways?  Would it not be apropos if Seth Rogen and his production company staged the attack as a promotional stunt, and the attackers they hired found out that Sony was ripe for such an attack and then went further than they were supposed to?

Something to think about.

09 Dec 14

Significant Change And Periodic

UPDATED: Changed comments on requirement 10.6.2 to reflect the correct interpretation of that requirement.

No words or phrases in the PCI standards elicit more comments and questions than “significant change”, “periodic” and “periodically”.

So what do these mean?  Whatever you define them to mean, as it is up to each organization to come up with formal definitions.  Those definitions should be based on your organization’s risk assessment.

Here are some suggestions as to appropriate definitions.

Significant Change

Significant changes are those changes that could impact or affect the security of your cardholder data environment (CDE).  Examples of significant changes are:

  • Changing devices such as firewalls, routers, switches and servers. Going from Cisco to Check Point firewalls, for example, is typically understood to be a significant change.  However, people always question this concept, particularly when going from a Cisco ASA 5505 firewall to an ASA 5520 or moving a virtual machine from one cluster to another.  The problem is that these moves can potentially introduce new vulnerabilities, network paths or even errors that would go unnoticed until the next vulnerability scan and penetration test.  And your luck would be that those tests are months away, not just a few days.
  • Changes to payment applications. This should be obvious, but I cannot tell you how many people argue the point on changes to applications.  Yet application changes are possibly the biggest changes that can affect security.  Not only should applications be vulnerability scanned and penetration tested before being put into production, but code review and/or automated code scanning should be performed as well.  If any vulnerabilities are found, they must be corrected or mitigated before the application goes into production.
  • Upgrades or changes in operating systems. Upgrades and changes in operating systems should also be obvious significant changes.  However, I have run into network and system administrators that want to split hairs over the impact of OS changes.  In my opinion, going from one version of an OS to another is just as significant as changing OSes.
  • Patching of operating systems or applications. While I do not think that patching necessarily results in a significant change, there are some patches, such as updates to critical services like .NET or the IP stack, that should be considered significant.  If you are properly working through requirement 6.2 (6.1 in PCI DSS v2) for patch management, you should take this into consideration and indicate whether vulnerability scanning and penetration testing are required after any particular patch cycle because of the nature of the patches being applied.
  • Network changes. Any time you change the network you should consider that a significant change, regardless of how “minor” the change might appear.  Networks can be like puzzles, and the movement of devices or wires can open unintended paths.

I have a lot of clients that set an indicator in their change management system or enter “Significant Change” in the change comments to flag significant changes.  That way they can try to coordinate significant changes with their scheduled vulnerability scanning and penetration testing.  It does not always work out, but they are making an attempt at minimizing the number of special scans and tests that are performed.  Such an approach also has a side benefit when it comes time to do their PCI assessment, as they can call up all significant changes and tie them to the vulnerability scans and penetration tests.
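One way to implement that flag is a simple rules check in the change management workflow.  A sketch along those lines; the categories simply mirror the list above, and the ticket fields are hypothetical:

```python
# Change categories from the list above that should always be flagged.
SIGNIFICANT_CATEGORIES = {"firewall", "router", "switch", "server",
                          "payment_application", "operating_system", "network"}

# Patches touching critical services are treated as significant too.
CRITICAL_PATCH_TARGETS = {".NET", "IP stack"}

def is_significant(change: dict) -> bool:
    """Hypothetical ticket fields: 'category' and 'patch_target'."""
    if change.get("category") in SIGNIFICANT_CATEGORIES:
        return True
    return change.get("patch_target") in CRITICAL_PATCH_TARGETS

ticket = {"category": "patch", "patch_target": ".NET"}
if is_significant(ticket):
    print("Flag as 'Significant Change' - schedule scan and penetration test")
```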

I would see this list as the bare minimum of significant changes.  As I stated earlier, it is up to your organization to develop your own definition of what constitutes a significant change.

Periodic and Periodically

Branden Williams was on a podcast shortly after the PCI DSS v3 was released and commented that he felt the number of occurrences of the words “periodic” and “periodically” was higher in the new version of the PCI DSS than in the previous version.  That got me thinking, so I went and checked it out.  Based on my analysis, these words occur a total of 20 times in the PCI DSS v3, with 17 of those occurrences in the requirements/tests.  That is a 150% total increase over v2 and an increase of 113% in the requirements/tests.

First off, just to shatter some people’s perception of the word, “periodic” does not equate to “annual”.  Yes, there may be instances where an activity can occur annually and still meet PCI DSS compliance.  But that is likely a rare occurrence for all but the smallest organizations and is definitely not how the Council has defined it.

The Council uses the words “periodic” and “periodically” to reflect that an organization should be following the results of their risk assessment to determine how often or “periodically” they should perform a certain activity.  For some organizations, that might happen to work out to be annually.  But for most organizations it will work out to be something more often than annually.

So which requirements specify a periodic time period?  Here are some of the more notable occurrences.

  • 5.1.2 – “For systems considered to be not commonly affected by malicious software, perform periodic evaluations to identify and evaluate evolving malware threats in order to confirm whether such systems continue to not require anti-virus software.”  Typically this evaluation would be done annually, but forensic analysis of breaches has indicated that it needs to be done more often, particularly with Linux and other Unix derivatives.  Based on the threats, semi-annual or even quarterly reviews may be needed for systems you believe do not warrant an anti-virus solution.
  • 5.2 – “Ensure that all anti-virus mechanisms are maintained as follows: Are kept current, Perform periodic scans, Generate audit logs which are retained per PCI DSS Requirement 10.7.”  Periodic scanning is always an issue with servers but, surprisingly, even more so with workstations.  In my opinion, scans for viruses and malware should be done at least weekly.  Scanning might need to be done daily if the systems are particularly at risk, such as in call centers where the workstations may go to the Internet to access competitors’ sales offerings.
  • 8.2.4.b – “Additional testing procedure for service providers: Review internal processes and customer/user documentation to verify that: Non-consumer user passwords are required to change periodically; and Non-consumer users are given guidance as to when, and under what circumstances, passwords must change.”  This requirement pairs with 8.6.2, which requires service providers with remote access to customers’ systems to not use the same credentials for each customer.  A number of recent breaches have pointed out the issues such a practice can lead to.  Not only are different credentials needed, but the passwords for those credentials need to change periodically, typically every 90 days.  This will likely spur the sales of enterprise credential vaults and similar solutions in the service provider ranks.  But it is not just service providers’ credentials; it is also their customers’ credentials.  Service providers need to advise their customers to change their passwords periodically as well, and that should also be at 90 day intervals at a minimum.
  • 9.7 – “Obtain and examine the policy for controlling storage and maintenance of all media and verify that the policy requires periodic media inventories.”  For this requirement, the PCI DSS already provides a required timeframe of at least annually.
  • 9.8 – “Examine the periodic media destruction policy and verify that it covers all media and defines requirements for the following.”  Periodic here typically means quarterly or even monthly if you have the volume of media to be destroyed.  The key though is to secure the media until it is destroyed.
  • 9.9 – “Examine documented policies and procedures to verify they include: Maintaining a list of devices, Periodically inspecting devices to look for tampering or substitution, Training personnel to be aware of suspicious behavior and to report tampering or substitution of devices.”  Here periodic means at least daily, if not more often.  I have clients that examine their points of interaction (POI) at every management shift change, which works out to three or four times a day.  Given that the POI is becoming the primary target of attacks, this will only become more important as time goes on.
  • 9.9.2 – “Periodically inspect device surfaces to detect tampering (for example, addition of card skimmers to devices), or substitution (for example, by checking the serial number or other device characteristics to verify it has not been swapped with a fraudulent device).”  As with 9.9, periodic here means at least daily, if not more often.
  • 10.6.2 – “Review logs of all other system components periodically based on the organization’s policies and risk management strategy, as determined by the organization’s annual risk assessment.”  This requirement allows systems to be ranked using an organization’s risk assessment to drive how often their log data must be reviewed (see the sketch after this list).  While systems that directly process, store or transmit cardholder data (CHD) must have their log data reviewed at least daily, other in-scope systems can have their log data reviewed less often based on the risk they present to the CDE systems.  Based on an assessment of the risk these “connected to” systems present, you might be able to justify weekly or even monthly review of their log data.  I doubt this will have a significant impact because most organizations have implemented internal or outsourced security information and event management (SIEM) solutions and are monitoring all in-scope systems in near real time.  But for those few organizations that are struggling with log reviews without a SIEM, this will afford them a bit of breathing space.
  • 12.10.4 – “Verify through observation, review of policies, and interviews of responsible personnel that staff with responsibilities for security breach response are periodically trained.”  It amazes me the number of organizations that claim to not have had an incident in the last year, not even a virus or malware outbreak.  Either the incidents were totally dealt with by their anti-virus solution (hard to believe) or I am not talking to the people that deal with these issues (probably more likely).  As a result, testing (which can satisfy this training requirement) is only being done annually, just like business continuity plan testing.  Given the ever increasing number of threats, this sort of training needs to be done more often than just annually.  Organizations should be testing their incident response plan at least quarterly so that people keep their skills up, as well as exercising the plan and finding any gaps or processes that need adjustment.
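As a worked example of what 10.6.2 permits, the risk ranking from the annual risk assessment can be mapped straight to a review frequency.  The tiers and intervals below are illustrative, not prescribed by the PCI DSS:

```python
# Log review intervals in days, driven by each system's risk ranking.
REVIEW_INTERVAL_DAYS = {
    "cde": 1,             # processes/stores/transmits CHD: daily (10.6.1)
    "connected-high": 7,  # higher-risk "connected to" systems: weekly
    "connected-low": 30,  # lower-risk "connected to" systems: monthly
}

def review_due(risk_tier: str, days_since_last_review: int) -> bool:
    """True when a system's log data is due for review under the policy."""
    return days_since_last_review >= REVIEW_INTERVAL_DAYS[risk_tier]

print(review_due("connected-low", 14))  # False - monthly review not yet due
```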

Hopefully we are now all on the same page with these terms.

04 Dec 14

It Is The QSA’s Fault

“Usually when PCI-compliant companies are breached, the real culprit is the assessor, the person who confirmed the company had met the PCI Requirements.” Jeff Multz, Dell SecureWorks

This is a very interesting approach for an employee at a qualified security assessor company (QSAC) to use to drum up business: tossing all QSAs, including his own organization’s QSAs, under the bus.  I know that is not what he meant to do, but that is certainly what he did with this statement in his posting a few days ago.

I think most QSAs know where Mr. Multz is coming from.  He is more than likely venting over losses to QSACs that we all know are more interested in revenue generation than security.  They further that goal by incenting their QSAs to do as many PCI assessments as possible in the shortest amount of time as well as identify opportunities for selling the QSAC’s security appliances to solve compliance problems.  And to just pile on, they further their revenue generation by being the low cost provider through a focus on volume of work over quality.  As Kurt Vonnegut said in Cat’s Cradle, “In this world, you get what you pay for.”

Getting back though to Mr. Multz and his statement that QSAs are responsible for all breaches, let us see how that plays out with a few breaches.

During the Target breach, it was the QSA that was socially engineered and gave away the keys to the kingdom and missed all of the alerts generated by the FireEye software.  At Neiman Marcus, it was the QSA that missed the alerts for 60+ days while the malware was reinstalling itself nightly.  It was the QSA that swapped out the points of interaction (POI) at Barnes & Noble for malware-infested POI.

Sorry Mr. Multz, but it was employees and/or contractors at all of these organizations, not the QSA, that had a part in these breaches, and the same goes for all breaches for that matter.  I really do not see how you can hold a QSA responsible for the inaction and errors of employees/contractors.  Organizations are not going to pay to have QSAs on site, 24×7, babysitting all of their employees to maintain compliance with PCI or any other compliance program.  Not only that, no security framework is ever going to stop breaches; all a framework does is hopefully minimize the impact when a breach occurs.

However, Mr. Multz was not done.

“The PCI Requirements were created so that organizations would focus on securing their networks, but many assessors only focus on meeting the requirements rather than security.”

From this statement it is painfully obvious that Mr. Multz does not understand what an assessment is about and how the assessment process works.  The job of a QSA is to execute the tests as defined in the PCI DSS Reporting Template and report the results of that testing – nothing more, nothing less.  Organizations are judged by a QSA as compliant with the PCI DSS whether they are just squeaking by or have a full-on security program second to none.  Organizations do not get “extra credit” or “atta boys” if they have gone beyond the requirements.

While the original intent of the standards was to focus on securing cardholder data, that got morphed by the wonderfully misdirected marketing job done by certain card brands before the PCI standards came together.  For those of us around the security industry more than a decade ago, we advised Visa and MasterCard to stop pushing their cardholder information security program (CISP) and site data protection (SDP) standards as “The Way” that was going to stop breaches.  We explained that, properly implemented, CISP and SDP should minimize the number of PANs obtained, but would not completely stop breaches.  It was only recently that the card brands started to realize this fact and stopped pushing the PCI standards as a security panacea.  If you have noticed, with the rollout of EMV, Visa, MasterCard and the PCI SSC have stated that EMV is not a “silver bullet” solution, and in other statements that there are no “silver bullet” solutions.  That is a long way from a decade ago when their security standards were sold as the “be all, end all” for stopping breaches.  Unfortunately for QSAs everywhere, that message is out there and we have to deal with it every day.

All of this is not to say that QSAs cannot and do not make recommendations to organizations regarding their security programs and how and where it needs to improve.  I constantly make suggestions during my PCI assessments on how my client needs to improve their security posture.  However, it is ultimately up to the organization to put such changes in place, not the QSA’s responsibility.  If an organization chooses inaction, I will bring it up again and again.  But as the old proverb states, “you can lead a horse to water, but you cannot make them drink”.

Where the PCI DSS assessment process truly fails is its point-in-time approach (with the exception of vulnerability scanning and a few other select requirements).  To address that shortcoming, the Council has introduced the concept of business as usual (BAU), and my guess is that we will see that concept placed into the standard in the next version.  It will be then that QSAs will have to test PCI compliance over a 12 month period, similar to the testing procedures financial auditors perform for annual financial audits.

As a result, the inclusion of BAU as part of the PCI DSS will likely be the straw that breaks the camel’s back for a lot of organizations.  This is because BAU will require organizations to track their compliance with the PCI DSS 24x7x365 as they should have been doing all along.  But from experience, I can tell you that there is no organization I have ever encountered that was compliant with any standard all of the time because people make mistakes.  As such, BAU is designed to shed light on those mistakes and require organizations to identify them and remediate them.  For organizations just squeaking by, this will probably make PCI compliance truly impossible to achieve.  If you are one of those organizations complaining about compliance with the current PCI DSS, just wait until BAU gets added.  Organizations that are truly interested in security are already implementing BAU because they see the operational value in integrating security controls with their other business controls.  BAU will show the true colors of those organizations that want security versus those that are checking a box.

And that gets me to Mr. Multz’s actual reason for his post, what makes a good QSA?  Good QSAs understand that the world is not perfect nor is security.  Good QSAs know that compliance with the PCI DSS does not and will not eliminate breaches.  Good QSAs know that the goal of PCI compliance is to minimize security control errors, provide an ability to recognize security control errors as soon as possible and then remediate those security control errors such that the security controls are only non-compliant for the shortest possible amount of time.

But just because a company has such errors does not automatically mean that they are not PCI compliant.  A good QSA only judges an organization non-compliant when the QSA has evidence that problems are consistently recurring and are not being corrected in a timely manner or corrected at all.

I appreciate Mr. Multz’s frustration but as a QSA I do not appreciate him tossing me under the bus with the QSAs that are doing a disservice to PCI compliance.  Like any industry, there are good service providers and there are bad service providers.  Those of us in this industry all know who the bad ones are and we hope they will get weeded out.  But from my own long experience in consulting, that does not always happen.

So in my very humble opinion, Mr. Multz needs to suck it up and deal with it, but stop tossing QSAs under the bus in the process.  QSAs are only the messengers.



