

The Future Of PCI?

The 2016 North American Community Meeting was a celebration of the PCI SSC’s 10th anniversary.  As befits such an anniversary, the Council offered a look back as well as thoughts on the future.  Some of those forward-looking assertions struck me as questionable and made me wonder about the Council’s thinking regarding its own future and that of its standards.

The first instance was during Stephen Orfei’s keynote on the first day.  The General Manager of the PCI SSC proudly announced that the Council trains around 5,000 people annually and that there are currently just over 2,000 QSAs and over 1,700 ISAs.  He then went on to explain that this is only the beginning and that more QSAs and ISAs would be needed.  But such a statement runs counter to where I think PCI is headed.

From the very beginning, the goal of the PCI standards has been to protect sensitive authentication data (SAD) and cardholder data (CHD) and to remove them from processes that do not require them.  With most merchants moving to P2PE, E2EE and tokenization, the only scope left at these merchants is going to be the card terminal or point of interaction (POI).  The only organizations that will still have SAD/CHD will be transaction processors and acquiring banks.  Given that premise, why would the number of QSAs need to grow?  In my opinion, with merchant scope radically shrinking, the need to increase QSA and ISA counts is a pipe dream.

If there is less of a need for QSAs, there will also likely be fewer QSACs.  Right now there are almost 370 QSACs in the world.  If all that will be left to actually assess are transaction processors, issuers and acquiring banks, then the number of QSACs will also have to shrink.  That means more competition among QSACs for those transaction processors, issuers and acquiring banks until the number of QSACs reaches a level the market can actually support.

I could see the need for ISAs potentially going up, but I would expect a lot of those people will just be QSAs who go in-house as the QSA numbers shrink.  With merchant scope shrinking so much, the need for ISAs is not going to be as large as I think the Council believes.  However, because of the silly Council rule that you cannot convert from a QSA to an ISA without going through the ISA training program, the Council will still have ISA training revenue for the time being.

eCommerce will continue to be an ever larger part of merchants’ business.  But again, most merchants are moving to redirects and iFrames to reduce PCI scope.  While I fully expect the Council to adjust SAQ A to finally and realistically address the risks of even redirects and iFrames, that will likely not require any increase in the number of ASVs, which currently stands at 107.  Never mind the fact that the ASV business became a commodity long ago in the rush for every ASV to be a low cost provider.  As a result, there is very little margin left, if any at all, in ASV scanning.  Most ASVs are only in the business because they need to offer vulnerability scanning services so their clients can “one stop shop” their PCI compliance.  As a result, I really doubt that there will be any growth in the number of ASVs, and I would not be surprised if the number of ASVs drops over the next decade.

The next time I felt like the Council was going down the wrong path was when I attended the small merchant session.  What a waste of people’s time.  During that session, I leaned over to one of my colleagues and said, “Why is this taking so long?”

“What is your problem?” they asked.

“Why are they not just telling these small merchants to go to P2PE and tokenization?  Just get this done and done right,” I said, very frustrated.

In my mind the small merchant session ran 45 minutes too long.  This topic is one of those rare instances that could be covered in one of those TED Talk-style 20-minute sessions.  Small merchants are looking for a quick answer and they have one: P2PE and tokenization.  Period.  End of discussion.  Yet the group on stage continued to blather on and on and on.

There you have it.  I feel much better now that I have that off my chest.


The Great Multi-Factor Authentication Debate

The Council brought back the Assessor Session at this year’s Community Meeting, and it took only one question to get passions flowing.  That question sought clarification of a comment made by Ralph Poore, Director, Emerging Standards at the Council, about multi-factor authentication (MFA).

First, a little background to get everyone up to speed, remembering that the US National Institute of Standards and Technology (NIST) SP 800-63B standard in question is still a draft and has not been finalized.  However, everyone expects the standard to be adopted largely unchanged, with only minor wording revisions that would not affect its overall recommendations.

NIST’s statement about SMS appears in the Out-of-Band Verifiers section of SP 800-63B, which states:

“Due to the risk that SMS messages or voice calls may be intercepted or redirected, implementers of new systems SHOULD carefully consider alternative authenticators. If the out-of-band verification is to be made using the public switched telephone network (PSTN), the verifier SHALL verify that the pre-registered telephone number being used is not associated with a VoIP (or other software-based) service. It then sends the SMS or voice message to the pre-registered telephone number. Changing the pre-registered telephone number SHALL NOT be possible without two-factor authentication at the time of the change. OOB using the PSTN (SMS or voice) is deprecated, and may no longer be allowed in future releases of this guidance.”

NIST is only calling out that implementers of new SMS or voice MFA systems should carefully consider the security implications of those channels.  NIST has not invalidated existing SMS and voice MFA solutions; it simply does not want any new implementations unless there is no choice because the implementation is already underway.  So while SMS and voice MFA can still be used where already deployed, NIST is saying that future implementations are out of the question, which effectively kills those solutions going forward.
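For context on what an “alternative authenticator” looks like in practice, below is a minimal sketch of a time-based one-time password (TOTP) generator per RFC 6238, the kind of “soft” token that app-based MFA solutions typically implement.  This is purely my own illustration, not something prescribed by NIST or the Council, and the shared secret shown is hypothetical.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password (a 'soft' token code)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical shared secret provisioned to the authenticator app
print(totp("JBSWY3DPEHPK3PXP"))
```

The point of the sketch is simply that a soft token derives its code from a locally stored secret rather than from a message sent over the PSTN, which is why the SMS/voice deprecation does not automatically apply to it.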

With that as our background: in a Community Meeting session, Ralph Poore stated that MFA to devices such as smartphones, or back to the same device or browser (i.e., “soft” solutions), was not considered secure because of statements in the NIST draft of SP 800-63B.  I was attending a different session when Ralph made his statements, but I can tell you that my cell phone started buzzing with text messages from various people asking if we had all heard what we thought we had heard.  But since there was no Q&A at that session, there was no way to clarify Ralph’s statements.

As a result, this issue was brought up in the Assessor Session to clarify those MFA comments.  Ralph stood and reiterated his remarks and that sent the room into an absolute tizzy.  It was pointed out that NIST had only invalidated SMS and voice for future two-factor authentication, not all soft token solutions such as RSA’s or Symantec’s application solutions.  However, Ralph continued to repeat his remarks saying that they had invalidated all soft solutions.  That brought the house down and people were loudly explaining that his comments were invalidating decades of recommendations for OOB MFA solutions.  Eventually the room calmed down and the Council agreed to review their position on such “soft” MFA solutions.

So that is where we are with this subject.  Time will tell if the Council revises its statements on MFA and comes into line with what NIST is saying on the subject.


2016 North American PCI Community Meeting

It was a hectic week out in Las Vegas at the Community Meeting this year.  I wish I had more time this year to just hang out with everyone, but I was in the middle of a number of assessments that needed to get done, so I was working at night and attending sessions during the day.

By the time you read this, the slide decks from the sessions will have been posted on the Council’s Web site.  So all of you that attended will be able to download those presentations.  You go to the link provided in the program guide, provide your name, organization name, email address and the password from the program guide (ve4eqepR) and you are in.

The Council tried the 20-minute “TED Talk” format again with the Wednesday sessions.  A number of the sessions I attended could have easily used an extra 10 minutes, if not a complete hour.  I know the Council is trying to move things along and cover a lot of information, but topics like “the cloud” or EMV standards simply cannot be properly covered in 20 minutes, no matter how good the speaker or how organized the presentation.

Here are some of the more notable highlights.

The Assessor Session Is Back

Possibly the most anticipated session of the Community Meeting this year was the return of the Assessor Session after a two-year absence.  But unlike previous years, when this session occurred before the start of the Community Meeting, it was moved to the end.  I heard a number of complaints throughout the week from assessors about the session being at the end of the meeting.  Yet when Thursday lunch came around, there were a lot of QSAs, ISAs and ASVs who adjusted their travel schedules (Guru included) to attend.

While I originally agreed with people that moving the Assessor Session to the end was not a good idea, the more I have thought about it, the more I think it was better at the end.  That way, assessors can get questions about topics that came up during the meeting answered while we are all still together.  I know we all want to get home, but I think the Assessor Session offers more value to all of us at the end.

On the not so good side, the Council chose to use up an hour and 10 minutes presenting a variety of topics, some of which took way too long to discuss.  But the larger question was why this material was not presented during the main conference.  Not only did the rest of the meeting attendees miss out, but some people never got their questions asked.  I am also sure that running long discouraged a lot of people from asking questions as well.

That said, there were a number of good questions asked during this session and the Council rewarded five people with large PCI SSC coffee mugs for their “good” questions.

One question, though, really created a stir.  I will address that question, regarding multi-factor authentication (MFA), in a separate post to be published later.  However, I will say this: the Council really needs to go back and rethink its position on MFA if what they said is accurate.

The Council was asked about SAQ A and where it is headed.  The concern in the assessor community is that the mechanism that issues/controls the iFrame/redirect needs protection.  However the changes to SAQ A for v3.2 did not seem to address this obvious risk.  Based on how the question was answered, I am guessing that the hosting community is trying to keep SAQ A as simple and easy as possible regardless of the risk.

Another area the Council agreed to review was the change to requirement 3.2 in the ROC Reporting Template.  In v3.2 of the template you can no longer mark those requirements as Not Applicable; however, it was pointed out that an ‘NA’ is still allowed in SAQ D.  The reason for seeking this clarification was related to past guidance from the Council to follow the SAQs for P2PE (SAQ P2PE) and outsourced eCommerce (SAQ A) when filling out a ROC for merchants with those solutions.  It was pointed out that neither of these SAQs includes requirement 3.2, so how is a QSA/ISA supposed to respond to it in the reporting template if it cannot be marked as ‘NA’?

Understanding The Current Data Breach Landscape (aka Verizon DBIR Report Discussion)

When Verizon sends out Chris Novak, you know you will get a great presentation on the Data Breach Investigations Report, aka ‘The DBIR’.  This year was no exception, albeit somewhat depressing, as Chris again pointed out that most breaches are the result of sloppy operations, lax security and insecure applications.  Essentially, security issues that we should have gotten past a long, long time ago but have not.

Architecting for Success

Who better to talk about success than a representative of the Jet Propulsion Laboratory (JPL), which develops spacecraft to explore the most inhospitable environments we know: outer space and planetary bodies.  Brian Muirhead was the keynote speaker on Wednesday and is the Chief Engineer for the Mars Science Laboratory, the group that designed and developed the various Mars exploration rovers.  He gave a great discussion on how to look out for problems and develop self-managing devices.  Very interesting, and I am sure an eye opener for people that we need to stop accepting the sloppy and messy solutions we settle for when handling cardholder data.

Internet of Things Keynote

The Thursday keynote was just a great time.  While there seemed to be very little directly relevant to PCI compliance presented by Ken Munro and an associate from Pen Test Partners, it was a fabulous time exploring the wonderful world of flawed technology from a tea kettle, to a refrigerator to a child’s doll.  In the case of the child’s doll, they removed the word filter database and therefore allowed the doll to say things that no child’s toy should say.

What was relevant to PCI was the ease with which these folks were able to reverse engineer the firmware and software used by these devices.  It gave a lot of people in the room who were unfamiliar with IoT and penetration testing pause as to how easily seemingly sophisticated technology can be abused.

Cloud Security

While it was great to see Tom Arnold from PSC, the even better thing about this presentation was that Amazon provided an actual human being, in the form of Brad Dispensa, to talk about Amazon’s EC2 cloud.  While billed as a discussion on incident response, the session provided great insight into AWS’s EC2 service offering as well as the variety of new tools available to manage the EC2 environment and to provide auditors and assessors with information about the configuration of that environment.  The key takeaway from this session is that organizations using EC2 can provide everything needed for conducting a PCI assessment from their EC2 management console.


EMV

Brian Byrne from EMVCo gave a great 20-minute session on EMV.  The slide deck will be more valuable than the presentation because he had so much content to share and so little time to share it in.  Of note was his discussion of version 2.0 of Three Domain Secure, otherwise known as 3D Secure or 3DS.  While v1.0 will remain under the control of Visa, EMVCo has taken over management and development of the 3DS standard.  The new version is in draft and only available to EMVCo members, so this was the first time I had been able to see what the new version has to offer.  But because of the time constraint, I will need to wait for the slide deck to be published to know more.

PCI Quality Assurance Program

Brandy Cumberland of the Council provided a great presentation on the Council’s quality assurance program, with which all QSAs have become familiar.  I appreciated her discussion of James Barrow, who took over the AQM program after most of us wanted to kill his predecessor for creating one of the most brutal QA programs we had ever seen.  James’ efforts to make the AQM program more relevant cannot be overstated, as he took over a very troubled affair.  This was a bittersweet discussion, as James passed away right after last year’s Community Meeting and will be greatly missed by those of us who came to know and respect him.  Brandy took over the AQM program when James left the Council and has been doing a great job ever since.  She is possibly one of the best resources the Council has and does the AQM program proud.

Application Security at Scale

The last great session of the conference I saw was from Jeff Williams of Contrast Security.  The reason this session was great was that it discussed what application developers can do to instrument their applications not only for security, but also for operational issues.  He introduced us to interactive application security testing (IAST) and runtime application self-protection (RASP).  The beauty of this approach is that applications get security in the form of embedded instrumentation that produces actionable analytics, which then allow decisions to be made to respond to threats against these applications.  It sounds like an interesting approach and concept and I cannot wait to see it in action.
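To make the RASP idea a little more concrete, here is a toy sketch of what runtime self-protection instrumentation conceptually does: a wrapper that watches a function’s inputs at run time, blocks an obviously malicious value and emits an analytics event.  This is my own illustration under very naive assumptions, not Contrast Security’s product or API; real RASP tools instrument the runtime itself rather than matching regexes at the call boundary.

```python
import functools
import logging
import re

logging.basicConfig(level=logging.INFO)

# Extremely naive signature, purely for illustration.
SQLI_PATTERN = re.compile(r"('|--|;|\bunion\b|\bdrop\b)", re.IGNORECASE)

def rasp_protect(func):
    """Instrument a function so suspicious input is blocked and reported."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SQLI_PATTERN.search(value):
                logging.warning("RASP event: blocked call to %s with input %r",
                                func.__name__, value)
                raise ValueError("request blocked by runtime protection")
        return func(*args, **kwargs)
    return wrapper

@rasp_protect
def lookup_order(order_id: str) -> str:
    # Hypothetical application code being protected
    return f"SELECT * FROM orders WHERE id = '{order_id}'"

print(lookup_order("12345"))           # allowed
# lookup_order("1' OR '1'='1")         # would be blocked and logged as an event
```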

As always, it was great to see and catch up with all of my friends in Las Vegas at the PCI Community Meeting.  It was also great to meet a lot of new people as well.  I look forward to seeing all of you again next year in Orlando.


Microsoft Changes Their Patching Strategy

Back in May 2016, Microsoft issued a blog entry on TechNet giving the world insight into its new patching strategy: the concept of a monthly “rollup” patch, or what a lot of people are calling a “mega-patch”.  In August, another blog entry further detailed this strategy and explained that, from October 2016 onward, this is how Microsoft would patch Windows.

But there is even more to it.  For WSUS and SCCM users, security patches will be separated from the Monthly Rollup into their own Security mega-patch.  The idea behind separating the security patches into their own mega-patch is to allow organizations to at least stay current on security.  However, there is a twist on this approach as well.  Organizations such as small businesses that do not use WSUS or SCCM will only get a single mega-patch through Windows or Microsoft Update that combines the Monthly Rollup and Security mega-patches in one.

So what could go wrong you might be asking?

The biggest drawback to this scheme is that, should you have any issue with a mega-patch, you must back out the whole patch, not just the item that is creating the issue.  That means instead of having just one potential issue to mitigate, you could have as many issues to mitigate as there are fixes in the patch.  From a PCI compliance perspective, that could mean lots of missing patches on your Windows systems if they run into an issue with a mega-patch.  This is doubly bad for organizations not using WSUS or SCCM because they will be backing out security patches as well as application patches.

But it can get even worse.  These mega-patches are cumulative, meaning that every month Microsoft rolls the previous mega-patch into the new month’s mega-patch.  Say one month a mega-patch cannot be applied for compatibility reasons: you apply the monthly mega-patch, your point of sale (POS) application fails to work with it, and you must back it out.  Because every subsequent month’s mega-patch still contains the problem fix, you will not be able to patch your POS systems at all until your vendor resolves the compatibility issue.
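To illustrate why a single compatibility problem can block patching indefinitely, here is a tiny sketch of the cumulative behavior.  The KB numbers are made up and the set model is a simplification of how the rollups actually ship.

```python
# Hypothetical KB numbers; each month's rollup is a superset of the prior month's.
rollups = {
    "2016-10": {"KB111111", "KB222222"},
    "2016-11": {"KB111111", "KB222222", "KB333333"},
    "2016-12": {"KB111111", "KB222222", "KB333333", "KB444444"},
}

problem_fix = "KB222222"   # assume this fix breaks the POS application

# Every later rollup still contains the problem fix, so no rollup can be
# applied until the vendor resolves the incompatibility.
installable = [month for month, fixes in sorted(rollups.items())
               if problem_fix not in fixes]
print(installable)   # [] -- nothing can be installed
```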

But I foresee small businesses running into the worst issues with this new approach.  Since small organizations likely will not be using WSUS or SCCM, they will not get a separate Security mega-patch; they will only get the single mega-patch that combines the Monthly Rollup and Security patches.  If any issue occurs with that single mega-patch, these businesses will not even get their security patches.  That creates a situation where the organization must figure out how to mitigate its inability to secure its systems, and it could mean months of security exposure until the original compatibility issue is resolved.

But to add insult to injury, I can also see situations where a vendor has issues resolving a compatibility problem with a mega-patch, finally gets it fixed, and then encounters a new compatibility issue with the latest mega-patch.  Based on how Microsoft is running these mega-patches, there appears to be no way to go back to a compatible and usable mega-patch.  This could result in organizations being unable to patch at all due to ongoing compatibility issues.

At a minimum, I think Microsoft will need to make the Security mega-patch separate from the Monthly Rollup for all organizations, not just those using WSUS or SCCM.  At least then, all organizations can apply security patches independent of the Monthly Rollup which would be more likely to be the one that would create compatibility issues.

It will be interesting to see how this new patching strategy plays out.  Hopefully it does not create even more risk for users of Windows.  If it does, I would not be surprised if the PCI SSC invokes significant new controls on Windows-based solutions.  That could be the final straw in using Windows for a lot of merchants.  Time will tell.


Level 3 Versus Level 4 Merchants

There seems to be a lot of confusion over these two merchant levels.  As such I thought I would take a quick moment to clarify them.

From the respective Web sites, here are the definitions for a Level 3 Merchant.

“20,000 to 1 million ecommerce Visa transactions annually” – Visa USA

“Any merchant with more than 20,000 combined Mastercard and Maestro e-commerce transactions annually but less than or equal to one million total combined Mastercard and Maestro e-commerce transactions annually” OR “Any merchant meeting the Level 3 criteria of Visa” – MasterCard

From the respective Web sites, here are the definitions for a Level 4 Merchant.

“Merchants processing less than 20,000 Visa ecommerce transactions annually and all other merchants processing up to 1 million Visa transactions annually” – Visa USA

“All other merchants” – MasterCard

The operative factor is eCommerce transactions.  Level 3 has always been about eCommerce.  It was specifically created to identify those small merchants that are predominantly eCommerce focused.  That delineation is important because of the risk presented by card not present (CNP) payment transactions as well as the potential loss of sensitive authentication data (SAD) or cardholder data (CHD) from the Web sites used for eCommerce.

However, where the confusion occurs is that both merchant levels end at 1 million total transactions from all payment sources (i.e., eCommerce, MOTO, card present, etc.).

The bottom line is that if your organization is conducting more than 20,000 payment transactions through your eCommerce payment channel, but your total number of payment transactions is less than 1 million, then you are a Level 3 Merchant.  Otherwise, your organization is a Level 4 Merchant.
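Expressed as a quick decision rule, the bottom line looks like the sketch below.  This only covers the Level 3 versus Level 4 distinction discussed here; Levels 1 and 2, which apply at higher transaction volumes, are outside the scope of this post.

```python
def small_merchant_level(ecommerce_txns: int, total_txns: int) -> int:
    """Level 3 vs. Level 4 per the rule above (Levels 1 and 2 not considered)."""
    if ecommerce_txns > 20_000 and total_txns < 1_000_000:
        return 3
    return 4

print(small_merchant_level(ecommerce_txns=50_000, total_txns=400_000))  # 3
print(small_merchant_level(ecommerce_txns=5_000, total_txns=400_000))   # 4
```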

Now we should all be on the same page.


Third Party Service Provider PCI Compliance

This has recently become a very hot topic as more and more businesses get serious about controlling their supply chains, not only for PCI but for information security in general.  It has only taken three years since the Target breach for organizations to really understand that their computer systems and networks are part of a larger technology ecosystem and that their security depends on the security of everyone else to which they are connected.  This post provides guidance for service providers and merchants alike.

The first question that can come up is what is the difference between a third party and a service provider?  Technically there is no difference.  “Third party” is a term that comes from the financial audit industry which is where I first encountered it a long time ago.  Third parties are those outside organizations that provide services under contract to another organization.  Examples can include services such as office cleaning, facility management, mailroom management, lock box services, secure document destruction, human resources and a whole host of other business services.

In today’s complex corporate structures, functions such as information technology or human resources as well as whole business units can be separate legal entities and provide business services to other parts of the corporation.  While not truly outside organizations, for regulatory assessments they may also be treated as third party organizations.  I have a number of large clients that take this approach because it simplifies their audit/assessment and compliance reporting processes.  However if a merchant or service provider is going to take such an approach, it should be discussed with their acquiring bank and/or the card brands to obtain their formal approval before assessing and reporting under that approach.

What Organizations Are Service Providers?

The next question that comes up is what organizations qualify as a third party service provider under PCI?  The PCI SSC defines a service provider in the PCI DSS Glossary as:

“Business entity that is not a payment brand, directly involved in the processing, storage, or transmission of cardholder data on behalf of another entity. This also includes companies that provide services that control or could impact the security of cardholder data. Examples include managed service providers that provide managed firewalls, IDS and other services as well as hosting providers and other entities. If an entity provides a service that involves only the provision of public network access—such as a telecommunications company providing just the communication link—the entity would not be considered a service provider for that service (although they may be considered a service provider for other services).”

Under that definition, any third party organization that directly processes, stores or transmits sensitive authentication data (SAD) or cardholder data (CHD) is a service provider.  Examples of these organizations are transaction gateways, transaction processors and some loyalty program providers.  One notable exception is acquiring banks.  Acquiring banks are only third parties if they provide services in addition to being an acquiring bank, such as card terminal management and transaction processing.

Where things get messy is third party service providers that do not directly come into contact with SAD or CHD but could come into contact with it.  While I have written two posts on this topic, there still seem to be a lot of managed service providers in denial over whether they need to be PCI compliant.  The bottom line is that if you are a service provider and you could impact the security of SAD/CHD, you must comply with the PCI standard (see PCI SSC FAQ 1092).

But that is where the complaints and arguments from such peripheral service providers focus.  Most have no idea whether their customers need PCI compliance unless they ask or get asked by a customer.  As a result, they tend to argue that, because they do not know, they do not need to comply.  Unfortunately, ignorance is not a valid reason to not be PCI compliant.  That is why it is incumbent on all service providers to ask every customer and prospective customer whether they require PCI, HIPAA, GLBA or any other regulatory compliance so that the service provider can ensure that they can properly meet those requirements.

Service Provider Levels Explained

Service providers, like merchants, are categorized into levels by the card brands.  The most commonly referenced service provider levels are those defined by Visa, MasterCard and Discover.

  • Level 1 service providers conduct 300,000+ transactions annually on behalf of their customers, and
  • Level 2 service providers conduct less than 300,000 transactions annually for their customers.

JCB and American Express have their own service provider level definitions, but there are very, very few service providers that process exclusively for those brands.  If you are one of those rare service providers, visit the appropriate brand’s Web site and review their service provider level definitions.

Level 1 service providers must conduct a PCI assessment that results in a service provider Report On Compliance (ROC) and related Attestation Of Compliance (AOC).  That assessment can be conducted by a QSA or an ISA just as with merchant PCI ROCs.  Level 2 service providers can use either the service provider SAQ D or create a service provider ROC.

These levels also add confusion to those service providers that do not process or transmit any transactions.  As they rightfully point out, their transaction volume is zero.  I then point out to them that zero is less than 300,000, so they are considered a Level 2 service provider.  Problem and confusion solved.

The most important thing to understand about service provider levels is that if your organization is a Level 1 service provider for any card brand, it is a Level 1 service provider for all card brands.
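Put as a quick sketch, the two rules above (the 300,000 transaction threshold and “Level 1 for one brand means Level 1 for all”) combine like this; the brand names in the example are just illustrative inputs.

```python
def service_provider_level(annual_txns_by_brand: dict) -> int:
    """Level 1 if 300,000+ transactions annually for any brand, else Level 2."""
    if any(txns >= 300_000 for txns in annual_txns_by_brand.values()):
        return 1
    return 2

print(service_provider_level({"Visa": 500_000, "MasterCard": 10_000}))  # 1 for all brands
print(service_provider_level({"Visa": 0, "MasterCard": 0}))             # 2 -- zero is still Level 2
```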

The next important thing to note about these assessment processes is that they must use the service provider specific SAQ D, ROC and AOC forms.  I cannot tell you the number of times I have gotten a service provider’s AOC and/or SAQ/ROC and it is not the service provider specific version.  More on this topic later.

The Global Registries

Once we get these third parties to admit they are in scope for PCI compliance, the next issue that typically comes up is in regards to the card brand global registries for service providers.  Both Visa and MasterCard have public registries of service providers on their respective Web sites.  These are strictly marketing schemes run by the respective brands and it is not mandatory that service providers be listed on either of these lists.  Since they are marketing schemes, they have no real meaning regarding any merchant organization’s PCI compliance and are not a substitute for getting an AOC from a service provider.  What they do provide is a quick way for merchants to find PCI compliant service providers providing services they wish to outsource.  As a result, a lot of service providers like to be listed on these Web sites just so that merchants will consider them.

To be listed on either of these Web sites, the service provider must have a PCI QSA (an ISA cannot be used) conduct an assessment, and then the QSA must file the resulting compliant ROC and AOC with the appropriate card brand.  Service providers that process or transmit SAD/CHD will have a relationship with a bank, and that bank must sponsor the service provider with the brands to get listed on the Web site.  Service providers that do not have a relationship with a bank because they do not process or transmit SAD/CHD must contact the appropriate card brand, which will then sponsor them.  Once approved by the brand, the service provider pays a fee to be listed.  To stay listed on a brand’s Web site, the service provider must annually revalidate their compliance using a QSA, have the QSA file the compliant ROC/AOC with the brand and pay a renewal fee.

To add confusion for service providers, Visa also maintains a separate, private inventory of service providers.  This list is for Visa and their acquiring banks to reference and is not available to the public.  Visa is trying to ensure that all service providers are tracked for their annual PCI compliance even if they do not register for their public Global Registry.  So if you are a service provider and are filing a service provider SAQ D/ROC or you do not register for the public Global Registry, you will be asked to fill out information for this private Visa service provider inventory.

Service Provider AOC Issues

The most common AOC problem we encounter with service providers is that they only assess some of the services they provide, not all of them.  For third party-run data centers, the most common requirements assessed are 9 (physical security) and 12 (policies), but no other requirements are assessed even if that same firm also provides managed services such as network security, network monitoring, virtualization, server management and network management.  I will address this situation later in the post when discussing service providers that do not have a PCI assessment.

The next most common problem is that the AOC provided to the merchant is not a service provider AOC.  The biggest problem this mistake creates is that there is no way to know what services provided to the merchant were assessed for PCI compliance.  Then you have a very embarrassing conversation for all involved as you inform the service provider that their PCI compliance is reported on the wrong form(s).  Worse is the fact that most times this results in a whole new assessment being conducted because service provider requirements were not assessed and too much time (i.e., more than 90 days) has passed since the assessment was completed.

With the introduction of v3 of the PCI DSS, the service provider AOC has had a number of changes to facilitate merchants’ evaluation of the service provider’s PCI compliance.  The first change was to list not only what services were assessed in section 2a, but what services were not assessed.  Then for each service that was assessed, the QSA/ISA is required to individually document in separate sections of 2g of the AOC which of the 12 requirements were tested for each service.

Which brings us to the third most common problem: the AOC does not document each service individually in section 2g.  As I stated earlier, this was a change with v3, but many QSAs/ISAs did not follow the instructions in that section.  In addition, the Council has not helped this situation, as the AOC document is locked, so adding additional sections for 2g is not possible using the Council’s form.  The Council’s advice is to copy that section and then paste additional copies as necessary at the end of the AOC.

Another situation that we occasionally run into is service providers that have gone through the PCI assessment process but refuse to provide their customers with a copy of their AOC.  Reasons for not providing the AOC vary (from the stupid to the absolutely idiotic), but it happens more often than I would like to admit.  The PCI SSC has repeatedly reinforced at their Community Meetings and in FAQs that if a service provider has been independently assessed, they must provide their service provider AOC to their customers.  If you encounter such a situation, I would recommend contacting the appropriate card brands and complaining to them about the service provider particularly if that service provider is listed on the card brands’ public Global Registry.  In most cases, such complaints will result in the brand suspending the service provider’s listing until they comply.

The last problem we encounter with AOCs is their timing and availability.  In a perfect world, every service provider would have an AOC less than a year old available for every customer.  But in the real world, a merchant conducting their assessment encounters service providers that either: (a) are also in the process of conducting their assessment, (b) had their assessment delayed and will not be able to provide an AOC by the time the merchant’s assessment is completed, or (c) do not have an AOC at all.

The first two conditions are timing issues and should not be a problem unless the service provider has not been compliant in the past.  As the Council has repeatedly pointed out, no organization’s PCI compliance is affected by the PCI compliance of any other organization.  In addition, the Council has also said that the PCI assessment process is not conducted to the standard of an AICPA SSAE 16 assessment, which requires reliance on third party assessments.  As a result, you need to work with your QSA/ISA, bank and service providers to agree on an approach to handling these first two conditions.  My recommendation is that as long as there is close to a year between assessments (give or take 30 to 60 days), I would accept whatever current AOC is available from the service provider.  For situations where there is going to be a significant difference in time, I would consult with your acquiring bank or the card brands.

It is the third condition that creates the most heartburn for a merchant and the service provider.  In this situation, a merchant has no choice but to include that service provider as part of the scope of their PCI assessment (see PCI SSC FAQs 1065 and 1290).  Most of the time, this is covered under the service provider’s contract under a section regarding regulatory and legal compliance audits and assessments.  The service provider agrees to allow the merchant’s staff or authorized representatives to conduct any audits/assessments whenever required.  In very rare situations, I have encountered older contracts that do not have such audit/assessment provisions and it becomes a painful issue to get the service provider to comply with the assessment process.

However, this third condition creates a larger scope and will result in increased costs for a merchant’s PCI assessment.  Sometimes that increase can be extremely significant if the service provider is doing a substantial amount of the work that needs to be evaluated such as hosting and managing a merchant’s IT environment.  While QSAs try to minimize the occurrence of this sort of situation when scoping engagements, they still encounter it as the merchant is confused and does not understand the implication of their decision to use a non-PCI compliant service provider and their responsibilities under the PCI DSS and their Merchant Agreement.  As a result, the QSA does not get accurate answers to their scoping questions and does not find out about the service provider’s involvement until they are performing the assessment.

Non-PCI Compliant Service Providers

Before discussing this, I first need to dispel a myth.  Nowhere does the PCI DSS require a merchant to use only PCI compliant service providers (see PCI SSC FAQ 1312).  That is a requirement specified by certain card brands in their Merchant Agreements (most notably Visa and MasterCard).  Therefore not using PCI compliant service providers does not and should not result in a PCI compliance issue provided they are assessed as part of the merchant’s assessment as stated earlier.

Getting back to the topic at hand: as an example, say you have a service provider AOC and it says that requirement 8 is not compliant (with the latest changes in v3.2 for service providers, this is a situation that is becoming more and more common).  As a merchant, what do you do?

This is where requirements 12.8 and 12.9 come into play as part of an organization’s vendor management process.  As part of your organization’s vendor management process you should have the following processes, at a minimum, in place.

  • Have a complete inventory of service providers including the date of their last AOC, the expected receipt date of their next AOC, and whether the current AOC was PCI compliant. If not PCI compliant, the inventory should note, for each service provider, the areas of non-compliance and the dates each area will be compliant. (A minimal sketch of such an inventory record appears after this list.)
  • For any non-PCI compliant service providers, periodic meetings need to be held with the non-compliant service provider to obtain updates on their remediation efforts. Depending on the duration and complexity of the project(s), these meetings may be conducted quarterly, monthly or even weekly.  However, notes need to be kept for all of these calls and the inventory updated with the status of the project(s).  These updates should continue until the service provider is judged PCI compliant.
  • Any adverse changes in remediation efforts status should result in a review of the service provider and possibly result in seeking a new PCI compliant service provider.
  • To be judged compliant, the service provider must have a QSA/ISA submit proof (for example, a letter outlining evaluation procedures followed with a revised AOC) that they have evaluated the remediation efforts and that those efforts are complete and the PCI requirements in question have been judged PCI compliant.
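As a minimal illustration of the inventory item above, a record per service provider might look like the sketch below.  The field names and the example provider are my own invention, not a Council-mandated format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, Optional

@dataclass
class ServiceProviderRecord:
    """One entry in a service provider PCI compliance inventory (illustrative only)."""
    name: str
    services_used: str
    last_aoc_date: Optional[date]
    next_aoc_expected: Optional[date]
    aoc_compliant: bool
    # Non-compliant areas mapped to the date each is expected to be remediated
    noncompliant_areas: Dict[str, date] = field(default_factory=dict)

inventory = [
    ServiceProviderRecord(
        name="Example Hosting Co",            # hypothetical provider
        services_used="Managed firewalls, hosting",
        last_aoc_date=date(2016, 3, 1),
        next_aoc_expected=date(2017, 3, 1),
        aoc_compliant=False,
        noncompliant_areas={"Requirement 8": date(2016, 12, 31)},
    ),
]
```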

The most important takeaway in this whole discussion regarding non-PCI compliant service providers is that their non-compliance does not affect the PCI compliance of the organization using them.  That said, anyone following the procedures outlined above should be prepared to provide their acquiring bank and/or card brands with proof of all of these monitoring activities.

As with all topics related to PCI compliance, this one is no different and there will be nuances to all of these discussions.  But hopefully you now understand all of the basics regarding third party service providers.


The Council Issues A New Information Supplement

Back in May, the PCI SSC issued a new information supplement titled ‘Effective Daily Log Monitoring’.  It probably slipped under most people’s radar because of the release of v3.2 of the PCI DSS.  And my friend Anton Chuvakin will have this on his reading list immediately, if he has not read it already, because his book, ‘Logging and Log Management: The Authoritative Guide to Understanding the Concepts Surrounding Logging and Log Management’, is listed in the information supplement’s bibliography and, from the way the supplement reads, figured prominently in its content.

First some overall impressions.  Unlike a lot of other information supplements, this one actually clarifies and provides a significant amount of knowledge and great advice.  Every organization that needs to meet PCI compliance should read it.  But more importantly, any organization that does not fully understand the collection and analysis of log data should read it to get a foundation in these practices.

One of the best recommendations comes in section 5.2 of the supplement, which instructs organizations to develop a baseline from their log data.  This is the biggest problem when an organization gets serious about analyzing log data: they fail to baseline what is normal.  I cannot tell you the number of times I have discussed logging with the personnel responsible for monitoring alerts and they cannot tell me what is “normal” activity versus “abnormal” activity.  As a result, they either: (1) chase every alert as though it is an attack (it is not), or (2) ignore alerts because they assume they are “normal” activity (they are not).  The result in case #1 is burnout in the organization and, of course, in case #2 the organization is likely getting attacked and compromised.

I cannot overstate the importance of this activity, both in getting a log monitoring project going and as something that needs to be an ongoing activity.  No organization’s IT environment is static, so it is important to continuously baseline the log data and adjust for changes.  This can be done by adding an item to change control tickets to evaluate the impact on log data, or by re-baselining every quarter.  The bottom line is that baselining is not something done once; the baseline must be periodically updated.
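As a trivial illustration of what baselining can look like, the sketch below counts events per source over a historical window and flags sources whose current-day volume deviates sharply from that baseline.  The event tuple format and the deviation threshold are my own assumptions; in practice this logic would live inside your log management tool, not a standalone script.

```python
from collections import Counter

def build_baseline(historical_events):
    """Average daily event count per source over the historical window."""
    totals, days = Counter(), set()
    for day, source in historical_events:          # e.g. ("2016-09-01", "fw01")
        totals[source] += 1
        days.add(day)
    return {src: count / len(days) for src, count in totals.items()}

def flag_anomalies(todays_events, baseline, factor=3.0):
    """Flag sources whose volume today exceeds the baseline by `factor`."""
    today = Counter(source for _, source in todays_events)
    return {src: count for src, count in today.items()
            if count > factor * baseline.get(src, 0.0)}

baseline = build_baseline([("2016-09-01", "fw01"), ("2016-09-02", "fw01")])
print(flag_anomalies([("2016-09-03", "fw01")] * 10, baseline))   # {'fw01': 10}
```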

The most glaring omission in this document is a reference on page 27 to Appendix B that promises to provide readers with a list of “sample tools” for normalizing log data.  But guess what?  There is no Appendix B.  Hopefully the Council will issue a revision to this document and add Appendix B to that future revision.

If I had to ding this information supplement for anything, it is the downplaying of the need to have one or more tools to collect and analyze the log data.  The Council obviously has to walk a fine line on this, but let us be honest.  Anyone who believes that log monitoring and analysis can be done without one or more commercial tools is kidding themselves.  Yes, a person can put up a centralized log collection server and then parse that collection of data with tools such as grep, Perl or Python.  But again, let us be honest, no one has the time to write all of the necessary scripts or search patterns to find all of the conditions you need to find.
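To illustrate the point, the kind of one-off script people write looks like the sketch below: it catches exactly one condition (repeated authentication failures) in exactly one log format.  Multiply that by every log source, format and attack pattern you care about and the maintenance burden becomes obvious.  The log path, message format and alert threshold here are assumptions for the example.

```python
import re
from collections import Counter

# Matches a common syslog-style SSH failure message (assumed format).
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

failures = Counter()
with open("/var/log/auth.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = FAILED_LOGIN.search(line)
        if match:
            user, source_ip = match.groups()
            failures[(user, source_ip)] += 1

# Alert on anything exceeding an arbitrary threshold -- one condition, one format.
for (user, source_ip), count in failures.items():
    if count >= 10:
        print(f"ALERT: {count} failed logins for {user} from {source_ip}")
```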

It used to be that free or open source tools, Splunk’s free tier and similar offerings among them, were readily available for log gathering and analysis.  However, those days largely disappeared a number of years back, and for serious use only commercial versions are available.  So everyone needs to be prepared to spend a certain amount of money to get a toolset that provides the necessary gathering and analysis capabilities.

I also fault this document for focusing too much on log information normalization.  This is exactly why organizations need to invest in a commercial tool.  This was the big complaint with using open source tools: people found that they got basic functionality, but then had to essentially conduct a full-fledged system development effort to get the analysis and alerting done.  That was the big selling point of commercial tools such as LogRhythm and ArcSight, which come with the basics as well as a whole host of analysis templates, both free and for purchase.

Another ding I have is the minimal number of requirements that the information supplement identifies as being satisfied by log monitoring and alerting.  I can find at least 15 instances where requirements can be met by monitoring and alerting, not just the eight that are identified in the supplement.

Even with these concerns and shortcomings, this is possibly one of the better information supplements issued by the Council and is a must read for everyone.

To get your own copy of the information supplement, go to the PCI SSC Document Library, select ‘Guidance Documents’ and scroll down to the ‘Logging’ banner.



