Archive for the 'Card Brands' Category

05
Apr
20

The Joke That Is SAQ A

This week another outbreak of Magecart was detected in at least 19 eCommerce sites, this time using a new way to obfuscate its code and gather cardholder data (CHD).  As I read through the latest description, it brought SAQ A to mind.

But before I launch into that diatribe, first a little bit of history so that everyone understands why SAQ A even exists.

In the early wild, wild west days of payment card security on the internet, enterprising solution providers were peddling “outsourced” solutions that would “avoid” compliance with the then Visa Cardholder Information Security Program (CISP) and MasterCard Site Data Protection (SDP) compliance efforts.  What they were selling was a solution that used a variety of Web site techniques to keep the CHD away from the merchant’s Web site.  These solutions sold themselves because they took the merchant out of scope of the very onerous Visa and MasterCard security programs.

Then along came the PCI DSS and the self-assessment questionnaires (SAQ).  As part of that process, the Council and the Brands realized that these so-called out of scope solutions were not really “out of scope”.  The result was SAQ A, which covers these outsourced solutions.  For years they had kept their solutions out of the card brands’ compliance programs and now they were included.  SAQ A was a good news, bad news moment for the solution providers.  The bad news was that there was no escaping the fact that their customers were now in scope for PCI compliance.  However, the good news was that, to placate these solution providers who were lobbying loudly for no scope, the Council and Brands minimized the number of requirements in SAQ A to a very, very bare minimum so that these outsourced solutions would not scare their customer bases off due to PCI compliance.

Just for the record: SAQ A is the absolute bare minimum set of requirements any merchant can comply with and be considered PCI compliant.  There is nothing less.

And Now The Jokes – Bad As They Are

The first joke is that SAQ A is the absolute prime example of compliance does not equal security, bar none.

Anyone who thinks compliance with SAQ A keeps their customers’ payments secure is seriously lying to themselves.  Magecart in all of its forms is exhibit number one as to why SAQ A is a joke and should be retired.

I have told my clients since SAQ A was published that if they thought compliance with SAQ A would keep them out of trouble, they should think again.  Yes, SAQ A keeps the processors, banks and brands happy, but it does nothing to manage the risk presented by any web site.  That is because if the code/executable/script on the merchant’s server that invokes the redirect or iFrame is ever tampered with (as with Magecart), it will not be the processor or bank that is held legally responsible; it will be the merchant that operates that web site that is legally on the hook.

That is the second joke of SAQ A.  Merchants think they have pushed the payment card processing risk of their eCommerce operation off to a service provider and they have not.  Unknowingly, they still have a lot of skin in the game.  More than they realize or want to realize.

Yet time and again, I encounter merchants following SAQ A that blindly go about life without regularly patching, maintaining or monitoring their web site because “SAQ A says I do not need to do that”.  All of this is done under the mistaken belief that SAQ A’s requirements create security for that web site, which they do not.  Sadly, I have also encountered a number of merchants over the years that have been caught in the SAQ A trap and found out the hard way the monetary and business costs of believing that SAQ A protected them from bad actors.

SAQ A Is Compliance Not Security

In the last update of the SAQs in 2018, the Council did address a minor shortcoming in SAQ A.  That addition was to require organizations to ensure that their Web server is kept patched for critical vulnerabilities.  However, from a risk perspective for an internet-facing system, that did very little to ensure the security of merchant Web sites used for directing payment processing.

Notably, SAQ A does not require any of the following:

  • Only one major service running, i.e., Web server with eCommerce application.
  • External and internal vulnerability scanning.
  • External and internal penetration testing.
  • Critical file monitoring to identify if the redirect or iFrame invocation method has been tampered with.
  • Logging and monitoring of the Web server and Web applications.

Most information security professionals would still likely consider even these requirements inadequate.  These are all items I recommend to my clients, but even these absolute bare minimum steps for securing a Web server are not required for SAQ A compliance.

As a result, is it any surprise that most information security professionals and most QSAs consider SAQ A worthless for anything other than PCI compliance?  Organizations that truly understand information security also realize that SAQ A is not security and follow SAQ A-EP for ensuring the security of their out of scope Web servers.

The bottom line is that we in the payment security industry need to lobby the PCI SSC, banks and card brands to get rid of SAQ A before even more organizations get hurt.

31
Mar
20

Surprise! PCI Largely Ignores Disaster Recovery

As a lot of people found out when they rolled out work from home (WFH) for COVID-19, PCI DSS compliance was not in place and, in some cases, not even possible.  If the PCI Dream Team session last week is any example, I think a lot of QSAs got calls and emails from their clients asking what the Hell had happened.

What we are all experiencing is a shortcoming of the PCI DSS and a point that has been discussed off and on since it came into existence.  The problem is that the PCI DSS does not worry about business continuity planning (BCP) or disaster recovery (DR) unless there is cardholder data (CHD) or sensitive authentication data (SAD) involved, which typically only occurs when that data exists at hot/warm recovery sites.  If the CHD/SAD is only there when recovery is occurring, then those locations are out of scope.

Unfortunately, the COVID-19 event is not your normal “disaster”.  With the invocation of WFH, production data centers and offices were still fully operational which is typically the concern of BCP/DR.  What changed was that the government enacted stay at home or shelter in place orders and the recovery site became an employee’s home, not your expected recovery location.  Worse, since WFH would not involve CHD/SAD until it was invoked, QSAs had no reason to assess any plans for such recovery because they were not in-scope until activated.

That said, a lot of QSAs (myself included) usually did have a discussion with clients that, while their BCP/DR plans were not in-scope, clients should occasionally assess those plans to make sure that, when invoked, the plan maintained PCI compliance.  Because when the BCP/DR was invoked, whatever was done had to be PCI compliant the moment it was used.

And there is the rub in all of this.  There is no grace period with PCI compliance because of the invocation of BCP/DR.  You are expected to be 100% PCI compliant regardless.

There are a number of lessons learned due to the COVID-19 disaster.

  • BCP/DR will need to be updated for pandemic incidents including WFH capabilities. According to the World Health Organization (WHO) and the Centers for Disease Control (CDC), pandemics are not going to go away, so the ability to continue operations remotely is going to have to be part of an organization’s recovery options.
  • WFH capabilities will have to be incorporated into normal production capabilities. This effort will need to address PCI compliance as well as HIPAA, CCPA, GDPR and any other relevant security and privacy programs your organization may have to comply with.  I have had a number of organizations enact such capabilities, policies and procedures for various parts of their operations over the years due to changing work requirements of their personnel as WFH offers a number of flexible advantages for retaining employees.
  • Implement virtual desktop infrastructure (VDI) to provide office as well as remote working capabilities. This can also potentially allow WFH employees to use their own devices (BYOD), as long as those devices do not themselves have to be PCI compliant.  VDI also allows thin clients and Chromebooks to be used as workstations, making security of those workstations a bit easier as well as reducing cost.
  • Implement P2PE or end-to-end encryption (E2EE) solutions for the entry of CHD/SAD into applications. There are a number of USB and Bluetooth options available from the various point of interaction (POI) vendors such as Verifone and Ingenico as well as other third-party application vendors.
  • Softphones create a larger scope by bringing the workstation they connect to into full scope for PCI compliance. The number of requirements that need to be assessed can be reduced by using VDI and connecting the softphone to the VDI through the workstation.  Making such a connection though is not “plug and play”, so be prepared to have to work a lot with the VDI vendor to make that connection work.  But do not be surprised if it does not provide a reliable and/or clear connection, so make sure you are prepared to have to place the full workstation into scope to have an acceptable working solution.
  • If you are expecting to use The Cloud for your VDI or application, make sure that you conduct appropriate capacity planning so that you are not caught without the ability to expand that solution due to WFH. A lot of organizations have found with the COVID-19 event that their Cloud implementation did not scale as they thought it would.  It turned out that Cloud providers have capacity limitations just like in-house data centers.  While you are not using that capacity all of the time, you need to reserve it for those instances when you need it.  While not free, that excess capacity can be reserved for such events as COVID-19 so that when you need it, it is available.

23
Mar
20

Work From Home PCI Considerations

The PCI Guru got a question from a blog reader regarding PCI compliance for service providers in today’s emergency work from home (WFH) environment, and it got The PCI Dream Team thinking about how that works.  So, thanks to David Mundhenk of Herjavec Group, Art “Coop” Cooper of NuArx, Ben Rothke of Tapad and Jeff Hall of Online Business Systems for contributing to this list.

Ben & David wrote a piece on the topic last week, and the Dream Team has a webinar on Dealing with PCI DSS Compliance During the COVID-19 Crisis on March 25.

Thanks to the Coronavirus crisis, organizations are now scrambling to get their employees working from home.  This is presenting a whole new series of challenges to their compliance, technology and information security teams as these employees are now operating in a potentially less secure and definitely less private environment.

Home networks are going to be less controlled and secure.  Making matters worse is that most home networks today are Wi-Fi based, not wired, so data is flowing over untrusted networks because everyone in the house knows the Wi-Fi password (assuming there is one and it is not the default).

Bring Your Own Device (BYOD)

The biggest issue we are encountering is those organizations that need to rely on workstations owned by employees because they do not have company-owned and configured equipment to provide.  I have seen many a Tweet and LinkedIn post discussing the shortages of equipment for work from home and asking what options are available.  The problem stems from most business continuity plans focusing on events that affect a business location or a community, not the entire country.  As a result, the idea of a pandemic forcing people to work from home was not thought of as a realistic threat.

As a result, bring your own device (BYOD) is the only answer in the near term to getting people working from home.  In discussions not only amongst the Dream Team but with other QSAs, there just do not seem to be any good answers for using BYOD and maintaining PCI compliance.  None of us can come up with ways to maintain compliance with BYOD because there are just too many factors involved: anti-virus (many varieties), limited or non-existent central monitoring and management, vulnerability scanning, penetration testing, patching, differing hardware, differing operating systems and a host of other issues that make it impossible to verify compliance, let alone maintain it.

One potential option to reduce risk and gain better control with BYOD is using virtual desktop infrastructure (VDI) solutions such as Citrix Workspace, VMware Horizon or Windows Remote Desktop Services.  If you have that infrastructure in place, then we would recommend expanding it for WFH remote access.  If you do not have that infrastructure, you may be able to use Amazon Web Services (AWS), Microsoft Azure, Google Cloud or similar cloud environments to stand it up quickly.  That would allow you to reduce the risk of the BYOD being used but there still would be the risk of memory scraping and keyboard logging on the BYOD that must be managed.

That is not to say that you should not use BYOD as you need to keep your business running.  What it does mean is that you need to have a serious talk with your acquirer to determine how to handle this situation and what the risks are to your solution, and then communicate the results of that discussion formally to your QSA.  You may even want to have your QSA on those calls with your acquirer to assist.  In these desperate times, I doubt that any bank is going to say you cannot stay in business as long as you can provide some controls and do your best to manage the risk.

We view BYOD, though, as a short-term solution; a longer-term solution needs to be developed, as current estimates seem to indicate that the crisis will likely extend past the original estimate of four weeks.  That longer-term solution would involve acquiring the necessary hardware and software to implement a managed, secured and controlled environment that can be tested to ensure PCI compliance.

Company Provided Hardware and Software

For those companies that do have the equipment to send home with their employees, this is not a complete list, but a set of bullet points of ideas for how to address PCI compliance in our “new normal”.

  • The easiest topic is remote access. The PCI DSS explicitly calls out that a secure (i.e., encrypted) VPN with multi-factor authentication (MFA) is required for users to obtain access to the service provider network (8.3.2.a).  But where things can go sideways is complying with 8.3.1.a, which requires MFA for non-console access to systems/devices in the cardholder data environment (CDE).  It goes awry because people think that the first MFA covers the MFA into the CDE, and it does not.  The reason is that 8.3.1.a was designed to stop the phishing of Administrators to gain access to the CDE.  To stop that, you need additional MFA to access the CDE.  That does not mean a separate MFA solution (which would be ideal), but it does mean enforcing a delay in the single MFA so that the same MFA code cannot be used to access the internal network and then also the CDE.  A lot of organizations implement the delay in their remote logon script by invoking a timer delay that expires after the longest time a code can be active (usually 30 seconds); a sketch of that idea appears after this list.
  • A secure VPN is necessary to remove the home network from scope. Ideally, the VPN should be required to always be in use so that the workstation cannot get to the internet other than over the VPN.  For those that do allow the home network and internet to be accessible, you will need to ensure that the firewall on the workstation appropriately protects the workstation as well as implement a host intrusion detection solution (HIDS).
  • VDI is also a solution because it allows thin clients and devices such as Chromebooks to be used to connect. Most VDI solutions also embed a secure remote connection via HTTPS or similar secure connectivity solutions.  If not, then you will need to use a secure VPN as documented above.  However, even a thin client runs the risk of memory scraping and keyboard logging, so you need to manage those risks.
  • Review all automated workflows to make sure that they are producing the necessary audit trails that will provide evidence for your PCI assessment of what is happening. Where this becomes problematic is when the workflows were developed only for PCI compliance and, with the changes for remote operations, those workflows are not picking up new users, devices and other changes that were made to allow for remote work.
  • People that typically work together but now are remote will start using Microsoft Teams, Slack, Skype and other collaboration platforms to communicate, and that may include sharing cardholder data (CHD) or sensitive authentication data (SAD) at times. You will need to train people and quickly remediate situations if CHD/SAD enters these applications, as well as periodically remind them that the use of these communication systems for transmitting SAD/CHD is not allowed.  If possible, enable data loss prevention (DLP) or similar capabilities to identify and then redact SAD/CHD in any of these communications (a sketch of that kind of check appears after this list).
  • If you are pushing out call center operations, remember that softphones will bring the workstation they connect to into scope for PCI compliance because the workstation is now directly connected to the CDE which is, of course, the VoIP telephone system. That means an increase in scope and that those workstations need to be hardened, managed, logged and controlled for PCI compliance.  Call center operations may also require additional network segmentation to be put in place to ensure the size of your CDE does not exponentially grow.
  • While not entirely PCI related, there are some other remote call center issues to consider that could affect compliance with contractual obligations regarding the privacy and confidentiality of data discussed or processed by the operator. You may need to supply operators with shredders, printers, additional monitors and other equipment to ensure privacy and productivity.  You may also have to instruct people to locate their work area in a bedroom or other room where a door can isolate the operator while they work so that family members do not come into contact with information or documents they should not view.
  • Ensure that you have ways to document changes happening, their review and approval. A lot of organizations have paper forms, Excel spreadsheets, email forms, etc. they use that sometimes can get lost in people’s inboxes, archives and folders or just lost, period.  You need to make sure that the change management system will work in the remote mode and that change evidence, reviews and approvals are maintained.
  • Logging should not be an issue unless your organization was not logging the VPN or other devices because they were not in scope for PCI compliance but now are in scope. So, you need to review your new workflows to ensure all devices and systems are logging to your SIEM or logging solution so that you comply with PCI requirement 10.
  • Encryption key management could become an issue particularly if your process does not support remote management. This can happen with some hardware security modules (HSM) and systems that require that the key custodians physically input their seed values into the device’s console.  So, going on-site may be required for encryption key changes and that may require formal approval from local authorities to occur.
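
To make the timer-delay idea in the remote access bullet above a little more concrete, here is a minimal sketch of the concept.  The helper names are mine and purely hypothetical; the only assumptions are a 30 second one-time code window and that the second challenge for CDE access happens after the delay.

```python
# Minimal sketch (my illustration, hypothetical helpers).  The idea from the remote access
# bullet: after the VPN MFA succeeds, wait out the remainder of the current one-time code
# window so the same code cannot also be replayed for non-console access into the CDE.
import time

TOTP_WINDOW_SECONDS = 30  # longest time a single one-time code remains valid (assumed)

def delay_until_next_otp_window() -> None:
    """Sleep until the current one-time code window has expired."""
    seconds_left = TOTP_WINDOW_SECONDS - (time.time() % TOTP_WINDOW_SECONDS)
    time.sleep(seconds_left)

def logon_sequence() -> None:
    # 1. User authenticates to the VPN with MFA (handled by the VPN concentrator).
    # 2. Before allowing the jump into the CDE, force a fresh code to be required.
    delay_until_next_otp_window()
    # 3. Prompt for and verify a new one-time code for CDE access (hypothetical helpers):
    # code = prompt_for_otp()
    # verify_otp(code)
```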

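And for the collaboration platform bullet, here is the kind of check a DLP rule typically performs when looking for PANs in chat messages: find likely card number sequences and confirm them with the Luhn check before redacting.  This is my own illustration, not the rule syntax of any particular DLP product.

```python
# Minimal sketch of PAN detection/redaction for chat messages: find 13-19 digit sequences
# and confirm them with the Luhn check before redacting.  Illustration only.
import re

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def redact_probable_pans(message: str) -> str:
    """Replace likely PANs in a chat message with a redaction marker."""
    def _replace(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group(0))
        return "[REDACTED PAN]" if luhn_valid(digits) else match.group(0)
    return PAN_CANDIDATE.sub(_replace, message)

# Example: redact_probable_pans("card is 4111 1111 1111 1111 exp 12/24")
```
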
These are the top of mind ideas that we were able to come up with for this discussion.  However, every environment is different, so not everything discussed may be possible for your organization to use and maintain compliance.  We would recommend you work with a QSA to make sure that what you are attempting to do is not creating risks you are unwilling to accept or cannot manage appropriately.

We wish all of you the best of luck during this crisis.  We will get through this, but it will likely take some ingenuity in how that happens.

Also, be aware that the Council and the Card Brands are working on this topic as well and I expect more from them in the coming weeks.

Stay safe and healthy.

 

Other WFH resources:

CISA Coronavirus Guidance – https://www.cisa.gov/coronavirus

NIST Teleworking Guidance – https://csrc.nist.gov/publications/detail/sp/800-46/rev-2/final

SANS Work From Home Webcast – https://www.sans.org/webcasts/archive/2020

08
Dec
19

Are You A Level 2 Merchant? Beware The MasterCard Trap

I had a discussion with a prospective client and, as these discussions usually go, we needed to determine their merchant level.  As it turned out, they were confused about the differences between Level 3 and Level 4, and their bank was just as confused.  The merchant had a 2 to 1 advantage in Visa transactions (around 800K) over MasterCard and, in total, had more than one million transactions across all card brands.

When their bank couldn’t decide their merchant level, the bank referred them to Visa since the bank was affiliated with Visa.  Visa informed the merchant that they were considering them a Level 2 merchant because of the high volume of eCommerce transactions (80%+) and their total transaction count for all payment cards (around 1.3M).

With this information in hand I said, “Well, it looks like you’ll be doing a ROC.”

The CFO at the other end of the WebEx exclaimed, “Say what!  Why do we need to do a ROC?  The standard says we can do a self-assessment!”

Sadly, another merchant gets caught flatfooted by the card brand rules.  People think that the PCI DSS and other PCI standards are all they have to worry about for card payment compliance.  However, the card brands (i.e., Visa, MasterCard, American Express, Discover and JCB) also have their own security programs in addition to the PCI standards, and those also need to be followed.  Think that is not the case?  That Merchant Agreement from the bank that someone in the merchant’s organization signed calls out that not only must the PCI standards be followed, but so must the rules of the card brands the merchant has agreed to accept for payment (almost always Visa and MasterCard along with one or more of the others).

One of those “quirks” in the card brands’ programs that comes up is this one regarding Level 2 merchants and MasterCard.

The first thing everyone needs to remember is that if a merchant is at a certain merchant level for one card brand, they are at that merchant level for ALL the card brands.  The second thing to remember about merchant levels is that any of the card brands can set the merchant level for a merchant regardless of transaction volume.  I have had merchants end up as a Level 1 merchant with fewer than 30K transactions all because the dollar value per transaction was extremely high as with business to business (B2B) transactions.

With that information, a merchant needs to go to the Web sites of the card brands they accept and review their rules.  If you go to the MasterCard Web site to the page titled ‘What merchants need to know about securing transactions’ and scroll down to the merchant level requirements for Level 2, you will see footnote 3 next to the requirement “Onsite Assessment at Merchant Discretion”.  That footnote states the following:

“Level 2 merchants that choose to complete an annual self-assessment questionnaire must ensure that staff engaged in the self-assessment attend PCI SSC ISA Training and pass the associated accreditation program annually in order to continue the option of self-assessment for compliance validation. Alternatively, Level 2 merchants may, at their own discretion, complete an annual onsite assessment conducted by a PCI SSC approved Qualified Security Assessor (QSA) rather than complete an annual self-assessment questionnaire.”

For an organization to get an employee trained as an ISA, you need an employee with a background in compliance and technology.  Typically, this would be someone in the internal audit department, which a lot of Level 2 organizations do not have or, if they do, its people do not have the time to take on PCI. Then there is the cost, which is $3,100 USD plus travel expenses, since most ISA training is not done locally unless you are lucky. And finally, there is the employee retention issue after such an investment.

In the end, most Level 2 organizations do not see the cost benefit of training one of their employees to be an ISA in order to do an SAQ.  That is why I made my comment about Level 2 merchants doing a ROC.

Oh, and for the record, the PCI standards do not dictate which organizations can fill out a self-assessment questionnaire (SAQ) and which fill out a Report On Compliance (ROC).  The card brands dictate that based on merchant and service provider levels.  In this case, MasterCard has its own ideas in that regard when it comes to Level 2 merchants.

21
May
19

An Inadvertent Service Provider

A discussion came up on the last PCI Dream Team session regarding situations at universities that have bookstores and cafeterias operated by third parties on their networks and those vendors processing payment card transactions.  QSAs encounter this situation not only at universities and colleges, but also with hospitals, health clinics and large corporations.

The Situation

As organizations focus on customer and employee perks, QSAs encounter third parties operating business outlets within a variety of organizations.  These businesses include coffee shops, convenience stores, dry cleaners, bookstores, restaurants, cafeterias, parking ramps, travel agencies, pharmacies, health clubs and a whole host of other businesses.  Of course, all of these third parties accept payment cards for their services and need a way to process those cards.  Organizations offering these perks have existing wired and wireless infrastructure that gets leveraged to connect these third parties to the internet and their payment processors, thus bringing that network and everything attached to it into scope for PCI compliance.

As a result, this situation creates a PCI compliance problem because the organization is now a service provider as well as a merchant.  The organization thought that by outsourcing these businesses it was reducing PCI scope, not increasing it.  But scope increases because, as a service provider, the organization must provide each of these third parties with a Service Provider Attestation Of Compliance (AOC) for that network connectivity.

But it can and does get worse.  I have encountered situations where the outsourcing organization provides help desk, firewalls and other support services for these third parties, further complicating their PCI compliance responsibilities.

What Do You Do? Option 1 – Get Out Of Scope

There are some ways to get out of scope, but these can be complex and/or expensive.

The first way to get out of scope is to force all of your third parties to get their own network connectivity from their own internet service provider (ISP).  The problem with this is that an ISP will likely have to run wire into your facilities to make those connections.  That can be disruptive as well as expensive and complicated due to locations within existing buildings.  And what if each business wants their own ISP because of a contract relationship?  That will mean multiple ISPs tearing up your facilities.  Not necessarily the best situation.

The most extreme solution to get out of scope is for the outsourcing organization to implement carrier equipment and become a “carrier” to these third parties.  I have had a few clients go down this road, but it is not cheap and can also be more trouble than it is worth.  However, for a university or large hospital/clinic complex with lots of third parties, this solution can actually be a cheaper route to implement and operate.

But the beauty of these solutions is that your organization is totally out of scope so there are no service provider PCI assessment requirements.

What Do You Do? Option 2 – Reduce Scope

There are also a couple of ways to reduce scope.  But reducing scope requires at a minimum the creation of a Service Provider SAQ D and AOC.

The quickest and easiest way to reduce scope is for the outsourcing organization to implement end-to-end encryption between the third party’s connection and the internet.  However, this adds the requirements in section 4 to the assessment as well as keeping the endpoints in scope for PCI compliance.

Another option to reduce scope is to require these third parties to implement encryption from their operation to anyone outside of the outsourcing organization.  While this seems simple, it usually never is.  If that encryption is ever stopped (most times without your knowledge), the outsourcing organization’s network is back in scope.  Typically, when this gets brought up as a solution, a lot of the third parties balk or say they do not know how to encrypt their connections.  Then there is the complexity of proving that the outsourcing organization does not have the encryption keys and that every third party connection is actually encrypted.  It ends up being more trouble than it is worth.

The only good news about reduced scope is that you only need to fill out a Service Provider SAQ D and AOC, because you have no idea of the transaction volumes being processed by any of these third parties.  That said, it is additional paperwork that needs to be filled out annually and given to all of your third parties.

Heaven help you though if you offer firewall, help desk and other support services in addition to connectivity.  Those just complicate your compliance and reporting efforts.  All I can say is, if you can stop offering those services, stop.  If you cannot stop those services, then be prepared to document and report on the PCI compliance of each of those services.  That can be done in a single assessment, but the AOC must cover each of those services provided individually in a separate section 2g.

Also keep in mind that if some of those services give your organization insight into the number of transactions processed by your third parties (for example, you provide payment processing under one or more of your own merchant identifiers), you may end up having to conduct a Service Provider Report On Compliance (ROC) because the transaction volume exceeds one of the card brands’ annual service provider thresholds.

There you have it on third parties and their payments on your network.

22
Apr
19

More On The NIST Password Standard

Apparently, I touched a nerve with my post on the National Institute of Standards and Technology (NIST) password standards discussed in Special Publication (SP) 800-63B.  As a result, I thought I would walk you through my logic by using a compensating control worksheet (CCW) approach, since this is what you will have to do for your PCI assessment if you choose to rely on the NIST guidance.

[SPOILER ALERT: It is possible, but I doubt it is worth all the effort.]

First, let us review what a CCW needs in order to comply with the Council’s requirements, from Appendix B of the Report On Compliance (ROC) Reporting Template:

“Compensating controls must satisfy the following criteria:

  1. Meet the intent and rigor of the original PCI DSS requirement.

  2. Provide a similar level of defense as the original PCI DSS requirement, such that the compensating control sufficiently offsets the risk that the original PCI DSS requirement was designed to defend against. (See Guidance Column for the intent of each PCI DSS requirement.)

  3. Be “above and beyond” other PCI DSS requirements. (Simply being in compliance with other PCI DSS requirements is not a compensating control.)

  4. Be commensurate with the additional risk imposed by not adhering to the PCI DSS requirement.”

QSAs can get stuck on the third point, and the Council seems to focus on that point in their Assessor Quality Management (AQM) reviews because QSAs miss it so often.  However, the other three are also very important to apply to the compensating controls being discussed.

Now let us focus on section 4 of the CCW, where the organization being assessed is required to describe the controls they have in place that go “above and beyond” the requirement being compensated for, which in this case is requirement 8.2.4, requiring password changes every 90 days or less.  I picked that requirement because it is the one most often cited by clients as the reason they want to use the NIST standard.  Most want to go to a 12-month password change interval.  These controls are going to come from pages 13 through 15 of SP 800-63B.

  • All passwords are required to be [value greater than eight] characters or greater in length.
  • When passwords are modified, they are assessed against [name of credential verification source/service], [name of dictionary word list used], repetitive or sequential characters and context specific words are checked and rejected if found.
  • Authentication is only conducted using [encrypted authentication protocol(s)].
  • Passwords are hashed and salted for storage using [hash algorithm and appropriate salting technique].
  • [Name of password vault solution] is used to securely store and generate strong passwords that meet the aforementioned criteria.
  • A password strength meter is provided to assess the password against these aforementioned criteria to indicate to the user when they have met all of the criteria.

To comply with the NIST guidelines for passwords, an organization needs to implement all of these controls.
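
As a rough illustration only, the screening controls in that list boil down to something like the following.  The word list file name, the 12 character threshold and the context-specific words are placeholders I made up, not anything prescribed by NIST or the PCI DSS.

```python
# Minimal sketch of the screening controls listed above (length, dictionary/breached-word
# list, repetitive or sequential characters, context-specific words).  The file name,
# threshold and context words are assumptions for illustration only.
import re

MIN_LENGTH = 12
CONTEXT_WORDS = {"acme", "acmecorp", "pos"}   # hypothetical organization-specific words

def load_word_list(path: str = "banned-passwords.txt") -> set:
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def has_sequential_run(pw: str, run: int = 4) -> bool:
    """True if pw contains an ascending run of `run` consecutive characters (abcd, 1234)."""
    return any(all(ord(pw[i + j + 1]) - ord(pw[i + j]) == 1 for j in range(run - 1))
               for i in range(len(pw) - run + 1))

def screen_password(pw: str, banned: set) -> list:
    """Return a list of reasons the password fails; an empty list means it passes."""
    reasons = []
    low = pw.lower()
    if len(pw) < MIN_LENGTH:
        reasons.append(f"shorter than {MIN_LENGTH} characters")
    if low in banned:
        reasons.append("appears in the banned/breached word list")
    if re.search(r"(.)\1{3,}", pw):
        reasons.append("contains a repeated character run")
    if has_sequential_run(pw):
        reasons.append("contains a sequential character run")
    if any(word in low for word in CONTEXT_WORDS):
        reasons.append("contains a context-specific word")
    return reasons
```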

So how do they match up with the four criteria for a CCW?

Above and Beyond

This is the easiest one to tackle because almost all of the controls are above and beyond.  What?  Almost?

There are a couple of controls that do not meet the above and beyond test.

The first is the easiest to discuss and that is “Authentication is only conducted using [encrypted authentication protocol(s)]”.  That control does not pass above and beyond because requirement 8.2.1 already requires that credentials be rendered unreadable with strong cryptography during transmission.  As such, that control cannot be relied upon in the CCW and must be removed.

The second one is the “Passwords are hashed and salted for storage using [hash algorithm and appropriate salting technique]” control.  This discussion gets sticky because requirement 8.2.1 states that storage of credentials must also use strong cryptography, which is not very specific.  I would argue that any sort of reasonable response here would already be required by requirement 8.2.1 and therefore this control would also be ineligible to be used.

Only the password length is specified by the PCI DSS and as long as a value greater than eight is picked, that meets above and beyond.  However, we need to discuss this value further under intent and rigor.

All of the remaining controls are not specified in the PCI DSS, so those are all considered above and beyond.

Intent and Rigor

For intent and rigor, we need to look to the guidance provided for requirement 8.2.4.

“Passwords/passphrases that are valid for a long time without a change provide malicious individuals with more time to work on breaking the password/phrase.”

Remember, we are looking at a 12 month password change interval, so we need to consider intent and rigor in the context of controls that will allow a password to remain unchanged for 12 months.

So let us look at the length attribute again.  Nine characters in today’s world without any complexity requirements can result in passwords that can be cracked in minutes.  Ten characters can be done in hours.  Only when we get to 12 characters and above does the estimated time to crack reach 12 months or more.  As such, I would argue that you need passwords 12 characters long or greater to pass the rigor requirement for justifying a 12 month change interval.
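
For those that want to see the arithmetic behind those statements, it is simply keyspace divided by guess rate.  The guess rate below is an assumed round number for illustration; actual cracking rates vary enormously with the hashing algorithm and the attacker’s hardware.

```python
# Rough keyspace arithmetic only (time = alphabet_size ** length / guess_rate), not a
# benchmark.  The 1e10 guesses/second rate and the alphabet sizes are assumptions.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(alphabet_size: int, length: int,
                     guesses_per_second: float = 1e10) -> float:
    return (alphabet_size ** length) / guesses_per_second / SECONDS_PER_YEAR

# 9 lowercase-only characters vs. 12 characters drawn from a full 94-character set:
print(years_to_exhaust(26, 9))    # a tiny fraction of a year (minutes of cracking time)
print(years_to_exhaust(94, 12))   # many orders of magnitude beyond a 12-month interval
```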

Passwords are assessed against a dictionary word list, context specific words and repetitive/sequential characters.  The key to this part of the second bullet is the extent of the dictionary word list.  The dictionary needs to be sufficiently large to provide the control that NIST desires.  The QSA is going to need to know how large the dictionary is, what is used as a reference to ensure that the dictionary has the appropriate words in its list and how often the dictionary is updated.  That would likely mean that these controls would need to be separated from the credential breach service control so that those additional controls can be documented in the CCW.  This would all have to be backed up by a proper risk assessment that documents that the review and update intervals of the dictionary are appropriate and mitigate the risks.

Passwords being assessed against some credential breach source/service introduces an interesting twist to ensuring the security of a password.  But it also introduces an interesting discussion about the intent of requirement 8.2.4, which is to ensure the security of credentials.  NIST only requires that credentials be tested at the point they are changed.  But what happens if, sometime during the 12 month interval, those credentials are compromised?  The intent of requiring a 90 day change interval was to reduce the risk of credentials remaining compromised for an extended length of time by changing one of those credentials at least every 90 days.

But NIST does not require monitoring of the credentials other than when they change.  Without constant monitoring of the credentials from a compromise service, how do you know when they need to be changed which is the intent of the change interval?

The PCI DSS does provide a bit of guidance on how the Council would likely approach this issue.  For reference I point you to requirement 3.6.5 which discusses this in regard to encryption keys that are suspected to have been compromised.  The reason I believe this is relevant here is that the PCI DSS does not require specific change intervals for encryption keys.  I would argue that the PCI DSS would view passwords changing at long intervals as requiring the same sort of control.  If the credentials are ever suspected of being compromised, then they should be changed.

Which brings up an interesting dilemma.  How do you monitor something that you have hashed and cannot recover?  Do we really want to have encrypted passwords in our authentication systems so that we can monitor them for compromise?  I seriously doubt that would be a good practice.

So with that said, we would need some sort of monitoring and alerting capability to warn if credentials do appear to be compromised such as monitoring for excessive logons, logons when the user is out of the office, logons from systems outside of the user’s area or building or other characteristics that would provide some sort of indication of credential compromise.  These controls would have to be added to the monitoring of the credential breach source to show that the credentials are changed when suspected of being compromised.
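
As a minimal sketch of what that kind of monitoring looks like (my illustration, not the rule syntax of any SIEM product), the checks amount to counting failures and flagging logons outside of expected hours or locations.  The event layout and the thresholds are assumptions.

```python
# Minimal sketch of the compromise indicators described above (excessive failed logons,
# logons outside business hours, logons from unexpected locations).  The event dictionary
# layout and thresholds are assumptions for illustration only.
from collections import Counter

FAILED_LOGON_THRESHOLD = 10
BUSINESS_HOURS = range(7, 19)          # 07:00-18:59 local time (assumed)
EXPECTED_SUBNET_PREFIX = "10.20."      # hypothetical subnet for the user's building

def flag_suspicious(events: list) -> list:
    """events: [{'user': str, 'timestamp': datetime, 'source_ip': str, 'success': bool}]"""
    alerts = []
    failures = Counter(e["user"] for e in events if not e["success"])
    for user, count in failures.items():
        if count >= FAILED_LOGON_THRESHOLD:
            alerts.append(f"{user}: {count} failed logons")
    for e in events:
        if e["success"] and e["timestamp"].hour not in BUSINESS_HOURS:
            alerts.append(f"{e['user']}: logon at {e['timestamp']} outside business hours")
        if e["success"] and not e["source_ip"].startswith(EXPECTED_SUBNET_PREFIX):
            alerts.append(f"{e['user']}: logon from unexpected source {e['source_ip']}")
    return alerts
```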

Similar Level of Defense and Be Commensurate

At this point, I think we have covered these two requirements for a CCW with our discussions about above and beyond and intent and rigor.

Where Are We With The CCW Controls?

Based on our discussion, here is what I think section 4 of the CCW would now have to look like.

  • All passwords are required to be [value of 12+] characters or greater in length.
  • When passwords are modified, they are assessed against [name of credential verification source/service]
  • Passwords are monitored for excessive logons, excessive failed logon attempts, logons when the user is out of the office and logons that occur from systems outside of the user’s area or building to provide an indication of credential compromise.
  • When passwords are modified, [name of dictionary word list/source used], repetitive or sequential characters and context specific words are checked, and the password is rejected if any of these characteristics are found. The dictionary is updated every [month/quarter/six months] and reviewed [semi-annually/annually] to ensure the dictionary contains an appropriate list of words.
  • [Name of password vault solution] is used to securely store and generate strong passwords that meet the aforementioned criteria.
  • A password strength meter is provided to assess the password against these aforementioned criteria to indicate to the user when they have met all of the criteria.

After looking at these controls, I would venture to say it is simpler and easier to meet the PCI DSS requirements than to implement these controls and make them work consistently and effectively.  Because remember, this is just section 4 of the CCW.  For section 5, you have to produce evidence that all of these controls are in place and working as designed.  Never mind section 6 where you explain how you maintain all of these controls.

So for those of you bent on using NIST, there you have it but I doubt it is worth the effort you think it is.  And this does not address the CCWs you will also need to write for 8.2.3 because you no longer enforce complexity and 8.2.5 because you no longer track the last four passwords used.  But those could be another post.  Yeah, I do not think so.  Not worth the effort because those CCWs will revolve around the controls in this one.

As I said in my original post, it might be better to wait for the Council to issue their guidance in v4 of the PCI DSS.

UPDATE: The PCI Council has created an FAQ to address this situation. https://pcissc.secure.force.com/faq/articles/Frequently_Asked_Question/Can-organizations-use-alternative-password-management-methods-to-meet-PCI-DSS-Requirement-8

27
Feb
19

Bank Of America Makes NESA Mandatory

Remember the non-listed encryption solution assessment (NESA)?  Probably not, because it really didn’t get legs.  That is, until now and from an unlikely source – Bank of America (BoA).  QSAs that perform a lot of merchant Reports On Compliance (ROC) that go to BoA have likely noticed that BoA has been scrutinizing those ROCs more than before.

This has been particularly true of ROCs covering end-to-end encryption (E2EE) solutions such as Verifone Verishield or First Data TransArmor where the merchant is asking BoA for scope reduction equivalent to point-to-point encryption (P2PE).  I ran into this three years ago with a client that was implementing TransArmor at their retail stores.  After much negotiation by my client, they were finally granted P2PE scope reduction and their assessment moved on.

However, at the same client this past year, a shock.  BoA told them not so fast on P2PE scope reduction this year.  As the client and their new QSA found out, sometime in 2018 BoA introduced a whole program to deal with E2EE solutions that now requires a P2PE-QSA to assess the solution and produce a NESA report.  Surprise!

What makes this particularly sad and annoying is that First Data and BoA are joint partners in Bank of America Merchant Services (BAMS), the transaction processing arm of BoA.  BAMS relies on First Data solutions such as TransArmor for processing and securing payment transactions.  But guess what?  Think that your TransArmor solution will get a “pass” from BoA because it was recommended by BAMS?  Think again.  BoA is requiring all non-P2PE validated solutions to go through a NESA.  And that is exactly what this client has: TransArmor from First Data, a partner in BAMS.

The lesson here is, be prepared as a QSA to deal with a new issue if you have E2EE, you want P2PE scope reduction and your client’s bank is BoA.

25
Jan
19

Where Is EMV When You Need It?

Juniper Research (not Juniper Networks) issued a report recently stating that card not present (CNP) fraud will reach $130B by 2023.  In response, a lot of people were asking where EMV was to address this issue.  Apparently there are a lot of people that are confused about EMV, including some that are directly involved in PCI compliance.

First a bit of background.

People need to understand that EMV as it is implemented anywhere today was originally developed to eliminate or minimize card present (CP) fraud.  Europe became a hotbed of CP fraud in the early 1990s after the fall of the Iron Curtain.  To address this problem, Europay, MasterCard and Visa (hence the acronym “EMV”) joined forces to develop the standard in an effort to minimize the CP fraud problem in Europe.  EMV was introduced in the UK in 1996 and continued to roll out throughout Europe for the next decade.

Then there is the term “Chip and PIN” that people inadvertently confuse with EMV.  Using an EMV card with a PIN is not a requirement as consumers in the US have discovered.  The term “Chip and PIN” comes from that original UK rollout.  The banks in the UK decided on requiring a cardholder to not only put their card into the card terminal but also to require a personal identification number (i.e., PIN) in order to complete a transaction.  That standard has continued pretty much throughout the world with the exception of the US.

The next key thing to understand about EMV is that it is no more secure than the magnetic stripe it replaced.  I know that fact might shock some people given all of the press EMV has gotten regarding security.  Somewhere along the line, people began to believe that EMV by itself was more secure.  I believe a lot of this misunderstanding was the result of other security technologies that were bundled as countries converted to EMV.

The biggest security feature was the requirement of a PIN for transactions.  A PIN is essentially an implementation of multi-factor authentication (MFA).  The EMV card is the something you have, and the PIN is the something you know, which together are also known as two factor authentication (2FA).  2FA is great for dramatically reducing CP fraud, but it still does not protect the data being transmitted and likely stored by any point of sale (POS) solution.

What came next in the evolution of EMV was the addition of end-to-end encryption (E2EE) between the card terminal or point of interaction (POI) and the transaction gateway or processor.  E2EE encrypts the sensitive authentication data (SAD) transmission from the POI to the processor meaning that any devices or networks between the two cannot access the data unless they have the encryption keys (which they will not if E2EE is properly implemented).

The final security feature that came to EMV was the addition of tokenization.  Tokenization takes the primary account number (PAN) and converts it to a token which can then be returned to the merchant’s POS solution without the worry that it is storing cardholder data (CHD).  Tokenization can either be performed at the POI or by the processor upon completion of a transaction (most common).

With that as the background, I think most readers can start to understand why EMV and its currently used security features are not going to address the CNP fraud problem.  All of the security features we are familiar with require a CP environment, and exactly how does that translate into a CNP environment?  The answer is, they do not translate, at least not easily.

It turns out that we have been here before with EMV although most people are probably not aware of that fact.  Around 2000 to 2002, a few UK banks and a card brand thought about addressing the growing CNP fraud issue with EMV.

In the UK, Barclays and Standard Chartered came up with competing application programming interface (API) standards for eCommerce sites to use.  Both Barclays and Standard Chartered paired their APIs with card readers that connected to PCs.  Their solutions relied on the new EMV cards that were being issued in the UK and used Chip and PIN for conducting transactions.

At around the same time in the US, American Express was rolling out their first iteration of their Blue card.  That card had a chip although it did not conform to the EMV standard.  Customers that were in that Blue rollout also got a handy USB chip reader along with the card.  As with the implementations in the UK, American Express also relied on Chip and PIN for completing transactions.

The idea with all of the schemes was to have consumers connect the reader to their computer and install some software for the reader.  Then when making a purchase online the consumer would insert their EMV card into the reader, key their PIN through the computer’s keyboard and complete the purchase.  No different than in a traditional brick and mortar store.

Unfortunately, there were some issues with all of these approaches.  The largest of these was that the APIs were all different.  As a result, the consumer could not make a secured payment unless the online merchant supported the payment API the consumer had installed on their local PC.  In the case of American Express, they had signed on Amazon as a merchant, but Amazon was a very small but up and coming fish in the eCommerce world at the time.  In the case of the UK, the banks had signed only a few small online UK merchants.  As a result, with no large eCommerce merchants on board, no API gained the critical mass to win out.  The end result was that by 2003 the EMV CNP experiment had effectively died.

To those observant readers, I earlier alluded to the fact that there are other EMV security features that might be useful for addressing CNP fraud.

There are two features in the EMV standard that could be used and those are dynamic PAN and dynamic card verification value (CVV).  These two EMV fields are included in every EMV card but are not currently used.  The reason is that using them would require significant programming on the transaction processor’s end to make them work.  But using them would still require a card reader solution for eCommerce given the cards in circulation today.

Obviously with CNP, what is needed is a solution that would not require a card reader and therefore a standard API.

In the age of mobile applications, it would be relatively easy for an app to provide the dynamic PAN and dynamic CVV for entry into a Web site.  Granted this app would have to communicate with a bank or processor to ensure the generation of valid dynamic values, but it should be no more difficult than what RSA or Symantec do for multifactor authentication.
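
To illustrate that point, a dynamic CVV could be generated the same way authenticator codes are: an HMAC over a time counter shared with the issuer (RFC 6238 style).  This is my sketch of the concept only; it is not the EMV dynamic CVV algorithm, and the shared secret and 60 second window are assumptions.

```python
# Illustration only: a dynamic 3-digit value derived the way TOTP authenticator codes are
# (HMAC over a time counter).  Not the EMV dynamic CVV algorithm; the shared secret and
# the 60-second window are assumptions.
import hashlib
import hmac
import struct
import time

def dynamic_cvv(shared_secret: bytes, window_seconds: int = 60) -> str:
    counter = int(time.time() // window_seconds)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation, as in HOTP
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1000:03d}"                     # three digits, like a CVV2

# The issuer, holding the same secret, recomputes the value for the current (and adjacent)
# windows and accepts the transaction only if the submitted value matches.
```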

Another option would be to provide a browser widget or a whole PC application that would generate the dynamic PAN and dynamic CVV while the user was purchasing items online.

But what about people that do not have smartphones or prefer physical cards?  What immediately came to my mind is something like the FUZE, Edge or Dynamics cards.  While all of these are currently not EMV capable, they are expected to be at some point.  They all come with displays that could easily show the dynamic PAN and dynamic CVV just as a smartphone would.  Unfortunately, all of these electronic cards require a smartphone, but since they need to be charged anyway, they could probably be adapted to be programmed from a PC through a Web site.

The bottom line is that there are solutions to the problem.

30
Nov
18

The New Telephony Information Supplement

In case you missed it, the PCI SSC released their new information supplement on telephony this week.  I served on this Special Interest Group (SIG), so I was involved in its development.  As a result, I thought I would share my thoughts on the new information supplement.

A Bit Of Background

At the start of the SIG a number of participants brought up the fact that the prior Telephony Information Supplement issued in 2011 had basically been ignored by the qualified security assessor (QSA) community and companies being assessed.  A number of QSAs and Participating Organization (PO) representatives explained to Council representatives that they had personally witnessed situations where QSAs ignored voice over IP (VoIP).

That brought about the following response from one of the Council members on the call:

“All QSAs are trained to understand that VoIP is in scope if CHD/SAD [cardholder data/sensitive authentication data] is discussed on any telephone calls.”

The consensus response was that while that is no doubt the case, many participants attested to the fact that they had encountered QSAs ignoring VoIP as being in scope.  Some had witnessed QSAs telling their clients and prospective clients to not worry about VoIP because it will not be in scope.  These same QSAs did worry about the security of call recordings, but they were leaving the rest of telephony out of scope.

That response seemed to send a chill through the Council representatives.  No one identified any particular qualified security assessor companies (QSAC) but the participants made it clear that VoIP was largely being ignored in PCI assessments.  The point was also made that some QSACs were benefiting handsomely in obtaining engagements because of their willingness to ignore VoIP.

But that exchange also identified a shortcoming with today’s telephony solutions.  QSAs and technology people do not seem to understand telephony solutions or appreciate their risks.  Therefore, they do not know where to even start in securing them, and those that make an attempt often find themselves in one or more “rabbit holes”.  As a result, it is easier to ignore these telephony solutions than to try and deal with the intricacies and vagaries of securing them.

There were also brief discussions about the shortcomings of the original information supplement from 2011.  The biggest complaint of which was that it was call center centric and did not reflect what was being implemented in the real world.  Various people explained that the 2011 document did not address call centers operated within corporations on a shared telephony solution with the rest of the business nor was there any useful guidance provided for PCI compliance.

Such configurations obviously complicate the scope of PCI assessments since any device connected to the shared VoIP system and network was therefore in scope (hence why a lot of QSAs ignore VoIP).  As we were to find out, the new version of the information supplement would do only a little to address this complex issue.

Disappointment

Trust me, it was not the SIG’s intent nor the Council’s intent to disappoint people, but I have a feeling that a lot of people will be disappointed with this information supplement.  Not that there are not good ideas and solutions presented; they are just not always fleshed out as well as they should be and do not always represent what actually goes on with a given solution.  The reason for that is that telephony solutions all operate differently when performing various functions such as call forwarding, conference calling and the like.  As a result, providing real guidance depends greatly on how a given solution functions in a particular circumstance.  As we found out a number of times, this issue would come back to bite SIG participants repeatedly.

In my very humble opinion, the latest information supplement is lacking in detailed guidance for a lot of telephony situations, particularly those that are complicated because of how vendors have approached Unified Communications, which is the driving force behind most vendors’ current telephony solutions.  The document points out a lot of scope and security concerns regarding the use of softphones and VoIP, only to leave the reader essentially to their own devices as to how to address those concerns using existing guidance from other information supplements.

That was a point of contention as the information supplement was developed.  There were a number of people that argued that more guidance needed to be provided because the issues are more complicated and nuanced than the supplement leads people to believe.  They wanted more discussion with the card brands about the risks involved so that all parties could come to a consensus over what was acceptable risk and whether there were better ways to address those risks and therefore provide more guidance.  Unfortunately, we were told that there was not enough time to have such discussions, which drove in great part what resulted in the document you now have.

Then there are the threats to VoIP, which seemed to be minimized in discussions.  At one point in a meeting someone stated that VoIP is not an attack vector, so there is no need to worry about it.  That individual was almost immediately reminded that this thinking is how we got into this situation in the first place: people ignored the risks to processing, storing and transmitting payment card data, and then we all had to do a fire drill to secure that information.

Using CVE Details, I was able to identify close to 400 specific threats to VoIP and/or specific VoIP vendor solutions.  Of those, around 250 to 300 appeared to be able to compromise VoIP and, by association, CHD/SAD.  While most had been patched, there were around 20 that had no fix because they were flaws in the protocols themselves (mostly due to UDP streaming).  The bottom line of this research is that while VoIP might not be an active attack vector at this point in time, it is ripe for being turned into one.  Worse, current information security practices have minimal effect on a lot of these attack vectors thanks to UDP.  And if that were not bad enough, in a lot of cases all it takes is a telephone call to start the attack process.
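
For anyone that wants to reproduce that kind of research, the rough sketch below shows the idea.  It queries the NIST NVD API rather than CVE Details (which is what I actually used), and the search keywords are illustrative assumptions, not my exact methodology, so treat it as a starting point only.

# Rough sketch: count published CVEs whose descriptions mention VoIP-related
# terms, using the NIST NVD 2.0 API.  Illustration only - the figures quoted
# above came from CVE Details, and these keywords are assumptions, not the
# exact searches behind those numbers.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def count_cves(keyword: str) -> int:
    """Return how many CVEs the NVD reports for a keyword search."""
    resp = requests.get(NVD_URL,
                        params={"keywordSearch": keyword, "resultsPerPage": 1},
                        timeout=30)
    resp.raise_for_status()
    return resp.json().get("totalResults", 0)

if __name__ == "__main__":
    for term in ("VoIP", "SIP INVITE", "RTP stream"):
        print(f"{term}: {count_cves(term)} CVEs")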

With that as a background, while the new information supplement is a quantum leap above the 2011 information supplement, a lot of participants feel it is still somewhat lacking in guidance.

Telephony Guidance Anger

I can already anticipate the anger that will result from this one particular recommendation on page 55, section E.4 Unified Communications, where it states:

“As a result, entities can find that their only option to minimize the PCI scope of their VoIP environment is to implement multiple instances of in scope VoIP and out of scope VoIP.”

Say what?!?!?

That will burst a huge bubble for a lot of organizations, QSAs and ISAs alike.  The rationale for this statement goes to Unified Communications and how most vendors have approached it.  The telephony system vendors have now so tightly integrated services such as voice, voice mail, facsimile, video, telepresence, instant messaging, email and other communication mediums that it is no longer possible to decouple and move, say, instant messaging or email to a different network segment from the call manager.  As a result, there is no easy way to implement network segmentation around telephony solutions so that some components are in the CDE (Category 1) and others are in Shared Services (Category 2).

Unfortunately, Unified Communications is not the only situation where two telephony solutions will be needed.  Softphones, call centers on common corporate telephony solutions and other telephony features/functions will also create situations where the only way to be PCI compliant will be to implement at least two separate telephony systems.

Speaking of softphones, if you were angry at the first statement, your anger will likely only grow when you read the following on page 24, 5.2.4 Softphones:

“It is important to note that the use of such systems [softphones] to capture payment card account data would bring the workstation and probably the network it is connected to into PCI DSS scope.”

The next paragraph after the quotation points readers to the Network Segmentation Information Supplement for guidance.  Unfortunately, the problem with that guidance is that no matter how you try to segment the network, the softphone application will put the workstation in scope.  No other guidance is provided regarding softphones.  It is not that this was not discussed within the SIG; there was simply no agreement on how to address the subject.  So, what you read in this section is the guidance you get.

One potential solution discussed to minimize scope is to put the softphone in a virtual desktop infrastructure (VDI) instance.  That would put the VDI in the CDE and leave the workstation in Shared Services.  However, the VDI approach can be fraught with compatibility issues and other technical problems that may not reliably deliver telephony service to end users via the softphone.  There is also still some risk of eavesdropping through the end user’s workstation, but it is now limited to memory in the workstation rather than the softphone software, and it can sometimes be addressed with other workstation controls.  This of course assumes that the VDI solution is easier to control, secure and monitor than the physical workstations.  The bottom line is that there are a lot of moving parts that would have to be assessed on a case-by-case basis, so the consensus was that no general, one-size-fits-all recommendation could safely be made about the VDI approach.

Another scope reduction approach is to use “inexpensive” physical SIP phones for handling calls, logically segmented away from the workstation on their own network.  I have a number of clients with agents configured this way to limit telephony scope to just the SIP phone.  But then the agent’s router must support two VLANs, and those VLANs cannot be allowed to access each other.  That is easy to do in a corporate environment but can complicate things for SOHO workers.  Such a solution can drive up networking and equipment costs to an unacceptable level for some organizations, particularly organizations that were looking at softphones to reduce costs.
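
To make that two-VLAN point concrete, here is a minimal sketch of the check involved.  The VLAN numbers, rule structure and function names are hypothetical; in real life you would verify this against the actual router or firewall configuration for each SOHO agent.

# Minimal sketch of the segmentation rule for a SOHO agent with a physical SIP
# phone: the voice VLAN and the workstation VLAN must not be able to reach each
# other.  VLAN IDs and the rule structure are hypothetical examples.
from dataclasses import dataclass

VOICE_VLAN = 110        # hypothetical: SIP phone / telephony traffic
WORKSTATION_VLAN = 120  # hypothetical: agent workstation traffic

@dataclass
class AclRule:
    src_vlan: int
    dst_vlan: int
    action: str  # "permit" or "deny"

def vlans_are_isolated(rules: list[AclRule]) -> bool:
    """True if no rule permits traffic between the voice and workstation VLANs."""
    for rule in rules:
        crosses = {rule.src_vlan, rule.dst_vlan} == {VOICE_VLAN, WORKSTATION_VLAN}
        if crosses and rule.action == "permit":
            return False
    return True

# Example: a single permit rule between the two VLANs breaks the isolation and
# drags the workstation back into scope.
example_rules = [
    AclRule(VOICE_VLAN, 999, "permit"),               # voice VLAN out to the carrier
    AclRule(WORKSTATION_VLAN, VOICE_VLAN, "permit"),  # this is the scope killer
]
print("Isolated:", vlans_are_isolated(example_rules))  # False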

There are plenty of other areas of the information supplement that will generate anger, mainly because, for the first time, the PCI SSC is calling out as in scope for PCI compliance areas that organizations and some QSAs/ISAs treated as, or thought were, out of scope.

Miscellaneous Comments

There are a few more points that I felt should be discussed.

On page 43, 7.2.2 SIP Trunking, the following quote will be of interest.

“As the technology matures, technical boundaries between an organization and SIP Trunk provider may become harder to define. Scoping for these services will therefore require an understanding of how connections are made between the different entities.”

I feel this is already an issue because the boundaries are already blurred.  When you realize that VoIP is predominantly a UDP protocol, there is little you can do from an information security standpoint to protect your telephony system.

First, the carriers will tell you that their SIP demarcation device provides some amount of security for your organization.  Exactly how much “security” that device actually provides is questionable at best.

But speaking of UDP, page 54, E.1 Protocols, Ports and Network states the obvious.

“… the use of UDP may render the detection of malicious content or payload more difficult.”

More difficult?  In some ways, it can be impossible to detect malicious payloads because the traffic is streaming and you want to ensure continuity of the conversation.  This is the biggest security issue with VoIP: because it relies on UDP streaming, VoIP exploits use that stateless streaming to their advantage by embedding the attack in the voice/video stream.
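
To see why, consider what an inspection device actually has to work with.  The sketch below parses the fixed 12 byte RTP header defined by RFC 3550; everything after that header is codec-encoded audio or video, which is exactly the part an attacker can abuse and the part a firewall or IDS cannot meaningfully validate.  The sample packet bytes are invented purely for illustration.

# Sketch: what a monitoring device can actually see in an RTP packet.  The
# fixed 12-byte header (RFC 3550) parses cleanly; the payload is opaque codec
# data.  The sample bytes below are invented for illustration.
import struct

def parse_rtp(packet: bytes) -> dict:
    """Split an RTP packet into its fixed header fields and opaque payload."""
    if len(packet) < 12:
        raise ValueError("too short to be RTP")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # should be 2
        "payload_type": b1 & 0x7F,   # e.g., 0 = PCMU, 8 = PCMA
        "sequence": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,
        "payload": packet[12:],      # codec frames - opaque to inspection
    }

# A fabricated G.711 packet: the header is perfectly well formed, but the
# payload could be anything, including an exploit aimed at the codec or the
# endpoint's RTP stack.
sample = bytes([0x80, 0x00, 0x12, 0x34]) + b"\x00" * 8 + b"\xaa" * 160
fields = parse_rtp(sample)
print(fields["version"], fields["payload_type"], len(fields["payload"]))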

This inevitably brings up the discussion of firewalling your VoIP, because that seems to have been the answer to every other security issue.  While a firewall will provide some amount of control and monitoring of TCP connections, it will do nothing for the UDP streams that VoIP relies upon.

Yet I have actually had some firewall vendor sales people claim that their firewalls are “VoIP aware” and can identify certain “bad” VoIP packets.  I’m not sure exactly how you can identify bad UDP audio/video data streams, but they claim to have some sort of proprietary methods for doing just that.  Of course, when you attempt to drill down on that “proprietary method” you get essentially stonewalled because it is “proprietary”.  I take that as an indication of sales “smoke and mirrors”.

Then there is the solution of encrypting all VoIP traffic.  I have had a number of clients suggest this as the answer to securing telephony.  While encrypting all VoIP traffic minimizes the ability to eavesdrop on calls via the network, it does not remove the risk of eavesdropping via compromised endpoints, which is much greater than the network risk.  Encryption also does not remove the risk of malware injected via the UDP stream, which is where the bulk of the real threats to VoIP lie.  After all of the discussion surrounding encryption, I see only marginal value in encrypting VoIP traffic from a security perspective.

Also, on page 54, E.2 VoIP Attacks and Vulnerabilities you get this statement.

“VoIP equipment and software are susceptible to vulnerabilities that could allow someone with malicious intents to gain access to your network, intercept and gather customer data, or initiate a Denial Of Service attack.”

I cannot tell you how many IT professionals do not realize the risk presented by VoIP and its infrastructure.  They seem to treat it like the PABXs of old that used to be located in basements next to the telephone carrier’s point of presence (POP) at their organization’s facilities.

Granted, we have moved away from the Windows and Linux versions of call managers that were standard fare when VoIP originally came out.  Most of today’s call managers are based on some proprietary derivative of Linux or Unix, stripped down and modified for telephony.  But there are also open source solutions that run on Windows and Linux server editions.  The bottom line, though, is that regardless of what you run, these are still servers, no different from any other servers, and they need to be properly configured and regularly patched just like any other server.
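
As a trivial illustration of treating a call manager like any other server, the sketch below checks for pending operating system patches.  It assumes, purely for illustration, a Debian or Ubuntu based open source call manager where the standard apt tooling is available; most commercial call managers have their own vendor patch processes instead.

# Trivial sketch: check a Linux-based call manager for pending package updates,
# exactly as you would any other server.  Assumes a Debian/Ubuntu style system
# purely for illustration; commercial call managers use vendor patch processes.
import subprocess

def pending_updates() -> list[str]:
    """Return the list of upgradable packages reported by apt."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    lines = result.stdout.splitlines()
    # The first line is the "Listing..." banner; the rest are upgradable packages.
    return [line for line in lines[1:] if line.strip()]

if __name__ == "__main__":
    updates = pending_updates()
    print(f"{len(updates)} packages awaiting patches")
    for line in updates[:10]:
        print(" ", line)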

That is my take on the latest telephony guidance from the Council.  Better than what was produced in 2011 but still lacking in some areas.

21
Nov
18

Requirement 12.8.2

I got a comment a while back about contracts and PCI compliance.  The two requirements that are relevant to this discussion are 12.8.2 and 12.9.  Requirement 12.8.2 applies to all organizations (merchants and service providers) that are being assessed under the PCI DSS.  Requirement 12.9 applies only to service providers.

As usual, the clarifications surrounding these requirements were all provided verbally over the years at various PCI Community Meeting presentations and Q&A sessions.  But the overall gist of these requirements can be readily determined.  It just takes a little bit of effort and looking at more than just the PCI DSS.

Requirement 12.8.2 states:

“Maintain a written agreement that includes an acknowledgement that the service providers are responsible for the security of cardholder data the service providers possess or otherwise store, process or transmit on behalf of the customer, or to the extent that they could impact the security of the customer’s cardholder data environment.

Note: The exact wording of an acknowledgement will depend on the agreement between the two parties, the details of the service being provided, and the responsibilities assigned to each party. The acknowledgement does not have to include the exact wording provided in this requirement.”

The Guidance provided for 12.8.2 states:

“The acknowledgement of the service providers evidences their commitment to maintaining proper security of cardholder data that it obtains from its clients. The extent to which the service provider is responsible for the security of cardholder data will depend on the particular service and the agreement between the provider and assessed entity.

In conjunction with Requirement 12.9, this requirement is intended to promote a consistent level of understanding between parties about their applicable PCI DSS responsibilities. For example, the agreement may include the applicable PCI DSS requirements to be maintained as part of the provided service.”

If things are still not clear enough, it helps to look at the ROC Reporting Template to get clarification.  The tests being conducted for a given requirement usually clear up any confusion regarding what is being expected.  There is only one test for 12.8.2 and it states:

“Describe how written agreements for each service provider were observed to include an acknowledgement by service providers that they will maintain all applicable PCI DSS requirements to the extent the service provider handles, has access to, or otherwise stores, processes, or transmits the customer’s cardholder data or sensitive authentication data, or manages the customer’s cardholder data environment on behalf of a customer.”

The first thing to notice in all of these discussions is that nothing in the PCI DSS states that any organization is required to work with a PCI compliant third party.  None of the requirements in 12.8 specify that an Attestation Of Compliance (AOC) be provided.  A lot of QSAs will argue that requirement 12.8.4 requires it, but if you read the test:

“Describe how it was observed that the entity maintains a program to monitor its service providers’ PCI DSS compliance status at least annually.”

There is nothing in that test that explicitly mandates an AOC in order to monitor third parties.  Granted, an AOC is the easiest way to monitor service provider compliance, but there is nothing explicitly calling it out in this test.
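
For what it is worth, that “program to monitor” can be as simple as a tracked inventory of third parties with the date of the last compliance evidence you received from each of them.  A minimal sketch of the idea follows; the provider names and field names are hypothetical.

# Minimal sketch of a 12.8.4 monitoring program: an inventory of service
# providers with the date of their last compliance evidence (an AOC or other
# proof), flagging anything older than a year.  Names are hypothetical.
from datetime import date, timedelta

providers = [
    {"name": "Example Payment Gateway", "last_evidence": date(2018, 3, 1)},
    {"name": "Example Hosting Co",      "last_evidence": date(2017, 9, 15)},
]

def overdue(entries, as_of=None, max_age_days=365):
    """Return providers whose compliance evidence is older than max_age_days."""
    as_of = as_of or date.today()
    return [p for p in entries
            if as_of - p["last_evidence"] > timedelta(days=max_age_days)]

for p in overdue(providers, as_of=date(2018, 11, 21)):
    print(f"Follow up with {p['name']} - evidence dated {p['last_evidence']}")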

So where does this “requirement” originate?  It comes from the merchant agreements with the card brands, specifically Visa and MasterCard.  They require that their merchants only work with third parties that are PCI compliant and can prove that compliance with a Service Provider AOC.  This is why it is important to read and understand the brands’ merchant agreements and their own security programs.  There are a number of key “requirements” that come from those documents that are just as important as what is in the PCI DSS.  So, read them as well as all of the PCI documents.

Getting back to the PCI DSS, what the Council wants QSAs and ISAs to look for in contracts, master service agreements, addendums and any other legal documents that describe the parties’ legal relationship is some sort of acknowledgement between all parties that they will abide by the PCI DSS and ensure that sensitive authentication data (SAD) and cardholder data (CHD) is kept secure.

Where a lot of QSAs/ISAs go wrong is in demanding that the words “PCI DSS”, “PCI standards” or some other explicit acknowledgement of “PCI” appear somewhere in those documents.  The Council has stated a number of times that explicitly using “PCI DSS”, “PCI standards” or “PCI” anything is not required.  It would be great if such documents did, but a lot of legal documents do not because they either predate the PCI DSS or the lawyers argue it is not necessary.  That is what led to the Note in both requirements.  The key is the last sentence, which explicitly states:

“The acknowledgement does not have to include the exact wording provided in this requirement.”

It is this sentence that the Council always points to when it states that explicit references to PCI, direct or otherwise, are neither necessary nor required.  My advice is, when in doubt, ask your client’s legal counsel for their interpretation of the agreements and whether they feel those agreements cover the PCI responsibilities of the parties involved.

That will lead you to the fact that a lot of legal agreements reference the PCI DSS and PCI standards indirectly through language that obligates the parties to follow and comply with “regulatory or other legal requirements”.  The reason this language works is because “other legal requirements” will drag in the card brand legal agreements for taking and processing card payments.  Every card brand has legal agreements for merchants and service providers that explicitly call out that the customer of the brand will maintain PCI compliance for all relevant PCI standards.

Where this discussion becomes problematic is with service providers that do not directly process, store or transmit SAD/CHD but can still affect the security of payments, such as managed service providers and the like.  Because they are not directly bound by the card brands’ legal agreements, their contracts, while using the same “regulatory or other legal requirements” language, will not necessarily be referencing PCI compliance because they are only indirectly involved.  It is in these cases that I rely on getting a PCI AOC from the service provider, which then provides the additional assurance that the service provider understands they need to be PCI compliant.

It is when I cannot obtain an AOC from a service provider that I then explain to my client that this service provider’s environment needs to be assessed as part of their assessment.  Either that or my client needs to find a new PCI compliant service provider.

What a QSA/ISA needs to be looking for in a service provider’s AOC is a couple of things (actually, there are a lot of things, but these are the most important).

First, you need to ensure that the services provided to your client have all been covered by the service provider’s assessment.  Section 2a of the AOC documents the services covered and not covered.  The most common problem found with section 2a is that one or more services used by an organization were not assessed.  If services were not assessed, then you need to notify the service provider and develop a plan for how to handle the situation.
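
A simple way to stay on top of that comparison is to track the services your client consumes against the services section 2a says were assessed.  The sketch below is purely illustrative and the service names are hypothetical.

# Sketch of the section 2a check: every service consumed from the provider
# should appear in the AOC's list of assessed services.  Names are hypothetical.
services_used = {"tokenization", "hosted payment page", "call center"}
services_assessed_2a = {"tokenization", "hosted payment page"}

not_assessed = services_used - services_assessed_2a
if not_assessed:
    print("Services used but NOT covered by the AOC:",
          ", ".join(sorted(not_assessed)))
    # Next step: notify the service provider and agree on how these services
    # will be handled.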

The next thing to review is the locations that were part of the assessment.  It still amazes me how many AOCs I review where a client’s data center or processing center was not part of the assessment.  It gets worse when you bring this to the attention of the service provider and they get put out or, worse, argue with you over the fact that they must review every facility where a service is conducted.  I am sorry, but the PCI SSC and the card brands are the ones that make the rules; I am just the poor assessor that must enforce and live by them.

Finally, you need to review section 2g for every service that was assessed (per section 2a).  Section 2g is a matrix covering the 12 PCI DSS requirement sections that explains who is responsible for the various PCI DSS requirements.  From this matrix, an organization constructs its PCI program to fit the controls it needs to implement to be PCI compliant for that service.

There should be a section 2g for every individual service assessed, but in instances where PCI coverage is the same for different services (e.g., SaaS application hosting), services can be combined in one section 2g.  However, this is where problems are usually found.  My favorite example of such a problem is the day I found data center co-location and call center services listed in the same matrix.  I am sorry, but those services have very little in common, particularly in their PCI controls.  When you encounter this situation, it is usually indicative of a QSAC that does not understand how to deal with service providers and is cutting corners to get the ROC and AOC out the door.  It also likely indicates a service provider that is just “checking a box” for PCI compliance to placate customers.  Worse is when that service provider is listed on the Visa or MasterCard service provider lists (it is rare, but I have seen it), which indicates that the brands are also not doing their due diligence in reviewing the AOC.
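
In practice, I find it useful to restate each section 2g matrix as a simple responsibility map and then flag what lands on the customer’s side or is left unassigned.  The structure and entries below are hypothetical examples, not a reproduction of any actual AOC.

# Sketch of working with a section 2g responsibility matrix: for each PCI DSS
# requirement, record who is responsible, then flag what the customer must
# cover in their own PCI program and anything left unassigned.  The entries
# are hypothetical examples, not taken from any real AOC.
matrix_2g = {
    "Requirement 1":  "provider",
    "Requirement 8":  "shared",
    "Requirement 9":  "provider",
    "Requirement 12": "customer",
}

valid_owners = {"provider", "customer", "shared"}
customer_side = [req for req, owner in matrix_2g.items()
                 if owner in ("customer", "shared")]
unassigned = [req for req, owner in matrix_2g.items()
              if owner not in valid_owners]

print("Controls your PCI program must cover:")
for req in customer_side:
    print(" -", req)
if unassigned:
    print("Responsibilities to clarify with the provider:", unassigned)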

Hopefully, you now better understand requirement 12.8.2.  In a future post I will discuss requirement 12.9.



