Archive for the 'Card Brands' Category



08 Dec 19

Are You A Level 2 Merchant? Beware The MasterCard Trap

I had a discussion with a prospective client and, as these conversations usually go, one of the first steps was to determine their merchant level.  As it turned out, they were confused about the differences between Level 3 and Level 4, and their bank was just as confused.  The merchant had a 2 to 1 advantage in Visa transactions (around 800K) over MasterCard and, in total, had more than one million transactions across all card brands.

When their bank couldn’t decide their merchant level, the bank referred them to Visa since the bank was affiliated with Visa.  Visa informed the merchant that they were considering them a Level 2 merchant because of the high volume of eCommerce transactions (80%+) and their total transaction count for all payment cards (around 1.3M).

With this information in hand I said, “Well, it looks like you’ll be doing a ROC.”

The CFO at the other end of the WebEx exclaimed, “Say what!  Why do we need to do a ROC?  The standard says we can do a self-assessment!”

Sadly, another merchant gets caught flatfooted by the card brand rules.  People think that the PCI DSS and the other PCI standards are all they have to worry about for card payment compliance.  However, the card brands (i.e., Visa, MasterCard, American Express, Discover and JCB) also have their own security programs in addition to the PCI standards, and those also need to be followed.  Think that is not the case?  The Merchant Agreement from the bank that someone in the merchant’s organization signed calls out that the merchant must follow not only the PCI standards but also the rules of each card brand the merchant has agreed to accept for payment (almost always Visa and MasterCard, plus one or more of the others).

One of those “quirks” in the card brands’ programs that comes up is this one regarding Level 2 merchants and MasterCard.

The first thing everyone needs to remember is that if a merchant is at a certain merchant level for one card brand, they are at that merchant level for ALL the card brands.  The second thing to remember about merchant levels is that any of the card brands can set the merchant level for a merchant regardless of transaction volume.  I have had merchants end up as a Level 1 merchant with fewer than 30K transactions all because the dollar value per transaction was extremely high as with business to business (B2B) transactions.

With that information, a merchant now needs to go to the Web sites of the card brands it accepts and review their rules.  If you go to the MasterCard Web site page titled ‘What merchants need to know about securing transactions’ and scroll down to the merchant level requirements for Level 2, you will see footnote 3 next to the requirement “Onsite Assessment at Merchant Discretion”.  That footnote states the following:

“Level 2 merchants that choose to complete an annual self-assessment questionnaire must ensure that staff engaged in the self-assessment attend PCI SSC ISA Training and pass the associated accreditation program annually in order to continue the option of self-assessment for compliance validation. Alternatively, Level 2 merchants may, at their own discretion, complete an annual onsite assessment conducted by a PCI SSC approved Qualified Security Assessor (QSA) rather than complete an annual self-assessment questionnaire.”

For an organization to get an employee trained as an ISA, it needs someone with a background in both compliance and technology.  Typically, that is someone in the internal audit department, a department that a lot of Level 2 organizations do not have or, if they do, whose people do not have the time to take on PCI.  Then there is the cost, which is $3,100 USD plus travel expenses, since most ISA training is not done locally unless you are lucky.  And finally, there is the employee retention issue after such an investment.

In the end, most Level 2 organizations do not see the cost benefit of training one of their employees as an ISA just to be able to complete an SAQ.  That is how I arrive at my comment about Level 2 merchants doing a ROC.

Oh, and for the record, the PCI standards do not dictate which organizations can fill out a self-assessment questionnaire (SAQ) and which fill out a Report On Compliance (ROC).  The card brands dictate that based on merchant and service provider levels.  In this case, MasterCard has its own ideas in that regard when it comes to Level 2 merchants.

21 May 19

An Inadvertent Service Provider

A discussion came up on the last PCI Dream Team session regarding situations at universities that have bookstores and cafeterias operated by third parties on their networks and those vendors processing payment card transactions.  QSAs encounter this situation not only at universities and colleges, but also with hospitals, health clinics and large corporations.

The Situation

As organizations focus on customer and employee perks, QSAs encounter third parties operating business outlets within a variety of organizations.  These businesses include coffee shops, convenience stores, dry cleaners, bookstores, restaurants, cafeterias, parking ramps, travel agencies, pharmacies, health clubs and a whole host of other businesses.  Of course, all of these third parties accept payment cards for their services and need a way to process those cards.  Organizations offering these perks have existing wired and wireless infrastructure that gets leveraged to connect these third parties to the internet and their payment processors.  That brings the network and everything attached to it into scope for PCI compliance.

As a result, this situation creates a PCI compliance problem because the organization is now a service provider as well as a merchant.  The organization thought that by outsourcing these businesses it was reducing PCI scope, not increasing it.  But scope increases because, as a service provider, the organization must provide each of these third parties with a Service Provider Attestation Of Compliance (AOC) for that network connectivity.

But it can and does get worse.  I have encountered situations where the outsourcing organization provides help desk, firewalls and other support services for these third parties, further complicating their PCI compliance responsibilities.

What Do You Do? Option 1 – Get Out Of Scope

There are some ways to get out of scope, but these can be complex and/or expensive.

The first way to get out of scope is to force all of your third parties to get their own network connectivity from their own internet service provider (ISP).  The problem with this is that an ISP will likely have to run wire into your facilities to make those connections.  That can be disruptive as well as expensive and complicated given the locations within existing buildings.  And what if each business wants their own ISP because of a contract relationship?  That would mean multiple ISPs tearing up your facilities.  Not necessarily the best situation.

The most extreme solution to get out of scope is for the outsourcing organization to implement carrier equipment and become a “carrier” to these third parties.  I have had a few clients go down this road, but it is not cheap and can also be more trouble than it is worth.  However, for a university or large hospital/clinic complex with lots of third parties, this solution can actually be a cheaper route to implement and operate.

But the beauty of these solutions is that your organization is totally out of scope so there are no service provider PCI assessment requirements.

What Do You Do? Option 2 – Reduce Scope

There are also a couple of ways to reduce scope.  But reducing scope requires at a minimum the creation of a Service Provider SAQ D and AOC.

The quickest and easiest way to reduce scope is that the outsourcing organization can implement end-to-end encryption between the third party’s connection and the internet.  However, this adds the requirements in section 4 to the assessment as well as keeps the endpoints in scope for PCI compliance.

Another option to reduce scope is to require these third parties to implement encryption from their operation to anyone outside of the outsourcing organization.  While this seems simple, it usually is not.  If that encryption is ever stopped (most times without your knowledge), the outsourcing organization’s network is back in scope.  Typically, when this gets brought up as a solution, a lot of the third parties balk or say they do not know how to encrypt their connections.  And proving that the outsourcing organization does not have the encryption keys and that every third party connection actually is encrypted becomes problematic.  It ends up being more trouble than it is worth.

The only good news about reduced scope is that you only need to fill out a Service Provider SAQ D and AOC, because you have no idea of the transaction volumes being processed by any of these third parties.  That said, it is additional paperwork that needs to be completed annually and given to all of your third parties.

Heaven help you though if you offer firewall, help desk and other support services in addition to connectivity.  Those just complicate your compliance and reporting efforts.  All I can say is, if you can stop offering those services, stop.  If you cannot stop those services, then be prepared to document and report on the PCI compliance of each of those services.  That can be done in a single assessment, but the AOC must cover each of those services provided individually in a separate section 2g.

Never mind that if some of those services give your organization insight into the number of transactions processed by your third parties (for example, you provide payment processing under one or more of your merchant identifiers), you may end up having to conduct a Service Provider Report On Compliance (ROC) because the transaction volume exceeds one of the card brands’ annual service provider transaction thresholds.

There you have it on third parties and their payments on your network.

22 Apr 19

More On The NIST Password Standard

Apparently, I touched a nerve with my post on the National Institute of Standards and Technology (NIST) password standards discussed in Special Publication (SP) 800-63B.  As a result, I thought I would walk you through my logic using a compensating control worksheet (CCW) approach, since this is what you will have to do for your PCI assessment if you choose to rely on the NIST guidance.

[SPOILER ALERT: It is possible, but I doubt it is worth all the effort.]

First, let us review what a CCW needs in order to comply with the Council’s requirements.  From Appendix B of the Report On Compliance (ROC) Reporting Template:

“Compensating controls must satisfy the following criteria:

  1. Meet the intent and rigor of the original PCI DSS requirement.

  2. Provide a similar level of defense as the original PCI DSS requirement, such that the compensating control sufficiently offsets the risk that the original PCI DSS requirement was designed to defend against. (See Guidance Column for the intent of each PCI DSS requirement.)

  3. Be “above and beyond” other PCI DSS requirements. (Simply being in compliance with other PCI DSS requirements is not a compensating control.)

  4. Be commensurate with the additional risk imposed by not adhering to the PCI DSS requirement.”

QSAs can get stuck on the third point, and the Council focuses on it in their Assessor Quality Management (AQM) reviews because QSAs miss it so often.  However, the other three criteria are just as important to apply to the compensating controls being discussed.

Now let us focus on section 4 of the CCW, where the organization being assessed is required to describe the controls they have in place that go “above and beyond” the requirement being compensated, which in this case is requirement 8.2.4, requiring password changes every 90 days or less.  I picked that requirement because it is the one most often cited by clients as the reason they want to use the NIST standard.  Most want to go to a 12-month password change interval.  These controls are going to come from pages 13 through 15 of SP 800-63B.

  • All passwords are required to be [value greater than eight] characters or greater in length.
  • When passwords are modified, they are assessed against [name of credential verification source/service], [name of dictionary word list used], repetitive or sequential characters and context specific words are checked and rejected if found.
  • Authentication is only conducted using [encrypted authentication protocol(s)].
  • Passwords are hashed and salted for storage using [hash algorithm and appropriate salting technique].
  • [Name of password vault solution] is used to securely store and generate strong passwords that meet the aforementioned criteria.
  • A password strength meter is provided to assess the password against these aforementioned criteria to indicate to the user when they have met all of the criteria.

To comply with the NIST guidelines for passwords an organization needs to implement all of these controls.
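As a rough illustration, the screening criteria in the bullets above could be sketched as follows.  This is a hypothetical sketch, not a reference implementation: the word list, context words, minimum length and sequence rules are stand-in values I chose, and a real deployment would screen against a large dictionary or breach corpus.

```python
import re

# Stand-in values for illustration only; a real deployment would use a large
# dictionary/breach corpus and organization-specific context words.
DICTIONARY = {"password", "letmein", "administrator"}
CONTEXT_WORDS = {"acme", "acmecorp"}  # e.g., company or application names

def is_acceptable(password: str, min_length: int = 12) -> bool:
    lowered = password.lower()
    if len(password) < min_length:
        return False
    if lowered in DICTIONARY:                           # dictionary word check
        return False
    if any(word in lowered for word in CONTEXT_WORDS):  # context specific words
        return False
    if re.search(r"(.)\1\1", password):                 # 3+ repeated characters
        return False
    for i in range(len(lowered) - 3):                   # 4+ ascending sequences
        chunk = lowered[i:i + 4]
        if all(ord(chunk[j + 1]) - ord(chunk[j]) == 1 for j in range(3)):
            return False
    return True
```

The point of the sketch is that every bullet above turns into a concrete check the verifier must run at password change time, which is why each one becomes evidence you will need for the CCW.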

So how do they match up with the four criteria for a CCW?

Above and Beyond

This is the easiest one to tackle because almost all of the controls are above and beyond.  What?  Almost?

There are a couple of controls that do not meet the above and beyond test.

The first is the easiest to discuss and that is “Authentication is only conducted using [encrypted authentication protocol(s)].”  That control does not pass the above and beyond test because requirement 8.2.1 already requires that credentials be transmitted using strong cryptography.  As such, that control cannot be relied upon in the CCW and must be removed.

The second one is the “Passwords are hashed and salted for storage using [hash algorithm and appropriate salting technique]” control.  This discussion gets sticky because requirement 8.2.1 also states that stored credentials must use strong cryptography, which is not very specific.  I would argue that any reasonable implementation here is already required by requirement 8.2.1, and therefore this control is also ineligible to be used.

Only the password length is specified by the PCI DSS and as long as a value greater than eight is picked, that meets above and beyond.  However, we need to discuss this value further under intent and rigor.

All of the remaining controls are not specified in the PCI DSS, so those are all considered above and beyond.

Intent and Rigor

For intent and rigor, we need to look to the guidance provided for requirement 8.2.4.

“Passwords/passphrases that are valid for a long time without a change provide malicious individuals with more time to work on breaking the password/phrase.”

Remember, we are looking at a 12 month password change interval, so we need to evaluate intent and rigor with the understanding that the controls must allow a password to remain unchanged for 12 months.

So let us look at the length attribute again.  In today’s world, a nine character password without any complexity requirements can be cracked in minutes.  Ten characters can be done in hours.  Only at 12 characters and above does cracking take 12 months or more.  As such, I would argue that you need passwords 12 characters long or greater to pass the rigor test for justifying a 12 month change interval.
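The specific minutes/hours/months figures depend entirely on the character set and guess rate you assume, but the exponential effect of length is easy to see.  In this sketch both the alphabet size and the offline guess rate are assumptions of mine:

```python
GUESSES_PER_SECOND = 1e11  # assumed offline attack rate; varies hugely by hash
ALPHABET = 62              # assumed upper/lower/digit character set

def years_to_exhaust(length: int) -> float:
    """Years to try every password of the given length at the assumed rate."""
    keyspace = ALPHABET ** length
    return keyspace / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)

# Each extra character multiplies the attacker's work by the alphabet size,
# so going from 9 to 12 characters multiplies it by 62**3 (~238,000x).
```

Whatever numbers you plug in, the argument for the CCW is the same: length is the one knob that moves crack time past the 12 month interval you are trying to justify.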

Passwords are assessed against a dictionary word list, context specific words and repetitive/sequential characters.  The key to this part of the second bullet is the extent of the dictionary word list.  The dictionary needs to be sufficiently large to provide the control that NIST desires.  The QSA is going to need to know how large the dictionary is, what is used as a reference to ensure that the dictionary has the appropriate words in its list and how often the dictionary is updated.  That would likely mean that these controls would need to be separated from the credential breach service control so that those additional controls can be documented in the CCW.  This would all have to be backed up by a proper risk assessment documenting that the review and update intervals of the dictionary are appropriate and mitigate the risks.

Passwords being assessed against some credential breach source/service introduces an interesting twist to ensuring the security of a password.  But it also introduces an interesting discussion into the intent of requirement 8.2.4, which is to ensure the security of credentials.  NIST is only requiring that credentials be tested at the point they are changed.  But what happens if sometime during the 12 month interval those credentials are compromised?  The intent of requiring a 90 day change interval was to reduce the risk of credentials remaining compromised for an extended length of time by changing one of those credentials at least every 90 days.
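For reference, breached-credential checks are commonly done with a k-anonymity range query (the model popularized by the Have I Been Pwned Pwned Passwords service), where only a short hash prefix ever leaves the client.  The sketch below simulates the service locally with a tiny stand-in corpus, so no network call is involved:

```python
import hashlib

BREACHED = {"password", "letmein", "qwerty123"}  # tiny stand-in breach corpus

def _sha1_hex(s: str) -> str:
    return hashlib.sha1(s.encode()).hexdigest().upper()

def range_query(prefix: str) -> set[str]:
    """Simulates the service: suffixes of breached hashes sharing this prefix."""
    return {h[5:] for h in map(_sha1_hex, BREACHED) if h.startswith(prefix)}

def is_breached(password: str) -> bool:
    digest = _sha1_hex(password)
    return digest[5:] in range_query(digest[:5])  # only the prefix leaves the client
```

Notice that this check only happens when the password is presented, which is exactly the gap discussed next: nothing here tells you a credential was compromised mid-interval.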

But NIST does not require monitoring of the credentials other than when they change.  Without constant monitoring of the credentials from a compromise service, how do you know when they need to be changed which is the intent of the change interval?

The PCI DSS does provide a bit of guidance on how the Council would likely approach this issue.  For reference I point you to requirement 3.6.5 which discusses this in regard to encryption keys that are suspected to have been compromised.  The reason I believe this is relevant here is that the PCI DSS does not require specific change intervals for encryption keys.  I would argue that the PCI DSS would view passwords changing at long intervals as requiring the same sort of control.  If the credentials are ever suspected of being compromised, then they should be changed.

Which brings up an interesting dilemma.  How do you monitor something that you have hashed and cannot recover?  Do we really want to have encrypted passwords in our authentication systems so that we can monitor them for compromise?  I seriously doubt that would be a good practice.

So with that said, we would need some sort of monitoring and alerting capability to warn if credentials do appear to be compromised such as monitoring for excessive logons, logons when the user is out of the office, logons from systems outside of the user’s area or building or other characteristics that would provide some sort of indication of credential compromise.  These controls would have to be added to the monitoring of the credential breach source to show that the credentials are changed when suspected of being compromised.
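A sketch of what such monitoring could look like, with hypothetical event fields and thresholds of my own choosing (in practice this logic would live in your SIEM and be tuned to your environment):

```python
from collections import Counter

def flag_anomalies(events, max_logons=50, max_failures=5, allowed_sites=("HQ",)):
    """events: dicts with 'user', 'result' ('ok'/'fail') and 'site' (illustrative schema)."""
    flagged = set()
    logons = Counter()
    failures = Counter()
    for e in events:
        user = e["user"]
        if e["result"] == "fail":
            failures[user] += 1
        else:
            logons[user] += 1
        if e["site"] not in allowed_sites:
            flagged.add(user)  # logon from outside the user's area/building
    flagged.update(u for u, n in logons.items() if n > max_logons)   # excessive logons
    flagged.update(u for u, n in failures.items() if n > max_failures)  # excessive failures
    return flagged
```

Flagged users would then feed the "change on suspected compromise" process, which is the evidence trail the CCW needs.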

Similar Level of Defense and Be Commensurate

At this point, I think we have covered these two requirements for a CCW with our discussions about above and beyond and intent and rigor.

Where Are We With The CCW Controls?

Based on our discussion, here is what I think section 4 of the CCW would now have to look like.

  • All passwords are required to be [value of 12+] characters or greater in length.
  • When passwords are modified, they are assessed against [name of credential verification source/service].
  • Passwords are monitored for excessive logons, excessive failed logon attempts, logons when the user is out of the office and logons that occur from systems outside of the user’s area or building to provide an indication of credential compromise.
  • When passwords are modified, [name of dictionary word list/source used], repetitive or sequential characters and context specific words are checked, and the password is rejected if any of these characteristics are found. The dictionary is updated every [month/quarter/six months] and reviewed [semi-annually/annually] to ensure the dictionary contains an appropriate list of words.
  • [Name of password vault solution] is used to securely store and generate strong passwords that meet the aforementioned criteria.
  • A password strength meter is provided to assess the password against these aforementioned criteria to indicate to the user when they have met all of the criteria.

After looking at these controls, I would venture to say it is simpler and easier to meet the PCI DSS requirements than to implement these controls and make them work consistently and effectively.  Because remember, this is just section 4 of the CCW.  For section 5, you have to produce evidence that all of these controls are in place and working as designed.  Never mind section 6 where you explain how you maintain all of these controls.

So for those of you bent on using NIST, there you have it, but I doubt it is worth the effort you think it is.  And this does not address the CCWs you will also need to write for 8.2.3, because you no longer enforce complexity, and 8.2.5, because you no longer track the last four passwords used.  Those could be another post.  Yeah, I do not think so.  Not worth the effort, because those CCWs will revolve around the controls in this one.

As I said in my original post, it might be better to wait for the Council to issue their guidance in v4 of the PCI DSS.

UPDATE: The PCI Council has created an FAQ to address this situation. https://pcissc.secure.force.com/faq/articles/Frequently_Asked_Question/Can-organizations-use-alternative-password-management-methods-to-meet-PCI-DSS-Requirement-8

27 Feb 19

Bank Of America Makes NESA Mandatory

Remember the non-listed encryption solution assessment (NESA)?  Probably not, because it never really got legs.  That is, until now, and from an unlikely source: Bank of America (BoA).  QSAs that perform a lot of merchant Reports On Compliance (ROCs) that go to BoA have likely noticed that BoA has been scrutinizing those ROCs more than before.

This has been particularly true of ROCs that use end-to-end encryption (E2EE) solutions such as Verifone Verishield or First Data TransArmor where the merchant is asking BoA for scope reduction to point-to-point encryption (P2PE).  I ran into this three years ago with a client that was implementing TransArmor at their retail stores.  After much negotiation by my client, they were finally granted P2PE scope reduction and their assessment moved on.

However, at the same client this past year, a shock.  BoA told them not so fast on P2PE scope reduction this year.  As the client and their new QSA found out, sometime in 2018 BoA introduced a whole program to deal with E2EE solutions that now requires a P2PE-QSA to assess the solution and produce a NESA report.  Surprise!

What makes this particularly sad and annoying is that First Data and BoA are joint partners in Bank of America Merchant Services (BAMS), the transaction processing arm of BoA.  BAMS relies on First Data solutions such as TransArmor for processing and securing payment transactions.  But guess what?  Think that your TransArmor solution will get a “pass” from BoA because it was recommended by BAMS?  Think again.  BoA is requiring all non-P2PE validated solutions to go through a NESA.  And that is exactly what this client has: TransArmor from First Data, a partner in BAMS.

The lesson here is, be prepared as a QSA to deal with a new issue if you have E2EE, you want P2PE scope reduction and your client’s bank is BoA.

25 Jan 19

Where Is EMV When You Need It?

Juniper Research (not Juniper Networks) recently issued a report stating that card not present (CNP) fraud will reach $130B by 2023.  In response, a lot of people asked where EMV was to address this issue.  Apparently, a lot of people are confused about EMV, including some directly involved in PCI compliance.

First a bit of background.

People need to understand that EMV as it is implemented anywhere today was originally developed for eliminating or minimizing card present (CP) fraud.  Europe became a hotbed of CP fraud in the early 1990s after the fall of the Iron Curtain.  To address this problem, Europay, MasterCard and Visa Europe (hence the acronym “EMV”) joined forces to develop the standard in an effort to minimize the CP fraud problem in Europe.  EMV was introduced in the UK in 1996 and continued to rollout throughout Europe for the next decade.

Then there is the term “Chip and PIN” that people inadvertently confuse with EMV.  Using an EMV card with a PIN is not a requirement as consumers in the US have discovered.  The term “Chip and PIN” comes from that original UK rollout.  The banks in the UK decided on requiring a cardholder to not only put their card into the card terminal but also to require a personal identification number (i.e., PIN) in order to complete a transaction.  That standard has continued pretty much throughout the world with the exception of the US.

The next key thing to understand about EMV is that it is no more secure than the magnetic stripe it replaced.  I know that fact might shock some people given all of the press EMV has gotten regarding security.  Somewhere along the line, people began to believe that EMV by itself was more secure.  I believe a lot of this misunderstanding was the result of other security technologies that were bundled as countries converted to EMV.

The biggest security feature was the requirement of a PIN for transactions.  A PIN is essentially an implementation of multi-factor authentication (MFA): the EMV card is something you have, and the PIN is something you know, which together constitute two factor authentication (2FA).  2FA is great for dramatically reducing CP fraud, but it still does not protect the data being transmitted and likely stored by any point of sale (POS) solution.

What came next in the evolution of EMV was the addition of end-to-end encryption (E2EE) between the card terminal or point of interaction (POI) and the transaction gateway or processor.  E2EE encrypts the sensitive authentication data (SAD) transmission from the POI to the processor meaning that any devices or networks between the two cannot access the data unless they have the encryption keys (which they will not if E2EE is properly implemented).

The final security feature that came to EMV was the addition of tokenization.  Tokenization takes the primary account number (PAN) and converts it to a token which can then be returned to the merchant’s POS solution without the worry that it was storing cardholder data (CHD).  Tokenization can be either be performed at the POI or by the processor upon completion of a transaction (most common).
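Conceptually, processor-side tokenization amounts to exchanging the PAN for a random value and keeping the mapping in a vault only the processor can query.  A toy sketch of that idea, with the token format and storage greatly simplified:

```python
import secrets

class TokenVault:
    """Toy processor-side vault: token -> PAN mapping never leaves the processor."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random value with no relation to the PAN
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

The merchant's POS only ever sees and stores the token, which is why a stolen token is worthless without access to the processor's vault.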

With that as the background, I think most readers can start to understand why EMV and its currently used security features are not going to address the CNP fraud problem.  All of the security features we are familiar with require a CP environment, and exactly how does that translate into a CNP environment?  The answer is, they do not translate, at least not easily.

It turns out that we have been here before with EMV although most people are probably not aware of that fact.  Around 2000 to 2002, a few UK banks and a card brand thought about addressing the growing CNP fraud issue with EMV.

In the UK, Barclays and Standard Chartered came up with competing application programming interface (API) standards for eCommerce sites to use.  Both Barclays and Standard Chartered paired their APIs with card readers that connected to PCs.  Their solutions relied on the new EMV cards that were being issued in the UK and used Chip and PIN for conducting transactions.

At around the same time in the US, American Express was rolling out their first iteration of their Blue card.  That card had a chip although it did not conform to the EMV standard.  Customers that were in that Blue rollout also got a handy USB chip reader along with the card.  As with the implementations in the UK, American Express also relied on Chip and PIN for completing transactions.

The idea with all of the schemes was to have consumers connect the reader to their computer and install some software for the reader.  Then when making a purchase online the consumer would insert their EMV card into the reader, key their PIN through the computer’s keyboard and complete the purchase.  No different than in a traditional brick and mortar store.

Unfortunately, there were some issues with all of these approaches.  The largest was that the APIs were all different.  As a result, a consumer could not make a secured payment unless the online merchant supported the payment API the consumer had installed on their local PC.  In the case of American Express, they had signed on Amazon as a merchant, but Amazon was a small, up and coming fish in the eCommerce world at the time.  In the UK, the banks had signed only a very few small online merchants.  With no large eCommerce merchants on board, no API gained the critical mass to win out.  The end result was that by 2003 the EMV CNP experiment had effectively died.

To those observant readers, I earlier alluded to the fact that there are other EMV security features that might be useful for addressing CNP fraud.

There are two features in the EMV standard that could be used and those are dynamic PAN and dynamic card verification value (CVV).  These two EMV fields are included in every EMV card but are not currently used.  The reason is that using them would require significant programming on the transaction processor’s end to make them work.  But using them would still require a card reader solution for eCommerce given the cards in circulation today.

Obviously with CNP, what is needed is a solution that would not require a card reader and therefore a standard API.

In the age of mobile applications, it would be relatively easy for an app to provide the dynamic PAN and dynamic CVV for entry into a Web site.  Granted this app would have to communicate with a bank or processor to ensure the generation of valid dynamic values, but it should be no more difficult than what RSA or Symantec do for multifactor authentication.

Another option would be to provide a browser widget or a whole PC application that would generate the dynamic PAN and dynamic CVV while the user was purchasing items online.

But what about people that do not have smartphones or prefer physical cards?  What immediately comes to mind is something like the FUZE, Edge or Dynamics cards.  While none of these are currently EMV capable, they are expected to be at some point.  They all have displays that could easily show the dynamic PAN and dynamic CVV just as a smartphone would.  Unfortunately, all of these electronic cards currently require a smartphone, but since they already need to be charged, they could probably be adapted to be programmed from a Web site through a PC.

The bottom line is that there are solutions to the problem.

30 Nov 18

The New Telephony Information Supplement

In case you missed it, the PCI SSC released their new information supplement on telephony this week.  Since I served on this Special Interest Group (SIG) I was involved in its development.  As a result, I thought I would share my thoughts on the new information supplement.

A Bit Of Background

At the start of the SIG a number of participants brought up the fact that the prior Telephony Information Supplement issued in 2011 had basically been ignored by the qualified security assessor (QSA) community and companies being assessed.  A number of QSAs and Participating Organization (PO) representatives explained to Council representatives that they had personally witnessed situations where QSAs ignored voice over IP (VoIP).

That brought about the following response from one of the Council members on the call:

“All QSAs are trained to understand that VoIP is in scope if CHD/SAD [cardholder data/sensitive authentication data] is discussed on any telephone calls.”

The consensus response was that while that is no doubt the case, many participants attested to the fact that they had encountered QSAs ignoring VoIP as being in scope.  Some had witnessed QSAs telling their clients and prospective clients to not worry about VoIP because it will not be in scope.  These same QSAs did worry about the security of call recordings, but they were leaving the rest of telephony out of scope.

That response seemed to send a chill through the Council representatives.  No one identified any particular qualified security assessor companies (QSAC) but the participants made it clear that VoIP was largely being ignored in PCI assessments.  The point was also made that some QSACs were benefiting handsomely in obtaining engagements because of their willingness to ignore VoIP.

But that exchange also identified a shortcoming with today’s telephony solutions.  QSAs and technology people do not seem to understand telephony solutions or appreciate their risks.  Therefore, they do not know where to even start in securing them; those that make an attempt often find themselves down one or more “rabbit holes”.  As a result, it is easier to ignore these telephony solutions than to try to deal with the intricacies and vagaries of securing them.

There were also brief discussions about the shortcomings of the original information supplement from 2011.  The biggest complaint was that it was call center centric and did not reflect what was being implemented in the real world.  Various people explained that the 2011 document did not address call centers operated within corporations on a shared telephony solution with the rest of the business, nor did it provide any useful guidance for PCI compliance.

Such configurations obviously complicate the scope of PCI assessments since any device connected to the shared VoIP system and network is therefore in scope (hence why a lot of QSAs ignore VoIP).  As we were to find out, the new version of the information supplement would do only a little to address this complex issue.

Disappointment

Trust me, it was not the SIG’s intent nor the Council’s intent to disappoint people, but I have a feeling that a lot of people will be disappointed with this information supplement.  It is not that there are no good ideas and solutions presented; they are just not always fleshed out as well as they should be and do not always represent what actually goes on with a solution.  The reason is that telephony solutions all operate differently when performing various functions such as call forwarding, conference calling and the like.  As a result, providing real guidance depends greatly on how a given solution functions in a particular circumstance.  As we found out a number of times, this issue would come back to bite SIG participants repeatedly.

In my very humble opinion, the latest information supplement is lacking in detailed guidance for a lot of telephony situations, particularly those complicated by how vendors have approached Unified Communications, which is now the driving force behind most vendors’ current telephony solutions.  The document points out a lot of scope and security concerns regarding the use of softphones and VoIP only to leave the reader essentially to their own devices as to how to address those concerns using existing guidance from other information supplements.

That was a point of contention as the information supplement was developed.  There were a number of people who argued that more guidance needed to be provided because the issues are more complicated and nuanced than the supplement leads people to believe.  They wanted more discussion with the card brands about the risks involved so that all parties could come to a consensus over what was acceptable risk and whether there were better ways to address those risks and therefore provide more guidance.  Unfortunately, we were told that there was not enough time to have such discussions, which in great part shaped the document you now have access to.

Then there are the threats to VoIP that seemed to be minimized in discussions.  At one point in a meeting someone stated that VoIP is not an attack vector so there is no need to worry about it.  This individual was almost immediately reminded that this is how we got into this situation in the first place.  People ignored the risks to processing, storing and transmitting payment card data and then we all had to do a fire drill to secure that information.

Using CVE Details, I was able to identify close to 400 specific threats to VoIP and/or specific VoIP vendor solutions.  Of those, around 250 to 300 appeared to be able to compromise VoIP and, by association, CHD/SAD.  While most had been patched, around 20 had no fix because they were flaws in the protocols themselves (mostly due to UDP streaming).  The bottom line of this research is that while VoIP might not be an active attack vector at this point in time, it is ripe for being turned into one.  Worse, current information security practices have minimal effect on a lot of these attack vectors thanks to UDP.  And if that were not bad enough, in a lot of cases all it takes is a telephone call to start the attack process.

With that as a background, while the new information supplement is a quantum leap above the 2011 information supplement, a lot of participants feel it is still somewhat lacking in guidance.

Telephony Guidance Anger

I can already anticipate the anger that will result from this one particular recommendation on page 55, section E.4 Unified Communications, where it states:

“As a result, entities can find that their only option to minimize the PCI scope of their VoIP environment is to implement multiple instances of in scope VoIP and out of scope VoIP.”

Say what?!?!?

That will burst a huge bubble for a lot of organizations, QSAs and ISAs alike.  The rationale for this statement goes to Unified Communications and how most vendors have approached it.  The telephony system vendors have now so tightly integrated services such as voice, voice mail, facsimile, video, telepresence, instant messaging, email and other communication mediums that it is no longer possible to decouple and move, say, instant messaging or email to a different network segment from the call manager.  As a result, there is no easy way to implement network segmentation around telephony solutions so that some components are in the CDE (Category 1) and others are in Shared Services (Category 2).

Unfortunately, Unified Communications is not the only situation where two telephony solutions will be needed.  Softphones, call centers on common corporate telephony solutions and other telephony features/functions will also create situations where the only way to be PCI compliant will be to implement at least two separate telephony systems.

Speaking of softphones, if you were angry at the first statement, your anger will likely only grow when you read the following on page 24, 5.2.4 Softphones:

“It is important to note that the use of such systems [softphones] to capture payment card account data would bring the workstation and probably the network it is connected to into PCI DSS scope.”

The next paragraph after the quotation points readers to the Network Segmentation Information Supplement for guidance.  Unfortunately, the problem with that guidance is that no matter how you try to segment the workstation, the softphone application will put the workstation in scope regardless.  No other guidance is provided regarding softphones.  It is not that this was not discussed within the SIG; there was simply no agreement on how to address the subject.  So, what you read in this section is the guidance you get.

One potential solution discussed to minimize scope is to put the softphone in a virtual desktop (VDI) workstation.  That would put the VDI in the CDE and the workstation in Shared Services.  However, the VDI approach can be fraught with compatibility issues and other technical problems that may not reliably provide telephony service to end users via the softphone.  There is also still some risk of eavesdropping through the end user’s workstation, but it is now limited to memory in the workstation rather than the softphone software, and that risk can sometimes be addressed with other workstation controls.  This of course assumes that the VDI solution is easier to control, secure and monitor than the physical workstations.  The bottom line is that there are a lot of moving parts that would have to be assessed on a case-by-case basis, so the consensus was that no general, one-size-fits-all recommendation could safely be made about the VDI approach.

Another scope reduction approach is to use “inexpensive” physical SIP phones that are logically segmented away from the workstation’s network.  I have a number of clients with agents configured this way to limit telephony scope to just their SIP phone.  But then their router must support two VLAN connections, and those VLANs cannot be allowed to access each other.  That is easy to do in a corporate environment but can complicate things for SOHO workers.  Such a solution can drive up networking and equipment costs to an unacceptable level for some organizations, particularly organizations that were looking at softphones to reduce costs.

There are plenty of other areas of the information supplement that will generate anger mainly because for the first time, the PCI SSC is calling out more areas that are in scope for PCI compliance that organizations and some QSAs/ISAs treated as, or thought were out of scope.

Miscellaneous Comments

There are a few more points that I felt should be discussed.

On page 43, 7.2.2 SIP Trunking, the following quote will be of interest.

“As the technology matures, technical boundaries between an organization and SIP Trunk provider may become harder to define. Scoping for these services will therefore require an understanding of how connections are made between the different entities.”

I feel this is already an issue because the boundaries are already blurred.  When you realize that VoIP is predominately a UDP protocol, there is little you can do from an information security standpoint to protect your telephony system.

First, the carriers will tell you that their SIP demarcation device provides some amount of security for your organization.  Exactly how much “security” that device actually provides is questionable at best.

But speaking of UDP, page 54, E.1 Protocols, Ports and Network states the obvious.

“… the use of UDP may render the detection of malicious content or payload more difficult.”

More difficult?  In some ways it can be impossible to detect malicious payloads because the traffic is streaming and you want to ensure the continuity of a conversation.  This is the biggest security issue with VoIP: because it relies on UDP streaming, VoIP exploits use that stateless streaming to their advantage by embedding the attack in the voice/video stream.

This inevitably brings up the discussion of firewalling your VoIP because that seems to have been the answer for every other security issue.  While the firewall will provide some amount of control and monitoring of TCP connections, it will do nothing for the UDP streams that VoIP relies upon.

Yet I have actually had some firewall vendor salespeople claim that their firewalls are “VoIP aware” and can identify certain “bad” VoIP packets.  I am not sure exactly how you can identify bad UDP audio/video data streams, but they claim to have some sort of proprietary methods for doing just that.  Of course, when you attempt to drill down on that “proprietary method” you get essentially stonewalled because it is “proprietary”.  I take that as an indication of sales “smoke and mirrors”.

Then there is the solution of encrypting all VoIP traffic.  I have had a number of clients suggest this as a solution to the security of telephony.  While encryption of all VoIP traffic minimizes the ability to eavesdrop on calls via the network, it does not remove the risk of eavesdropping via compromised endpoints which is much greater than the network risk.  Encryption also does not remove the risk of malware injected via the UDP stream which is the bulk of the real threats to VoIP.  After all of the discussion surrounding encryption, I really see only marginal value in the use of encryption of VoIP traffic from a security perspective.

Also, on page 54, E.2 VoIP Attacks and Vulnerabilities you get this statement.

“VoIP equipment and software are susceptible to vulnerabilities that could allow someone with malicious intents to gain access to your network, intercept and gather customer data, or initiate a Denial Of Service attack.”

I cannot tell you how many IT professionals do not realize the risk presented by VoIP and its infrastructure.  They seem to treat it like the PABXs of old that used to be located in basements next to the telephone carrier’s point of presence (POP) at their organization’s facilities.

Granted, we have moved away from the Windows and Linux versions of call managers that were standard fare when VoIP originally came out.  Most of today’s call managers are based on some proprietary derivative of Linux or Unix stripped down and modified for telephony.  But there are open source solutions that run on Windows and Linux server editions.  The bottom line though is that regardless of what you run, these are still servers no different than any other servers and they need to be properly configured and get regular patching just like any other server.

That is my take on the latest telephony guidance from the Council.  Better than what was produced in 2011 but still lacking in some areas.

21
Nov
18

Requirement 12.8.2

I got a comment a while back about contracts and PCI compliance.  The two requirements relevant to this discussion are 12.8.2 and 12.9.  Requirement 12.8.2 applies to all organizations (merchants and service providers) being assessed under the PCI DSS.  Requirement 12.9 applies only to service providers.

As usual, the clarifications surrounding these requirements were all provided verbally over the years at various PCI Community Meeting presentations and Q&A sessions.  But the overall gist of these requirements can be readily determined.  It just takes a little bit of effort and looking at more than just the PCI DSS.

Requirement 12.8.2 states:

“Maintain a written agreement that includes an acknowledgement that the service providers are responsible for the security of cardholder data the service providers possess or otherwise store, process or transmit on behalf of the customer, or to the extent that they could impact the security of the customer’s cardholder data environment.

Note: The exact wording of an acknowledgement will depend on the agreement between the two parties, the details of the service being provided, and the responsibilities assigned to each party. The acknowledgement does not have to include the exact wording provided in this requirement.”

The Guidance provided for 12.8.2 states:

“The acknowledgement of the service providers evidences their commitment to maintaining proper security of cardholder data that it obtains from its clients. The extent to which the service provider is responsible for the security of cardholder data will depend on the particular service and the agreement between the provider and assessed entity.

In conjunction with Requirement 12.9, this requirement is intended to promote a consistent level of understanding between parties about their applicable PCI DSS responsibilities. For example, the agreement may include the applicable PCI DSS requirements to be maintained as part of the provided service.”

If things are still not clear enough, it helps to look at the ROC Reporting Template to get clarification.  The tests being conducted for a given requirement usually clear up any confusion regarding what is being expected.  There is only one test for 12.8.2 and it states:

“Describe how written agreements for each service provider were observed to include an acknowledgement by service providers that they will maintain all applicable PCI DSS requirements to the extent the service provider handles, has access to, or otherwise stores, processes, or transmits the customer’s cardholder data or sensitive authentication data, or manages the customer’s cardholder data environment on behalf of a customer.”

The first thing to notice in all of these discussions is that nothing in the PCI DSS states that any organization is required to work with a PCI compliant third party.  None of the requirements in 12.8 specify that an Attestation Of Compliance (AOC) be provided.  A lot of QSAs will argue that requirement 12.8.4 requires it, but if you read the test:

“Describe how it was observed that the entity maintains a program to monitor its service providers’ PCI DSS compliance status at least annually.”

There is nothing in that test that explicitly mandates that an AOC is required to monitor third parties.  Granted an AOC is the easiest way to monitor service provider compliance, but there is nothing explicitly calling it out in this test.

So where does this “requirement” originate?  It comes from the merchant agreements with the card brands, specifically Visa and MasterCard.  They require that their merchants only work with third parties that are PCI compliant and can prove that compliance with a Service Provider AOC.  This is why it is important to read and understand the brands’ merchant agreements and their own security programs.  There are a number of key “requirements” that come from those documents that are just as important as what is in the PCI DSS.  So, read them as well as all of the PCI documents.

Getting back to the PCI DSS, what the Council wants QSAs and ISAs to look for in contracts, master service agreements, addendums and any other legal documents that describe the parties’ legal relationship is some sort of acknowledgement between all parties that they will abide by the PCI DSS and ensure that sensitive authentication data (SAD) and cardholder data (CHD) is kept secure.

Where a lot of QSAs/ISAs go wrong is in demanding that the words “PCI DSS”, “PCI standards” or some other explicit “PCI” reference appear somewhere in those documents.  The Council has stated a number of times that explicitly using “PCI DSS”, “PCI standards” or “PCI” anything is not required.  It would be great if such documents did, but a lot of legal documents do not, either because they predate the PCI DSS or because lawyers argue it is not necessary.  That is what led to the Note in both requirements.  The key is the last sentence, which explicitly states:

“The acknowledgement does not have to include the exact wording provided in this requirement.”

It is this sentence that the Council always points to when stating why an explicit reference to PCI is neither necessary nor required.  My advice is, when in doubt, ask your client’s legal counsel for their interpretation of the legal agreements and whether they feel those agreements cover the PCI responsibilities of the parties involved.

That will lead you to the fact that a lot of legal agreements reference the PCI DSS and PCI standards indirectly through language that obligates the parties to follow and comply with “regulatory or other legal requirements”.  The reason this language works is because “other legal requirements” will drag in the card brand legal agreements for taking and processing card payments.  Every card brand has legal agreements for merchants and service providers that explicitly call out that the customer of the brand will maintain PCI compliance for all relevant PCI standards.

Where this discussion becomes problematic is with service providers that do not directly process, store or transmit SAD/CHD, such as managed service providers and the like that can affect the security of payments.  That is because they are not directly under the card brands’ legal agreements, so their contracts, while using the same “regulatory or other legal requirements” language, will not necessarily reference PCI compliance because they are only indirectly involved.  It is in these cases that I rely on getting a PCI AOC from the service provider, which then provides the additional assurance that the service provider understands they need to be PCI compliant.

It is when I cannot obtain an AOC from a service provider that I then explain to my client that this service provider’s environment needs to be assessed as part of their assessment.  Either that or my client needs to find a new PCI compliant service provider.

What a QSA/ISA needs to be looking for in a service provider’s AOC is a couple of things (actually, there are a lot of things, but these are the most important).

First, you need to ensure that the services provided to your client have all been covered by the service provider’s assessment.  Section 2a of the AOC documents the services covered and not covered.  The most common problem found with section 2a is that one or more services used by an organization were not assessed.  If services were not assessed, then you need to notify the service provider and develop a plan of how to handle this situation.

The next thing to review is the locations that were part of the assessment.  It still amazes me the number of AOCs I review where a client’s data center or processing center was not part of the assessment.  It gets worse when you bring this to the attention of the service provider and they get put out or, worse, argue with you over the fact that they must assess every facility where a service is performed.  I am sorry, but the PCI SSC and the card brands are the ones that make the rules; I am just the poor assessor that must enforce and live by them.

Finally, you need to review section 2g for all of the services assessed (per section 2a).  Section 2g is a matrix covering the 12 PCI DSS requirement areas that explains who is responsible for the various PCI DSS requirements.  From this matrix an organization can construct their PCI program to fit the controls they need to implement to be PCI compliant for that service.

There should be a section 2g for every individual service assessed, but in instances where PCI coverage is the same for different services (e.g., SaaS application hosting), services can be combined in one section 2g matrix.  However, this is where problems are usually found.  My favorite example of such a problem is the day I found data center co-location and call center services listed in the same matrix.  I am sorry, but those services have very little similarity, particularly in PCI controls.  When you encounter this situation, it is usually indicative of a QSAC that does not understand how to deal with service providers and is cutting corners to get the ROC and AOC out the door.  In addition, it also likely indicates a service provider that is just “checking a box” for PCI compliance to placate customers.  But worse is when that service provider is listed on the Visa or MasterCard service provider lists (it is rare, but I have seen it), which then indicates that the brands are also not doing their due diligence in reviewing the AOC.
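To illustrate how an organization might work from a section 2g matrix, here is a minimal sketch.  The requirement labels and responsibility assignments below are entirely made up for illustration; a real section 2g covers all 12 PCI DSS requirement areas for each service assessed.

```python
# Hypothetical section 2g responsibility matrix for one assessed service.
# Labels and assignments are illustrative only, not from any real AOC.
RESPONSIBILITY_2G = {
    "Req 1 (firewalls)":      "service provider",
    "Req 3 (stored CHD)":     "service provider",
    "Req 8 (authentication)": "shared",
    "Req 12 (policies)":      "customer",
}

def customer_controls(matrix: dict) -> list:
    """Return the requirements the customer must build controls for,
    i.e. anything not wholly owned by the service provider."""
    return sorted(req for req, owner in matrix.items()
                  if owner in ("customer", "shared"))
```

The point of the exercise is exactly what the text above describes: anything marked “customer” or “shared” has to show up in the customer’s own PCI program, while anything marked “service provider” is covered by the provider’s AOC.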

Hopefully, you now better understand requirement 12.8.2.  In a future post I will discuss requirement 12.9.

09
Nov
18

Service Provider To A Service Provider

Another good question from our recent PCI Dream Team session.

“Are service providers to a service provider required to provide a report on compliance (ROC) to that service provider in a private cloud scenario?”

It depends.

The reason it depends is because the answer will depend on whether or not the service provider in question directly processes, stores or transmits sensitive authentication data (SAD) or cardholder data (CHD).  While our session this time was on ‘The Cloud’, the cloud has nothing to do with the answer, so the answer will be the same regardless.

If you are unsure if you are a service provider, read this post.  If you are trying to construct a story that gets you out of scope as a service provider, read this post.

Reporting Requirements

Before we can talk about what a service provider needs to provide to a merchant or another service provider, we need to ensure that everyone understands the PCI reporting requirements.

For any service provider that directly processes, stores or transmits SAD or CHD, if the volume of Visa/MasterCard/Discover transactions is greater than or equal to 300,000, then the service provider must go through a PCI assessment that produces a Service Provider ROC and Attestation Of Compliance (AOC).

For service providers that directly process, store or transmit fewer than 300K transactions, or that do not directly process, store or transmit SAD/CHD at all, the service provider can self-assess using the Service Provider SAQ D and related AOC.
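The reporting rule above can be summarized as a simple decision.  This is only an illustrative sketch of the thresholds just described; the function name and return strings are mine, not anything from the Council or the brands.

```python
# Illustrative sketch of the service provider reporting rule described above.
# The 300K threshold applies to combined Visa/MasterCard/Discover volume.
def service_provider_reporting(annual_transactions: int,
                               handles_card_data: bool) -> str:
    """Return which PCI reporting form a service provider needs.

    annual_transactions: combined Visa/MasterCard/Discover transaction count
    handles_card_data: True if the provider directly processes, stores or
    transmits SAD/CHD.
    """
    if handles_card_data and annual_transactions >= 300_000:
        return "ROC + Service Provider AOC (QSA assessment)"
    # Under the threshold, or no direct handling of SAD/CHD:
    return "Service Provider SAQ D + Service Provider AOC (self-assessment)"
```

Remember, as discussed later, that a provider wanting to be listed on the Visa or MasterCard service provider lists must go through the ROC process regardless of volume.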

Another key point regarding reporting that needs to be made is that there are differences between the Merchant AOC and the Service Provider AOC.  It is very important that service providers use the Service Provider AOC and not the Merchant AOC.

I still get too many Merchant AOCs from service providers.  Most often this is because these service providers are also merchants and they mistakenly believe their merchant PCI assessment serves as their service provider PCI assessment.  Not so!  These service providers need two assessments.  One that covers their merchant payment processes (usually a very small assessment) and one that covers their service provider processes which is usually the larger of the two.

The first key AOC difference is that the Service Provider AOC has a section 2a that discusses what services were assessed in the assessment and what services were not assessed.  This is important to customers of service providers because it allows them to ensure that all of their services have been assessed in this AOC.  If they have not, then the customer knows to ask the service provider for additional AOCs that cover those services.

The other key AOC difference is section 2g which documents the requirements tested during the assessment for each service assessed from section 2a.  The PCI SSC requires that individual 2g sections be used if the services assessed have different requirements matrices.

Finally, section 2c is also very important to customers as it explains what locations were included in the assessment.  I cannot tell you the number of AOCs I have reviewed from large service providers only to find that the location used to service my client was not part of the service provider’s assessment.  As a result, the AOC has no use to my client in their assessment.

Who Needs What?

Under the PCI rules, a service provider is required to provide their Service Provider AOC to all merchants and other service providers to which they provide services.  Yet time and again as a QSA, I end up in fights with service providers who refuse to provide their AOC to my clients.

This requirement of providing an AOC is all about proper vendor management and ensuring there are no gaps in meeting control responsibilities.  The Service Provider AOC has a matrix in section 2g for each service assessed that explains which requirements the service provider is responsible for, which requirements are the customer’s responsibility and which requirements are a shared responsibility.  Without that matrix, a customer has no way to understand their responsibilities in maintaining PCI compliance between themselves and their service providers.

Please notice that nowhere have I mentioned sending anyone the ROC, only the AOC.  As you will recall, the question involved sending the ROC to another service provider.  That is not to say that you cannot send your ROC; it is just not required by the PCI SSC.

As a QSA, I have encountered a few situations where section 2g was not clear enough and have asked a service provider for their ROC to ensure that my client properly sets up their controls to mesh with the service provider’s controls.  When the service provider was unwilling to provide their ROC, or even just the section needed, it took a lot of conference calls to clarify the situation.

With that said, if you want your organization listed on either the Visa or MasterCard Global Service Provider lists, you will have to submit your ROC and AOC to those card brands (as well as some money) to get on those lists.  If you are a service provider and can use the Service Provider SAQ D and you want to get listed on either brand’s service provider list, you will have to go through the ROC assessment process.  Visa and MasterCard will only accept a ROC for listing on their sites.

Hopefully you now understand what is required and what is not.

29
Oct
18

Virtual Payments

Virtual payments are becoming more and more prevalent outside of the insurance industry as companies realize the convenience of paying virtually.  As a result, more business-to-business (B2B) purchases are being paid for via virtual payments.  It also became obvious at our latest PCI Dream Team session that virtual payments need to be better explained to people so that they understand how they work and their responsibilities for security.

Definition

Technically virtual credit cards have existed for a while.  Businesses have had “virtual” credit cards for making airline and hotel reservations, purchasing office supplies and paying for other business expenses for decades.  The cards do not physically exist (originally they did exist, but this was seen as a security risk), but the business’ accounts payable department had a virtual card with a PAN, card verification code and expiration date issued by Visa, MasterCard or American Express for paying merchants for goods or services.

A virtual payment (or virtual credit card) is essentially the same as a regular, physical credit card with the following exceptions.

  • The primary account number (PAN) can only be used once to make a payment. If you mess up either of the next two criteria, that does not count as a ‘use’.  That said, even if everything is correct and the payment is declined, you will have to contact the organization that generated the virtual payment to get a new virtual payment created.  Also, be careful with a virtual card PAN, as some processors may generate a PAN that will not Luhn check.
  • Only the merchant defined on the virtual payment can use the payment. For example, if the merchant on the virtual payment is defined as ‘ABC Company’, only ABC Company can submit the transaction for payment.
  • The payment must total exactly to the total authorized on the virtual payment. For example, if the virtual payment is for $1,252.98 USD, then the merchant can only submit a charge for $1,252.98 USD for payment.
  • Virtual payments are flagged as being virtual. So, if someone were to copy the information and put it on a physical card to use physically at a retail outlet, the card would be declined.
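Taken together, those constraints can be sketched in code.  This is a hypothetical illustration, not any processor’s actual validation logic; the function names are my own.  It includes a standard Luhn mod-10 check to make the caveat above concrete, since some virtual PANs will deliberately fail it.

```python
def luhn_ok(pan: str) -> bool:
    """Standard Luhn mod-10 check.  Per the caveat above, some processors
    issue virtual PANs that deliberately fail this check, so do not reject
    a virtual payment on Luhn failure alone."""
    digits = [int(d) for d in pan if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def charge_allowed(merchant: str, amount: str,
                   authorized_merchant: str, authorized_amount: str,
                   already_used: bool) -> bool:
    """Illustrative check of the virtual payment constraints above:
    single use, exact merchant match and exact amount match."""
    return (not already_used
            and merchant == authorized_merchant
            and amount == authorized_amount)
```

For example, if the virtual payment was generated for ‘ABC Company’ at $1,252.98, only a first charge from that exact merchant for that exact amount would pass.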

How Do Virtual Payments Work?

Virtual payments are typically created by transaction processors such as Chase Paymentech, Elavon or Worldpay, though there are a number of independent sales organizations (ISO) and others that have affiliated themselves with processors to also generate such payments.

A lot of accounts payable software solutions now provide connections to transaction processors’ APIs for the generation of virtual payments to pay bills.  You will have to check with the application vendors to determine whose virtual payment solutions their applications support.  But the original way of using a Web browser to access the processor’s virtual payment application is also available.

An organization must sign up for virtual payment services, so it is not something that you can just access.  In addition, it is the responsibility of the organization to manage the users that have the ability to generate virtual payments as well as establish the minimum/maximum transaction amounts, time payments are valid (typically 30 to 90 days) and other payment criteria.  In addition, the solution may also specify the merchants that can be paid through the virtual payment solution.  Once set up, an organization can then generate virtual payments to pay their bills.
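To make the setup criteria above concrete, here is a hypothetical sketch of what a virtual payment generation request might carry.  The field names are illustrative only and do not correspond to any real processor’s API.

```python
from datetime import date, timedelta

# Hypothetical request payload; field names are illustrative, not any real
# processor's API.  The constraints mirror the setup criteria described above.
def build_virtual_payment_request(payee: str, amount: str,
                                  valid_days: int = 30) -> dict:
    """Assemble the criteria a virtual payment is typically locked to:
    a single named merchant, an exact amount, and a validity window
    (typically 30 to 90 days)."""
    if not 30 <= valid_days <= 90:
        raise ValueError("validity window is typically 30 to 90 days")
    return {
        "merchant_name": payee,   # only this merchant may submit the charge
        "amount": amount,         # the charge must match this exactly
        "single_use": True,       # the PAN is valid for one payment only
        "expires": str(date.today() + timedelta(days=valid_days)),
    }
```

Whatever the processor’s actual interface looks like, it is these criteria, set by the paying organization, that make virtual payments so much harder to abuse than a conventional card number.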

One very important step before you start generating virtual payments is to ensure that the organizations you are paying will accept virtual payments or payment cards.  Just because an organization has retail outlets that accept payment cards does not mean that its commercial operations also accept them.  As such, you need to contact the accounts receivable department at the organizations you intend to pay with a virtual payment to confirm that they will process virtual payments, as some organizations cannot or will not.  Use this contact as an opportunity to confirm the correct name of the organization (as it appears when they process card payments), the correct facsimile number, the correct email address (I recommend you get both just in case) and the preferred method of sending the virtual payment (i.e., facsimile or email).  Keep in mind it is not your problem to worry about the payee’s PCI compliance in how they handle your payment.  That is their problem, not yours.

When a virtual payment is generated, it is typically sent to the payee via facsimile.  However, I have also heard that some processors can send the information via secure email services such as Proofpoint or MimeCast.

If you are accepting virtual payments, you need to be aware of the PCI compliance issues with facsimile.  The problem is that a lot of organizations have implemented facsimile services such as HelloFax, MyFax or eFax, where facsimile messages are automatically delivered to users via corporate email.  A solution such as eFax brings an organization’s email system into scope for PCI compliance.  As a result, if your organization will accept virtual payments, it is important that those facsimile transmissions are sent to a secure physical facsimile machine located in the area where those payments will be processed.  I have some clients that use secure printing solutions, where the user must use their building HID card to securely release output on any printer.

Secure email solutions will hold the message for the payee to obtain from the secure email Web site interface via a browser.  The secure email solution will send you a notification that you have received a secure message along with a link to that message.  Once you get into the secure email solution, it is up to your organization to ensure you maintain the security of the message and the SAD sent to you.  So, no forwarding the message to your own email system.  No storing message attachments (likely the SAD) to a PC or network drive.  Print out the message and/or attachments to a physical printer and process the payment from those printouts.

SAD Is SAD – CHD Is CHD

As I said earlier, virtual payment messages contain SAD in the form of card verification value in addition to the PAN, expiration date and cardholder name which are cardholder data (CHD).  Just because we are talking about virtual payments and they can be used only once does not mean they can be treated any differently than the same information from a physical payment card.

That said, Visa and MasterCard have their own view of virtual payment information security.  As David Mundhenk reminded everyone on our latest Dream Team session, the card brands also have their own rules in addition to the PCI standards.  So, it is important for everyone to look at the card brands’ rules as well as the PCI standards when dealing with SAD/CHD.  That means not only their security programs, but also their respective Merchant Agreements and asking them questions when you cannot find the answers in any of their official documents.

In the case of virtual payments, Visa and MasterCard differ on the security of virtual payment information.  Unfortunately, you would not know that fact unless you asked each of the brands about the subject, because their security programs and merchant agreements do not address it.  For the record, American Express and JCB do not have an opinion on the subject.  Obviously SAD is SAD before it is used to process the payment; the difference arises after the payment is processed.

Visa wants the information protected even after the payment is processed.  They demand that it be securely destroyed after the payment is processed even though the information is single use.  I kid you not, MasterCard said on a call that if my client wanted to post the printed facsimile on a utility pole out in public, that was okay with them because the information could not ever be used again.  Talk about two polar opposite approaches.  As a result, I recommend following Visa’s approach and securely destroying the original message or attachment.  If for whatever reason you need to keep the payment document, securely redact the information, take a copy of the redacted original for your records and then destroy the redacted original.

That is what you need to know about virtual payments.

08
Oct
18

2018 North American PCI Community Meeting Thoughts

It was an interesting time in Las Vegas this year.  Part of that is due to the fact that we were in Las Vegas.  But part of it was that the Community Meeting seemed to be devoid of the usual anticipation and expected pronouncements.  While there were announcements for various standard updates, these were well anticipated and were not a surprise.  Some of the slide decks have been released, but others will not be available until the European Community Meeting is held in a few weeks.

While there were a number of good presentations this year, in my very humble opinion, the best session was the Assessor Session at the end of the meeting.  The good news this year was that a lot of QSAs and ISAs made sure to stick around for this session.  There were a number of good questions asked after the Council’s presentation, but I will wait for the Council’s transcript to be published before weighing in on those.

As in years past, the Council had a presentation at the start.  The following are highlights from that presentation.

AQM Program Highlights

As usual, the AQM team did a bang-up job pointing out common issues found in the various assessment types they review.

On the PA-DSS side of the ledger, a lot of PA-QSAs are having issues with requirement 5.1.6.b regarding application least privilege.  The Council clarified that what they are looking for in this requirement is proof that the application does not run as ‘root’, ‘administrator’ or some other default privileged account in order to run properly.

For P2PE assessments, there have been issues regarding when a double length 3DES key can be used.  The Council explained that a double length 3DES key is only allowed when using derived unique key per transaction (DUKPT).  All other uses must employ triple length keys to be in compliance with P2PE.

Apparently, QSAs and their QA minders are totally missing what is meant by “describe how”.  When describing “how” a QSA must describe all of those procedures used to determine the requirement was satisfied as well as how those procedures prove the requirement was met.

QSAC QA manuals still are not covering topics such as evidence retention and destruction, security incident response plans and code of conduct policy.  The Council reminded everyone to make sure all topics in the QSA Qualifications Requirements document are covered.

Compensating controls were a continuing problem area and that should not be a surprise.  I am constantly fascinated when I receive a ROC for proof of PCI compliance performed by another QSAC and get to see what passes for a valid compensating control worksheet (CCW) at other firms.  Apparently ‘intent and rigor’ of the requirement and ‘above and beyond’ are foreign phrases to a lot of QSAs.  Never mind the fact that the controls used, tested and maintained are usually vague in description.  The Council pointed people to their Portal for remedial training of QSAs that cannot comprehend writing a CCW.  I have written a number of posts on compensating controls.  If you want to write good CCWs, start here for the most current post and it will point you to prior posts.

The Council got some interesting questions from QSAs over the year.  The first one is one that a lot of clients ask us, “Do you really have to come onsite?”  Yes, an onsite visit by the QSA is actually required.  However, how long a QSA needs to be onsite can vary from as little as a couple of days for a long-time client to a week or more for a new client.  Onsite visits can be supplemented by video meetings when needed.  Not unusual these days when a client has worldwide operations and not everyone is located at headquarters or will not be available when the QSA is onsite.

The other question was regarding ROC and AOC dates.  How people keep messing these up is beyond me, but as with the CCWs, I see a lot of ROCs and AOCs out of other firms where the dates on the documents are not consistent.  Basically, the final step of any QSAC’s document finalization process should be to set all of the dates in the ROC and AOC to match.  That way you will avoid this problem.

There was a brief discussion of the Software Security Standard (S3) that will replace the PA-DSS.  Most of the discussion revolved around the proposed timeline.  The standards themselves will be published sometime before year end.  Reporting materials will be published around mid-2019 with training commencing in the Fall of 2019.  The big deadline is that PA-DSS Reports On Validation (ROV) will only be accepted through mid-2020 requiring all reports going forward to be under the S3.  That will mean that by mid-2022, all PA-DSS validated applications will move to “Acceptable for Pre-Existing Deployments”.

Finally, SSL and early TLS got a discussion.  Somehow the word has not gotten around that if a company still uses SSL and/or early TLS, a compensating control must be developed for the relevant requirements.  They also reminded everyone that having SSL or early TLS is NOT an automatic fail.  However, vulnerability scans will have to have explanations developed to justify the use of the protocols as well as document what is done to mitigate their use.

Card Production Security Assessor Program

If you were not aware, the PCI SSC took over the various card brands’ card production programs and created a single common program similar to what the Council did with the Data Security Standard back in 2006.

In response, the Council is creating a new assessor program in 2019.  Card Production Assessor Companies (CPAC) will not need to be existing QSACs, nor will assessors need to be QSAs.  The new assessor training program will be rolled out next year for this standard.  The Council did note that existing card production assessors will be somehow recognized by the new program but did not specify how that recognition would be implemented.

As with QSACs and QSAs, the Council will maintain a database of CPACs and qualified card production assessors.

PIN Assessor Program

As with card production, the Council has also been responsible for PIN standards for a few years now.  As a result, the Council is developing a program for creating PIN Assessor Companies and PIN Assessors.

There will be no need for a PIN Assessor Company to be a QSAC, nor will assessors be required to be QSAs.  This program will also start in 2019.

Global Executive Assessor Roundtable (GEAR)

This is a new group that was established this year.  Its role is to provide a direct communication channel between the PCI SSC and 20 qualified security assessor companies’ (QSAC) senior executive leadership.  This group met for the first time a few days before the start of the Community Meeting.  Each member of GEAR serves for a two-year term.

The 20 QSACs on the GEAR are:

  • @sec
  • Advantio
  • Coalfire
  • Control Case
  • Foregenix
  • IBM Security
  • isec
  • K3DES
  • nccgroup
  • Protiviti
  • PSC
  • RSM
  • Security Metrics
  • Schellman
  • SISA
  • Sysnet
  • Trustwave
  • UL
  • usd
  • Verizon

As usual, it was great catching up with everyone and meeting new Guru fans.  I really appreciate all of the great comments about the blog.  Even though I see the statistics for the blog, it still amazes me how many people read it and appreciate it particularly when you meet so many of them in person.  It is very humbling.

Hopefully I will see you all next year in Vancouver.



