Archive for the 'PA-DSS' Category

05 Jul 17

NESA – Guidance In Search Of A Problem

On Thursday, June 29, the PCI SSC held their quarterly Assessor update webinar.  One of the more interesting discussions was on the topic of the non-listed encryption solution assessment or NESA.

For those unfamiliar with NESA, it is an attempt by the Council to have all end-to-end encryption (E2EE) solutions such as First Data's TransArmor and Verifone's Verishield assessed against the relevant PCI P2PE standards to ensure they are secure.  The problem is that the card brands and the banks have not gotten behind the NESA approach, so it has effectively stalled, much like the P2PE program has.  But on the Thursday webinar we found out just how badly it has stalled, and the Council seems to be getting desperate to salvage it.

The goals of NESA are:

  • The Council reiterated that NESA requires a P2PE-QSA to conduct the assessment using the PCI P2PE assessment programs as guidance. Essentially, the NESA is a P2PE validation without the Council's certification and listing of the solution on the Council's Web site.
  • NESA provides a consistent approach to evaluating non-listed encryption solutions against “best practices”.
  • It provides other PCI assessors, acquiring banks and merchants with information about the risk and PCI DSS responsibilities when using a non-listed encryption solution.
  • It provides input to a merchant’s QSA to consider when conducting the merchant’s PCI assessment.

All of these are admirable goals for NESA.  But the question still remains: do we need NESA?

According to the Council a lot of people in the “payments community” have been clamoring for NESA.  I am not sure exactly who the Council is referring to as the “payments community” but it certainly has not been the banks or the brands.  Those two constituencies are already partnered up with E2EE and P2PE solutions and have not been clamoring for anything other than to use those solutions.

The Council did bring up the organizations behind the solutions already listed as P2PE validated.  That would make sense as they have a vested interest in forcing non-listed encryption solutions through the process.  But as to banks, the brands and QSAs pushing this agenda?  I would seriously doubt it.

Then there is the issue that the Council says QSAs are stumped when they encounter an E2EE solution.  The process of assessing E2EE solutions has been known to QSAs since the various vendors rolled those solutions out years ago.  But with the introduction of P2PE, I would bet that the Council's QSA/ISA training does not cover how to handle E2EE solutions.  And I am sure that since the invention of the NESA process, they have even more reason not to instruct QSAs on how to assess an E2EE solution.  Yet I am sure they still discuss how to assess an application that is not PA-DSS validated.  That is a "shame on them" for ignoring the realities of the real world.

But the process is not that involved.  When encountering an E2EE solution, the QSA needs to ensure that the E2EE solution is implemented according to its implementation guide (IG).  A transaction processor/gateway or an acquiring bank may also require packet captures to prove that the data stream is encrypted.  All of that assessment and testing documentation is submitted to the acquiring bank, and the bank explicitly grants the merchant scope reduction.  Then the QSA can follow the requirements in SAQ P2PE for the assessment.  All of which adds probably two hours to a merchant's PCI assessment versus the cost of a full-on P2PE assessment.  When you look at the cost of a P2PE assessment plus the listing fees to have the solution placed on the Council's Web site, is there any wonder a lot of E2EE solution providers have eschewed the P2PE program?
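As an aside, reviewing those packet captures for clear-text cardholder data is easy to script.  Here is a minimal sketch of such a spot check, assuming the raw TCP payloads have been exported to a file (the file name is hypothetical); a real review would also validate the encryption in use, not just the absence of recognizable PANs.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def pan_candidates(payload: bytes):
    """Yield 13-19 digit runs in a raw payload that pass the Luhn check."""
    text = payload.decode("latin-1", errors="replace")
    for match in re.finditer(r"(?<!\d)\d{13,19}(?!\d)", text):
        if luhn_ok(match.group()):
            yield match.group()

if __name__ == "__main__":
    with open("poi_capture.bin", "rb") as f:  # hypothetical capture export
        hits = list(pan_candidates(f.read()))
    print("Possible clear-text PANs found!" if hits else "No clear-text PANs found.")
```

If the stream is properly encrypted at the POI, this kind of scan should come up empty.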

First Data and Verifone have been adamant since P2PE was introduced that they will never go through P2PE because it is not needed.  Given they are partnered with most of the large processors and banks, their lack of support for P2PE means a lot and also means that until they get on board with either NESA or P2PE, both of these standards are in trouble.

But the most troubling comments occurred at the end of the Council’s brief discussion of NESA.

  • NESA is NOT a program. It is only “guidance”.
  • NESA may not result in scope reduction.
  • There is no formal NESA documentation or template.

When the Council says that something is “guidance”, there is no mandate for anyone to do anything.  This is how QSAs are to treat those Information Supplements published periodically by the Council.  In this case, NESA is only a suggestion.  So, until the brands and banks get behind the NESA process, there is no reason to have a NESA performed.

The next two comments go together.  If there is no formal deliverable for QSAs to review, how does a QSA evaluate that any NESA process was conducted adequately?  And if that is the case, of course the granting of scope reduction is not likely.  After all, if a QSA is not sure about the NESA, how is the bank supposed to evaluate it, let alone pay for it?  And if scope reduction is not achieved, then what in the world is the point of NESA in the first place?  The only purpose I can see is to give P2PE-QSACs an ability to push their services on the E2EE solution vendors to make their services worth the cost incurred with the Council.

The only other benefit that I can see is an opportunity for certain P2PE-QSACs to flood us all with NESA Certificates since their PCI Compliance certificates are worthless.

But in the end, you really start to wonder what the Council was thinking when they put this process together.  Time will tell, but I am guessing and hoping that NESA, like P2PE, will die a quick and quiet death.

30 Sep 16

2016 North American PCI Community Meeting

It was a hectic week out in Las Vegas at the Community Meeting this year.  I wish I had more time this year to just hang out with everyone, but I was in the middle of a number of assessments that needed to get done, so I was working at night and attending sessions during the day.

By the time you read this, the slide decks from the sessions will have been posted on the Council’s Web site.  So all of you that attended will be able to download those presentations.  You go to the link provided in the program guide, provide your name, organization name, email address and the password from the program guide (ve4eqepR) and you are in.

The Council tried the 20 minute "TED Talk" format again with the Wednesday sessions.  A number of the sessions I attended could have easily used an extra 10 minutes if not a complete hour.  I know the Council is trying to move things along and get a lot of information covered, but discussing topics like "the cloud" or EMV standards just cannot be properly accomplished in 20 minutes.  I do not care how good the speaker is or how well organized the presentation.

Here are some of the more notable highlights.

The Assessor Session Is Back

Possibly the most anticipated session of the Community Meeting this year was the return of the Assessor Session after being missing for two years.  But unlike previous years where this session occurred before the start of the Community Meeting, the return of the Assessor Session was moved to the end of the Community Meeting.  I heard a number of complaints throughout the week from assessors about being at the end of the meeting.  Yet when Thursday lunch came around, there were a lot of QSAs, ISAs and ASVs that adjusted their travel schedules (Guru included) to attend this session.

While I originally agreed with people that moving the Assessor Session to the end was not a good idea, the more I have thought about it, the more I think it was better at the end.  That way assessors can get questions about topics that come up during the meeting answered while we are all together.  I know we all want to get home, but I think the Assessor Session offers more value to all of us at the end.

On the not so good side, the Council chose to use up an hour and 10 minutes to present a variety of topics, some of which took way too long to discuss.  But the larger question was why was this material not presented during the main conference?  Not only did all of the meeting attendees miss out, but there were people that did not get their questions asked.  I am also sure that running long discouraged a lot of people from asking questions as well.

That said, there were a number of good questions asked during this session and the Council rewarded five people with large PCI SSC coffee mugs for their “good” questions.

One question though really created a stir.  I will address that question regarding multi-factor authentication (MFA) as a separate post to be published later.  However I will say this about this discussion.  The Council really needs to go back and re-think their position on MFA if what they said is accurate.

The Council was asked about SAQ A and where it is headed.  The concern in the assessor community is that the mechanism that issues/controls the iFrame/redirect needs protection.  However the changes to SAQ A for v3.2 did not seem to address this obvious risk.  Based on how the question was answered, I am guessing that the hosting community is trying to keep SAQ A as simple and easy as possible regardless of the risk.

Another area that the Council agreed to review was the change to requirement 3.2 in the ROC Reporting Template.  In v3.2 of the template you can no longer mark those requirements as Not Applicable; however, it was pointed out that an 'NA' is still allowed in SAQ D.  The reason for seeking this clarification was related to past comments from the Council to follow the SAQs for P2PE (SAQ P2PE) and outsourced eCommerce (SAQ A) when filling out a ROC for merchants with these solutions.  It was pointed out that neither of these SAQs has requirement 3.2 in them, so how is a QSA/ISA supposed to respond to it in the reporting template if it cannot be marked as 'NA'?

Understanding The Current Data Breach Landscape (aka Verizon DBIR Report Discussion)

When Verizon sends out Chris Novak, you know you will get a great presentation on the data breach incident report aka ‘The DBIR’.  This year was no exception albeit somewhat depressing as Chris again pointed out that most breaches are the result of sloppy operations, lax security and insecure applications.  Essentially security issues that we should have gotten past a long, long time ago but have not.

Architecting for Success

Who better to talk about success than a representative of the Jet Propulsion Laboratory (JPL), discussing how to develop spacecraft to explore the most inhospitable environments we know: outer space and other planetary bodies.  Brian Muirhead was the keynote speaker on Wednesday and is the Chief Engineer for the Mars Science Laboratory, the group that designed and developed the various Mars exploration rovers.  He gave a great discussion on how to look out for problems and develop self-managing devices.  Very interesting, and I am sure an eye opener for people: we need to stop accepting the sloppy and messy solutions we get for handling cardholder data.

Internet of Things Keynote

The Thursday keynote was just a great time.  While there seemed to be very little directly relevant to PCI compliance presented by Ken Munro and an associate from Pen Test Partners, it was a fabulous time exploring the wonderful world of flawed technology, from a tea kettle to a refrigerator to a child's doll.  In the case of the child's doll, they removed the word filter database and thereby allowed the doll to say things that no child's toy should say.

What was relevant to PCI was the ease with which these folks were able to reverse engineer firmware and software used by these devices.  It gave a lot of people unfamiliar with IoT and penetration testing in the room pause as to how seemingly sophisticated technology can be easily abused.

Cloud Security

While it was great to see Tom Arnold from PSC, the even better thing about this presentation was the fact that Amazon provided an actual human being, in the form of Brad Dispensa, to talk about Amazon's EC2 cloud.  While billed as a discussion on incident response, the session provided great insight into AWS's EC2 service offering as well as the variety of new tools available to manage the EC2 environment and to provide auditors and assessors with information regarding the configuration of that environment.  The key takeaway from this session is that organizations using EC2 can provide everything needed for conducting a PCI assessment from their EC2 management console.
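For those who have not seen it, that kind of evidence gathering in AWS also lends itself to scripting.  A minimal sketch, assuming the boto3 SDK and already-configured credentials (the output file name is my own):

```python
import json

import boto3  # AWS SDK for Python; credentials assumed already configured

def export_security_group_evidence(region: str = "us-east-1") -> None:
    """Dump EC2 security group rules as point-in-time evidence for an assessor."""
    ec2 = boto3.client("ec2", region_name=region)
    groups = ec2.describe_security_groups()["SecurityGroups"]
    with open("security_groups_evidence.json", "w") as f:
        json.dump(groups, f, indent=2, default=str)
    print(f"Exported {len(groups)} security groups.")

if __name__ == "__main__":
    export_security_group_evidence()
```

The same pattern works for instance inventories, VPC configurations and the like, which is exactly the sort of configuration evidence a QSA needs.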

EMVCo

Brian Byrne from EMVCo gave a great 20 minute session on EMV.  The slide deck will be more valuable than the presentation because he had so much content to share and so little time to share it in.  Of note was his discussion of version 2.0 of three domain secure otherwise known as 3D Secure or 3DS.  While v1.0 will remain under the control of Visa, EMVCo has taken over management and development of the 3DS standard.  The new version is in draft and only available to EMVCo members, so this was the first time I had been able to see what the new version has to offer.  But because of the time constraint, I will need to wait for the slide deck to be published to know more.

PCI Quality Assurance Program

Brandy Cumberland of the Council provided a great presentation on the Council's quality assurance program with which all QSAs have become familiar.  I appreciated her discussion of James Barrow, who took over the AQM program after most of us wanted to kill his predecessor for creating one of the most brutal QA programs we had ever seen.  James' efforts to make the AQM program more relevant cannot be overstated, as he took over a very troubled affair.  This was a bittersweet discussion, as James passed away right after last year's Community Meeting and will be greatly missed by those of us that came to know and respect him.  Brandy took over the AQM program when James left the Council and has been doing a great job ever since.  She is possibly one of the best resources the Council has and does the AQM program proud.

Application Security at Scale

The last great session of the conference I saw was from Jeff Williams of Contrast Security.  The reason this session was great was that it discussed what application developers can do to instrument their applications for not only security, but also operational issues.  He introduced us to interactive application security testing (IAST) and runtime application self-protection (RASP).  The beauty of this approach is that applications get security in the form of embedded instrumentation that results in actionable analytics, which then allow decisions to be made to respond to threats to these applications.  It sounds like an interesting approach and concept and I cannot wait to see it in action.
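To make the RASP idea concrete, here is a toy sketch of runtime self-protection.  This is my own illustration, not Contrast Security's product, and a real RASP agent inspects execution context rather than just input strings:

```python
import functools
import logging
import re

logging.basicConfig(level=logging.WARNING)

# Crude injection signature, for illustration only.
SQLI_PATTERN = re.compile(r"('|--|;|\bunion\b|\bor\s+1=1\b)", re.IGNORECASE)

def self_protecting(func):
    """Toy RASP-style instrumentation: watch inputs at run time, log and block."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SQLI_PATTERN.search(value):
                logging.warning("Blocked suspicious input to %s: %r",
                                func.__name__, value)
                raise ValueError("Request blocked by runtime instrumentation")
        return func(*args, **kwargs)
    return wrapper

@self_protecting
def lookup_order(order_id: str) -> str:
    # Deliberately naive query construction, to show what the wrapper catches.
    return f"SELECT * FROM orders WHERE id = '{order_id}'"

print(lookup_order("12345"))        # normal input passes through
# lookup_order("' OR 1=1 --")       # would be blocked and logged
```

The embedded logging is where the "actionable analytics" come from: every blocked call is an alert someone can act on.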

As always, it was great to see and catch up with all of my friends in Las Vegas at the PCI Community Meeting.  It was also great to meet a lot of new people as well.  I look forward to seeing all of you again next year in Orlando.

29 Sep 16

Microsoft Changes Their Patching Strategy

Back in May 2016, Microsoft issued a blog entry on TechNet giving the world insight into its new patching strategy: the concept of a monthly "rollup" patch, or what a lot of people are calling a "mega-patch".  In August another blog entry was posted that further detailed this strategy and explained that from October 2016 going forward, this is how Microsoft would patch Windows.

But there is even more to it.  For WSUS and SCCM users, security patches will be separated from the Monthly Rollup into their own Security mega-patch.  The idea behind separating the security patches into their own mega-patch is to allow organizations to at least stay current on security.  However, there is a twist on this approach as well.  Organizations such as small businesses that do not use WSUS or SCCM will only get a single mega-patch through Windows or Microsoft Update that contains the Monthly Rollup and Security mega-patches in one.

So what could go wrong you might be asking?

The biggest drawback to this scheme is that, should you have any issue with a mega-patch, you must back out the whole patch, not just the item that is creating the issue.  That means instead of having just one potential issue to mitigate, you could have as many issues to mitigate as the patch contains.  From a PCI compliance perspective, that could mean lots of missing patches in your Windows systems if your systems run into an issue with a mega-patch.  This can get doubly bad for organizations not using WSUS or SCCM because they will be backing out security patches as well as application patches.

But it can get even worse.  These mega-patches are cumulative, meaning that every month Microsoft adds the previous mega-patch to the new month's mega-patch.  Say you apply the monthly mega-patch and your point of sale (POS) application fails to work with it, so you must back it out.  If your vendor is slow to resolve that compatibility issue, you will not be able to patch your POS systems at all, because month after month every new mega-patch will contain the same incompatible fix.
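A tiny model makes the arithmetic of this plain; the fix identifiers here are made up:

```python
# Each month's rollup is cumulative: it contains every prior month's fixes.
rollups = {
    "2016-10": {"KB-A", "KB-B"},
    "2016-11": {"KB-A", "KB-B", "KB-C"},          # KB-C breaks the POS app
    "2016-12": {"KB-A", "KB-B", "KB-C", "KB-D"},  # December inherits KB-C
}

installed = set()
installed |= rollups["2016-10"]  # October applies cleanly

# November is backed out entirely because of KB-C; there is no way to take
# the rest of the rollup without it.  December is refused for the same reason.
missing = rollups["2016-12"] - installed
print("Fixes missing until the vendor resolves compatibility:", sorted(missing))
# -> ['KB-C', 'KB-D'], and the gap widens every month
```

One bad fix therefore holds every subsequent month's fixes hostage.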

But I foresee small businesses running into the worst issue with this new approach.  Since small organizations likely will not be using WSUS or SCCM, they will not get a separate Security mega-patch; they will only get the single mega-patch that combines the Monthly Rollup and Security into one.  If any issue occurs with that single mega-patch, the small businesses will not even get their security patches.  That will create a situation where the organization must figure out how to mitigate its inability to secure its systems.  In addition, that could mean months of security issues until the original compatibility issue can be resolved.

But to add insult to injury, I can also see situations where a vendor has issues resolving a compatibility problem with a mega-patch and finally gets it fixed only to encounter a new compatibility issue with the latest mega-patch.  Based on how Microsoft is running these mega-patches, there appears to be no way to go back to a compatible and useable mega-patch.  This could result in organizations being unable to patch at all due to ongoing compatibility issues.

At a minimum, I think Microsoft will need to make the Security mega-patch separate from the Monthly Rollup for all organizations, not just those using WSUS or SCCM.  At least then, all organizations can apply security patches independent of the Monthly Rollup which would be more likely to be the one that would create compatibility issues.

It will be interesting to see how this new patching strategy plays out.  Hopefully it does not create even more risk for users of Windows.  If it does, I would not be surprised if the PCI SSC invokes significant new controls on Windows-based solutions.  That could be the final straw in using Windows for a lot of merchants.  Time will tell.

10 Jun 16

Is The PCI DSS Even Relevant Any More?

First the National Retail Federation (NRF), then bloggers.  Organizations and people are piling on the PCI SSC and its standards, all because of the United States Federal Trade Commission's (FTC) fact finding project.  Seems like PCI is now a bad three letter word.  But with the changes that have been implemented or will soon be implemented, I am starting to wonder about the relevance of the PCI DSS.  So I thought I would explore these topics and explain what has led me to that conclusion.

Ever since the FTC announced their little fact finding mission, I have consistently said that the FTC is late to the party.

Why do I think the FTC is late?

The FTC's fact finding efforts are, I am sure, in response to the Target, Michaels, Home Depot, etc. data breaches which resulted in tens of millions of payment card accounts being exposed and potentially used for fraudulent purposes.  Remember, the FTC is a governmental body, so taking action can take a bit of time, in this case at least three years and longer than most people would have desired.  But they eventually got around to it.  While this fact finding effort is a valid way to get up to speed on a problem, the trouble is that the threat landscape has changed between those notorious breaches and the FTC getting its act together.

What in the threat landscape has changed?

The vast majority of mid-sized and large retailers have implemented or are in the process of implementing point-to-point encryption (P2PE) or end-to-end encryption (E2EE) and tokenization solutions to minimize their PCI scope to only the point of interaction (POI), otherwise known as the card terminal.  As a result, the threat of large scale breaches at these merchants is, or within the next 12 to 18 months will be (based on my knowledge of a large number of such efforts), near zero.  The reason is that these merchants' point of sale (POS) and other systems will no longer have access to cardholder data (CHD) or sensitive authentication data (SAD).

How can the threat be near zero?

P2PE/E2EE and tokenization limit scope to only the POI, and the remaining threat is very, very low because of how the POI must be implemented to work with P2PE/E2EE and/or tokenization.  I am not going to discuss the security features of these solutions in detail so as not to tip the hand of those organizations implementing them.  Let me just say that there is a lot of information that must be loaded into the POI in order to swap out terminals.  Even then, there are additional controls involving the registration of the device by the merchant and/or service provider that preclude terminal swaps without generating some form of alert.

The one threat that does still remain is the use of an overlay for skimming cards.  But that risk varies from POI vendor to POI vendor and even by POI model within a vendor.  And it is not like vendors have not taken notice of the overlay problem.  Vendors have gotten a clue and are changing the design of their POI to make using an overlay as difficult as possible.  I have a client that went with a POI that has various angles, long swipe tracks, LED lights and other features that would make an overlay not only very expensive to engineer but also very difficult to make appear seamless to customers and clerks.  Over time I expect all POI manufacturers to adopt strategies that minimize the ability to use overlays.

The result of all of this is that merchants are no longer the risk (if they even present a risk) they were two or more years ago.

So who or what does that leave at risk?

ECommerce Web sites are still a huge problem.  EMV as it exists today does nothing to stem the problem of online fraud.  Even if a merchant has outsourced eCommerce, they still have to manage that environment as well as deal with the chargebacks and disputes that come from eCommerce card transactions.  I have heard rumors of solutions that are coming to address eCommerce, but I have yet to see any formal announcements of those solutions.  So for the foreseeable future, merchants with an eCommerce presence will still have to address some form of PCI assessment for that environment.

Then there is any merchant that has not gotten on the P2PE/E2EE and tokenization bandwagon.  All merchants should be getting POI that encrypt and/or tokenize at the swipe or dip of a customer's card.  Adopting such solutions will leave the merchant with only having to comply with the requirements in 9.9 and 12.  I know for some merchants that will mean an investment, but the payoff is extremely reduced PCI scope and effectively taking almost all of the risk out of card payments.

The organizations that end up with a huge target on their backs are any service providers, transaction processors, issuers or financial institutions that have CHD and/or SAD stored in their files and/or databases.  An unfortunate fact of life is that transaction processors, issuers and financial institutions are always going to have to have some amount of CHD/SAD in their files and databases because of the nature of their business.  It is these organizations where the full on (i.e., Report On Compliance or ROC) PCI DSS assessment will never go away.

For merchants that have moved to P2PE/E2EE/tokens, I could see a move to an annual self-verification that those solutions are still implemented and functioning as designed.  I could additionally see that, every three years or so, the card brands requiring an independent assessment by a QSA/ISA that the controls for P2PE/E2EE/token solutions are still in place and functioning correctly.  The reason for independent verification is that changes get made and those changes might affect the environment making it less secure.  For merchants not using P2PE/E2EE/tokens, I would think the current SAQs and ROC will remain in place with an annual assessment required.

Will other PCI standards be marginalized or disappear?

The PA-DSS will never leave us.  Software developers need to develop secure code, and those service providers, transaction processors, issuers and financial institutions that store CHD/SAD need applications that do that securely, so there is a built-in constituency for the PA-DSS.  ECommerce solutions are also still going to need PA-DSS validation.  But regardless of whether P2PE/E2EE and tokenization are implemented, any application potentially dealing with CHD/SAD will need to be assessed under PA-DSS to ensure that any CHD stored is stored securely and is erased securely.  Then there are the unknowns of the future.  You never know what might come along, so there is always a possibility that some solution might need to securely store CHD or other payment related information.  The bottom line is that I find it very hard to believe that the PA-DSS could ever be dropped.

The PTS standard will also not disappear because those POI need to be validated to handle CHD/SAD securely and work properly regardless of P2PE/E2EE solutions.  The PTS is the only standard that is a card brand requirement, not a PCI DSS requirement.  It is the card brands that demand merchants use only PTS validated POI and I do not see that requirement going away when the POI is going to become the remaining target at merchants.

The ASV standard will not go anywhere as there will still be eCommerce solutions that require vulnerability scanning.  Most merchants will implement eCommerce solutions that minimize their PCI scope using a redirect or iFrame.  Although I can see it coming that even merchants using those solutions will still be required to have their eCommerce site, now deemed out of scope, scanned for vulnerabilities.  The reason is that the invocation point of the redirect or iFrame is at risk of modification by an attacker.
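That invocation point is also something a merchant can watch on its own.  A minimal sketch of an integrity check, where the URL, marker comments and baseline digest are all placeholders of my own:

```python
import hashlib
import urllib.request

# Hypothetical checkout page and the known-good digest of the payment
# iFrame/redirect snippet as originally deployed.
CHECKOUT_URL = "https://example-merchant.com/checkout"
BASELINE_SHA256 = "replace-with-known-good-digest"

def payment_snippet(html: str) -> str:
    """Pull out the section of the page that invokes the hosted payment iFrame.
    Real pages warrant a proper parser; this assumes marker comments exist."""
    start = html.find("<!-- payment-iframe-start -->")
    end = html.find("<!-- payment-iframe-end -->")
    return html[start:end] if start != -1 and end != -1 else html

with urllib.request.urlopen(CHECKOUT_URL) as resp:
    snippet = payment_snippet(resp.read().decode("utf-8", errors="replace"))
digest = hashlib.sha256(snippet.encode()).hexdigest()

if digest != BASELINE_SHA256:
    print("ALERT: the payment invocation point has changed -- investigate.")
```

Run on a schedule, a check like this catches exactly the tampering that makes the redirect/iFrame a target.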

One standard I do believe will eventually go away is P2PE.  The reason is that there is very little to gain with a P2PE versus an E2EE solution.  Both solutions are essentially the same; the only additional work required for E2EE is documenting that E2EE has been implemented appropriately, submitting that documentation to the client's acquiring bank and getting the bank to agree to the PCI scope reduction.  As a result, I believe that the P2PE standard will slowly and quietly disappear into the night, as the cost of going through the assessment process along with the Council's filing fees just cannot be justified by a lot of influential vendors such as Verifone and First Data.

There is my rationale for where I think things are hopefully headed.  Only time will tell if the rest of the world sees things the same way.

28 Apr 16

Hold Your Horses

UPDATE: The ROC Reporting Template is available as a PDF on the Document Library page after the Reporting Template and Forms banner almost all the way down the page. The Word version of the ROC Reporting Template is now available from the PCI Portal. No word yet on the PA-DSS and ROV Reporting Template.

Yes, the PCI SSC released the final version of the PCI DSS v3.2, an updated Glossary and Summary of Changes document on their Web site this morning, but we are missing a key piece.  The Report On Compliance (ROC) Reporting Template.

Why is that important you might ask?

The ROC Reporting Template is the document that contains all of the tests that a QSA/ISA needs to conduct to prove that an organization is PCI compliant.  It tells you and your QSA/ISA what evidence needs to be gathered, how to gather it and the level of effort required.  Without that information, an assessment under v3.2 cannot be performed.  Nor do we truly know the breadth and depth of the changes the Council has made.

The Council promised on their Webinar a month ago that all documents would be released on the same date.  But as of this writing, the ROC Reporting Template is missing in action.

Until we have that document, we have nothing.

Also of note is that the PA-DSS v3.2 and its related Report On Validation Reporting Template are missing in action as well.

09 Apr 16

Living In PCI Denial

This was one of those weeks where you see something and all you can do is shake your head and wonder what some organizations think when it comes to PCI.  What added insult to injury in this case was that the organization arguing over PCI compliance is a manufacturer of card terminals, also known as points of interaction (POI).  It shocked me that such an organization was so clueless about PCI as a whole when you would think it is their business to know.  Worse, my client's transaction processor and acquiring bank are also apparently clueless.

As background, I am working on a client's Report On Compliance (ROC).  This client has almost completed the roll out of an end-to-end encryption (E2EE) solution at all of their 4,000+ retail locations.  This E2EE solution will take all but the POI at those retail locations out of scope for PCI compliance.  That is the good news.

But if there is good news, you know there must be bad news.  In reviewing their documentation of this E2EE solution, I discovered that the POI vendor is providing management and updates to the POI through a terminal management system (TMS).  Since this TMS solution/service connects directly to my client’s cardholder data environment (CDE), I naturally asked the client for a copy of the vendor’s Attestation Of Compliance (AOC) for the TMS solution/service.

I thought those worthless PCI Certificates of Compliance took the cake.  Then, BAM!  I got the following message forwarded to me by my client from the POI vendor.  I have redacted all of the potential information that could identify the relevant parties and the TMS solution/service.

“Please see the follow up note below that you can send to your QSA for review and feedback:

  1. TMS systems in our industry do not require any type of PCI certification since PCI is concerned about card holder information that would be at risk. Since [vendor solution] does not have any card holder data at all, it falls outside of PCI requirements.  [Vendor solution] is merchant configuration and estate management tool only and as such, no payment card information passes through it, or directed to it.  In addition, no secure keys are stored on [vendor solution] so transaction data cannot be decrypted with anything on [vendor solution] or POS.
  2. [Vendor] Hardware and [vendor solution] Software are all PCI PTS compliant and certified and listed on the PCI website. Transactions are encrypted in hardware using the [encryption solution] keys which again [vendor solution] has no knowledge.  Transaction information can only be decrypted by [processor] the processor.  [Vendor solution] has no knowledge of this encrypted information being sent directly from the [vendor] to the processor.
  3. The beauty and simplicity of [vendor solution] semi-integrated terminal application is that is has all transaction data go directly to the Processor ([processor]) and no customer data is directed to the POS or [vendor solution] which makes the POS out of PCI Scope by the very nature of no card holder data in their environment.
  4. [Client] has a merchant certification with [processor] for the [encryption solution] with our [vendor solution] terminal application. Any questions regarding the certification should be directed to [acquiring bank] or a [processor] representative.

Let us know if your QSA has any further questions and we can also schedule a concall with all parties to address any concerns on [vendor solution] TMS and PCI.”

The first thing that wound me up is that this vendor is a business partner of my client's transaction processor.  The processor is also a business partner of my client's acquiring bank.  Those two organizations put this vendor forth to my client as being able to provide POI compatible with the processor's E2EE and tokenization solution.  Obviously from this vendor's response, these two well-known institutions did nothing in the way of due diligence to ensure that this vendor and its services were PCI compliant.

The second thing that totally irritated me is that there is no excuse for this vendor’s uneducated response.  Granted, this vendor is new to the US market, but they have been supplying POI to other merchants all over other parts of the world.  Which then starts to make you wonder just how lame are the banks, processors, card brands and other QSAs that they have not been called on the carpet about this before.  But that is a topic for another post and a good reason why the FTC is investigating the PCI compliance industry.

So let me take apart this vendor’s response.

“TMS systems in our industry do not require any type of PCI certification since PCI is concerned about card holder information that would be at risk.”

Wrong!  On page 10 of the PCI DSS the first paragraph under ‘Scope of PCI DSS Requirements’ clearly defines what is in scope for PCI compliance.

“The PCI DSS security requirements apply to all system components included in or connected to the cardholder data environment. The cardholder data environment (CDE) is comprised of people, processes and technologies that store, process, or transmit cardholder data or sensitive authentication data. “System components” include network devices, servers, computing devices, and applications.”

The operative phrase the TMS solution/service falls under is “connected to”.  The TMS solution/service directly connects to my client’s CDE.  That solution/service may not process, store or transmit cardholder data (CHD) or sensitive authentication data (SAD), but it is directly connected to my client’s CDE.  As a result, according to the above definition, the TMS solution/service is definitely in scope for PCI compliance.

“[Vendor] Hardware and [vendor solution] Software are all PCI PTS compliant and certified and listed on the PCI website.”

PTS certification is a card brand requirement, not a PCI DSS requirement.  Nowhere in the PCI DSS does it require that a PTS certified POI be used so I really do not care about this statement as it has nothing to do with my PCI DSS assessment activities.  If PTS were a PCI DSS requirement, then all of those people using Square and the like would be non-compliant.

“In addition, no secure keys are stored on [vendor solution] so transaction data cannot be decrypted with anything on [vendor solution] or POS.”

“Transaction information can only be decrypted by [processor] the processor.”

True, your TMS solution/service does not have the encryption keys.  But the firmware delivered by the TMS solution/service does have access.  (Unless you are the first POI vendor I have ever encountered that spent the huge amount of money required to truly create a hardware-only encryption solution.)  Given the low retail price and discounting of your POI you gave my client, I very seriously doubt that is the case.  So the firmware that your TMS solution/service delivers is what is doing the encryption and therefore has access to the encryption keys.  So while the TMS solution/service does not have the keys, it could be used to deliver rogue firmware that could obtain them.

Then there is the firmware delivery itself by your TMS solution.  If someone hacks your TMS environment, how easy would it be for them to have it deliver a rogue version of your firmware?  Since my client has no AOC, I have no idea if your security measures surrounding your TMS solution are adequate to prevent such an attack.
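For what it is worth, the standard mitigation for that rogue-firmware risk is for the terminal itself to verify a signature before installing anything the TMS delivers.  A simplified sketch of the idea; real POI use asymmetric code signing rooted in the terminal's secure boot chain, and the HMAC key here is purely a stand-in:

```python
import hashlib
import hmac

# Stand-in for a vendor code-signing key; real terminals verify asymmetric
# signatures against a key anchored in the device's secure boot chain.
SIGNING_KEY = b"hypothetical-signing-secret"

def sign_firmware(image: bytes) -> str:
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).hexdigest()

def install_firmware(image: bytes, signature: str) -> None:
    """Refuse any TMS-delivered image whose signature does not verify."""
    if not hmac.compare_digest(sign_firmware(image), signature):
        raise RuntimeError("Firmware rejected: signature mismatch (possible rogue image)")
    print("Signature verified; installing firmware.")

image = b"...firmware bytes..."
install_firmware(image, sign_firmware(image))                # verifies, installs
# install_firmware(image + b"tamper", sign_firmware(image))  # would be rejected
```

Whether this vendor's TMS and terminals actually do anything like this is exactly the kind of question an AOC would answer.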

“[Client] has a merchant certification with [processor] for the [encryption solution] with our [vendor solution] terminal application.”

Such a statement ranks up there with those previously mentioned worthless PCI Certificates of Compliance.  Any QSA is required to obtain an AOC for the TMS solution/service to ensure that it is PCI compliant or the solution/service must be assessed as part of the merchant’s PCI assessment.

PCI DSS requirements under 12.8 are very clear as to everything a merchant needs to be able to provide to their QSA regarding third party PCI compliance.  Chief among those items of evidence is the AOC for your TMS solution/service.

So I had a conference call with my client's bank to discuss this situation.  I pushed back very hard when they told me that my client needs to write a compensating control for their business partner's incompetence.  I even got an "atta boy" from the bank for identifying to them that they have a PCI compliance and potential security issue.  But I could not make the bank budge on the compensating control, so I am off to get that written.

The lesson to be learned from this post is that nothing can be taken for granted when doing a PCI assessment, even when your transaction processor and bank are involved.  A lot of people and QSAs would assume that a POI vendor would know better and that their bank and transaction processor had vetted the POI vendor.  Therefore, why worry about this vendor?  However, as I have pointed out, you can never take anything for granted, even when it involves organizations that you would think would know better.

This is just one way of many that could result in an organization being breached.  The TMS solution/service is a gateway directly to the merchant’s CDE.  Yet there has been no PCI assessment of that solution/service to ensure that it is PCI compliant and the risk it could be subverted has been minimized.

Thank goodness it is the weekend.  Oh, wait.  This weekend’s project is my income taxes.  Looks like I will be cranky all weekend as well.

01 Apr 16

The Council Speaks About v3.2

If you missed it, do not feel bad.  I too had to be told by friends and colleagues that the PCI SSC was having a Webinar on Thursday, March 31, to discuss the upcoming changes to the PCI DSS and PA-DSS as well as changes to other areas as a result.  Apparently the Webinar was announced in the March issue of the QSA newsletter.

To begin their presentation, the Council made a big deal out of explaining why they are dumping the three year update cycle.  The bottom line about this is that they feel the PCI DSS and PA-DSS are mature and therefore any future updates will be evolutionary not revolutionary as they have been in the past.  As a result, we can expect more minor changes more often.  Much like when the PCI DSS started out and we quickly got v1.1 followed by v1.2.

PCI DSS v3.2

The real piece of news here was that two-factor authentication (TFA) is going to be required for all administrative access to the cardholder data environment (CDE) regardless of whether that access is from the internal network or a remote network.  I am sure this is in response to the number of breaches that involved administrators being spear phished.

Speaking of TFA, the Council indicated that they are going to switch terminology from “two-factor” authentication to “multi-factor” authentication (MFA).  However, they were very clear when they discussed this change in terminology that they still mean the three factor model of something you know, something you have, and something you are.  Their rationale on this change is to align the DSS with industry terminology.  In the Q&A they got a lot of questions on this change as most security professionals said that clients would view MFA as including two sets of credentials versus TFA which has truly different factors.  So we will see if the MFA decision stands when the new standard is released.

In addition, the Council outlined some other key changes we can expect to see in the latest version of the DSS.  These are:

  • Two new Appendices are being added to the PCI DSS. The first of which discusses the SSL/early TLS issues.  The second is the incorporation of the Designated Entities Supplemental Validation (DESV) requirements into the DSS.
  • Allowing the display of the PAN to be more than just the first six digits and the last four digits, to align the PCI DSS with the coming changes to ISO 7812 which will increase the issuer identification number (IIN) from six digits to eight digits (a masking sketch follows this list).
  • Adding a number of additional requirements for service providers including: documentation of cryptographic architecture, detection/reporting on critical security control systems, penetration testing to confirm segmentation every six months, establishment of a formal PCI compliance program, and quarterly confirmation that personnel are following all security policies, standards and procedures.
  • Periodic testing that all change control policies, standards and procedures are in place and operating as designed. This is the first of many business as usual (BAU) requirements that will be added to the PCI DSS.
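Since that PAN display change trips people up, here is a minimal masking sketch showing today's first-six/last-four rule and the eight-digit IIN variant the ISO 7812 change anticipates; the function name is my own:

```python
def mask_pan(pan: str, lead: int = 6, trail: int = 4) -> str:
    """Mask a PAN for display, keeping the leading IIN/BIN digits and the last
    four.  lead=6 matches today's rule; lead=8 anticipates eight-digit IINs."""
    if len(pan) <= lead + trail:
        raise ValueError("PAN too short to mask meaningfully")
    return pan[:lead] + "*" * (len(pan) - lead - trail) + pan[-trail:]

print(mask_pan("4111111111111111"))          # 411111******1111
print(mask_pan("4111111111111111", lead=8))  # 41111111****1111
```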

More On SSL/Early TLS

The Council gave a bit more information regarding why they extended the deadline on SSL and early TLS out to June 30, 2018.  Not surprisingly, the reason for the extension was push back from a variety of sources that found the 2016 deadline too short to convert.

I know from my own experience, I have a few clients that have contracts that do not allow them to make such changes without consultation with every customer impacted.  In one case, it was going to take almost nine months just to consult with all of their impacted customers and then another seven months to implement the changes into production.  In the perfect scenario, they would have cut over around September 2016, but they said past experience indicated a more likely date would have been July 2017 at the earliest.

The presenter reiterated that service providers must meet the June 30, 2016 deadline.

Also discussed was how ASVs are supposed to deal with SSL and early TLS issues.  Until June 30, 2016, if an ASV encounters SSL or early TLS vulnerabilities, the ASV must obtain the mitigation plan or a letter from their customer attesting that a mitigation plan has been developed and the date when the customer will have addressed the vulnerabilities related to SSL and/or early TLS.  The ASV does not need to assess the mitigation plan as the assessment of the mitigation plan is something the organization’s QSA must perform as part of the assessment process.

The final key point on this topic, which the Council could not stress enough, was that the pushed-out deadline is no justification for an organization to wait until the last minute before addressing these critical vulnerabilities.  If an organization can meet the June 30, 2016 deadline, then they should meet that deadline.  If they need until December 31, 2016 to convert, then they need to mitigate the risk until December 31, 2016 when they can drop SSL and early TLS.  But waiting for the sake of waiting because the deadline is in 2018 is unacceptable and needs to be called out as 'Not In Place' by QSAs.

Related to this discussion was the fact that vulnerability management still needed to be addressed through the mitigation.  So if new vulnerabilities to SSL and/or early TLS are discovered while the organization is remediating their implementations of SSL/early TLS, the organization must still comply with requirements 6.2 and 11.2.

PA-DSS v3.2

No news is good news here.  There will be little change to the PA-DSS standard other than to align it with PCI DSS v3.2.

However two significant changes are coming to an application’s Implementation Guide (IG).

The IG will now be required to address debugging logs that contain PAN data.  Those debugging logs will be required to be protected, debugging will need to be immediately disabled once it is no longer needed and the debugging log data must be securely deleted as soon as it is no longer needed.
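A sketch of one way to meet that IG requirement in code: a logging filter that masks anything PAN-shaped before a debug record ever reaches disk.  The regex and file name are my own illustration:

```python
import logging
import re

# 13-19 digit runs, captured as lead-6 / middle / last-4 so we can mask.
PAN_RE = re.compile(r"(?<!\d)(\d{6})(\d{3,9})(\d{4})(?!\d)")

class PanMaskingFilter(logging.Filter):
    """Mask anything PAN-shaped before a debug record is written."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = PAN_RE.sub(
            lambda m: m.group(1) + "*" * len(m.group(2)) + m.group(3),
            str(record.msg))
        return True

log = logging.getLogger("app")
handler = logging.FileHandler("debug.log")
handler.addFilter(PanMaskingFilter())
log.addHandler(handler)
log.setLevel(logging.DEBUG)

log.debug("auth response for card 4111111111111111 declined")
# debug.log receives: auth response for card 411111******1111 declined
```

Masking at the logging layer does not remove the IG's other obligations (protect the logs, disable debugging promptly, securely delete the data), but it keeps full PANs from landing in debug output in the first place.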

The IG will also be required to discuss the secure implementation of patches and updates to the application.

PA-DSS v3.1 dealt with the SSL/early TLS issue, so the Council felt that there would be no changes regarding that topic.  That said, they did address the question as to whether or not TLS v1.1 is considered secure and laid out how TLS v1.1 needs to be configured to be secure.  That configuration included the following (see the sketch after this list):

  • Disable weak ciphers and cipher suites such as MD5, SHA-1 and RC4.
  • Use sufficient key sizes.
  • Prevent fallback to SSL or TLS v1.0.
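A minimal sketch of those points applied to a Python server-side TLS context; the certificate paths are hypothetical, and most deployments should simply prefer TLS v1.2 or later outright:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Prevent fallback to SSL or TLS v1.0.
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1

# Disable weak ciphers and cipher suites such as MD5 and RC4.  (Note that
# TLS v1.1 suites are HMAC-SHA1 based, part of why v1.2+ is preferable.)
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!MD5:!RC4")

# "Sufficient key sizes" comes from the key pair you load, e.g. RSA 2048+.
# ctx.load_cert_chain("server.crt", "server.key")  # hypothetical paths
```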

AQM Update

The Council indicated that the PCI DSS v3.2 and the Report On Compliance (ROC) reporting templates will be released simultaneously for the first time.  Timing for these documents will be late April 2016.  No specific date was provided.

On the PA-DSS side, the Council stated that the v3.2 Report On Validation (ROV) reporting template and the standard will be released in May 2016.  Again, no specific date was provided.

Cutover to v3.2 for both standards was discussed, with the PCI DSS cutover being the more specific.  PCI DSS v3.2 will go active upon release, with the sunsetting of v3.1 occurring in October 2016 on whatever day matches the release date.  Cutover and sunsetting for the PA-DSS will be announced with the release of the v3.2 standard.  Use of both standards and reporting templates can begin immediately, but we were reminded that everyone must cut over by the relevant sunset dates.

The Council also indicated that any relevant v3 FAQs will also be updated when the new standards are released.

ROC/ROV Personalization

The final point discussed under the AQM banner was the personalization of the ROC and ROV reporting templates by QSACs and PA-QSACs.  According to the presenter, the Council is hearing complaints from banks and the brands about the “over personalization” of ROC and ROV reports.

The Council stated that they understood the desire of QSACs and PA-QSACs to put their logos on the reports as well as making other “minor” changes to make the reports reflective of their organization.  However, banks and the card brands have been complaining that some of the personalization done had made the reports different enough from the original templates as to make them difficult to quickly review and process.

As a result, the Council has felt it necessary to issue guidelines on what personalization of the ROC and ROV templates is allowed.  Under these new guidelines:

  • Adding a title page to the report templates is allowed.
  • Adding a company’s logo to the report header is allowed.
  • No changes are allowed to any of the reports footers.

If you did miss this Webinar, the Council stated they were recording the session and it will be available on their PCI Portal sometime in the next few days.



