Archive for the 'PCI SAQ' Category

22 Jun 22

PCI DSS v4 Global Symposium Is Now Available

The PCI SSC has published the PCI DSS v4 Global Symposium for all QSAs, ASVs and Participating Organizations (POs).

To virtually attend this pre-recorded set of presentations, you can go here. The Symposium dropped on Tuesday, June 21, and is available through Tuesday, August 30, 2022.

The Agenda for this Symposium includes:

  • Welcome Remarks
  • PCI DSS v4.0 Highlights
  • Requirements: What’s New And Exciting
  • Flexibility For Implementing Security Controls (likely all about the new Customized Approach)
  • The New Approach To Reporting (explanation of how the new ROC template works?)
  • A Look Into Self Assessments
  • Preparing To Move To 4.0
  • PCI DSS v4.0 Educational Resources
  • Closing Remarks

I have yet to attend this almost three-hour symposium, but I am guessing, based on the topics, that some of it is a rehash of what we have already been provided. However, there does appear to be some new material, so it should still be informative and interesting.

29 Apr 22

The SAQs Have Been Published

Just a quick post to let everyone know that the PCI SSC has published the version 4 Self-Assessment Questionnaires (SAQs). You can get them from the Document Library by selecting 'SAQs'.

24 Oct 21

Remote PCI Assessment Guidance Issued

At the end of September 2021, the PCI Council released a Guidelines and Procedures document on conducting Remote Assessments for PCI and card brand assessments.  Most of this document is a rehash of previous Council statements and guidance.  However, there is one new element that all QSAs will need to read and comply with: the requirement to document a feasibility analysis justifying the decision to conduct a remote assessment.

Some of the examples the Council gives as valid reasons that an on-site assessment may not be feasible include:

  • Restrictions on the ability to travel or meet in person due to health and safety concerns or government advisories.  We are all familiar with the COVID-19 pandemic and its impact on travel, particularly international travel.  However, I encountered this a while back due to a volcanic eruption in Iceland that cancelled my trip to Europe.  Since we had no way of knowing how long the eruption would cause travel disruptions and we were on a tight timeline, we conducted video conferences rather than travel.
  • Geographic locations that are physically inaccessible or difficult to reach.  I personally ran into this situation once several years ago when a data center in Europe that was supposed to be decommissioned before the next assessment remained operational.  The company I worked for had shut down its EU operations and there was no way to justify 16 hours of flight time for a two-hour data center walkthrough.  We held meetings with the data center operator via video conference and did a virtual walkthrough.
  • Testing required at a location is limited to documentation and interviews and no observations of processes, systems or physical environment apply.
  • The entity operates a virtual environment without physical premises or facilities.  This has become more and more common with entities that operate in The Cloud.  Why rent expensive office space when there is no need for it?  This situation only became more prevalent with the pandemic and will likely only increase in the future.

As the Council states in their guidance,

“For many assessments, a combination of onsite and remote testing may provide a suitable balance, as it allows for increased efficiencies in the assessment process while enabling an appropriate level of assurance to be achieved in the assessment result.  For example, documentation reviews can often be performed remotely without significant loss of assurance, whereas observations of processes and environmental characteristics will generally require an onsite review.”

Regardless of whether the assessment fits into one of the bullets above, the Council wants QSAs to formally document their analysis of why the onsite assessment cannot be performed and the risks that assessing remotely may present to meeting the assessment objectives.  This analysis needs to be completed prior to starting any testing and is supposed to be a joint effort between the assessor and the client.

Topics that the Council recommends be addressed include, but are not limited to:

  • Confidentiality, security, and data protection requirements.
  • Availability and effectiveness of the remote assessment technologies.
  • Effects on entity’s personnel.
  • Effects on operation support.
  • Assessment scope and completeness.
  • Quality and reliability of digital evidence.

The Council further states:

“During the analysis, the entity and assessor should identify any challenges and potential risks associated with the remote testing and determine whether it is feasible for testing to be thoroughly completed to produce a high level of confidence in the assessment results.

The results of the feasibility analysis—including the risks and challenges associated with use of the remote testing methods, and any mitigating controls for overcoming the risks and challenges—should be documented and agreed upon by both the entity and assessor. A copy of the feasibility analysis results should be included with the applicable ROC/ROV. Entities and assessors may be required to produce the analysis upon request by the PCI SSC or applicable compliance-accepting entity.”

The key points from that statement are: (1) the feasibility analysis needs to be submitted with the ROC/ROV and (2) if requested by the PCI SSC or compliance accepting entity (i.e., brand or bank), the QSA is required to produce the analysis.  As a result, this is a non-optional exercise.

The feasibility analysis must document one of the following conclusions:

  • The assessment is feasible to be fully completed at this time using onsite methods, remote methods, or a combination of onsite and remote methods.
  • The assessment is only feasible to be partially completed at this time.
  • The assessment is not feasible currently.

According to the guidance, it is only those assessments that are completely feasible that can be conducted.
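
Since the Council has not provided a form for this, here is a minimal sketch, in Python and purely hypothetical, of the information such an analysis needs to capture.  The field names simply mirror the Council's recommended topics and the three possible conclusions above; nothing here is a Council-sanctioned format.

    from dataclasses import dataclass, field
    from enum import Enum

    class Feasibility(Enum):
        # The three conclusions the guidance says the analysis must reach.
        FULLY_FEASIBLE = "feasible to be fully completed (onsite, remote, or combined)"
        PARTIALLY_FEASIBLE = "only feasible to be partially completed at this time"
        NOT_FEASIBLE = "not feasible currently"

    @dataclass
    class RemoteAssessmentFeasibilityAnalysis:
        # One record per assessment, agreed upon by both entity and assessor
        # and included with the applicable ROC/ROV.
        entity_name: str
        assessor_name: str
        reasons_onsite_not_feasible: list[str]        # e.g., travel restrictions
        confidentiality_security_data_protection: str
        remote_technology_availability: str
        effects_on_personnel: str
        effects_on_operational_support: str
        scope_and_completeness: str
        evidence_quality_and_reliability: str
        mitigating_controls: list[str] = field(default_factory=list)
        conclusion: Feasibility = Feasibility.NOT_FEASIBLE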

The Council includes a very important note regarding the analyses.

“The feasibility analysis determines whether the use of remote testing methods is feasible for a particular assessment.  Determining that a remote testing method is feasible does not guarantee that use of the testing method will produce the level of assurance needed for the assessor to reach a finding; this will depend on how the remote testing method is implemented and used, whether the testing can be completed for all applicable components and areas, and whether sufficient evidence is provided for the assessor to make a determination.  Assessors and entities should continue to monitor and evaluate the effectiveness of the remote testing methods throughout the assessment to confirm whether the testing methods are performing as intended and whether additional testing may be needed.”

This concept of “assurance” appears to be entirely in the eye of the beholder.  Meaning, if the Council, brands or banks determine, in their opinion, that the remote methods did not provide appropriate levels of assurance, the ROC/ROV can be rejected.  Not that a lot of banks are going to reject ROCs/ROVs over this, but I can see the Council’s AQM reviews and the card brands rejecting ROCs/ROVs over analyses that they deem flawed or incomplete.  The AQM process is the most concerning because a QSAC could end up in remediation due to a failure to appropriately document the remote assessment feasibility analysis.

As with most edicts issued by the Council, they should have produced a form for this feasibility analysis so that everyone understands what is required.  It is also unclear whether the feasibility analysis can be documented in section 1.2 of the reporting template or whether a separate document is required.  I would recommend using section 1.2 for the obvious remote assessments, such as COVID travel restrictions or an entity operating entirely in The Cloud, and a separate document for feasibility analyses that require longer discussion.

Sadly, I foresee a lot of confusion and heartache in the QSAC community as we move through this new requirement.  That is because I expect a lot of assessments that are remote due to COVID travel restrictions, or because the assessed entity has no physical offices, to be rejected for “flawed” feasibility analyses when they should just be allowed with no further documentation or discussion.

It will take time to see how this shakes out.

UPDATE 11/29/2021 – I received a comment on this post (see below) and the confusion is beginning. A service provider has had one of their customers request the documentation described in Appendix A of the remote assessment guidance document as well as the remote assessment feasibility analysis. Since these are ROC documents, nothing from the Council requires any organization to turn over its ROC to any third party other than its acquiring bank or the card brands. The AOC is the communication document for third parties. If an organization wishes to turn over Appendix A from the guidance, that is the organization's decision, but it is NOT mandatory nor is it required by the Council.

17 Sep 21

2021 Government IT Symposium

I am honored to have been granted the privilege to speak at the 2021 Government IT Symposium this coming November.

I will be speaking (virtually) on Tuesday, November 16, at 1:45PM CT / 1945 UTC.  My presentation is titled ‘PCI Compliance – Yes, That Includes Governments’.  The reason for my session is that while the PCI DSS has been around for over 15 years, government entities still question how it applies to them and why.  In my years doing assessments for government entities, I have found there are a number of unique situations that complicate their assessments.  In my session I will cover the basics of the PCI DSS and walk through the potential traps that tend to trip up government entities.

To register for this symposium, go here.

I look forward to seeing you there.

31 Jul 21

PCI Dream Team LIVE! Is Coming In October

The PCI Dream Team will be appearing LIVE at the (ISC)2 Security Congress in Orlando this fall, Monday, October 18 through Wednesday, October 20, 2021.  Our session is scheduled for Tuesday, October 19, at 11:45AM ET / 1545 UTC.

While we will be live at the conference, you can also attend the conference and our session virtually.  So, training budget limitations aside, there is no good reason you cannot join us.

As usual, we will be taking questions live and via email at pcidreamteam AT gmail DOT com.  We also monitor Twitter if you use #pcidreamteam.

We are expecting our usual lively discussion of all topics PCI and, if time allows, other security standards.

We really are looking forward to physically seeing people at the conference.

05 Apr 20

The Joke That Is SAQ A

This week another outbreak of Magecart was detected in at least 19 eCommerce sites.  It is using a new way to obfuscate and gather cardholder data (CHD).  As I read through the latest description, it brought to mind SAQ A.

But before I launch into that diatribe, first a little bit of history so that everyone understands why SAQ A even exists.

In the early wild, wild west days of payment card security on the internet, enterprising solution providers were peddling “outsourced” solutions that would “avoid” compliance with the then Visa Cardholder Information Security Program (CISP) and MasterCard Site Data Protection (SDP) compliance efforts.  What they were selling was a solution that used a variety of Web site techniques to keep the CHD away from the merchant’s Web site.  These solutions sold themselves because they took the merchant out of scope of the very onerous Visa and MasterCard security programs.

Then along came the PCI DSS and the self-assessment questionnaires (SAQ).  As part of that process, the Council and the brands realized that these so-called out of scope solutions were not really “out of scope”.  The result was SAQ A, which covers these outsourced solutions.  For years these providers had kept their solutions out of the card brands’ compliance programs and now they were included.  SAQ A was a good news, bad news moment for the solution providers.  The bad news was that there was no escaping the fact that their customers were now in scope for PCI compliance.  However, the good news was that, to placate these solution providers who were lobbying loudly for no scope, the Council and brands minimized the number of requirements in SAQ A to a very, very bare minimum so that these outsourced solutions would not scare their customer bases off due to PCI compliance.

Just for the record: SAQ A is the absolute bare minimum number of requirements any merchant can comply with and be considered PCI compliant.  There is nothing less.

And Now The Jokes – Bad As They Are

The first joke is that SAQ A is the absolute prime example that compliance does not equal security, bar none.

Anyone that thinks compliance with SAQ A keeps their customer payments secure is seriously lying to themselves.  Magecart in all of its forms is exhibit number 1 as to why SAQ A is a joke and should be retired.

I have told my clients since SAQ A was published that if they thought compliance with SAQ A would keep them out of trouble, they should think again.  Yes, SAQ A keeps the processors, banks and brands happy, but it does nothing to manage the risk presented by any web site.  That is because if the code/executable/script on their server that invokes the redirect or iFrame is ever tampered with (as with Magecart), it will not be the processor or bank held legally responsible; it will be the merchant that operates that web site that is legally on the hook.

That is the second joke of SAQ A.  Merchants think they have pushed the payment card processing risk of their eCommerce operation off to a service provider and they have not.  Unknowingly, they still have a lot of skin in the game.  More than they realize or want to realize.

Yet time and again, I encounter merchants following SAQ A that blindly go about life without regularly patching, maintaining or monitoring their web site because “SAQ A says I do not need to do that”.  All of this under the mistaken belief that SAQ A’s requirements create security for that web site, which they do not.  Sadly, I have also encountered a number of merchants over the years that have been caught in the SAQ A trap and found out the hard way the monetary and business costs of their belief that SAQ A would protect them from bad actors.

SAQ A Is Compliance Not Security

In the last update of the SAQs in 2018, the Council did address a minor shortcoming in SAQ A.  That addition was to require organizations to ensure that their Web server was kept current on patches for critical vulnerabilities.  However, from a risk perspective for an internet-facing system, that did very little to ensure the security of merchant Web sites used for directing payment processing.

Notably, SAQ A does not require even the following:

  • Only one major service running, i.e., Web server with eCommerce application.
  • External and internal vulnerability scanning.
  • External and internal penetration testing.
  • Critical file monitoring to identify if the redirect or iFrame invocation method has been tampered with.
  • Logging and monitoring of the Web server and Web applications.

Most information security professionals would still likely consider even these aforementioned requirements inadequate.  These are all items I have told my clients I recommend, but even these absolute bare minimum steps for securing a Web server are not required for SAQ A compliance.
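
As an example of the critical file monitoring bullet above, here is a minimal sketch, in Python with a hypothetical URL and baseline hash, of checking whether the page that invokes the redirect or iFrame has changed:

    import hashlib
    import urllib.request

    # Hypothetical merchant checkout page and known-good baseline hash.
    CHECKOUT_URL = "https://www.example-merchant.com/checkout"
    KNOWN_GOOD_SHA256 = "replace-with-baseline-sha256-of-the-page"

    def page_hash(url: str) -> str:
        # Fetch the page and hash its raw bytes.
        with urllib.request.urlopen(url, timeout=30) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    current = page_hash(CHECKOUT_URL)
    if current != KNOWN_GOOD_SHA256:
        # In practice this should feed an alerting pipeline, not just print.
        print(f"ALERT: checkout page changed (sha256 {current}); "
              "verify the redirect/iFrame invocation has not been tampered with")

In practice, dynamic page content means you would hash the specific script resource or compare the parsed iFrame source rather than the whole page, but even a crude check like this is more than SAQ A requires.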

As a result, is it any surprise that most information security professionals and most QSAs consider SAQ A worthless for anything other than PCI compliance?  Organizations that truly understand information security also realize that SAQ A is not security and follow SAQ A-EP for ensuring the security of their out of scope Web servers.

The bottom line is that we in the payment security industry need to lobby the PCI SSC, banks and card brands to get rid of SAQ A before even more organizations get hurt.

21 May 19

An Inadvertent Service Provider

A discussion came up during the last PCI Dream Team session regarding universities that have bookstores and cafeterias operated by third parties on the university’s network, with those vendors processing payment card transactions over it.  QSAs encounter this situation not only at universities and colleges, but also with hospitals, health clinics and large corporations.

The Situation

As organizations focus on customer and employee perks, QSAs encounter third parties operating business outlets within a variety of organizations.  These businesses include coffee shops, convenience stores, dry cleaners, bookstores, restaurants, cafeterias, parking ramps, travel agencies, pharmacies, health clubs and a whole host of other businesses.  Of course, all of these third parties accept payment cards for their services and need a way to process those cards.  Organizations offering these perks have existing wired and wireless infrastructure that gets leveraged to connect these third parties to the internet and their payment processors, thus bringing that network and everything attached to it into scope for PCI compliance.

As a result, this situation creates a PCI compliance problem because the organization is now a service provider as well as a merchant.  The organization thought that by outsourcing these businesses it was reducing PCI scope, not increasing it.  But scope increases because, now that it is considered a service provider, it must provide each of these third parties with a Service Provider Attestation Of Compliance (AOC) for that network connectivity.

But it can and does get worse.  I have encountered situations where the outsourcing organization provides help desk, firewalls and other support services for these third parties, further complicating their PCI compliance responsibilities.

What Do You Do? Option 1 – Get Out Of Scope

There are some ways to get out of scope, but these can be complex and/or expensive.

The first way to get out of scope is to force all of your third parties to get their own network connectivity from their own internet service provider (ISP).  The problem with this is that an ISP will likely have to run wire into your facilities to make those connections.  That can be disruptive as well as expensive and complicated due to locations within existing buildings.  And what if each business wants their own ISP because of a contract relationship?  That will mean multiple ISPs tearing up your facilities.  Not necessarily the best situation.

The most extreme solution to get out of scope is for the outsourcing organization to implement carrier equipment and become a “carrier” to these third parties.  I have had a few clients go down this road, but it is not cheap and can also be more trouble than it is worth.  However, for a university or large hospital/clinic complex with lots of third parties, this solution can actually be a cheaper route to implement and operate.

But the beauty of these solutions is that your organization is totally out of scope so there are no service provider PCI assessment requirements.

What Do You Do? Option 2 – Reduce Scope

There are also a couple of ways to reduce scope.  But reducing scope requires at a minimum the creation of a Service Provider SAQ D and AOC.

The quickest and easiest way to reduce scope is for the outsourcing organization to implement end-to-end encryption between the third party’s connection and the internet.  However, this adds the requirements in section 4 to the assessment and keeps the endpoints in scope for PCI compliance.

Another option to reduce scope is to require these third parties to encrypt all traffic from their operation to anyone outside of the outsourcing organization.  While this seems simple, it usually never is.  Never mind the fact that if that encryption is ever stopped (most times without your knowledge), the outsourcing organization’s network is back in scope.  Typically, when this gets brought up as a solution, a lot of the third parties balk or say they do not know how to encrypt their connections.  On top of that, proving that the outsourcing organization does not have the encryption keys and that every third party connection is actually encrypted becomes problematic.  It ends up more trouble than it is worth.
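
For what it is worth, spot-checking that an endpoint negotiates TLS is the easy part.  Here is a minimal sketch, with a hypothetical host and port, that reports the negotiated protocol and cipher.  Proving key custody and coverage of every connection is the hard part that makes this option so much trouble.

    import socket
    import ssl

    # Hypothetical third-party processor endpoint.
    HOST, PORT = "processor.example.com", 443

    context = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        # Fails with an SSLError if the endpoint does not speak TLS
        # or presents an untrusted certificate.
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print(f"{HOST}:{PORT} negotiated {tls.version()} "
                  f"using {tls.cipher()[0]}")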

The only good news about reduced scope is that you only need to fill out a Service Provider SAQ D and AOC, because you have no idea of the transaction volumes being processed by any of these third parties.  That said, it is additional paperwork that needs to be filled out annually and given to all your third parties.

Heaven help you though if you offer firewall, help desk and other support services in addition to connectivity.  Those just complicate your compliance and reporting efforts.  All I can say is, if you can stop offering those services, stop.  If you cannot stop those services, then be prepared to document and report on the PCI compliance of each of those services.  That can be done in a single assessment, but the AOC must cover each of those services provided individually in a separate section 2g.

And if some of those services give your organization insight into the number of transactions processed by your third parties (for example, you provide payment processing under one or more of your merchant identifiers), you may end up having to conduct a Service Provider Report On Compliance (ROC) because the transaction volume exceeds one of the card brands’ annual service provider transaction volume thresholds.

There you have it on third parties and their payments on your network.

15 Dec 16

The Council Speaks On A Number Of Topics

The Council had a Webinar session for QSAs and ISAs on Thursday, December 15. It was a great session, but at only an hour, there were a lot of questions that went unanswered.  The following were the more notable discussion topics.

Not Tested

The Council got the message and they are working on new wording for the AOCs as well as some guidance for “Not Tested” and how it can be used and not impact PCI compliance.  They expect to have something issued in the first quarter of 2017.

Network Segmentation and Scoping

This was a very hot topic and drew a lot of questions and some useful answers as well as generating a slew of new questions.

We got a definition of “purpose-built controls”.  There really is not any change here from what the Council has told QSAs and ISAs in the past regarding segmentation.  The bottom line is that “purpose-built controls” are those controls that segment one network from another.  That can be firewall rules, access control lists (ACL) or any other controls that limit the communications from one network to another.  I posed a question regarding encryption such as TLS and IPSec still being a valid segmentation control, but it did not get answered.  I am assuming that it still is a valid control given the Council’s statement that nothing has changed, but until we have explicit confirmation, that remains an assumption, not a fact.
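
To illustrate what verifying such purpose-built controls can look like, here is a minimal sketch, run from a host on an out of scope segment, that confirms it cannot open connections to CDE systems.  The addresses and ports are hypothetical, and a real segmentation test per the Council’s penetration testing guidance covers far more than a handful of TCP ports.

    import socket

    # Hypothetical CDE addresses and a few common admin/database ports.
    CDE_HOSTS = ["10.10.1.5", "10.10.1.6"]
    PORTS = [22, 443, 1433, 3389]

    for host in CDE_HOSTS:
        for port in PORTS:
            try:
                with socket.create_connection((host, port), timeout=3):
                    print(f"FAIL: {host}:{port} reachable; "
                          "segmentation is not effective")
            except OSError:
                # Connection refused or timed out: the control held.
                print(f"OK: {host}:{port} blocked")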

The Council answered a number of questions regarding whether in-scope devices and out of scope devices can co-exist on the same network segment.  As usual, we got the “it depends” discussion.  The bottom line is that it depends on the threat presented by the out of scope devices to those in scope.  If an organization has lax security controls over all of their networks and devices, then I would be hesitant to allow out of scope devices to be on the same network segment as in-scope devices.

One of the most amazing discussions on this topic was an answer given regarding whether or not a device that has only an outbound connection from the cardholder data environment (CDE) can be considered out of scope.  Under the Open PCI Scoping Toolkit, this would be categorized as a 2C system.  The Council started out with their stock answer of “it depends” and then clarified that answer.  The answer given was that while the system would be in scope because it is connected to the CDE, what requirements it would need to comply with would depend on the risk presented by the system to the CDE.  This seemed to give organizations an opportunity to argue a minimization of requirements.  I am sure this will result in a lot of arguments between QSAs, ISAs and their assessees in the future.

As a funny aside, the Council mentioned the “three hop rule” and then feigned ignorance as to where it came from.  As I pointed out in my post, it was from the 2014 Community Meeting in Orlando.

Not-Listed Encryption Solutions

This guidance is a train wreck and just seems to keep getting worse.  The Council gave a lot of answers to questions, but it just seemed like they were digging an ever deeper hole, not filling it in.

The biggest news is that the Non-Listed Encrypted Solution Assessment (NESA) document should be available for review in the first quarter of 2017.

The next biggest news was the Council reconfirming that this is only guidance/recommendations and not some new process that is mandatory.  They even made sure to tell everyone attending that QSAs are NOT to hold up an organization’s ROC/SAQ over not having a NESA for their E2EE solution.  So if an E2EE solution does not have a NESA, then the fallback, given the lack of guidance from the Council, is to perform whatever procedures the merchant’s acquiring bank recommends.

The Council stated that the purpose of this Information Supplement was to provide QSAs, merchants, service providers and banks with the Council’s acceptable way to deal with assessing E2EE solutions.  While on its face this rationale makes sense, it does not make sense from the standpoint that the organizations driving the E2EE solutions are the banks and processors that have partnered with the E2EE solution providers.  Given that those same banks and processors drive the PCI compliance of the merchants that consume those E2EE solutions, it seems rather odd that they would be questioning what is acceptable for PCI compliance of their own approved E2EE solutions.

At the end of the day, it just seems that this NESA process is a solution looking for a problem and that the only problem the process really solves is getting more E2EE solutions to just finish the NESA and validate as a P2PE solution.

Until the banks and processors get behind the NESA process, I see this effort as dead on arrival.

So it sounds like it will be a busy first quarter for the Council.

The Council stated that the slide deck for this session will be posted to the Portal sometime after the first of the year.

30 Sep 16

2016 North American PCI Community Meeting

It was a hectic week out in Las Vegas at the Community Meeting this year.  I wish I had more time this year to just hang out with everyone, but I was in the middle of a number of assessments that needed to get done, so I was working at night and attending sessions during the day.

By the time you read this, the slide decks from the sessions will have been posted on the Council’s Web site.  So all of you that attended will be able to download those presentations.  You go to the link provided in the program guide, provide your name, organization name, email address and the password from the program guide (ve4eqepR) and you are in.

The Council tried the 20 minute “TED Talk” format again with the Wednesday sessions.  A number of the sessions I attended could have easily used an extra 10 minutes if not a complete hour.  I know the Council is trying to move things along and cover a lot of information, but topics like “the cloud” or EMV standards just cannot be properly discussed in 20 minutes.  I do not care how good the speaker is or how organized the presentation is.

Here are some of the more notable highlights.

The Assessor Session Is Back

Possibly the most anticipated session of the Community Meeting this year was the return of the Assessor Session after being missing for two years.  But unlike previous years where this session occurred before the start of the Community Meeting, the return of the Assessor Session was moved to the end of the Community Meeting.  I heard a number of complaints throughout the week from assessors about being at the end of the meeting.  Yet when Thursday lunch came around, there were a lot of QSAs, ISAs and ASVs that adjusted their travel schedules (Guru included) to attend this session.

While I originally agreed with people that moving the Assessor Session to the end was not a good idea, the more I have thought about it, the more I think it was better at the end.  That way assessors can get questions about topics that came up during the meeting answered while we are all together.  I know we all want to get home, but I think the Assessor Session offers more value to all of us at the end.

On the not so good side, the Council chose to use up an hour and 10 minutes presenting a variety of topics, some of which took way too long to discuss.  But the larger question was why this material was not presented during the main conference.  Not only did all of the meeting attendees miss out, but there were people that did not get their questions asked.  I am also sure that running long discouraged a lot of people from asking questions.

That said, there were a number of good questions asked during this session and the Council rewarded five people with large PCI SSC coffee mugs for their “good” questions.

One question though really created a stir.  I will address that question regarding multi-factor authentication (MFA) as a separate post to be published later.  However I will say this about this discussion.  The Council really needs to go back and re-think their position on MFA if what they said is accurate.

The Council was asked about SAQ A and where it is headed.  The concern in the assessor community is that the mechanism that issues/controls the iFrame/redirect needs protection.  However the changes to SAQ A for v3.2 did not seem to address this obvious risk.  Based on how the question was answered, I am guessing that the hosting community is trying to keep SAQ A as simple and easy as possible regardless of the risk.

Another area that the Council agreed to review was the change to requirement 3.2 in the ROC Reporting Template.  In v3.2 of the template you can no longer mark those requirements as Not Applicable; however, it was pointed out that an ‘NA’ was still allowed in SAQ D.  The reason for seeking this clarification was related to past comments from the Council to follow the SAQs for P2PE (SAQ P2PE) and outsourced eCommerce (SAQ A) when filling out a ROC for merchants with these solutions.  It was pointed out that neither of these SAQs has requirement 3.2 in them, so how is a QSA/ISA supposed to respond to it in the reporting template if it cannot be marked as ‘NA’?

Understanding The Current Data Breach Landscape (aka Verizon DBIR Report Discussion)

When Verizon sends out Chris Novak, you know you will get a great presentation on the data breach incident report aka ‘The DBIR’.  This year was no exception albeit somewhat depressing as Chris again pointed out that most breaches are the result of sloppy operations, lax security and insecure applications.  Essentially security issues that we should have gotten past a long, long time ago but have not.

Architecting for Success

Who better to talk about success than a representative from the Jet Propulsion Laboratory (JPL) talking about how to develop spacecraft to explore the most inhospitable environments we know, outer space and planetary bodies.  Brian Muirhead was the keynote speaker on Wednesday and is the Chief Engineer for the Mars Science Laboratory, the group that designed and developed the various Mars exploration rovers.  He gave a great discussion on how to look out for problems and develop self-managing devices.  Very interesting and, I am sure, an eye opener for people: we need to stop accepting the sloppy and messy solutions we get for handling cardholder data.

Internet of Things Keynote

The Thursday keynote was just a great time.  While there seemed to be very little directly relevant to PCI compliance presented by Ken Munro and an associate from Pen Test Partners, it was a fabulous time exploring the wonderful world of flawed technology from a tea kettle, to a refrigerator to a child’s doll.  In the case of the child’s doll, they removed the word filter database and therefore allowed the doll to say things that no child’s toy should say.

What was relevant to PCI was the ease with which these folks were able to reverse engineer the firmware and software used by these devices.  It gave pause to a lot of people in the room unfamiliar with IoT and penetration testing as to how seemingly sophisticated technology can be easily abused.

Cloud Security

While it was great to see Tom Arnold from PSC, the even better thing about this presentation was the fact that Amazon provided an actual human being, in the form of Brad Dispensa, to talk about Amazon’s EC2 cloud.  While billed as a discussion on incident response, the session provided great insight into AWS’s EC2 service offering as well as the variety of new tools available to manage the EC2 environment and to provide auditors and assessors with information regarding the configuration of that environment.  The key takeaway from this session is that organizations using EC2 can provide everything needed for conducting a PCI assessment using the AWS Management Console.

EMVCo

Brian Byrne from EMVCo gave a great 20 minute session on EMV.  The slide deck will be more valuable than the presentation because he had so much content to share and so little time to share it in.  Of note was his discussion of version 2.0 of Three-Domain Secure, otherwise known as 3D Secure or 3DS.  While v1.0 will remain under the control of Visa, EMVCo has taken over management and development of the 3DS standard.  The new version is in draft and only available to EMVCo members, so this was the first time I had been able to see what the new version has to offer.  But because of the time constraint, I will need to wait for the slide deck to be published to know more.

PCI Quality Assurance Program

Brandy Cumberland of the Council provided a great presentation on the Council’s quality assurance program with which all QSAs have become familiar.  I appreciated her discussion of James Barrow, who took over the AQM program after most of us wanted to kill his predecessor for creating one of the most brutal QA programs we had ever seen.  James’ efforts to make the AQM program more relevant cannot be overstated as he took over a very troubled affair.  This was a bittersweet discussion as James passed away right after last year’s Community Meeting and will be greatly missed by those of us that came to know and respect him.  Brandy took over the AQM program when James left the Council and has been doing a great job ever since.  She is possibly one of the best resources the Council has and does the AQM program proud.

Application Security at Scale

The last great session of the conference I saw was from Jeff Williams of Contrast Security.  The reason this session was great was that it discussed what application developers can do to instrument their applications for not only security, but also operational issues.  He introduced us to interactive application security testing (IAST) and runtime application self-protection (RASP).  The beauty of this approach is that applications get security in the form of embedded instrumentation that results in actionable analytics, which then allow decisions to be made to respond to threats to these applications.  It sounds like an interesting approach and concept and I cannot wait to see it in action.

As always, it was great to see and catch up with all of my friends in Las Vegas at the PCI Community Meeting.  It was also great to meet a lot of new people as well.  I look forward to seeing all of you again next year in Orlando.

10 Jun 16

Is The PCI DSS Even Relevant Any More?

First the National Retail Federation (NRF), then bloggers.  Organizations and people are piling on the PCI SSC and its standards, all because of the United States Federal Trade Commission’s (FTC) fact finding project.  Seems like PCI is now a bad three letter word.  But with the changes that have been implemented or will soon be implemented, I am starting to wonder about the relevance of the PCI DSS.  So I thought I would explore these topics and explain what has led me to that conclusion.

Ever since the FTC announced their little fact finding mission, I have consistently said that the FTC is late to the party.

Why do I think the FTC is late?

The FTC’s fact finding efforts are, I am sure, in response to the Target, Michael’s, Home Depot, etc. data breaches which resulted in tens of millions of payment card accounts being exposed and potentially used for fraudulent purposes.  Remember, they are a governmental body, so taking action can take a bit of time, in this case at least three years and longer than most people would have desired.  But they eventually got around to it.  While this fact finding effort is a valid way to get up to speed on a problem, the trouble is that the threat landscape has changed since those notorious breaches occurred and the FTC got its act together.

What in the threat landscape has changed?

The vast majority of mid-sized and large retailers have implemented or are in the process of implementing point-to-point encryption (P2PE) or end-to-end encryption (E2EE) and tokenization solutions to minimize their PCI scope to only the point of interaction (POI), otherwise known as the card terminal.  As a result, the threat of large scale breaches at these merchants is, or within the next 12 to 18 months will be (based on my knowledge of a large number of such efforts), near zero.  The reason is that these merchants’ point of sale (POS) and other systems will no longer have access to cardholder data (CHD) or sensitive authentication data (SAD).

How can the threat be near zero?

With P2PE/E2EE and tokenization, scope is limited to only the POI, and the threat there is very, very low because of how the POI must be implemented to work with P2PE/E2EE and/or tokenization.  I am not going to discuss the security features of these solutions in detail so as not to tip the hand of those organizations implementing them.  Let me just say that there is a lot of information that must be loaded into the POI in order to swap out terminals.  Even then, there are additional controls involving the registration of the device by the merchant and/or service provider that preclude terminal swaps without generating some form of alert.

The one threat that does still remain is the use of an overlay for skimming cards.  But that risk varies from POI vendor to POI vendor and even by POI model within a vendor.  And it is not like vendors have not taken notice of the overlay problem.  Vendors have gotten a clue and are changing the design of their POI to make them as difficult as possible to fit with an overlay.  I have a client that went with a POI that has various angles, long swipe tracks, LED lights and other features that would make an overlay not only very expensive to engineer but also very difficult to make appear seamless to customers and clerks.  Over time I expect to see all POI manufacturers adopt strategies to minimize the ability to use overlays.

The result of all of this is that merchants are no longer the risk (if they even present a risk) they were two or more years ago.

So who or what does that leave at risk?

ECommerce Web sites are still a huge problem.  EMV as it exists today does nothing to stem the problem of online fraud.  Even if a merchant has outsourced eCommerce, they still have to manage that environment as well as deal with the chargebacks and disputes that come from eCommerce card transactions.  I have heard rumors of solutions that are coming to address eCommerce, but I have yet to see any formal announcements of those solutions.  So for the foreseeable future, merchants with an eCommerce presence will still have to address some form of PCI assessment for that environment.

Any merchant that has not gotten on the P2PE/E2EE and tokenization bandwagon.  All merchants should be getting POI that encrypt and/or tokenize at the swipe or dip of a customer’s card.  Adopting such solutions will leave the merchant with only having to comply with requirements in 9.9 and 12.  I know for some merchants that will mean an investment, but the payoff is extremely reduced PCI scope and effectively taking almost all of the risk out of card payments.

The organizations that end up with a huge target on their backs are any service providers, transaction processors, issuers or financial institutions that have CHD and/or SAD stored in their files and/or databases.  An unfortunate fact of life is that transaction processors, issuers and financial institutions are always going to have some amount of CHD/SAD in their files and databases because of the nature of their business.  It is for these organizations that the full (i.e., Report On Compliance or ROC) PCI DSS assessment will never go away.

For merchants that have moved to P2PE/E2EE/tokens, I could see a move to an annual self-verification that those solutions are still implemented and functioning as designed.  I could additionally see that, every three years or so, the card brands requiring an independent assessment by a QSA/ISA that the controls for P2PE/E2EE/token solutions are still in place and functioning correctly.  The reason for independent verification is that changes get made and those changes might affect the environment making it less secure.  For merchants not using P2PE/E2EE/tokens, I would think the current SAQs and ROC will remain in place with an annual assessment required.

Will other PCI standards be marginalized or disappear?

The PA-DSS will never leave us.  Software developers need to develop secure code and those service providers, transaction processors, issuers and financial institutions that store CHD/SAD need applications that do that securely, so there is a built in constituency for the PA-DSS.  ECommerce solutions are also still going to need PA-DSS validation.  But regardless of whether P2PE/E2EE and tokenization are implemented, any application potentially dealing with CHD/SAD will need to be assessed under PA-DSS to ensure that any CHD stored is stored securely and is erased securely.  Then there are the unknowns of the future.  You never know what might come along in the future, so there is always a possibility that some solution might need to securely store CHD or other payment related information.  The bottom line is that I find it very hard to believe that the PA-DSS could ever be dropped.

The PTS standard will also not disappear because those POI need to be validated to handle CHD/SAD securely and work properly regardless of P2PE/E2EE solutions.  The PTS is the only standard that is a card brand requirement, not a PCI DSS requirement.  It is the card brands that demand merchants use only PTS validated POI and I do not see that requirement going away when the POI is going to become the remaining target at merchants.

The ASV standard will not go anywhere as there will still be eCommerce solutions that require vulnerability scanning.  Most merchants will implement eCommerce solutions that minimize their PCI scope using a redirect or iFrame.  Although I can see it coming that even those solutions will still require the merchant’s eCommerce site, now deemed out of scope, to be scanned for vulnerabilities.  The reason is that the invocation point of the redirect or iFrame is at risk of modification by an attacker.

One standard I do believe will eventually go away is P2PE.  The reason is that there is very little to gain with a P2PE versus an E2EE solution.  Both solutions are essentially the same; the only additional work required for E2EE is documenting that E2EE has been implemented appropriately, submitting that documentation to the client’s acquiring bank and getting the bank to agree to the PCI scope reduction.  As a result, I believe that the P2PE standard will slowly and quietly disappear into the night as the cost of going through the assessment process along with the Council’s filing fees just cannot be justified by a lot of influential vendors such as Verifone and First Data.

There is my rationale for where I think things are hopefully headed.  Only time will tell if the rest of the world sees things the same way.




