Archive for the 'PCI SAQ' Category

15 Dec 16

The Council Speaks On A Number Of Topics

The Council had a Webinar session for QSAs and ISAs on Thursday, December 15. It was a great session, but at only an hour, there were a lot of questions that went unanswered.  The following were the more notable discussion topics.

Not Tested

The Council got the message and they are working on new wording for the AOCs as well as some guidance for “Not Tested” and how it can be used and not impact PCI compliance.  They expect to have something issued in the first quarter of 2017.

Network Segmentation and Scoping

This was a very hot topic and drew a lot of questions and some useful answers as well as generating a slew of new questions.

We got a definition of “purpose-built controls”.  There really is not any change here in what the Council has told QSAs and ISAs in the past regarding segmentation.  The bottom line is that “purpose-built controls” are the controls that segment one network from another.  Those can be firewall rules, access control lists (ACL) or any other mechanisms that limit or block communications from one network to another.  I posed a question regarding encryption such as TLS and IPSec still being a valid segmentation control, but it did not get answered.  I am assuming that it still is a valid control given the Council’s statement that nothing has changed, but until we have explicit confirmation, that remains an assumption, not a fact.
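
For illustration, a “purpose-built control” can be as mundane as a firewall rule base with an implicit deny between segments.  Here is a minimal Python sketch of that idea; the subnets, rules and first-match-wins behavior are my own assumptions, not anything prescribed by the Council.

  from ipaddress import ip_address, ip_network

  # Hypothetical rule base: (source subnet, destination subnet, action).
  # First match wins; anything unmatched is denied.
  RULES = [
      ("10.20.0.0/24", "10.10.0.0/24", "deny"),      # corporate LAN -> CDE blocked
      ("10.10.0.0/24", "192.168.5.10/32", "allow"),  # CDE -> processor gateway only
  ]

  def is_allowed(src_ip, dst_ip):
      """Return True only if an allow rule matches the traffic."""
      for src_net, dst_net, action in RULES:
          if (ip_address(src_ip) in ip_network(src_net)
                  and ip_address(dst_ip) in ip_network(dst_net)):
              return action == "allow"
      return False  # implicit deny

  # Segmentation check: nothing on the corporate LAN should reach the CDE.
  assert not is_allowed("10.20.0.15", "10.10.0.5")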

The Council answered a number of questions regarding whether in-scope devices can co-exist on the same network segment as out of scope devices.  As usual, we got the “it depends” discussion.  The bottom line is that it depends on the threat presented by the out of scope devices to those in-scope.  If an organization has lax security controls over all of their networks and devices, then I would be hesitant to allow out of scope devices to be on the same network segment as in-scope devices.

One of the most amazing discussions on this topic was an answer given regarding whether or not a device that has only an outbound connection from the cardholder data environment (CDE) can be considered out of scope.  Under the Open PCI Scoping Toolkit, this would be categorized as a 2C system.  The Council started out with their stock answer of “it depends” and then clarified that answer.  The answer given was that while the system would be in scope because it is connected to the CDE, what requirements it would need to comply with would depend on the risk presented by the system to the CDE.  This seemed to give organizations an opportunity to argue a minimization of requirements.  I am sure this will result in a lot of arguments between QSAs, ISAs and their assessees in the future.

As a funny aside, the Council mentioned the “three hop rule” and then feigned ignorance as to where it came from.  As I pointed out in my post, it was from the 2014 Community Meeting in Orlando.

Not-Listed Encryption Solutions

This guidance is a train wreck and just seems to keep getting worse.  The Council gave a lot of answers to questions, but it just seemed like they were digging an ever deeper hole, not filling it in.

The biggest news is that the Non-Listed Encrypted Solution Assessment (NESA) document should be available for review in the first quarter of 2017.

The next biggest news was the Council reconfirming that this is only guidance/recommendations and not some new process that is mandatory.  They even made sure to tell everyone attending that QSAs are NOT to hold up an organization’s ROC/SAQ over not having a NESA for their E2EE solution.  So if an E2EE solution does not have a NESA, then the fallback, given the lack of guidance from the Council, is to perform whatever procedures the merchant’s acquiring bank recommends.

The Council stated that the purpose of this Information Supplement was to give QSAs, merchants, service providers and banks the Council’s acceptable way of assessing E2EE solutions.  While on its face that rationale makes sense, it does not make sense when you consider that the organizations driving the E2EE solutions are the banks and processors that have partnered with the E2EE solution providers.  Given that the banks and processors are the same organizations driving PCI compliance of the merchants that consume those E2EE solutions, it seems rather odd that they would be questioning what is acceptable for PCI compliance of their approved E2EE solutions.

At the end of the day, it just seems that this NESA process is a solution looking for a problem and that the only problem the process really solves is getting more E2EE solutions to just finish the NESA and validate as a P2PE solution.

Until the banks and processors get behind the NESA process, I see this effort as dead on arrival.

So it sounds like it will be a busy first quarter for the Council.

The Council stated that the slide deck for this session will be posted to the Portal sometime after the first of the year.

30 Sep 16

2016 North American PCI Community Meeting

It was a hectic week out in Las Vegas at the Community Meeting this year.  I wish I had more time this year to just hang out with everyone, but I was in the middle of a number of assessments that needed to get done, so I was working at night and attending sessions during the day.

By the time you read this, the slide decks from the sessions will have been posted on the Council’s Web site.  So all of you that attended will be able to download those presentations.  You go to the link provided in the program guide, provide your name, organization name, email address and the password from the program guide (ve4eqepR) and you are in.

The Council tried the 20 minute “TED Talk” format again with the Wednesday sessions.  A number of the sessions I attended could have easily used an extra 10 minutes if not a complete hour.  I know the Council is trying to move things along and get a lot of information covered, but topics like “the cloud” or EMV standards just cannot be properly discussed in 20 minutes, no matter how good the speaker or how well organized the presentation.

Here are some of the more notable highlights.

The Assessor Session Is Back

Possibly the most anticipated session of the Community Meeting this year was the return of the Assessor Session after a two-year absence.  But unlike previous years, when this session occurred before the start of the Community Meeting, this time it was moved to the end.  I heard a number of complaints throughout the week from assessors about being at the end of the meeting.  Yet when Thursday lunch came around, there were a lot of QSAs, ISAs and ASVs that adjusted their travel schedules (Guru included) to attend this session.

While I originally agreed with people that moving the Assessor Session to the end was not a good idea, the more I have thought about it, the more I think it was better at the end.  That way, questions on topics that come up during the meeting can be answered while we are all still together.  I know we all want to get home, but I think the Assessor Session offers more value to all of us at the end.

On the not so good side, the Council chose to use up an hour and 10 minutes presenting a variety of topics, some of which took way too long to discuss.  The larger question was why this material was not presented during the main conference.  Not only did all of the meeting attendees miss out, but there were people that did not get their questions asked.  I am also sure that running long discouraged a lot of people from asking questions.

That said, there were a number of good questions asked during this session and the Council rewarded five people with large PCI SSC coffee mugs for their “good” questions.

One question though really created a stir.  I will address that question regarding multi-factor authentication (MFA) as a separate post to be published later.  However I will say this about this discussion.  The Council really needs to go back and re-think their position on MFA if what they said is accurate.

The Council was asked about SAQ A and where it is headed.  The concern in the assessor community is that the mechanism that issues/controls the iFrame/redirect needs protection.  However the changes to SAQ A for v3.2 did not seem to address this obvious risk.  Based on how the question was answered, I am guessing that the hosting community is trying to keep SAQ A as simple and easy as possible regardless of the risk.

Another area that the Council agreed to review was the change to requirement 3.2 in the ROC Reporting Template.  In v3.2 of the template you can no longer mark those requirements as Not Applicable; however, it was pointed out that an ‘NA’ is still allowed in SAQ D.  The reason for seeking this clarification was related to past comments from the Council to follow the SAQs for P2PE (SAQ P2PE) and outsourced eCommerce (SAQ A) when filling out a ROC for merchants with these solutions.  It was pointed out that neither of these SAQs has requirement 3.2 in them, so how is a QSA/ISA supposed to respond to it in the reporting template if it cannot be marked as ‘NA’?

Understanding The Current Data Breach Landscape (aka Verizon DBIR Report Discussion)

When Verizon sends out Chris Novak, you know you will get a great presentation on the data breach incident report aka ‘The DBIR’.  This year was no exception albeit somewhat depressing as Chris again pointed out that most breaches are the result of sloppy operations, lax security and insecure applications.  Essentially security issues that we should have gotten past a long, long time ago but have not.

Architecting for Success

Who better to talk about success than a representative from the Jet Propulsion Laboratory (JPL), talking about how to develop spacecraft to explore the most inhospitable environments we know, outer space and planetary bodies.  Brian Muirhead was the keynote speaker on Wednesday and is the Chief Engineer for the Mars Science Laboratory, the group that designed and developed the various Mars exploration rovers.  He gave a great discussion on how to look out for problems and develop self-managing devices.  Very interesting, and I am sure it opened people’s eyes to the fact that we need to stop accepting the sloppy and messy solutions we get for handling cardholder data.

Internet of Things Keynote

The Thursday keynote was just a great time.  While there seemed to be very little directly relevant to PCI compliance presented by Ken Munro and an associate from Pen Test Partners, it was a fabulous time exploring the wonderful world of flawed technology from a tea kettle, to a refrigerator to a child’s doll.  In the case of the child’s doll, they removed the word filter database and therefore allowed the doll to say things that no child’s toy should say.

What was relevant to PCI was the ease with which these folks were able to reverse engineer firmware and software used by these devices.  It gave a lot of people unfamiliar with IoT and penetration testing in the room pause as to how seemingly sophisticated technology can be easily abused.

Cloud Security

While it was great to see Tom Arnold from PSC, the even better thing about this presentation was the fact that Amazon provided an actual human being, in the form of Brad Dispensa, to talk about Amazon’s EC2 Cloud.  While billed as a discussion on incident response, the session provided great insight into AWS’s EC2 service offering as well as the variety of new tools available to manage the EC2 environment and also provide auditors and assessors with information regarding the configuration of that environment.  The key take away from this session is that organizations using EC2 can provide everything needed for conducting a PCI assessment using their EC2 Master Console.

EMVCo

Brian Byrne from EMVCo gave a great 20 minute session on EMV.  The slide deck will be more valuable than the presentation because he had so much content to share and so little time to share it in.  Of note was his discussion of version 2.0 of Three-Domain Secure, otherwise known as 3D Secure or 3DS.  While v1.0 will remain under the control of Visa, EMVCo has taken over management and development of the 3DS standard.  The new version is in draft and only available to EMVCo members, so this was the first time I had been able to see what the new version has to offer.  But because of the time constraint, I will need to wait for the slide deck to be published to know more.

PCI Quality Assurance Program

Brandy Cumberland of the Council provided a great presentation on the Council’s quality assurance program with which all QSAs have become familiar.  I appreciated her discussion of James Barrow, who took over the AQM program after most of us wanted to kill his predecessor for creating one of the most brutal QA programs we had ever seen.  James’ efforts to make the AQM program more relevant cannot be overstated, as he took over a very troubled affair.  This was a bittersweet discussion, as James passed away right after last year’s Community Meeting and will be greatly missed by those of us that came to know and respect him.  Brandy took over the AQM program when James left the Council and has been doing a great job ever since.  She is possibly one of the best resources the Council has and does the AQM program proud.

Application Security at Scale

The last great session of the conference I saw was from Jeff Williams of Contrast Security.  The reason this session was great was that it discussed what application developers can do to instrument their applications not only for security, but also for operational issues.  He introduced us to interactive application security testing (IAST) and runtime application self-protection (RASP).  The beauty of this approach is that applications get security in the form of embedded instrumentation that results in actionable analytics, which then allow decisions to be made to respond to threats to these applications.  It sounds like an interesting approach and concept and I cannot wait to see it in action.

As always, it was great to see and catch up with all of my friends in Las Vegas at the PCI Community Meeting.  It was also great to meet a lot of new people as well.  I look forward to seeing all of you again next year in Orlando.

10 Jun 16

Is The PCI DSS Even Relevant Any More?

First the National Retail Federation (NRF), then bloggers.  Organizations and people are piling on the PCI SSC and its standards all because of the United States Federal Trade Commission’s (FTC) fact finding project.  It seems like PCI is now a bad three letter word.  But with the changes that have been implemented or will soon be implemented, I am starting to wonder about the relevance of the PCI DSS.  So I thought I would explore these topics and explain what has led me to that conclusion.

Ever since the FTC announced their little fact finding mission, I have consistently said that the FTC is late to the party.

Why do I think the FTC is late?

The FTC’s fact finding efforts are, I am sure, in response to the Target, Michael’s, Home Depot, etc. data breaches which resulted in tens of millions of payment card accounts being exposed and potentially used for fraudulent purposes.  Remember, they are a governmental body, so taking action can take a bit of time, in this case at least three years and longer than most people would have desired.  But they eventually got around to it.  While this fact finding effort is a valid way to get up to speed on a problem, the trouble is that the threat landscape has changed between those notorious breaches and the FTC getting its act together.

What in the threat landscape has changed?

The vast majority of mid-sized and large retailers have implemented, or are in the process of implementing, point-to-point encryption (P2PE) or end-to-end encryption (E2EE) and tokenization solutions to minimize their PCI scope to only the point of interaction (POI), otherwise known as the card terminal.  As a result, the threat of large scale breaches at these merchants is, or within the next 12 to 18 months will be (based on my knowledge of a large number of such efforts), near zero.  The reason is that these merchants’ point of sale (POS) and other systems will no longer have access to cardholder data (CHD) or sensitive authentication data (SAD).

How can the threat be near zero?

P2PE/E2EE and tokenization limit scope to only the POI, and the threat there is very, very low because of how the POI must be implemented to work with P2PE/E2EE and/or tokenization.  I am not going to discuss the security features of these solutions in detail so as not to tip the hand of those organizations implementing them.  Let me just say that there is a lot of information that must be loaded into the POI in order to swap out terminals.  Even then, there are additional controls involving the registration of the device by the merchant and/or service provider that preclude terminal swaps without generating some form of alert.
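
To illustrate the idea only, here is a toy Python sketch of why the POS falls out of scope with these solutions.  This is not how any vendor actually implements P2PE/E2EE (real solutions use proper key management and tamper-resistant hardware); it just shows that the PAN only ever exists in the clear inside the POI.

  import hashlib
  import os

  TERMINAL_KEY = os.urandom(32)   # stand-in for the key injected into the POI

  def poi_swipe(pan: str) -> dict:
      """What the POI hands the POS: ciphertext plus a token, never the PAN."""
      ciphertext = bytes(a ^ b for a, b in zip(pan.encode(), TERMINAL_KEY))
      token = hashlib.sha256(os.urandom(16) + pan.encode()).hexdigest()[:16]
      return {"ciphertext": ciphertext.hex(), "token": token}

  # The POS and everything behind it only ever see this dictionary.
  print(poi_swipe("4111111111111111"))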

The one threat that does still remain is the use of an overlay for skimming cards.  But that risk varies from POI vendor to POI vendor and even by POI model within a vendor.  And it is not as though vendors have not taken notice of the overlay problem.  Vendors have gotten a clue and are changing the design of their POI to make overlays as difficult as possible to use.  I have a client that went with a POI that has various angles, long swipe tracks, LED lights and other features that make an overlay not only very expensive to engineer but also very difficult to make appear seamless to customers and clerks.  Over time I expect to see all POI manufacturers adopt strategies to minimize the ability to use overlays.

The result of all of this is that merchants are no longer the risk (if they even present a risk) they were two or more years ago.

So who or what does that leave at risk?

ECommerce Web sites are still a huge problem.  EMV as it exists today does nothing to stem the problem of online fraud.  Even if a merchant has outsourced eCommerce, they still have to manage that environment as well as deal with the chargebacks and disputes that come from eCommerce card transactions.  I have heard rumors of solutions that are coming to address eCommerce, but I have yet to see any formal announcements of those solutions.  So for the foreseeable future, merchants with an eCommerce presence will likely still have to address some form of PCI assessment for that environment.

Any merchant that has not gotten on the P2PE/E2EE and tokenization bandwagon.  All merchants should be getting POI that encrypt and/or tokenize at the swipe or dip of a customer’s card.  Adopting such solutions will leave the merchant with only having to comply with requirements in 9.9 and 12.  I know for some merchants that will mean an investment, but the payoff is extremely reduced PCI scope and effectively taking almost all of the risk out of card payments.

The organizations that end up with a huge target on their backs are any service providers, transaction processors, issuers or financial institutions that have CHD and/or SAD stored in their files and/or databases.  An unfortunate fact of life is that transaction processors, issuers and financial institutions are always going to have to have some amount of CHD/SAD in their files and databases because of the nature of their business.  It is these organizations where the full on (i.e., Report On Compliance or ROC) PCI DSS assessment will never go away.

For merchants that have moved to P2PE/E2EE/tokens, I could see a move to an annual self-verification that those solutions are still implemented and functioning as designed.  I could additionally see that, every three years or so, the card brands requiring an independent assessment by a QSA/ISA that the controls for P2PE/E2EE/token solutions are still in place and functioning correctly.  The reason for independent verification is that changes get made and those changes might affect the environment making it less secure.  For merchants not using P2PE/E2EE/tokens, I would think the current SAQs and ROC will remain in place with an annual assessment required.

Will other PCI standards be marginalized or disappear?

The PA-DSS will never leave us.  Software developers need to develop secure code and those service providers, transaction processors, issuers and financial institutions that store CHD/SAD need applications that do that securely, so there is a built in constituency for the PA-DSS.  ECommerce solutions are also still going to need PA-DSS validation.  But regardless of whether P2PE/E2EE and tokenization are implemented, any application potentially dealing with CHD/SAD will need to be assessed under PA-DSS to ensure that any CHD stored is stored securely and is erased securely.  Then there are the unknowns of the future.  You never know what might come along in the future, so there is always a possibility that some solution might need to securely store CHD or other payment related information.  The bottom line is that I find it very hard to believe that the PA-DSS could ever be dropped.

The PTS standard will also not disappear because those POI need to be validated to handle CHD/SAD securely and work properly regardless of P2PE/E2EE solutions.  The PTS is the only standard that is a card brand requirement, not a PCI DSS requirement.  It is the card brands that demand merchants use only PTS validated POI and I do not see that requirement going away when the POI is going to become the remaining target at merchants.

The ASV standard will not go anywhere as there will still be eCommerce solutions that will require vulnerability scanning.  Most merchants will implement eCommerce solutions that minimize their PCI scope using a redirect or iFrame.  Although I can see it coming that even using those solutions will still require the merchant’s eCommerce site, now deemed as out of scope, to be scanned for vulnerabilities.  The reason is that the invocation point of the redirect or iFrame is at risk of modification by an attacker.

One standard I do believe will eventually go away is P2PE.  The reason is that there is very little to gain with a P2PE versus an E2EE solution.  Both solutions are essentially the same; the only additional work required for E2EE is documenting that E2EE has been implemented appropriately, submitting that documentation to the client’s acquiring bank and getting the bank to agree to the PCI scope reduction.  As a result, I believe that the P2PE standard will slowly and quietly disappear into the night because the cost of going through the assessment process, along with the Council filing fees, just cannot be justified by a lot of influential vendors such as Verifone and First Data.

There is my rationale for where I think things are hopefully headed.  Only time will tell if the rest of the world sees things the same way.

11 May 16

Heads Up – Changes To SAQ A

I had a question this week regarding v3.2 of SAQ A that pointed out there have been some changes to that SAQ that people may not have noticed given the larger issues with the changes to the PCI DSS.  As a reminder, SAQ A is the smallest set of PCI requirements with which any in-scope organization can comply.

Those added requirements to SAQ A are:

  • 2.1(a) – Are vendor-supplied defaults always changed before installing a system on the network?
  • 2.1(b) – Are unnecessary default accounts removed or disabled before installing a system on the network?
  • 8.1.1 – Are all users assigned a unique ID before allowing them to access system components or cardholder data?
  • 8.1.3 – Is access for any terminated users immediately deactivated or removed?
  • 8.2 – In addition to assigning a unique ID, is one or more of the following methods employed to authenticate all users?
  • 8.2.3(a) – Are user password parameters configured to require passwords/passphrases meet the following?
  • 8.5 – Are group, shared, or generic accounts, passwords, or other authentication methods prohibited?
  • 12.10.1(a) – Has an incident response plan been created to be implemented in the event of system breach?

Even when a merchant outsources all of their card processing, these are controls that can still apply because in a lot of cases, a merchant is responsible for setup, management and maintenance of their outsourced payment processes and applications.

In addition, merchant employees will also interact with an outsourced payment system to handle chargebacks and disputes.  Those user accounts used by the outsourced environment will normally be managed by someone at the merchant, not necessarily the service provider.
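
For those accounts, the 8.2.3(a) item in the list above is checking the password parameters spelled out in PCI DSS requirement 8.2.3: a minimum length of at least seven characters, containing both numeric and alphabetic characters.  A minimal Python sketch of that check (the function name is mine) looks like this.

  import re

  def meets_8_2_3(password: str) -> bool:
      """Check the two parameters from PCI DSS requirement 8.2.3: a minimum
      length of at least seven characters and a mix of numeric and alphabetic
      characters. Organizations can, and often should, go further."""
      return (len(password) >= 7
              and re.search(r"[0-9]", password) is not None
              and re.search(r"[A-Za-z]", password) is not None)

  print(meets_8_2_3("winter1"))   # True -- meets the minimum, though hardly strong
  print(meets_8_2_3("letmein"))   # False -- no numeric character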

In regards to incident response, the merchant will be involved with incident response even when they have totally outsourced their payment environment.  The merchant will have to work through any incident with their outsourcer, and those responsibilities of the merchant need to be documented.

As a result, the addition of these controls should not be a surprise to anyone.

13 Feb 16

Crystal Ball Time

I was recently reminded that we are in year three of the current PCI DSS version. According to the PCI SSC’s standard lifecycle, that means the Council is starting to work on version 4 of the PCI DSS and PA-DSS.

So what is coming in version 4 of the PCI DSS? The Guru and his fellow PCI Wizards have some thoughts.

Business As Usual

I have written about this before. Everyone is betting that business as usual (BAU) will make its official appearance in the next version of the PCI DSS. If the past is any indication, BAU will most likely be a “best practice” until June 30, 2018 and then required thereafter.

For those that have forgotten, BAU was introduced as a concept with v3 of the PCI DSS. Pages 13 and 14 of the PCI DSS discuss the concept of BAU. The idea being to get organizations to integrate certain requirements of the PCI DSS into their organizational procedures to better ensure the security of cardholder data (CHD).

When BAU was first discussed back in 2013, I was able to identify at least 213 requirements that could be considered as BAU requirements (277 if you include Appendix A). Samples of BAU requirements include:

  • 1.1.2.a Examine diagram(s) and observe network configurations to verify that a current network diagram exists and that it documents all connections to the cardholder data environment, including any wireless networks,
  • 2.2.2.b Identify any enabled insecure services, daemons, or protocols and interview personnel to verify they are justified per documented configuration standards,
  • 3.1.a Examine the data-retention and disposal policies, procedures and processes,
  • 5.1 For a sample of system components including all operating system types commonly affected by malicious software, verify that anti-virus software is deployed if applicable anti-virus technology exists,
  • 10.6.3.b Observe processes and interview personnel to verify that follow-up to exceptions and anomalies is performed,
  • 11.2.1.a Review the scan reports and verify that four quarterly internal scans occurred in the most recent 12-month period, and
  • 12.8.1 Verify that a list of service providers is maintained.

Essentially, all the prospective BAU requirements are those that have some sort of timing involved for evaluating them.
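
To make that concrete, here is a rough Python sketch of the kind of timing check that sits behind one of those items, 11.2.1.a’s four quarterly internal scans in a trailing 12-month period.  The gap threshold is my own assumption; the DSS itself only says quarterly.

  from datetime import date, timedelta

  def quarterly_scans_ok(scan_dates, as_of):
      """Were there at least four internal scans, roughly quarterly,
      in the 12 months leading up to as_of?"""
      window = [d for d in sorted(scan_dates) if as_of - d <= timedelta(days=365)]
      if len(window) < 4:
          return False
      gaps = [(later - earlier).days for earlier, later in zip(window, window[1:])]
      return all(gap <= 110 for gap in gaps)  # ~quarterly, with a little slack

  scans = [date(2016, 1, 15), date(2016, 4, 12), date(2016, 7, 20), date(2016, 10, 5)]
  print(quarterly_scans_ok(scans, as_of=date(2016, 12, 1)))  # True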

Not only will adding in BAU be at issue, but I am guessing that the Council will probably reduce their use of the words “periodic” and “periodically” in the next version of the DSS. That is because with BAU they will likely begin to recommend actual minimum time frames for conducting these procedures. Known timings in v3 include annually, quarterly, daily, whenever changes occur and my personal favorite, whenever “significant” changes occur. Which means it will be all the more important for organizations to define what a “significant” change means in their environment.

But the introduction of BAU will likely not totally wipe out the use of “periodic” in the DSS. As a result, organizations will still have to define for those requirements that use “periodic” or “periodically” what the period is based on their risk assessment.

The bottom line is that if you have not been thinking ahead you need to get thinking ahead as to how BAU is going to impact your organization.

Requirement 9.9

New with v3 was the addition of requirement 9.9, which focuses on managing the risks to the card terminal or point of interaction (POI).  Originally dismissed by most organizations as an annoyance, 9.9 and the security of the POI have come into laser-like focus after recent events with Safeway, other merchants and ATMs.  And with more and more merchants implementing point-to-point encryption (P2PE) or end-to-end encryption (E2EE), the security of the POI becomes all the more important as it is the only device that remains in-scope for the merchant.  It is believed that, with all of the issues with POI that have occurred, the Council will focus on beefing up 9.9 to address these higher risks by defining review schedules.

The first requirement that merchants need to deal with much better than they have is 9.9.1, which addresses maintaining an accurate inventory of POI.  If a merchant has not implemented P2PE/E2EE, then the risk that a POI is swapped out for one that has been tampered with is extremely high.  Tampering is typically done so that the POI does not appear to have been touched, such as by installing a USB drive to collect data or loading revised software onto the POI.

The quickest and easiest way to identify swapped out POI is to periodically review your POI inventory and make sure that all the serial numbers of those POI in use are the same as the numbers on your inventory. If they are not, then you should take that POI out of service and get a trustworthy one from your transaction processor.
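
That reconciliation can be as simple as the following Python sketch; the serial numbers are made up for illustration.

  def reconcile_poi(inventory_serials, observed_serials):
      """Flag POI on the floor that are not on the inventory of record,
      and inventoried POI that have gone missing."""
      inventory = set(inventory_serials)
      observed = set(observed_serials)
      return {
          "unexpected": sorted(observed - inventory),  # possible swapped/tampered device
          "missing": sorted(inventory - observed),     # device removed or stolen
      }

  print(reconcile_poi(["V123", "V124", "V125"], ["V123", "V124", "V999"]))
  # {'unexpected': ['V999'], 'missing': ['V125']}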

Another control that you should use is tamper-evident tape over the seams of the POI.  If someone opens the POI, the tape will be torn and it should be obvious that the POI has been tampered with.

Earlier I distinguished merchants that have implemented P2PE/E2EE from those that have not.  So what, if anything, should merchants with P2PE/E2EE implemented be doing?  P2PE/E2EE solutions require the injection of an initial key into the terminal as well as recording a device number.  If any of that information changes, the POI will not work.  So if the terminal is swapped out, it will not work without the processor properly configuring the POI.  This does not mean that the merchant does not have to manage their POI inventory; it just means that reviewing that inventory does not need to occur as often.

So how often should POI be reviewed against inventory? For merchants that have not implemented P2PE/E2EE, I would recommend no less than weekly. However, I do have clients that review their POI at every manager shift change. These are clients that have had POI swapped out and/or tampered with. Merchants that have implemented P2PE/E2EE should review their POI inventory at least semi-annually if not quarterly.

The next area that will likely be enhanced is requirement 9.9.2 which addresses the inspection of devices. I would anticipate that merchants will now be required to develop detailed procedures for the examination of POI by their retail management and cashiers.

But unlike with 9.9.1, this will not be broken down by those merchants that have and have not implemented P2PE/E2EE. Why? Because of terminal overlays that can be placed on top of certain POI and go unnoticed. These overlays can collect CHD regardless of whether P2PE/E2EE is implemented. As a result, examination of POI will likely be required to be done daily if not more often based on the assessed risk.

For merchants that have implemented E2EE, an additional check may be needed depending on where E2EE is implemented.  If the E2EE solution does not encrypt at the POI, then these merchants will also have to examine their point of sale (POS) in addition to their POI in order to comply with 9.9.2.  One of the big reasons for this is Target style breaches, where a memory scraper was used to obtain CHD because encryption was done at the POS, not the POI, and the connection between POI and POS was in clear text.

For merchants that use integrated keyboard card readers or USB card readers, there are other schemes such as keyboard loggers, USB adapters and other attacks that focus on the POS for obtaining CHD. For those merchants, they will also need to be doing daily inspections of their POS systems to ensure that equipment that does not belong is quickly identified.

The final requirement that will likely see changes is with 9.9.3 which is all about training retail personnel in the procedures of protecting the POI. With all of the changes to 9.9.1 and 9.9.2, you had to know that training would also have to change.

The biggest change will be the mandatory training. Yes, it was mandatory before, but with the focus on tampering with POI, training of retail staff will be very important. Why? Because they are the people on the front line and would be the ones most likely to notice something not right with their work area.

In order for these people to notice things out of order, they need to be trained, trained often and trained repeatedly.  This is why security awareness training is so frustrating for security professionals: it takes more than just one session every year to be effective.  That, and the fact that not everyone catches on at the same time with the same amount of training, and there will be some people that never catch on.

The bottom line here is that your retail personnel will need to be drilled regularly on POI and POS security. How you choose to accomplish that training effort is up to your organization. But doing little or no training will not be an option.

SSL/Early TLS

The obvious change here will be the integration of the new deadlines for SSL and early TLS.

That said, the Council may actually provide some additional changes in section 4 regarding secure communication over TLS. In addition, by the time v4 of the PCI DSS is released, TLS v1.3 will likely be an official standard so changes in that regard may also be included.

Miscellaneous

Finally, I am sure there will be the usual wording and clarification changes that have come with every prior release.

Depending on any breaches that occur in the next few months, there may be some additional surprises in v4 to address the vulnerabilities identified by those breaches.

So there you have it. Start getting ready for the new PCI DSS v4 that will show up around November 2016.

07 Dec 15

Using SAQ C

There seems to be a lot of confusion over SAQ C and when it can and should be used. SAQ C was developed for the franchise industry, particularly the fast food and small retailer. The idea was that the franchisee would implement the franchise preferred point of sale (POS) solution, connect that POS solution to the Internet and start processing transactions.

Before going any further I must add the following caveat to this post. While I have based this post on all of the training and discussion over the years with the PCI SSC regarding SAQ C, this post is only my opinion and does not mean I am correct. The only official answer is the one you get from your acquiring bank. It is up to your acquiring bank to determine what SAQ your organization should use for your PCI assessment.

To refresh everyone’s memory about SAQ C, the criteria for using SAQ C are as follows.

  • “Your company has a payment application system and an Internet connection on the same device and/or same local area network (LAN);

  • The payment application system/Internet device is not connected to any other systems within your environment (this can be achieved via network segmentation to isolate payment application system/Internet device from all other systems);

  • The physical location of the POS environment is not connected to other premises or locations, and any LAN is for a single location only;

  • Your company retains only paper reports or paper copies of receipts, and these documents are not received electronically; and

  • Your company does not store cardholder data in electronic format.”

The keys to understanding SAQ C are the second and third bullets.  The third bullet indicates that the POS application cannot be connected to any other locations.  The second bullet indicates that the payment application cannot be connected to any other systems within the organization’s processing environment.  The bottom line is that the solution must be stand alone and fully segmented away from any other applications and systems.

These criteria can be easily met by solutions such as the MICROS e7 POS solution but can run afoul of integrated systems such as the MICROS RES or other similar fully integrated solutions that offer accounting, timekeeping, order management, inventory and other applications in addition to POS.

The MICROS RES solution can use SAQ C if and only if the POS application can be logically or physically segmented from the rest of the MICROS RES applications. However, in my experience, the MICROS RES and similar applications must operate as a single, integrated solution and segmentation is not possible and therefore SAQ C cannot be used.

Another place where SAQ C cannot be used is where the franchisee is linking all of their locations together back to a corporate office. I encounter this a lot where the franchisee has multiple locations and all of those locations are on a wide area network (WAN) connected to their corporate office. Transactions may be flowing directly out from the retail locations or funneled back to corporate and then out to the transaction processor. Corporate may also be monitoring the local location networks and managing the local locations’ systems and applications.

I also encounter situations where the franchisee is connected to the franchise corporate office for the ordering of inventory and the collection of sales information.  This situation occurs most commonly with fast food franchise operations and in the lodging industry, where locations are connected to the franchise corporate networks for passing information to/from the local systems.  The corporate franchise may also be managing and maintaining the franchisee systems as part of the franchise agreement.  All of these situations also preclude the use of SAQ C.

The bottom line is that SAQ C can only be used in situations where you have a LAN-based POS and no other applications or network connectivity other than to the Internet for the sole purpose of processing transactions.
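
If it helps to see the criteria as a checklist, here is a small Python sketch that encodes the five bullets quoted above; the field names are mine, and, as noted earlier, the acquiring bank makes the actual call.

  def saq_c_eligible(env):
      """Treat the SAQ C criteria quoted above as a simple checklist."""
      return all([
          env["pos_and_internet_on_same_lan"],
          not env["pos_connected_to_other_internal_systems"],
          not env["location_connected_to_other_premises"],
          env["only_paper_reports_and_receipts"],
          not env["stores_chd_electronically"],
      ])

  franchise_store = {
      "pos_and_internet_on_same_lan": True,
      "pos_connected_to_other_internal_systems": False,
      "location_connected_to_other_premises": True,   # WAN back to corporate
      "only_paper_reports_and_receipts": True,
      "stores_chd_electronically": False,
  }
  print(saq_c_eligible(franchise_store))  # False -- the corporate WAN rules out SAQ C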

So what does a merchant do when SAQ C is not an option? Sorry, but in my humble opinion, the merchant version of SAQ D is your only option when you have an integrated POS solution on a network.

Again, as a final reminder, it really does not matter what I think as all of this is up to your acquiring bank to officially approve. I am just giving my thoughts as to how I think things should work based on my training.

07 Jan 15

SAQ A And SAQ A-EP Clarification

With the advent of SAQ A and A-EP, there seems to be confusion as to what meets what for each SAQ.  I thought I covered this rather well in my post titled ‘Of Redirects And Reposts’.  But apparently that was not clear enough.

For outsourced eCommerce solutions, the criteria from SAQ A states it can be used if and only if:

“The entirety of all payment pages delivered to the consumer’s browser originates directly from a third-party PCI DSS validated service provider(s).”

For some service providers, there seems to be a lot of misunderstanding as to what constitutes “originates directly from a third party”.  A lot of processors believe that if their customers’ Web sites are not storing cardholder data (CHD) or sensitive authentication data (SAD), then they are out of scope regardless of the method used to process a customer’s payment.  What they seem to forget is that applications that process and/or transmit CHD/SAD are in-scope for PCI compliance just as much as those that store CHD/SAD.

For SAQ A and A-EP, the Council took their lead from Visa Europe as to what is meant by “originates directly”.  Visa Europe’s Processing eCommerce Payments guide has a great matrix that explains the difference between SAQ A and A-EP by payment processing type and merchant level.

[Figure: Visa Europe matrix showing SAQ A, SAQ A-EP or ROC applicability by payment processing type and merchant level]

With redirects and iFrames, the merchant’s Web server never comes into contact with the CHD or SAD because the customer is communicating directly with the transaction processor’s server.  PayPal is a prime example of a redirect and meets the criteria of SAQ A.  With a direct post, JavaScript, XML or any other techniques, the merchant’s eCommerce server is at least processing and/or transmitting the CHD/SAD to the processor’s servers.  That is because there is some form of code/executable/script/etc. that is running on the merchant’s eCommerce server thus placing it directly in-scope.
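
To make the distinction concrete, here is a hedged sketch using Flask with hypothetical processor URLs.  In the iFrame case the merchant server only serves markup pointing at the processor; in the direct post case the PAN transits the merchant server, which is exactly what pulls it in-scope.

  from flask import Flask, request
  import requests

  app = Flask(__name__)

  # Redirect/iFrame style: the page merely embeds the processor's hosted payment
  # page, so CHD/SAD never touch this server (SAQ A territory).
  @app.route("/pay-iframe")
  def pay_iframe():
      return '<iframe src="https://payments.example-processor.com/hosted-page"></iframe>'

  # Direct post / transparent redirect style: the PAN arrives here first and is
  # then forwarded, which puts this server squarely in-scope (SAQ A-EP at best).
  @app.route("/pay-direct", methods=["POST"])
  def pay_direct():
      pan = request.form["card_number"]   # CHD transits the merchant server
      resp = requests.post("https://api.example-processor.com/charge", data={"pan": pan})
      return resp.text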

Where things seem to get confusing is with processors that offer multiple methods of completing payments.  Unfortunately, it also appears to be just as confusing to the processors’ sales personnel as well.  We have encountered numerous instances where the processor’s sales people believe all of their solutions make the merchant out of scope when only the redirect/iFrame solution they have provides such a scope reduction.  All of their other solutions place the merchant directly in-scope.

The bottom line is that it is extremely important to get the transaction processor to explain how a payment is processed to determine whether your server is or is not out of scope.  Even if the sales person says the solution is an iFrame or a redirect, make sure to quiz them enough to ensure that they truly are delivering you an iFrame or redirect solution.

But a word to the wise.  Security professionals will question a merchant’s decision to not worry about the security of their eCommerce Web server because there still is a risk even with the redirect or iFrame approaches.  That risk is that the code/executable/script/etc. that invokes the redirect or iFrame on the merchant’s server gets tampered with or changed and now invokes a Web site that is not the transaction processor’s Web site.  As a result, a merchant’s customers’ CHD/SAD could be sent to Timbuktu and no one would be the wiser until goods/services are not provided due to non-payment.

As a result, security conscious merchants will, at a minimum, ensure their eCommerce servers are properly security hardened, patched current and will monitor the code/executable/script/etc. for changes.  Should a change be detected, the server would then be brought offline and fixed to ensure that transactions are properly processed.
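
A minimal sketch of that change monitoring, assuming a hypothetical file path and a hash recorded at deployment, might look like the following.

  import hashlib
  from pathlib import Path

  # Hypothetical path to the file that builds the redirect/iFrame to the processor.
  PAYMENT_TEMPLATE = Path("/var/www/shop/templates/payment_redirect.html")
  KNOWN_GOOD_SHA256 = "replace-with-the-hash-recorded-at-deployment"

  def payment_page_unchanged() -> bool:
      """Compare the current hash of the redirect/iFrame code to the known-good value."""
      current = hashlib.sha256(PAYMENT_TEMPLATE.read_bytes()).hexdigest()
      return current == KNOWN_GOOD_SHA256

  if not payment_page_unchanged():
      # In practice this would page someone and take the page offline.
      print("ALERT: payment redirect code has changed -- investigate before taking orders")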

Hopefully this provides everyone with clarity on how to use these SAQs properly.

One additional thing I would like to point out.  If you look at the Level 1 merchant line of the Visa Europe matrix, it shows ROC subscripted with either an ‘A’ or an ‘A-EP’.  I point this out because if you meet the criteria of either of the SAQs but are a Level 1 merchant, you can mark all of the ROC requirements not in the respective SAQ as ‘Not Applicable’ and only provide testing evidence for those requirements in the relevant SAQ.



