Archive for the 'Card Brands' Category

July 5, 2017

NESA – Guidance In Search Of A Problem

On Thursday, June 29, the PCI SSC held their quarterly Assessor update webinar.  One of the more interesting discussions was on the topic of the non-listed encryption solution assessment or NESA.

For those unfamiliar with NESA, it is an attempt by the Council to have all end-to-end encryption (E2EE) solutions such as First Data’s TransArmor and Verifone’s Verishield assessed against the relevant PCI P2PE standards to ensure they are secure.  The problem is that the card brands and the banks have not gotten behind the NESA approach, so it has effectively stalled, much like the P2PE program itself.  But on the Thursday webinar we found out just how badly it has stalled and that the Council seems to be getting desperate to salvage it.

The goals of NESA are:

  • The Council reiterated that a P2PE-QSA is required to conduct the assessment, using the PCI P2PE assessment programs as guidance. Essentially, the NESA is a P2PE validation without the Council’s certification and listing of the solution on the Council’s Web site.
  • NESA provides a consistent approach to evaluating non-listed encryption solutions against “best practices”.
  • It provides other PCI assessors, acquiring banks and merchants with information about the risk and PCI DSS responsibilities when using a non-listed encryption solution.
  • It provides input to a merchant’s QSA to consider when conducting the merchant’s PCI assessment.

All of these are admirable goals of the NESA.  But the question still remains, do we need the NESA?

According to the Council a lot of people in the “payments community” have been clamoring for NESA.  I am not sure exactly who the Council is referring to as the “payments community” but it certainly has not been the banks or the brands.  Those two constituencies are already partnered up with E2EE and P2PE solutions and have not been clamoring for anything other than to use those solutions.

The Council did bring up the organizations behind the solutions already listed as P2PE validated.  That would make sense as they have a vested interest in forcing non-listed encryption solutions through the process.  But as to banks, the brands and QSAs pushing this agenda?  I would seriously doubt it.

Then there is the issue that the Council says that QSAs are stumped when they encounter an E2EE solution.  The process of assessing E2EE solutions has been known by QSAs since E2EE solutions were rolled out years ago by the various vendors.  But with the introduction of P2PE, I would bet that the Council’s QSA/ISA training does not cover how to handle E2EE solutions.  And I am sure since the invention of the NESA process, they have even more reasons not to instruct QSAs on how to assess an E2EE solution.  Yet I am sure that they still discuss how to assess an application that is not PA-DSS validated.  That is a “shame on them” for ignoring the realities of the real world.

But the process is not that involved.  When encountering an E2EE solution, the QSA needs to ensure that the E2EE solution is implemented according to its implementation guide (IG).  A transaction processor/gateway or an acquiring bank may also require packet captures to ensure that the data stream is encrypted.  All of that assessment and testing documentation is submitted to the acquiring bank and the bank explicitly grants the merchant scope reduction.  Then the QSA can follow the requirements in SAQ P2PE for an assessment.  All of which adds probably two hours to a merchant’s PCI assessment versus the costs of a full-on P2PE assessment.  When looking at the costs of a P2PE assessment plus the listing fees to have the solution placed on the Council’s Web site, is it any wonder a lot of E2EE solution providers have eschewed the P2PE program?
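To illustrate the packet capture step, here is a minimal sketch of the kind of check a reviewer might run over the decoded bytes of a capture: look for 13 to 16 digit runs that pass the Luhn checksum, a crude indicator that PANs are crossing the wire in the clear.  The function names are illustrative (not from any PCI tool), and a real review would use tcpdump/Wireshark and account for encodings beyond plain ASCII.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_cleartext_pans(data: bytes) -> list[str]:
    """Scan raw capture bytes for isolated 13-16 digit runs that pass
    Luhn -- any hit suggests the data stream is NOT encrypted."""
    text = data.decode("latin-1")  # lossless byte-to-char mapping
    return [m.group()
            for m in re.finditer(r"(?<!\d)\d{13,16}(?!\d)", text)
            if luhn_ok(m.group())]

# Example: a capture containing a test PAN in the clear gets flagged.
sample = b"POST /sale HTTP/1.1\r\n\r\npan=4111111111111111&exp=1225"
print(find_cleartext_pans(sample))
```

If the same capture taken downstream of the POI yields no hits while entry of a test card is in progress, that is one piece of the evidence an acquiring bank would want before granting scope reduction.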

First Data and Verifone have been adamant since P2PE was introduced that they will never go through P2PE because it is not needed.  Given they are partnered with most of the large processors and banks, their lack of support for P2PE means a lot and also means that until they get on board with either NESA or P2PE, both of these standards are in trouble.

But the most troubling comments occurred at the end of the Council’s brief discussion of NESA.

  • NESA is NOT a program. It is only “guidance”.
  • NESA may not result in scope reduction.
  • There is no formal NESA documentation or template.

When the Council says that something is “guidance”, there is no mandate for anyone to do anything.  This is how QSAs are to treat those Information Supplements published periodically by the Council.  In this case, NESA is only a suggestion.  So, until the brands and banks get behind the NESA process, there is no reason to have a NESA performed.

The next two comments go together.  If there is no formal deliverable for QSAs to review, how does a QSA evaluate that any NESA process was conducted adequately?  And if that is the case, of course the granting of scope reduction is not likely.  After all, if a QSA is not sure about the NESA, how is the bank supposed to evaluate it, let alone pay for it?  And if scope reduction is not achieved, then what in the world is the point of NESA in the first place?  The only purpose I can see is to give P2PE-QSACs an ability to push their services on the E2EE solution vendors to make their services worth the cost incurred with the Council.

The only other benefit that I can see is an opportunity for certain P2PE-QSACs to flood us all with NESA Certificates since their PCI Compliance certificates are worthless.

But in the end, you really start to wonder what the Council was thinking when they put this process together.  Time will tell, but I am guessing and hoping that NESA, like P2PE, will die a quick and quiet death.

June 9, 2017

We Need A Change To 2.3.b

I just wanted to give everyone a “heads up” about some guidance we recently received from the PCI SSC regarding jump boxes or out-of-band (OOB) system management solutions and the use of insecure protocols such as SNMPv1/2 and Telnet.

But did everyone know that this solution also requires a compensating control worksheet (CCW)?

For years (at least since the Phoenix Community Meeting years ago), the Council has been recommending the use of firewalls and jump boxes as a way to secure instances where organizations need to use insecure protocols.  These enclaves are firewalled, VLAN’d and configured so that only the jump box can be used to connect remotely to the devices over Telnet, keeping that and other insecure protocols away from the rest of the network.  However, I do not recall any of those discussions ever explicitly calling out the need for a CCW.  I suppose the Council just figured we would all be bright enough to write one up.

What led me to this revelation you ask?

When I was going through my QSA Requalification this spring, they had a scenario with a jump box solution.  One of the questions related to the scenario involved how you would create a CCW for the insecure protocols used in the administrative VLAN to which the jump box provided access.  While I answered the questions correctly, it triggered a new question regarding why a CCW was needed in the first place.

Then when the question was posed back to the Council, we got a reply indicating that a CCW would be required because of requirement 2.3.b which states:

“Review services and parameter files on systems to determine that Telnet and other insecure remote-login commands are not available for non-console access.”

The problem with the requirement is that it treats all Telnet with equal disdain regardless of risk.  Yes, Telnet is always a clear text protocol, but when it is buried two or three layers away from any general network or the internet and requires administrator credentials and MFA, it is hardly as “at risk” as it was when PCI started over 15 years ago and networks were as flat as a piece of paper.
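The reachability half of a 2.3.b review boils down to confirming that the insecure services simply cannot be reached from outside the enclave.  A minimal sketch of that probe is below; the names and port list are illustrative, a real review also inspects service and parameter files on the systems themselves, and SNMPv1/v2c runs over UDP so it needs a different probe than the TCP connect used here.

```python
import socket

# Insecure clear-text TCP services commonly called out during a 2.3.b
# review.  (SNMPv1/v2c is UDP and is deliberately not in this list.)
INSECURE_TCP_PORTS = {23: "telnet", 512: "rexec", 513: "rlogin", 514: "rsh"}

def probe_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def review_host(host: str) -> list[str]:
    """Return the insecure services reachable on a host.  A non-empty
    result from outside the management enclave means 2.3.b testing
    would fail without a compensating control."""
    return [name for port, name in sorted(INSECURE_TCP_PORTS.items())
            if probe_tcp(host, port)]
```

Run from a workstation on a general network, `review_host()` should come back empty if the firewall/VLAN isolation is doing its job; run from the jump box, Telnet may legitimately appear, which is exactly the situation the CCW has to document.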

As a result, I would like to recommend that the Council work to change 2.3.b to take into account the use of network segmentation, firewalls, VLANs, ACLs, MFA and jump boxes to allow the use of Telnet and insecure protocols when in a properly isolated and secure environment.  It seems silly to me that someone goes through all of the right steps to secure their environment only to be told that they still need a compensating control to meet a requirement that does not reflect the real risk.

The other reason I feel this needs to be addressed is that a lot of banks and processors seem to see CCWs as a huge red flag.  Something to be avoided at all costs because it implies to them non-compliance.  And non-compliance is a “bad” thing.  I cannot tell you the collective hand wringing some banks go through for really simple CCWs all because they do not want to have any PCI assessments with CCWs.

Ultimately I think this all comes down to the fact that those banks and processors have no clue as to the amount of risk any CCW presents.  This is because most banks and processors staff their PCI compliance areas with auditors and compliance professionals, not technicians.  Given that the PCI DSS is predominantly about security technology and its implementation, these auditors and compliance people are not equipped to make the decisions that typically need to be made regarding CCWs.  As a result, they are all high risk in their eyes and treated accordingly.

Hopefully the Council can address this situation and we can avoid needless documentation for a preferred “best practice”.

May 22, 2017

Answering Some Dream Team Questions

After our PCI Dream Team event on May 17, I thought I would take some questions that do not require long and involved answers and publish them in this post.  FYI – I have edited and spell checked these, so they likely do not look like you entered them, but they should convey your questions as you asked them.  Hopefully I answered one of your questions.

Q: Does anything special need to be done with the use of Virtual Terminals?  We use the virtual terminals to manually enter credit cards from time to time.  The computers used are normal user computers with the basic security done, but I have been wondering if they need to have extra limitations or security put in?

A: There are a lot of solutions that imply they take the workstation/terminal out of scope or magically reduce scope when using virtual desktop (VDI) solutions.  None of it is true.  If a device is used to enter PAN (regardless of how), it is a Category 1 device.  The bottom line is that any device used to enter PAN is in scope for full PCI compliance.  There is no “magic” to change that fact.

Q: Do all POI devices have a keypad?  I’m thinking of PCs with integrated MCRs – will those all change to separate POIs with a keypad?

A: All point of interaction (POI), aka card terminals, that are customer facing have a keypad because they need to be able to accept PIN entry.  Merchants that are going to P2PE/E2EE solutions end up with a separate POI that is connected to the POS PC/terminal via USB so that the POS solution can communicate the total price of the sale as well as to know if the transaction is approved or declined.  The POI securely communicates with the transaction processor over Ethernet or using the USB connection and the Ethernet connection of the POS PC.  In both cases, the POS PC never has access to the sensitive authentication data (SAD)/cardholder data (CHD) as it is encrypted at the POI.  However, if using an E2EE solution, the QSA will need to validate the E2EE solution to ensure that it does in fact encrypt at the POI and that the POS PC/terminal is therefore out of scope.  In addition, the merchant will have to contact their acquiring bank to get formal approval that the E2EE solution gives scope reduction for the merchant.  This will likely require the QSA to provide their evidence and assessment procedures to the acquiring bank for that approval.

Q: Are administrator workstations always in scope for PCI DSS regardless if an administrator is connecting to CDE servers via jump box?

A: Yes, because they are “connected to” systems when they access the jump box.  They may not be entering cardholder data (CHD), but they likely can access it or influence its processing/transmission because they are administrators.  That said, I would treat them in the Open PCI Scoping Toolkit vernacular as a Category 2x system.  That means they can probably avoid the bulk of PCI requirements but, at a minimum, need to be properly security hardened, kept updated, have anti-virus/anti-malware and are monitored “periodically”.  And as a reminder, administrators will need to use multi-factor authentication (MFA) after January 31, 2018 when accessing the cardholder data environment (CDE).

Q: Are you having/forcing your clients to follow the December scoping guidance, and are you bringing administrator workstations into scope?

A: I guess I am curious as to when anyone would have thought that administrator workstations ever were out of scope?  Nothing has changed in that regard as they were always in scope for PCI compliance.

Q: Are “crash kits” in restaurants for use when the system is down in scope for compliance?

A: The kits themselves are not in scope, but when they get used, the forms that get generated which contain the embossed image or handwritten PAN and other sensitive authentication data (SAD)/cardholder data (CHD) place those forms in scope for PCI compliance.  They therefore need to be securely stored, securely transmitted and subsequently securely destroyed in accordance with the relevant requirements in section 9.

Q: Does pushing non-cardholder data out of a CDE system exclude the connected system from PCI scope?  For example, pushing non-cardholder data such as CPU usage for monitoring or the number of transactions per day used for reporting.

A: According to a discussion at the 2016 Community Meeting and a subsequent Assessor call, the Council has publicly stated that if it can be unequivocally proven that the flow is only outbound from the cardholder data environment (CDE) to a device and that the data does not contain cardholder data (CHD), that device can be ruled out of scope.  However you have to get your QSA to buy into that argument and I do not know too many QSAs that will agree with that decision.  In my experience, there is still too much of a risk that cardholder data (CHD) could leak through that flow and saying it is out of scope is not accurate nor is it good practice as it leads to an exfiltration point that is not monitored.  The question you have to ask yourself is, how will it look in that newspaper headline when your organization is breached that you ruled it out of scope because it was outbound only?

Q: PCI DSS requires a firewall in place, are host level firewalls meeting that requirement?

A: Yes, as long as they perform stateful packet inspection (SPI), they are properly and securely configured and they are appropriately monitored like any other in scope firewall.

Q: Regarding vulnerability assessments for internal scans, do we have to address medium vulnerabilities or only critical and high vulnerabilities?

A: The PCI DSS and the Council have been very clear on this which is why it is disconcerting when this question constantly gets asked.  The guidance for requirement 6.2 is very clear as it states, “Consider prioritizing patch installations such that security patches for critical or at-risk systems are installed within 30 days, and other lower-risk patches are installed within 2-3 months.”  The bottom line is that you need to apply ALL patches/updates to all in scope systems as soon as possible.  So get on with patching and updates, no excuses.

Q: More than once I’ve been told that the decision to implement PCI compliant controls is a financial decision. What are the expected fines and penalties for failing?

A: No organization gets to ignore any PCI requirement because of financial or any other reasons.  However in those cases where a requirement cannot be directly met, an organization must then come up with compensating controls that go above and beyond that requirement in order to be in compliance.  In my experience, it is almost always cheaper to meet the PCI requirement than to go the compensating control worksheet approach.  You will have to talk to the card brands as they are the ones that come up with the fines and penalties.

Q: Do you ever foresee the card brands implementing any sort safe harbor clauses in regard to PCI?  If a merchant is doing their best to be secure and (more importantly, as far as PCI is concerned) compliant and they are breached, as it stands right now, PCI will not help you.  Instead, PCI seems to be wielded as a weapon to extract fines from the merchant.

A: You are joking right?  LOL!  Actually, with merchants going to P2PE/E2EE and tokenization solutions, I could envision changes in the PCI compliance process at the merchant level because the risk is only with the POI.  Time will tell.

Q: Have you heard anything further regarding the FTC’s review of PCI?

A: Not a word and I would not expect to hear anything until the FTC decides to tell us anything.  I do know that issues regarding the FTC’s information requests from the QSACs were supposedly worked out and that the requested information was delivered to the FTC.  But that is the extent of my knowledge on the matter.

March 15, 2017

Why We Should Be Concerned About The Verifone Breach

On March 7, Brian Krebs broke the news that Verifone, one of the largest card terminal manufacturers, had suffered a breach. The next day Verifone told the world that the breach was no big deal. No big deal, right? Probably not, and here is my rationale.

For those of you unfamiliar with Verifone, Verifone is not only a manufacturer of points of interaction (POI, aka card/transaction terminals), it also provides transaction processing services to merchants. As a result, any breach of such an organization puts a lot of the security of the card processing ecosystem at tremendous risk.

Extent Of The Breach

Here is what Verifone has told us about the extent of the breach.

“According to third-party forensic teams, this cyber attempt was limited to approximately two dozen U.S. gas station convenience stores and occurred over a short time period. No other merchants were targeted and the integrity of our payment networks and Verifone’s payment terminals remained secure and fully operational.

Verifone’s information security team identified evidence of this very limited cyber intrusion into our corporate network in January 2017, and we proactively notified Visa, MasterCard and other card schemes.

In concert with our partners, Verifone immediately implemented additional security controls across its corporate networks and began work to determine the type of information that may have been targeted.

It is also worth noting that there have been no adverse events or misuse of any data resulting from this incident. Verifone, partner agencies, and law enforcement remain vigilant and will continue to monitor for this.

We believe that our immediate response and coordination with partners and agencies has made the potential for misuse of information extremely limited.”

The first thing that any forensic examiner will tell you is that determining the extent of a breach is not a trivial process. It takes time. Most times, a lot of time. The reason is that attackers can be very stealthy in how they cover their tracks by wiping logs, leaving behind malware/backdoors and using other techniques to obscure what they did and how they did it.  Even though Verifone took almost two months to acknowledge the breach and tell everyone that things are fine, all may not necessarily be well within Verifone.  But only time will tell if that is true.

The troubling thing about Verifone’s statement and likely demanded by their lawyers is the wording at the very end of their statement as they start their last sentence – “We believe”. Legalese that will give them an out should their forensic teams find more issues or issues turn up later.

“Asked about the breach reports, a Verifone spokesman said the company saw evidence in January 2017 of an intrusion in a “limited portion” of its internal network, but that the breach never impacted its payment services network.”

This was followed up by an update by Mr. Krebs after his original post. Verifone stated:

“According to the forensic information to-date, the cyber attempt was limited to controllers at approximately two dozen gas stations, and occurred over a short time frame. We believe that no other merchants were targeted and the integrity of our networks and merchants’ payment terminals remain secure and fully operational.”

Hold on a moment.  What is a “short time frame”?  Oh, and by the way, the attackers had access to controllers at around two dozen gas stations?  And then there is that “According to the forensic information to-date” comment.  That statement would seem to imply that Verifone is not necessarily complete with their forensic examination.

So did Verifone or someone else find this breach?

“But a source with knowledge of the matter told KrebsOnSecurity.com that the employee alert Verifone sent out on Jan. 23, 2017 was in response to a notification that Verifone received from the credit card companies Visa and Mastercard just days earlier in January.”

So like most organizations, they were notified by a third party that they likely had been breached.  In this case, two card brands recognized fraudulent transactions that came from merchants serviced by Verifone.

But follow that statement with this one regarding what happened once they were notified.

 “Verifone’s information security team identified evidence of this very limited cyber intrusion into our corporate network in January 2017 …”

My concern with this and the prior statement is that it takes a while for the card brands to recognize fraud.  I have seen it take as little as a month and as much as two years for the brands to notify a merchant or service provider that they think there has been a breach.  The reason is that it depends on the extent of the breach (i.e., small versus large merchants, small versus large service provider(s), number of transactions/cards involved), how quickly the cards are used for committing fraud, how quickly those fraudulent transactions are reported back to banks by their customers, how quickly the brands determine a pattern and then how quickly that pattern traces back to a likely source or sources.  As a result, I am very suspect as to how long the intruders were in their network and the likelihood that the intrusion was truly as “limited” as Verifone is leading us to believe.

The bottom line in all of this, in my very humble opinion, is that this could just be the tip of the iceberg and this breach could be more extensive than Verifone knows and could have larger ramifications.

Why You Should Care

Given that I suspect that the attackers were in Verifone’s network for a while, I would assume that not just Verifone’s service provider operation was targeted and compromised.

The first clue to this suspicion is that Visa and MasterCard were the ones that notified Verifone that something was going on.  As I stated earlier, the brands take a while to determine a breach which likely means that the attackers were inside Verifone for more than just a short period of time.  In addition, it is rare that PANs collected in a breach are used immediately after they are obtained.  The reason is that there are bigger rewards if they are not used immediately.

The next clue in our puzzle is this statement from the Krebs post.

“The source said his employer shared with the card brands evidence that a Russian hacking group known for targeting payment providers and hospitality firms had compromised at least a portion of Verifone’s internal network.”

If this is accurate, then it is highly likely that not just card information was gathered.  What was also likely gathered was source code to things like card terminal firmware and software such as Verishield, Verifone’s end-to-end encryption (E2EE) solution.  Attackers focused on targeting payment providers would know that, once inside an organization like Verifone, they should take its software as well as cardholder data (CHD).  If you have the ability to exfiltrate CHD, why not exfiltrate other useful information such as source code, certificates, encryption keys and other sensitive information?

The only good news in this regard is that while a lot of transaction gateways and processors use Verishield, they all have their own certificates and encryption keys.  So the attackers would have only gotten certificates and keys for the merchants processing through Verifone.  Since Verifone is an encryption endpoint, it is possible that the attackers did not get the certificates or encryption keys because they would not necessarily need them to get at the clear text CHD.  However, one should never assume that is the case.

Now What?

The net of all of this is that if you have Verifone terminals and/or Verishield or other Verifone applications, you should probably be doing a lot more monitoring of that hardware and software since there is no reason to believe that it has not been compromised.

It will be interesting as time goes on to see if this is the end of the discussion or if more will come out on the Verifone breach.

November 20, 2016

Revenue Generation Or Payment Security?

Late on Friday, November 18, the PCI Security Standards Council issued a draft Information Supplement titled ‘Assessment Guidance for Non-Listed Encryption Solutions’.  For those of you that follow my blog, these solutions would be what I refer to as end-to-end encryption (E2EE) solutions.  This is a draft document, but I would bet there will be a lot of discussion regarding it.  The good news is that it is a draft and an Information Supplement, so it is not yet official and is only offering a suggestion of how organizations should proceed.

The biggest recommendation that comes from this Information Supplement is the one that will cause the most heartburn and the most discussion.  The Council is recommending that a P2PE QSA assess a vendor’s E2EE solution and issue a non-listed encryption solution assessment (NESA).  As you read further into the document, the NESA is just a different name for a P2PE assessment.  So essentially, what the Council is recommending is a P2PE assessment without the QA review and listing by the Council of the solution on their Web site.

All I can think of is that the Council is taking this approach so that First Data, Verifone and others will be forced to get their E2EE solutions P2PE validated.  After all, if you have to go through a P2PE assessment to allow merchants to use your solution, why stop there?  Why not just get it validated and listed on the Web site?

But the next thing that is troublesome is the implication that regular QSAs are not capable of adequately assessing an E2EE solution.  That somehow the mystical P2PE QSA training process imbues some sort of encryption omnipotence on those that attend and pass the test.  If you have ever looked at the P2PE Report On Validation (ROV), I think most QSAs could easily execute it.

But I think the real reason behind this Information Supplement is revenue.  The Council is driving revenue to their bottom line with these recommendations.  There will likely have to be more P2PE QSAs and those non-listed solutions will likely end up as P2PE validated.  All of those activities generate revenue for the Council.  Revenue that is needed since the card brands have limited their funding of the Council.

Another big reason to believe this is just a revenue generator for the Council is the fact that, unlike a lot of other Information Supplements, this one was not developed by a committee of card brands, Participating Organizations, QSAs or other stakeholders.  In the 14 pages that comprise this Information Supplement, there is no page that lists any outside contributors.

So other than the Council, who could be driving this Information Supplement?

The acquiring banks?  I just completed an assessment of a merchant using an E2EE solution recommended to the merchant by their acquiring bank.  The acquiring bank is a major player in the payment processing industry, so you would assume they would have pointed me to the P2PE ROV for the testing of the E2EE solution, but they did not.

First Data, TrustCommerce and Verifone have never pointed me to the P2PE ROV for assessing their E2EE solutions.  So the payment processors are not demanding this sort of assessment.

One would think that the card brands would have each issued a press release announcing this draft, but they did not.

That only leaves us with a unilateral decision made by the Council that this was necessary.

But the real question is, how does this Information Supplement improve the security of the payment process?

Have there been a huge number of E2EE solutions that have been breached and this is a response?  I have not heard of any nor have I seen anything in the media indicating that E2EE solutions are a problem.

Are there “fly by night” vendors of E2EE solutions running rampant in the industry?  Not that I have encountered but it would not surprise me if there were a few.  That said, the merchants I have worked with in implementing E2EE solutions only worked with vendors recommended by their acquiring bank, payment processor or payment gateway.  In most of these cases, the solutions were from First Data and Verifone who are widely trusted in the industry.

I suppose this could be a proactive step to get ahead of things getting out of control with E2EE solutions.  But if that were the case, one would think that the card brands and acquiring banks would have been on board and pushing this effort as well as the Council and explaining that they were being proactive.  Nothing on that front either.

That leaves us with the only purpose of this Information Supplement is to generate revenue for the Council at the expense of merchants, E2EE vendors and ultimately consumers.

The P2PE standard has been a big flop in the industry because, surprise, surprise, it is doing nothing to help the industry.  If it had been adopted by the big players such as First Data and Verifone, then we would probably be in a different place.  But there is a reason those big players and others never got on board, because the standard is too cumbersome, time consuming and onerous just like the now failing PA-DSS process.

Do not get me wrong, every organization has to make money to subsidize its existence.  But I am troubled that the Council now appears to be generating requirements for the purposes of revenue generation rather than the securing of the payment process.

It appears that we have turned a corner and that it may not be a good corner to have turned.

September 30, 2016

2016 North American PCI Community Meeting

It was a hectic week out in Las Vegas at the Community Meeting this year.  I wish I had more time this year to just hang out with everyone, but I was in the middle of a number of assessments that needed to get done, so I was working at night and attending sessions during the day.

By the time you read this, the slide decks from the sessions will have been posted on the Council’s Web site.  So all of you that attended will be able to download those presentations.  You go to the link provided in the program guide, provide your name, organization name, email address and the password from the program guide (ve4eqepR) and you are in.

The Council tried the 20 minute “TED Talk” format again with the Wednesday sessions.  A number of the sessions I attended could have easily used an extra 10 minutes if not a complete hour.  I know the Council is trying to move things along and get a lot of information covered, but topics like “the cloud” or EMV standards just cannot be properly covered in 20 minutes.  I do not care how good the speaker is or how organized the presentation.

Here are some of the more notable highlights.

The Assessor Session Is Back

Possibly the most anticipated session of the Community Meeting this year was the return of the Assessor Session after a two-year absence.  But unlike previous years, where this session occurred before the start of the Community Meeting, the returning Assessor Session was moved to the end of the Community Meeting.  I heard a number of complaints throughout the week from assessors about being at the end of the meeting.  Yet when Thursday lunch came around, there were a lot of QSAs, ISAs and ASVs that adjusted their travel schedules (Guru included) to attend this session.

While I originally agreed that moving the Assessor Session to the end was not a good idea, the more I have thought about it, the more I think it was better there.  That way, questions on topics that come up during the meeting can be answered while the assessors are all still together.  I know we all want to get home, but I think the Assessor Session offers more value to all of us at the end.

On the not so good side, the Council chose to use up an hour and 10 minutes presenting a variety of topics, some of which took far too long to discuss.  The larger question, though, is why this material was not presented during the main conference.  Not only did all of the meeting attendees miss out, but some people never got to ask their questions.  I am also sure that running long discouraged a lot of people from asking questions at all.

That said, there were a number of good questions asked during this session and the Council rewarded five people with large PCI SSC coffee mugs for their “good” questions.

One question, though, really created a stir.  I will address that question, regarding multi-factor authentication (MFA), in a separate post to be published later.  However, I will say this much: the Council really needs to go back and re-think its position on MFA if what was said is accurate.

The Council was asked about SAQ A and where it is headed.  The concern in the assessor community is that the mechanism that issues/controls the iFrame/redirect needs protection.  However the changes to SAQ A for v3.2 did not seem to address this obvious risk.  Based on how the question was answered, I am guessing that the hosting community is trying to keep SAQ A as simple and easy as possible regardless of the risk.

Another area the Council agreed to review was the change to requirement 3.2 in the ROC Reporting Template.  In v3.2 of the template, those requirements can no longer be marked Not Applicable, yet it was pointed out that an ‘NA’ is still allowed in SAQ D.  The reason for seeking this clarification relates to past guidance from the Council to follow the relevant SAQs for P2PE (SAQ P2PE) and outsourced eCommerce (SAQ A) when filling out a ROC for merchants with these solutions.  It was pointed out that neither of these SAQs contains requirement 3.2, so how is a QSA/ISA supposed to respond to it in the Reporting Template if it cannot be marked ‘NA’?

Understanding The Current Data Breach Landscape (aka Verizon DBIR Report Discussion)

When Verizon sends out Chris Novak, you know you will get a great presentation on the data breach incident report aka ‘The DBIR’.  This year was no exception albeit somewhat depressing as Chris again pointed out that most breaches are the result of sloppy operations, lax security and insecure applications.  Essentially security issues that we should have gotten past a long, long time ago but have not.

Architecting for Success

Who better to talk about success than a representative from the Jet Propulsion Laboratory (JPL), discussing how to develop spacecraft to explore the most inhospitable environments we know, outer space and planetary bodies.  Brian Muirhead was the keynote speaker on Wednesday and is the Chief Engineer for the Mars Science Laboratory, the group that designed and developed the various Mars exploration rovers.  He gave a great discussion on how to look out for problems and develop self-managing devices.  It was very interesting and, I am sure, an eye opener for many that we need to stop accepting the sloppy and messy solutions we get for handling cardholder data.

Internet of Things Keynote

The Thursday keynote was just a great time.  While there seemed to be very little directly relevant to PCI compliance presented by Ken Munro and an associate from Pen Test Partners, it was a fabulous time exploring the wonderful world of flawed technology from a tea kettle, to a refrigerator to a child’s doll.  In the case of the child’s doll, they removed the word filter database and therefore allowed the doll to say things that no child’s toy should say.

What was relevant to PCI was the ease with which these folks were able to reverse engineer firmware and software used by these devices.  It gave a lot of people unfamiliar with IoT and penetration testing in the room pause as to how seemingly sophisticated technology can be easily abused.

Cloud Security

While it was great to see Tom Arnold from PSC, the even better thing about this presentation was that Amazon provided an actual human being, in the form of Brad Dispensa, to talk about Amazon’s EC2 cloud.  While billed as a discussion on incident response, the session provided great insight into AWS’s EC2 service offering as well as the variety of new tools available to manage the EC2 environment and to provide auditors and assessors with information regarding the configuration of that environment.  The key takeaway from this session is that organizations using EC2 can provide everything needed for conducting a PCI assessment from their EC2 master console.

EMVCo

Brian Byrne from EMVCo gave a great 20 minute session on EMV.  The slide deck will be more valuable than the presentation because he had so much content to share and so little time to share it in.  Of note was his discussion of version 2.0 of Three Domain Secure, otherwise known as 3D Secure or 3DS.  While v1.0 will remain under the control of Visa, EMVCo has taken over management and development of the 3DS standard.  The new version is in draft and only available to EMVCo members, so this was the first time I had been able to see what the new version has to offer.  But because of the time constraint, I will need to wait for the slide deck to be published to know more.

PCI Quality Assurance Program

Brandy Cumberland of the Council provided a great presentation on the Council’s quality assurance program, with which all QSAs have become familiar.  I appreciated her discussion of James Barrow, who took over the AQM program after most of us wanted to kill his predecessor for creating one of the most brutal QA programs we had ever seen.  James’ efforts to make the AQM program more relevant cannot be overstated, as he took over a very troubled affair.  This was a bittersweet discussion, as James passed away right after last year’s Community Meeting and will be greatly missed by those of us who came to know and respect him.  Brandy took over the AQM program when James left the Council and has been doing a great job ever since.  She is possibly one of the best resources the Council has and does the AQM program proud.

Application Security at Scale

The last great session of the conference I saw was from Jeff Williams of Contrast Security.  The reason this session was great is that it discussed what application developers can do to instrument their applications not only for security, but also for operational issues.  He introduced us to interactive application security testing (IAST) and runtime application self-protection (RASP).  The beauty of this approach is that applications get security in the form of embedded instrumentation that produces actionable analytics, which then allow decisions to be made to respond to threats to these applications.  It sounds like an interesting approach and concept and I cannot wait to see it in action.

As always, it was great to see and catch up with all of my friends in Las Vegas at the PCI Community Meeting.  It was also great to meet a lot of new people as well.  I look forward to seeing all of you again next year in Orlando.

09
Sep
16

Level 3 Versus Level 4 Merchants

There seems to be a lot of confusion over these two merchant levels.  As such I thought I would take a quick moment to clarify them.

From the respective Web sites, here are the definitions for a Level 3 Merchant.

“20,000 to 1 million ecommerce Visa transactions annually” – Visa USA

“Any merchant with more than 20,000 combined Mastercard and Maestro e-commerce transactions annually but less than or equal to one million total combined Mastercard and Maestro e-commerce transactions annually” OR “Any merchant meeting the Level 3 criteria of Visa” – MasterCard

From the respective Web sites, here are the definitions for a Level 4 Merchant.

“Merchants processing less than 20,000 Visa ecommerce transactions annually and all other merchants processing up to 1 million Visa transactions annually” – Visa USA

“All other merchants” – MasterCard

The operative factor is eCommerce transactions.  Level 3 has always been about eCommerce.  It was specifically created to identify those small merchants that were predominately eCommerce focused.  That delineation is important because of the risk presented by card not present (CNP) payment transactions as well as the potential loss of sensitive authentication data (SAD) or cardholder data (CHD) from Web sites used for eCommerce.

However, where the confusion occurs is that both merchant levels end at 1 million total transactions from all payment sources (i.e., eCommerce, MOTO, card present, etc.).

The bottom line is that if your organization is conducting more than 20,000 payment transactions through your eCommerce payment channel, but your total number of payment transactions is less than 1 million, then you are a Level 3 Merchant.  Otherwise, your organization is a Level 4 Merchant.
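The bottom line above reduces to a simple two-condition test.  Here is a minimal sketch in Python of that decision rule (the function name and sample figures are my own, purely illustrative; it only applies to merchants under 1 million total transactions, since higher volumes fall into Levels 1 and 2):

```python
def merchant_level(ecommerce_txns: int, total_txns: int) -> int:
    """Classify a small merchant as Level 3 or Level 4.

    Level 3: more than 20,000 annual eCommerce transactions AND
    fewer than 1 million total annual transactions across all
    payment channels (eCommerce, MOTO, card present, etc.).
    Everything else under 1 million total transactions is Level 4.
    Assumes total_txns < 1,000,000; larger merchants are Level 1 or 2.
    """
    if ecommerce_txns > 20_000 and total_txns < 1_000_000:
        return 3
    return 4

# A merchant doing 50,000 eCommerce transactions out of 400,000 total
print(merchant_level(50_000, 400_000))   # Level 3

# A mostly card-present merchant with only 5,000 eCommerce transactions
print(merchant_level(5_000, 400_000))    # Level 4
```

Note that the eCommerce count is the operative factor: two merchants with identical total volume can land in different levels depending solely on how much of that volume flows through their eCommerce channel.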

Now we should all be on the same page.




