Archive for the 'PCI ASV' Category

31 Jul 21

PCI Dream Team LIVE! Is Coming In October

The PCI Dream Team will be appearing LIVE at the (ISC)2 Security Congress in Orlando this Fall, Monday, October 18 through Wednesday, October 20, 2021.   Our session is scheduled for Tuesday, October 19, at 11:45 AM ET/ 1545 UTC.

While we will be live at the conference, you can also attend the conference and our session virtually.  So other than training budget limitations, there is no good reason you cannot join us.

As usual, we will be taking questions live and via email at pcidreamteam AT gmail DOT com.  We also monitor Twitter if you use #pcidreamteam.

We are expecting our usual lively discussion of all topics PCI and other security standards if time allows.

We really are looking forward to physically seeing people at the conference.

08 Oct 18

2018 North American PCI Community Meeting Thoughts

It was an interesting time in Las Vegas this year.  Part of that was due to the fact that we were in Las Vegas.  But part of it was that the Community Meeting seemed to be devoid of its usual anticipation and expected pronouncements.  While there were announcements of various standard updates, these were well anticipated and not a surprise.  Some of the slide decks have been released, but others will not be available until the European Community Meeting is held in a few weeks.

While there were a number of good presentations this year, in my very humble opinion, the best session was the Assessor Session at the end of the meeting.  The good news this year was that a lot of QSAs and ISAs made sure to stick around for this session.  There were a number of good questions asked after the Council’s presentation, but I will wait for the Council’s transcript to be published before weighing in on those.

As in years past, the Council had a presentation at the start.  The following are highlights from that presentation.

AQM Program Highlights

As usual, the AQM team did a bang-up job pointing out common issues found in the various assessment types they review.

On the PA-DSS side of the ledger, a lot of PA-QSAs are having issues with requirement 5.1.6.b regarding application least privilege.  The Council clarified that what they are looking for in this requirement is proof that the application does not need to run as ‘root’, ‘administrator’ or some other default privileged account in order to function properly.
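
That kind of evidence is straightforward to capture at runtime.  Here is a minimal Python sketch, purely illustrative (the function name is mine and nothing here is prescribed by the Council), that records whether a process runs under a privileged account:

```python
import os

def runs_unprivileged(euid=None):
    """Return True when the effective user is not root (UID 0).

    Checks the UID the application actually runs under, rather than
    trusting the vendor's installation documentation.
    """
    if euid is None:
        euid = os.geteuid()  # POSIX only; Windows needs a service account check
    return euid != 0
```

For example, `runs_unprivileged(euid=0)` returns False, flagging an application that runs as root.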

For P2PE assessments, there have been issues regarding when a double length 3DES key can be used.  The Council explained that a double length 3DES key is only allowed when using derived unique key per transaction (DUKPT).  All other uses must be triple length keys to be in compliance with P2PE.
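
The double versus triple length distinction comes down to key size: a double-length (2-key) TDES key is 16 bytes, while a triple-length (3-key) TDES key is 24 bytes.  A small illustrative classifier (the function name is mine):

```python
def tdes_key_variant(key: bytes) -> str:
    """Classify a TDES key by its length in bytes.

    Double-length (2-key) TDES keys are 16 bytes (K1, K2); triple-length
    (3-key) keys are 24 bytes (K1, K2, K3).  Per the P2PE guidance above,
    16-byte keys are acceptable only when used with DUKPT.
    """
    if len(key) == 16:
        return "double"
    if len(key) == 24:
        return "triple"
    raise ValueError(f"not a TDES key length: {len(key)} bytes")
```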

Apparently, QSAs and their QA minders are totally missing what is meant by “describe how”.  When describing “how”, a QSA must describe all of the procedures used to determine that the requirement was satisfied as well as how those procedures prove the requirement was met.

QSAC QA manuals still are not covering topics such as evidence retention and destruction, security incident response plans and code of conduct policy.  The Council reminded everyone to make sure all topics in the QSA Qualifications Requirements document are covered.

Compensating controls were a continuing problem area and that should not be a surprise.  I am constantly fascinated when I receive a ROC for proof of PCI compliance performed by another QSAC and get to see what passes for a valid compensating control worksheet (CCW) at other firms.  Apparently ‘intent and rigor’ of the requirement and ‘above and beyond’ are foreign phrases to a lot of QSAs.  Never mind the fact that the controls used, tested and maintained are usually vague in description.  The Council pointed people to their Portal for remedial training of QSAs that cannot comprehend writing a CCW.  I have written a number of posts on compensating controls.  If you want to write good CCWs, start here for the most current post and it will point you to prior posts.

The Council got some interesting questions from QSAs over the year.  The first one is one that a lot of clients ask us, “Do you really have to come onsite?”  Yes, an onsite visit by the QSA is actually required.  However, how long a QSA needs to be onsite can vary from as little as a couple of days for a long-time client to a week or more for a new client.  Onsite visits can be supplemented by video meetings when needed.  That is not unusual these days, when a client has worldwide operations and not everyone is located at headquarters or available when the QSA is onsite.

The other question was regarding ROC and AOC dates.  How people keep messing these up is beyond me, but as with the CCWs, I see a lot of ROCs and AOCs out of other firms where the dates on the documents are not consistent.  Basically, the last step of any QSAC’s document finalization process should be to set all of the dates in the ROC and AOC to match.  That way you will avoid this problem.
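
That finalization check is trivial to automate.  A minimal sketch, with hypothetical document names, of the kind of consistency test a QSAC could run before issuing deliverables:

```python
def dates_consistent(doc_dates: dict) -> bool:
    """Return True when every deliverable carries the same date.

    doc_dates maps a document name (e.g. 'ROC', 'AOC') to the date
    string recorded in it; the dates all match when only one distinct
    value exists.
    """
    return len(set(doc_dates.values())) <= 1
```

For example, `dates_consistent({"ROC": "2018-09-30", "AOC": "2018-10-01"})` returns False and would stop the package from going out.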

There was a brief discussion of the Software Security Standard (S3) that will replace the PA-DSS.  Most of the discussion revolved around the proposed timeline.  The standards themselves will be published sometime before year end.  Reporting materials will be published around mid-2019 with training commencing in the Fall of 2019.  The big deadline is that PA-DSS Reports On Validation (ROV) will only be accepted through mid-2020 requiring all reports going forward to be under the S3.  That will mean that by mid-2022, all PA-DSS validated applications will move to “Acceptable for Pre-Existing Deployments”.

Finally, SSL and early TLS got a discussion.  Somehow the word has not gotten around that if a company still uses SSL and/or early TLS, there must be a compensating control developed for the relevant requirements since Appendix A2 no longer exists in v3.2.1 of the DSS.  They also reminded everyone that having SSL or early TLS is NOT an automatic fail.  However, vulnerability scans will have to have explanations developed to justify the use of the protocols as well as to document what is done to mitigate their use.
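
The protocol cutoff itself is easy to express in code.  An illustrative classifier (the names are mine, and the set reflects my reading of the SSL/early TLS guidance, where TLS 1.1 is the accepted floor and TLS 1.2+ is recommended):

```python
# Protocols treated as "SSL and early TLS" under the migration guidance.
EARLY_PROTOCOLS = {"SSLv2", "SSLv3", "TLSv1.0"}

def needs_compensating_control(protocol: str) -> bool:
    """True when a detected protocol falls under the SSL/early TLS rules
    and therefore needs a CCW plus a scan justification."""
    return protocol in EARLY_PROTOCOLS
```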

Card Production Security Assessor Program

If you were not aware, the PCI SSC took over the various card brands’ card production programs and created a single common program similar to what the Council did with the Data Security Standard back in 2006.

In response, the Council is creating a new assessor program in 2019.  Card Production Assessor Companies (CPAC) will not need to be existing QSACs, nor will assessors need to be QSAs.  The new assessor training program for this standard will be rolled out next year.  The Council did note that existing card production assessors will be somehow recognized by the new program but did not specify how that recognition would be implemented.

As with QSACs and QSAs, the Council will maintain a database of CPACs and qualified card production assessors.

PIN Assessor Program

As with card production, the Council has also been responsible for PIN standards for a few years now.  As a result, the Council is developing a program for creating PIN Assessor Companies and PIN Assessors.

There will be no need for the PIN Assessor Company to be a QSAC nor will assessors be required to be QSAs.  This program will also start in 2019.

Global Executive Assessor Roundtable (GEAR)

This is a new group that was established this year.  Its role is to provide a direct communication channel between the PCI SSC and 20 qualified security assessor companies’ (QSAC) senior executive leadership.  This group met for the first time a few days before the start of the Community Meeting.  Each member of GEAR serves for a two-year term.

The 20 QSACs on the GEAR are:

  • @sec
  • Advantio
  • Coalfire
  • Control Case
  • Foregenix
  • IBM Security
  • isec
  • K3DES
  • nccgroup
  • Protiviti
  • PSC
  • RSM
  • Security Metrics
  • Schellman
  • SISA
  • Sysnet
  • Trustwave
  • UL
  • usd
  • Verizon

As usual, it was great catching up with everyone and meeting new Guru fans.  I really appreciate all of the great comments about the blog.  Even though I see the statistics for the blog, it still amazes me how many people read it and appreciate it particularly when you meet so many of them in person.  It is very humbling.

Hopefully I will see you all next year in Vancouver.

29 Sep 17

What Are You Really Interested In?

As a QSA, we hear this comment all of the time.

“PCI is all about compliance, not security.”

The implication being that the person talking is interested in actually securing their environment not just being PCI compliant.

Yet as the conversation goes on, we get into esoteric discussions regarding scope and how scope can be minimized.  Not necessarily a bad thing, but as these discussions continue, an underlying theme becomes apparent.

This conversation eventually leads to the QSA asking, “What are your drivers that are making you so concerned about minimizing scope?”

The inevitable answer is, “Because, we want to minimize the cost of and/or difficulty in implementing (in no particular order) information security, increasing information security personnel, how many devices we vulnerability scan and penetration test, critical file management tools, anti-virus licenses, devices needing log aggregation and analysis, [insert your security tool/product/device/appliance/widget here].”

It is at that point it becomes painfully obvious that the organization is not at all interested in security.  In fact, they do not give a damn about security.  Their only interest is in checking off the PCI compliance box and moving on to the next annoying compliance checkbox on their list.

I am sure a lot of you are questioning, “How can you be saying this?”

Because, if the organization were truly interested in security, all of the things they mention in their minimization discussion would already be installed in their production environment, if not QA and test environments.  That is right.  They would already be installed and not just on the PCI in-scope stuff.  It would already be installed everywhere in those environments.

Why?

Because all of these security tools and methods are part and parcel of a basic information security program that follows information security “best practices”.  They are not special to PCI; they are required by any successful information security program, whether it is built around HIPAA, FFIEC, FISMA, HITRUST, etc.

People seem to think that the PCI SSC and the card brands came up with the PCI DSS requirements by arbitrarily pulling the requirements out of thin air.  In fact, I have had people insinuate that the PCI standards are just there for the banks to be mean to merchants and extract more money from them.

But in actuality, the PCI standards come from a lot of recognized sources including the US National Institute of Standards and Technology (NIST) security standards and guidance, US Department of Defense (DoD) security standards and guidance, “lessons learned” from the card brands’ cardholder data breach forensic examinations, and information security professionals sharing their knowledge of the minimum, basic “best practices” required to secure data.

But the key words here are ‘minimum’ and ‘basic’.

Because guess what?  If you want true security (remember that thing you supposedly wanted when we started), then you have to go beyond the PCI DSS requirements.  Hear that, people?  If you want true security, your organization must go BEYOND the PCI DSS requirements.  Organizations are complaining about doing the basics.  Imagine what their complaints would be like if they had to do true security.  They would be throwing a tantrum that would be easily heard around the world.

Want actual proof that organizations are not doing the basics?

Read the Verizon Data Breach Investigation Report (DBIR) or any of the dozens of data breach reports issued annually by forensic analysis firms.  They all read the same; year after year after nauseating year.  Organizations cannot consistently execute even the basic security requirements specified in any security standard.  Even more disheartening is the fact that it is the same vulnerabilities and mistakes that are the root cause of the vast majority of breaches.

QSAs still get complaints from organizations about the PCI DSS being too difficult and costly to implement and maintain.  Yet these same organizations have the gall to say that PCI is NOT about security.

So, before you go and tell your QSA that PCI is all about compliance, think long and hard about that remark and why you are saying it.  Odds are you are saying it to look good, make a good impression with your QSA, show them that you are a true security professional and that your organization wants to be secure.

Think again.  The truth will eventually come out.  One way or another.

30 Sep 16

2016 North American PCI Community Meeting

It was a hectic week out in Las Vegas at the Community Meeting this year.  I wish I had more time this year to just hang out with everyone, but I was in the middle of a number of assessments that needed to get done, so I was working at night and attending sessions during the day.

By the time you read this, the slide decks from the sessions will have been posted on the Council’s Web site.  So all of you that attended will be able to download those presentations.  You go to the link provided in the program guide, provide your name, organization name, email address and the password from the program guide (ve4eqepR) and you are in.

The Council tried the 20 minute “TED Talk” format again with the Wednesday sessions.  A number of the sessions I attended could have easily used an extra 10 minutes if not a complete hour.  I know the Council is trying to move things along and get a lot of information covered, but trying to discuss topics like “the cloud” or EMV standards just cannot be properly accomplished in 20 minutes.  I do not care how good the speaker is or how organized the presentation.

Here are some of the more notable highlights.

The Assessor Session Is Back

Possibly the most anticipated session of the Community Meeting this year was the return of the Assessor Session after being missing for two years.  But unlike previous years where this session occurred before the start of the Community Meeting, the return of the Assessor Session was moved to the end of the Community Meeting.  I heard a number of complaints throughout the week from assessors about being at the end of the meeting.  Yet when Thursday lunch came around, there were a lot of QSAs, ISAs and ASVs that adjusted their travel schedules (Guru included) to attend this session.

While I originally agreed with people that moving the Assessor Session to the end was not a good idea, the more I have thought about it, the more I think it was better at the end.  That way assessors can get questions about topics that came up during the meeting answered while we are all together.  I know we all want to get home, but I think the Assessor Session offers more value to all of us at the end.

On the not so good side, the Council chose to use up an hour and 10 minutes to present a variety of topics, some of which took way too long to discuss.  But the larger question was why was this material not presented during the main conference?  Not only did all of the meeting attendees miss out, but there were people that did not get their questions asked.  I am also sure that running long discouraged a lot of people from asking questions as well.

That said, there were a number of good questions asked during this session and the Council rewarded five people with large PCI SSC coffee mugs for their “good” questions.

One question though really created a stir.  I will address that question regarding multi-factor authentication (MFA) as a separate post to be published later.  However I will say this about this discussion.  The Council really needs to go back and re-think their position on MFA if what they said is accurate.

The Council was asked about SAQ A and where it is headed.  The concern in the assessor community is that the mechanism that issues/controls the iFrame/redirect needs protection.  However the changes to SAQ A for v3.2 did not seem to address this obvious risk.  Based on how the question was answered, I am guessing that the hosting community is trying to keep SAQ A as simple and easy as possible regardless of the risk.

Another area that the Council agreed to review was the change to requirement 3.2 in the ROC Reporting Template.  In v3.2 of the template you can no longer mark those requirements as Not Applicable; however, it was pointed out that an ‘NA’ is still allowed in the SAQ D.  The reason for seeking this clarification was related to past comments from the Council to follow SAQs for P2PE (SAQ P2PE) and outsourced eCommerce (SAQ A) when filling out a ROC for merchants with these solutions.  It was pointed out that neither of these SAQs has requirement 3.2 in them, so how is a QSA/ISA supposed to respond to it in the reporting template if it cannot be marked as ‘NA’?

Understanding The Current Data Breach Landscape (aka Verizon DBIR Report Discussion)

When Verizon sends out Chris Novak, you know you will get a great presentation on the data breach incident report aka ‘The DBIR’.  This year was no exception albeit somewhat depressing as Chris again pointed out that most breaches are the result of sloppy operations, lax security and insecure applications.  Essentially security issues that we should have gotten past a long, long time ago but have not.

Architecting for Success

Who better to talk about success than a representative from the Jet Propulsion Laboratory (JPL) talking about how to develop spacecraft to explore the most inhospitable environment we know, outer space and planetary bodies.  Brian Muirhead was the keynote speaker on Wednesday and is the Chief Engineer for the Mars Science Laboratory, the group that designed and developed the various Mars exploration rovers.  He gave a great discussion on how to look out for problems and develop self-managing devices.  Very interesting and I am sure an eye opener for people that we need to stop accepting the sloppy and messy solutions we get for handling cardholder data.

Internet of Things Keynote

The Thursday keynote was just a great time.  While there seemed to be very little directly relevant to PCI compliance presented by Ken Munro and an associate from Pen Test Partners, it was a fabulous time exploring the wonderful world of flawed technology from a tea kettle, to a refrigerator to a child’s doll.  In the case of the child’s doll, they removed the word filter database and therefore allowed the doll to say things that no child’s toy should say.

What was relevant to PCI was the ease with which these folks were able to reverse engineer firmware and software used by these devices.  It gave a lot of people unfamiliar with IoT and penetration testing in the room pause as to how seemingly sophisticated technology can be easily abused.

Cloud Security

While it was great to see Tom Arnold from PSC, the even better thing about this presentation was the fact that Amazon provided an actual human being, in the form of Brad Dispensa, to talk about Amazon’s EC2 Cloud.  While billed as a discussion on incident response, the session provided great insight into AWS’s EC2 service offering as well as the variety of new tools available to manage the EC2 environment and also provide auditors and assessors with information regarding the configuration of that environment.  The key take away from this session is that organizations using EC2 can provide everything needed for conducting a PCI assessment using their EC2 Master Console.

EMVCo

Brian Byrne from EMVCo gave a great 20 minute session on EMV.  The slide deck will be more valuable than the presentation because he had so much content to share and so little time to share it in.  Of note was his discussion of version 2.0 of Three Domain Secure, otherwise known as 3D Secure or 3DS.  While v1.0 will remain under the control of Visa, EMVCo has taken over management and development of the 3DS standard.  The new version is in draft and only available to EMVCo members, so this was the first time I had been able to see what the new version has to offer.  But because of the time constraint, I will need to wait for the slide deck to be published to know more.

PCI Quality Assurance Program

Brandy Cumberland of the Council provided a great presentation on the Council’s quality assurance program with which all QSAs have become familiar.  I appreciated her discussion of James Barrow, who took over the AQM program after most of us wanted to kill his predecessor for creating one of the most brutal QA programs we had ever seen.  James’ efforts to make the AQM program more relevant cannot be overstated, as he took over a very troubled affair.  This was a bittersweet discussion, as James passed away right after last year’s Community Meeting and will be greatly missed by those of us that came to know and respect him.  Brandy took over the AQM program when James left the Council and has been doing a great job ever since.  She is possibly one of the best resources the Council has and does the AQM program proud.

Application Security at Scale

The last great session of the conference I saw was from Jeff Williams of Contrast Security.  The reason this session was great was that it discussed what application developers can do to instrument their applications for not only security, but also for operational issues.  He introduced us to interactive application security testing (IAST) and run-time application self-protection (RASP).  The beauty of this approach is that applications get security in the form of embedded instrumentation that results in actionable analytics, which then allow decisions to be made to respond to threats to these applications.  It sounds like an interesting approach and concept and I cannot wait to see it in action.

As always, it was great to see and catch up with all of my friends in Las Vegas at the PCI Community Meeting.  It was also great to meet a lot of new people as well.  I look forward to seeing all of you again next year in Orlando.

10 Jun 16

Is The PCI DSS Even Relevant Any More?

First the National Retail Federation (NRF), then bloggers.  Organizations and people are piling on the PCI SSC and its standards all because of the United States Federal Trade Commission’s (FTC) fact finding project.  Seems like PCI is now a bad three letter word.  But with the changes that have been implemented or will soon be implemented, I am starting to wonder about the relevance of the PCI DSS.  So I thought I would explore these topics and explain what has led me to that conclusion.

Ever since the FTC announced their little fact finding mission, I have consistently said that the FTC is late to the party.

Why do I think the FTC is late?

The FTC’s fact finding efforts are, I am sure, in response to the Target, Michaels, Home Depot, etc. data breaches which resulted in tens of millions of payment card accounts being exposed and potentially used for fraudulent purposes.  Remember, they are a governmental body, so taking action can take a bit of time, in this case at least three years and longer than most people would have desired.  But they eventually got around to it.  While this fact finding effort is a valid way to get up to speed on a problem, the trouble is that the threat landscape has changed since those notorious breaches occurred and the FTC got its act together.

What in the threat landscape has changed?

The vast majority of mid-sized and large retailers have implemented or are in the process of implementing point-to-point encryption (P2PE) or end-to-end encryption (E2EE) and tokenization solutions to minimize their PCI scope to only the point of interaction (POI), otherwise known as the card terminal.  As a result, the threat of large scale breaches at these merchants is near zero, or will be within the next 12 to 18 months (based on my knowledge of a large number of such efforts).  The reason is that these merchants’ point of sale (POS) and other systems will no longer have access to cardholder data (CHD) or sensitive authentication data (SAD).

How can the threat be near zero?

With P2PE/E2EE and tokenization, scope is limited to only the POI, and the threat there is very, very low because of how the POI must be implemented to work with P2PE/E2EE and/or tokenization.  I am not going to discuss the security features of these solutions in detail so as not to tip the hand of those organizations implementing them.  Let me just say that there is a lot of information that must be loaded into the POI in order to swap out terminals.  Even then, there are additional controls involving the registration of the device by the merchant and/or service provider that preclude terminal swaps without generating some form of alert.

The one threat that does still remain is the use of an overlay for skimming cards.  But that risk varies from POI vendor to POI vendor and even by POI model within a vendor.  And it is not like vendors have not taken notice of the overlay problem.  Vendors have gotten a clue and are changing the design of their POI to make them as difficult as possible to fit with an overlay.  I have a client that went with a POI that has various angles, long swipe tracks, LED lights and other features that would make an overlay not only very expensive to engineer but also very difficult to make appear seamless to customers and clerks.  Over time I expect to see all POI manufacturers adopt strategies to minimize the ability to use overlays.

The result of all of this is that merchants are no longer the risk (if they even present a risk) they were two or more years ago.

So who or what does that leave at risk?

ECommerce Web sites are still a huge problem.  EMV as it exists today does nothing to stem the problem of online fraud.  Even if a merchant has outsourced eCommerce, they still have to manage that environment as well as deal with the chargebacks and disputes that come from eCommerce card transactions.  I have heard rumors of solutions that are coming to address eCommerce, but I have yet to see any formal announcements of those solutions.  So for the foreseeable future, merchants with an eCommerce presence will still have to address some form of PCI assessment for that environment.

Any merchant that has not gotten on the P2PE/E2EE and tokenization bandwagon.  All merchants should be getting POI that encrypt and/or tokenize at the swipe or dip of a customer’s card.  Adopting such solutions will leave the merchant with only having to comply with requirements in 9.9 and 12.  I know for some merchants that will mean an investment, but the payoff is extremely reduced PCI scope and effectively taking almost all of the risk out of card payments.

The organizations that end up with a huge target on their backs are any service providers, transaction processors, issuers or financial institutions that have CHD and/or SAD stored in their files and/or databases.  An unfortunate fact of life is that transaction processors, issuers and financial institutions are always going to have to have some amount of CHD/SAD in their files and databases because of the nature of their business.  It is these organizations where the full on (i.e., Report On Compliance or ROC) PCI DSS assessment will never go away.

For merchants that have moved to P2PE/E2EE/tokens, I could see a move to an annual self-verification that those solutions are still implemented and functioning as designed.  I could additionally see that, every three years or so, the card brands requiring an independent assessment by a QSA/ISA that the controls for P2PE/E2EE/token solutions are still in place and functioning correctly.  The reason for independent verification is that changes get made and those changes might affect the environment making it less secure.  For merchants not using P2PE/E2EE/tokens, I would think the current SAQs and ROC will remain in place with an annual assessment required.

Will other PCI standards be marginalized or disappear?

The PA-DSS will never leave us.  Software developers need to develop secure code and those service providers, transaction processors, issuers and financial institutions that store CHD/SAD need applications that do that securely, so there is a built in constituency for the PA-DSS.  ECommerce solutions are also still going to need PA-DSS validation.  But regardless of whether P2PE/E2EE and tokenization are implemented, any application potentially dealing with CHD/SAD will need to be assessed under PA-DSS to ensure that any CHD stored is stored securely and is erased securely.  Then there are the unknowns of the future.  You never know what might come along in the future, so there is always a possibility that some solution might need to securely store CHD or other payment related information.  The bottom line is that I find it very hard to believe that the PA-DSS could ever be dropped.

The PTS standard will also not disappear because those POI need to be validated to handle CHD/SAD securely and work properly regardless of P2PE/E2EE solutions.  The PTS is the only standard that is a card brand requirement, not a PCI DSS requirement.  It is the card brands that demand merchants use only PTS validated POI and I do not see that requirement going away when the POI is going to become the remaining target at merchants.

The ASV standard will not go anywhere as there will still be eCommerce solutions that will require vulnerability scanning.  Most merchants will implement eCommerce solutions that minimize their PCI scope using a redirect or iFrame.  Although I can see it coming that even using those solutions will still require the merchant’s eCommerce site, now deemed as out of scope, to be scanned for vulnerabilities.  The reason is that the invocation point of the redirect or iFrame is at risk of modification by an attacker.
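
One way to watch that invocation point is a simple content-integrity check on the page that issues the redirect or iFrame.  An illustrative sketch (the names are mine, and PCI does not mandate this particular mechanism):

```python
import hashlib

def invocation_point_changed(page_bytes: bytes, known_sha256: str) -> bool:
    """Compare the served payment page against a known-good SHA-256 hash.

    Any drift in the markup that issues the redirect or iFrame warrants
    review, since that is exactly where an attacker would tamper.
    """
    return hashlib.sha256(page_bytes).hexdigest() != known_sha256
```

A merchant would record the baseline hash at deployment and re-check the served page on a schedule, alerting on any mismatch.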

One standard I do believe will eventually go away is P2PE.  The reason is that there is very little to gain with a P2PE versus an E2EE solution.  Both solutions are essentially the same; the only additional work required for E2EE is documenting that E2EE has been implemented appropriately, submitting that documentation to the client’s acquiring bank, and getting the bank to agree to the PCI scope reduction.  As a result, I believe that the P2PE standard will slowly and quietly disappear into the night, as the cost of going through the assessment process along with the Council’s filing fees just cannot be justified by a lot of influential vendors such as Verifone and First Data.

There is my rationale for where I think things are hopefully headed.  Only time will tell if the rest of the world sees things the same way.

01 Apr 16

The Council Speaks About v3.2

If you missed it, do not feel bad.  I too had to be told by friends and colleagues that the PCI SSC was having a Webinar on Thursday, March 31, to discuss the upcoming changes to the PCI DSS and PA-DSS as well as changes to other areas as a result.  Apparently the Webinar was announced in the March issue of the QSA newsletter.

To begin their presentation, the Council made a big deal out of explaining why they are dumping the three year update cycle.  The bottom line about this is that they feel the PCI DSS and PA-DSS are mature and therefore any future updates will be evolutionary not revolutionary as they have been in the past.  As a result, we can expect more minor changes more often.  Much like when the PCI DSS started out and we quickly got v1.1 followed by v1.2.

PCI DSS v3.2

The real piece of news here was that two-factor authentication (TFA) is going to be required for all administrative access to the cardholder data environment (CDE) regardless of whether that access is from the internal network or a remote network.  I am sure this is in response to the number of breaches that involved administrators being spear phished.

Speaking of TFA, the Council indicated that they are going to switch terminology from “two-factor” authentication to “multi-factor” authentication (MFA).  However, they were very clear when they discussed this change in terminology that they still mean the three factor model of something you know, something you have, and something you are.  Their rationale on this change is to align the DSS with industry terminology.  In the Q&A they got a lot of questions on this change as most security professionals said that clients would view MFA as including two sets of credentials versus TFA which has truly different factors.  So we will see if the MFA decision stands when the new standard is released.

In addition, the Council outlined some other key changes we can expect to see in the latest version of the DSS.  These are:

  • Two new Appendices are being added to the PCI DSS. The first discusses the SSL/early TLS issues.  The second incorporates the Designated Entities Supplemental Validation (DESV) requirements into the DSS.
  • Allowing the display of the PAN to be more than just the first six digits and the last four digits to align the PCI DSS with the coming changes to ISO 7812 which will increase the issuer identification number (IIN) from six digits to eight digits.
  • Adding a number of additional requirements for service providers including: documentation of cryptographic architecture, detection/reporting on critical security control systems, penetration testing to confirm segmentation every six months, establishment of a formal PCI compliance program, and quarterly confirmation that personnel are following all security policies, standards and procedures.
  • Periodic testing that all change control policies, standards and procedures are in place and operating as designed. This is the first of many business as usual (BAU) requirements that will be added to the PCI DSS.
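As an illustration of the PAN display change above, a masking routine that keeps the leading IIN digits and the last four might look like the following. This is a hedged sketch; `mask_pan` is a hypothetical helper, not anything defined by the PCI DSS itself:

```python
def mask_pan(pan: str, lead: int = 6, trail: int = 4) -> str:
    """Mask a PAN for display, keeping the first `lead` and last `trail` digits.

    lead=6 matches the classic "first six, last four" rule; the ISO 7812
    change discussed above would move the IIN to eight digits (lead=8).
    """
    digits = pan.replace(" ", "")
    if len(digits) <= lead + trail:
        return digits  # nothing left to mask
    return digits[:lead] + "*" * (len(digits) - lead - trail) + digits[-trail:]

print(mask_pan("4111 1111 1111 1111"))     # first six / last four
print(mask_pan("4111 1111 1111 1111", 8))  # eight-digit IIN
```

Note that masking for display (requirement 3.3) is a separate control from rendering stored PAN unreadable (requirement 3.4); this sketch only addresses the former.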

More On SSL/Early TLS

The Council gave a bit more information regarding why they extended the deadline on SSL and early TLS to June 30, 2018.  Not surprisingly, the reason for the extension was push back from a variety of sources that found the 2016 deadline too short to complete conversion.

I know from my own experience, I have a few clients that have contracts that do not allow them to make such changes without consultation with every customer impacted.  In one case, it was going to take almost nine months just to consult with all of their impacted customers and then another seven months to implement the changes into production.  In the perfect scenario, they would have cut over around September 2016, but they said past experience indicated a more likely date would have been July 2017 at the earliest.

The presenter reiterated that service providers must meet the June 30, 2016 deadline.

Also discussed was how ASVs are supposed to deal with SSL and early TLS issues.  Until June 30, 2016, if an ASV encounters SSL or early TLS vulnerabilities, the ASV must obtain the mitigation plan or a letter from their customer attesting that a mitigation plan has been developed and the date when the customer will have addressed the vulnerabilities related to SSL and/or early TLS.  The ASV does not need to assess the mitigation plan as the assessment of the mitigation plan is something the organization’s QSA must perform as part of the assessment process.

The final key point on this topic, one the Council could not stress enough, was that just because the deadline has been pushed out is no justification for an organization to wait until the last minute to address these critical vulnerabilities.  If an organization can meet the June 30, 2016 deadline, then it should meet that deadline.  If it needs until December 31, 2016 to convert, then it needs to mitigate the risk until December 31, 2016 when it can drop SSL and early TLS.  But waiting for the sake of waiting because the deadline is in 2018 is unacceptable and needs to be called out as ‘Not In Place’ by QSAs.

Related to this discussion was the fact that vulnerability management still needed to be addressed through the mitigation.  So if new vulnerabilities to SSL and/or early TLS are discovered while the organization is remediating their implementations of SSL/early TLS, the organization must still comply with requirements 6.2 and 11.2.

PA-DSS v3.2

No news is good news here.  There will be little change to the PA-DSS standard other than to align it with PCI DSS v3.2.

However two significant changes are coming to an application’s Implementation Guide (IG).

The IG will now be required to address debugging logs that contain PAN data.  Those debugging logs will be required to be protected, debugging will need to be immediately disabled once it is no longer needed and the debugging log data must be securely deleted as soon as it is no longer needed.

The IG will also be required to discuss the secure implementation of patches and updates to the application.

PA-DSS v3.1 dealt with the SSL/early TLS issue, so the Council felt that there would be no changes regarding that topic.  That said, they did address the question as to whether or not TLS v1.1 is considered secure and laid out how TLS v1.1 needed to be configured to be secure.  That configuration included:

  • Disable weak ciphers and cipher suites such as MD5, SHA-1 and RC4.
  • Use sufficient key sizes.
  • Prevent fallback to SSL or TLS v1.0.
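For example, the three configuration points above map roughly onto Python's `ssl` module as follows. This is an illustrative sketch only (it assumes Python 3.7+ for `minimum_version`); it is not configuration guidance from the Council:

```python
import ssl

# Illustrative client-side TLS context following the bullets above.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Prevent fallback to SSL or TLS 1.0 (per the guidance, TLS 1.1 is the floor;
# TLS 1.2 would be the better choice where both endpoints support it).
context.minimum_version = ssl.TLSVersion.TLSv1_1

# Disable weak ciphers and suites (RC4, MD5-based MACs, anonymous/null),
# preferring ECDHE key exchange with AES, which also addresses key sizes.
context.set_ciphers("ECDHE+AESGCM:ECDHE+AES:!RC4:!MD5:!aNULL:!eNULL")
```

The same three points apply to server configurations (web servers, load balancers, appliances), each of which has its own syntax for pinning protocol versions and cipher suites.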

AQM Update

The Council indicated that the PCI DSS v3.2 and the Report On Compliance (ROC) reporting templates will be released simultaneously for the first time.  Timing for these documents will be late April 2016.  No specific date was provided.

On the PA-DSS side, the Council stated that the v3.2 Report On Validation (ROV) reporting template and the standard will be released in May 2016.  Again, no specific date was provided.

Cutover to v3.2 for both standards was discussed, with the PCI DSS cutover being the more specific.  PCI DSS v3.2 will go active upon release, with v3.1 sunsetting in October 2016 on whatever day matches the release date.  Cutover and sunsetting for PA-DSS will be announced with the release of the v3.2 standard.  Both standards and reporting templates can be used immediately, but we were reminded that everyone must cut over by the relevant sunset dates.

The Council also indicated that any relevant v3 FAQs will also be updated when the new standards are released.

ROC/ROV Personalization

The final point discussed under the AQM banner was the personalization of the ROC and ROV reporting templates by QSACs and PA-QSACs.  According to the presenter, the Council is hearing complaints from banks and the brands about the “over personalization” of ROC and ROV reports.

The Council stated that they understood the desire of QSACs and PA-QSACs to put their logos on the reports as well as making other “minor” changes to make the reports reflective of their organization.  However, banks and the card brands have been complaining that some of the personalization done had made the reports different enough from the original templates as to make them difficult to quickly review and process.

As a result, the Council has felt it necessary to issue guidelines on what personalization of the ROC and ROV templates is allowed.  Under these new guidelines:

  • Adding a title page to the report templates is allowed.
  • Adding a company’s logo to the report header is allowed.
  • No changes are allowed to any of the reports footers.

If you did miss this Webinar, the Council stated they were recording the session and it will be available on their PCI Portal sometime in the next few days.

05
Jan
16

Unsupported Operating Systems And Applications

One of our QSAs accidentally had their QSA certification lapse and had to go back through in-person QSA training. As a result, all of us in the PCI practice got an opportunity to get caught up on the latest and greatest guidance that the PCI SSC is passing along in their current QSA training. Even though QSAs and ISAs have to go through re-certification training and testing annually, having people go through the in-person training is the only way in some cases to get insight into the latest thinking of the Council.

One of the areas we specifically asked the person to ask their PCI trainer about was unsupported operating systems (OSes) and applications. In the past, such unsupported environments were considered automatically non-PCI compliant because of the ASV automatic failure rules documented in the ASV Program Guide v2.0. As a result, most QSAs constantly get push back from some clients when we encounter unsupported OSes and/or applications. However, we were shocked to find out from our colleague that the Council is no longer advising QSAs and ISAs to automatically mark as non-PCI compliant unsupported OSes and application software unless they are externally facing.

Now before you go off telling management that expensive upgrades are no longer necessary for internal systems and yelling “Alleluia” to the PCI Gods, there are, as you should expect, some caveats to all of this.

First, this is not the Council condoning the use of unsupported OSes and application software. The Council will still tell you that organizations should be using current and supported OSes and application software. This is merely a recognition that upgrades to a supported environment are not always an option in all cases. As a result, organizations might only be able to use unsupported operating systems and applications given hardware and/or customization constraints.

And just so we are all on the same page. Externally facing unsupported OSes and/or application software is still an automatic PCI compliance failure per the latest version of the ASV Program Guide.

Second, in order to continue to use unsupported OSes and applications, your organization will have to create compensating control worksheets for relevant PCI DSS requirements. The first problem with compensating controls is that the controls must go “above and beyond” the controls required by the PCI DSS. So any controls you use to compensate for your unsupported environment must either be not required by the PCI DSS or must go beyond the stated PCI DSS requirements. For example, white listing of installed applications is not a PCI DSS requirement, so that can be used as an effective control. An example of going above and beyond is doing near real-time monitoring of log data because log data is only required to be reviewed daily. For more on writing compensating controls, see my post on the subject.

Which brings up an interesting dilemma depending on the unsupported environment. As a prime example, developing a compensating control for Windows 2000 or Windows ME is probably not going to be possible no matter how many controls you can document in the worksheet. The primary issue that makes this impossible is what those older operating systems do to a domain in order to join it. The resulting downgrades in security create a litany of issues that no amount of compensating controls will be able to address.

Which points out that just because you make an attempt at compensating controls does not mean that effort will result in something effective or even acceptable to your QSA/ISA. All of those compensating controls for all of the requirements must be in place, operating as designed and assessed as part of your PCI assessment. This is not something you can just toss together at the last minute and hope it will pass muster. As a result, you need to be prepared to admit that there will be instances where the older OSes and/or applications just cannot be compensated for no matter how many other controls you think you can implement.

Third, if your organization is going to use unsupported OSes and/or application software, then your organization is going to have to mitigate the risks of this practice. So what mitigations would a QSA/ISA expect to see? Here are a few thoughts.

  • Severely locking down the OS. This is typically done by a utility that white lists the OS and applications on the system. If anything tries to install on the system, it is stopped and an alert is generated.
  • Enabling the generation of all possible log data by the unsupported OS and/or application. Essentially logging all activity from the unsupported OS and/or application. All of this log data feeds the next bullet.
  • Conducting near real time analysis of all log data produced by the unsupported OS and/or application. This will require the use of a security information and event management (SIEM) solution configured with rules looking for anomalies related to threats to the unsupported OS and/or application. And I can hear people asking now, what are the anomalies I should be looking for? See the next bullet.
  • Identification of new threats to the unsupported OSes and/or applications. Threat identification can come from vendors of the unsupported OSes and/or applications as well as from sources such as US CERT, anti-virus vendors and other recognized threat sources. And this is not going to just be some monthly, quarterly or other “periodic” exercise, this is going to have to be an active daily exercise and you will need to prove that it is conducted as such.
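A minimal sketch of the near real time log analysis bullet above might look like the following. The signature names and regexes are hypothetical examples, not rules from any real SIEM product; a real deployment would be a SIEM, not a script:

```python
import re

# Hypothetical anomaly signatures for an unsupported, locked-down system.
SIGNATURES = {
    "blocked_executable": re.compile(r"whitelist violation", re.I),
    "new_service_installed": re.compile(r"service .+ installed", re.I),
    "failed_admin_logon": re.compile(r"logon failure.+administrator", re.I),
}

def scan_line(line, alerts):
    """Append (signature_name, line) to alerts for every rule that matches."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(line):
            alerts.append((name, line))

alerts = []
for line in [
    "2016-01-05 02:14 whitelist violation: c:/legacy/evil.exe blocked",
    "2016-01-05 02:15 user jdoe logged on",
]:
    scan_line(line, alerts)
# alerts now holds one hit, for the whitelist violation line
```

The point of the sketch is the workflow, not the patterns: the rule set has to be fed continuously from the daily threat identification exercise described in the last bullet.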

And finally, do not bother to go through some sort of Rube Goldberg process of bizarre, twisted and convoluted logic you think will get you a pass. There is nothing worse than sending your QSA/ISA through some sort of circular logic that in the end never gets your unsupported OSes and/or applications any closer to being protected than when you started. I have encountered too many instances of a lot of words, pages and diagrams that have no meaning for PCI compliance other than being a lot of words, pages and diagrams all in the hope of baffling the QSA/ISA with a lot of words, pages and diagrams.

All we as QSAs and ISAs ask is that you be intelligent and judicious in what you choose not to upgrade or update.

16
Apr
15

ASV Guidance For SSL/TLS Vulnerabilities

Hidden by all of the news about v3.1 of the PCI DSS being published, is a notice that was sent to all PCI approved scanning vendors (ASV) from the PCI SSC regarding how to handle SSL and “early TLS” vulnerabilities.

In regards to the “early TLS” comment, the Council did define the term by referencing everyone to NIST SP800-52 rev1. That NIST document essentially tells the reader that while TLS 1.1 is allowed, whenever possible, TLS 1.2 should be the only version used. In fact, NIST is highly recommending that all government entities move to TLS 1.2 by January 1, 2016.

FYI, TLS 1.3 is in a draft specification by the IETF as we speak. I would expect that we will see TLS 1.3 released by the time the PCI SSC’s June 30, 2016 deadline arrives.

With that covered, what is an ASV to do with a scanning customer’s SSL and TLS 1.0/1.1 issues?

According to the letter sent to the ASVs:

“Prior to 30 June 2016: Entities that have not completed their migration should provide the ASV with documented confirmation that they have implemented a risk mitigation and migration plan and are working to complete their migration by the required date. Receipt of this confirmation should be documented by the ASV as an exception under “Exceptions, False Positives, or Compensating Controls” in the ASV Scan Report Executive Summary and the ASV may issue a result of “Pass” for that scan component or host, if the host meets all applicable scan requirements.”

The key here is that you must be mitigating the vulnerability and working to migrate to TLS 1.2.

So what would a mitigation plan look like? Most likely you would monitor for SSL or TLS 1.0/1.1 connections to the devices that still only support those protocols.
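One way to build the inventory behind such a plan is to probe which protocol versions each in-scope host still accepts. Below is a hedged sketch using Python's `ssl` module (it assumes Python 3.7+, and note that recent OpenSSL builds may refuse to negotiate SSLv3/TLS 1.0 from the client side at all; the example hosts are placeholders):

```python
import socket
import ssl

def accepts_tls_version(host, port, version, timeout=5):
    """Return True if `host:port` completes a handshake at exactly `version`.

    Certificate checks are deliberately disabled because only protocol
    support is being probed, not trust.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # must be cleared before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version
    ctx.maximum_version = version       # pin the handshake to one version
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

# Example sweep (host name is a placeholder):
# for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
#     print(v, accepts_tls_version("gateway.example.internal", 443, v))
```

Such a sweep only tells you what a server *accepts*; monitoring what clients actually *negotiate* (from server logs or packet captures) is the other half of the mitigation evidence.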

For those of you that are not going to be able to migrate to TLS 1.2, the Council gives ASVs guidance there as well.

“After 30 June 2016: Entities that have not completely migrated away from SSL/early TLS will need to follow the Addressing Vulnerabilities with Compensating Controls process to verify the affected system is not susceptible to the particular vulnerabilities. For example, where SSL/early TLS is present but is not being used as a security control (e.g. is not being used to protect confidentiality of the communication).”

The reason the Council has to provide a solution past June 30, 2016 is that, as I understand it, a lot of comments were received about “baked in” SSL that was going to require wholesale replacement of devices to correct the problem. A lot of those devices are IP-based point of interaction (POI) devices. ASVs have been instructed on the process to use to reduce the CVSS score so that the vulnerability is no longer considered “high”.

If you have any further questions regarding this announcement, I would discuss it with your ASV. As with all things PCI, every ASV will have variations based on their own risk aversion as to what this pronouncement says.

15
Nov
14

Security Or Checking A Box?

“Better to remain silent and be thought a fool than to speak out and remove all doubt.” Abraham Lincoln

What is your organization interested in?  Security or checking a box?

Not surprisingly, most people answer “security” and then go on to prove with their actions and words that they are only interested in checking a box.

All of you out there that argue ad nauseam about the meaning of PCI DSS testing requirements and the requisite documentation are interested in one thing and one thing only: checking a box.  I am not talking about the few that have honest differences of opinion on a few of the requirements and how a QSA is interpreting and assessing them.  I am talking about those of you that fight constantly with your QSA or acquiring bank over the process as a whole.

If you were to step back and listen to your arguments, you would hear someone that is splitting hairs in a vain attempt to avoid having to do something that would improve your organization’s security posture.  In essence, you want to only be judged PCI compliant, not actually be secure.

To add insult to injury, these are also typically the people that argue the most vehemently over the fact that the PCI DSS is worthless because it does not make an organization secure.  Wow!  Want to have your cake and eat it too!  Sorry, but you cannot have it both ways.

Everyone, including the Council, has been very clear that the PCI DSS is a bare minimum for security, not the “be all to end all” for securing an organization.  Organizations must go beyond the PCI DSS to actually be secure.  This is where these people and their organizations get stumped because they cannot think beyond the standard.  Without a detailed road map, they are totally and utterly lost.  And heaven forbid they should pay a consultant for help.

But I am encountering a more insidious side to all of this.  As you listen to the arguments, a lot of you arguing about PCI compliance appear to have no interest in breaking a sweat and doing the actual work that is required.  More and more I find only partially implemented security tools, only partially implemented monitoring and only partially implemented controls.  And when you dig into it as we must do with the PCI assessment process, it becomes painfully obvious that when it got hard is when the progress stopped.

“It’s supposed to be hard. If it wasn’t hard, everyone would do it.” Jimmy Duggan – A League Of Their Own

Security guru Bruce Schneier was speaking at a local ISSA meeting recently and when asked about why security is not being addressed better he stated that one of the big reasons is that it is hard and complex at times to secure our technology.  And he is right, security is hard.  It is hard because of our poor planning, lack of inclusion, pick the reason and I am sure there is some truth to it.  But he went on to say that it is not going to get any easier any time soon.  Yes, we will get better tools, but the nature of what we have built and implemented will still make security hard.  We need to admit it will be hard and not sugar coat that fact to management.

Management also needs to clearly understand that security is not perfect.  The analogy I like to use is banks.  I point out to people the security around banks.  They have one or more vaults with time locks.  They have video cameras.  They have dye packs in teller drawers.  Yet, banks still get robbed.  But, the banks only stock their teller drawers with a minimal amount of money so the robber can only get a few thousand dollars in one robbery.  Therefore to be successful, a robber has to rob many banks to make a living, which increases the likelihood they will get caught.  We need to do the same thing with information security: recognize that breaches will still occur, but have controls in place that minimize the amount or type of information an attacker can obtain.

“There’s a sucker born every minute.” David Hannum

Finally, there is the neglected human element.  It is most often neglected because security people are not people, people.  A lot of people went into information security so that they did not have to interact a lot with people – they wanted to play with the cool tools.  Read the Verizon, Trustwave, etc. breach analysis reports and time and again, the root cause of a breach comes down to human error, not a flaw in one of our cool tools.  Yet what do we do about human error?  Little to nothing.  The reason being that supposedly security awareness training does not work.  Security awareness training does not work because we try to achieve success only doing it once per year not continuously.

To prove a point, I often ask people how long it took them to get their spouse, partner or friend to change a bad habit of, say, putting the toilet seat down or not using a particular word or phrase.  Never in my life have I gotten a response of “immediately”, “days” or “months”; it has always been measured in “years”.  And you always get comments about the arguments over the constant harping about changing the habit.  So why would any rational person think that a single annual security awareness event is going to be successful in changing any human habits?  It is the continuous discussion of security awareness that results in changes in people’s habits.

Not that you have to harp or drone on the topic, but you must keep it in the forefront of people’s mind.  The discussion must be relevant and explain why a particular issue is occurring, what the threat is trying to accomplish and then what the individual needs to do to avoid becoming a victim.  If your organization operates retail outlets, explaining a banking scam to your clerks is pointless.  However, explaining that there is now a flood of fraudulent coupons being generated and how to recognize phony coupons is a skill that all retail clerks need to know.

  • Why are fraudulent coupons flooding the marketplace? Because people need to reduce expenses and they are using creative ways to accomplish that including fraudulent ways.
  • What do the fraudulent coupons do to our company? People using fraudulent coupons are stealing from our company.  When we submit fraudulent coupons to our suppliers for reimbursement, they reject them and we are forced to absorb that as a loss.
  • What can you do to minimize our losses? Here are the ways to identify a fraudulent coupon.  [Describe the characteristics of a fraudulent coupon]  When in doubt, call the store manager for assistance.

Every organization I know has more than enough issues to supply a topic for these sorts of messages at least once a week.  Information security personnel need to work with their organization’s Loss Prevention personnel to identify those issues and then write them up so that all employees can act to avoid becoming victims.

Those of you closet box checkers need to give it up.  You are doing your organizations a huge disservice because you are not advancing information security; you are advancing a check in a box.

26
Apr
14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI informed, sales people pushing redirected ecommerce solutions by claiming that it put the merchant entirely out of scope.  This was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The requirements called out from requirement 1 are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101 and I doubt that any merchant would not have a firewall protecting all of their Internet facing servers.  Routers have always been optional, and if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.

Requirement 4 is provided to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.

Requirement 10 is important for any forensic research: should the redirect be manipulated, log data makes it possible to determine when that event occurred and, from that, the scope of any compromise.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue that fact.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
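The 11.5 idea reduces to a hash-baseline check. The sketch below is illustrative only; real file integrity monitoring products add scheduled scans, tamper-resistant baseline storage and alerting, and the `checkout.php` page and URLs are made up for the demonstration:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path):
    """SHA-256 digest of a file's current contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def changed_files(baseline):
    """Return the files whose current hash no longer matches the baseline."""
    return [p for p, digest in baseline.items() if sha256_of(p) != digest]

# Baseline the (hypothetical) page that issues the redirect.
tmp = Path(tempfile.mkdtemp())
page = tmp / "checkout.php"
page.write_text('<form action="https://pay.processor.example/checkout">')
baseline = {str(page): sha256_of(page)}

# An attacker swaps in their own payment URL...
page.write_text('<form action="https://evil.example/checkout">')
alerts = changed_files(baseline)  # ...and the baseline check catches it
```

The sketch also shows why detection alone is not enough: the alert has to reach someone who will act on it, which is why requirement 11.5 pairs the monitoring with an alerting and response process.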

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.




Announcements

If you are posting a comment, be patient, as the comments will not be published until they are approved.

If your organization has a PCI opportunity, is in need of assistance with a PCI issue or if you would like the PCI Guru to speak at your meeting, you can contact the PCI Guru at pciguru AT gmail DOT com.

I do allow vendors to post potential solutions in response to issues that I bring up in posts. However, the PCI Guru does not endorse any specific products, so "Caveat Emptor" - let the buyer beware. Also, if I feel that the response is too "sales-ee", I reserve the right to edit or not even authorize the response.

