Author Archive for PCI Guru

28 April 2016

Hold Your Horses

UPDATE: The ROC Reporting Template is available as a PDF on the Document Library page, under the Reporting Template and Forms banner almost all the way down the page. The Word version of the ROC Reporting Template is now available from the PCI Portal. No word yet on the PA-DSS v3.2 and its ROV Reporting Template.

Yes, the PCI SSC released the final version of the PCI DSS v3.2, an updated Glossary and a Summary of Changes document on their Web site this morning, but we are missing a key piece: the Report On Compliance (ROC) Reporting Template.

Why is that important, you might ask?

The ROC Reporting Template is the document that contains all of the tests that a QSA/ISA needs to conduct to prove that an organization is PCI compliant.  It tells you and your QSA/ISA what evidence needs to be gathered, how to gather it and the level of effort required.  Without that information, an assessment under v3.2 cannot be performed.  Nor do we truly know the breadth and depth of the changes the Council has made.

The Council promised on their Webinar a month ago that all documents would be released on the same date.  But as of this writing, the ROC Reporting Template is missing in action.

Until we have that document, we have nothing.

Also of note: the PA-DSS v3.2 and its related Report On Validation (ROV) Reporting Template are missing in action as well.

22 April 2016

Learning Moments From Security Conversations – Part 1

Attacker With Administrator Rights

This conversation was a discussion of an attacker gaining administrative privileges on a network.  It started out benign enough and yet rapidly escalated into a full-on war of words.  I had to spend almost 40 minutes arguing with an administrator over the fact that if an attacker had administrative rights, it was "game over" for their organization.  I could not believe the lengths that this administrator went to in order to prove I was wrong.

What started this fiasco was a discussion of the results of their vulnerability scans and penetration testing reports.  The reason the conversation got tense was that the administrator was arguing about how the penetration tester was able to escalate privilege to administrator.  At the core of the argument was the “Low” rated vulnerabilities that were used by the penetration tester to gain access to the system and ultimately compromise the environment.

I am not sure where this idea/myth actually started, but it continues to persist even today, after around 20 years of vulnerability scanning.  That idea is that "Low" rated vulnerabilities are somehow not a threat.  It persists even when you explain that, regardless of rating, vulnerabilities are vulnerabilities; some are just easier to exploit than others and provide quicker compromises than others.

Another reason this is an issue is that most information security personnel are not penetration testers.  Penetration testing is not so much a skill as it is an art form.  Anyone can take high and medium vulnerabilities and leverage them to compromise an environment.  That is why they are rated so high in the first place.  But it takes a true artist with a tremendous amount of knowledge of networking, operating systems and applications to look at the results of a vulnerability scan, take certain low rated vulnerabilities, pair those with certain other vulnerabilities, compromise a system and then compromise the environment.  Not that this always ends up leading to a compromised environment, but it is neither simple nor easy, which is why it is a shock when it happens.

Once the penetration tester had compromised a few systems, they discovered a way to escalate their privilege to domain administrator through the use of a keyboard logger on a compromised system.  They then collected the domain administrator credentials and it was "game over", or at least that was the penetration tester's opinion and mine.

So the first point of contention was those "Low" vulnerabilities that the penetration tester used to gain access to a system on the network.  Somehow the administrator believed that those vulnerabilities were off limits because they were rated "Low".  I did my spiel that vulnerabilities are vulnerabilities and that even the PCI DSS states that all vulnerabilities must be patched within 90 days (some of the "Low" vulnerabilities were over 90 days old).

Finally the administrator conceded that at least those old vulnerabilities needed to be patched, but continued to argue that using any "Low" vulnerabilities was not "fair".  Fair?  I tossed that back in their face and asked what attacker would play "fair"?  Point taken and we moved on.

The next point from the administrator was that even if the penetration tester had domain administrator privileges, they did not have access to the databases and encryption keys.  Those rights are kept in a different group away from the domain administrators.

I could not believe what I was hearing.  So I next asked if domain administrators could modify the membership of those domain groups.  "Of course," was the quick answer back.  So our simulated attacker could have created a new domain administrator account and added it to the database and encryption groups?  "Well, yeah, I suppose so," was the quiet answer back as the administrator was starting to see where things were heading.

Then the argument moved on to control of network devices and the exfiltration of data outside.  This revolved around the fact that domain administrators did not have access to network devices.  However, the RADIUS server that did control access to the network devices was integrated with their Active Directory environment.  So I asked what would stop someone with domain administrator rights from creating a new account and adding that account to the network administration group, which would then be replicated to the RADIUS server.

The silence created by that question was deafening.  The administrator was speechless.  They now understood the gravity of the situation.  They were owned and they really did not like that fact.  Granted we had not taken things that far because it is a pain to clean up.  But the client now understood after 40 minutes of arguing about it, that the game was over and their environment was no longer under their control.

This is the problem that most organizations face.  They see everything framed in the control paradigms they have implemented.  The problem is that attackers do not care about controls or their paradigms.  They just care about getting access to information and they structure their efforts accordingly without regard to a control environment.

This is why monitoring is so very important and why near real-time monitoring can save your life if it is configured properly.  But monitoring only works if rules have been structured around those same control paradigms so that when the paradigms are violated, alerts are generated.

In the above example, alerts that would have raised red flags include the following (rough sketches of two such rules follow the list):

  • Creation of administrative accounts. Such accounts are only rarely created in most environments so when they are created there should be an alert generated and then matched against the account creation request.
  • Addition of accounts to administrative groups. As with administrative accounts, there are very infrequent changes made to these groups.  Again when such an alert is generated, there should be a corresponding change request of some sort.
  • Changes to configurations of network devices and/or servers. These can be problematic because of volume particularly on “Patch Tuesdays” or whenever you do volume patching.  But matching changes to change tickets pays off in discovering attackers.  Since attackers do not register their changes in the change management system, any changes popping up that do not have a corresponding change ticket are likely to be part of an attack.
  • Redirection of network traffic to public IP addresses outside of your business partners or other legitimate IP addresses. Where organizations are most at risk is communications with business partners.  Because of the speed of business these days, a lot of information security people do not sufficiently restrict network traffic between their organization and business partners so that they do not have to constantly make changes.  While that allows near immediate communication flexibility it also allows business partners to be a ready source of attacks and data exfiltration points.
  • Significant increases in outbound traffic volume over ports such as DNS that should not have such increases. Attackers do not obey the port protocol rules, particularly if they are trying to avoid changes to network devices.  In the Target breach, the attackers exfiltrated Target's cardholder data out through port 53 (DNS).  The reason is that in most instances port 53 will be open and will not have a restriction on the IP addresses allowed to communicate over it.
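
To make the first three bullets concrete, here is a rough sketch, in Python, of the kind of rule a monitoring tool could apply. It is illustrative only: the event fields, the ADMIN_GROUPS set and the shape of the change ticket data are my assumptions, not any particular SIEM's API.

```python
from datetime import datetime, timedelta

# Hypothetical directory events; a real SIEM would feed these from your domain
# controllers (account-created and group-membership-change events).
ADMIN_GROUPS = {"Domain Admins", "Database Admins", "Encryption Admins", "Network Admins"}

def alerts_for(events, approved_tickets, window=timedelta(hours=24)):
    """Flag admin account creations and admin group additions that lack a
    corresponding, recently approved change ticket."""
    alerts = []
    for ev in events:
        sensitive = (
            (ev["action"] == "account_created" and ev.get("is_admin"))
            or (ev["action"] == "group_member_added" and ev.get("group") in ADMIN_GROUPS)
        )
        if not sensitive:
            continue
        ticket = approved_tickets.get(ev.get("ticket_id"))
        if ticket is None or abs(ev["time"] - ticket["approved"]) > window:
            alerts.append(f"ALERT: {ev['action']} for {ev['target']} has no matching change ticket")
    return alerts

# One legitimate change and one that should raise a red flag.
tickets = {"CHG-1001": {"approved": datetime(2016, 4, 20, 9, 0)}}
events = [
    {"action": "group_member_added", "group": "Domain Admins", "target": "svc-backup",
     "time": datetime(2016, 4, 20, 10, 30), "ticket_id": "CHG-1001"},
    {"action": "group_member_added", "group": "Domain Admins", "target": "jdoe2",
     "time": datetime(2016, 4, 21, 2, 15), "ticket_id": None},
]
print("\n".join(alerts_for(events, tickets)))
```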

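And for the last bullet, a similarly rough sketch of a volume check on outbound port 53 traffic; the numbers and the three-sigma threshold are made up for illustration:

```python
from statistics import mean, stdev

def dns_volume_alert(history_mb, today_mb, sigmas=3.0):
    """Flag a day whose outbound port 53 volume is far above the recent baseline."""
    baseline, spread = mean(history_mb), stdev(history_mb)
    return today_mb > baseline + sigmas * max(spread, 1.0)

# Hypothetical daily outbound DNS volumes in megabytes for the past week.
history = [42, 38, 45, 40, 41, 44, 39]
print(dns_volume_alert(history, 43))   # False - a normal day
print(dns_volume_alert(history, 950))  # True  - worth a phone call
```
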
But the obvious area that should receive attention is the patching of those medium and low ranked vulnerabilities.  The twisted logic that sometimes gets used to justify putting off applying patches until the very, very last possible moment, all because the vulnerabilities being addressed are not high or critical, just amazes me.  As I said earlier and cannot stress enough, vulnerabilities are vulnerabilities regardless of their rank.  They make devices/systems vulnerable, hence their name.

I will share another such discussion in a future post.

16 April 2016

PCI DSS v3.2 Draft Released

On Friday, April 15, 2016, while a lot of you were probably getting your US income taxes done, the PCI SSC decided to release the draft of v3.2 of the PCI DSS.  I know the announcement message from the Council ended up in my company's spam filter, so you may want to check there if you did not receive it.  I was lucky enough that a colleague forwarded his copy along to me.  However, to obtain the draft PCI DSS v3.2 and the requisite change log, you need access to the PCI Portal.

These are some of the more notable changes in the new PCI DSS version.

  • The draft provides an official sunset date for v3.1 of the PCI DSS. Regardless of the date in April that v3.2 is released, v3.1 will be withdrawn on October 31, 2016.  So any assessments done after that date will need to comply with and use v3.2.
  • Two new sections have been added to Appendix A. In addition to the appendix for shared hosting providers (now marked A.1), we get Appendices A.2 and A.3.  A.2 covers SSL and early TLS for those of you that will miss the June 30, 2016 date.  For those of you that thought 2018 was the only deadline and missed the discussion on the Webinar: while the deadline was extended to June 30, 2018, any organization missing the June 30, 2016 date must fill out Appendix A.2.  A.3 is where the Council added the designated entities supplemental validation (DESV) requirements.
  • There are a number of new requirements for service providers that are best practices until February 1, 2018. Those new requirements include: (1) maintain a documented description of the cryptographic architecture, (2) detect and report on failures of critical security control systems, (3) perform penetration testing on segmentation controls at least every six months, (4) executive management to establish responsibilities for the protection of cardholder data and a PCI DSS compliance program, and (5) perform reviews at least quarterly to confirm personnel are following security policies and operational procedures.  I would bet that numbers three and five will create a lot of contention with service providers.  But you have until February 1, 2018 to get those in place.  However, if experience teaches us anything, service providers had better start now getting these new requirements in place and operating.
  • All organizations picked up the following new requirements that are best practices until February 1, 2018: (1) change control processes to include verification of PCI DSS requirements impacted by a change, and (2) multi-factor authentication for all personnel with non-console administrative access to the CDE. As with the aforementioned new requirements for service providers, these will also require a lot of organizations to get started now to ensure these new requirements are in place and operating.
  • The Council clarified requirement 8.1.5 to show that it is intended for all third parties with remote access, rather than only vendors. While most organizations understood the intent of this requirement, there were a few that played “legal eagle” and refused to require compliance for non-vendors.
  • Requirement 6.5 has been clarified that developers must go through secure coding training at least annually. This change will likely create some consternation for some organizations that are developing their own software that is in-scope for PCI compliance.
  • Clarified 11.5.a by removing “within the cardholder data environment” from the testing procedure for consistency with requirement, as the requirement may apply to critical systems located outside the designated CDE. This will likely expand the number of systems that require critical file monitoring.
  • Clarified 12.8.1 by saying that the list of service providers must now include a description of the service(s) provided.
  • Clarified 12.8.2 by adding guidance that service provider responsibility will depend on the particular service(s) being provided and the agreement between the two parties.
  • One of my pet peeves has finally been addressed. I have always had an issue with requirement 1.1.6 and the use of the terminology “insecure protocols”.  The reason is that in one way or another, all protocols have their insecurities whether they are known or not.  In v3.2, the Council has finally removed the “insecure” designation as, in their words, “these may change in accordance with industry standards.”  It is those small battles at times that make your day.

There are other clarifications and edits that have been made to the new version.

For all of us QSAs, we await the Reporting Template, which will detail the actual testing to be performed and allow us to assess the real impact on the effort required to conduct an assessment.  As a result, there could still be some surprises with this new version of the PCI DSS.  So stay tuned.

15 April 2016

Multi-Factor Authentication

During the recent PCI SSC’s presentation on the coming changes to v3.2 of the PCI DSS, one of those changes is the adoption of the term “multi-factor authentication” instead of “two-factor authentication”.  This change resulted in some heated discussion in the Q&A session that followed their presentation.

Even though the Council was very, very clear about what they meant by multi-factor, a lot of us have concerns about that terminology.  The reason for all of this concern?  It is because most of us in the information security profession dislike the term "multi-factor authentication".  That dislike is driven by the fact that the term typically gets stretched to cover stupid practices such as using two sets of credentials.  I have discussed this in a previous post, but I thought the time was right to discuss this topic again before QSAs start running into organizations trying to pawn off two passwords as valid multi-factor authentication.

Multiple factors of authentication are defined as:

  • Knowledge also referred to as ‘something you know’ such as a password or passphrase,
  • Possession also referred to as ‘something you have’ such as an RSA SecurID or Symantec VIP token, and
  • Inherence also referred to as ‘something you are’ such as a fingerprint, hand or voice.

In order for multi-factor authentication to be secure, each factor used must come from a different category, hence the terms one, two and three factor authentication.  Using only a password is single or one factor authentication.  Using a PIN with a fingerprint is two factor authentication.  Using a password with a token number and a fingerprint is three factor authentication.

Therefore, using two passwords is not using factors from two of the three categories.  It is using the same category twice, which is not considered secure.  The same holds true for using a fingerprint and an iris scan, as those are also two items from the same category, although those biometric factors are arguably much stronger than just two passwords.
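
As a rough illustration of that factor counting (the category names come from the list above, but the mapping table and function are mine, not the Council's):

```python
# Hypothetical mapping of authentication methods to the three factor categories.
FACTOR_CATEGORIES = {
    "password": "knowledge", "passphrase": "knowledge", "pin": "knowledge",
    "hardware_token": "possession", "otp_app": "possession", "smart_card": "possession",
    "fingerprint": "inherence", "iris_scan": "inherence", "voice": "inherence",
}

def factor_count(presented):
    """Count distinct factor categories: two passwords still count as one factor."""
    return len({FACTOR_CATEGORIES[method] for method in presented})

print(factor_count(["password", "password"]))                      # 1 - single factor
print(factor_count(["pin", "fingerprint"]))                        # 2 - two factor
print(factor_count(["password", "hardware_token", "fingerprint"])) # 3 - three factor
```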

Why are two passwords not considered secure?  An attacker only has to compromise your authentication system and they would likely have access to both sets of credentials.  But if you also require either of the other two factors, the attacker may have credentials but they do not have those other factors needed to use them.  Therefore, if you are using true two or three factor authentication, your security remains effective.

So all of you out there thinking the Council has approved of using two passwords as an approved multi-factor authentication solution need to think again.  Although I know there will be some that do not get this message and will try and use it in that context anyway.

09 April 2016

Living In PCI Denial

This was one of those weeks where you see something and all you can do is shake your head and wonder what some organizations think when it comes to PCI.  What made matters worse in this case was that the organization arguing over PCI compliance is the manufacturer of card terminals, also known as points of interaction (POI).  It shocked me that such an organization was so clueless about PCI as a whole when you would think it is their business to know.  But to add insult to injury, my client's transaction processor and acquiring bank are also apparently clueless.

As background, I am working on a client's Report On Compliance (ROC).  This client has almost completed their roll out of an end-to-end encryption (E2EE) solution at all of their 4,000+ retail locations.  This E2EE solution will take all but the POI at those retail locations out of scope for PCI compliance.  That is the good news.

But if there is good news, you know there must be bad news.  In reviewing their documentation of this E2EE solution, I discovered that the POI vendor is providing management and updates to the POI through a terminal management system (TMS).  Since this TMS solution/service connects directly to my client’s cardholder data environment (CDE), I naturally asked the client for a copy of the vendor’s Attestation Of Compliance (AOC) for the TMS solution/service.

I thought those worthless PCI Certificates of Compliance took the cake.  Then, BAM!  I got the following message forwarded to me by my client from the POI vendor.  I have redacted all of the potential information that could identify the relevant parties and the TMS solution/service.

“Please see the follow up note below that you can send to your QSA for review and feedback:

  1. TMS systems in our industry do not require any type of PCI certification since PCI is concerned about card holder information that would be at risk. Since [vendor solution] does not have any card holder data at all, it falls outside of PCI requirements.  [Vendor solution] is merchant configuration and estate management tool only and as such, no payment card information passes through it, or directed to it.  In addition, no secure keys are stored on [vendor solution] so transaction data cannot be decrypted with anything on [vendor solution] or POS.
  2. [Vendor] Hardware and [vendor solution] Software are all PCI PTS compliant and certified and listed on the PCI website. Transactions are encrypted in hardware using the [encryption solution] keys which again [vendor solution] has no knowledge.  Transaction information can only be decrypted by [processor] the processor.  [Vendor solution] has no knowledge of this encrypted information being sent directly from the [vendor] to the processor.
  3. The beauty and simplicity of [vendor solution] semi-integrated terminal application is that is has all transaction data go directly to the Processor ([processor]) and no customer data is directed to the POS or [vendor solution] which makes the POS out of PCI Scope by the very nature of no card holder data in their environment.
  4. [Client] has a merchant certification with [processor] for the [encryption solution] with our [vendor solution] terminal application. Any questions regarding the certification should be directed to [acquiring bank] or a [processor] representative.

Let us know if your QSA has any further questions and we can also schedule a concall with all parties to address any concerns on [vendor solution] TMS and PCI.”

The first thing that wound me up is that this vendor is a business partner of my client's transaction processor.  The processor is also a business partner of my client's acquiring bank.  Those two organizations put this vendor forth to my client as being able to provide POI compatible with the processor's E2EE and tokenization solution.  Obviously from this vendor's response, these two well-known institutions did nothing in the way of due diligence to ensure that this vendor and its services were PCI compliant.

The second thing that totally irritated me is that there is no excuse for this vendor's uneducated response.  Granted, this vendor is new to the US market, but they have been supplying POI to merchants all over other parts of the world.  Which then starts to make you wonder just how lame the banks, processors, card brands and other QSAs are that this vendor has not been called on the carpet about this before.  But that is a topic for another post and a good reason why the FTC is investigating the PCI compliance industry.

So let me take apart this vendor’s response.

“TMS systems in our industry do not require any type of PCI certification since PCI is concerned about card holder information that would be at risk.”

Wrong!  On page 10 of the PCI DSS the first paragraph under ‘Scope of PCI DSS Requirements’ clearly defines what is in scope for PCI compliance.

“The PCI DSS security requirements apply to all system components included in or connected to the cardholder data environment. The cardholder data environment (CDE) is comprised of people, processes and technologies that store, process, or transmit cardholder data or sensitive authentication data. “System components” include network devices, servers, computing devices, and applications.”

The operative phrase the TMS solution/service falls under is “connected to”.  The TMS solution/service directly connects to my client’s CDE.  That solution/service may not process, store or transmit cardholder data (CHD) or sensitive authentication data (SAD), but it is directly connected to my client’s CDE.  As a result, according to the above definition, the TMS solution/service is definitely in scope for PCI compliance.

“[Vendor] Hardware and [vendor solution] Software are all PCI PTS compliant and certified and listed on the PCI website.”

PTS certification is a card brand requirement, not a PCI DSS requirement.  Nowhere in the PCI DSS does it require that a PTS certified POI be used so I really do not care about this statement as it has nothing to do with my PCI DSS assessment activities.  If PTS were a PCI DSS requirement, then all of those people using Square and the like would be non-compliant.

“In addition, no secure keys are stored on [vendor solution] so transaction data cannot be decrypted with anything on [vendor solution] or POS.”

“Transaction information can only be decrypted by [processor] the processor.”

True, your TMS solution/service does not have the encryption keys.  But the firmware delivered by the TMS solution/service does have access.  (Unless you are the first POI vendor I have ever encountered that spent the huge amount of money required to truly create a hardware-only encryption solution.)  Given the low retail price and discounting of your POI you gave my client, I very seriously doubt that is the case.  So the firmware that your TMS solution/service delivers is what is doing the encryption and therefore has access to the encryption keys.  So while the TMS solution/service does not have the keys, it could be used to deliver rogue firmware that could obtain them.

Then there is the firmware delivery itself by your TMS solution.  If someone hacks your TMS environment, how easy would it be for them to have it deliver a rogue version of your firmware?  Since my client has no AOC, I have no idea if your security measures surrounding your TMS solution are adequate to prevent such an attack.

“[Client] has a merchant certification with [processor] for the [encryption solution] with our [vendor solution] terminal application.”

Such a statement ranks up there with those previously mentioned worthless PCI Certificates of Compliance.  Any QSA is required to obtain an AOC for the TMS solution/service to ensure that it is PCI compliant or the solution/service must be assessed as part of the merchant’s PCI assessment.

PCI DSS requirements under 12.8 are very clear as to everything a merchant needs to be able to provide to their QSA regarding third party PCI compliance.  Primary among those items of evidence is that AOC for your TMS solution/service.

So I had a conference call with my client's bank to discuss this situation.  I pushed back very hard when they told me that my client needs to do a compensating control for their business partner's incompetence.  I even got an "atta boy" from the bank for identifying to them that they have a PCI compliance and potential security issue.  But I could not make the bank budge on the compensating control, so I am off to get that written.

The lesson to be learned from this post is that nothing can be taken for granted when doing a PCI assessment, even when your transaction processor and bank are involved.  A lot of people and QSAs would assume that a POI vendor would know better and that their bank and transaction processor had vetted the POI vendor.  Therefore, why worry about this vendor?  However, as I have pointed out, you can never take anything for granted, even when it involves organizations that you would think would know better.

This is just one way of many that could result in an organization being breached.  The TMS solution/service is a gateway directly to the merchant’s CDE.  Yet there has been no PCI assessment of that solution/service to ensure that it is PCI compliant and the risk it could be subverted has been minimized.

Thank goodness it is the weekend.  Oh, wait.  This weekend’s project is my income taxes.  Looks like I will be cranky all weekend as well.

07 April 2016

Just Because You Can Wait, Does Not Mean You Will Be Judged “Compliant”

Based on some of the questions I have received since my post on v3.2, apparently a lot of people missed this little point in my last post about the Council’s Webinar.

“The final key point on this topic that the Council could not stress enough was, just because the deadline has been pushed out was no justification for an organization to wait until the last minute before addressing these critical vulnerabilities.  If an organization can meet the June 30, 2016 deadline, then they should meet that deadline.  If they need until December 31, 2016 to convert, then they need to mitigate the risk until December 31, 2016 when they can drop SSL and early TLS.  But waiting for the sake of waiting because the deadline is in 2018 is unacceptable and needs to be called out as ‘Not In Place’ by QSAs.”

For all of you in denial out there, make sure you truly read that last sentence.

Yes folks.  Your QSA can mark you as non-compliant if your organization does not have a very, very, very good and legitimate documented business reason for not meeting the June 30, 2016 deadline for getting rid of SSL and early TLS.

Want to argue that point?  Fine.  Then you can expect your QSA to put you in arbitration with your acquiring bank on this subject.  If your acquiring bank is willing to sign off on your lame delay, then so be it.  But if your bank denies your request, then expect to be put into remediation by your bank and possibly even be fined for your arrogance.

And one more thing we have since clarified.  If you can meet the June 30, 2016 deadline, then you only need mitigation and migration plans for your QSA.  If you are not going to meet the 2016 deadline, then in addition to the plans your organization will also need to provide a compensating control worksheet (CCW) for 4.1.  Even if you are filing your Report On Compliance (ROC) before June 30, 2016, you still need to provide your QSA with the plans and the CCW if you will miss the 2016 deadline.

So for all of you out there that thought you had dodged a bullet, there is another bullet with your name on it.  You have been warned.

01 April 2016

The Council Speaks About v3.2

If you missed it, do not feel bad.  I too had to be told by friends and colleagues that the PCI SSC was having a Webinar on Thursday, March 31, to discuss the upcoming changes to the PCI DSS and PA-DSS as well as changes to other areas as a result.  Apparently the Webinar was announced in the March issue of the QSA newsletter.

To begin their presentation, the Council made a big deal out of explaining why they are dumping the three year update cycle.  The bottom line about this is that they feel the PCI DSS and PA-DSS are mature and therefore any future updates will be evolutionary not revolutionary as they have been in the past.  As a result, we can expect more minor changes more often.  Much like when the PCI DSS started out and we quickly got v1.1 followed by v1.2.

PCI DSS v3.2

The real piece of news here was that two-factor authentication (TFA) is going to be required for all administrative access to the cardholder data environment (CDE) regardless of whether that access is from the internal network or a remote network.  I am sure this is in response to the number of breaches that involved administrators being spear phished.

Speaking of TFA, the Council indicated that they are going to switch terminology from "two-factor" authentication to "multi-factor" authentication (MFA).  However, they were very clear when they discussed this change in terminology that they still mean the three factor model of something you know, something you have and something you are.  Their rationale for this change is to align the DSS with industry terminology.  In the Q&A they got a lot of questions on this change, as most security professionals said that clients would read MFA as allowing two sets of credentials, whereas TFA implies truly different factors.  So we will see if the MFA decision stands when the new standard is released.

In addition, the Council outlined some other key changes we can expect to see in the latest version of the DSS.  These are:

  • Two new Appendices are being added to the PCI DSS. The first of which discusses the SSL/early TLS issues.  The second is the incorporation of the Designated Entities Supplemental Validation (DESV) requirements into the DSS.
  • Allowing the display of the PAN to show more than just the first six and last four digits, to align the PCI DSS with the coming changes to ISO 7812, which will increase the issuer identification number (IIN) from six digits to eight digits (see the masking sketch after this list).
  • Adding a number of additional requirements for service providers including: documentation of cryptographic architecture, detection/reporting on critical security control systems, penetration testing to confirm segmentation every six months, establishment of a formal PCI compliance program, and quarterly confirmation that personnel are following all security policies, standards and procedures.
  • Periodic testing that all change control policies, standards and procedures are in place and operating as designed. This is the first of many business as usual (BAU) requirements that will be added to the PCI DSS.
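
To illustrate the PAN display bullet above, here is a hedged sketch of masking to the familiar "first six, last four" display versus a "first eight, last four" display once eight-digit IINs arrive. The function is purely illustrative and is not language from the standard.

```python
def mask_pan(pan, lead=6, trail=4):
    """Show only the leading and trailing digits of a PAN; mask everything else."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return digits[:lead] + "*" * max(0, len(digits) - lead - trail) + digits[-trail:]

print(mask_pan("4111111111111111"))          # 411111******1111 (first six / last four)
print(mask_pan("4111111111111111", lead=8))  # 41111111****1111 (eight-digit IIN displayed)
```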

More On SSL/Early TLS

The Council gave a bit more information regarding why they extended the deadline on SSL and early TLS out to June 30, 2018.  As no surprise, the reason for the extension was push back from a variety of sources that found the 2016 deadline too short to convert.

I know from my own experience, I have a few clients that have contracts that do not allow them to make such changes without consultation with every customer impacted.  In one case, it was going to take almost nine months just to consult with all of their impacted customers and then another seven months to implement the changes into production.  In the perfect scenario, they would have cut over around September 2016, but they said past experience indicated a more likely date would have been July 2017 at the earliest.

The presenter reiterated that service providers must meet the June 30, 2016 deadline.

Also discussed was how ASVs are supposed to deal with SSL and early TLS issues.  Until June 30, 2016, if an ASV encounters SSL or early TLS vulnerabilities, the ASV must obtain the mitigation plan or a letter from their customer attesting that a mitigation plan has been developed and the date when the customer will have addressed the vulnerabilities related to SSL and/or early TLS.  The ASV does not need to assess the mitigation plan as the assessment of the mitigation plan is something the organization’s QSA must perform as part of the assessment process.

The final key point on this topic that the Council could not stress enough was, just because the deadline has been pushed out was no justification for an organization to wait until the last minute before addressing these critical vulnerabilities.  If an organization can meet the June 30, 2016 deadline, then they should meet that deadline.  If they need until December 31, 2016 to convert, then they need to mitigate the risk until December 31, 2016 when they can drop SSL and early TLS.  But waiting for the sake of waiting because the deadline is in 2018 is unacceptable and needs to be called out as ‘Not In Place’ by QSAs.

Related to this discussion was the fact that vulnerability management still needed to be addressed through the mitigation.  So if new vulnerabilities to SSL and/or early TLS are discovered while the organization is remediating their implementations of SSL/early TLS, the organization must still comply with requirements 6.2 and 11.2.

PA-DSS v3.2

No news is good news here.  There will be little change to the PA-DSS standard other than to align it with PCI DSS v3.2.

However, two significant changes are coming to an application's Implementation Guide (IG).

The IG will now be required to address debugging logs that contain PAN data.  Those debugging logs will need to be protected, debugging will need to be disabled immediately once it is no longer needed and the debugging log data must be securely deleted as soon as it is no longer needed.
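
As a rough sketch of what that could mean in practice (my own illustration, not language from the PA-DSS or any IG), a developer might redact anything that looks like a PAN before it ever reaches a debug log:

```python
import re

# Crude stand-in for "looks like a PAN": 13 to 19 digits, optionally separated by
# spaces or dashes. Real tooling would add a Luhn check to cut false positives.
PAN_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def redact(message):
    """Replace anything PAN-shaped with a fixed placeholder before it is logged."""
    return PAN_PATTERN.sub("[PAN REDACTED]", message)

print(redact("debug: auth request for 4111 1111 1111 1111 approved"))
# debug: auth request for [PAN REDACTED] approved
```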

The IG will also be required to discuss the secure implementation of patches and updates to the application.

PA-DSS v3.1 dealt with the SSL/early TLS issue, so the Council felt that there would be no changes regarding that topic.  That said, they did address the question as to whether or not TLS v1.1 is considered secure and laid out how TLS v1.1 needs to be configured to be secure (a minimal configuration sketch follows the list).  That configuration included:

  • Disable weak ciphers and cipher suites such as MD5, SHA-1 and RC4.
  • Use sufficient key sizes.
  • Prevent fallback to SSL or TLS v1.0.
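
As an illustration only (the Council's guidance is the list above, not this code), here is a minimal sketch of that kind of floor using Python's standard ssl module; the cipher string is an assumption you would tune to current guidance:

```python
import ssl

# Requires Python 3.7+ for TLSVersion / minimum_version.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Prevent fallback to SSL or TLS v1.0 by setting a protocol floor.
ctx.minimum_version = ssl.TLSVersion.TLSv1_1

# Drop weak ciphers and suites (MD5, RC4, anonymous and 3DES suites); the exact
# string is an assumption, not a recommendation from the Council.
ctx.set_ciphers("HIGH:!aNULL:!MD5:!RC4:!3DES")

# Certificate validation and hostname checking stay on by default with
# PROTOCOL_TLS_CLIENT; "sufficient key sizes" comes down to the certificates and
# key-exchange parameters you are willing to accept, not a single setting here.
print(ctx.minimum_version)
```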

AQM Update

The Council indicated that the PCI DSS v3.2 and the Report On Compliance (ROC) reporting templates will be released simultaneously for the first time.  Timing for these documents will be late April 2016.  No specific date was provided.

On the PA-DSS side, the Council stated that the v3.2 Report On Validation (ROV) reporting template and the standard will be released in May 2016.  Again, no specific date was provided.

Cutover to v3.2 for both standards was discussed, with the PCI DSS cutover being the more specific.  PCI DSS v3.2 will go active upon release, with sunsetting of v3.1 occurring in October 2016 on whatever day matches the release date.  Cutover and sunsetting for the PA-DSS will be announced with the release of the v3.2 standard.  Use of both standards and reporting templates can begin immediately, but we were reminded that everyone must cut over by the relevant sunset dates.

The Council also indicated that any relevant v3 FAQs will also be updated when the new standards are released.

ROC/ROV Personalization

The final point discussed under the AQM banner was the personalization of the ROC and ROV reporting templates by QSACs and PA-QSACs.  According to the presenter, the Council is hearing complaints from banks and the brands about the “over personalization” of ROC and ROV reports.

The Council stated that they understood the desire of QSACs and PA-QSACs to put their logos on the reports as well as making other “minor” changes to make the reports reflective of their organization.  However, banks and the card brands have been complaining that some of the personalization done had made the reports different enough from the original templates as to make them difficult to quickly review and process.

As a result, the Council has felt it necessary to issue guidelines on what personalization of the ROC and ROV templates is allowed.  Under these new guidelines:

  • Adding a title page to the report templates is allowed.
  • Adding a company’s logo to the report header is allowed.
  • No changes are allowed to any of the reports' footers.

If you did miss this Webinar, the Council stated they were recording the session and it will be available on their PCI Portal sometime in the next few days.





