Archive for the 'Requirement 3 – Protect stored cardholder data' Category

19 Dec 2021

Updated PAN Truncation FAQ

As part of the holiday giving tradition, the PCI SSC has given us an updated FAQ (#1091) on the subject of PAN truncation, and it will likely go down as the most confusing FAQ ever.

The FAQ starts out simple enough with the statement:

“A maximum of the first 6 and last 4 digits of the PAN is the starting baseline for entities to retain after truncation, considering the business needs and purposes for which the PAN is used.”

But it is the table that follows that gets messy.

It seems that each of the card brands has their own take on PAN truncation based on PAN length and other factors. Only American Express has stayed the course.

Based on the guidance for UnionPay, Visa, Mastercard, JCB and Discover, the idea of first six/eight and ANY OTHER four is a bit bizarre, not to mention risky.

Never mind the obvious warning note at the end of the FAQ that states:

“Access to different truncation formats of the same PAN greatly increases the ability to reconstruct full PAN, and the security value provided by an individual truncated PAN is significantly reduced. If the same PAN is truncated using more than one truncation format (for example, different truncation formats are used on different systems), additional controls should be in place to ensure that the truncated versions cannot be correlated to reconstruct additional digits of the original PAN.”

Personally, I would stick with the good old first six/last four and avoid any of these other formats, as you are likely setting yourself up for problems and PCI non-compliance.
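If it helps to see what that means in practice, here is a minimal sketch (my own illustration, not from the FAQ) of the good old first six/last four format:

```python
def truncate_pan(pan: str) -> str:
    """Truncate a PAN to the first six/last four format.

    Everything between the first six and last four digits is replaced
    with 'X' so the result is no longer a full PAN.
    """
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:
        raise ValueError("too short to be a valid PAN")
    return digits[:6] + "X" * (len(digits) - 10) + digits[-4:]

print(truncate_pan("4111 1111 1111 1111"))  # 411111XXXXXX1111
```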

Happy holidays to all!

08 Mar 2019

Sins Of The Past

This was a question we got from our last PCI Dream Team session on the Cloud.

“Issue – found CVV in historical call recordings that need to be purge/delete. We are not able to purge the entire call record and still need to keep it for record retention. What tools should be evaluated to help address this issue?”

A lot of organizations are discovering that how they did things in the past does not meet PCI, GDPR or other legal and regulatory requirements for protecting the data in their possession.  Gone are those freewheeling days of collecting data without worrying about how to secure and protect it.  Customers and the public at large are realizing the value of their information and the need to protect it.  All of which starts organizations thinking about how to reduce the risk of holding all of this data now that they are being held responsible for it.  The patchwork of state laws in the US puts a lot of organizations at risk, some more than others.

There are also the sins that come to light down the road.  It is not unusual at large organizations to have a PCI in-scope application crawl out of the woodwork years later.  It should have been identified way back when the organization was starting out in PCI, but somehow it was missed and has only just turned up.  Unfortunately, these discoveries tend to surface at the 11th hour of the organization’s current PCI assessment, when there is no way to include the application without delaying the issuance of the ROC.

Surprise!

So, let us talk about the last case first: the application that is uncovered very late in the PCI assessment.

What should happen, and in the example cited did happen, was a conversation with the acquiring bank.  The situation was explained, as was the risk involved (the application was storing encrypted PAN), and the bank was asked: do we delay filing the ROC and assess this application (likely a delay of longer than 90 days), or do we keep moving ahead as planned and pick up the newly disclosed application in the next assessment?

The bank decided that they did not want to delay the ROC filing, since it had just come out of our QA process, had been sent to the client for review and was due in around 30 days.

The client looked further into the application and determined that it could easily be remediated with tokenization from their gateway.  As a result, by the time the next year’s assessment came around, the application had been remediated with tokenization.  We took a look at it, confirmed it no longer contained encrypted PAN and explained to the bank that it would no longer be in scope.

However, things do not always end that well.  I have also had occasions where no remediation was possible for a variety of reasons, and I had to go in the following year and assess the newly discovered application in all its PCI compliance (and in some cases non-compliance) glory.

Remediate

Getting back to our original sin, so to speak.

First and foremost, you may not be able to remediate your files due to legal or regulatory constraints.  So, before you go charging ahead on your remediation efforts, make sure you discuss it with your legal and compliance folks to ensure you are not creating an even bigger problem.  Assuming you are allowed to remediate data, you can proceed with reading the rest of this section.

Structured data is typically easy to remediate.  You find out what XML tags, fields or database columns are involved, develop a program that truncates the PAN to first six and/or last four (or erases any data you were not supposed to keep) and execute it.  Easy.  Well, easy until you take into account backups, which can complicate remediation if you cannot simply erase them.
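As a sketch of what such a remediation program might look like, assuming a simple database of stored PANs (the payments table and pan column are hypothetical names, and this is an illustration rather than a production script):

```python
import sqlite3

def remediate_pan_column(db_path: str) -> None:
    """Truncate every stored PAN to first six/last four, in place."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT id, pan FROM payments").fetchall()
    for row_id, pan in rows:
        digits = "".join(ch for ch in pan if ch.isdigit())
        truncated = digits[:6] + "X" * (len(digits) - 10) + digits[-4:]
        conn.execute("UPDATE payments SET pan = ? WHERE id = ?",
                     (truncated, row_id))
    conn.commit()
    conn.close()
```

The same idea applies to XML: parse, replace the tagged value, write back.  And remember those backups.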

Unstructured data, as with call recordings and notes/comments fields, can be a nightmare to remediate.  The reason of course is that the data has no structure and does not necessarily occur in the same place.  Unlike XML or a database, where data is at least tagged or in a column, unstructured data exists wherever it exists, and programs to remediate the sensitive data need to find it before they can eradicate it.  That introduces the problem of false positives.  I wrote all about the “fun” of trying to find cardholder data (CHD) five years ago, and it has not necessarily gotten any better.  The bottom line with unstructured data is that it may not be possible to completely remediate the problem.
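To illustrate both the search and the false positive problem, here is a minimal sketch that scans free text for candidate PANs and uses a Luhn check to weed out number strings that merely look like card numbers (a Luhn pass still does not prove a number is a real PAN, which is exactly the false positive headache):

```python
import re

PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_chd(text: str) -> list:
    """Return candidate PANs that Luhn check; the rest are likely noise."""
    hits = []
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits)
    return hits

# The invoice number fails the Luhn check and is correctly ignored.
print(find_chd("card 4111111111111111 charged, invoice 1234567890123"))
```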

However, the best you may be able to do is to remediate the data as it is encountered.  Going back to call recordings: if the quality assurance review process, or any process that has someone review recordings, encounters CHD, the reviewer redacts the information so that it is no longer in that file.  Not perfect, but you slowly reduce the amount you are storing.  You still have to encrypt the files for protection, but you are making an effort to reduce risk by reducing the amount of viable data.

Isolate It

This most commonly occurs with call recordings, but I have encountered the occasional “legacy” application where it applied as well.

In either case, the old system is being decommissioned and a new solution (usually outsourced) is being implemented.  The question comes up, “what do we do with the old system?”  For customer service, legal and/or regulatory reasons it cannot just be wiped and destroyed; it needs to be retained for some period of time before that can happen.

The answer is to keep the system powered up, but off any other network.  If people need access, they must go to a PC or workstation connected to a private, air-gapped, isolated network consisting of the old system and the PCs or workstations used to access it.  No internet or other network access is provided, only a network containing those few isolated systems.  This allows the system and workstations to age yet remain protected because of the air gap.  Remember, the PCs and workstations will also age along with it, because it is highly likely that newer software will not allow connectivity to the old system.  This is why everything needs to be air gapped.

I usually get asked why the old solution should be kept powered up.  That comes from a study IBM did long ago.  What the IBM study found was that systems that have run for years have a tendency to fail after being powered off for any extended length of time (i.e., long enough to cool down).  As a result, if you intend to keep the system around and available, you had best keep it powered up, albeit isolated as discussed earlier.

One of the larger issues with isolation will be monitoring the air-gapped network to ensure it remains air gapped, and deciding how you will respond if that air gap is breached.  There are a number of ways to address this issue, so pick the solution that best fits your environment.
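There are commercial tools for this, but as a simple illustration of the idea (my own sketch, not a product recommendation), a script on one of the isolated workstations can periodically verify that the outside world is still unreachable and raise an alarm if it is not:

```python
import socket
import time

# External canaries that should NEVER be reachable from the isolated
# network; a successful connection means the air gap has been breached.
CANARY_HOSTS = [("8.8.8.8", 53), ("1.1.1.1", 53)]

def air_gap_intact() -> bool:
    for host, port in CANARY_HOSTS:
        try:
            socket.create_connection((host, port), timeout=3).close()
            return False  # connection succeeded: the gap is breached
        except OSError:
            continue      # unreachable, exactly as it should be
    return True

while True:
    if not air_gap_intact():
        # In practice: page someone and log the event locally.
        print("ALERT: isolated network can reach the internet")
    time.sleep(300)  # check every five minutes
```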

Isolation is not a perfect solution.  It will likely require a number of compensating control worksheets (CCWs) to address the fact that you have a number of “antique” systems around.  So be prepared for that work effort, as it will likely not be small.

26 Nov 2018

Email And PCI Compliance

This is a question we got from the recent PCI Dream Team session.

“If you receive emails with CHD and store them for a defined period — does the exchange infrastructure come in to scope? What are the suggested methods to descope apart from not receiving CHD via emails.”

By definition, if an application processes, stores or transmits sensitive authentication data (SAD) or cardholder data (CHD), it is in scope for PCI compliance.  The ONLY way to remove an application from PCI scope is to NOT process, store or transmit SAD/CHD.  So that should answer the questions presented.

With the question answered, I have written about email before, but I thought I would provide some additional guidance now that a lot of organizations are outsourcing their electronic mail (email) to providers such as Microsoft, Google and others.

Outsourcing email has become all the rage of late because it takes dealing with email off of IT’s plate.  IT people hate email because it is a huge operational pain with all of the problems it creates.  Not only does it typically take a lot of servers to operate, most organizations also need a hot failover solution to keep their business operations uninterrupted.  Never mind the fact that it is a problematic application that end users seem to mess up often.  Because of this, most IT operations look to a third party to deal with email and get it off their backs.

Over the years I have heard all of the business arguments as to why organizations need to use email for communications, particularly payments.  The most common is that it makes for easy communication with customers because everyone knows how to use it.  Add in file transfer, electronic facsimile delivery, voice messaging, unified communications and its ease of use, and it is just too good not to use.  Talk about a business case that appears to be beyond reproach.

Here are the problems with email when it comes to PCI compliance.

The first problem, and it is HUGE, is that there is no way for an organization to obtain PCI scope reduction with email in scope.  By definition, an email solution that contains SAD/CHD is in the cardholder data environment (CDE).  You want everything in scope?  Well, you have got it, because any workstation that uses email is at a minimum a “Connected To” system and at worst a CDE system if the end user processes the messages that contain SAD/CHD.  The bottom line is that your organization will not achieve any meaningful scope reduction with email in scope, because email brings every workstation in the organization into scope.

The second problem with email in scope is that it provides no real way of securing the information stored in the system.  Yes, inboxes can be individually encrypted, but it is trivial to work around that encryption and gain access to the messages, particularly if it is a shared or group inbox.  As a result, there is no way to effectively comply with the requirements in section 3 regarding the encryption of CHD at rest.

Never mind the fact that you have to do something about redacting SAD if it is in messages.  That is because once a transaction has been conducted, you are no longer allowed to store SAD.  Redaction becomes hugely problematic in email systems because of where the data could have been forwarded unbeknownst to the original recipient, as well as which email clients it exists in.  This whole situation gets significantly worse if your organization must also comply with the European Union’s General Data Protection Regulation (GDPR).

The third problem is with requirement 4.2 that states:

“Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, SMS, chat, etc.).”

This means that ALL EMAIL MUST BE encrypted at all times, including internal and external message transmissions.  While this is easily accomplished for internal users, it becomes problematic for external users, who will have to use either: (1) PGP or a similar public key infrastructure (PKI) solution, or (2) a solution provided by your organization, such as Proofpoint or similar, to ensure secure message delivery.  I can personally attest to the fact that when I have brought up using PGP, Proofpoint or similar for secure communications, I have heard nothing but complaints from users about how difficult it is to use.  All of a sudden, ease of use goes out the window.
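As a rough sketch of the PGP option, using the python-gnupg wrapper and assuming the recipient’s public key has already been imported into the local keyring (the addresses and mail server are placeholders):

```python
import smtplib
from email.message import EmailMessage

import gnupg  # pip install python-gnupg; requires the gpg binary

gpg = gnupg.GPG()  # uses the default local keyring

# Encrypt the body to the recipient's previously imported public key.
encrypted = gpg.encrypt("Payment details follow...", "recipient@example.com")
if not encrypted.ok:
    raise RuntimeError("encryption failed: " + encrypted.status)

msg = EmailMessage()
msg["From"] = "ap@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Secure message"
msg.set_content(str(encrypted))  # ASCII-armored PGP block as the body

with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)
```

Every external recipient needs a key pair and a client that can decrypt, which is precisely where the complaints start.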

But outsourced email is the final nail in the coffin for PCI compliance.  When you outsource your email to Microsoft, Google or other public cloud providers, they will tell you that their email solutions are NOT PCI compliant and NEVER WILL BE PCI compliant.  Worse, they will not allow you to assess their email hosting environment as part of your own PCI assessment.  As a result, there is no way to comply with the requirements in 12.8, nor with the card brand requirements of only working with PCI compliant service providers.  Therefore, there is no way to obtain a compliant PCI Report On Compliance (ROC) or self-assessment questionnaire (SAQ).

But what about compensating controls?

Any effort to create compensating controls is a giant bottomless rabbit hole.  You will chase your tail forever trying to come up with ways to compensate for controls that cannot be compensated.

In the end, while email is a great tool with excellent ease of use, it is a tool that will not easily lend itself to PCI compliance.  Only bring it into PCI scope if you absolutely have no other choice.  Otherwise, avoid having it in scope like the plague.

07 Nov 2018

One Last Time On Disaster Recovery

I have written three posts on this topic, yet it still comes up.

Here are the Cliff Notes from those posts.

Hot sites are always in scope for PCI compliance because they can support failover on demand.

Cold sites are never in scope for PCI compliance because there is nothing there that would be in scope.

Warm sites are only in scope if they have cardholder data (CHD) processed, stored or transmitted from that site.

There are nuances with all of this, so if you want more information, read the three posts.

03 Nov 2018

Open Source

One of the questions we received at the last PCI Dream Team session was:

“What about open source for 6.5?”

I am sure the person asking wanted to know whether open source payment solutions must comply with the PCI DSS requirements in 6.5.x.

The quick and simple answer is, of course, ‘Yes’!  Why would it not?  It is source code after all, so it must comply with the requirements in 6.5.x (as well as other requirements in section 6 and throughout the PCI DSS).  The PCI DSS does not differentiate between different sources of application code.  For PCI compliance purposes, code is code is code, regardless of the source.

Now what does come into play is whether or not the PA-DSS validation standard applies to an application.  As PA-DSS relates to open source, I wrote about that over eight years ago, but it is still relevant today.  For the purposes of this post, I am not talking about PA-DSS validated applications.

The next question a QSA typically gets is, “Well 6.5 only applies to internet-facing payment applications, right?”

Wrong!  Any payment application needs to meet the requirements in 6.5.x, whether it is internet-facing or internal-facing.  It also does not matter whether a browser is involved, although a significant number of the requirements in 6.5.x relate to browser-based applications.

But ensuring open source is PCI compliant goes beyond just 6.5.x.  There are other requirements that, at a minimum, must be applied as well.  Not every requirement in a section or group of requirements may apply, but some will need to be covered depending on how the application works.

  • Section 3 related to encryption of stored data and encryption key management;
  • Section 4 related to encryption of communications;
  • Requirements 6.1 and 6.2 for patching and vulnerability management. This can become problematic for open source because, as time goes on, applications can develop vulnerabilities that the developer community does not address.  That is most likely because the community has moved on and your application has become an orphan (see the vulnerability lookup sketch after this list);
  • Requirements 6.4 for application development. Remember, even though your organization did not develop the application, if it is not PA-DSS validated then it is your responsibility to ensure the code securely processes, stores or transmits sensitive authentication data and/or cardholder data;
  • Requirement 6.6 is also in play regardless of whether or not the application is browser-based. At a minimum, code reviews must be performed.  If the application is browser-based, then you can add in a Web application firewall (WAF) for additional security;
  • Sections 7 and 8 related to access control and user management; and
  • Section 10 related to application log data.
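For requirements 6.1 and 6.2, here is a sketch of the kind of check involved, querying the public OSV.dev vulnerability database (the package name and version are just examples; adapt them to the open source components you actually use):

```python
import json
import urllib.request

def known_vulnerabilities(name: str, version: str,
                          ecosystem: str = "PyPI") -> list:
    """Query OSV.dev for published vulnerabilities affecting a dependency."""
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Example: list advisory IDs for an old Django release.
for vuln in known_vulnerabilities("django", "3.2"):
    print(vuln["id"], vuln.get("summary", ""))
```

If the list keeps growing and the community never ships fixes, you have your orphan.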

Remember, every time a new release of your open source solution becomes available, you have to go through all of this all over again if you intend to use the new release.

So those of you thinking that you can somehow leverage open source to reduce your PCI compliance footprint, think again.  All you have done is outsourced the development of your solution.  The rest is still on you.  In the end, it is really not much of a savings.

29 Oct 2018

Virtual Payments

Virtual payments are becoming more and more prevalent outside of the insurance industry as companies realize the convenience of paying virtually.  As a result, more business-to-business (B2B) purchases are being paid for via virtual payments.  It also became obvious at our latest PCI Dream Team session that virtual payments need to be better explained to people so that they understand how they work and their responsibilities for security.

Definition

Technically, virtual credit cards have existed for a while.  Businesses have had “virtual” credit cards for making airline and hotel reservations, purchasing office supplies and paying other business expenses for decades.  The cards do not physically exist (originally they did, but that was seen as a security risk), but the business’ accounts payable department has a virtual card with a PAN, card verification code and expiration date issued by Visa, MasterCard or American Express for paying merchants for goods or services.

A virtual payment (or virtual credit card) is essentially the same as a regular, physical credit card, with the following exceptions (a sketch of the resulting authorization logic follows the list).

  • The primary account number (PAN) can only be used once to make a payment. If you mess up either of the next two criteria, that does not count as a ‘use’.  That said, even if everything is correct and the payment is declined, you will have to contact the organization that generated the virtual payment to get a new one created.  Also, be careful with a virtual card PAN, as some processors may generate a PAN that will not Luhn check.
  • Only the merchant defined on the virtual payment can use the payment. For example, if the merchant on the virtual payment is defined as ‘ABC Company’, only ABC Company can submit the transaction for payment.
  • The payment must total exactly to the total authorized on the virtual payment. For example, if the virtual payment is for $1,252.98 USD, then the merchant can only submit a charge for $1,252.98 USD for payment.
  • Virtual payments are flagged as being virtual. So, if someone were to copy the information and put it on a physical card to use physically at a retail outlet, the card would be declined.
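Putting those exceptions together, the processor-side authorization logic amounts to something like this sketch (the data structure and checks are my own illustration of the rules above, not any processor’s actual code):

```python
from dataclasses import dataclass

@dataclass
class VirtualPayment:
    pan: str                 # single-use PAN generated by the processor
    merchant_name: str       # only this merchant may submit the charge
    amount_cents: int        # the charge must match exactly
    used: bool = False
    is_virtual: bool = True  # flagged so physical card use is declined

def authorize(vp: VirtualPayment, merchant: str, amount_cents: int,
              card_present: bool) -> bool:
    """Approve a charge only if every virtual payment rule is met."""
    if vp.used:
        return False  # single use only
    if vp.is_virtual and card_present:
        return False  # virtual PANs decline at a physical retail outlet
    if merchant != vp.merchant_name:
        return False  # wrong merchant
    if amount_cents != vp.amount_cents:
        return False  # must match to the penny
    vp.used = True
    return True

vp = VirtualPayment("4111111111111111", "ABC Company", 125298)
print(authorize(vp, "ABC Company", 125298, card_present=False))  # True
print(authorize(vp, "ABC Company", 125298, card_present=False))  # False
```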

How Do Virtual Payments Work?

Virtual payments are typically created by transaction processors such as Chase Paymentech, Elavon or Worldpay, although there are a number of independent sales organizations (ISOs) and others that have affiliated themselves with processors to also generate such payments.

A lot of accounts payable software solutions now provide connections to transaction processors’ APIs for the generation of virtual payments to pay bills.  You will have to check with the application vendors to determine whose virtual payment solutions their applications support.  But the original method of using a Web browser to access the processor’s virtual payment application is still available as well.
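No processor’s API is a public standard, so the following is a purely hypothetical illustration of what an accounts payable integration might look like (endpoint, fields and credentials are all made up):

```python
import requests  # pip install requests

# Hypothetical endpoint; real processors issue their own URLs and keys
# when you sign up for virtual payment services.
PROCESSOR_URL = "https://api.example-processor.com/v1/virtual-payments"

def create_virtual_payment(api_key: str, merchant: str, amount_cents: int,
                           valid_days: int = 30) -> dict:
    """Ask the processor to generate a single-use virtual payment."""
    resp = requests.post(
        PROCESSOR_URL,
        headers={"Authorization": "Bearer " + api_key},
        json={
            "merchant_name": merchant,
            "amount_cents": amount_cents,
            "valid_days": valid_days,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # The response would carry the PAN, expiration date and card code.
    return resp.json()
```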

An organization must sign up for virtual payment services; it is not something you can just access.  It is the responsibility of the organization to manage the users that have the ability to generate virtual payments, as well as to establish the minimum/maximum transaction amounts, the time payments are valid (typically 30 to 90 days) and other payment criteria.  The solution may also specify the merchants that can be paid through it.  Once set up, an organization can then generate virtual payments to pay its bills.

One very important step before you start generating virtual payments is to ensure that the organizations you are paying will accept virtual payments or payment cards.  Just because an organization has retail outlets that accept payment cards does not mean that its commercial operations also accept them.  As such, you need to contact the accounts receivable department at the organizations you intend to pay with a virtual payment to confirm that they will process such payments, as some organizations cannot or will not.  Also use this as an opportunity to confirm you have the correct name of the organization (as it appears when they process card payments), the correct facsimile number, the correct email address (I recommend you get both just in case) and the preferred method of sending the virtual payment (i.e., facsimile or email).  Keep in mind it is not your problem to worry about the payee’s PCI compliance in how they handle your payment.  That is their problem, not yours.

When a virtual payment is generated, it is typically sent to the payee via facsimile.  However, I have also heard that some processors can send the information via secure email services such as Proofpoint or Mimecast.

If you are accepting virtual payments, you need to be aware of the PCI compliance issues with facsimile.  The problem is that a lot of organizations have implemented facsimile services such as HelloFax, MyFax or eFax, where facsimile messages are automatically delivered to users via corporate email.  Such a solution brings an organization’s email system into scope for PCI compliance.  As a result, if your organization will accept virtual payments, it is important that those facsimile transmissions are sent to a secure physical facsimile machine located in the area where the payments will be processed.  I have some clients that use secure printing solutions for their facsimiles, where the user has to use their building HID card to release the output on any printer.

Secure email solutions hold the message for the payee to retrieve from the secure email Web site via a browser.  The secure email solution will send you a notification that you have received a secure message, along with a link to that message.  Once you are in the secure email solution, it is up to your organization to maintain the security of the message and the SAD sent to you.  So, no forwarding the message to your own email system.  No storing message attachments (likely containing the SAD) on a PC or network drive.  Print the message and/or attachments to a physical printer and process the payment from those printouts.

SAD Is SAD – CHD Is CHD

As I said earlier, virtual payment messages contain SAD in the form of the card verification value, in addition to the PAN, expiration date and cardholder name, which are cardholder data (CHD).  Just because we are talking about virtual payments that can be used only once does not mean the information can be treated any differently than the same information from a physical payment card.

That said, Visa and MasterCard have their own view of virtual payment information security.  As David Mundhenk reminded everyone on our latest Dream Team session, the card brands also have their own rules in addition to the PCI standards.  So, it is important for everyone to look at the card brands’ rules as well as the PCI standards when dealing with SAD/CHD.  That means not only their security programs, but also their respective Merchant Agreements and asking them questions when you cannot find the answers in any of their official documents.

In the case of virtual payments, Visa and MasterCard differ on the security of virtual payment information.  Unfortunately, you would not know that unless you asked each of the brands, because their security programs and merchant agreements do not address the subject.  For the record, American Express and JCB do not have an opinion on the subject.  Obviously SAD is SAD before it is used to process the payment; where the difference comes is after the payment is processed.

Visa wants the information protected even after the payment is processed.  They demand that it be securely destroyed after the payment is processed, even though the information is single use.  MasterCard, I kid you not, said on a call that if my client wanted to post the printed facsimile on a utility pole out in public, that was okay with them because the information could never be used again.  Talk about two polar opposite approaches.  As a result, I recommend following Visa’s recommendation and securely destroying the original message or attachment.  If for whatever reason you need to keep the payment document, securely redact the information, take a copy of the redacted original for your records and then destroy the redacted original.

That is what you need to know about virtual payments.

12 Oct 2018

The Requirement 3.2.1 – 3.2.3 Not Applicable Debate

When v3.2 of the ROC Reporting Template came out the QSA/ISA community noticed that requirements 3.2.1 – 3.2.3 could no longer be marked as ‘Not Applicable’.

The rationale the Council gave when they explained why they disallowed ‘Not Applicable’ for these requirements is that they wanted QSAs/ISAs to have to explain what procedures they had followed to confirm that organizations were not storing sensitive authentication data (SAD) in the form of track data, card verification values or PIN blocks.

The push back from QSAs and ISAs was to ask how that was relevant when an organization’s card processing could not come into contact with such information, as when P2PE has been implemented.

The Council has long stated that for Level 1 merchants that have, for example, implemented a P2PE solution, they should follow the requirements in SAQ-P2PE to fill out their ROC and mark any requirements not in the SAQ-P2PE as “Not Applicable.  The merchant uses a P2PE validated solution and the requirement is not relevant.”

This Council guidance resulted in a question at the 2016 Community Meeting Assessor Session: “How do you do that for requirements 3.2.1 – 3.2.3 when they cannot be marked ‘Not Applicable’ and do not appear in SAQ-P2PE?”  “Good question.  We will have to get back to you,” the Council told attendees.

Well, here we are two years and a new version later and these requirements still cannot be marked as ‘Not Applicable’.  A number of people texted me at this year’s Assessor Session to bring this issue up again, but I was tired of arguing and just let it go.

The more I have thought about it, the more I regret not bringing this issue up because it needs to be addressed.

So, if someone attending the Assessor Session at the European or APAC Community Meeting would like to bring this question up, I would appreciate it as would a lot of the QSA/ISA community.

08 Jul 2018

Can I Use SSAE 18 SOC 2 Reports? Part 2

In the last post I discussed what the SOC reports are and what, in general, to look for in a SOC 2/3 report.  Now I want to take you through a more detailed analysis of SOC reporting so that you can understand why it might not give you the result you desire and how to address that fact.

How Do I Analyze The SOC Report?

Based on the testing in the ‘Information Provided by Independent Service Auditor’ section, you are going to need to map that testing into the PCI ROC tests, if they even fit.  I typically use the Prioritized Approach spreadsheet to do this as it provides a way of documenting the requirements covered and a quick dashboard regarding what is covered.

As you reviewed the domains listed under the SOC 3 report, I am sure you thought, “What is not to like?  It looks like most of what I need for PCI is covered.”  But you would be wrong.  You will find, after you map the controls from any SOC 2 report that covers all the TSP domains into the Prioritized Approach, that the report will likely only cover around 20% to 25% of the PCI DSS requirements.  That is because the SOC tests are just not as detailed as those in the PCI DSS.  As a result, SOC 2 reporting does not provide the kind of reliance you need to respond to all of the relevant PCI DSS requirements.
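Mechanically, the mapping exercise is simple bookkeeping even if the judgment calls are not.  A sketch of the tally (the requirement numbers and SOC control references are illustrative, not from any actual report):

```python
# Map PCI DSS requirements to the SOC 2 control (if any) that addresses
# them; None means the SOC testing gave no usable coverage.
soc_coverage = {
    "1.1.6": None,      # firewall rules not tested in enough detail
    "6.4.2": "CC8.1",   # change management tested, but shallow
    "8.2.3": None,      # password length/complexity not specified
    "10.6.1": "CC7.2",  # monitoring tested, daily review not confirmed
    # ... one entry per applicable PCI DSS requirement
}

covered = sum(1 for ctrl in soc_coverage.values() if ctrl is not None)
pct = 100 * covered / len(soc_coverage)
print(f"SOC 2 covers {covered} of {len(soc_coverage)} requirements ({pct:.0f}%)")
```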

For example, while a SOC report will likely test that password controls are in place, you will be unable to ascertain whether the organization enforces seven-character or greater password lengths, password complexity, or whether passwords must be changed every 90 days or less, let alone whether the special requirements for vendor password management are enforced.  It is these missing details that create the coverage problems with relying on SOC reporting results.

The same can be said for change management.  The SOC report will likely say a lot about change management, but not at the level of detail required by the PCI DSS for the requirements under 6.4.  You will also find that coverage of requirements 1 and 2 regarding network and server configurations lacks the specificity needed to meet the PCI DSS testing.

Now, as a QSA, you have a decision to make.  Can you accept only 20% to 25% coverage of the PCI DSS requirements as evidence of PCI compliance?  I know I cannot.  I need much more to work with before I can get comfortable that a SOC report provides the necessary coverage for PCI compliance.

Now What?

You and your client have expended all this effort and are no closer to the result desired than when this process started.

So, what to do next?

Work with the service providers that give you SOC reports to include testing that adds the PCI DSS details that are missing.  There will likely be some push back from these service providers, because adding testing to their SOC reports will cause the cost of those reports to increase, sometimes significantly.  So be prepared for it.

What you need to do is to have their auditors add the necessary testing details to the description of controls and then have them test that they are in place.  Examples include:

  • Password length, complexity, change frequency and the procedures followed to perform a password reset.
  • Details surrounding privileged and general user management including provisioning, management approvals, users are implemented with least privilege and users are disabled or removed when terminated.
  • Changes tested for segregation of duties between developers and operations; segregation of test, QA and production environments; production data not used for testing; developers not having unrestricted access to production; test data and accounts removed before applications are promoted to production; and changes documenting their impact, appropriate authorization, testing, vulnerability assessment and backout procedures.
  • If encryption is used to protect data, document the algorithms used, whether key custodian agreements are in place, whether split key processes are in place when performing manual key management, whether a hardware security module (HSM) is used and whether keys are changed when their crypto-periods expire or they are believed to be compromised.
  • Document the configuration standards that are followed by device classes such as firewalls, switches, servers and test that they have been implemented.
  • Document that anti-virus is implemented on systems commonly affected by viruses and malware, which anti-virus solution is implemented, that the anti-virus solution cannot be disabled and that it is actively running on all systems where it is installed.
  • Document that vulnerability scanning is performed, how often scanning is performed and that vulnerabilities are remediated.
  • Document that penetration testing is performed, how often penetration testing is performed and that findings are remediated.
  • Document that log data is collected from all devices, it is reviewed at least daily and that it contains a date/time stamp, device name, type of log entry and other relevant information.

There are a lot of other areas that could be added to the SOC report, but these are, in my opinion, the bare minimum that need to be added to make the SOC report more relevant for PCI.  I am trying to balance the amount of additional information needed versus the cost of providing it in the SOC report.

Will adding all of this cover all of the gaps between SOC and PCI?  No.  But it should give your QSA significantly more comfort that the controls in place meet PCI requirements than what is currently being provided by CPAs.

04 Jul 2018

Can I Use SSAE 18 SOC 2 Reports? Part 1

This is a common question that QSAs encounter from clients.  The client has an SSAE 18 Service Organization Controls (SOC) report from one of their service providers and wants to know if they can use it to satisfy any or all of the requirements in 12.8, 12.9 and 12.11 related to vendor management.

The biggest caveat in this discussion is that the PCI SSC does not sanction the use of any report other than a PCI Attestation Of Compliance (AOC) and/or a PCI Report On Compliance (ROC), in addition to the other PCI reports.  The Council has repeatedly stated that if a QSA chooses to rely on an SSAE 18 SOC 2 report (or any other compliance report for that matter), the QSAC and their client accept the risk if the SSAE 18 SOC 2 does not cover what the QSA claims it covers and relies upon for fulfilling PCI ROC requirements.  As a result, most QSAs will not accept an SSAE 18 SOC 2 report (or any other non-PCI compliance report) for any reason.

For those of us “recovering” certified public accountant (CPA) types that have conducted SSAE 18 audits, we know how to read and interpret these reports.  As a result, when we are asked whether SSAE 18 SOC 2 reports are relevant, our answer is, “It depends on what the SOC 2 covers and how it was tested.”

Before we get too deep into this discussion though, we need to define the terminology surrounding this topic.  The first thing is that SSAE 18 replaced SSAE 16 as of 2017 even though nothing else appears to have changed.  The next key thing anyone needs to know about SSAE 18 is that there are three reports that can come from this reporting series: SOC 1, SOC 2 and SOC 3.

The first, SOC 1, is for financial auditors only.  Years ago it was called a SAS 70.  It is a report focused on the financial controls that an external auditor relies upon to ensure that the financial numbers coming from the third party can be trusted in the annual audit of their client.  Yes, SOC 1 reports can cover security controls, but only in regard to financial systems, not necessarily the third party’s entire environment.  In addition, the control coverage is typically not as deep as required for PCI compliance.  The bottom line is that reliance on a SOC 1 report outside of financial systems should never be assumed.

I am going to cover the SOC 3 report next because it covers all of the security domains.  The SOC 3 report (also sometimes referred to as the ‘SysTrust’ report) covers the following domains:

  • Organization and Management – The criteria relevant to how the organization is structured and the processes the organization has implemented to manage and support people within its operating units.
  • Communications – The criteria relevant to how the organization communicates its policies, processes, procedures, commitments, and requirements to authorized users and other parties of the system and the obligations of those parties and users to the effective operation of the system.
  • Risk Management and Design and Implementation of Controls – The criteria relevant to how the entity (i) identifies potential risks that would affect the entity’s ability to achieve its objectives, (ii) analyzes those risks, (iii) develops responses to those risks including the design and implementation of controls and other risk mitigating actions, and (iv) conducts ongoing monitoring of risks and the risk management process.
  • Monitoring of Controls – The criteria relevant to how the entity monitors the system, including the suitability, and design and operating effectiveness of the controls, and takes action to address deficiencies identified.
  • Logical and Physical Access Controls – The criteria relevant to how the organization restricts logical and physical access to the system, provides and removes that access, and prevents unauthorized access to meet the criteria for the principle(s) addressed in the engagement.
  • System Operations – The criteria relevant to how the organization manages the execution of system procedures and detects and mitigates processing deviations, including logical and physical security deviations, to meet the objective(s) of the principle(s) addressed in the engagement.
  • Change Management – The criteria relevant to how the organization identifies the need for changes to the system, makes the changes following a controlled change management process, and prevents unauthorized changes from being made to meet the criteria for the principle(s) addressed in the engagement.

There are also some additional considerations related to Confidentiality specified in the Trust Services Principles and Criteria (TSP), but those are not required to be covered in the SOC 3 report.

Finally, there is the SOC 2 report.  The SOC 2 report uses the same TSP as the SOC 3, but with a twist: the third party can select any or all of the seven domains to be assessed.  Think of it as a “cafeteria style” assessment.  With the SOC 2, the AICPA does not require that all domains be covered (as it does with the SOC 3); the assessed entity can select only those domains they wish audited.  As a result, a third party could select only the ‘Organization and Management’ domain to be assessed and nothing else in their SOC 2 report.  Therefore, just because you have a SOC 2 does not mean it covers the domains necessary for your PCI assessment.  Like the SOC 3, in addition to the seven domains, the SOC 2 can also cover none, any or all of the additional considerations documented in the TSP.

Within each of these SOC reports there is a Type I and a Type II report.  A Type I report is basically worthless from a reliance perspective because no testing of the controls is ever performed.  With a Type I report, the auditor is signing off on the fact that the third party has controls defined and formally documented.  But without testing, there really is no point to this report.  Yet every now and then, I encounter a Type I report that an organization has relied upon for years.

The only report worth anything is a Type II report which tests the control environment to ensure that the controls are functioning as designed.  So, when you get that SOC 2 report, you need to make sure you have a Type II report where testing has been performed by the auditor.  Even then though, the report might not be as useful as you might think.

I Have A SOC 2 Type II Report From A Service Provider

While you want to read the whole report in detail, when I am pressed for time and cannot read it in its entirety, here is where I focus so that I can get a quick view of what I have.  Some CPA firms provide a one-page Executive Summary that gives the reader a quick overview of the report, provides the timeframe the report covers, opinion, exceptions and other useful information.  But that is not required by the AICPA so you cannot always rely on such an overview being in every report you receive.  When they are available, they can help you focus your quick review efforts even better.

The first thing to do is to read the auditor’s opinion which should be the first section of the report.  It is in the form of a letter on the auditor’s letterhead and signed by the auditing firm.  The opinion the auditor provides will be either:

  • Unqualified – no material control weaknesses or failures were identified.
  • Qualified – some material control weaknesses or failures were identified.
  • Adverse – significant control weaknesses or failures were identified.

An unqualified opinion is what all organizations desire and what most reports document.  But do not be fooled by an unqualified opinion.  There still could have been control weaknesses or failures identified but they did not rise to the level of being considered “material”.  I have seen some unqualified reports with control weaknesses that I would have considered material as their auditor, so you might still want to contact the organization to get clarification on any weaknesses identified.

A report with a qualified opinion is not the end of the world, but that will all depend upon what control weaknesses or failures created the qualification.  Someone misusing their access can be minor compared to not performing backups of servers for months.  As a result, you need to read each control weakness to determine the criticality of the control failure as well as review management’s responses to how they addressed or will address the failure.  Again, you may find yourself contacting the organization to clarify weaknesses documented.

In my experience, reports with an adverse opinion never get issued to the public.  Management sees all of the control failures and weaknesses and then embarks on the long arduous task of cleaning up their control environment.

The next section to look at is the one labeled ‘Information Provided by Independent Service Auditor’ or similar.  This is the section that contains the testing results and defines which domains were covered, as well as the timeframe the report covers.  Most organizations issue SOC reports annually, so you always want to make sure that you have the most current report.  If the coverage end date is within three months of being a year old or more, you should contact the third party and ask when the next report will be issued.  They should tell you that the new report is in progress and give you an estimated issue date.  If they do not give you a succinct answer, I would be concerned.

You need to go through this section looking at a couple of things.  The first is which domains were covered.  While documenting those domains, you also need to review the testing that was performed and at what level of detail those tests were conducted.  For example, it is not unusual to see change control tests that cover five random changes but confirm only that the changes were approved, not that they had appropriate documentation, backout instructions and testing.  At some point you will need to read this section carefully to determine what, if anything, covers the testing required by the PCI DSS.  But a quick perusal will usually give you an idea of what you are likely to get out of the SOC 2 for PCI compliance, if anything at all.

This leads to the next section of the report you should read.  The last section of all SOC reports is usually titled ‘Supplemental Information Provided By [Organization Name]’.  This section contains information that was provided by the entity being audited but is not covered by the auditor’s opinion.  There can be all sorts of information presented here but the important point to remember is that the auditor did not test or assess the accuracy of that information.  So, you need to take any information provided in this section with a bit of skepticism.

It is in the Supplemental Information section that you want to look for a sub-section titled ‘Management’s Response to Control Exceptions’ or similar.  Even when an organization has an unqualified opinion, there can still be items listed in this section.  If there are items listed, you want to carefully read what those items were and how management addressed or corrected the condition.  If you find any control issues and responses that concern you, you should contact the entity and get those discussed so that you are comfortable with the situation.  If you cannot get comfortable with the situation, then you may want to consider additional controls at your end to compensate for the control weakness with the third party.

In the next post I will take you through a more thorough review of the SOC report.

17 Mar 2018

Can Every Requirement Be Met With A Compensating Control?

“In theory, theory works.” – Jeff Hall

Some years back, the PCI SSC stated at the Community Meeting that every PCI DSS requirement could be addressed by a compensating control worksheet (CCW).  A rather broad statement, but it started a bunch of us in the PCI community thinking, “Is that really the case?”

Before reading this post, I highly recommend reading my post on writing CCWs so that you can fully appreciate why not every requirement can be met by a CCW.

That said, it turns out that there are a lot of requirements for which there is no way to develop a CCW.  Here are just a few examples.

1.1.2 – Network Diagram(s) and 1.1.3 – Data Flow Diagram(s)

What would be the mitigating controls here?  There are none, because diagrams are diagrams.  There is nothing you can do to compensate for missing diagrams other than provide them.

1.1.6 – Firewall Rules

As with 1.1.2 and 1.1.3, what could possibly serve as a mitigating control?  If the firewall rules are not able to be reviewed, there is nothing you can rely upon to go above and beyond the control.

I have had people suggest that the QSA could rely on Nmap and vulnerability scans of the firewalls.  But that does not necessarily confirm all of the ports/services configured on the firewall, nor does it confirm that the devices using those ports are the same ones in scope for PCI compliance.

1.2.3 – Wireless Networking

QSAs have repeatedly been told that this requirement can never be marked as ‘Not Applicable’.  The QSA is required to explain how they confirmed that wireless was either in or out of scope.  But can you create a CCW for this requirement?

The controls that you need to assess to meet this requirement are the same controls you would have to use in the CCW for mitigation.  So, if you have to document and evaluate the controls regardless, why would you bother to write a CCW?  You would not.  You would document and meet the requirement and move on.

3.2 – No Storage of SAD

This is the requirement that started the whole CCW debate.  When the PCI DSS was originally issued, QSAs were trained that this requirement could NEVER, EVER have a compensating control.  But that changed when the Council issued their proclamation a few years back.  But is that really the case?

Remember, a CCW must go above and beyond the intent of the original requirement.  3.2 also states in a note that SAD cannot be stored even if encrypted.  Encryption would be the only mitigating control available to an organization that wants to store SAD.  So what replaces encryption if that cannot be used?  Tokenization by a third party would be an option, but if you go that route, you are not storing the SAD, so the discussion becomes moot.

8.3 – Multifactor Authentication

Some form of multifactor authentication (MFA) is required for non-console administrative access to cardholder data environment (CDE) systems and remote access to an in-scope network.  Since the Council has clearly defined MFA and also knocked down multiple logons with different credentials, what is left?  In the end, there is no way around meeting this requirement other than doing what the requirement states.

10.1 – 10.3 and 10.6 – Log Data

Here is another example of where there really is no way to write a CCW.  You are either gathering log data (centrally or on individual systems) or you are not.  You are either reviewing the log data daily or you are not.  Then there is the requirement to send log data from internet-facing devices to an internal device.  No matter how creative you think you are, there are no controls that will mitigate this situation and also go above and beyond.

As I said at the beginning of this post, these are just some of the examples where a CCW is just not going to make it.  So, the next time you think about meeting a PCI DSS requirement by using a CCW, make sure you understand the requirement and that there are controls that will mitigate the risk and go above and beyond the original intent of the requirement.  You will save yourself and your QSA a lot of time and consternation.



