Archive for the 'Requirement 2 – Do not use vendor-supplied defaults' Category

08 Jul 18

Can I Use SSAE 18 SOC 2 Reports? Part 2

In the last post I discussed what the SOC reports are and what, in general, to look for in a SOC 2/3 report.  Now I want to take you through the more detailed analysis of the SOC reporting so that you can understand why it might not give you the result you desire and how to address that fact.

How Do I Analyze The SOC Report?

Based on the testing documented in the ‘Information Provided by Independent Service Auditor’ section, you are going to need to map that testing into the PCI ROC tests, if it even fits.  I typically use the Prioritized Approach spreadsheet to do this as it provides a way of documenting the requirements covered and a quick dashboard of overall coverage.
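To make the mapping concrete, here is a rough sketch of the kind of tally I end up with.  The requirement numbers and the SOC control references below are placeholders for illustration, not the contents of any actual report or of the Prioritized Approach spreadsheet itself.

```python
# Hypothetical coverage tally: which PCI DSS requirements a SOC 2 test arguably
# addresses, and what percentage of the checklist that leaves covered.
pci_requirements = ["1.1.4", "2.2.2", "6.4.1", "8.2.3", "8.2.4", "10.6.1", "11.2.2", "12.8.2"]

soc_to_pci = {                      # assumed mapping of SOC 2 tests to PCI requirements
    "Logical access testing": ["8.2.3"],
    "Change management testing": ["6.4.1"],
}

covered = {req for reqs in soc_to_pci.values() for req in reqs}
pct = 100 * len(covered) / len(pci_requirements)
print(f"{len(covered)} of {len(pci_requirements)} requirements covered ({pct:.0f}%)")
```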

As you reviewed the domains listed under the SOC 3 report, I am sure you thought, “What is not to like?  It looks like most of what I need for PCI is covered.”  But you would be wrong.  You will find after you map the controls from any SOC 2 report that covers all the TSP domains into the Prioritized Approach that the report will likely only cover around 20% to 25% of the PCI DSS requirements.  That is because the SOC tests are simply not as detailed as those in the PCI DSS.  As a result, SOC 2 reporting does not provide the kind of reliance you need to respond to all of the relevant PCI DSS requirements.

For example, while the SOC testing will likely confirm that password controls are in place, you will be unable to ascertain whether the organization enforces seven character or greater password lengths, password complexity, or passwords changed every 90 days or less, let alone whether the special requirements for vendor password management are enforced.  It is these missing details that create the coverage problems when relying on SOC reporting results.
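To put those specifics in one place, here is a minimal sketch of the sort of parameters I am talking about.  The policy values and the check itself are illustrative assumptions; in a real assessment you would pull the actual settings from Active Directory, /etc/login.defs or wherever the provider enforces them.

```python
import re

# Assumed PCI DSS v3.x password parameters (requirements 8.2.3 and 8.2.4)
POLICY = {"min_length": 7, "max_age_days": 90}

def password_meets_pci(password: str) -> bool:
    """Check minimum length plus basic alphabetic/numeric complexity."""
    return (
        len(password) >= POLICY["min_length"]
        and re.search(r"[A-Za-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
    )

print(password_meets_pci("Summer18"))   # True - 8 characters, letters and digits
print(password_meets_pci("short1"))     # False - only 6 characters
```

It is exactly this level of detail that the SOC 2 testing almost never documents.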

The same can be said for change management.  When tested, the SOC report will likely call out a lot about change management, but not at the level of detail required in the PCI DSS for requirements under 6.4.  You will also find that coverage in requirements 1 and 2 regarding network and server configurations will be lacking in specificity to meet the PCI DSS testing.

Now as a QSA, you have a decision to make.  Can you accept coverage of only 20% to 25% of the PCI DSS requirements as evidence of PCI compliance?  I know I cannot.  I need much more to work with before I can get comfortable that a SOC report provides the necessary coverage for PCI compliance.

Now What?

You and your client have expended all this effort and are no closer to the result desired than when this process started.

So, what to do next?

Work with the service providers that give you SOC reports to include testing that adds the missing PCI DSS details.  There will likely be a bit of pushback from these service providers because adding testing to their SOC reports will increase the cost of those reports, sometimes significantly.  So be prepared for it.

What you need to do is to have their auditors add the necessary testing details to the description of controls and then have them test that they are in place.  Examples include:

  • Password length, complexity, change frequency and the procedures followed to perform a password reset.
  • Details surrounding privileged and general user management including provisioning, management approvals, implementation of least privilege and the disabling or removal of users when terminated.
  • Changes tested for segregation of duties between developers and operations; segregation of test, QA and production environments; production data not used for testing; developers not having unrestricted access to production; test data and accounts removed before applications are promoted to production; and changes documenting impact, appropriate authorization, testing, vulnerability assessment and backout procedures.
  • If encryption is used to protect data, document the algorithms used, whether key custodian agreements are in place, whether split key processes are in place for manual key management, whether a hardware security module (HSM) is used and whether keys are changed when their crypto-periods expire or the keys are believed to be compromised.
  • Document the configuration standards that are followed by device classes such as firewalls, switches, servers and test that they have been implemented.
  • Document that anti-virus is implemented on systems commonly affected by viruses and malware, which anti-virus solution is implemented, that it cannot be disabled and that it is actively running on all systems where it is installed.
  • Document that vulnerability scanning is performed, how often scanning is performed and that vulnerabilities are remediated.
  • Document that penetration testing is performed, how often penetration testing is performed and that findings are remediated.
  • Document that log data is collected from all devices, it is reviewed at least daily and that it contains a date/time stamp, device name, type of log entry and other relevant information.
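Taking that last bullet as an example, here is a hedged sketch of the level of testing detail I mean.  The field names and the sample record are assumptions for illustration, not the format of any particular logging product.

```python
# Check that a log entry carries the fields PCI DSS requirement 10.3 expects.
REQUIRED_FIELDS = {"timestamp", "device", "event_type", "user", "outcome"}

sample_entry = {
    "timestamp": "2018-07-08T10:15:00Z",
    "device": "fw01",
    "event_type": "admin_login",
    "user": "jdoe",
    "outcome": "success",
}

missing = REQUIRED_FIELDS - set(sample_entry)
print("complete" if not missing else f"missing fields: {sorted(missing)}")
```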

There are a lot of other areas that could be added to the SOC report, but these are, in my opinion, the bare minimum that need to be added to make the SOC report more relevant for PCI.  I am trying to balance the amount of additional information needed against the cost of providing it.

Will adding all of this cover every gap between SOC and PCI?  No.  But it should give your QSA significantly more comfort that the controls required for PCI are in place than what is currently being provided by CPAs.

04 Jul 18

Can I Use SSAE 18 SOC 2 Reports? Part 1

This is a common question that QSAs encounter from clients.  The client has an SSAE 18 Controls at a Service Organization (SOC) report from one of their service providers and they want to know if they can use it to satisfy any or all of the requirements in 12.8, 12.9 and 12.11 related to vendor management.

The biggest caveat in this discussion is that the PCI SSC does not sanction the use of any report other than a PCI Attestation Of Compliance (AOC), a PCI Report On Compliance (ROC) or other PCI reports.  The Council has repeatedly stated that if a QSA chooses to rely on an SSAE 18 SOC 2 report (or any other compliance report for that matter), the QSAC and their client accept the risk if the SSAE 18 SOC 2 report does not actually cover what the QSA claims it covers when relying upon it to fulfill PCI ROC requirements.  As a result, most QSAs will not accept an SSAE 18 SOC 2 report (or any other non-PCI compliance report) for any reason.

For those of us “recovering” certified public accountant (CPA) types that have conducted SSAE 18 audits, we know how to read and interpret these reports.  As a result, when we are asked about SSAE 18 SOC 2 reports being relevant, our answer is, “It depends on what the SOC 2 covers and how it was tested.”

Before we get too deep into this discussion though, we need to define the terminology surrounding this topic.  The first thing is that SSAE 18 replaced SSAE 16 as of 2017 even though nothing else appears to have changed.  The next key thing anyone needs to know about SSAE 18 is that there are three reports that can come from this reporting series: SOC 1, SOC 2 and SOC 3.

The first, SOC 1, is for financial auditors only.  Years ago it was called a SAS 70.  It is a report focused on financial controls that an external auditor relies upon to ensure that the financial numbers coming from the third party can be trusted in the annual audit of their client.  Yes, these SOC 1 reports can cover security controls, but only in regard to financial systems, not necessarily the third party’s entire environment.  In addition, the control coverage is typically not as deep as required for PCI compliance.  The bottom line is that reliance on a SOC 1 report outside of financial systems should never be assumed.

I am going to cover the SOC 3 report next because it covers all of the security domains.  The SOC 3 report (also sometimes referred to as the ‘SysTrust’ report) covers the following domains:

  • Organization and Management – The criteria relevant to how the organization is structured and the processes the organization has implemented to manage and support people within its operating units.
  • Communications – The criteria relevant to how the organization communicates its policies, processes, procedures, commitments, and requirements to authorized users and other parties of the system and the obligations of those parties and users to the effective operation of the system.
  • Risk Management and Design and Implementation of Controls – The criteria relevant to how the entity (i) identifies potential risks that would affect the entity’s ability to achieve its objectives, (ii) analyzes those risks, (iii) develops responses to those risks including the design and implementation of controls and other risk mitigating actions, and (iv) conducts ongoing monitoring of risks and the risk management process.
  • Monitoring of Controls – The criteria relevant to how the entity monitors the system, including the suitability, and design and operating effectiveness of the controls, and takes action to address deficiencies identified.
  • Logical and Physical Access Controls – The criteria relevant to how the organization restricts logical and physical access to the system, provides and removes that access, and prevents unauthorized access to meet the criteria for the principle(s) addressed in the engagement.
  • System Operations – The criteria relevant to how the organization manages the execution of system procedures and detects and mitigates processing deviations, including logical and physical security deviations, to meet the objective(s) of the principle(s) addressed in the engagement.
  • Change Management – The criteria relevant to how the organization identifies the need for changes to the system, makes the changes following a controlled change management process, and prevents unauthorized changes from being made to meet the criteria for the principle(s) addressed in the engagement.

There are also some additional considerations related to Confidentiality specified in the Trust Services Principles and Criteria (TSP), but those are not required to be covered in the SOC 3 report.

Finally, there is the SOC 2 report.  The SOC 2 report uses the same TSP as the SOC 3 but with a twist.  The third party can select any or all of the seven domains to be assessed.  Think of it as a “cafeteria style” assessment.  With the SOC 2, the AICPA does not require that all domains be covered as it does with the SOC 3; the assessed entity can select only those domains they wish audited.  As a result, a third party could select only the ‘Organization and Management’ domain to be assessed and nothing else in their SOC 2 report.  Therefore, just because you have a SOC 2 does not mean it covers the domains necessary for your PCI assessment.  Like the SOC 3, in addition to the seven domains, the SOC 2 can also cover none, any or all of the additional considerations documented in the TSP.

Within each of these SOC reports there is a Type I and a Type II report.  A Type I report is basically worthless from a reliance perspective because no testing of the controls is ever performed.  With a Type I report, the auditor is signing off on the fact that the third party has controls defined and formally documented.  But without testing, there really is no point to this report.  Yet every now and then, I encounter a Type I report that an organization has relied upon for years.

The only report worth anything is a Type II report which tests the control environment to ensure that the controls are functioning as designed.  So, when you get that SOC 2 report, you need to make sure you have a Type II report where testing has been performed by the auditor.  Even then though, the report might not be as useful as you might think.

I Have A SOC 2 Type II Report From A Service Provider

While you want to read the whole report in detail, when I am pressed for time and cannot read it in its entirety, here is where I focus so that I can get a quick view of what I have.  Some CPA firms provide a one-page Executive Summary that gives the reader a quick overview of the report, provides the timeframe the report covers, opinion, exceptions and other useful information.  But that is not required by the AICPA so you cannot always rely on such an overview being in every report you receive.  When they are available, they can help you focus your quick review efforts even better.

The first thing to do is to read the auditor’s opinion which should be the first section of the report.  It is in the form of a letter on the auditor’s letterhead and signed by the auditing firm.  The opinion the auditor provides will be one of the following:

  • Unqualified – no material control weaknesses or failures were identified.
  • Qualified – some material control weaknesses or failures were identified.
  • Adverse – significant control weaknesses or failures were identified.

An unqualified opinion is what all organizations desire and what most reports document.  But do not be fooled by an unqualified opinion.  There still could have been control weaknesses or failures identified but they did not rise to the level of being considered “material”.  I have seen some unqualified reports with control weaknesses that I would have considered material as their auditor, so you might still want to contact the organization to get clarification on any weaknesses identified.

A report with a qualified opinion is not the end of the world, but that will all depend upon what control weaknesses or failures created the qualification.  Someone misusing their access can be minor compared to not performing backups of servers for months.  As a result, you need to read each control weakness to determine the criticality of the control failure as well as review management’s responses to how they addressed or will address the failure.  Again, you may find yourself contacting the organization to clarify weaknesses documented.

In my experience, reports with an adverse opinion never get issued to the public.  Management sees all of the control failures and weaknesses and then embarks on the long arduous task of cleaning up their control environment.

The next section to look at is the one labeled ‘Information Provided by Independent Service Auditor’ or similar.  This is the section that will contain the testing results and will define which of the domains were covered as well as the timeframe the report covers.  Most organizations issue SOC reports annually, so you always want to make sure that you have the most current report.  If the coverage end date is within three months of being a year old, or is already older than that, you should contact the third party and ask them when the next report will be issued.  They should inform you that the new report is in progress and give you an estimated date the report will be issued.  If they do not give you a definitive answer, I would be concerned.

You need to go through this section looking at a couple of things.  The first is to determine which of the domains were covered.  While documenting those domains, you also need to review the testing that was performed and at what level of detail those tests were conducted.  For example, it is not unusual to see change control tests cover five random changes but verify only that the changes were approved, not that they had appropriate documentation, backout instructions and testing.  At some point you will need to read this section carefully to determine what, if anything, will cover the testing required by the PCI DSS.  But a quick perusal will usually give you an idea of what you are likely going to get out of the SOC 2 for PCI compliance, if you are going to get anything at all.

This leads to the next section of the report you should read.  The last section of all SOC reports is usually titled ‘Supplemental Information Provided By [Organization Name]’.  This section contains information that was provided by the entity being audited but is not covered by the auditor’s opinion.  There can be all sorts of information presented here but the important point to remember is that the auditor did not test or assess the accuracy of that information.  So, you need to take any information provided in this section with a bit of skepticism.

It is in the Supplemental Information section that you want to look for a sub-section titled ‘Management’s Response to Control Exceptions’ or similar.  Even when an organization has an unqualified opinion, there can still be items listed in this section.  If there are items listed, you want to carefully read what those items were and how management addressed or corrected the condition.  If you find any control issues and responses that concern you, you should contact the entity and get those discussed so that you are comfortable with the situation.  If you cannot get comfortable with the situation, then you may want to consider additional controls at your end to compensate for the control weakness with the third party.

In the next post I will take you through a more thorough review of the SOC report.

09 Jun 17

We Need A Change To 2.3.b

I just wanted to give everyone a “heads up” about some guidance we recently received from the PCI SSC regarding jump boxes or out-of-band (OOB) system management solutions and the use of insecure protocols such as SNMPv1/2 and Telnet.

But did everyone know that this solution also requires a compensating control worksheet (CCW)?

For years (at least since the Phoenix Community Meeting years ago), the Council has been recommending the use of firewalls and jump boxes as a way to secure instances where organizations need to use insecure protocols.  These enclaves are firewalled, VLAN’d and configured so that only the jump box can be used to remotely connect to the devices over Telnet, keeping the insecure protocols away from other networks.  However, I do not recall any of those discussions ever explicitly calling out the need for a CCW.  I suppose the Council just figured we would all be bright enough to write one up.

What led me to this revelation you ask?

When I was going through my QSA requalification this spring, there was a scenario with a jump box solution.  One of the questions related to the scenario involved how you would create a CCW for the insecure protocols used in the administrative VLAN to which the jump box provided access.  While I answered the questions correctly, it triggered a new question regarding why a CCW was needed in the first place.

Then when the question was posed back to the Council, we got a reply indicating that a CCW would be required because of requirement 2.3.b which states:

“Review services and parameter files on systems to determine that Telnet and other insecure remote-login commands are not available for non-console access.”

The problem with the requirement is that it treats all Telnet with equal disdain regardless of risk.  Yes, Telnet is always a clear text protocol, but when it is buried two or three layers away from any general network or the internet and requires administrator credentials and MFA, it is hardly as “at risk” as it was when PCI started over 15 years ago and networks were as flat as a piece of paper.
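For what it is worth, the evidence 2.3.b is after is not hard to gather even in a jump box architecture.  Here is a rough sketch of the sort of check I have in mind; the addresses are placeholders and you would run it from a segment that is not supposed to reach the enclave.

```python
import socket

# Assumed addresses of devices inside the firewalled management enclave.
TARGETS = ["10.20.30.11", "10.20.30.12"]

for host in TARGETS:
    try:
        # If TCP 23 answers from here, Telnet is available for non-console
        # access outside the enclave and 2.3.b (or the CCW) is in trouble.
        with socket.create_connection((host, 23), timeout=3):
            print(f"{host}: Telnet reachable from this segment")
    except OSError:
        print(f"{host}: Telnet not reachable from this segment")
```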

As a result, I would like to recommend that the Council work to change 2.3.b to take into account the use of network segmentation, firewalls, VLANs, ACLs, MFA and jump boxes to allow the use of Telnet and other insecure protocols within a properly isolated and secured environment.  It seems silly to me that someone goes through all of the right steps to secure their environment only to be told that they still need a compensating control to meet a requirement that does not reflect the real risk.

The other reason I feel this needs to be addressed is that a lot of banks and processors seem to see CCWs as a huge red flag.  Something to be avoided at all costs because it implies to them non-compliance.  And non-compliance is a “bad” thing.  I cannot tell you the collective hand wringing some banks go through for really simple CCWs all because they do not want to have any PCI assessments with CCWs.

Ultimately I think this all comes down to the fact that those banks and processors have no clue as to the amount of risk any CCW presents.  This is because most banks and processors staff their PCI compliance areas with auditors and compliance professionals, not technicians.  Given that the PCI DSS is predominantly about security technology and its implementation, these auditors and compliance people are not equipped to make the decisions that typically need to be made regarding CCWs.  As a result, all CCWs are high risk in their eyes and treated accordingly.

Hopefully the Council can address this situation and we can avoid needless documentation for a preferred “best practice”.

11 May 16

Heads Up – Changes To SAQ A

I had a question this week regarding v3.2 of SAQ A that pointed out there have been some changes to that SAQ that people may not have noticed given the larger issues with the changes to the PCI DSS.  As a reminder, SAQ A contains the fewest PCI requirements with which any in-scope organization can comply.

Those added requirements to SAQ A are:

  • 2.1(a) – Are vendor-supplied defaults always changed before installing a system on the network?
  • 2.1(b) – Are unnecessary default accounts removed or disabled before installing a system on the network?
  • 8.1.1 – Are all users assigned a unique ID before allowing them to access system components or cardholder data?
  • 8.1.3 – Is access for any terminated users immediately deactivated or removed?
  • 8.2 – In addition to assigning a unique ID, is one or more of the following methods employed to authenticate all users?
  • 8.2.3(a) – Are user password parameters configured to require passwords/passphrases meet the following?
  • 8.5 – Are group, shared, or generic accounts, passwords, or other authentication methods prohibited?
  • 12.10.1(a) – Has an incident response plan been created to be implemented in the event of system breach?

Even when a merchant outsources all of their card processing, these are controls that can still apply because in a lot of cases, a merchant is responsible for setup, management and maintenance of their outsourced payment processes and applications.

In addition, merchant employees will also interact with an outsourced payment system to handle chargebacks and disputes.  The user accounts used to access the outsourced environment will normally be managed by someone at the merchant, not necessarily the service provider.

As for incident response, the merchant will be involved even when they have totally outsourced their payment environment.  The merchant will work with their outsourcer through an incident, and the merchant’s responsibilities need to be documented.

As a result, the addition of these controls should not be a surprise to anyone.

25 Jul 15

Compensating Control Refresher

From time to time, organizations find themselves in the predicament of not being able to meet a PCI DSS requirement due to business or technical constraints. To address that situation, the PCI SSC has provided the compensating control worksheet (CCW) as a way to work around those requirements that cannot be met directly as stated in the PCI DSS. When the CCW was updated back in 2010 for v1.2, I wrote about those changes and how to write a CCW. However, here we are at v3.1, five years down the road and I still see a lot of poorly and improperly written CCWs. As a result, I think it is time to take people through a refresher on the CCW.

First and foremost, the writing of any CCW is your organization’s responsibility. Your QSA can provide input and guidance, but the origination of the CCW is up to the organization. Once developed, your QSA can review it and make suggestions to enhance and improve the CCW. Once that has been completed, you will then want your acquiring bank to review it to ensure that they will accept it as part of your self-assessment questionnaire (SAQ) or Report On Compliance (ROC) filing.

Secondly, the format of the CCW is dictated by the Council and that format is provided in Appendix B of SAQ D or in Appendix C of the ROC. Failure to use the proper format will create issues with your QSA, your bank and with the Council, particularly if you are doing a ROC. So please use the Council-supplied format and do not develop something on your own.

Finally, the PCI SSC has stated that any requirement can have a CCW. In the past, the Council instructed QSAs and ISAs that requirement 3.2 [Do not store sensitive authentication data after authorization (even if encrypted). If sensitive authentication data is received, render all data unrecoverable upon completion of the authorization process] was not allowed to have a CCW. At the 2014 Community Meeting, the Council backed away from that restriction and said that any requirement can have a CCW with no restrictions. However, as a QSA I would have a serious problem accepting a CCW for requirement 3.2 because stopping the storage of sensitive authentication data (SAD) is the whole reason the PCI DSS was created.

To remind everyone, the CCW is broken into seven sections.

  • Identification of the PCI DSS requirement(s) being compensated.
  • The constraint or business justification for needing the CCW.
  • The original objective of the requirement(s) being compensated.
  • Identification of any additional risks because of the CCW.
  • The compensating controls.
  • The procedures your QSA/ISA followed to confirm that the compensating controls are in place and functioning.
  • The procedures followed by your organization to maintain the compensating controls.

While the Council tells everyone to have an individual compensating control for each requirement, there are some places where a compensating control is the same for a number of requirements. This most often occurs for the requirements in section 8 around the various user management requirements, or for 6.1, 2.2 and 11.2 and the processes of vulnerability management. I would highly recommend using one CCW per requirement, but I can understand why you might combine some. Just be judicious in combining them. Also, list not only the number of the requirement(s), but also the text of the requirement from the Requirements column in the PCI DSS. While your QSA might have memorized the PCI DSS requirements, bankers and others that will read the CCW typically have not, and including the text will help them with the context of the CCW.

The business justification needs to be more than just “we don’t want to” or “it was too hard”. Believe it or not, I have had a lot of organizations provide just such simplistic and silly reasons for justifying a CCW. Proper justifications can involve budgetary constraints, timing (e.g., not enough time to complete remediation by the end of the assessment period), application requirements (e.g., the application requires XP to run) and/or vendor requirements (e.g., the vendor requires a hardware upgrade to correct the issue). If you do have a target date for addressing the CCW, this is where you want to provide that information so that readers know that the CCW has some time limit.

The original objective is the easiest part of the CCW to develop. The Council has provided the “Guidance” column in the PCI DSS for each requirement and it is the verbiage in that Guidance column that you should use to explain the original objective of the requirement. If you are using the CCW for multiple requirements, this section can get rather lengthy and I would recommend identifying the Guidance information with its requirement to help understanding of the information.

The next section can sometimes be the toughest to develop and that is the identification of any additional risks created by using a CCW. In some cases, there may actually be no additional risk perceived by using a CCW. One such example is when organizations have a separate system management VLAN where network and system administrators can use telnet, SNMPv2 and other “unsecure” protocols in addition to SSH, RDP and other secure protocols to manage devices/systems. These system management VLANs typically require the use of an out of band (OOB) connection to gain access, administrator credentials different from the administrator’s regular user credentials and two factor authentication, to name just a few of the controls you see in this example. Configured this way, these management/administrative VLANs are no more risky than using only secure protocols.

However, if you are compensating for having to keep Windows XP running, that will likely be a very different story and, depending on the compensating controls put in place, the risk could be moderately higher than not having XP around. The key here is that the risk should be assessed and then honestly discussed in the CCW. If you think you are going to say that having XP does not increase risk to your cardholder data environment (CDE), I would seriously think again regardless of the compensating controls in place, because any outdated Windows implementation is a security problem waiting to happen no matter how you think you have mitigated the risk.

The compensating controls section is where the rubber finally meets the road. It is here that you document each individual control that compensates for your organization’s inability to meet the requirement(s) in question. I recommend that people either bullet point or number list each individual control. The reason is that in the next two sections, you need to tie the validation and maintenance items to the controls in this section and doing some sort of list makes it easy for people to ensure they have covered all controls in each section.

The most common mistake made in this section is organizations stating that they have a project to remediate the issue(s). Sorry, but that is NOT a control. It is nice information, but it is not a control that can be relied upon. QSAs never want to see such statements about future projects in this section. This section is all about what you are doing from a controls perspective to manage the fact that you cannot meet the requirement(s).

Valid controls in this section must also go “above and beyond” what is required by the PCI DSS. Examples of “above and beyond” include:

  • Reviewing log data in real time for a particular condition that would indicate an out of compliance condition on a control. This is above and beyond because log data only needs to be reviewed daily for such conditions.
  • Using whitelisting to identify applications that do not belong on a PC and generating an alert in real time if such applications are found. Such technology is above and beyond because it is not currently required by the PCI DSS.
  • Using critical file monitoring to identify rogue applications that do not belong on a PC and generating alerts in real time if found. Critical file monitoring is a PCI requirement, but this goes above and beyond because monitoring is only required on a weekly basis.

The list here can go on and on, but hopefully I have given you some ideas of how to create compensating controls that can actually compensate for your inability to comply with the requirement(s).
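As a hedged illustration of that first bullet, here is roughly what “real time” review of log data looks like in practice. The log path, the pattern and the alerting are all assumptions; a real control would feed a SIEM and open a ticket rather than print to a console.

```python
import time

LOG_PATH = "/var/log/secure"            # assumed log file location
PATTERN = "authentication failure"      # assumed out-of-compliance condition

def follow(path):
    """Yield lines as they are appended to the file, tail -f style."""
    with open(path, "r") as handle:
        handle.seek(0, 2)               # start at the end of the file
        while True:
            line = handle.readline()
            if line:
                yield line
            else:
                time.sleep(1)

for entry in follow(LOG_PATH):
    if PATTERN in entry:
        # In a real deployment this would raise a SIEM alert for follow up.
        print(f"ALERT: {entry.strip()}")
```

The point is that the alert fires within seconds of the condition occurring, which is what takes the control above and beyond the daily review the PCI DSS requires.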

One key point though is that you cannot use a requirement in the same requirement group to compensate for a different requirement in that group. For example, requirement 6.4 has bunches of sub-requirements under it. You cannot write a compensating control for one sub-requirement in 6.4 and then use a different sub-requirement under 6.4 as one of your compensating controls, regardless of whether it is above and beyond.

The next section will list how the controls were assessed by your QSA/ISA to prove they have been implemented. So using our previous bullet list, here is what the control validation bullets would look like.

  • Observed the system information event management (SIEM) solution and verified that alerts are generated in near real time for [control failure condition] and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.
  • Observed the [whitelisting solution name] and verified that if rogue applications are loaded on a workstation a near real time alert is generated back to the [whitelisting solution name] master console and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.
  • Observed the [critical file monitoring solution name] and verified that if rogue applications are loaded on a workstation a near real time alert is generated back to the [critical file monitoring solution name] master console and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.

Finally, you need to document what your organization will do to ensure that the controls remain implemented and effective. This is where most compensating controls fall apart. The organization gets through their assessment and then neglects to keep the compensating controls working. Using our list from the compensating controls section, the maintenance controls would look something like this.

  • [Organization name] reviews the SIEM on a [weekly/monthly/quarterly] basis and tests that the alerts for the [control failure condition] are still functioning as designed.
  • [Organization name] reviews the [whitelisting solution name] on a [weekly/monthly/quarterly] basis and tests that the alerts for rogue applications are still functioning as designed.
  • [Organization name] reviews the [critical file monitoring solution name] on a [weekly/monthly/quarterly] basis and tests that the alerts for rogue applications are still functioning as designed.

A good idea in the maintenance section is to set timeframes for remediating any control testing failures.

One other important item of note about the controls, validation and maintenance lists. Notice that there are no “forward looking” statements made such as someone “will” perform or “will” review. CCWs must be shown to be in place and operating. A promise of implementing a control is NOT a control either. The control must be shown to be operating and maintained. That is an important point a lot of organizations miss. It means that a CCW cannot be created at the last minute with controls that only become operational after the filing of your SAQ or ROC. If you are going to have to use a CCW, that means you will need to identify the situation early and then get the compensating controls implemented, validated and through at least one maintenance cycle before the CCW can be accepted.

CCWs can buy organizations time while they address issues that will take longer to address than their PCI assessment period. Unfortunately, there are organizations that see the CCW as a way to be judged PCI compliant without addressing their serious security shortcomings. It is not unusual for large organizations to have a number of CCWs particularly if they have legacy applications and hardware. However, I would highly recommend that all organizations only rely on CCWs if there are no other options to achieving PCI compliance.

01 Jan 15

The Three Hop Rule

At the 2014 Community Meeting, the PCI SSC responded to a question about network segmentation with what has come to be termed the “Three Hop Rule”.  The statement was made that if a device/system was “three hops or more” away from the cardholder data environment (CDE), then it was out of scope.  A lot of us in the room were taken aback by this statement.  And based on some questions of late regarding this subject, there is a lot of confusion out there regarding what the Council was trying to say.

First, the term “hop” is not a network security term nor does it even have any security implications.  The term “hop” is defined as:

“Data packets pass through routers and gateways on the way.  Each time packets are passed to the next device, a hop occurs.”

The count of three therefore is the number of hops or “hop count” between devices.  Hop count is defined as:

“Each router along the data path constitutes a hop, as the data is moved from one Layer 3 network to another.  Hop count is therefore a basic measurement of distance in a network.”

Nowhere in these definitions is there any statement that hops, the number of hops between devices, or hop counts correlate to any form of security.  Hence why a lot of us were really concerned about this statement and likely why there is so much confusion and discussion resulting from the comment.

What we believe the Council was getting at was the number of network segments there are between a device/system and the CDE.  However, having three network layers between the CDE and devices/systems is also no guarantee of security.

What provides security at Layer 3 are the access control lists (ACL) or rules that allow or deny packets to traverse particular paths of the network.  ACLs can be implemented to control what devices and/or ports and services can communicate between various networks.  But just because there are ACLs implemented at each hop is also no guarantee that the number of hops between devices also secure the devices.

This is why the requirements in requirement 1 of the PCI DSS require that the QSA review all relevant ACLs to ensure that the network is truly segmented.  It is also why in v3, requirement 11.3 requires that the penetration testing also prove that the network is truly segmented.  As a result, the number of hops between the CDE and a device should not be considered a guarantee and never will be a guarantee that a device is out of scope.
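This segmentation proof does not have to be complicated.  Here is a minimal sketch of the idea behind the requirement 11.3 segmentation check: from a host that is supposed to be out of scope, confirm that nothing in the CDE answers.  The addresses and ports are placeholders, and a real test covers the full port range and is performed by the penetration tester.

```python
import socket

CDE_HOSTS = ["192.168.10.5"]              # assumed CDE address
PORTS_TO_PROBE = [22, 443, 1433, 3389]    # assumed sample of ports

for host in CDE_HOSTS:
    for port in PORTS_TO_PROBE:
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"SEGMENTATION FAILURE: {host}:{port} is reachable")
        except OSError:
            print(f"{host}:{port} blocked")
```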

The bottom line is that, in order to be truly out of scope, there needs to be ZERO hops between a device and the CDE.

26 Apr 14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI-informed sales people pushing redirected ecommerce solutions and claiming that they put the merchant entirely out of scope.  This was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The requirement 1 requirements are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101 and I would doubt that any merchant would not have a firewall protecting all of their Internet facing servers.  Routers have always been optional and if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.

Requirement 4 is provided to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.

Requirement 10 is important for any forensic research should the redirect be manipulated, so that it can be determined when that event occurred and therefore what the scope of any compromise is.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue about that fact.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
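For those wondering what that critical file monitoring boils down to, here is a minimal sketch of the concept.  The file path and the baseline hash are placeholders; a real implementation would use a proper file integrity monitoring product with alerting and a protected baseline.

```python
import hashlib

REDIRECT_PAGE = "/var/www/html/checkout.php"   # assumed location of the redirect page
KNOWN_GOOD_SHA256 = "replace-with-baseline"    # placeholder baseline value

with open(REDIRECT_PAGE, "rb") as page:
    current = hashlib.sha256(page.read()).hexdigest()

if current != KNOWN_GOOD_SHA256:
    print("ALERT: redirect page has changed - investigate immediately")
else:
    print("Redirect page matches the baseline")
```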

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

27 Oct 13

Atom AMPD And PCI Compliance

Here is a relatively new player in the network security space for small and mid-sized businesses (SMBs).  A friend of mine that does a lot of work with SMBs is encountering this solution more and more.  And is it any wonder when it is portrayed as a godsend for SMBs?  On the Atom AMPD Web site, they explain their Kwick Key solution.

“Kwick Key™, a USB flash drive, is the delivery device behind the AtomOS.  The Kwick Key is bootable and plugs into nearly all servers.  Kwick Key users experience a significant savings achieved with a high quality “one key” solution for their networking needs.

Simply install the Kwick Key into an internet-connected server, display the web interface, configure the features and you’re done.  The server is transformed into a multi-functional networking and communication device.

The underlying operating system behind the Kwick Key AtomOS is Linux. The content stored on the server is also backed up on the Kwick Key.  Once configured, the Kwick Key can be transferred to new equipment while maintaining its configuration, providing portability in the event of equipment failure.  A redundant option is also available.”

What is wrong with this picture?

If you said, “Too good to be true,” you would be correct.  There are no silver bullet solutions to security.  However these sorts of “all in one” security solutions are being marketed to SMBs all of the time as a cost saving way to be secure.  And since SMBs do not typically have any significant IT personnel, they are always looking for ways to reduce IT workload and save money.  However, if you need to be PCI compliant, this is not a solution for your organization.  Why?

If you read the Savings page on their Web site, they state:

“Your current IT infrastructure is likely requiring multiple boxes to serve your network and communication needs.  This likely includes multiple boxes supporting firewalls, content filters, routing and VoIP applications; each requiring individual training, maintenance, and ongoing licensing fees.  The AtomOS provides just one platform, one interface, one operating system. It brings to bear the BEST practices via a convergent technology.  All modules are tied together by our proprietary user interface.”

That “all in one” solution approach violates PCI DSS requirement 2.2.1 which states:

“Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)”

The reason for requirement 2.2.1 is to leverage the concept of “defense in depth”.  Defense in depth relies on multiple layers of defense such that if one layer develops a vulnerability, the other layers can still provide some security and mitigate the vulnerability until it is fixed.  Under the Atom solution, a vulnerability anywhere potentially creates a situation where the whole solution is at risk because of one part’s failure.

As a result, in order to be PCI compliant, it will require you to purchase multiple Kwick Keys.  I would assume that multiple keys will result in costs that negate Atom’s cost advantage over other PCI compliant solutions.

Then go to the solution’s product page for Kwick Key.  Take a look at all of the firewall features that are available.  Looks pretty good until you realize there is one notable feature missing – stateful packet inspection (SPI).  Basically, Atom has implemented port filtering which comes standard on Linux distributions.  Not that this is not secure, but it does not comply with requirement 1.3.6 which explicitly requires that SPI be implemented.

There are ways to add SPI to this solution.  However, that will mean you will have to support it yourself and the whole point of the Atom solution is to get out from under supporting such a solution for your organization.

My assumption is that with an appropriate wireless adapter in the system running Kwick Key that the solution will serve as a wireless access point.  Under requirement 1.2.3, wireless is required to be segregated from an organization’s cardholder data environment (CDE) by a firewall.  Given that the wireless is operating on the same device, it is questionable if compliance with this requirement could be truly accomplished.

The same concerns with wireless would exist with the virtual private network (VPN) solution.  Having the remote access to the internal network also running on the same system is not a best practice.  And how secure such a situation would be on this device is questionable.

You need to remember, this is not a purpose built networking device, this is a repurposed computer running Linux.  It is potentially susceptible to any number of Linux-based attacks and vulnerabilities depending on the services running.  And the more services you pile onto this device, the more potential for vulnerabilities.

Then there is the ability to add a voice over IP (VoIP) call manager solution.  Seriously?  What a silly and very dangerous idea.  Why?  VoIP protocols are primarily stateless (i.e., UDP) which means that they cannot be protected by today’s firewall technology, which only works with stateful protocols (i.e., TCP).  I have actually had vendors correct me on this because VoIP call set up (pick up the handset) and tear down (hang up the handset) are conducted using TCP.  What these folks always miss is that the actual conversation, which is the bulk of the activity with a telephone call, is conducted over UDP so that the audio can be streamed between the phones in use.  And it is not just one or a few UDP ports that can be open; it is typically a range of thousands of UDP ports that are open to support telephony.  Talk about a target rich environment.

Adding a VoIP call manager on top of your firewall is probably the most dangerous thing an organization could do because VoIP is so easy to attack due to the stateless nature of its protocols.  By implementing VoIP on a firewall you are essentially negating the firewall.  Running VoIP on anything but its own dedicated server on its own dedicated network is the only way VoIP should be configured for security, regardless of a need to be PCI compliant.

Finally, there is no pricing provided for the USB “key”.  I always get concerned about “wonder” solutions that do not provide pricing without contacting the vendor’s sales operation.  Nine times out of ten, all this does is force potential customers to then be contacted relentlessly by sales people until they purchase the solution which is likely overpriced.

This post is not to say that this solution is not appropriate for other organizations.  However, if you need to be PCI compliant, this solution is not for your organization if it is implemented as the vendor describes.

24 Feb 13

2013 Threats To Databases

Akamai just released their third quarter 2012 Internet statistics and are pointing to China as the generator of at least a third of all attacks.  Not only that, the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I got on a Webinar with Application Security Inc. where they discussed what they see as the top risks for databases in 2013 and, lo and behold, the risks align.  It is no wonder that Chinese attackers are focused on SQL Server; not only are targets plentiful, but most of the time the software was improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement” only to never get back to it or version 2 is a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection it was because the protocol or interface being used required accepting a certain amount of risk in order for the application to work.  While a few years ago that was true, it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our “I probably won’t be here in six months anyway” view of employment, it will be someone else’s problem, not theirs, so why bother?  The next guy can fix it.
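For anyone who has not seen the difference, here is a quick sketch of the “quicker/easier” mistake next to the safe version.  It uses Python’s built-in sqlite3 purely for illustration, and the table and data are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")

user_input = "1 OR 1=1"   # attacker-controlled value

# Injectable: the input is pasted straight into the statement.
rows = conn.execute("SELECT * FROM customers WHERE id = " + user_input).fetchall()
print("concatenated query returned", len(rows), "row(s)")    # returns every row

# Parameterized: the driver treats the input strictly as data.
rows = conn.execute("SELECT * FROM customers WHERE id = ?", (user_input,)).fetchall()
print("parameterized query returned", len(rows), "row(s)")   # returns no rows
```

The parameterized version costs a developer nothing extra, which is exactly why “it was faster” is such a poor excuse.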

Requirement 6 and, in particular, requirement 6.6 require that applications developed get tested and remediated before they are placed into production.  The operative word here is ‘before’ which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker.  Better yet, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 has all of the build and security standards required.  The idea is that these standards tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start as opposed to finding out after the fact that one of those "just in case" services or features is insecure.

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication, which allows both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is that people seem to forget about the SQL Server-managed accounts.  As a result, those accounts typically do not get the account management and review activities they deserve until someone remembers them again, possibly years down the road.  These accounts also get forgotten for monitoring, so if their credentials are compromised, the compromise might not be recognized for a while, if at all.

Even if the SQL Server-managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not managed by SQL Server the way they are by Active Directory.  As a result, people outside of the DBA arena assume that SQL Server-managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
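If you are stuck with mixed mode, at least inventory the SQL Server-managed logins and identify which ones are not subject to password policy or expiration checks.  Below is a minimal sketch that queries the sys.sql_logins catalog view; the pyodbc package and the connection string are assumptions for illustration, not part of any standard.

# A minimal sketch: flag SQL Server-managed logins that are not subject to
# password policy or expiration checks.  Requires the pyodbc package; the
# connection string is a placeholder for your own server.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=your-sql-server;DATABASE=master;Trusted_Connection=yes;")

QUERY = """
SELECT name, is_policy_checked, is_expiration_checked, is_disabled
FROM sys.sql_logins
ORDER BY name;
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, policy, expiration, disabled in conn.execute(QUERY):
        if not disabled and not (policy and expiration):
            print(f"review login '{name}': policy checked={bool(policy)}, "
                  f"expiration checked={bool(expiration)}")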

An access control issue that occurs with all databases is the use of "service accounts" for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management and access control functions.  To access the database, the application then uses a single service account.  Those single accounts are typically configured as administrators and provide unimpeded access to the data stored in the database, making the theft of that information relatively simple if someone gains access to the service account's credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when it is pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  When the credentials are hardcoded, changing the service account's credentials requires an application change.  But the larger question is who has access to the credentials and how are you ensuring that everyone understands their responsibilities to keep those credentials secure?
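One way to get the credentials out of code and parameter files is to have the application pull them from the runtime environment or a secrets manager.  Below is a minimal sketch of the idea; the variable names are mine, purely for illustration.

# A minimal sketch: pull service account credentials from the runtime
# environment instead of a table, INI file or the code itself.
# The variable names are illustrative, not a prescription.
import os

def get_db_credentials():
    user = os.environ.get("APP_DB_USER")
    password = os.environ.get("APP_DB_PASSWORD")
    if not user or not password:
        raise RuntimeError("database service account credentials are not set")
    return user, password

# Rotating the credentials now means updating the environment (or the secrets
# manager feeding it), not changing and redeploying the application.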

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, it is virtually impossible to protect the information in those databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users' credentials as well as the credentials for administrators of the e-Commerce environment.  Combine that with the other conditions discussed above and the result is that compromising the database gives an attacker the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are also likely stored in the same database or in a related database that shares administrator credentials with the compromised database.

Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because doing so would raise their cost to a point where they would no longer be affordable for the small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will remain at risk.

One would think using Active Directory or another directory service would be an easy solution.  Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly.  On the face of it, that approach works, and it does work for organizations that host their own Web presence.  But for service providers it is not that easy, because each customer's Web presence would need its own branch in the directory's forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which drives up the cost of a site.  The manual process also delays establishing a site until the directory maintenance is completed, which is totally unacceptable in our "have to have/do it now" world.

For the time being we are stuck with using the database to store credentials.  That being the case, the credential database should not be mixed with the other databases; it should stand on its own and not be accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found to be too cumbersome to implement.
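As one small illustration of that engineering, below is a minimal sketch of storing salted, one-way password hashes instead of reversibly encrypted passwords, which takes the decryption key out of the picture entirely.  The hashing parameters are my own illustrative choices, not a requirement from any standard.

# A minimal sketch: store salted, one-way password hashes so there is no
# decryption key to steal alongside the credential table.
# The iteration count is illustrative, not a recommendation from the post.
import hashlib
import hmac
import os

ITERATIONS = 600_000

def hash_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest          # store both values with the user record

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)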

06
Aug
12

Third Party Service Providers And PCI Compliance

There seems to be a lot of confusion regarding third parties that provide networking or hosting services and their obligations regarding PCI compliance.  This confusion is not uncommon as merchants and their service providers have not necessarily been provided enough guidance to understand their obligations.  I hope this post will clarify those obligations for all involved.

If you learn nothing else from this post, remember this: if a third party is providing your organization a service that has access to your cardholder data environment (CDE), or the third party could come into contact with your cardholder data (CHD), then that third party must ensure that the service complies with all relevant PCI requirements.  As a result, the third party needs to either allow you or your QSA to assess the services that they are providing or provide you with an Attestation Of Compliance (AOC) that documents that those services have been assessed and are PCI compliant.

In the past, I have stated that third parties could also submit a letter signed by an officer of the third party stating that all of the services provided to their customer are PCI compliant.  Now that v2.0 of the PCI DSS has a separate AOC and the PCI SAQs have the AOC built into the SAQ, there should be no reason to need such a letter or to ask for one.  If a letter is what your third party is offering, it is better than nothing, but you should be pushing them hard for an AOC.  If they are reluctant to get you an AOC, as part of your vendor management process, you should take that into account and probably begin looking for a new vendor that will provide an AOC for their services.

The most common issue we run into with third parties is that their AOC or other representations of PCI compliance do not cover all of the services provided to the customer.  In case after case, we see the AOC covering requirements 9 and 12 and nothing else even though the services provided may require compliance with some or all of PCI requirements 1, 2, 3, 4, 5, 6, 7, 8, 10 and 11.

In a lot of cases, it is not that the third party does not want to comply with PCI; it is that they take the lowest common denominator approach and only have assessed those services for which every customer requiring PCI compliance is asking for an AOC.  That way they reduce the cost of having a QSA assess their environment.  These third parties accept the fact that any customer that needs more services assessed will have to do it themselves.

Related to this issue is the third party that offers their SSAE 16 Service Organization Control (SOC) 1 report as proof of PCI compliance.  While a SOC 1 report can cover a few PCI requirements, people must remember that the SOC 1 report is structured specifically for financial auditors to ensure that the controls at a third party are properly constructed to support financial reporting at their customers.  As a result, a SOC 1 report is not going to be a substitute for an AOC that covers all services.  There is an alternative, and that is to have the third party obtain a SOC 2 report that focuses on the security controls of the PCI in-scope services provided.  We are hearing from third parties inquiring about SOC 2 reports, but cost and a lack of customers requesting such a report are why we do not see more SOC 2 reports available.

Another common issue we encounter is the refusal of the third party to cooperate in assessing the services provided to ensure they are PCI compliant.  There are still third parties that argue their services are not in-scope for PCI compliance even when it is painfully obvious that the third party’s personnel have access to their customer’s CDE and/or CHD.

The most common third party relationship we encounter is the management of routers or other layer 3 devices.  Where we encounter the most confusion in this relationship is in regards to the use of encryption to keep the network services organization out of scope for PCI compliance.  The key here is if the network services organization manages the encryption of the network, then they are in-scope for PCI compliance.  The reason is that the employees of the network services organization have access to the encryption keys and therefore could decrypt the communications and gain access to CHD transmitted over the network.  As a result, at a minimum, the network services organization is responsible for complying with some or all of requirements 1, 2, 4, 6, 7, 8, 9, 10 and 12.  If you receive such services and are not getting an AOC that covers these requirements, then you should be doing more work on your own as well as asking the third party why they are not covering more of the necessary PCI requirements.

The next most common service we encounter is the network services firm that is managing or monitoring an organization's firewalls, remote access or intrusion detection/prevention.  Such services always put the third party in-scope for PCI compliance.  Some or all of requirements 1, 2, 6, 7, 8, 9 and 12 will need to be assessed for compliance with the PCI DSS.  The log capture and analysis requirements in requirement 10 may also need to be covered by the third party if your organization is not capturing and analyzing the log data from these devices.

Another group of third parties we encounter a lot are records retention vendors.  Organizations like Iron Mountain have conducted their own PCI compliance projects and readily hand out their AOC to customers.  However, where we see issues is with vendors that provide their own tape library for their customers to use for backup.  We have encountered a number of third parties performing the encryption at their library, which puts them in-scope for PCI compliance, at a minimum, for requirements 3, 4, 6, 7, 8, 9, 10, 11 and 12.

We encounter outsourcing the data center a lot with large organizations, but small and mid-sized organizations are also hopping on the data center outsourcing bandwagon.  Where this puts the third party in-scope for PCI compliance is when the third party is responsible for maintaining the environment such as applying patches, managing servers or any other activities that would allow the third party’s personnel to potentially have access to CHD.  In such situations, at a minimum, the third party is responsible for complying with some or all of requirements 2, 5, 6, 7, 8, 9, 10 and 12.  Compliance with some or all of requirement 1 may be applicable if the third party is managing your firewalls or routers.  Compliance with some or all of requirements 3 and 4 may also be applicable if the third party is responsible for managing encryption keys for encrypting CHD or encrypting communications.

Where the most confusion regarding third party responsibilities occurs is in regards to “The Cloud.”  The most common reason for this is that every vendor seems to have a different definition for what “The Cloud” is, based on their particular services.  Using the definitions provided by the National Institute of Standards and Technology (NIST) in their publication SP800-145, ‘The NIST Definition Of Cloud Computing’, I can provide the following guidance.

If your organization is purchasing Infrastructure as a Service (IaaS), then the third party providing these services will typically be out of scope for PCI compliance except for requirements 9 and 12.  There are some instances where IaaS implementations may require compliance with the PCI DSS if the third party is managing network infrastructure that comes into contact with CHD as is usually the case with private cloud environments.

For Platform as a Service (PaaS) and Software as a Service (SaaS), the third party will have to provide PCI compliance for the services they are providing to your organization.  That is because with either of these service offerings, the third party must have access to the CDE and will have the potential of coming into contact with CHD.

The problem with the majority of PaaS and SaaS vendors is that they only deal with your organization through a Web-based interface, i.e., everything is automated: contracts, support, etc.  As a result, the contract is a "take it or leave it" situation that does not usually cover everything needed for PCI compliance, and there is no way to independently verify the representations made by the third party.  In addition, the AOC provided by the third party typically covers only the physical security requirements in requirement 9 and possibly some of requirements 11 and 12, and nothing related to the other requirements, even though the third party may have responsibilities for PCI compliance beyond what is represented in their AOC.

If this is the case, there is little you or any QSA can do to properly assess the environment to ensure it is truly PCI compliant.  As a result, we have a lot of organizations that try to develop compensating controls for these cloud implementations.  These organizations very quickly and frustratingly find out that there are very few, if any, controls on their side of the equation that can get them to “above and beyond” the original requirement.

I know there are a lot of other examples of services being provided to merchants.  But, hopefully these examples can assist you in clarifying what you need or do not need from your third parties when it comes to PCI compliance.



