Archive for the 'Requirement 7 – Restrict access to cardholder data' Category

03
Nov
18

Open Source

One of the questions we received at the last PCI Dream Team session was:

“What about open source for 6.5?”

I am sure the person asking wanted to know whether open source payment solutions must comply with the PCI DSS requirements in 6.5.x.

The quick and simple answer is, of course, ‘Yes’!  Why would it not?  It is source code after all, so it must comply with the requirements in 6.5.x (as well as other requirements in section 6 and throughout the PCI DSS).  The PCI DSS does not differentiate between different sources of application code.  For PCI compliance purposes, code is code is code, regardless of the source.

What does come into play is whether or not the PA-DSS validation standard applies to an application.  I wrote about how PA-DSS relates to open source over eight years ago, and that discussion is still relevant today.  For the purposes of this post, I am not talking about PA-DSS validated applications.

The next question a QSA typically gets is, “Well 6.5 only applies to internet-facing payment applications, right?”

Wrong!  Any payment application needs to meet the requirements in 6.5.x whether it is internet-facing or internal-facing.  It also does not matter whether a browser is involved, although a significant number of the requirements in 6.5.x are related to browser-based applications.

But ensuring open source is PCI compliant goes beyond just 6.5.x.  There are other requirements that, at a minimum, must be applied as well.  Not every requirement in a section or group of requirements may apply, but some will need to be covered depending on how the application works.

  • Section 3 related to encryption of stored data and encryption key management;
  • Section 4 related to encryption of communications;
  • Requirements 6.1 and 6.2 for patching and vulnerability management. This can become problematic for open source because, as time goes on, applications can develop vulnerabilities that the developer community does not address, most likely because the community moved on and your application became an orphan;
  • Requirement 6.4 for application development. Remember, just because your organization did not develop the application, if it is not PA-DSS validated, then it is your responsibility to ensure the code securely processes, stores or transmits sensitive authentication data and/or cardholder data;
  • Requirement 6.6 is also in play regardless of whether or not the application is browser-based. At a minimum, code reviews must be performed.  If the application is browser-based, then you can add in a Web application firewall (WAF) for additional security;
  • Sections 7 and 8 related to access control and user management; and
  • Section 10 related to application log data.

Remember, every time a new release of your open source solution becomes available, you have to go through all of this all over again if you intend to use the new release.

So those of you thinking that you can somehow leverage open source to reduce your PCI compliance footprint, think again.  All you have done is outsourced the development of your solution.  The rest is still on you.  In the end, it is really not much of a savings.

08
Jul
18

Can I Use SSAE 18 SOC 2 Reports? Part 2

In the last post I discussed what the SOC reports are and what, in general, to look for in a SOC 2/3 report.  Now I want to take you through the more detailed analysis of the SOC reporting so that you can understand why it might not give you the result you desire and how to address that fact.

How Do I Analyze The SOC Report?

Based on the testing in the ‘Information Provided by Independent Service Auditor’ section, you are going to need to map that testing into the PCI ROC tests, if they even fit.  I typically use the Prioritized Approach spreadsheet to do this as it provides a way of documenting the requirements covered and a quick dashboard regarding what is covered.

As you reviewed the domains listed under the SOC 3 report, I am sure you thought, “What is not to like?  It looks like most of what I need for PCI is covered.”  But you would be wrong.  You will find, after you map the controls from any SOC 2 report that covers all the TSP domains into the Prioritized Approach, that the report will likely only cover around 20% to 25% of the PCI DSS requirements.  That is because the SOC tests are just not as detailed as the tests in the PCI DSS.  As a result, SOC 2 reporting does not provide the kind of reliance you need to respond to all of the relevant PCI DSS requirements.
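To make that mapping exercise a bit more concrete, here is a rough sketch of how I tally coverage.  The SOC control descriptions, the PCI requirement numbers they map to and the total test count are purely illustrative, not taken from any real report:

```python
# Hypothetical SOC 2 tests mapped to the PCI DSS requirements they arguably touch.
soc2_to_pci = {
    "Logical access provisioning reviewed": ["7.1", "7.2", "8.1.2"],
    "Password controls configured": ["8.2"],          # no detail on length/age/complexity
    "Change management process followed": ["6.4.1", "6.4.2"],
    "Backups performed and monitored": [],            # no direct PCI DSS analogue
}

TOTAL_PCI_TESTS = 400  # placeholder; use the actual count from the Prioritized Approach workbook

covered = {req for reqs in soc2_to_pci.values() for req in reqs}
print(f"PCI DSS requirements touched: {sorted(covered)}")
print(f"Approximate coverage: {len(covered) / TOTAL_PCI_TESTS:.1%}")
```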

For example, while the SOC report will likely confirm that password controls are in place, you will be unable to ascertain whether the organization enforces password lengths of seven characters or greater, password complexity, or password changes every 90 days or less, let alone whether the special requirements for vendor password management are enforced.  It is these missing details that create the coverage problems with using the SOC reporting results.
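To illustrate that detail gap, here is a minimal sketch that compares observed password settings against the PCI DSS 8.2.x minimums (seven characters, complexity, 90-day change, four-password history).  The “observed” values are hypothetical and would normally come from interviews or configuration evidence:

```python
# PCI DSS minimums per requirements 8.2.3 - 8.2.5 versus hypothetical observed settings.
pci_minimums = {"min_length": 7, "complexity": True, "max_age_days": 90, "history": 4}
observed = {"min_length": 6, "complexity": True, "max_age_days": 180, "history": 2}

for setting, required in pci_minimums.items():
    actual = observed.get(setting)
    if isinstance(required, bool):
        ok = actual is True
    elif setting == "max_age_days":
        ok = actual is not None and actual <= required   # must change at least this often
    else:
        ok = actual is not None and actual >= required
    print(f"{setting}: required {required}, observed {actual} -> {'PASS' if ok else 'FAIL'}")
```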

The same can be said for change management.  When tested, the SOC report will likely call out a lot about change management, but not at the level of detail required in the PCI DSS for requirements under 6.4.  You will also find that coverage in requirements 1 and 2 regarding network and server configurations will be lacking in specificity to meet the PCI DSS testing.

Now as a QSA, you have a decision to make.  Can you accept coverage of only 20% to 25% of the PCI DSS requirements as evidence of PCI compliance?  I know I cannot.  I need much more to work with before I can get comfortable that a SOC report provides the necessary coverage for PCI compliance.

Now What?

You and your client have expended all this effort and are no closer to the result desired than when this process started.

So, what to do next?

Work with the service providers that give you SOC reports to have them include testing that adds the PCI DSS details that are missing.  There will likely be some push back from these service providers because adding testing to their SOC reports will cause the cost of those reports to increase, sometimes significantly.  So be prepared for it.

What you need to do is to have their auditors add the necessary testing details to the description of controls and then have them test that they are in place.  Examples include:

  • Password length, complexity, change frequency and the procedures followed to perform a password reset.
  • Details surrounding privileged and general user management, including provisioning, management approvals, implementation of least privilege and the disabling or removal of users when they are terminated.
  • Change management testing covering segregation of duties between developers and operations; segregation of test, QA and production environments; production data not used for testing; developers without unrestricted access to production; test data and accounts removed before applications are promoted to production; and changes that document impact, are appropriately authorized, tested and vulnerability assessed, and include backout procedures.
  • If encryption is used to protect data, documentation of the algorithms used, whether key custodian agreements are in place, whether split key processes are in place for manual key management, whether a hardware security module (HSM) is used and whether keys are changed when their crypto-periods expire or they are believed to be compromised.
  • Document the configuration standards that are followed by device classes such as firewalls, switches, servers and test that they have been implemented.
  • Document that anti-virus is implemented on systems commonly affected by viruses and malware, what anti-virus solution is implemented, that the anti-virus solution cannot be disabled and that it is actively running on all systems where it is installed.
  • Document that vulnerability scanning is performed, how often scanning is performed and that vulnerabilities are remediated.
  • Document that penetration testing is performed, how often penetration testing is performed and that findings are remediated.
  • Document that log data is collected from all devices, it is reviewed at least daily and that it contains a date/time stamp, device name, type of log entry and other relevant information.

There are a lot of other areas that could be added to the SOC report, but these are, in my opinion, the bare minimum that need to be added to make the SOC report more relevant for PCI.  I am trying to balance the amount of additional information needed versus the cost of providing it in the SOC report.

Will adding all of this cover every gap between SOC and PCI?  No.  But it should give your QSA significantly more comfort that the controls in place meet PCI than what is currently being provided by CPAs.

04
Jul
18

Can I Use SSAE 18 SOC 2 Reports? Part 1

This is a common question that QSAs encounter from clients.  The client has an SSAE 18 Service Organization Control (SOC) report from one of their service providers and they want to know if they can use it to satisfy any or all of the requirements in 12.8, 12.9 and 12.11 related to vendor management.

The biggest caveat in this discussion is that the PCI SSC does not sanction the use of any report other than a PCI Attestation Of Compliance (AOC), a PCI Report On Compliance (ROC) or the other official PCI reports.  The Council has repeatedly stated that if a QSA chooses to rely on an SSAE 18 SOC 2 report (or any other compliance report for that matter), the QSAC and their client accept the risk that the SSAE 18 SOC 2 does not cover what the QSA claims it covers when relying upon it to fulfill PCI ROC requirements.  As a result, most QSAs will not accept an SSAE 18 SOC 2 report (or any other non-PCI compliance report) for any reason.

For those of us “recovering” certified public accountant (CPA) types that have conducted SSAE 18 audits, we know how to read and interpret these reports.  As a result, when we are asked about SSAE 18 SOC 2 reports being relevant, our answer is, “It depends on what the SOC 2 covers and how it was tested.”

Before we get too deep into this discussion though, we need to define the terminology surrounding this topic.  The first thing is that SSAE 18 replaced SSAE 16 as of 2017 even though nothing else appears to have changed.  The next key thing anyone needs to know about SSAE 18 is that there are three reports that can come from this reporting series: SOC 1, SOC 2 and SOC 3.

The first, SOC 1, is for financial auditors only.  It used to be called a SAS 70 years ago.  It is a report focused on financial controls that an external auditor needs to ensure that the financial numbers coming from the third party can be relied upon in their annual audit of their client.  Yes, these SOC 1 reports can cover security controls, but that is only in regard to financial systems, not necessarily the third party’s entire environment.  In addition, the control coverage is typically not as deep as required for PCI compliance.  The bottom line is that any reliance on a SOC 1 report outside of financial systems should never be assumed.

I am going to cover the SOC 3 report next because it covers all of the security domains.  The SOC 3 report (also sometimes referred to as the ‘SysTrust’ report) covers the following domains:

  • Organization and Management – The criteria relevant to how the organization is structured and the processes the organization has implemented to manage and support people within its operating units.
  • Communications – The criteria relevant to how the organization communicates its policies, processes, procedures, commitments, and requirements to authorized users and other parties of the system and the obligations of those parties and users to the effective operation of the system.
  • Risk Management and Design and Implementation of Controls – The criteria relevant to how the entity (i) identifies potential risks that would affect the entity’s ability to achieve its objectives, (ii) analyzes those risks, (iii) develops responses to those risks including the design and implementation of controls and other risk mitigating actions, and (iv) conducts ongoing monitoring of risks and the risk management process.
  • Monitoring of Controls – The criteria relevant to how the entity monitors the system, including the suitability, and design and operating effectiveness of the controls, and takes action to address deficiencies identified.
  • Logical and Physical Access Controls – The criteria relevant to how the organization restricts logical and physical access to the system, provides and removes that access, and prevents unauthorized access to meet the criteria for the principle(s) addressed in the engagement.
  • System Operations – The criteria relevant to how the organization manages the execution of system procedures and detects and mitigates processing deviations, including logical and physical security deviations, to meet the objective(s) of the principle(s) addressed in the engagement.
  • Change Management – The criteria relevant to how the organization identifies the need for changes to the system, makes the changes following a controlled change management process, and prevents unauthorized changes from being made to meet the criteria for the principle(s) addressed in the engagement.

There are also some additional considerations related to Confidentiality specified in the Trust Services Principles and Criteria (TSP), but those are not required to be covered in the SOC 3 report.

Finally, there is the SOC 2 report.  The SOC 2 report uses the same TSP as the SOC 3 but with a twist.  The third party can select any or all of the seven domains to be assessed.  Think of it as a “cafeteria style” assessment.  With the SOC 2, the AICPA does not require that all domains be covered (as with the SOC 3), the assessed entity can select only those domains they wish audited.  As a result, a third party could select only the ‘Organization and Management’ domain to be assessed and nothing else in their SOC 2 report.  Therefore, just because you have a SOC 2 does not mean it covers the domains necessary for your PCI assessment.  Like the SOC 3, in addition to the seven domains, the SOC 2 can also cover none, any or all of the additional considerations documented in the TSP.

Within each of these SOC reports there is a Type I and a Type II report.  A Type I report is basically worthless from a reliance perspective because no testing of the controls is ever performed.  With a Type I report, the auditor is signing off on the fact that the third party has controls defined and formally documented.  But without testing, there really is no point to this report.  Yet every now and then, I encounter a Type I report that an organization has relied upon for years.

The only report worth anything is a Type II report which tests the control environment to ensure that the controls are functioning as designed.  So, when you get that SOC 2 report, you need to make sure you have a Type II report where testing has been performed by the auditor.  Even then though, the report might not be as useful as you might think.

I Have A SOC 2 Type II Report From A Service Provider

While you want to read the whole report in detail, when I am pressed for time and cannot read it in its entirety, here is where I focus so that I can get a quick view of what I have.  Some CPA firms provide a one-page Executive Summary that gives the reader a quick overview of the report, provides the timeframe the report covers, opinion, exceptions and other useful information.  But that is not required by the AICPA so you cannot always rely on such an overview being in every report you receive.  When they are available, they can help you focus your quick review efforts even better.

The first thing to do is to read the auditor’s opinion which should be the first section of the report.  It is in the form of a letter on the auditor’s letterhead and signed by the auditing firm.  The opinion the auditor provides will be either:

  • Unqualified – no material control weaknesses or failures were identified.
  • Qualified – some material control weaknesses or failures were identified.
  • Adverse – significant control weaknesses or failures were identified.

An unqualified opinion is what all organizations desire and what most reports document.  But do not be fooled by an unqualified opinion.  There still could have been control weaknesses or failures identified but they did not rise to the level of being considered “material”.  I have seen some unqualified reports with control weaknesses that I would have considered material as their auditor, so you might still want to contact the organization to get clarification on any weaknesses identified.

A report with a qualified opinion is not the end of the world, but that will all depend upon what control weaknesses or failures created the qualification.  Someone misusing their access can be minor compared to not performing backups of servers for months.  As a result, you need to read each control weakness to determine the criticality of the control failure as well as review management’s responses to how they addressed or will address the failure.  Again, you may find yourself contacting the organization to clarify weaknesses documented.

In my experience, reports with an adverse opinion never get issued to the public.  Management sees all of the control failures and weaknesses and then embarks on the long arduous task of cleaning up their control environment.

The next section to look at is the one labeled ‘Information Provided by Independent Service Auditor’ or similar.  This is the section that will contain the testing results and will define which of the domains were covered as well as the timeframe the report covers.  Most organizations issue SOC reports annually, so you always want to make sure that you have the most current report.  If the end of the coverage period is within three months of being a year old, or older, you should contact the third party and ask them when the next report will be issued.  They should inform you that the new report is in progress and give you an estimated date the report will be issued.  If they do not give you a succinct answer, I would be concerned.

You need to go through this section looking at a couple of things.  The first is to determine which of the domains were covered.  While documenting those domains, you also need to review the testing that was performed and at what level of detail those tests were conducted.  For example, it is not unusual to see tests for change control cover five random changes but not test those changes for having appropriate documentation, backout instructions and testing, only that the changes were approved.  At some point you will need to read this section carefully to determine what, if anything, will cover the testing required by the PCI DSS.  But a quick perusal will usually give you an idea of what you are likely going to get out of the SOC 2 for PCI compliance, if you are going to get anything at all.

This leads to the next section of the report you should read.  The last section of all SOC reports is usually titled ‘Supplemental Information Provided By [Organization Name]’.  This section contains information that was provided by the entity being audited but is not covered by the auditor’s opinion.  There can be all sorts of information presented here but the important point to remember is that the auditor did not test or assess the accuracy of that information.  So, you need to take any information provided in this section with a bit of skepticism.

It is in the Supplemental Information section that you want to look for a sub-section titled ‘Management’s Response to Control Exceptions’ or similar.  Even when an organization has an unqualified opinion, there can still be items listed in this section.  If there are items listed, you want to carefully read what those items were and how management addressed or corrected the condition.  If you find any control issues and responses that concern you, you should contact the entity and get those discussed so that you are comfortable with the situation.  If you cannot get comfortable with the situation, then you may want to consider additional controls at your end to compensate for the control weakness with the third party.

In the next post I will take you through a more thorough review of the SOC report.

04
Oct
16

The Great Multi-Factor Authentication Debate

The Council brought back the Assessor Session at this year’s Community Meeting, and it took only one question to get passions flowing.  The question asked for a clarification of a comment made by Ralph Poore, Director, Emerging Standards at the Council, about multi-factor authentication (MFA).

First a little background to get everyone up to speed remembering that the US National Institute of Standards and Technology (NIST) SP800-63B standard in question is still a draft and has not been finalized.  However, everyone expects this standard to be adopted largely unchanged and with only minor wording revisions that would not affect the overall recommendations in the standard.

What NIST stated about SMS appears in section 5.1.3.2, ‘Out-of-Band Verifiers’, of SP800-63B:

“Due to the risk that SMS messages or voice calls may be intercepted or redirected, implementers of new systems SHOULD carefully consider alternative authenticators. If the out-of-band verification is to be made using the public switched telephone network (PSTN), the verifier SHALL verify that the pre-registered telephone number being used is not associated with a VoIP (or other software-based) service. It then sends the SMS or voice message to the pre-registered telephone number. Changing the pre-registered telephone number SHALL NOT be possible without two-factor authentication at the time of the change. OOB using the PSTN (SMS or voice) is deprecated, and may no longer be allowed in future releases of this guidance.”

NIST is only calling out that new implementations of SMS or voice MFA should carefully consider the security implications of those channels.  NIST has not totally invalidated existing SMS and voice MFA solutions; they just do not want any new implementations unless there is no choice because the process is already underway.  So while SMS or voice MFA can still be used in existing implementations, NIST is saying that future implementations of SMS and voice MFA are out of the question, which effectively kills those solutions going forward.

With that as our background, in a Community Meeting session, Ralph Poore stated that MFA to devices such as smartphones or back to the same device or browser (i.e., “soft” solutions) were not considered secure because of statements in the NIST Draft of SP800-63B.  I was attending a different session when Ralph made his statements, but I can tell you that my cell phone started buzzing with text messages from various people asking if we had all heard what we had heard.  But since there was no Q&A at that session, there was no way to clarify Ralph’s statements.

As a result, this issue was brought up in the Assessor Session to clarify those MFA comments.  Ralph stood and reiterated his remarks and that sent the room into an absolute tizzy.  It was pointed out that NIST had only invalidated SMS and voice for future two-factor authentication, not all soft token solutions such as RSA’s or Symantec’s application solutions.  However, Ralph continued to repeat his remarks saying that they had invalidated all soft solutions.  That brought the house down and people were loudly explaining that his comments were invalidating decades of recommendations for OOB MFA solutions.  Eventually the room calmed down and the Council agreed to review their position on such “soft” MFA solutions.

So that is where we are with this subject.  Time will tell if the Council revises its statements on MFA and comes into line with what NIST is saying on the subject.

15
Apr
16

Multi-Factor Authentication

During the PCI SSC’s recent presentation on the coming changes to v3.2 of the PCI DSS, one of the changes discussed was the adoption of the term “multi-factor authentication” instead of “two-factor authentication”.  This change resulted in some heated discussion in the Q&A session that followed their presentation.

Even though the Council was very, very clear about what they meant by multi-factor, a lot of us have concerns about that terminology.  The reason for all of this concern?  Most of us in the information security profession dislike the term “multi-factor authentication” because it typically gets stretched to include stupid practices such as using two sets of credentials.  I have discussed this in a previous post, but I thought the time was right to discuss this topic again before QSAs start running into organizations trying to pawn off two passwords as valid multi-factor authentication.

Multiple factors of authentication are defined as:

  • Knowledge also referred to as ‘something you know’ such as a password or passphrase,
  • Possession also referred to as ‘something you have’ such as an RSA SecurID or Symantec VIP token, and
  • Inherence also referred to as ‘something you are’ such as a fingerprint, hand geometry or voice print.

In order for multi-factor authentication to be secure, each factor must come from a different category, hence the terms one, two and three factor authentication.  Using only a password is single or one factor authentication.  Using a PIN with a fingerprint is two factor authentication.  Using a password with a token number and a fingerprint is three factor authentication.

Therefore, using two passwords is not using factors from two of the three categories.  It is using the same category twice, which is not considered secure.  This holds true for using a fingerprint and an iris scan, as those are also two items from the same category, although those biometric factors are arguably much stronger than just two passwords.

Why are two passwords not considered secure?  An attacker only has to compromise your authentication system and they would likely have access to both sets of credentials.  But if you also require one of the other two factors, the attacker may have the credentials but not the other factor needed to use them.  Therefore, if you are using true two or three factor authentication, your security remains effective.
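For those who want to see what factors from two different categories look like in code, here is a minimal sketch combining a knowledge factor (a salted password hash) with a possession factor (a time-based one-time password per RFC 6238, the kind generated by a hardware token or authenticator app).  The function names and stored values are hypothetical; this is not any particular vendor’s implementation:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (the possession factor)."""
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"

def authenticate(password: str, otp: str, stored_hash: bytes, salt: bytes, secret: bytes) -> bool:
    # Knowledge factor: compare a salted, slow hash of the supplied password.
    knowledge_ok = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000), stored_hash)
    # Possession factor: the one-time code from the user's token must match.
    possession_ok = hmac.compare_digest(totp(secret), otp)
    return knowledge_ok and possession_ok   # both categories must pass
```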

So all of you out there thinking the Council has approved of using two passwords as an approved multi-factor authentication solution need to think again.  Although I know there will be some that do not get this message and will try and use it in that context anyway.

26
Apr
14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI-informed salespeople pushing redirected ecommerce solutions by claiming that they put the merchant entirely out of scope.  That was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The requirement 1 requirements are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101 and I would doubt that any merchant would not have a firewall protecting all of their Internet facing servers.  Routers have always been optional and if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.
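As an example of the kind of evidence gathering I am talking about, here is a rough sketch that sweeps log files for anything that looks like a PAN and passes a Luhn check.  The log location and the candidate pattern are placeholders; a real sweep needs to cover every log, trace, history and database location and every card brand you accept:

```python
import re
from pathlib import Path

PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")   # crude 13-19 digit candidate

def luhn_ok(digits: str) -> bool:
    """Standard Luhn check to weed out random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, d in enumerate(int(c) for c in digits):
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

for path in Path("/var/log").rglob("*.log"):             # hypothetical location
    for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for match in PAN_CANDIDATE.finditer(line):
            if luhn_ok(re.sub(r"[ -]", "", match.group())):
                print(f"Possible PAN in {path}:{line_no}")
```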

Requirement 4 is provided to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.

Requirement 10 is important for any forensic investigation should the redirect be manipulated, so that it can be determined when the event occurred and what the scope of any compromise was.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue about that fact.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
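To show what 11.5 is driving at for a redirect page, here is a minimal sketch that records a baseline hash of the file that issues the redirect and alerts when that hash changes.  The file paths and the print-based “alert” are placeholders; a real file integrity monitoring solution obviously does far more than this:

```python
import hashlib
from pathlib import Path

REDIRECT_PAGE = Path("/var/www/html/checkout_redirect.php")    # hypothetical
BASELINE_FILE = Path("/var/lib/fim/checkout_redirect.sha256")  # hypothetical

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

current = sha256_of(REDIRECT_PAGE)
baseline = BASELINE_FILE.read_text().strip() if BASELINE_FILE.exists() else None

if baseline is None:
    BASELINE_FILE.write_text(current)   # first run: record the baseline
elif current != baseline:
    print("ALERT: redirect page has changed - investigate before processing payments")
```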

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

24
Feb
13

2013 Threats To Databases

Akamai just released their third quarter 2012 Internet statistics and are pointing to China as the generator of at least a third of all attacks.  Not only that, the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I get on a Webinar with Application Security Inc. and they discuss what they see as the top risks for databases in 2013 and lo and behold, risks align.  It is no wonder that Chinese attackers are focused on SQL Server, not only are targets plentiful, but most of the time, the software was improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement” only to never get back to it or version 2 is a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection, it was because the protocol being used required accepting a certain amount of risk in order for the application to work.  While a few years ago that was true, it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our “I probably won’t be here in six months anyway” view of employment, it will be someone else’s problem, not theirs, so why bother?  The next guy can fix it.
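To show just how small the difference is between “quick and stupid” and secure, here is a sketch using Python’s built-in sqlite3 module.  The table and the attacker-controlled input are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")

user_input = "1 OR 1=1"   # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query.
rows = conn.execute("SELECT * FROM customers WHERE id = " + user_input).fetchall()
print("concatenated query returned", len(rows), "row(s)")    # returns every row

# Secure: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT * FROM customers WHERE id = ?", (user_input,)).fetchall()
print("parameterized query returned", len(rows), "row(s)")   # returns nothing
```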

Requirement 6 and, in particular, requirement 6.6 require that applications developed get tested and remediated before they are placed into production.  The operative word here is ‘before’ which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker.  Better yet, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 has all of the build and security standards required.  The idea is that these standards tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start, as opposed to finding out after the fact that one of those “just in case” services or features is insecure.

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication which allows for both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is people seem to forget the SQL Server managed accounts.  As a result, they typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road.  These accounts also can get forgotten for monitoring, so if their credentials are compromised, it might not be recognized for a while, if at all.

Even if SQL Server managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not managed by SQL Server.  As a result, people outside of the DBA arena assume that SQL Server managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.

An access control issue that occurs with all databases is the use of “service accounts” for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control.  Then to access the database, the application uses a single service account.  Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  When the credentials are hardcoded, changing the service account’s credentials requires an application change.  But the larger question is who has access to the credentials and how are you ensuring that everyone understands their responsibilities for keeping the credentials secure?
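As one example of getting credentials out of the code, here is a sketch that pulls the service account credentials from the environment (or, better yet, a secrets manager) at runtime so they can be rotated without an application change.  The variable names are hypothetical:

```python
import os
import sys

def get_db_credentials() -> tuple[str, str]:
    user = os.environ.get("APP_DB_USER")
    password = os.environ.get("APP_DB_PASSWORD")
    if not user or not password:
        # Fail closed rather than falling back to a credential baked into the code.
        sys.exit("Database credentials are not configured for this environment")
    return user, password

db_user, db_password = get_db_credentials()
# ...hand the credentials to your database driver's connect() call here...
```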

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, then it will be virtually impossible to protect the information in the databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users’ credentials as well as the credentials for administrators of the e-Commerce environment.  As a result of the other conditions, compromise the database and you have access to the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are also likely stored in the same database or in a related database that shares administrator credentials with the compromised database.
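One way to blunt this risk is to store credentials as salted, slow hashes rather than encrypting them with a key that sits right next to the data.  Here is a minimal sketch using PBKDF2 from the Python standard library; the iteration count is illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest            # store both; there is no encryption key to steal

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)
```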

Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because that would raise their cost to a point where they are not priced for small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will still be at risk.

One would think using Active Directory or another directory service would be an easy solution.  Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly.  On the face of it, it would appear that way, and it does work for organizations that host their own Web presences.  But for service providers it is not that easy, because each customer’s Web presence would have to have its own branch in the directory’s forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which drives up the cost of a site.  The manual process also delays establishing a site until the directory maintenance is completed, which is totally unacceptable in our “have to have/do it now” world.

For the time being we are stuck with using the database to store credentials.  That being the case, the credential database should not be mixed with the other databases and should be on its own, not accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found to be too cumbersome to implement.

06
Aug
12

Third Party Service Providers And PCI Compliance

There seems to be a lot of confusion regarding third parties that provide networking or hosting services and their obligations regarding PCI compliance.  This confusion is not uncommon as merchants and their service providers have not necessarily been provided enough guidance to understand their obligations.  I hope this post will clarify those obligations for all involved.

If you learn nothing else from this post, learn this: if a third party is providing your organization a service that has access to your cardholder data environment (CDE), or the third party could come into contact with your cardholder data (CHD), then that third party must ensure that the service complies with all relevant PCI requirements.  As a result, the third party needs to either allow you or your QSA to assess the services that they are providing, or provide you with an Attestation Of Compliance (AOC) that documents that those services have been assessed and are PCI compliant.

In the past, I have stated that third parties could also submit a letter signed by an officer of the third party stating that all of the services provided to their customer are PCI compliant.  Now that v2.0 of the PCI DSS has a separate AOC and the PCI SAQs have the AOC built into the SAQ, there should be no reason to need such a letter or to ask for one.  If a letter is what your third party is offering, it is better than nothing, but you should be pushing them hard for an AOC.  If they are reluctant to get you an AOC, as part of your vendor management process, you should take that into account and probably begin looking for a new vendor that will provide an AOC for their services.

The most common issue we run into with third parties is that their AOC or other representations of PCI compliance do not cover all of the services provided to the customer.  In case after case, we see the AOC covering requirements 9 and 12 and nothing else even though the services provided may require compliance with some or all of PCI requirements 1, 2, 3, 4, 5, 6, 7, 8, 10 and 11.

In a lot of cases, it is not that the third party does not want to comply with PCI; it is that they are taking the lowest common denominator approach and only assessed those services for which every customer requiring PCI compliance is asking for an AOC.  That way they have reduced the cost of having a QSA assess their environment.  These third parties are accepting the fact that any customer that needs more services assessed will have to do it themselves.

Related to this issue is the third party that offers their SSAE 16 Service Organization Control (SOC) 1 report as proof of PCI compliance.  While a SOC 1 report can cover a few PCI requirements, people must remember that the SOC 1 report is structured specifically for financial auditors to ensure that the controls at a third party are properly constructed to support financial reporting at the customers.  As a result, a SOC 1 report is not going to be a substitute for an AOC that covers all services.  There is an alternative: have the third party go through a SOC 2 engagement that focuses on the security controls of the PCI in-scope services provided.  We are hearing from third parties inquiring about SOC 2 reports, but cost and a lack of customers requesting such reports are the reasons we do not see more of them.

Another common issue we encounter is the refusal of the third party to cooperate in assessing the services provided to ensure they are PCI compliant.  There are still third parties that argue their services are not in-scope for PCI compliance even when it is painfully obvious that the third party’s personnel have access to their customer’s CDE and/or CHD.

The most common third party relationship we encounter is the management of routers or other layer 3 devices.  Where we encounter the most confusion in this relationship is in regards to the use of encryption to keep the network services organization out of scope for PCI compliance.  The key here is if the network services organization manages the encryption of the network, then they are in-scope for PCI compliance.  The reason is that the employees of the network services organization have access to the encryption keys and therefore could decrypt the communications and gain access to CHD transmitted over the network.  As a result, at a minimum, the network services organization is responsible for complying with some or all of requirements 1, 2, 4, 6, 7, 8, 9, 10 and 12.  If you receive such services and are not getting an AOC that covers these requirements, then you should be doing more work on your own as well as asking the third party why they are not covering more of the necessary PCI requirements.

The next most common service we encounter is the network services firm that is managing or monitoring an organization’s firewalls, remote access or intrusion detection/prevention.  Such services always put the third party in-scope for PCI compliance.  Some or all of requirements 1, 2, 6, 7, 8, 9 and 12 will need to be assessed for compliance with the PCI DSS.  The log capture and analysis requirements in requirement 10 may also be complied with if your organization is not capturing and analyzing the log data from these devices.

Another group of third parties we encounter a lot are records retention vendors.  Organizations like Iron Mountain have conducted their own PCI compliance project and readily hand out their AOC to customers.  However, where we see issues is with such vendors that provide their own tape library for their customers to use for backup.  We have encountered a number of third parties doing the encryption at their library, which puts them in-scope for PCI compliance, at a minimum, for requirements 3, 4, 6, 7, 8, 9, 10, 11 and 12.

We encounter outsourcing the data center a lot with large organizations, but small and mid-sized organizations are also hopping on the data center outsourcing bandwagon.  Where this puts the third party in-scope for PCI compliance is when the third party is responsible for maintaining the environment such as applying patches, managing servers or any other activities that would allow the third party’s personnel to potentially have access to CHD.  In such situations, at a minimum, the third party is responsible for complying with some or all of requirements 2, 5, 6, 7, 8, 9, 10 and 12.  Compliance with some or all of requirement 1 may be applicable if the third party is managing your firewalls or routers.  Compliance with some or all of requirements 3 and 4 may also be applicable if the third party is responsible for managing encryption keys for encrypting CHD or encrypting communications.

Where the most confusion regarding third party responsibilities occurs is in regards to “The Cloud.”  The most common reason for this is that every vendor seems to have a different definition for what “The Cloud” is, based on their particular services.  Using the definitions provided by the National Institute of Standards and Technology (NIST) in their publication SP800-145, ‘The NIST Definition Of Cloud Computing’, I can provide the following guidance.

If your organization is purchasing Infrastructure as a Service (IaaS), then the third party providing these services will typically be out of scope for PCI compliance except for requirements 9 and 12.  There are some instances where IaaS implementations may require compliance with the PCI DSS if the third party is managing network infrastructure that comes into contact with CHD as is usually the case with private cloud environments.

For Platform as a Service (PaaS) and Software as a Service (SaaS), the third party will have to provide PCI compliance for the services they are providing to your organization.  That is because with either of these service offerings, the third party must have access to the CDE and will have the potential of coming into contact with CHD.

The problem with the majority of PaaS and SaaS vendors is that they only deal with your organization through a Web-based interface, i.e., everything is automated – contracts, support, etc.  As a result, the contract is a “take it or leave it” situation that does not usually cover everything needed for PCI compliance, and there is no way to independently verify the representations made by the third party.  In addition, the AOC provided by the third party typically covers only the physical security requirements in requirement 9 and possibly some of requirements 11 and 12, and nothing related to the other requirements, even though the third party may have PCI compliance responsibilities beyond what is represented in their AOC.

If this is the case, there is little you or any QSA can do to properly assess the environment to ensure it is truly PCI compliant.  As a result, we have a lot of organizations that try to develop compensating controls for these cloud implementations.  These organizations very quickly and frustratingly find out that there are very few, if any, controls on their side of the equation that can get them to “above and beyond” the original requirement.

I know there are a lot of other examples of services being provided to merchants.  But, hopefully these examples can assist you in clarifying what you need or do not need from your third parties when it comes to PCI compliance.

18
Jun
11

PCI SSC Releases Virtualization Guidelines

On Tuesday, June 14, 2011, the PCI SSC released an Information Supplement regarding Virtualization Guidelines.  Not only does this Information Supplement cover virtualization from a VMware and Hyper-V perspective, but also goes into cloud computing.

The supplement is broken into six sections:

  • Introduction
  • Virtualization Overview
  • Risks for Virtualized Environments
  • Recommendations
  • Conclusion
  • Virtualization Considerations for PCI DSS

The Introduction and Overview sections are good foundations.  But if you have a good knowledge of the concepts of virtualization, I would not waste time reading these sections.  The Risks section is a very good discussion of the risks presented by virtualization.  However, a lot of readers of this supplement are likely going to be disappointed, as there is little new material covered in this section that has not been discussed before in other information sources or even my blog entries.  In my opinion, the Recommendations section presents what would be expected.

The real gem in this supplement is the Appendix that provides the Virtualization Considerations for PCI DSS.  This appendix takes the relevant PCI DSS requirements and provides a lot of guidance regarding what QSAs should consider when assessing virtual environments.  For a number of these, there are also some additional best practices and recommendations made by the writers of the supplement.  In reading these best practices and recommendations, one would think they would be common sense, but I guess you just cannot assume that any more.

Page 23 has the other great gem in a diagram that graphically represents the responsibilities of cloud customers and cloud providers regarding who is responsible for data, software, user applications, operating systems, databases, virtual infrastructure, physical infrastructure and the data center where everything resides across the three types of cloud services; IaaS, PaaS and SaaS.  If you are explaining cloud computing to non-technical people, this is probably one of the best diagrams I have seen to explain responsibilities.

If I had to take the PCI SSC to task on anything, I would argue that cloud computing does not necessarily have anything to do with virtualization.  Yes, a lot of cloud computing solution providers are using virtualized systems to provide their services, but not every cloud provider uses virtualization.  And even if the cloud provider does use virtualization, why is that the customer’s concern?  In my opinion, cloud computing should be an entirely separate document.

I have included below links to all of my prior posts on virtualization for reference.

07
Jun
11

VoIP And PCI Compliance

I have had some interesting conversations with people lately regarding voice over IP (VoIP).  It fascinates me as to how little people really know and understand how this technology works.  But what really scares me is how this lack of information is putting organizations at risk.

The most obvious problem with VoIP is segmenting it away from the cardholder data environment (CDE).  I am really disturbed by the number of organizations that have no security around their VoIP.  Yes a lot of organizations have segmented the VoIP from the rest of their network, but there are no controls that stop anyone from getting into that network segment.  As a result, anyone with the right set of tools can gain access to the traffic in the VoIP network segment.

The next thing that scares me is the lack of security surrounding the VoIP servers, including any call recording servers.  People treat these VoIP servers just like their traditional PBX.  Unlike the PBX, which likely ran a proprietary version of UNIX, VoIP servers are just Windows or Linux servers running a VoIP application.  As a result, they are susceptible to all of the viruses, malware and everything else any other server is susceptible to.  However, these servers typically do not run anti-virus (performance issues), nor are they hardened to any rational security standard.  When they get infected or hacked, it seems to be a shock to the system administrator.

And what about the call recording technology?  We keep hearing from vendors of call recording solutions that they use proprietary recording methods requiring special CODECs.  While in some instances the proprietary claims are true, what we are finding more and more is that vendors are just manipulating file header information such that Windows Media Player, iTunes and the like do not recognize the file as being in a valid format.  However, tools such as VLC Media Player are able to see past the header changes and recognize these files for what they are, WAV, MP3 and the like.  Thus some proprietary formatting claims are all a bunch of smoke and cannot necessarily be relied upon for security or privacy.  Another tell on the proprietary nature of call recordings is that, when you “convert” a recording, if the conversion software seems to be copying the recording more than actually converting it, it is likely that the header is being fixed to WAV, MP3, etc.  Real audio conversions typically take more time than just copying because the file has to be fully processed.
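If you want to test those proprietary format claims yourself, here is a quick sketch that looks at the first bytes of a recording instead of trusting the file extension or the header label.  The signatures cover WAV (RIFF) and common MP3 framings, and the file name is a placeholder:

```python
from pathlib import Path

def guess_audio_format(path: Path) -> str:
    header = path.read_bytes()[:12]
    if header[:4] == b"RIFF" and header[8:12] == b"WAVE":
        return "WAV"
    if header[:3] == b"ID3" or header[:2] in (b"\xff\xfb", b"\xff\xf3", b"\xff\xf2"):
        return "MP3"
    return "unknown (possibly genuinely proprietary)"

print(guess_audio_format(Path("call_0001.rec")))   # hypothetical recording file
```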

But the final insult in this whole scenario is the lack of understanding security personnel have regarding the VoIP protocols.  While VoIP call setup and teardown are usually conducted over TCP/IP (a stateful protocol), the actual call itself is conducted via streaming protocols over UDP/IP (a non-stateful protocol).  As a result, when you start talking to security people about VoIP security, their knee-jerk response is to tell you that VoIP is secured by the corporate firewall.  However, given that the VoIP media protocols are stateless, sitting behind a firewall really does not provide much protection.

So you have a VoIP solution in place.  What should you be doing to ensure its security if it is in-scope for PCI compliance?  Here are my thoughts.

  • Properly segment your VoIP from the rest of your network.  This means either physically or logically separating the VoIP from the rest of your network.  This also means implementing access control rules so that only those devices and people that need access to the VoIP network have access.  If you are also using your VoIP phones as the network jack for a PC, make sure to VLAN that jack to something other than the VoIP VLAN.
  • If you can, implement the VoIP segment without DNS and DHCP and use MAC filtering to avoid the accidental or deliberate plugging in of a PC into a network jack that is VoIP only.  At a minimum, use MAC filtering to at least control what gets plugged into the LAN jack.
  • Closely monitor your VoIP segment and generate alerts on any devices that are unplugged or plugged in.  Also monitor for any protocols other than the VoIP protocols that your VoIP system uses.
  • Do not use the last octet or any other portion of the phone’s IP address as the extension number.  Yes, I know this is an easy way for the help desk to identify and troubleshoot phones, but it is also easy for an attacker to locate targets of interest, so keep that in mind when you are implementing your VoIP solution.
  • Never, ever connect your VoIP network to another VoIP network outside of your explicit control.  Given that VoIP primarily uses UDP/IP, you cannot expect any firewall to protect your VoIP system from anything outside of your control.  Always use plain old telephone system (POTS) circuits to connect to any foreign network.  I know that is not as sexy as VoIP, but how else can you protect your VoIP system from outside influences?
  • Work with your VoIP vendor to harden all servers that manage the VoIP system.  These are just Windows, Linux, etc. systems.  Obviously you will need to do some testing of this and you may not be able to use all of the hardening items in your server hardening standard, but you would be surprised at what you can do that the vendor says will not work.  Remember, they are just trying to cover their butts should a problem crop up.
  • Be careful implementing VoIP on traditional PBXs.  A lot of these solutions are just PCs or servers and can be easily hacked once on your network just like their full VoIP brethren.
  • Get a hold of VLC Media Player or similar tool and see if you can play a recording off of your call recording system.  We are getting about a 25% hit rate using VLC to play recordings.  A lot of the success of this approach depends on the age of the call recording system.  The newer the systems, the more likely it is that you will find that the recordings are just tweaks of existing standards.



