Archive for the 'Requirement 11 – Regularly test security systems and processes' Category

26 Apr 14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI-informed salespeople pushing redirected ecommerce solutions with the claim that a redirect put the merchant entirely out of scope.  That was never the case, particularly after the issuance of the information supplement.  Yet we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP, we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to point to the attacker’s own payment page, collects the customer’s cardholder data (CHD) on that page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.
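
To make the attack concrete, here is a minimal sketch, assuming a Flask-style checkout handler; the processor URL is a hypothetical placeholder.  Everything the attacker needs is one changed string on the merchant’s server:

```python
# A minimal sketch of the redirect at risk, assuming a Flask-style
# checkout handler; the processor URL below is hypothetical.
from flask import Flask, redirect

app = Flask(__name__)

# The legitimate redirect target.  An attacker who can modify this file
# (or the configuration it reads) only has to change this one string to
# point customers at a look-alike payment page under their control.
PROCESSOR_URL = "https://pay.example-processor.com/checkout"

@app.route("/checkout")
def checkout():
    # No CHD ever touches this server, yet this line is a key control
    # in the payment process, which is why it stays in-scope.
    return redirect(PROCESSOR_URL, code=302)
```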

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The requirements drawn from requirement 1 are there to ensure that a firewall protects the server that issues the redirect.  This is Security 101, and I doubt any merchant would leave their Internet-facing servers without a firewall.  Routers have always been optional, and if the merchant does not control those devices, they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.
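
As a way to generate that proof, here is a minimal sketch of a CHD discovery sweep; the paths scanned and the candidate pattern are illustrative, and a commercial data discovery tool will be more thorough:

```python
# A minimal CHD discovery sketch: find 13-19 digit sequences that pass
# a Luhn check.  /var/log and the *.log glob are illustrative only.
import re
from pathlib import Path

CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")  # 13-19 digits

def luhn_ok(digits: str) -> bool:
    total, alt = 0, False
    for d in reversed(digits):
        n = int(d)
        if alt:
            n *= 2
            if n > 9:
                n -= 9
        total += n
        alt = not alt
    return total % 10 == 0

def scan(path: Path) -> None:
    for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for match in CANDIDATE.finditer(line):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_ok(digits):
                # Mask the finding; never write full PANs into your evidence.
                print(f"{path}:{line_no}: possible PAN {digits[:6]}...{digits[-4:]}")

for log in Path("/var/log").rglob("*.log"):  # repeat for trace/history files, exports, etc.
    scan(log)
```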

Requirement 4 is there to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce-only merchants have call centers or take telephone orders and do the order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging and the like are never used for communicating CHD/SAD.
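
For documenting the processor’s certificate under 4.1, something as simple as this sketch will capture the details; the hostname is a hypothetical placeholder for your processor’s endpoint:

```python
# Capture the processor's TLS certificate details as requirement 4.1
# evidence.  The hostname is a hypothetical placeholder.
import socket
import ssl

HOST = "pay.example-processor.com"

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print("Protocol:", tls.version())       # e.g. TLSv1.2
        print("Cipher:  ", tls.cipher())
        print("Subject: ", cert.get("subject"))
        print("Issuer:  ", cert.get("issuer"))
        print("Expires: ", cert.get("notAfter"))
```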

Requirement 10 is important for forensics: should the redirect be manipulated, the log data is what allows you to determine when that event occurred and, therefore, the scope of any compromise.

While one would think the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue the point.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  But if you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
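
Real deployments use a purpose-built file integrity monitoring product, but the principle is simple enough to sketch; the watched path is an assumption for illustration:

```python
# A bare-bones critical file monitoring sketch for the page that issues
# the redirect: baseline a hash, re-hash, alert on change.  The watched
# path and baseline location are hypothetical.
import hashlib
import json
import time
from pathlib import Path

WATCHED = [Path("/var/www/html/checkout.py")]       # hypothetical redirect page
BASELINE = Path("/var/lib/fim/baseline.json")

def digest(p: Path) -> str:
    return hashlib.sha256(p.read_bytes()).hexdigest()

baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}

while True:
    for p in WATCHED:
        current = digest(p)
        known = baseline.get(str(p))
        if known is None:
            # First sighting: record the known-good hash.
            baseline[str(p)] = current
            BASELINE.parent.mkdir(parents=True, exist_ok=True)
            BASELINE.write_text(json.dumps(baseline))
        elif current != known:
            print(f"ALERT: {p} changed - redirect may have been tampered with")
    time.sleep(60)  # 11.5 says at least weekly; near real-time is the goal
```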

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

07 Dec 13

POS Botnets

Just in time for the holidays.

An article came out this past week regarding botnets that are specifically targeting point of sale (POS) systems.  The reason I bring this up is because of this quote.

“StarDust developers have intimate knowledge of the inner workings of PoS applications such as Clearview PoS.  As a result, the malware can ferret out where in computer memory sensitive data, in some cases in cleartext form, is stored.  StarDust can also sniff network traffic and is able to extract Track1 and Track2 card data.  To remain covert, the software transfers card details only when the terminal is inactive and the screensaver is on. It also uses the RC4 cipher to encrypt data before sending it to the control server.”

Obviously, if your organization uses Clearview POS software you should probably be examining your systems and networks to ensure that they have not been compromised by StarDust.
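
If you need to triage systems for StarDust-style residue, a rough sweep for track data patterns is a reasonable first pass; this sketch is based on the public Track 1/Track 2 formats, and the patterns will need tuning for your environment:

```python
# Rough sweep of memory dumps or capture files for track data remnants.
# Patterns follow the public Track 1/Track 2 layouts; tune as needed.
import re
import sys
from pathlib import Path

TRACK1 = re.compile(rb"%B\d{13,19}\^[^^]{2,26}\^\d{4}")  # %B<PAN>^NAME^YYMM...
TRACK2 = re.compile(rb";\d{13,19}=\d{4}")                # ;<PAN>=YYMM...

for name in sys.argv[1:]:
    data = Path(name).read_bytes()
    for label, rx in (("Track1", TRACK1), ("Track2", TRACK2)):
        for m in rx.finditer(data):
            print(f"{name}: possible {label} data at offset {m.start()}")
```

Usage would be along the lines of `python track_scan.py suspect.dmp pos.log`, feeding it whatever dumps or captures you have pulled from the POS systems.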

However, the larger issue is that most merchants do not see themselves as targets of such attacks, let alone have constructed a secure environment for their POS systems.  Some of this is not entirely the merchant’s fault.  A lot of merchants outsource the maintenance and management of their POS systems to a value added reseller (VAR), and that VAR is the one responsible for the POS network configuration.  Regardless of where responsibility lies, a merchant needs to be aware of these threats and take appropriate action, either internally or with their VAR, to address them and minimize risk.

Regardless of whether it is StarDust or a similar threat, here are some steps you can take to minimize and detect such threats.

  • Segment your POS network from the rest of your internal network and limit traffic on the POS segment to communication with your processor and your internal system support and operations systems.  This will require network access rules so that traffic can only reach your processor or those internal systems, which limits the number of systems that could compromise your POS environment.
  • Monitor your POS network segment for any traffic directed to an external network other than your processor or your system support and operations systems.  Your firewall rules should only allow secure connections between your POS network and your processor or your system support and operations systems.  Network traffic going anywhere else should be stopped and reported for further investigation (see the sketch after this list).
  • Monitor your POS systems for any file or configuration changes.  Most anti-virus solutions can provide this capability, but there are also solutions that are specifically engineered for this task.  Regardless of which you choose, configure the tool to alert you as soon as it identifies a potential change to files or configuration of the POS system.  If approved changes were not made to the POS systems and you received an alert, you likely have been compromised.
  • Develop an incident response plan should you receive an alert indicating that your POS systems have been compromised.  An incident response plan provides the organization with a “battle plan” should a compromise occur.  This type of plan is key to minimize the potential reputational impact to the organization should such an attack be confirmed.  A good incident response plan can keep you from making mistakes as you navigate the mine field that is the media circus that comes with a breach.
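
As referenced in the second bullet, here is a toy version of that egress check, assuming your firewall or flow collector can export connections as "src dst port" lines; the allowlisted destinations are hypothetical:

```python
# Flag any POS segment connection to a destination that is not on the
# allowlist.  Reads "src_ip dst_ip dst_port" lines from stdin; the
# allowlist entries below are hypothetical.
import sys

ALLOWED = {
    ("203.0.113.10", 443),  # hypothetical processor endpoint
    ("10.20.30.40", 22),    # hypothetical internal support system
}

for line in sys.stdin:
    try:
        src, dst, port = line.split()[:3]
    except ValueError:
        continue  # skip malformed lines
    if (dst, int(port)) not in ALLOWED:
        print(f"INVESTIGATE: POS host {src} talked to {dst}:{port}")
```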

Three straightforward and simple steps that can minimize the threat of StarDust, plus a documented incident response process should you unfortunately be breached.

Security does not have to be rocket science.

09 Nov 13

Removing The Drama Of A PCI DSS Assessment

I had to prepare a presentation for a client a while back giving them some tips on how to prepare for and get through a PCI assessment as easily as possible.  I thought it might be good to share those thoughts.

Trust But Verify

This famous quote from US President Ronald Reagan is the mantra of a PCI assessment.

The PCI DSS is based on the “trust” that organizations are complying with the PCI DSS.  However, self-assessment processes and QSAs are used to “verify” that the organization is, in fact, complying with the PCI DSS.  As a result, the organization being assessed not only has to produce documentation to that effect, but the QSA must also observe that the PCI DSS requirements are being followed.

The net is that just because you say something is fact does not make it so; your QSA must substantiate your statements before they, too, can treat them as fact.  If you remember nothing else but this simple truth, you will understand why a QSA must do what they do.

Scope

If PCI assessments go wrong for any reason, this is probably the primary reason.  It fascinates me that people often profess ignorance of the PCI DSS, yet somehow become experts on the subject when it comes to scoping.

Remember point number one: trust but verify.  Under that premise, the PCI SSC makes confirming the scope of the PCI assessment a QSA’s primary responsibility.  As a result, in order to confirm that scope, the QSA must look at everything and then, through investigation and evaluation, determine that the areas you deem out of scope are, in fact, truly out of scope.

Let your QSA ask their questions and conduct their observations without arguing with them about scope.  They are only doing this because they are required to confirm the facts, and your fighting with them about scope is only going to make them wonder what you are trying to hide.  The bottom line is that arguing with your QSA about scope only makes your assessment all the more painful and time consuming.

If you truly want to avoid arguing over scoping, get a copy of the Open Source PCI Scoping Toolkit.  Go through your environment and determine the categories of all of your systems and networks.  This is a good annual exercise because you need to prove your scope every year.

Applicability

According to the PCI SSC, there are five PCI DSS requirements that can never, ever be marked as ‘Not Applicable’: 1.2.3, 3.2.1, 3.2.2, 3.2.3 and 11.1.  I have discussed these all before but they deserve another quick discussion here.

Clients will argue ad nauseam that wireless is not implemented or is out of scope and therefore refuse to discuss wireless.  For requirement 1.2.3, a QSA is required to document the procedures they followed to rule wireless in or out of scope.  That of course means the QSA must investigate any wireless networks and evaluate if the controls are rigorous enough to keep wireless out of scope.  For requirement 11.1, the QSA must investigate and evaluate if the organization’s controls surrounding the detection of rogue wireless are appropriate regardless of whether or not the organization has implemented wireless networking.

3.2.1, 3.2.2 and 3.2.3 are all related to the securing of cardholder data when it is stored.  Even if an organization is not storing cardholder data on their systems, a QSA must document the procedures they used to confirm that cardholder data is not stored on the organization’s systems.  This usually involves a review of flat files and database schemas and the running of utilities and queries against those systems and databases looking for cardholder data.

The bottom line is do not argue about something being ‘Not Applicable’ and then hinder the QSA’s investigation to prove it is ‘Not Applicable’.  Do not get me wrong, you need to keep your QSA on point, but remember that QSAs are required to evaluate the situation and then document the process used to determine that a particular requirement is ‘Not Applicable’.  All you do by complicating that investigation is add more time to your assessment and, potentially, cause a requirement to be marked as ‘Not In Place’ instead of ‘Not Applicable’.

Yes, I Did Kind Of Ask That Earlier

Like security, the PCI DSS also works from a ‘defense in depth’ approach.  A lot of the questions QSAs ask are very similar, just asked from a different perspective.  The people that develop assessment and audit programs will tell you that this is the most effective way to uncover the level of compliance with a given program.  The reason is that organizations that have not integrated a compliance program into their day-to-day operations will typically provide inconsistent or confusing answers to similar questions.  Not that this is a perfect technique, mind you, but it does work the majority of the time.

Please be patient with your QSA.  They did not write these procedures, but they are required to execute them.

Answer The Question

Most people, yours truly included, suck at being questioned, particularly in a legal proceeding.  Lawyers always instruct anyone that will be called to testify to take their time, focus on the question being asked and only answer the question being asked.  Never, ever provide any information outside of the question, i.e., do not elaborate.  The trouble is that lawyers know silence is a vacuum and it is human nature to fill that vacuum with extraneous information.  Hence the typically long pauses between their questions.

QSAs and auditors tend to operate under the same principle as lawyers.  People get into trouble when they start talking about things that are outside of the question, out of scope or not relevant to the assessment.  Such responses will confuse the QSA for a moment as they try to reconcile your remarks.  But then the QSA may question whether they truly understand the environment and, possibly, the scope of the assessment.  It is then that they may start quizzing you and your staff as they go back and reconfirm their understanding of the environment.  All of this takes time, time away from the assessment process as you cover old ground while the QSA re-verifies the facts.

The lesson to be learned here is that there is nothing wrong with saying, “I do not know” or “I will have to look into that question and get back to you.”  The worst thing you can do is try to “tap dance” around the question or never really answer it.  If you do not have the answer, then find out who does and point the QSA to that person.

Prepare

And finally, the best thing you can do to avoid all of these issues is to walk through the PCI assessment process and requirements with those of your staff that will be interviewed/observed and make sure they understand the questions to be asked and how they should be answered.

If you really want to know what the QSA will ask, why they will ask and the evidence they will require, get a copy of the PCI DSS ROC Reporting Instructions from the PCI SSC Document Library.  The Reporting Instructions document is the “Bible” for QSAs as it documents how they will be assessed in a PCI SSC Quality Assurance review.  Reviewing and understanding this document will go a long way to minimizing the “What do you need that for?” questions that all QSAs encounter.

For each requirement’s tests, the Reporting Instructions will tell you:

  • What observations, if any, need to be performed and documented.
  • What documents, if any, need to be collected and reviewed and what information needs to be identified in those documents.
  • What people, if any, need to be interviewed and about what topic(s).
  • What processes, actions taken or states of equipment, if any, need to be observed and documented.
  • Whether or not sampling can be used.

Using the Reporting Instructions, you can also gather a lot of the observation evidence ahead of time.  Your QSA will still have to conduct some observations themselves, such as confirming that default passwords are not used, that timeouts occur, that change management operates and the like.  But gathering screen shots and documenting the testing conditions you used will go a long way toward making your assessment go much more smoothly and quickly.

Hopefully this discussion will help you get through your next PCI assessment without all of the associated drama that can come from such an exercise.

24 Sep 13

Coming Attractions

On September 12, 2013 the PCI SSC released the drafts of version 3 of the PCI DSS and PA-DSS.  In reviewing the PCI DSS, there are six new requirements that will be considered ‘best practices’ until July 1, 2015, when they will become requirements.

  • 6.5.6 – Insecure handling of PAN and SAD in memory.
  • 6.5.11 – Broken Authentication and Session Management
  • 8.5.1 – Service providers with access to customer environments must use a unique authentication credential (such as a password/phrase) for each customer environment.
  • 9.9 – Protect point-of-sale (POS) devices that capture payment card data via direct physical interaction with the card from tampering and substitution.
  • 11.3 – Develop and implement a methodology for penetration testing that: is based on industry-accepted penetration testing approaches (for example, NIST SP800-115), includes coverage for the entire CDE perimeter and critical systems, includes testing from both inside the network, and from outside of the network attempting to get in, includes testing to validate any segmentation and scope-reduction controls, defines application-layer penetration tests to include, at a minimum, the vulnerabilities listed in Requirement 6.5, defines network-layer penetration tests to include components that support network functions as well as operating systems, includes review and consideration of threats and vulnerabilities experienced in the last 12 months, and specifies retention of penetration testing results and remediation activities results.
  • 12.9 – Additional requirement for service providers: Service providers acknowledge in writing to customers that they will maintain all applicable PCI DSS requirements to the extent the service provider handles, has access to, or otherwise stores, processes, or transmits the customer’s cardholder data or sensitive authentication data, or manages the customer’s cardholder data environment on behalf of a customer.

I will discuss requirements 6.5.6 and 11.3 in separate posts.  I am not going to discuss 6.5.6 until I have a better understanding of how the PCI SSC expects QSAs to test that memory is being managed properly.  I am avoiding 11.3 because it contains enough for a post of its own.  But the others can be addressed in this post.

First, I have to say that I was amazed that these actually had to be codified as they are addressed through a number of other requirements.  But having run into numerous instances where I have encountered these situations, I understand why the PCI SSC felt the need to explicitly codify them.

For requirement 6.5.11, the guidance provided states:

“Secure authentication and session management prevents unauthorized individuals from compromising legitimate account credentials, keys, or session tokens that would otherwise enable the intruder to assume the identity of an authorized user. “

This requirement is targeting the botnets and Trojan attacks such as with Citadel and Zeus.  The problem here is that these are attacks on the end user, not the merchant.  As a result, what this new requirement is going to likely be looking for is for the merchant to be using methods to secure authentication and communications such that man-in-the-middle, man-in-the-browser and similar attacks are minimized or even eliminated.  It will be interesting to see how the PCI SSC expects this to be accomplished.

It has been a long time coming for 8.5.1.  Most QSAs have encountered this situation and we never liked it.  The situation I speak of is managed service providers and software vendors using the same user identifier and password for all of the customers they support.  While one can appreciate why this occurs, it does create a problem should those common credentials become known outside of the organization, which has been the case in a number of breaches.  As a result, the PCI DSS has been changed to include this new requirement that managed service providers and software vendors use unique authentication credentials with each customer.

Requirement 9.9 explicitly addresses a best practice that has already been adopted by a lot of merchants.  A number of merchants have experienced tampering with their card terminals over the years.  This typically took the form of soldering a USB thumb drive or SD card into a terminal to collect track data and then swapping the doctored terminal for a good one at the merchant.  This threat is typically mitigated by video monitoring of terminals as well as the use of serialized security tape or tamper evident seals over a terminal’s case seams that are checked at least daily to ensure that terminals have not been changed out or tampered with.

And finally, requirement 12.9 calls out that service providers explicitly acknowledge in a document that they will maintain compliance with the PCI DSS for all relevant services.  Apparently the existing requirements in 12.8 were not providing enough assurance that service providers were complying with the PCI DSS.  So now we are going to require that all service providers acknowledge, in writing, that they will maintain compliance with all relevant PCI DSS requirements for all services provided to their customers.

04 Jul 13

Is It The Standard Or Process?

I question whether or not there really is a need to change the PCI DSS this Fall.  If you look at the requirements, were they truly followed as designed, the vast majority of breaches would either not occur or would be fairly limited and contained.  But the breach reports from Verizon, Trustwave and the like tell a different story.  Most breaches were the result of one or more PCI requirements that just were not working properly.

I think where the changes need to be made is in the assessment process as I believe this is where the most impact could be realized.  The current assessment process falls way short in confirming that an organization is complying with the PCI DSS all of the time, not just when the QSA is around.  This results in the very valid complaint by PCI critics that the process is a “check the box” approach.

If you look at the current PCI assessment process, it tests a very limited number of requirements over the assessment period.  QSAs only ensure that an organization has had four quarters of passing external and internal vulnerability scans (11.2), quarterly facility wireless scanning (11.1) and appropriately documented and approved changes to the cardholder data environment (6.4).  All of the other requirements are assessed at whatever point in time the QSA happens to assess them.  The rest of the time the organization could be non-compliant and the QSA would be none the wiser.

If we are truly going to address the findings in these breach reports, we need to be ensuring that organizations comply with the PCI DSS all of the time.  That means that QSAs need to be testing more requirements over the assessment time period to ensure that an organization is actually complying with requirements.

Based on that premise, here are my recommendations.

  • Change controls.  The PCI DSS requires QSAs to request all changes to the CDE so that they can determine if more than quarterly vulnerability scanning and annual penetration testing was required during the assessment period.  The breach reports indicate that attackers are consistently finding simple misconfigurations as a way into networks which means that change control is likely not being followed consistently.  QSAs should be testing the entire change control process to ensure that changes to infrastructure are being appropriately tracked, reviewed and approved throughout the assessment period.  The reason is that we regularly encounter information security personnel that are only involved in evaluating and reviewing changes that affect PCI compliance and nothing else.  We also encounter instances where only changes that affect PCI compliance are tracked.  You have to wonder how changes are determined to affect PCI compliance.  Obvious changes to the CDE are easy to identify.  But other changes could implicitly affect the CDE but not necessarily be identified as such because the people reviewing them do not see the connection.  As a result, organizations have no idea if changes outside of the CDE could impact their PCI compliance because there is either no record of those changes or information security has not been consulted.  These organizations are typically relying only on luck to protect them.
  • Mitigation of vulnerabilities.  Most QSAs assess patching by reviewing the quarterly vulnerability scans and making sure that vulnerabilities do not appear on the next quarterly scan.  If any vulnerabilities appear on subsequent scanning reports, then QSAs are supposed to assess what mitigating controls were put in place while the vulnerability was unpatched.  QSAs typically do a pretty good job proving that organizations’ patching processes work reliably.  But when it comes to mitigation, QSAs do not necessarily do a great job determining that open vulnerabilities are truly mitigated.  This is not always the QSA’s fault as the organization may not be keeping the necessary documentation to support that open vulnerability risks are being mitigated.  The bottom line here is that the assessment process needs to assess all vulnerabilities that were left unpatched during the assessment period to ensure that they were mitigated while they remained unpatched.
  • Access controls.  As with change controls, the current PCI assessment process only requires the QSA to test those accounts that have access to the cardholder data environment (CDE).  To add insult to injury, most organizations have multiple access control systems in use for securing servers, infrastructure, third party monitoring, etc.  All of these systems are typically in-scope for assessment, but a lot of QSAs focus only on those in-house.  Access control is an all or nothing proposition; you are either doing it consistently across the board or you are not.  Based on the breach reports, attackers are using whatever accounts they can to get a foothold and then working their way to accounts that provide more privileges.  If that is the case, then how will testing only accounts that have access to cardholder data (CHD) help this situation?  It does not.  Therefore, the assessment of access controls needs to look at the whole picture, not just a privileged few.  The bulk of the testing may be relegated to those with access to CHD, but the entire process of granting and revoking access needs to be assessed to ensure that controls are being followed for everyone.  Testing of privileged accounts needs to be enhanced to address what those accounts have access to.  Do the DBAs have administrative access to the OS?  Do network administrators have administrative access to servers?  Do system administrators have administrative access to network devices?  These are just examples of questions that a lot of QSAs do not answer during their assessments.  A lot of QSAs are only interested in determining that privileged access is controlled, not who has access, why they have access and whether that access is justified.
  • Monitoring and Alerts.  QSAs are to ensure that logging and alerting are enabled as part of their testing.  QSAs are to ensure that an organization has log data online for at least three months and offline for a year.  QSAs then move to observing the various monitoring consoles to ensure that alerts are generated.  However, no testing is done over the reporting period to ensure that alerting functioned as configured, by sampling the alerts generated and then ensuring that those alerts were properly investigated and either ruled out as serious or escalated for further investigation.  Based on the breach reports, anomalies are not being researched, and this is resulting in breaches taking months to identify or, worse, going unnoticed.
  • Sampling.  Finally, testing needs to be more than just testing three or four items in each of these categories.  There needs to be sampling over the entire assessment period, not just the day the QSA is investigating the requirement or one item per quarter.  A random statistical sample would be best, but given the populations involved, that might not be feasible in the time frame required to produce a report.  However, it is not inconceivable that this could result in at least 50 to 100 items being tested in some of these categories.

These are the key tests that would go a long way in improving the assessment process and address the findings from the breach reports.  The trouble is that this sort of enhanced testing is obviously going to drive up the cost of a PCI assessment whether the QSA does the testing or an organization’s internal audit function does the testing.  If the PCI SSC and the card brands are truly serious about making the standards meaningful, then this is where I would focus, not on making changes to the standard.

Remember my mantra, “Security is not perfect.”  So while these changes are not going to absolutely, positively prevent all breaches, they will go a long way in ensuring that organizations are as secure as they can be all of the time rather than just when the QSA is around.

28 May 13

BlackPOS

I got a Tweet from a friend today regarding this new piece of malware found out in the wild and dubbed ‘BlackPOS’.  BlackPOS is very similar in nature to vSkimmer.  Now before everyone goes off and panics, if you are religiously following the PCI DSS, BlackPOS should not be an issue and here is why.

  • Requirement 11.5 – Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.  BlackPOS does a lot of manipulation around known file names, but the hash values of those files should change from the known good values, so any file monitoring system should alert on that fact.  It also uses file names that would never exist on a production system, so those should also generate an alert.  In addition, BlackPOS creates a TXT file that also should generate an alert when created.  However, if you are not alerting in real-time, you should be so that you pick up these issues as soon as possible.  This is where the bad guys are headed with their attacks, so you may as well alert as soon as an incident occurs so that you can address it before it gets out of control.
  • Requirement 1.1.5 – Documentation and business justification for use of all services, protocols, and ports allowed, including documentation of security features implemented for those protocols considered to be insecure.  BlackPOS uses FTP to move the TXT file from the POS system to the attackers’ server.  If you are allowing FTP to flow freely from your POS or cardholder data environment (CDE) to anywhere on the Internet, you are not PCI compliant in my opinion, even if you have some bizarre business justification.
  • Requirement 5.1 – Deploy anti-virus software on all systems commonly affected by malicious software (particularly personal computers and servers).  While BlackPOS was only identified today, the anti-virus vendors will most likely have signatures out by the time you read this, so your systems will be looking for BlackPOS as soon as you get your updated signatures.

Just these three requirements can stop this sort of an attack.  Yet, time and again we see these attacks succeed because people are not properly implementing their file integrity monitoring and not restricting network traffic flowing out of their internal networks.

PCI compliance does work when you use it the way it was intended.

24 Feb 13

2013 Threats To Databases

Akamai just released their third quarter 2012 Internet statistics, and they point to China as the source of at least a third of all attacks.  Not only that, the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I got on a Webinar with Application Security Inc. where they discussed what they see as the top risks for databases in 2013 and, lo and behold, the risks align.  It is no wonder that Chinese attackers are focused on SQL Server; not only are targets plentiful, but most of the time the software is improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement,” only to never get back to it, or version 2 turns out to be a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection it was because the protocol being used required a certain amount of risk in order for the application to work.  While a few years ago that was true, it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our “I probably won’t be here in six months anyway” view of employment, it will be someone else’s problem, so why bother?  The next person can fix it.

Requirement 6 and, in particular, requirement 6.6 require that applications be tested and remediated before they are placed into production.  The operative word here is ‘before’, which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker.  Better yet, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 requires all of the build and security standards.  The idea is that these standards tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start, as opposed to finding out after the fact that one of those “just in case” services or features is insecure.

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication, which allows both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is that people seem to forget about the SQL Server-managed accounts.  As a result, those accounts typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road.  These accounts can also be forgotten for monitoring, so if their credentials are compromised, it might not be recognized for a while, if at all.

Even if SQL Server-managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password expiration, password complexity and the like are not automatically enforced for SQL Server-managed accounts the way they are for Active Directory accounts.  As a result, people outside of the DBA arena assume that SQL Server-managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
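
If you want to find the forgotten SQL Server-managed accounts, the sys.sql_logins catalog view exposes them along with their policy settings; a minimal sketch, assuming pyodbc and a hypothetical DSN:

```python
# Audit SQL-authenticated logins in a mixed mode SQL Server.  The DSN
# name is a hypothetical placeholder for your own connection details.
import pyodbc

conn = pyodbc.connect("DSN=POSDB")
rows = conn.execute(
    "SELECT name, is_disabled, is_policy_checked, is_expiration_checked "
    "FROM sys.sql_logins"
).fetchall()

for name, disabled, policy, expiration in rows:
    flags = []
    if not policy:
        flags.append("no password policy")
    if not expiration:
        flags.append("no password expiration")
    if flags and not disabled:
        print(f"REVIEW: login '{name}' - {', '.join(flags)}")
```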

An access control issue that occurs with all databases is the use of “service accounts” for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control.  Then to access the database, the application uses a single service account.  Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  When the credentials are stored in the code, changing the service account’s credentials requires an application change.  But the larger question is who has access to the credentials, and how are you ensuring that everyone understands their responsibility for keeping those credentials secure?
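
One simple way to keep service account credentials out of the code and out of world-readable parameter files is to inject them at run time; a sketch, with illustrative variable names:

```python
# Pull service account credentials from the environment at run time
# rather than hardcoding them.  The variable names are illustrative;
# the environment would be set by a service manager with restricted
# permissions, never committed to source control.
import os

def get_db_credentials() -> tuple[str, str]:
    user = os.environ["POS_DB_USER"]
    password = os.environ["POS_DB_PASSWORD"]
    return user, password

user, password = get_db_credentials()
```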

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, then it will be virtually impossible to protect the information in the databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users’ credentials as well as the credentials for administrators of the e-Commerce environment.  Combine that with the other conditions above: compromise the database and you have access to the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are likely stored in the same database or in a related database that shares administrator credentials with the compromised database.

Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because that would raise their cost to a point where they are not priced for small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will still be at risk.

One would think using Active Directory or another directory service would be an easy solution.  Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly, and on the face of it that works for organizations that host their own Web presences.  But for service providers it is not that easy, because each customer’s Web presence would have to have its own branch in the directory’s forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which drives up the cost of a site.  The manual process also delays establishing a site until the directory maintenance is completed, which is totally unacceptable in our “have to have/do it now” world.

For the time being we are stuck with using the database to store credentials.  That being the case, that database should not be mixed in with the other databases; it should stand on its own and not be accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found to be too cumbersome to implement.
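
Part of that proper engineering, a technique worth naming even though the conditions above rarely deliver it, is storing passwords as salted one-way hashes rather than reversibly encrypting them, so a stolen credential table does not directly yield passwords and there is no decryption key to co-locate with the data.  A minimal standard-library sketch; in practice a vetted password hashing library is preferable:

```python
# Salted one-way password hashing with PBKDF2 from the standard library.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # store both; neither reveals the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```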

06 Aug 12

Third Party Service Providers And PCI Compliance

There seems to be a lot of confusion regarding third parties that provide networking or hosting services and their obligations regarding PCI compliance.  This confusion is not uncommon as merchants and their service providers have not necessarily been provided enough guidance to understand their obligations.  I hope this post will clarify those obligations for all involved.

If you learn nothing else from this post, learn this: if a third party provides your organization a service that has access to your cardholder data environment (CDE), or the third party could come into contact with your cardholder data (CHD), then that third party must ensure that the service complies with all relevant PCI requirements.  As a result, the third party needs to either allow you or your QSA to assess the services they are providing, or provide you with an Attestation Of Compliance (AOC) documenting that those services have been assessed and are PCI compliant.

In the past, I have stated that third parties could also submit a letter signed by an officer of the third party stating that all of the services provided to their customer are PCI compliant.  Now that v2.0 of the PCI DSS has a separate AOC and the PCI SAQs have the AOC built into the SAQ, there should be no reason to need such a letter or to ask for one.  If a letter is what your third party is offering, it is better than nothing, but you should be pushing them hard for an AOC.  If they are reluctant to get you an AOC, as part of your vendor management process, you should take that into account and probably begin looking for a new vendor that will provide an AOC for their services.

The most common issue we run into with third parties is that their AOC or other representations of PCI compliance do not cover all of the services provided to the customer.  In case after case, we see the AOC covering requirements 9 and 12 and nothing else even though the services provided may require compliance with some or all of PCI requirements 1, 2, 3, 4, 5, 6, 7, 8, 10 and 11.

In a lot of cases, it is not that the third party does not want to comply with PCI; it is that they are taking the lowest common denominator approach and only had those services assessed that all of their PCI-obligated customers ask about.  That way they reduce the cost of having a QSA assess their environment.  These third parties accept the fact that any customer that needs more services assessed will have to do it themselves.

Related to this issue is the third party that offers their SSAE 16 Service Organization Control (SOC) 1 report as proof of PCI compliance.  While a SOC 1 report can cover a few PCI requirements, people must remember that the SOC 1 report is structured specifically for financial auditors to ensure that the controls at a third party are properly constructed to support financial reporting at the customers.  As a result, a SOC 1 report is not a substitute for an AOC that covers all services.  There is an alternative, which is to have the third party go through a SOC 2 report that focuses on the security controls of the PCI in-scope services provided.  We are hearing from third parties inquiring into the SOC 2 report, but cost and a lack of customers requesting such a report are why we do not see more SOC 2 reports available.

Another common issue we encounter is the refusal of the third party to cooperate in assessing the services provided to ensure they are PCI compliant.  There are still third parties that argue their services are not in-scope for PCI compliance even when it is painfully obvious that the third party’s personnel have access to their customer’s CDE and/or CHD.

The most common third party relationship we encounter is the management of routers or other layer 3 devices.  Where we encounter the most confusion in this relationship is in regard to the use of encryption to keep the network services organization out of scope for PCI compliance.  The key here is that if the network services organization manages the encryption of the network, then they are in-scope for PCI compliance.  The reason is that the employees of the network services organization have access to the encryption keys and therefore could decrypt the communications and gain access to CHD transmitted over the network.  As a result, at a minimum, the network services organization is responsible for complying with some or all of requirements 1, 2, 4, 6, 7, 8, 9, 10 and 12.  If you receive such services and are not getting an AOC that covers these requirements, then you should be doing more work on your own as well as asking the third party why they are not covering more of the necessary PCI requirements.

The next most common service we encounter is the network services firm that is managing or monitoring an organization’s firewalls, remote access or intrusion detection/prevention.  Such services always put the third party in-scope for PCI compliance.  Some or all of requirements 1, 2, 6, 7, 8, 9 and 12 will need to be assessed for compliance with the PCI DSS.  The log capture and analysis requirements in requirement 10 may also apply if your organization is not capturing and analyzing the log data from these devices.

Another group of third parties we encounter a lot is records retention vendors.  Organizations like Iron Mountain have conducted their own PCI compliance projects and readily hand out their AOC to customers.  However, where we see issues is with such vendors that provide their own tape library for their customers to use for backup.  We have encountered a number of third parties doing the encryption at their library, which puts them in-scope for PCI compliance, at a minimum, for requirements 3, 4, 6, 7, 8, 9, 10, 11 and 12.

We encounter data center outsourcing a lot with large organizations, but small and mid-sized organizations are also hopping on the outsourcing bandwagon.  This puts the third party in-scope for PCI compliance when the third party is responsible for maintaining the environment, such as applying patches, managing servers or any other activities that would allow the third party’s personnel to potentially have access to CHD.  In such situations, at a minimum, the third party is responsible for complying with some or all of requirements 2, 5, 6, 7, 8, 9, 10 and 12.  Compliance with some or all of requirement 1 may be applicable if the third party is managing your firewalls or routers.  Compliance with some or all of requirements 3 and 4 may also be applicable if the third party is responsible for managing encryption keys for encrypting CHD or encrypting communications.

Where the most confusion regarding third party responsibilities occurs is in regard to “The Cloud.”  The most common reason for this is that every vendor seems to have a different definition of what “The Cloud” is, based on their particular services.  Using the definitions provided by the National Institute of Standards and Technology (NIST) in their publication SP800-145, ‘The NIST Definition Of Cloud Computing’, I can provide the following guidance.

If your organization is purchasing Infrastructure as a Service (IaaS), then the third party providing these services will typically be out of scope for PCI compliance except for requirements 9 and 12.  There are some instances where IaaS implementations may require compliance with the PCI DSS if the third party is managing network infrastructure that comes into contact with CHD as is usually the case with private cloud environments.

For Platform as a Service (PaaS) and Software as a Service (SaaS), the third party will have to provide PCI compliance for the services they are providing to your organization.  That is because with either of these service offerings, the third party must have access to the CDE and will have the potential of coming into contact with CHD.

The problem with the majority of PaaS and SaaS vendors is that they only deal with your organization through a Web-based interface, i.e., everything is automated – contracts, support, etc.  As a result, the contract is a “take it or leave it” proposition that does not usually cover everything needed for PCI compliance, and there is no way to independently verify the representations made by the third party.  On top of that, the AOC provided by the third party typically covers only the physical security requirements in requirement 9, and possibly some of requirements 11 and 12, even though the third party may have PCI compliance responsibilities well beyond what is represented in their AOC.

If this is the case, there is little you or any QSA can do to properly assess the environment to ensure it is truly PCI compliant.  As a result, we have a lot of organizations that try to develop compensating controls for these cloud implementations.  These organizations very quickly and frustratingly find out that there are very few, if any, controls on their side of the equation that can get them to “above and beyond” the original requirement.

I know there are a lot of other examples of services being provided to merchants.  But, hopefully these examples can assist you in clarifying what you need or do not need from your third parties when it comes to PCI compliance.

02 Oct 11

Defense In Depth

I have a slide in my security presentation deck that discusses the concept of defense in depth and how, when you start opening ports or using encrypted data streams, you are punching holes in one or more of your security layers.  It amazes me how many people still do not understand how defense in depth works and how much security standards such as the PCI DSS rely on this concept.

So let us take a look at the various elements of security and the requirements of the PCI DSS and see how they bring defense in depth to bear.  Keep in mind this is an example and does not encompass everything an organization could do to increase defense in depth.

For most organizations, the first level of defense is at their firewall.  Requirements 1 and 2 talk to how you should use a firewall and secure it.  The biggest mistake that organizations make is not configuring their firewall properly.  And by configuration, I am not just talking about the configuration of the firewall’s software; I am also talking about where and how the firewall is used in the network.

The next level of defense for most networks is usually some form of intrusion detection/prevention system.  Some of the requirements in 10 and 11 talk to intrusion detection/prevention.  IDS/IPS capability may be provided in a separate appliance or may be part of an organization’s firewall.  The keys to using an IDS/IPS are keeping it current with its attack signatures, monitoring its log data and/or console and ensuring that it is not being overwhelmed by network traffic.

One thing that continues to amaze me is how many implementations of IDS/IPS I encounter where the IDS/IPS sits in the middle of encrypted data streams.  IDS/IPS systems cannot examine encrypted data streams unless they have the decryption keys, which they typically do not have.  As a result, encrypted data streams go unexamined, and sensitive data and/or attacks could be going right past the IDS/IPS.

How users authenticate to your network and devices is also a level of defense.  Requirements 7 and 8 of the PCI DSS talk to this point.  And it is not just authentication to applications that process, store or transmit cardholder data; it is also authentication to infrastructure devices and to databases.

It has been more than five years since the “sa” default password debacle, and yet you still encounter applications that use service accounts to access their databases where those service accounts have no password.  The rationale?  “We did not want to code the password into the application,” is the common reply.
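
Not coding the password into the application is actually the right instinct; the wrong conclusion is that the account therefore needs no password.  As a simple sketch (the environment variable names are hypothetical, and a proper secrets manager would be better still), the application can pull the credential from its runtime environment instead:

```python
import os

def get_db_credentials():
    """Fetch the service account credentials at runtime rather than
    hard-coding them in the application - or leaving them blank."""
    user = os.environ["APP_DB_USER"]          # hypothetical variable name
    password = os.environ["APP_DB_PASSWORD"]  # hypothetical variable name
    if not password:
        raise RuntimeError("Service account has no password - refusing to connect")
    return user, password
```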

The other big area of authentication issues that you encounter is with firewalls, routers, switches and other network infrastructure.  The problem is that the network administrators all use the same account and password.  You can understand their rationale, particularly on those networks where they are administering thousands and thousands of devices.

There are a number of ways to address this situation, but these are my favorite two.  The first is to implement 802.1X authentication using a RADIUS server.  Under this scenario, every network administrator has their own unique account and password to access the network devices.  Those unique accounts should be different from the network administrator’s account they use to get email and network access like every other user.  A lot of organizations already have the RADIUS server implemented for remote access, so adding in network administration access control is relatively easy.
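
For the curious, authenticating against a RADIUS server is straightforward to script as well.  Here is a minimal sketch using the open source pyrad library; the server address, shared secret and account name are all placeholders.

```python
import pyrad.packet
from pyrad.client import Client
from pyrad.dictionary import Dictionary

# 'dictionary' is the standard RADIUS attribute dictionary file
# distributed with pyrad/FreeRADIUS
srv = Client(server="192.0.2.10", secret=b"shared-secret",
             dict=Dictionary("dictionary"))

req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                           User_Name="netadmin-jdoe")
req["User-Password"] = req.PwCrypt("per-admin-password")

reply = srv.SendPacket(req)
if reply.code == pyrad.packet.AccessAccept:
    print("Access granted - session will be logged")
else:
    print("Access denied")
```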

The second way to address network administration access is to use a “jump box.”  In a “jump box” implementation, two or more “jump boxes” are placed at strategic points on the network and all network administration access is conducted through a “jump box.”  The “jump box” is fully instrumented in that all keystrokes, applications, etc. are logged and those logs are reviewed at least daily to ensure that network administrators are not changing things they should not be changing.  That means comparing service tickets for the network against the logs from the “jump boxes” and ensuring that only what was required to be changed was changed.  “Jump boxes” can also be used to control access for server administration.
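
That daily reconciliation is easy to automate, at least as a first pass.  The sketch below assumes hypothetical CSV exports of the “jump box” session logs and the approved service tickets, and flags any logged command that has no matching ticket:

```python
import csv

def load_approved_changes(ticket_csv):
    """Return the set of (admin, device) pairs with an approved ticket."""
    with open(ticket_csv, newline="") as f:
        return {(row["admin"], row["device"]) for row in csv.DictReader(f)}

def unauthorized_activity(session_csv, approved):
    """Yield logged commands that have no matching approved ticket."""
    with open(session_csv, newline="") as f:
        for row in csv.DictReader(f):
            if (row["admin"], row["device"]) not in approved:
                yield row

approved = load_approved_changes("tickets.csv")
for row in unauthorized_activity("jumpbox_sessions.csv", approved):
    print(f"INVESTIGATE: {row['admin']} ran '{row['command']}' on {row['device']}")
```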

A level of defense that usually gets little recognition is operating system (OS) hardening.  What some people seem to forget is that any computerized device has an OS whether it is a firewall, router, switch or server.  Requirement 2 talks not only to the hardening of wireless, but also firewalls, switches, routers and servers.  Every vendor publishes a guide that explains how to securely implement their OS.  Where things can get sticky is with third parties that argue that their product or software will not function if you follow the vendor’s OS hardening recommendations.  In my experience, testing a vendor’s product or software in a hardened environment typically does not have an adverse result.  However, the key is to conduct a test.
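
Verifying that a hardening guide has actually been applied can also be scripted.  Here is a tiny example checking a few common SSH daemon settings; the expected values shown are assumptions, so substitute whatever your vendor’s hardening guide actually calls for.

```python
# Expected values are examples only - take them from your hardening guide
EXPECTED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "protocol": "2",
}

def check_sshd_config(path="/etc/ssh/sshd_config"):
    actual = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop comments
            if line:
                key, _, value = line.partition(" ")
                actual[key.lower()] = value.strip().lower()
    for key, expected in EXPECTED.items():
        if actual.get(key) != expected:
            print(f"FAIL: {key} is {actual.get(key, 'unset')}, expected {expected}")

check_sshd_config()
```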

Another level of defense is anti-virus and anti-malware software.  This solution also usually includes a personal firewall on mobile devices such as notebooks, netbooks and smartphones.  Requirement 5 of the PCI DSS talks to anti-virus and anti-malware while a requirement in 1 talks to personal firewalls.  Nothing gets some people wound up more than anti-virus software.  The requirements in 5 can have compensating controls, but implementing those compensating controls consistently on mobile devices is usually just about impossible.  So while you may not have anti-virus/malware on your e-Commerce servers, you should have it on all of your desktops, notebooks, netbooks and other systems.

A level of defense that most organizations poorly manage is their collection and analysis of log data from their network devices and servers.  Requirement 10 speaks to the importance of log data.  As I have written before, log data is IT’s version of a commercial aircraft’s flight data recorder.  If you want to know why a problem occurred, log data from your devices can usually point you to the reason.  The problem most IT professionals have with log data is that they do not want to log everything because, in their opinion, that generates too much data.  However, until you have an incident, you do not know what log data will be important in identifying why the incident occurred; therefore, you need all of it.  The last thing you want to have happen is to tell management that you could not determine the cause of an incident because you did not record the critical information required to identify it.
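
The “log everything, filter later” approach does not have to be complicated.  Below is a sketch, with example file paths and patterns, that archives every event while flagging a few candidate incidents for the daily review:

```python
import re

# Example patterns only - build your own list from your environment
INTERESTING = [
    re.compile(r"Failed password for (invalid user )?\S+ from \S+"),
    re.compile(r"session opened for user root"),
]

def review(logfile="auth.log", archive_path="central_archive.log"):
    with open(logfile) as src, open(archive_path, "a") as dst:
        for line in src:
            dst.write(line)  # keep everything; you never know what matters
            for pattern in INTERESTING:
                if pattern.search(line):
                    print("FLAGGED:", line.rstrip())

review()
```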

The final defense most people think of is application development, which is covered by requirement 6.  If you are going to get push back on the PCI DSS or any other security program, this is the most likely and consistent place you will get it.  Application developers are very protective of their environments, so when you start infringing on their area, they can get rather upset.  As a result, you hear the typical lament from developers that security “restricts their creativity.”

In today’s rush to get things done, application developers usually do not have security at the front of their minds.  As a result, by the time anyone knows that there is a security issue, it is too late for it to be fixed and the application goes into production with the fix to be part of version 2.  That is exactly the situation requirements 6.4, 6.5 and 6.6 of the PCI DSS address: avoid putting vulnerable software into production.  The point of these requirements is to build a certain amount of security into the development process to minimize the number of security issues that end up in production.
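
To pick on one of the requirement 6.5 classics, SQL injection, the difference between susceptible and safe code can be a single line.  A minimal sketch using Python’s standard sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")

name = "Alice' OR '1'='1"  # hostile input

# Vulnerable: the input becomes part of the SQL statement itself
# conn.execute("SELECT * FROM customers WHERE name = '" + name + "'")

# Safe: the driver passes the input as data, never as SQL
rows = conn.execute("SELECT * FROM customers WHERE name = ?", (name,))
print(rows.fetchall())  # [] - the injection attempt matches nothing
```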

The real final defense is an organization’s policies, standards and procedures.  Yes, that paperwork that everyone thinks is “make do” work really does have a purpose.  An organization’s policies, standards and procedures are the rules that everyone is to follow to ensure security.  Those rules also provide a way to measure people’s compliance so that, in the event of an incident, those people that did not follow the policies, standards or procedures can be shown their mistakes and correct their actions in the future.  These rules also provide an organization’s framework for explaining to personnel how the organization protects its information assets and what those assets are.

There are a lot more options for defense in depth, but I think you get the idea.  Now that you understand how defense in depth works, you should also understand what happens when security personnel are asked to open ports for an application or change configurations that reduce the number of levels in an organization’s defenses.  The fewer the levels involved, the higher the likelihood that a lapse in control can result in a breach, particularly when a number of lapses in controls occur simultaneously.  This is how supposedly PCI compliant organizations end up breached.

02
May
11

Draft PCI DSS v2.0 “Scorecard” Released

It has been a long wait, but the PCI SSC has finally given us a look at the new “scorecard” for v2.0 of the PCI DSS.  For those of you that never knew about the “scorecard,” it was the document used to assess Reports On Compliance (ROCs) to ensure that QSAs had properly conducted their PCI assessments.  I have not had a chance to get through all 112 pages of this document, but I have gotten through the first part of it and I wanted to share my thoughts.

The first change to the “scorecard” is its name.  It is no longer the “scorecard,” it is titled ‘ROC Reporting Instructions for PCI DSS v2.0’.  The naming seems to indicate that once the QSA review period is over it will be posted to the PCI SSC’s Web site in the Documents Library.

Overall, the document is similar to the scorecard for v1.2.1, but no longer documents the scores that the PCI SSC QA team will use to assess QSAs.  However, from the way it is written, I would assume that if a requirement in the ROC does not contain everything documented in the Reporting Instructions, it is considered to have not met the QA requirement.

Another general comment I have is that it is woefully lacking in examples.  While there seems to be a significant amount of guidance provided for what to write in the ROC, there are also ambiguous or unclear references that could be explained if the PCI SSC provided relevant examples of what it desires the QSAs to write.

The biggest change I have found thus far is the removal of the requirement to observe network traffic, as the Network Monitoring column is gone from the Reporting Instructions.  Prior to this point, QSAs were required to obtain network traffic via Wireshark or a similar tool to prove that network traffic is encrypted.  I reviewed requirements 1.2.1.a, 1.2.1.b, 3.2.1, 3.2.2, 3.2.3 and 11.4.a that had the Network Monitoring requirement in the v1.2.1 scorecard.  Based on the training for the 2011 QSA recertification, network monitoring testing is still needed for confirming compliance with requirement 1, so even though it has been removed as a column, it appears to still be required.  However, in the Reporting Instructions the network monitoring is not explicit, so this is one of those areas where the PCI SSC will definitely need to clarify things.

The section in the Executive Summary at the front of the ROC that discusses how a network is segmented to minimize scope will now require a fairly detailed discussion regarding that segmentation.  All network segments need to be described along with their purpose as well as a discussion of how the segments are architected and whether the segments carry cardholder data (CHD).  If access is provided to the cardholder data environment (CDE), that access needs to be described and that description needs to document how access is controlled.  It is very clear from the write-up surrounding this section that QSAs and their clients will have to put much more work into this section to satisfy the PCI SSC.

Another clarification area is with the review of system configurations done as part of requirements 1 and 2.  The guidance now given by the PCI SSC is that they no longer want the documentation to be a list of configuration files that were reviewed by the QSA.  However, in the next breath, the Reporting Instructions tell the reader that a QSA must provide enough detail to prove that configuration files were reviewed.  So what is an acceptable level of detail?  Can we say that we reviewed 5 or 25 firewall configuration files?  In the past, we were told that this sort of approach was unacceptable.  The PCI SSC will need to provide one or more examples of language that they will accept.

Of all of the things I have read thus far, the one that just gets me seething is from the “Dos and Don’ts” page.  One of the “Don’ts” is “Don’t copy responses from one Testing Procedure to another.”  Further down on the list is “Don’t cross reference between responses”.  After going through our QA assessment and remediation, we were told by the QA person that we needed to do a better job of putting all of the information from earlier requirements that was relevant into every requirement as each requirement needs to be able to stand on its own.  But now, according to the Reporting Instructions, you cannot bring all of that documentation to the new requirement by using cut and paste.  What a bunch of “make do” work.

But this “make do” work is all because the PCI SSC is basically implying that it cannot trust its QSACs to do the work that is required to ensure an organization is complying with the PCI DSS.  However, just because a QSA writes something in a ROC does not mean they actually did the work.  It just means that the QSA knows how to write what the PCI SSC wants to read.  And to make matters worse, the PCI SSC issues the Reporting Instructions as guidance on just what to write, as well as telling QSACs to develop ROC templates to speed the writing process.

A prime example of this is a new requirement in the section of the ROC where the QSA documents the list of people interviewed and/or observed.  The PCI SSC now requires the QSA to document what these people were interviewed about or were observed doing.  The purpose of this new requirement is to provide even more “proof” that a QSA did their job.  Another minor example of the PCI SSC trying to get “proof” of a QSA’s work effort is the increased level of detail QSAs are being asked to document regarding the dates they were on-site for fieldwork.  In the past, QSAs were only required to document the period covered by the assessment.  However, QSAs are now also required to document all of the dates of their fieldwork as well as the duration of their fieldwork and review period.

This is one of my biggest issues with the ROC process.  The PCI SSC refuses to adopt a more intelligent and cost-effective reporting approach of documenting only requirement exceptions.  Instead, the PCI SSC requires QSAs to document their fieldwork process in the report.  As a result, an inordinate amount of time, paper and hence money is spent on what is really, in my humble opinion, a totally worthless effort.

I understand why this was required.  When the PCI SSC did not have the right to review a QSA’s work papers and other documentation, having such documentation in the ROC was the only way the PCI SSC, card brands and acquiring banks could assess whether or not a QSA had done their job.  Now that more than a year has gone by since the PCI SSC required all QSACs to include verbiage that allows the PCI SSC to review a QSAC’s work papers, putting all of this effort into writing responses should no longer be necessary.  QSAs should be able to mark a requirement either ‘In Place’ or ‘Not In Place’.  If a requirement is ‘Not In Place’, then the QSA should document why the requirement is not in place, what the organization is doing to remediate the problem and when the remediation will be complete.  Such an approach would make the creation of the ROC much faster and would make the ROC much quicker to read and easier to understand.  This is the approach used in the accounting industry for its SSAE 16 reports and there is no reason why the PCI SSC could not adopt the same approach.

The PCI SSC continues to cling to this inane reporting requirement because it apparently is relying on the readers of the ROCs to “rat out” those QSACs that are producing inadequate reports.  I hate to be the bearer of bad news, but based on my review of ROCs from other QSACs that I have encountered over the last year, the “bad eggs” are not being weeded out.  Based on my interaction with acquiring banks and various card brands, there are a lot of ROCs that are not being read in detail.  And even for those ROCs that are being read, most comments surround items determined to be ‘Not In Place’.  Occasionally we get a question about an ‘In Place’ item.  Obviously the current approach is not working and, as long as the PCI SSC continues this approach, we are not going to build trust between the PCI SSC and QSACs.

I know that this is a dilemma for the PCI SSC, but it is something that needs to be addressed and soon.  Organizations that have to go through the ROC process are pressuring QSAs to reduce costs as much as possible not only due to our current economic conditions but also because of the thin margins retailers live on.  In order to keep the PCI compliance process relevant, the PCI SSC needs to get out in front of this issue.  The PCI DSS assessment process is very labor intensive, so the only cost savings to be obtained will be in making the process less labor intensive.

UPDATE: On the morning of September 20, 2011, the PCI SSC released the final version of the Reporting Instructions along with an FAQ.  These documents can be obtained from the PCI SSC Document Library under the Additional Documents – QSA heading.



