Archive for February, 2010

28 Feb 10

Above and Beyond

Here is a topic that gets very little discussion but is a big sticking point between a lot of organizations and their QSAs.  I think there is an assumption that, because the PCI DSS discusses the topic in some detail, everyone understands it.  However, for those of us out in the field, either people are not reading the dissertation in Appendix B of the PCI DSS or there is still confusion.

To review, a compensating control needs to:

  • Meet the intent and rigor of the original PCI DSS requirement;
  • Provide a similar level of defense as the original PCI DSS requirement;
  • Be commensurate with the additional risk imposed by not adhering to the PCI DSS requirement; and
  • Be “above and beyond” other PCI DSS requirements.

To be ‘above and beyond’, a compensating control must follow these principles.

  • Existing PCI DSS requirements cannot be considered as compensating controls if they are already required for the item under review.
  • Existing PCI DSS requirements may be considered as compensating controls if they are required for another area, but are not required for the item under review.
  • Existing PCI DSS requirements may be combined with new controls to become a compensating control.

To explain these three principles, I think real-world examples work best.

An organization has six-character passwords on one of their PCI in-scope systems.  As a result, they needed a compensating control for PCI DSS requirement 8.5.10.  One of their compensating controls was the fact that they required passwords to contain special characters.  A lot of people would say that requirement 8.5.11 rules out specifying special characters in passwords; however, they would be wrong.  Requirement 8.5.11 states, “Use passwords containing both numeric and alphabetic characters.”  There is nothing in 8.5.11 requiring special characters, so requiring special characters in a password is ‘above and beyond’.  That said, in their compensating control for 8.5.10 this organization also stated that they required password changes every 90 days, which is already required by requirement 8.5.9.  As a result, the organization either had to change their passwords more often than every 90 days or remove that from their compensating control document.  They chose to remove the 90-day change requirement.
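
To make the distinction concrete, here is a minimal sketch, in Python, of how such checks separate base requirements from ‘above and beyond’ controls.  The function names are illustrative only; the thresholds mirror the PCI DSS v1.2 numbers discussed above.

```python
# A minimal sketch separating "base requirement" checks from "above and
# beyond" checks for the password example above. Names are illustrative.
import re

def meets_8_5_11(password):
    """Base requirement 8.5.11: both numeric and alphabetic characters."""
    return bool(re.search(r"\d", password) and re.search(r"[A-Za-z]", password))

def has_special_character(password):
    """NOT required by 8.5.11, so this check is 'above and beyond' and can
    form part of a compensating control for the 7-character minimum (8.5.10)."""
    return bool(re.search(r"[^A-Za-z0-9]", password))

def rotation_above_and_beyond(change_interval_days):
    """8.5.9 already requires changes at least every 90 days, so only a
    stricter interval counts as 'above and beyond'."""
    return change_interval_days < 90

print(has_special_character("s3cur3!"))   # True  -- above and beyond
print(rotation_above_and_beyond(90))      # False -- merely compliant
print(rotation_above_and_beyond(60))      # True  -- above and beyond
```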

An organization does not have anti-virus on their Windows-based Web servers and therefore needed a compensating control for PCI DSS requirement 5.1.  Where this organization was original was in using their very strict network traffic monitoring, critical file monitoring console and real-time event log analysis to determine whether malware had been put on these systems.  They also conduct daily vulnerability scans and monthly penetration testing.  All of these go ‘above and beyond’ because, while all of these items are required by the PCI DSS, the execution of these activities was at a greater frequency than the PCI DSS requires.

In an earlier post on wireless scanning compensating controls, I gave a couple of examples of combining and using existing PCI DSS controls to provide ‘above and beyond’ compensating controls.  One of the controls I mentioned was enabling MAC address filtering on the switch ports.  Since MAC address filtering is not required by the PCI DSS, this control goes above and beyond.  In addition, I suggested monitoring the network for any devices that do not respond to SNMP inquiries using your organization’s SNMP private community string.  Monitoring of this sort is not even called out in requirement 10, so I would say that it also goes above and beyond.

These examples should give you some clarity on what is considered “above and beyond.”

21 Feb 10

What Is Penetration Testing?

This question comes up more often than I would like, so it is probably a good discussion topic.  And it pairs nicely with my previous post regarding “passing” vulnerability scans.

First, let us get the obvious out of the way.  A penetration test is not a vulnerability scan, and a vulnerability scan is not a penetration test.  Yes, I know that the lines between vulnerability scanning and penetration testing are blurring with certain tools.  However, a true penetration test is one where you take the output of a vulnerability scan and attempt to leverage the vulnerabilities identified to obtain control of a device and, ultimately, of your objective.  In the case of PCI compliance, the objective is to obtain cardholder data.

Next, penetration testing is not all about the tools.  I do not care whether you use tools like Metasploit, SAINTexploit or Core Impact.  The best penetration testers use very few tools.  The one tool that all good penetration testers share and rely on most is the most powerful computer on the face of the Earth: their brain.  Tools can help, but the key to successful penetration testing is being able to connect the dots based on what vulnerabilities you have available.  You have to change your mindset from that of a “good guy” to one of “what kind of opportunity do I have and how far can I take it?”  In most cases, the target itself has very few or even no vulnerabilities, but other devices around the target may have vulnerabilities that ultimately lead to the target being compromised.  It is the penetration tester’s ability to put that path together that is important, as the path to a compromise is never a straight line.

Here is a real world example of what I am talking about.

I was working with a company and one of the things they had not performed was an internal penetration test.  As their QSA, I obviously asked them to conduct one.  The first question their Director of Security asked was whether it was legal for his group to conduct such a test.  This is the best first question to ask, whether you are a consultant or an employee.  Either way, you should always have a letter signed by an officer of the organization stating that you are allowed to conduct the penetration test.  Under federal law, it is a felony to conduct such testing without that permission.  There are a number of examples of well-meaning people who were arrested and jailed because they did not have such approval.

The next question from the Director was what tool they should use.

I stopped the Director right there and said, “Your next question should have been, is my staff qualified to conduct a penetration test?”  Just because people have certifications such as CISSP or CISM does not mean they qualify as penetration testers.  If a person has a current GIAC Certified Penetration Tester (GPEN) certification, I would consider that person qualified.  By the same token, just because someone is a great hacker does not necessarily qualify them to be a penetration tester.  A good penetration tester needs not only the skills, but also the discipline to document everything about how they got in.  I worked with a great hacker a number of years ago who was probably one of the best at getting into whatever system he put his mind to.  However, he was a lousy penetration tester because he failed to document how he compromised systems.  As a result, once he was done, he had nothing to show for all of his work other than the compromised target and a very vague memory of how he got there.  It is the documentation of the compromise that is worth its weight in gold and what you need as a result of a penetration test.  Without such documentation, there is no way to address the security shortcomings that were used to compromise the target.  In reviewing the Director’s staff, he had a couple of people that I deemed qualified, so we moved forward.

Back to the tool question.  The first tool out of the Director’s mouth was Nessus.  Nessus is a great vulnerability scanner and can do some penetration-like testing, but it is not a true penetration testing tool.  So we used Nessus to get a list of potential vulnerabilities for the in-scope PCI devices and systems, and they downloaded a copy of Metasploit to use as their penetration testing tool.  One of the things I do not like about Metasploit is that not every exploit is available for it.  Another thing that troubles me is that a lot of Metasploit exploits are “live” exploits that, if successful, compromise the target.  If you want to “neuter” Metasploit exploits, it is up to you and your programming expertise to identify the problem areas and remove them while still having a valid test.  Regardless, once a target gets compromised, the only option to correct the problem is to rebuild that device.  As a result, another set of vulnerability scans and penetration tests has to be done, which could cause the whole process to start over again.  Some of the commercial penetration testing tools use “widgets” that are installed in memory to conduct their compromises.  Since these “widgets” are only memory resident, the target only needs to be rebooted to clear them out of the system.  The key thing to note, though, is that regardless of approach, once a penetration test is done, there is clean-up that must be done afterwards in order to ensure security.

Planning an attack is very important.  Anyone can use a tool and get nowhere.  The art in penetration testing is how an attack is constructed.  There needs to be a good analysis of the vulnerability scan to see what opportunities are available.  In the case of my client, there were a very limited number of vulnerabilities with which to work.  There were a couple of low-rated vulnerabilities that showed some promise.  One of the penetration testers asked, “Low-rated vulnerabilities, how can those be used?”  It all depends on what those low-rated vulnerabilities are.  In this case, there were a couple of SMB and NetBIOS vulnerabilities that, while rated low, could be used to escalate privileges.  And that is exactly where I suggested they start.  It took the penetration testers a couple of days, but ultimately they were able to leverage those original vulnerabilities to escalate their privileges to the point where they were able to penetrate a server that stored cardholder data.

The first piece of good news is that the data stored on the compromised server was encrypted and the testers could not get to the keys, so the data remained secure even though the server was compromised.  Another piece of good news is that these penetration testers kept good notes on everything they did and had plenty of information on what needed to be fixed to improve the security posture.  The final piece of good news was that no devices were harmed (i.e., crashed) during the penetration test.  There were two devices that required replacement after the test because the escalation of privileges left them in an insecure state.  This was not too onerous since the environment is redundant and the backups were used for testing.

But there was also bad news during the penetration test.  The worst piece of bad news was that even though the penetration test set off a number of alerts, those alerts were not addressed by the network administration group.  The common refrain we heard during the debriefing was that they knew it was a penetration test and therefore just ignored them.  When asked how they knew it was the penetration test and not a valid alert, we got looks of incredulity, as though it were just painfully obvious.  However, given that the attack was launched from outside the organization, management found it hard to believe that these people knew it was not a real attack.  As a result, management is taking a hard look at this group as well as looking at ways to make the alerting more effective and require that all alerts be addressed.

So, what are the lessons learned from this exercise?

  • Vulnerability scanning is not penetration testing.  Even if the vulnerability scanner allows for the use of credentials and can correlate internal and external vulnerabilities, it does not replace a properly conducted penetration test.
  • Not everyone can be a penetration tester.  Certifications do not necessarily matter.
  • Penetration testing is not a license to crash every device you test.  Penetration testing is to prove that a compromise can occur and that an objective can be achieved, not that devices can be crashed.  Crashing devices only proves that a denial of service can be conducted, and anyone can prove that.
  • Regardless of the risk rating on vulnerabilities, you cannot discount their value in a penetration test.
  • Planning a penetration test is like planning a campaign during a war.  Not only does the path to the ultimate objective need to be plotted, but contingencies must be planned in the event the path to the objective is blocked at any point.  Planning includes considering any likely points where the compromise might be noticed.
  • Penetration testing will likely result in service outages and those outages should be taken into consideration during the planning process.  If possible, the penetration test should be conducted in a replica of the production environment.  If the penetration test is conducted in production, then management needs to understand and approve of the likely service outages that will occur as a result of this testing.
  • Penetration testing is not just running some tools and producing a report.  Even with great tools, penetration testing takes time and can take a lot of time compared to vulnerability scanning.  Patience is required.  However, if you are spending more than five days conducting a penetration test, you are likely taking too much time.
  • Whether or not you inform others in the organization that a penetration test is being conducted, you should expect any alerts that are generated to be addressed and management to be notified, just as your incident response plan requires.

UPDATE: Here is a great article on 10 tips for successful penetration testing.  http://www.csoonline.com/article/636040/penetration-tests-10-tips-for-a-successful-program

14 Feb 10

“Passing” Vulnerability Scans

What is a “passing” scan?  PCI DSS requirement 11.2.b defines a passing scan as one that does not report any urgent, critical or high vulnerabilities, i.e., any vulnerability with a CVSS base score of 4.0 or greater.  So what is the big deal and why is a passing scan so difficult to obtain?

The National Vulnerability Database (NVD) maintained by the National Institute of Standards and Technology (NIST) has 40,029 vulnerabilities cataloged from 1999 through the end of 2009.  Of those vulnerabilities, almost 94% (37,523) have a CVSS base score of 4.0 or greater.  As a result, statistics say that there are going to be times when a vulnerability scan comes back with an urgent, critical or high vulnerability.  While requirement 11.2.b allows a minimum of quarterly scanning, requirement 6.1.b requires that all urgent, critical or high patches be applied within a month.  As a result, once a vulnerability is identified by your scan, you essentially have 30 days to apply a patch, and you must rescan to ensure that the patch has been applied.
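
To put that definition in concrete terms, here is a minimal sketch, in Python, of the pass/fail test described above.  The findings structure is hypothetical; the 4.0 threshold comes straight from requirement 11.2.b, and the percentages are the NVD figures just cited.

```python
# A minimal sketch of the "passing scan" test. The findings structure is
# hypothetical; real data would come from your scanner's report.
FAILING_CVSS = 4.0   # requirement 11.2.b threshold

findings = [
    {"cve": "CVE-2009-3103", "cvss": 10.0},   # critical finding
    {"cve": "CVE-2009-XXXX", "cvss": 2.6},    # hypothetical low finding
]

def scan_passes(findings):
    """A scan passes only if no finding scores 4.0 or greater."""
    return all(f["cvss"] < FAILING_CVSS for f in findings)

print(scan_passes(findings))     # False -- patch and rescan within 30 days

# The odds are against a clean scan, per the NVD figures above:
print(f"{37_523 / 40_029:.1%}")  # 93.7% of cataloged CVEs score 4.0 or higher
```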

Under a quarterly scanning program, when a non-passing scan occurs, you must now schedule a scan at least 30 days later to prove that the urgent, critical or high vulnerability was patched.  Given that statistics say 94% of all vulnerabilities have a CVSS base score of 4.0 or greater, it is highly likely that you will have to scan at least eight times during the year: the four quarterly scans plus four more remediation scans.  However, given those same statistics, it is also highly likely that those four remediation scans will reveal new vulnerabilities, meaning that you will have to scan at least four more times.  That means at least 12 scans, possibly more.  This is why a lot of organizations just do monthly scans.

But this is not the entire patching story.  Most of the time, vendors have a patch within days or a week or two of identification.  However, there are instances where vendors have taken months or even years to deliver a patch.  As a result, in certain instances, patches may simply not be available from a given vendor.  In some rare and bizarre scenarios, we have seen vendor patches remove earlier patches, thus reintroducing old vulnerabilities.  When the systems were reviewed, the system patch records still indicated that all patches had been applied; however, the vulnerability had reappeared and had to be patched again.

Addressing vulnerabilities can get even more delayed when we talk about packaged software.  Organizations running packaged solutions typically do not have the option to patch their software within the 30-day window required by 6.1.b.  This is because packaged software vendors need to test operating system and other system patches with their software before telling their customers that a patch is compatible with the packaged solution.  In some cases, the software vendor issues their own service packs on a quarterly, semi-annual or other periodic basis that contain compatible system patches as well as any updates to their own software.

This is where the experience of the QSA comes into play.  An experienced QSA understands these realities and that scans showing new vulnerabilities are a fact of life.  As a result, I recommend the following guidelines for determining whether an organization is meeting its PCI compliance obligations regarding patching and scanning.

  • Determine that vulnerability scanning, penetration testing and patch management processes are documented.  Obtain and review all policies, standards and procedures related to vulnerability scanning, penetration testing and patch management.
  • Determine that there is proof that the patch management process works as documented.  Such proof includes reports from tools such as Microsoft WSUS, BigFix, Lumension, GFI LANguard, Shavlik HFNetChkPro and the like, as well as reviews of system patching records from the systems themselves and the vulnerability scanning and penetration testing reports.
  • Determine that the vulnerability scanning and penetration testing processes are functioning by reviewing all available reports from those processes for the PCI compliance reporting period.  Confirm that any new vulnerabilities identified are either addressed within the 30-day window or documented as to why they were not addressed.  Determine that rescanning and retesting are performed after any patching has been completed.  Remember, only your external quarterly scans need to be done by an ASV; any other scanning can be done by qualified internal resources, so you do not have to incur additional ASV costs for scans outside of the quarterly scans.
  • Review change control records and determine if any significant changes have been made to either PCI in-scope applications or the in-scope networks.  If significant changes have occurred, match the completion dates of those changes to vulnerability scans and penetration tests to ensure that scanning and testing were performed after those significant changes were implemented (see the sketch after this list).
  • If a vulnerability is not patched, obtain documentation explaining why it is not patched, the risk presented by the vulnerability, what has been implemented to mitigate any additional risk presented and, if possible, when the vulnerability is expected to be addressed.  Determine that management has approved leaving the vulnerability unpatched and accepts any additional risk presented.
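
For the change control item above, here is a minimal sketch, in Python, of the date-matching a reviewer might apply.  The record formats, dates and the 30-day follow-up window are hypothetical; real evidence would come from your ticketing system and scan reports.

```python
# A minimal sketch of the change-control cross-check: confirm that every
# significant change was followed by a scan. Dates are hypothetical.
from datetime import date, timedelta

significant_changes = [date(2010, 1, 12), date(2010, 2, 3)]   # completion dates
scans = [date(2010, 1, 15), date(2010, 1, 30)]                # scan dates
WINDOW = timedelta(days=30)   # how soon after a change a scan is expected

for change in significant_changes:
    followed_up = any(change <= s <= change + WINDOW for s in scans)
    status = "OK" if followed_up else "GAP -- no scan after this change"
    print(f"{change}: {status}")   # 2010-02-03 flags a gap in this sample
```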

If an organization can provide all of this documentation and proof, in my opinion they are meeting their PCI compliance obligations regardless of what their vulnerability scans and penetration tests document.

10 Feb 10

Extremely Mobile Payment Processing

In a previous post I discussed mobile computing and PCI compliance.  In the last couple of weeks I have been questioned about using mobile devices such as smartphones and Wi-Fi-enabled PDAs as payment terminals, and I thought this particular incarnation of mobile computing deserved an in-depth look.

Pay attention to Apple’s iPhone advertisements.  One of them shows a person processing a credit card payment on an iPhone.  As Apple likes to say, “There’s an app for that.”  However, it is not just Apple that has a payment application for a mobile device; there are also payment processing applications for Windows Mobile environments, as well as proprietary solutions from VeriFone and the like.  Some of these applications are PABP and/or PA-DSS certified.  Devices from VeriFone and the like are PCI PTS certified, but the iPhone and other cellular phones as well as PDAs are not PCI PTS certified devices.

So when the pizza delivery person shows up at your door and wants to swipe your credit card through their mobile device, how do you know that it is safe?  You likely will not know.

The security surrounding the telecommunications used by these devices is the easiest thing to discuss.  All of the devices I have been able to examine use telecommunications methods that are encrypted either by SSL v3 or TLS.  The cellular network and Wi-Fi are just used as the conduit and are not relied upon to provide any security.

Do not assume that VeriFone and the like are meeting all of the PCI standards.  While their mobile payment terminals are PCI PTS certified, the application software in those devices is not PA-DSS certified.  I pointed to the flaws in these devices in a previous post.

But there are bigger problems lurking with the iPhone.  Ask any computer forensic examiner about the iPhone and they will talk at length about the fact that the iPhone has a number of “features” that make security and privacy things of the past.  From a PCI compliance perspective, some of the more problematic issues are as follows.

  • Deleted information does not physically get deleted.  In some cases, deleted data can remain on an iPhone for up to six months or even more depending on use.
  • The iPhone has a built-in keyboard logger, so anything typed into it is recorded.
  • While it is not certain that card swipes would be retained on the iPhone, given all of the other information it retains, it is highly likely that such information would also be retained.

As a result, using the iPhone as a payment processing platform is probably not a good idea until it is certified.

So what, if anything, are the PCI SSC and/or the card brands doing about this situation?  As much as they can, given that these solutions are popping up faster than they can identify them.  The problem is that the developers of these applications are usually unaware that they are required to comply with various PCI standards.  And since the developer is responsible for getting their solution certified, unless they get ‘ratted out’ the solution will not get certified.  So it is up to the application developer and the merchants to ensure that an application is properly certified.  If that is not worrisome enough, the cost involved in certifying such an application would likely raise the cost of that solution to a point where it would not be economical for the merchant or salesperson.

07 Feb 10

Wireless Scanning Compensating Control

I got a comment regarding my post titled “Wireless Security – Random Thoughts On How To Fix” asking what sorts of compensating controls would address requirement 11.1.  Since I have been looking for a topic, I thought I would address this as well as provide people with an example of how you develop a compensating control.

In order to construct a proper compensating control, you must first do two things.

  • Define the objective of the original requirement; and
  • Identify how the compensating control addresses that objective.

In order to define the objective of the original requirement, you need to understand the requirement.  Requirement 11.1 states, “Test for the presence of wireless access points by using a wireless analyzer at least quarterly or deploying a wireless IDS/IPS to identify all wireless devices in use.”  On the face of it, the purpose of requirement 11.1 is to ensure that all wireless access points are accounted for at each location using a wireless scanner or WIDS/WIPS, so that any potential rogue access points are identified and can be removed from the network.  But is that the “real” objective?  Before you go running off for alternative ways of identifying rogue access points, let us ruminate a bit further.  The real objective of requirement 11.1 is to make sure that rogue access points do not end up on your network and, if they do, that you can identify and remove them as soon as possible.

Remember the criteria for compensating controls.  Compensating controls must:

  • Meet the intent and rigor of the original PCI DSS requirement;
  • Provide a similar level of defense as the original PCI DSS requirement;
  • Existing PCI DSS requirements CANNOT be considered as compensating controls if they are already required for the item under review;
  • Existing PCI DSS requirements MAY be considered as compensating controls if they are required for another area, but are not required for the item under review; and/or
  • Be “above and beyond” other PCI DSS requirements.

With all of this in mind, what are controls you can put in place to keep rogue wireless access points from being placed on your network in the first place?  The following should not be considered a complete list of controls you can put in place, but it does provide enough controls to create a compensating control for requirement 11.1.

  • Disable all unused switch ports.  If unused switch ports are disabled, the installation of a rogue access point cannot be accomplished by plugging it into an open port on the switch.  This approach is usually not appreciated by network administrators as it requires the re-enabling of a port whenever a device needs to be added at a location.  It can also be disliked by remote facility personnel as it means there can be additional delays in getting another network device on the network.  While this control is referenced in requirement 9.1.2, it is not germane to requirement 11.1, so we can use this control for our compensating control for requirement 11.1.
  • Enable MAC address filtering on the switch ports.  This control restricts which device can be plugged into an active switch port, essentially tying a single device to a single switch port.  As with the first bullet, MAC address filtering creates a network management issue when you want devices added or changed, so you will be adding to the effort required to change any devices out in the field.  Since MAC address filtering is not required by the PCI DSS, this control goes above and beyond.
  • Monitoring of switch ports and generating alerts whenever a device is unplugged or plugged into a switch port.  Most switches will generate log entries when a device is either unplugged or plugged into a switch port.  Those events should be investigated immediately, as they can be the first sign of someone trying to plug in any sort of rogue device, from a wireless access point to their own laptop.  Monitoring of this nature is required as part of requirement 10.  However, using that monitoring to detect rogue devices is not part of that requirement, so you can use the information you already gather for requirement 10 to meet requirement 11.1.
  • Monitoring the network for any devices that do not respond to SNMP inquiries using your organization’s SNMP private community string (see the sketch after this list).  If a device does not respond to SNMP inquiries using your organization’s SNMP private community string, it is either mis-configured or does not belong.  So if you cannot communicate with the device, it needs to be investigated and either fixed or removed.  Monitoring of this sort is not even called out in requirement 10, so I would say that it goes above and beyond.
  • Disable dynamic host configuration protocol (DHCP).  DHCP is a wonderful service, but it is also a dangerous service.  It is dangerous because it allows any device, whether it belongs on your network or not, to obtain an IP address immediately when it connects to the network.  If any device can obtain an IP address when connected, then it has full connectivity to your network.  By not implementing DHCP, you remove that risk by requiring all devices to be preconfigured for their segment of the network.  Yes, your networking people will likely not appreciate this, but it provides a good security feature.  DHCP is called out as part of requirement 9.1.2, but again, it is not germane to 11.1, so we can use it here for our compensating control.
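
Here is a minimal sketch, in Python, of the SNMP check from the list above, using the pysnmp library.  The library choice, community string and host addresses are all assumptions; the net-snmp snmpget command line tool, or any SNMP-capable monitoring platform, would work just as well.

```python
# A minimal sketch of the SNMP check described above: flag any host on the
# segment that does not answer an SNMP GET made with our private community
# string. Library, community string and host inventory are assumptions.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

PRIVATE_COMMUNITY = "example-private"    # hypothetical; use your own string
SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"      # sysDescr.0 -- any readable OID works

def responds_to_snmp(host):
    """Return True if the host answers our SNMP GET without error."""
    error_indication, error_status, _, _ = next(getCmd(
        SnmpEngine(),
        CommunityData(PRIVATE_COMMUNITY),
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(SYS_DESCR_OID)),
    ))
    return error_indication is None and not error_status

# Hypothetical inventory of a retail location's network segment.
for host in ("192.0.2.10", "192.0.2.11"):
    if not responds_to_snmp(host):
        print(f"ALERT: {host} did not answer SNMP -- investigate or remove")
```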

Because these controls can have a significant impact on your network and computing environment, we need to discuss a few more items.

First, you only need to disable DHCP at your retail locations, not your corporate office.  Typically, an organization has a lot more control over the computing environment at their corporate office than at their retail locations.  As a result, enabling DHCP at the corporate office is not as risky, but you should still keep unused switch ports and unused jacks disabled throughout all of your facilities.

If you are going to utilize wireless in any of your organization’s locations, make sure to properly isolate the wireless network from the rest of your network.  Even though you have implemented all of the compensating controls above to secure your network from rogue access points, you still need to ensure a secure wireless networking environment.  If you need wireless for operational purposes, make sure to secure it properly so that it cannot easily be compromised.  Such measures typically include not broadcasting the SSID and using WPA2 Enterprise security.

If you wish to provide wireless access to guests at your corporate office, make sure that their traffic is separated from any other wireless traffic and that they are granted access only to the Internet and no other internal resources.  A lot of organizations today require even guests to authenticate to their wireless, both to keep people from using the available bandwidth at will and to have guests acknowledge their responsibilities when using the wireless network.

Once you have the aforementioned information documented, you still have a few more things to cover in your compensating control.  You need to implement your own feedback loop to ensure these controls are functioning as designed.  Unlike Ron Popeil’s miracle TV oven, you cannot just “set it and forget it” with these controls.  You need a plan to periodically follow up on all of them.  Typically, that means tracking statistics collected from the monitoring controls.  It also means periodically observing the controls in action to ensure that monitoring is taking place and that ports really are disabled.  This sort of follow-up is easily implemented as part of your financial field audit work.

In addition to your own follow-up, your QSA also has to document what they did to confirm that these controls were in place and functioning as designed.  Their work will be similar to your own internal follow-up work, but will likely be less extensive as long as no exceptions are found.

You should now have a compensating control for requirement 11.1.  But better than that, you should now have a much better understanding of how a compensating control is developed and documented.

06 Feb 10

Non-Compliant ROCs

There really is such a thing, but you rarely ever see or hear of one.  But unlike the Loch Ness Monster or Bigfoot, they can and do exist.

There is no reason that an organization cannot file a Report On Compliance (ROC) that is not compliant.  The topic came up again because we have a client that is addressing some issues related to complying with v1.2 of the PCI DSS.  Their remediation efforts will not be done for another five or six months, but their PCI ROC needs to be filed in one month and they do not think they can put compensating controls in place to address the remaining issues.  As a result, there will be a couple of items on their PCI ROC in the dreaded ‘Not In Place’ column.

The first thing everyone needs to be aware of is that there is nothing in the PCI DSS that says an organization must file a compliant PCI ROC.  It is just that filing a compliant PCI ROC makes for much less work for the acquiring bank and the merchant or service provider involved.  But there are those out there who believe that a merchant or service provider must file a compliant ROC, and that is just false.

So, what happens if an organization files a non-compliant PCI ROC?

If an organization needs to file a non-compliant PCI ROC, then they need to be prepared for the additional scrutiny required by their acquiring bank and/or the card brands.  When a merchant or service provider files a non-compliant PCI ROC, the organization that receives it must initiate an effort to track the requirements that are Not In Place.  They need to periodically follow up on the Not In Place requirements and report their status to the card brands.  The term ‘periodically’ is left to the acquiring bank to determine, and follow-up can occur as infrequently as quarterly or as often as weekly.  The most common timeframe seems to be monthly meetings, but your experience will likely vary.  This process is required to continue until all Not In Place requirements are deemed In Place.

So, how does the acquiring bank determine that your organization’s Not In Place items are now In Place?  Well, that is where things are not so well defined.  What is defined is that the merchant or service provider informs the acquiring bank or card brands during the follow-up meeting or call that the Not In Place requirements are now In Place.  What is not well defined is what happens after being informed that requirements are now In Place.  Since there are no procedures documented in the PCI DSS, by the PCI SSC in an FAQ, or by the card brands, what happens next varies from acquiring bank to acquiring bank.

In most cases, the acquiring bank requests that the merchant or service provider get their QSA to update the ROC to reflect the changes in the Not In Place requirements.  My firm’s problem with this approach is that in updating the PCI ROC, we are only looking at those requirements that have been updated from Not In Place to In Place.  We are not re-conducting all of the testing in the PCI ROC.  As a result, we only update those requirements that have changed and we place a disclaimer in the PCI ROC that states what items were updated and when those updates occurred.  We do not update the date of the report, as the entire report was not updated.

Our preferred approach is to issue a letter with an attachment that contains the individual requirements that are now In Place.  The letter documents the scope of the re-review and the approach taken to test the updated requirements.  This approach allows for the updating of the PCI ROC, but only those items that changed and does not alter the original PCI ROC that was issued.  In this way, anyone reviewing the original report and the update has a clear understanding of what changed and why.

04 Feb 10

PCI Check Box Compliance – Volume 2

I want to shed some light on a troubling practice that I think people should be aware of.  The practice to which I am referring is QSAs apparently spending little time on-site conducting their Report On Compliance (ROC) fieldwork.  We are hearing a little too often from new clients that the amount of on-site fieldwork we schedule is significantly more than the time spent on-site by their last QSA.  As a result, I wanted to take this opportunity to discuss why a certain amount of on-site work is required if a QSA is to successfully perform their duties.  I am hopeful that with the advent of the PCI SSC’s QA program this practice will come to an end, but you never know.

About a year ago, the PCI SSC, as part of their QA program, issued a scorecard for evaluating a PCI Report On Compliance (ROC).  The scorecard calls out five verifications that may be required by each individual PCI DSS requirement.  Not every requirement requires that all five areas be covered, but a number of requirements do.  Those areas are:

  • Verified by review of documentation;
  • Verified by interview;
  • Verified by observation of system setting or configuration file;
  • Verified by process, action or state; and
  • Verified by network traffic monitoring.

Verified by review of documentation is just what you think.  The QSA is required to obtain and review all relevant documentation related to PCI DSS compliance.  Documentation is best reviewed before conducting any interviews and observations.  This allows the QSA to minimize ramp-up time on the organization’s cardholder processing environment and gives the QSA a better understanding of the environment, minimizing clarifications and questions.  Just as an FYI, based on my analysis, there are, at a minimum, 256 discrete documents required to complete a ROC.

As with documentation, verified by interview is also straightforward.  A QSA is required to interview all relevant personnel who have knowledge required to complete the ROC.  According to the scorecard, there are 129 interview topics that are required to be discussed.  Interestingly, some PCI DSS requirements that you would think do not have an interview component actually do.  This is an area that caught a lot of QSAs when they went through their QA reviews.  Interviews can typically be conducted via conference calls and LiveMeeting sessions, so there is no requirement to be on-site for these.

Verified by observation of system setting or configuration file is defined as the assessor having observed the device, component or server configuration file parameters, system setting parameters or any other parameters to prove that these parameters were set to produce the outcome specified.  To accomplish this, the assessor may use local system administrators, database administrators and application support personnel as needed.  Based on the scorecard, there are 124 observations that are required.

Verified by process, action or state is one that trips a lot of us up.  What the PCI SSC is requiring here is that the QSA observe a process, an action or the state of a device so that they can have proof that what the documentation and the interviews state is in fact what is actually executed by the personnel involved.  There are 301 observations in this category required by the PCI SSC to be performed for a ROC.

Verified by network traffic monitoring is another tough one.  It is tough mostly because a lot of organizations do not have an acceptable way to monitor network traffic.  Yes, they are monitoring their network traffic, but they are not inspecting it with a tool like WireShark, which is what is needed here.  What the PCI SSC requires is that the QSA observe the network traffic to make sure it is encrypted and that inappropriate cardholder data is not present.  There are 9 observations of network traffic required.  These can be done remotely in most cases.
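
For the curious, here is a minimal sketch, in Python, of the kind of inspection meant here: sniffing a capture point and flagging any cleartext payload containing a Luhn-valid 16-digit number (a likely PAN).  The use of the scapy library and the interface name are assumptions; a WireShark or tshark capture with an equivalent filter accomplishes the same thing.

```python
# A minimal sketch of checking traffic for cleartext cardholder data.
# Library (scapy) and interface name are assumptions.
import re
from scapy.all import sniff, Raw

PAN_PATTERN = re.compile(rb"\b\d{16}\b")   # naive 16-digit PAN candidate

def luhn_ok(digits):
    """Standard Luhn check-digit validation, to cut false positives."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

def inspect(pkt):
    """Flag packets whose payload contains a Luhn-valid 16-digit number."""
    if Raw in pkt:
        for match in PAN_PATTERN.findall(bytes(pkt[Raw].load)):
            if luhn_ok(match.decode()):
                print(f"Possible cleartext PAN: {pkt.summary()}")

# Capture 1,000 TCP packets on the cardholder data segment (interface
# name is hypothetical) and inspect each one.
sniff(iface="eth0", filter="tcp", prn=inspect, count=1000)
```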

The QSA must truly observe all of these items.  Given the number of items that must be observed, it just seems unrealistic to get all of it done in one or two days.  It is possible to do some of these observations using conference tools such as WebEx or LiveMeeting.  But when asked, our new clients tell us that no such meetings occurred.  So we are stumped as to how the previous QSA completed the observations in such a short period of time.

Regardless of what the PCI SSC requires, there is a huge amount you can learn about an organization by spending time on site.  An auditor can tell a well-run operation just by looking around and talking to people.  Looking around and talking to people takes time.  In a lot of cases, these sorts of observations of an organization’s operations can point you in the direction of potential compliance issues that just require a bit of digging.  You cannot get that kind of time if you are doing back-to-back meetings to complete your observations.  As a result, we have some concerns about whether these QSAs are identifying the potential compliance issues that may exist.

In the end, these QSAs may be saving you money at the expense of your security.



