Archive for January, 2013

23 Jan 13

Compliance Is Security – Final Answer

I am really tired of this dodge that compliance does not equal security.

I was recently on a webinar by a major security vendor, and one of their points was that executive management is finally starting to realize that compliance does not equal security.  And I realized what the problem is: people are confusing PCI ROCs, internal audit reports and the rest of the compliance assessments with an accurate assessment of an organization’s security.

Organizations develop and implement security policies, standards and procedures to protect the networks and systems used by the organization.  Those policies, standards and procedures were not created in a vacuum; they were developed based on what the organization has learned over time to protect itself, as well as items taken from ISO, PCI, HIPAA, FISMA, etc., which are derived from the lessons learned by lots of people in the information security arena.  In order to be as secure as possible, everyone in the organization must follow (i.e., comply with) all of the organization’s policies, standards and procedures.  The idea being that ISO 27K and FISMA are complete security frameworks, whereas PCI, HIPAA and the like are security frameworks focused on specific types of information.

Where the wheels keep coming off in this discussion is the confusion between compliance testing and reporting and the act of compliance by your employees and business partners.  Compliance is an ongoing, 24x7x365 effort.  Compliance assessments are a snapshot of compliance at the time the reports were written.  This is no different from going to the doctor for your annual physical, which results in a snapshot of your health at that point in time.  It is not that those compliance reports are worthless; they just need to be referenced and used properly, with the understanding that they are a snapshot.

A prime example of this is the PCI ROC.  Other than vulnerability scanning, penetration testing and change management testing, there is nothing in the PCI ROC that covers a period of time, and even those topics are limited as to the time frames used for testing.  The ultimate result is that you get a snapshot of compliance as of the date of the report.  Unless something was out of compliance at that point in time, you will never know about it.  This is not anyone’s fault; it is just how the system works, because no organization is willing to pay for their QSA/ISA to assess them 24x7x365.

What all of these frameworks assume and expect is that the organization has put into place monitoring that does test compliance with their security policies, standards and procedures 24x7x365.  For small organizations this can be done manually by monitoring event logs and syslogs on the small number of systems involved.  

For everyone else, automated toolsets are going to be required: security information and event management (SIEM), configuration management, intrusion detection/prevention, wireless monitoring, network performance and traffic monitoring, help desk, change management and similar tools are needed to do the job properly.  The volume of information that requires analysis and correlation simply cannot be handled without some form of automation.
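To make the monitoring idea concrete, here is a minimal sketch, in Python, of the kind of automated log review these tools perform at scale; the log path, message pattern and alert threshold are assumptions for illustration, not features of any particular SIEM product.

```python
import re
from collections import Counter

# Hypothetical syslog path and failure pattern -- adjust for your environment.
LOG_FILE = "/var/log/auth.log"
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
THRESHOLD = 5  # failures per source before we call it an alert

def scan_for_brute_force(path=LOG_FILE):
    """Count failed logins per source address and flag likely brute-force activity."""
    failures = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                _user, source = match.groups()
                failures[source] += 1
    # Anything over the threshold is an out-of-compliance condition to investigate.
    return {src: count for src, count in failures.items() if count > THRESHOLD}

if __name__ == "__main__":
    for source, count in scan_for_brute_force().items():
        print(f"ALERT: {count} failed logins from {source}")
```

A SIEM does essentially this across thousands of sources at once, which is why the manual approach only works for a handful of systems.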

But just having these tools does not ensure compliance either.  The tools need to be properly configured, maintained and monitored so that appropriate personnel are notified of out-of-compliance conditions and those issues are addressed in a timely manner.  If that analysis and notification process were truly going on, most of the breaches that have occurred to this point would likely not have occurred or would have been stopped early on.

But the real lesson here is that the technology side of security is the easy part.  Firewalls, intrusion detection/prevention, routers, switches, servers, security tools, etc. can all be configured to ensure 100% 24x7x365 compliance with all relevant security policies, standards and procedures.  

The hard part of security is people, for a variety of reasons.  The most obvious is that most information security people are loath to deal with people.  That issue is important and needs to be addressed in another post.

The larger people problem is that, at the end of the day, people are not 100% 24x7x365 compliant because they are fallible.  And no matter how much training you provide people, you are not going to change the fact that they are fallible.

But that is what all of that technology is for: to let you know when people fail and to stop or minimize the risks generated by their fallibility.

13 Jan 13

Bring Your Own Device And PCI Compliance

Bring your own device, or BYOD, is all the latest rage.  I believe the reason for that exuberance is the consumerization of technology.  It is that exuberance that has made everyone an “IT expert.”  Just ask any user of a smartphone or tablet and they will baffle you with vendor lingo regarding BYOD.  However, regardless of what these people think, there is a big difference between consumer use and enterprise use, the first of which is security.  In this post I am going to look at the minimum PCI DSS requirements BYOD will have to comply with in order for your organization to maintain PCI compliance.

From a PCI perspective, requirement 12.3 is very relevant in the BYOD discussion.

“Develop usage policies for critical technologies (for example, remote-access technologies, wireless technologies, removable electronic media, laptops, tablets, personal data/digital assistants (PDAs), e-mail usage and Internet usage) and define proper use of these technologies. “

Under 12.3, there are eight sub-requirements that require:

  • Explicit approval by authorized parties.  Just because you have a smartphone or tablet does not mean you get an automatic pass to connect; you must be approved for that privilege.  In most organizations, that means you need to provide a business need to have such a privilege granted.  While access to email through a smartphone or tablet is one thing, access to cardholder data should be another and be granted very judiciously, if at all.
  • Authentication for use of the technology.  If you think you are using a PIN, think again.  You still have to log onto any PCI in-scope system using a password that meets the PCI DSS requirements.  By the way, two-factor authentication is required to access cardholder data remotely, so BYOD does not get you a pass there either.
  • A list of all such devices and personnel with access.  For organizations issuing the devices, this is not a problem.  For organizations allowing any Apple iOS, Android or Windows device to connect, it is problematic.  While it can be done, you will need to document who was granted access, for what reason/purpose, and what they are using to obtain that access (i.e., make, model, etc.) even if your organization is not providing the devices.  (A minimal sketch of such an inventory appears after this list.)
  • Labeling of devices to determine owner, contact information and purpose.   I take issue with identifying the purpose of the device as that, in my opinion, could just make someone even more curious as to what is on the device.  However, identifying the owner and giving their general business address and general voice and facsimile telephone numbers for their location is what I would recommend.
  • Acceptable uses of the technology.   Email is one thing; doing something that involves cardholder data is another.  If companies think this through, there is probably little, if any, reason for BYOD to be even near the cardholder data environment (CDE).  However, if for some bizarre reason you can come up with a valid reason, then all requirements for remote access to cardholder data apply, such as a personal firewall with no way to disable it, strong passwords, two-factor authentication, an encrypted connection, etc.  And remember that these devices have keyboard loggers, so all data input is recorded; keep that fact in mind when designing your information security requirements for BYOD.
  • Acceptable network locations for the technologies.  In the case of BYOD, this is anywhere outside of the organization’s network perimeter.
  • List of company-approved products.  For those organizations issuing the BYOD, this is not a problem.  For those of you that allow anyone with anything to connect as long as it runs iOS, Android or Windows 8 RT/Pro, your list is going to say “Any iOS, Android or Windows 8 RT/Pro device.”
  • Automatic disconnect of sessions for remote-access technologies after a specific period of inactivity.  With BYOD, this can be problematic, as most of these devices do not use traditional remote access connectivity and inactivity can be in the eye of the beholder, so to speak.  Inactivity timeouts can therefore be difficult, if not impossible, to enforce in some instances, and you may have to be creative or use a compensating control to comply with this requirement.
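As referenced in the device list item above, here is a minimal sketch of what an inventory covering approval, ownership and purpose might look like; the class and field names are hypothetical, and a real registry would live in an asset management database rather than in memory.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ByodDevice:
    """One entry in a BYOD inventory per requirement 12.3; fields are illustrative."""
    owner: str
    contact: str         # general business address and phone, per the labeling item
    make: str
    model: str
    business_need: str   # the documented reason access was granted
    approved_by: str     # the authorized party who granted the privilege
    approved_on: date

class DeviceRegistry:
    """A toy in-memory registry used only to show the shape of the data."""
    def __init__(self):
        self._devices = []

    def approve(self, device: ByodDevice) -> None:
        # No business need or approver, no access -- mirroring the first two items.
        if not device.business_need or not device.approved_by:
            raise ValueError("approval requires a business need and an approver")
        self._devices.append(device)

    def devices_for(self, owner: str) -> list:
        return [d for d in self._devices if d.owner == owner]
```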

Going through the above, I think most organizations would see BYOD for what it is – a fad.  Do not get me wrong, BYOD has its uses in the corporate environment, but those uses are fairly limited and likely do not include cardholder data.

However, if you have come up with a business justification for processing, storing or transmitting cardholder data using BYOD, there are a number of other requirements you are going to have to address.  I know there are other potential requirements that could be involved, but these are the best known of those that will need to be complied with under the PCI DSS.

  • Requirement 1.4 – Install personal firewall software on any mobile and/or employee-owned computers with direct connectivity to the Internet (for example, laptops used by employees), which are used to access the organization’s network.   While personal firewall software is available from a number of security vendors, it is the enterprise management of those solutions, and the ability to globally enforce policies, that is currently lacking.
  • Requirement 3.4 – Render PAN unreadable anywhere it is stored (including on portable digital media, backup media, and in logs).  Most cell phones and tablets do not support device encryption.  As a result, how will you protect any cardholder information stored on the device?  Remember that these devices have keyboard loggers, so any cardholder data input on the device is collected and stored by the device whether you like it or not.  The bottom line is that you will have to restrict BYOD to only those devices that can support whole device encryption.
  • Requirement 4.1 – Use strong cryptography and security protocols (for example, SSL/TLS, IPSEC, SSH, etc.) to safeguard sensitive cardholder data during transmission over open, public networks.  You would not think that this would be a significant problem, but it can turn out to be very significant and I will speak about this later on.
  • Requirement 8.3 – Incorporate two-factor authentication for remote access (network-level access originating from outside the network) to the network by employees, administrators, and third parties.  This capability is available, but if you are using any of the solutions that run on the BYOD, your users will likely have issues trying to connect and get their token value from that device.
  • Requirement 8.5.10 – Require a minimum password length of at least seven characters.  This is likely a deal breaker for users.  Most like their PIN or swipe pattern and will not want to give it up for a seven character, strong password.  In addition, some security solutions may create a situation where the phone cannot be answered without unlocking the device, and a seven character password may cause calls to be missed.
  • Requirement 8.5.11 – Use passwords containing both numeric and alphabetic characters.  This can be problematic when some virtual keyboards require flipping between three or more screens to get to certain special characters.  With phones with physical keyboards, there may be limitations to the number of special characters available that could create problems with the password uniqueness requirement in 8.5.12.
  • Requirement 8.5.13 – Limit repeated access attempts by locking out the user ID after not more than six attempts.  Some security systems may not be able to enforce this without wiping the device.
  • Requirement 8.5.14 – Set the lockout duration to a minimum of 30 minutes or until administrator enables the user ID.  I am not aware of a security solution currently available that can enforce this on a smartphone or tablet.  You will have to meet this with a compensating control.
  • Requirement 8.5.15 – If a session has been idle for more than 15 minutes, require the user to re-authenticate to re-activate the terminal or session.  As with 8.5.14, I am not aware of a security solution that can enforce this on a smartphone or tablet.  You will have to meet this with a compensating control; a sketch of what server-side enforcement of 8.5.13 through 8.5.15 could look like follows this list.
  • Requirement 10.2 – Implement automated audit trails for all system components to reconstruct events.  This is not an issue with Android, may be an issue with Windows 8 RT, but is definitely an issue with Apple iOS.  There is no utility I am aware of, outside of forensic utilities, which can meet this requirement in an Apple iOS environment.  In addition, I am not certain how you get this log data back for any sort of analysis without chewing up a tremendous amount of data bandwidth or device memory.  At the end of the day, this will also likely be satisfied by a compensating control, if you can actually come up with enough controls that go above and beyond the PCI requirements.
  • Requirement 10.3 – Record audit trail entries for all system components for each event.  If meeting requirement 10.2 is a gymnastic event, then there is the configuration of the log data to ensure that all of the necessary log information is collected.  I am sure that under Android and Windows there is probably some way to ensure that the necessary log data is collected.  But with Apple iOS, Apple will have to provide this capability.  And knowing how stubborn Apple can be about having their hand forced in these matters, getting access to the configuration of log data, let alone the log data itself, will likely be a battle.  Again, this will also likely be satisfied by a compensating control, if you can actually come up with enough controls that go above and beyond the PCI requirements.
  • Requirement 10.4 – Using time-synchronization technology, synchronize all critical system clocks and times and ensure that the following is implemented for acquiring, distributing, and storing time.  Smartphones get their time synchronization from the carrier.  However, that time does not necessarily correlate to the time used by enterprise systems.  It will be close, but not necessarily the same, thus complicating any forensic examination if one is required.  This can obviously be dealt with in a compensating control.
  • Requirement 10.5 – Secure audit trails so they cannot be altered.  Given that users have administrator access on their BYOD, this could be a problem that cannot be easily solved.  Yes, there are security solutions available that can lock down a device, but they can also lock it down so far that users begin to wonder what the point of having the device is.  As a result, you will find you have a very fine line to tread in this area.
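For the lockout and idle-timeout items (8.5.13 through 8.5.15), here is a minimal sketch of what server-side enforcement could look like when the device itself cannot enforce the rules; the class and its thresholds are illustrative assumptions, not a ready-made compensating control.

```python
import time

MAX_ATTEMPTS = 6            # requirement 8.5.13: lock out after six attempts
LOCKOUT_SECONDS = 30 * 60   # requirement 8.5.14: locked for at least 30 minutes
IDLE_SECONDS = 15 * 60      # requirement 8.5.15: re-authenticate after 15 idle minutes

class AccountPolicy:
    """Server-side enforcement sketch; a real system would persist this state."""
    def __init__(self):
        self.failed_attempts = {}  # user -> consecutive failures
        self.locked_until = {}     # user -> unlock timestamp
        self.last_activity = {}    # user -> last request timestamp

    def record_failure(self, user):
        count = self.failed_attempts.get(user, 0) + 1
        self.failed_attempts[user] = count
        if count >= MAX_ATTEMPTS:
            self.locked_until[user] = time.time() + LOCKOUT_SECONDS

    def record_success(self, user):
        self.failed_attempts.pop(user, None)
        self.last_activity[user] = time.time()

    def may_log_in(self, user):
        return time.time() >= self.locked_until.get(user, 0)

    def session_valid(self, user):
        """False once the session has been idle past the limit, forcing re-authentication."""
        last = self.last_activity.get(user)
        if last is None or time.time() - last > IDLE_SECONDS:
            return False
        self.last_activity[user] = time.time()
        return True
```

Because the checks live on the server, they apply no matter what kind of BYOD is on the other end of the connection.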

In the end, you start to understand why BYOD is a difficult thing to justify when you need to comply with all of the aforementioned PCI requirements.  But there are a few other considerations that you will still need to address.

The first situation that should concern any organization considering BYOD is the loss of a device.  It is virtually guaranteed that BYOD will result in lost devices, and you will need policies, standards and procedures to address that eventuality.  The way most organizations address this issue is by providing a remote wipe capability that can be invoked whenever a device is reported lost.  Such a capability does not ensure that every lost device is wiped, but it is better than nothing, if not by much.  This is usually backed up by a bad password policy that wipes the device after six incorrectly entered passwords.  So even if the remote wipe does not destroy the data, the bad logon attempts will.

BYOD brings up an information ownership issue as, according to the latest statistics, 70% of all BYOD devices are owned by the employee, not the organization.  As a result, you are allowing the organization’s information to be processed or stored on a device not owned or necessarily even controlled by the organization.  While you can have an employee sign an agreement regarding the organization’s ownership of the information and the employee’s responsibilities for protecting that information, enforcing such an agreement can be very problematic depending on platforms and other technology issues.  You can have a remote wipe capability, but that brings up the potential legal issue of whether you can also wipe an employee’s personal information such as contacts, music and pictures.  Just remember the old saying, “Possession is nine tenths of the law.”  When you finally get through court proceedings, does it really matter that you won when the data has likely already been disseminated?

Then there is the question of whether your applications are actually capable of dealing with BYOD.  Most internally facing applications that employees desire access to are not engineered for secure, remote access from BYOD.  As a result, a lot of organizations are turning to VPN technology to solve the remote access issue.  However, organizations using VPNs are finding out that, while VPN clients are free to download to the BYOD, licensing for the BYOD VPN client is over and above the licensing the organization has already purchased for notebooks.  In addition, depending on the device, running the VPN client can make devices run very slowly, to the point of being worthless when connected.

When a VPN is not a solution, a lot of organizations are using remote desktop (RDP), Citrix virtual desktops or similar remote control environments to provide secure access to internal applications.  Having worked with a few of these in a smartphone and tablet environment, I can tell you their use is haphazard at best, due primarily to screen size and the lack of a mouse and a real keyboard.  In addition, we also find that some of these remote desktop solutions are not using secure communication methods.

In addition to VPN and remote control, a lot of organizations are implementing HTTPS for secure connectivity to their applications.  However, this creates all sorts of new security issues related to authentication and protecting applications that were typically not engineered to be externally facing.  We are finding that in the rush to enable applications for HTTPS, numerous security vulnerabilities are being introduced.  We see similar vulnerabilities with applications developed specifically for mobile devices.  In the haste to get BYOD up and running, the security vulnerabilities are not corrected before the applications are put into use (a violation of PCI DSS requirement 6.6), which puts the applications and, potentially, the internal networks at significant risk of compromise.
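On the requirement 4.1 side of this, here is a minimal sketch, using only the Python standard library, of a client connection that refuses unverified certificates and old protocol versions; the host name is hypothetical, and the TLS 1.2 floor reflects current practice rather than the original DSS wording.

```python
import socket
import ssl

def open_tls_connection(host, port=443):
    """Open a connection that refuses weak protocols and unverified certificates."""
    context = ssl.create_default_context()            # verifies certs and host names
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 and TLS 1.0/1.1
    raw = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(raw, server_hostname=host)

# Example: any handshake failure or certificate problem raises ssl.SSLError here.
# conn = open_tls_connection("payments.example.com")  # hypothetical host
```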

And finally, data input from smartphones and tablets can be highly erroneous, not just because of human typing errors but also because of the auto-correction systems implemented on these devices.  Anyone considering significant data entry through BYOD is just asking for trouble, as this erroneous input could result in legal issues later on due to misspellings and mistakes in addresses and other information.

Most executives do not understand the security and privacy issues of BYOD because they have not encountered them, even when time is taken to educate them.  Unfortunately, it usually takes an executive losing their own device for management to appreciate the issues with BYOD and to slow down the drive to integrate it until their concerns regarding security and privacy are addressed.

As you can see, using BYOD is not as simple a process as your end users might think.  This is even truer when that BYOD will be part of your cardholder data environment (CDE).  There are a number of innovative solutions that make BYOD secure, but those solutions are expensive and turn the BYOD into little more than a display device.  However, if you want to be able to sleep at night, I would highly recommend looking at those purpose-built solutions.

06 Jan 13

Security And Compliance

I have written a lot about this topic over the years and was recently reviewing my Compliance Is Not Security – Busted! post and the comments that came in regarding it.

A theme of a number of the comments was that compliance does not equal security.  DUH!

I have never once said or even implied that compliance equaled security as – yes, here it comes – security is not perfect!  However, if you are complying with any security program/framework such as the PCI DSS, ISO 27K, etc., then you are likely more secure than those who are not.

Security technology such as firewalls, routers, servers, applications, etc. can all be set up with rules that are complied with 100% of the time, day in and day out, no exceptions.  The problem comes down to people, who are fallible.  Their compliance is never 100%, and you are probably lucky to get anyone above 90%, no matter how much security awareness training you do.  As a result, in organizations that are truly complying with the PCI standards, this is where the security breach starts: with people, for one reason or another.

No, I am not necessarily talking about social engineering, although social engineering is growing precisely because organizations have invested a lot in security technologies while people remain fallible.  People can be the root cause because of any or all of the following.

  • How dare you do that to me!  This is the most obvious of the people issues that comes to mind.  Face it, when backed into a corner, people lash out just like a trapped animal.  The supposedly wronged party wants their proverbial “pound of flesh.”  They get that pound of flesh by hurting the organization that has just hurt them.  This can range from taking office supplies to downloading databases to a USB drive as they empty their desk.  Obviously, a database, network or system administrator’s access is much different from a clerk’s.  However, if your security is minimal on the inside, as it is in most organizations, the clerk may actually have better access than the administrators when it comes to sensitive information.  Such a situation may not be the fault of the administrators; that old version of POS or ERP may not have the ability to be more granular regarding access to information.
  • Over inundated with alerts and cannot identify real alerts from false positives.  This typically occurs when an automated tool is implemented but never tuned to the organization’s environment.  In this sort of an environment, finding real alerts can be like finding a needle in a haystack when there are thousands of alerts an hour scrolling by on the screen.  This usually makes management wonder why the tool was needed in the first place.
  • Saw an alert and ignored it.  We see this most often coupled with the aforementioned inundation issue.  The other most common version of this issue is with internally used SSL certificates that were generated incorrectly or use a default certificate supplied by the application.  Users see the “There is a problem with this Website’s security certificate” or similar error message in their browser whenever these flawed certificates are encountered and become conditioned to ignore it.  Over time, they become conditioned to ignore all of these sorts of messages, including those for malware infected Web sites and, surprise, you have been compromised.  I have lost count of how many people have said to me, “We just ignore those alerts because we know they are false positives.”  (A sketch of how to spot these flawed certificates programmatically follows this list.)
  • Saw the alert but got side tracked and never came back to it.  This is a problem we see all of the time.  For example, the person that monitors the network is also the person that manages the network and configures the network.  An alert comes in and the person begins a root cause analysis (RCA) only to get pulled away because a remote facility is offline.  The offline issue gets resolved, but other issues come up as well as meetings and telephone calls and the person never gets back to the RCA for the alert because there is no “tickler” to remind them to go back and complete the RCA.  In the meantime, the attacker has gained their beachhead and is probing the network for whatever value it may contain.
  • Just did not put together all of the pieces to know they were compromised.  Much like the intelligence failures that preceded 9/11, most organizations do not correlate all of the potential incidents occurring in their networks and therefore do not understand that there is an active effort to compromise their network, or that they have already been compromised, until well after the incident has caused damage.  The reason this is important is that once an attacker is inside your organization’s security perimeter, it is typically game over, because there are few controls to prevent access and identify that data is being taken.
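As referenced in the certificate item above, here is a minimal sketch of how internally used certificates could be checked programmatically instead of conditioning users to click through warnings; the host name is hypothetical, and the check relies on the system’s default trust store.

```python
import socket
import ssl

def check_certificate(host, port=443):
    """Report whether a host presents a certificate the default trust store accepts."""
    context = ssl.create_default_context()  # system CA bundle, hostname checking on
    try:
        with socket.create_connection((host, port), timeout=5) as raw:
            with context.wrap_socket(raw, server_hostname=host):
                return f"{host}: certificate OK"
    except ssl.SSLCertVerificationError as err:
        # Self-signed, expired or mismatched certificates land here -- fix them
        # rather than teaching users to ignore the browser's warning.
        return f"{host}: BAD CERTIFICATE ({err.verify_message})"

if __name__ == "__main__":
    print(check_certificate("intranet.example.com"))  # hypothetical internal host
```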

If you have read the Verizon Business Data Breach Investigations Reports (DBIR) over the years, you know how the bulk of attacks get inside: they are the result of people.  For the last two years, the DBIR has used the VERIS Event Threat Grid to show how breaches occur.  Across the top of the grid are the categories Malware, Hacking, Social, Misuse, Physical, Error and Environmental.  The Social, Misuse and Error categories imply mistakes or deliberate acts of people.  If you read the definitions on the VERIS Web site, Malware is also very people centric, as is Hacking.  Surprising to some will be the fact that the Physical and Environmental categories also contain a good number of people errors.  Based on just a quick read, it looks like about 60% to 70% of all of the incidents categorized by VERIS have some form of people error component.

Since we are not going to get rid of people in our organizations any time soon, what are you to do?

  • Admit that people are the problem and focus your security measures accordingly.  Every 12 step program says the first step is admitting the problem, which, in this case, is that people are fallible.  As a result, we need to construct our security measures such that this fallibility is minimized as much as possible.  One of the best solutions is to integrate alerts into your help desk or change management system so that a ticket is generated.  Those tickets need to have an escalation process behind them so that if they are not investigated within a period of time, they are bumped up to the next higher rung of management, and that escalation continues until the tickets are finally addressed (a sketch of such an escalation ladder follows this list).  This way there is visibility for the alerts should they slip through the cracks.  As a side benefit of this approach, you gain statistics to reinforce why you need more staff and/or more/better tools.
  • Strengthen your internal security measures.  As things stand, once inside most organizations’ security perimeters, there is very little that stands in the way of an experienced attacker getting the data they desire.  Regardless of whether it is an insider attack or an attacker that has managed to get inside, there is already justification for organizations to beef up their internal security measures.  To address this problem, I would recommend the security architectures documented in my Fort Knox approach, Forrester’s Zero Trust Model or McGladrey’s Ultra Secure Network.  But most organizations do not have the infrastructure architecture, the application architecture or even the will to take such approaches.  That does not excuse an organization from saying it cannot do anything, however.  If anything, most organizations could vastly improve the monitoring they do on their internal networks.  Monitoring needs to be coupled with reducing the total number of ports that are open between network segments.  Most internal networks do a terrible job of this for a variety of reasons, including application people who cannot tell you what ports need to be open, so everything is left open to avoid operational issues.  Another area of improvement is reviewing user access rights on all systems and applications, not just those in scope for PCI compliance.
  • Constantly tune your alerting system(s).  Just as attack methods are not static, neither are networks, systems and applications.  Changes are occurring all of the time in an organization’s IT environment, yet if you ask the people running the SIEM about changes, nine times out of ten, nothing seems to be changing other than requests to look for a new signature or anomaly.  There is a belief in the SIEM user community that a SIEM’s update process is making the necessary changes in the policies that ship with the SIEM.  To a certain extent SIEM solutions are similar to anti-virus and malware solutions.  However, because a SIEM monitors log data and the log data provided varies greatly from organization to organization, each organization needs to periodically review and adjust their alerting criteria to make sure that it reflects the organization’s operating environment and not just some template from the SIEM vendor.  If an organization is not reviewing its SIEM alerting rules based on the changes made, at least quarterly, then it is highly likely that the SIEM is not alerting properly.
  • Establish separate consoles from your SIEM for network, system, security and application administrators.  What a network administrator is looking for is vastly different from what an application administrator is looking for, and each group needs different criteria to generate an alert.  As a result, having only one console is really silly and non-productive.  Yet time and again, we see SIEM implementations with just that: one console, with everyone driven by email or SMS alerts.  The people alerted then have to get to the SIEM to find out what exactly triggered the alert and then determine what to do about it.  Having your own console view simplifies things by listing only that viewer’s alerts and no one else’s.  This allows people to focus on their problems and not the whole organization’s problems.  The idea behind the single console is that if everyone knows what is going on overall, then correlation will occur because everyone sees everything.  While you would think that would be the case, in reality, people just want to fix their own problem and move on.  Which leads to my last point.
  • Watch the overall alerting picture so that correlations can be made.  According to most sources, today’s attacks are becoming more sophisticated and multi-pronged in their approach.  For example, while most DDoS attacks are just meant to be a pain in the posterior to the target and disrupt access to the target’s Web site, there are those DDoS attacks that are used as cover so that the people inside are blinded to the real attack(s).  Whether or not the DDoS was a decoy depends on what other events or incidents occurred during the DDoS attack, if your alerting system did its work.  Higher end SIEM solutions can provide basic correlation rules, but most SIEM solutions require the end user to develop those correlation rules.  It is these correlation rules that help organizations identify these more sophisticated attacks.  That said, these correlation rules do not have to be very sophisticated.  For example, during a DDoS attack, you really only need to look for malware attacks, failed authentication attempts and other anomalies that would likely indicate the DDoS attack is being used to mask the real attack (a minimal correlation rule along these lines is sketched after this list).
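As promised in the first recommendation, here is a minimal sketch of an alert escalation ladder; the tiers and time limits are assumptions, and in practice this logic would live inside your help desk or change management system.

```python
from datetime import datetime, timedelta

# Hypothetical escalation ladder: how long an alert may sit at each tier (minutes).
ESCALATION_LADDER = [("analyst", 60), ("manager", 240), ("director", 480)]

def escalation_tier(opened_at, now=None):
    """Return who currently owns an unresolved alert ticket."""
    now = now or datetime.now()
    age = now - opened_at
    deadline = timedelta(0)
    for tier, minutes in ESCALATION_LADDER:
        deadline += timedelta(minutes=minutes)
        if age <= deadline:
            return tier
    return "CISO"  # nothing slips through the cracks silently

# Example: an alert opened nine hours ago has already escalated past the director.
# print(escalation_tier(datetime.now() - timedelta(hours=9)))
```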
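And for the last recommendation, here is a minimal sketch of the simple DDoS-as-decoy correlation rule described above; the event types and threshold are illustrative assumptions about what your SIEM emits.

```python
def ddos_is_decoy(events, threshold=10):
    """During a DDoS window, look for the anomalies that suggest a masked attack."""
    suspicious = {"malware_detected", "auth_failure", "privilege_escalation"}
    hits = [e for e in events if e.get("type") in suspicious]
    # A few failed logins are noise; a burst of them during a DDoS deserves a look.
    return len(hits) >= threshold

# Illustrative event stream gathered by the SIEM during the DDoS window:
# events = [{"type": "auth_failure", "src": "10.0.4.7"}] * 12
# print(ddos_is_decoy(events))  # True -> investigate beyond the DDoS itself
```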

Is all of this going to address your security issues?  Sorry, not a chance.  None of the above stops all breaches; it merely minimizes the possibility that a breach goes on for months or years.  Hopefully it shortens a breach to weeks, days, maybe even hours in some cases, but it will never totally eliminate them.  Security is not perfect.

There is a side benefit to all of this: it will assist you in doing RCA.  RCA is very effective in getting rid of those nagging operational issues that occur from time to time and mess up the delivery of your organization’s goods and services.  All of the information you collect for security purposes can also be used to find the needle in the haystack that is causing a database to corrupt, a network connection to drop or a server to fail, because now you have information as to what was going on leading up to the problem.

The reason an organization is not secure is that there are so many areas needing improvement that the full control triad is no longer functioning, and holes exist that allow an attacker to operate without the knowledge of the organization.  Until the controls are implemented and operating properly, it will be impossible to determine whether the organization is secure or not.  The recommendations I have made will hopefully give you a better picture of what you face and help you react to issues that need attention before your organization is the next one to be breached.

01 Jan 13

How The PCI Standards Will Really Die

Welcome to the new year.  I hope the holidays have been treating you well and the coming year is good as well.

There have been a number of articles written about why and how the PCI compliance process will die.  It is not that I look forward to the PCI standards dying, as they have brought needed visibility to information security and privacy, not to mention that PCI keeps me gainfully employed.  However, if things stay on their current trajectory, the PCI standards will eventually die, but not for the reasons being quoted in today’s articles.  The real killers of the PCI compliance process will be the card brands and the PCI Security Standards Council.  Yes, the very folks that brought us the PCI standards will bring about the ultimate demise of their precious set of standards.

The first death knell I see is that it is very easy to issue edicts from on high when you do not have to implement them.  Over the years, clarifications have been issued, quality assurance reviews performed, forensic examinations conducted and a host of other activities have resulted in “enhancements” to how the PCI standards are assessed and enforced.  Do not get me wrong, a lot of what has been done was needed and appreciated.

However, by the same token, some of what has come down has been a nightmare to implement.  Any QSAC not using some sort of automated system to conduct their PCI assessments will find it impossible to meet the current and any future documentation and tracking standards now required by the PCI SSC’s QA process.  Under the current standards, QSACs need to document who they interviewed and what the persons were interviewed about as well as tying documentation and observations to the tests performed.  Without some sort of automated process, these requirements are just too intensive to perform manually.

Documentation received and reviewed needs to have its file name, date of issue and a description of its purpose in the PCI assessment process documented.  The basic PCI DSS assessment requires a minimum of around 200 discrete documents.  The average we see for most of our engagements is over 600 documents, including not only policies, standards and procedures, but configuration files, interview notes and observations such as screen shots, log files and file dumps.  You really have to question any QSAC that tells you they manage the process manually.  They either have an amazing and magically efficient project management process, they have very, very inexpensive staff (i.e., overseas labor), or they are short cutting the process and producing a work product that does not comply with the PCI SSC QA program and has yet to be assessed by the PCI SSC (the most likely scenario).  A minimal sketch of the kind of evidence tracking an automated system provides follows.
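This sketch ties each document to the tests it supports; the class, field and function names are hypothetical, and a real system would sit behind a work paper management product rather than a script.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EvidenceItem:
    """One piece of assessment evidence; the fields mirror what the QA process expects."""
    file_name: str
    issued_on: date
    description: str   # why this document matters to the assessment
    requirements: list # PCI DSS tests this evidence supports, e.g. ["1.1.5", "12.3"]
    interviewee: str = ""  # who provided or discussed it, if anyone

def evidence_for(items, requirement):
    """Tie documentation and observations back to the tests performed."""
    return [i for i in items if requirement in i.requirements]

# Example: evidence_for(inventory, "12.3") lists everything supporting requirement 12.3.
```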

Even simple SharePoint or Lotus Notes solutions are not cheap when you consider the cost of the server(s) and the storage of all of the documentation collected, which can run 5 to 10GB per project, as well as all of the requisite system maintenance.  Servers and storage may be cheap, but it all adds up the more clients you assess.  And speaking of the storage of documentation, the PCI SSC requires that documentation related to PCI assessments be stored for at least three years.  For those of us with electronic work paper management systems, this is not a problem.  However, given the amount of paper generated by these projects, those QSACs using traditional paper filing methods will find a lot of shelf space taken up by their PCI engagements if they are truly following the procedures required by the PCI SSC.

All of this drives up the cost of a proper PCI assessment, more than I think the card brands and the PCI SSC are willing to admit.  It is not that I think the card brands and PCI SSC do not care about this situation; rather, they do not understand the operational ramifications of their edicts.  The card brands and PCI SSC tread a very fine line here, and to this point they have been heavy handed in issuing their edicts.  Going forward, the PCI SSC needs to ask the QSACs, Participating Organizations and ASVs to assess the cost and time impacts of these edicts so that they can be weighed against their benefits, versus what is done now, which is more of a procedural and proofing review.  If this is not done, there will soon come a point where merchants and service providers push back hard and refuse to go through the process due to the cost and the amount of time involved in being assessed.

The next death knell is the inane process that is called the PCI Report On Compliance (ROC).  When the PCI SSC did not have access to the QSACs’ work papers, the current ROC writing process made some sense, as there was no other way for the PCI SSC or the processors and acquiring banks to know whether the QSACs had really done the work they said they had done.  However, all of that changed a number of years ago when the PCI SSC required QSACs to add a disclaimer to their contracts stating that the PCI SSC had the right to review all work products.  Yet even with this change, we continue to have to write an insanely detailed ROC, typically running a minimum of 300+ pages for even the most basic of ROCs.

Unfortunately, there are QSACs out there that apparently have not been through the PCI SSC QA process and that dreaded of all states – Remediation.  As a result, they have much lower costs because they are not documenting their assessment work as completely as they need to and are not sampling, observing or interviewing like QSACs that have been through the QA process.  In addition, based on some work products we have seen, they also do not care about the quality of the resulting ROC as it looks entirely like a ‘find and replace’ of a template and makes no sense when you read it.  In talking to other large QSACs that have been through the QA process multiple times, the PCI SSC has indicated that they are monitoring the large QSACs more than the little QSACs because there is more risk with the large QSACs.  While true to an extent, we have encountered a number of smaller QSACs that perform assessments for large clients due to their much lower cost structure and their willingness to ‘overlook’ compliance issues.  If the PCI SSC does not go after these QSACs soon, there will likely be a number of breaches that occur due to the QSACs’ lack of diligence in performing their assessments.

I know of a number of QSACs that would like to see Bob Russo and the representatives of the various card brands actually work as staff on a few PCI assessment engagements so that they can better appreciate the inordinate amount of work involved in generating a ROC.  I think they would be shocked at the amount of work effort they have driven into a process that is already too complicated and prone to error.

As it stands today, the ROC writing, review and proofing process is probably 50% to 60% of a good QSAC’s project cost.  To address this, the PCI SSC QA group tells QSACs to develop one or more templates for writing the ROC which, from what we have seen from some other QSACs, means a lot of mass ‘find and replace’ to speed the ROC writing process.  For the last few years, a number of QSACs have brought the ROC writing process up at the Community Meetings.  However, the card brands continue to shoot down any sort of changes to the process.  As a result, the cost of producing a ROC is driven by the size and complexity of the merchant’s or service provider’s cardholder data environment (CDE).  These costs will only continue to rise as long as the PCI SSC does not allow QSACs to mark items as ‘In Place’ with only a check box and rely on the QSAC’s work papers, versus the verbosity required now.  If this sort of process can work for financial auditors, it can work here as well.

A third death knell is the PCI SSC and card brands continuing to claim that the majority of breaches are the result of organizations not complying with the PCI DSS.  In discussions with a number of the PCI forensic examination companies, I am hearing that the card brands cannot believe that more and more organizations were PCI compliant at the time of their breach.  The PCI SSC and card brands have apparently convinced themselves that the PCI standards are “perfect” and cannot imagine that an organization could be breached unless that organization was not complying with the PCI standards.  There is no security standard that I am aware of that totally prevents breaches.  So while the PCI standards are good baseline security standards, the card brands and PCI SSC seem to have forgotten that security is not perfect and that any security standard only minimizes the damage done when a breach occurs, and only if the standard is truly followed.

And as organizations have gotten the PCI “religion,” the effort required to compromise them from the outside via traditional attacks has increased significantly.  As a result, successful attackers have changed strategy and work on social engineering their way past the bulk of an organization’s security measures.  The PCI DSS only has a little bit on social engineering in requirement 12.6 regarding security awareness training.  And even those organizations with the most robust of security awareness programs will tell you that, even after extensive security awareness training, human beings are still fallible and that some people still do very questionable things that continue to put organizations at risk, sometimes significant risk.  Even when you have the most diligent of employees, they still make mistakes in judgment from time to time.

Until the human element can be totally removed, there will always be a certain amount of risk that will never go away.  Again, the PCI SSC and card brands seem to not want to acknowledge the failings of the human element and appear to believe that technology is the savior based on the focus of the PCI standards.  However time and again, every security professional has seen very sophisticated security technologies circumvented by human error or just plain apathy towards security (i.e., “it always happens to someone else, not my organization” or “we’re too small to be a target”).

Until the PCI SSC and the card brands drop the “holier than thou” attitude toward the PCI standards and stop the public pillorying of organizations that have been breached, there will continue to be editorial commentary regarding the pointlessness of the standards and ever more serious push back against complying with them.

These are the reasons why the PCI SSC and the card brands will be the ones that kill the PCI standards.  At the moment, they are so far removed from the process that they do not understand how complicated and expensive it has become, which is why merchants and service providers are complaining about the ever increasing costs and effort related to the PCI assessment process.

The PCI SSC and card brands also seem to have forgotten that QSACs have to make money doing these assessments and, when you pile on clarifications and edicts that do nothing to streamline and simplify the process, you only drive the cost of the process higher.  And higher costs only make merchants and service providers, who are on thin margins to begin with, even more incentivized to use the much lower cost QSACs, driving the diligent QSACs out of the market and thus increasing the likelihood of breaches.

Again, it is not that I want the PCI standards to go away as I think they have brought a real benefit.  However, if these issues are not addressed, the PCI standards will end up going away.  I fear that, with them gone, there will be no carrot to ensure the security of cardholder information and we will end up back where we were before the PCI standards existed.



