Archive for December, 2009


GSM Hack Shows What Happens When You Use Weak Keys

If you saw the article this week on the GSM telephone hack and did not make a connection to protecting your cardholder data, shame on you.  One of the things that security people need to do is to look around and see all of the potential threats that might exist.  This sometimes means thinking like the people who are out to attack your organization.  So, what does this week’s announcement that GSM encryption has been broken have to do with you?  Everything, if you are relying on encryption to protect your cardholder data.

What the researchers proved this week is that, no matter how strong your encryption algorithm, if the key length is not long enough, it is very easy to break.  The session key for GSM is a 64-bit key (8 bytes) of which the first 10 bits are set to zero, meaning that the key is effectively only 54 bits long.  From that key, the cipher generates a 114-bit keystream for each burst, the same length as the data it protects, and the keystream is exclusive ORed (XORed) with the data stream.  To strengthen the encryption method, the keystream changes at pre-determined intervals.
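Conceptually, a stream cipher like GSM’s XORs a key-derived keystream with the data.  Here is a toy sketch of that idea; it is not the real A5/1 algorithm, and SHA-256 merely stands in for the keystream generator:

```python
import hashlib

def toy_keystream(key: bytes, nbytes: int) -> bytes:
    """Expand a short key into a keystream (toy stand-in for A5/1)."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:nbytes]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR the data with the keystream; running it twice decrypts,
    # because XOR is its own inverse.
    ks = toy_keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"PAN 4111111111111111"
ct = xor_encrypt(b"54bitkey", msg)
assert xor_encrypt(b"54bitkey", ct) == msg
```

The point to notice is that everything rests on the key: if an attacker can enumerate or look up the key, the XOR itself offers no protection at all.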

There have been a number of attacks mounted against the GSM encryption technique since it was made public in 1994.  However, what makes this attack unique is that the researchers used a cluster of computer systems to generate a ‘rainbow table’ of approximately 2TB in size.  With such a table in hand, breaking the encryption is child’s play.
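The table-lookup idea behind the attack can be illustrated at toy scale.  A real rainbow table uses hash chains to trade computation for storage; this sketch uses a plain dictionary over a deliberately tiny two-character keyspace, with SHA-256 again standing in for the GSM cipher:

```python
import hashlib
from itertools import product

def keystream(key: bytes, n: int = 8) -> bytes:
    # Toy keystream generator standing in for A5/1.
    return hashlib.sha256(key).digest()[:n]

# Precompute: map keystream -> key over every possible 2-letter key.
alphabet = b"abcdefghijklmnopqrstuvwxyz"
table = {keystream(bytes(k)): bytes(k)
         for k in product(alphabet, repeat=2)}

# The attack: known plaintext XOR ciphertext reveals the keystream,
# and a single table lookup recovers the key.
secret_key = b"qz"
plaintext = b"hello123"
ciphertext = bytes(p ^ s for p, s in zip(plaintext, keystream(secret_key)))
observed = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
assert table[observed] == secret_key
```

With 676 possible keys the table is trivial; the researchers’ 2TB table is the same trick applied to the cipher’s crippled 54-bit keyspace.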

I am sure there are some of you reading this who are saying, “So what?”

The first ‘so what’ is that this issue is why the PCI DSS requires you to use a recognized encryption algorithm such as AES, Twofish, Blowfish or a similar algorithm reviewed and approved by a body such as the US National Institute of Standards and Technology (NIST).  The algorithm used for securing GSM was just barely strong enough when it was originally introduced in 1988.  However, given the limits of the processors in cell phones at that time, it was the best they could handle while maintaining call quality.  A lot has changed since 1988 in regards to processor capability and encryption.  Unfortunately, the cell phone industry did not keep up with the advances in encryption algorithms and ended up with a not so secure solution.

The second ‘so what’ is that this is why the PCI DSS requires the use of strong encryption keys.  The purpose of strong keys is to make the generation of a ‘rainbow table’ impractical.  If you use keys only 8 characters long drawn from upper and lower case letters along with numeric and special characters, that is slightly less than 6.1 quadrillion possible key combinations.  Generating that number of keys is no longer a big deal with today’s multi-threaded microprocessors running in a cluster.  Worse yet, enterprising programmers have provided methods for generating such tables using graphics processors and video game units in clusters.  The bottom line is that it only takes time to generate a rainbow table, and time is exactly what attackers have a lot of.  As a result, if you think that generating strong keys is a waste of time, think again.
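The arithmetic behind that claim is easy to check.  Assuming the 94 printable ASCII characters and a hypothetical cluster testing 10 billion keys per second:

```python
# Keyspace sizes for keys drawn from the 94 printable ASCII characters.
CHARSET = 94

for length in (8, 16, 32):
    combos = CHARSET ** length
    print(f"{length:2d} chars: {combos:.2e} combinations")

# A cluster testing 10 billion keys per second exhausts 94**8 in about
# a week; 94**32 would take longer than the age of the universe.
seconds = 94 ** 8 / 1e10
print(f"94^8 at 10 billion keys/s: {seconds / 86400:.1f} days")
```

That is why an 8-character key is a sitting duck while a 32-character key is, for all practical purposes, beyond brute force.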

So, what are the takeaways from this discussion?

  • Use a recognized and proven encryption algorithm.  If the algorithm has not been through a rigorous examination by an independent body such as NIST, do not use it.
  • Yes, 3DES can still be used as long as you use the 168-bit variant and strong keys.  But, if you have the option of using AES instead of 3DES, why use 3DES?  NIST has said it is just a matter of time before 168-bit 3DES is cracked.  Why put your organization at risk?
  • Use strong keys consisting of a minimum of 32 characters (256-bits) or greater with split knowledge.  Make sure that the keys use upper and lower case alphabetic characters as well as numeric and, if possible, special characters.  Use one of the random character generators available on the Internet to generate these character strings.  They may not be perfectly random; but they are good enough for this purpose.
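As an alternative to the online generators mentioned above, any scripting language with a cryptographic random source can produce such a key locally.  A minimal sketch using Python’s standard `secrets` module (the 32-character length matches the recommendation above):

```python
import secrets
import string

# Upper/lower case letters, digits and special characters, drawn from
# the operating system's cryptographic random source.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_key(length: int = 32) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_key())
```

Generating the key on your own machine also avoids the question of whether a web-based generator logged what it handed you.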

MasterCard Takes A Giant Step Sideways

As you may recall, MasterCard International revised their Site Data Protection (SDP) program earlier this year to require Level 2 merchants to conduct an on-site assessment of PCI compliance, aka Report On Compliance (ROC).  On December 15, MasterCard released a bombshell on their Level 2 merchants by backing away from the ROC requirement.  However, this change overshadows some other significant changes that you need to be aware of.

For most, the big news in the December 15 pronouncement was that, effective immediately, MasterCard has gone back to only requiring Level 2 merchants to fill out a Self-Assessment Questionnaire (SAQ) instead of a ROC.  This was somewhat anticipated after Visa did not change their merchant level reporting requirements accordingly.  Conducting a ROC is now optional.

The original move by MasterCard was to try and level the playing field, since MasterCard typically has fewer transactions than Visa at most merchants.  MasterCard was trying to reduce their risk by requiring a ROC instead of an SAQ from those Level 2 merchants that would likely be Level 1 merchants if their Visa transactions were aggregated with their MasterCard transactions.

The biggest and probably the best news in my opinion is that, as of June 30, 2011, any Level 1 or Level 2 merchants that want to create their ROC or SAQ using their internal audit staff are now required to have those personnel attend PCI SSC training and become certified in the ROC or SAQ process.  As a QSA that has come into an organization a year or two after companies have conducted their own assessment and created their ROC, I can tell you that without training, internal auditors are not equipped to conduct such a project.  The biggest issue they have is that they do not interpret the PCI DSS correctly because they have not been given the insight that QSAs are given at training.  While this might be a potential threat to my livelihood, I applaud MasterCard for mandating this requirement.

However, there is a twist in the directive.  MasterCard states that if Level 2 merchants do not get their internal audit staffs trained and certified in approved PCI SSC programs, then their SAQ or ROC must be completed by a QSA.  So, while MasterCard backed away from the mandatory ROC for Level 2 merchants, Level 2 merchants must either train their internal audit staffs or use a QSA.  So my livelihood may not be as adversely affected as I thought.

And finally, as of July 1, 2012, all merchants and service providers that use third party developed software can only use that software if it is PA-DSS compliant. Let us be clear, this is only relevant to third party developed software, not software that is developed in-house.  However, MasterCard seems to have created a potential issue depending on how they define ‘third party’.  I am assuming that MasterCard is referring to third parties such as Micros, Oracle, IBM and similar software vendors that sell point-of-sale (POS) solutions and not the hired consultant that creates an eCommerce Web site for the local donut shop.  However, this definition needs to be clarified by MasterCard so that we are all on the same page.

UPDATE: The PCI SSC’s Web site indicates that they will be offering training to basically anyone willing to pay for it.  The 2010 Training Schedule is supposed to be released on Friday, January 15.  So keep checking their Web site for the training schedule.


Decommissioning Applications

Here is a question that comes up from time to time, particularly because a lot of my clients are remediating their PCI compliance issues by replacing older applications with new, PCI compliant ones.

What do I need to do in regards to PCI DSS compliance if I’m replacing an application?

There is no guidance in the PCI DSS regarding the decommissioning of applications that are in-scope.  So what should an organization do when they are getting rid of an in-scope application?

The first problem is the application’s cardholder data.  Cardholder data usually ends up everywhere, particularly with systems that are not PCI compliant.  Cardholder data is not only on hard disks and disk arrays; it is also on backup tapes and other backup media.  In the case of point-of-sale (POS) systems, cardholder data can end up on every POS as well as the POS servers.

The bottom line is that you need to track down all of this cardholder data and make sure that you properly dispose of it.  The key problem is making sure you have located all of the cardholder data.  You should use this opportunity to scan all of the systems to be decommissioned with a tool that locates cardholder data.  While no such tool is perfect, a scan will identify those systems that likely have cardholder data and those that do not.  Those that do have cardholder data should be remediated first; those that do not can be remediated last.
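A cardholder data scan of this kind typically combines a digit-pattern search with a Luhn check to weed out false positives.  The following is a minimal sketch of the core idea; real discovery tools do far more (file carving, memory scans, archive handling):

```python
import re

# 13-16 digits, optionally separated by spaces or hyphens.
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum; filters out random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text: str):
    hits = []
    for m in PAN_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

print(find_pans("order log: 4111 1111 1111 1111 total $25"))
```

Run against file shares, databases exported to text and backup restores, a scan like this gives you the candidate list of systems to remediate first.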

Since these non-compliant applications typically did not securely store cardholder data, you need to make sure that the data that remains is properly disposed of.  That means performing Department of Defense (DoD) grade erasure of data from hard disks and tapes.  If the hard drives are old and are not going to be reused, then I would recommend contracting with a reputable DoD certified firm to have them degaussed along with your tapes.  Industrial strength degaussing will usually damage the electronics of the hard drive, so if you intend to reuse a hard drive, do not have it degaussed.  If you are going to reuse the hard disks, then they should be erased with a DoD grade disk wiping utility.  There are plenty of these available on the Internet.

The next issue is proving that the application is decommissioned.  Make sure to document all of the steps you took to ensure that the cardholder data has been removed from all systems.  Have management sign off on this documentation so that they are aware of what was done and how it was done.  This documentation will be useful for your filing of a Report On Compliance or Self-Assessment Questionnaire, as well as in the future should anything happen that comes back to haunt you.

Hopefully this will assist those of you that are going through such a process to become PCI compliant.


The Threat Landscape Has Changed

In an earlier post, I discussed how the PCI DSS changes based on changes in the threat landscape.  Attention, Robert Carr at Heartland Payment Systems and anyone else processing, storing or transmitting cardholder data, here is a change in the threat landscape that will radically affect the PCI security standards, most definitely the PA-DSS, but also the PCI DSS.

Verizon Business Services recently announced that they are seeing a rise in RAM scraping attacks.  RAM scraping is an attack that utilizes a program to go through the memory of a device looking for certain information.  RAM scraping has been around for quite a while, but only recently was this technique adapted for finding cardholder data.  With the push for end-to-end encryption, Verizon Business Services predicts that RAM scraping attacks will continue to rise.

Now if you are Robert Carr, you are likely questioning how RAM scraping attacks can gain access to cardholder data when end-to-end encryption has been implemented.  After all, you have been told that end-to-end encryption is the answer to protecting cardholder data.  Yes, once the data is encrypted, as long as strong keys and proper key management practices have been implemented and are followed, the data is protected.  However, the data is not protected at the end points where the data is encrypted and decrypted.  It is at these endpoints where RAM scraping targets end-to-end encryption.

At some point, the data must be in a format that can be processed, and that is what the people who develop these RAM scrapers rely on.  But, you say, the cardholder data is only in memory for such a short time, possibly only a millisecond.  Do not be so sure.  Unfortunately, with the advent of high-level languages and cheap memory, memory is not as well managed as you might think.  We have seen instances of cardholder data being stored in the memory of card terminals and PCs for hours, days and, in some very rare cases, forever.  As a result, if you can gain access to the memory of these devices, you can hit the mother lode of cardholder data.
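The defensive lesson for developers is to shrink the window in which cardholder data sits in memory.  A sketch of the idea in Python, with the caveat that high-level runtimes may still make hidden copies you cannot control; lower-level languages such as C let you manage this precisely:

```python
# Keep cardholder data in a mutable buffer and zero it as soon as it
# has been used, shrinking the window a RAM scraper can exploit.

def process_transaction(pan_buffer: bytearray) -> str:
    masked = "****" + pan_buffer[-4:].decode()  # use the data...
    for i in range(len(pan_buffer)):            # ...then overwrite it
        pan_buffer[i] = 0
    return masked

pan = bytearray(b"4111111111111111")
print(process_transaction(pan))  # prints ****1111
assert all(b == 0 for b in pan)
```

Using a `bytearray` rather than an immutable string is deliberate: immutable objects cannot be overwritten in place and linger in memory until garbage collection, which is exactly the exposure a scraper exploits.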

Now before you go off in a panic, the PCI DSS already provides some protections.

  • Requirement 1 – Firewalls.  There are a number of requirements in this section regarding the regular review of firewall rules, control of network traffic and other topics.  However, the purpose of these requirements is to respond to the changing threat environment.  In addition, firewalls generate alerts and log data, so these should be regularly reviewed to make sure that a RAM scraper has not been introduced into the environment.
  • Requirement 3.5 – Encryption Key Management.  As long as you follow the recommendations in this set of requirements, your encryption keys should be safe.
  • Requirement 5 – Anti-Virus.  Most of these RAM scrapers are installed through some sort of attack that uses virus and/or malware techniques.  If your anti-virus is up to date and it protects against malware, you will likely be notified of such an infection.  The attack is likely not going to be direct, so you need to make sure that all systems that interact with your cardholder data environment are monitored to ensure that a RAM scraping payload is not delivered anywhere in your network.  The key is monitoring.  If you are not actively monitoring for attacks, you will likely miss the introduction of the RAM scraper.
  • Requirement 10 – Log Analysis.  If you are properly reviewing your logs, the alerts from your anti-virus and critical file monitoring should be contained in your log data.  The key is that you are conducting log reviews at least daily, if not more often.  If you have a centralized log collection system, I would recommend that you create saved queries that flag the events relevant to the various PCI DSS requirements.
  • Requirement 11.5 – Critical File Monitoring.  RAM scrapers are executable files.  If you have configured your critical file monitoring correctly, this new executable will trigger an alert.  However, not all end points can be monitored this way, so you may have to find another approach to protect those devices.  Again, monitoring is the key.
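As a simple illustration of the log-review queries mentioned above, a daily job might pull centralized log lines matching the alert keywords your anti-virus and critical file monitoring products emit.  The keywords and log lines below are assumptions for illustration; tune them to your products’ actual message formats:

```python
import re

# Hypothetical alert keywords; adjust to your AV/FIM products.
ALERT_RE = re.compile(
    r"(malware|virus|quarantine|file integrity|checksum mismatch)",
    re.IGNORECASE,
)

def review(lines):
    """Return only the log lines that warrant an analyst's attention."""
    return [ln for ln in lines if ALERT_RE.search(ln)]

sample = [
    "Dec 20 01:02:03 pos01 av: Malware detected in C:\\temp\\svch0st.exe",
    "Dec 20 01:05:10 pos01 sshd: session opened for user backup",
    "Dec 20 01:07:44 pos01 fim: checksum mismatch on /opt/pos/bin/pos.exe",
]
for hit in review(sample):
    print(hit)
```

Even a crude filter like this, run daily, turns thousands of raw log lines into a handful a human can actually read, which is the whole point of Requirement 10.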

If you are religiously following these points, you should be addressing this new threat as best as you can.

As I have constantly stated, security is not perfect.  So, just because you are following the PCI DSS does not mean that you will not have an incident.  It only means that the impact of an incident will be minimized.


Disaster Recovery Sites and PCI

Here is something that troubles me.  Mainly because I think a lot of QSAs are not looking at cold and warm disaster recovery sites since they are technically out of scope in the PCI DSS if they do not process, store or transmit cardholder data.  However, this is something that needs to be looked at because under the PCI DSS, a disaster recovery site is in-scope once it is processing, storing or transmitting cardholder data.

First, let us get the terminology straight as to what we are talking about.  A disaster recovery ‘cold site’ is defined as a site that has physical and environmental controls, but does not contain any equipment.  Essentially, it is an empty data center.  It may have racks installed with electrical connections, but the racks are empty or only contain enough infrastructure to allow for basic connectivity from the racks to the telecommunications point-of-presence (POP).  An organization may have backup circuits installed to this facility, but they too are not in use.  As a result, a cold site is not in-scope because it does not process, store or transmit cardholder data.

At the other end of the spectrum, a disaster recovery ‘hot site’ is ready to go at a moment’s notice.  Applications and data are replicas of what is running in the production data center.  Should a failure occur at the production data center, the hot site will immediately step in and take over usually without users knowing that a failure has occurred.  Obviously, a hot site is always in scope for PCI compliance as cardholder data is processed, stored or transmitted at the hot site just as it is at the production data center.

And in between cold and hot sits the disaster recovery ‘warm site’.  Warm sites have servers, data storage and infrastructure all ready to go, but the equipment is not running and no data is available.  It may or may not be a carbon copy of the production data center from an equipment standpoint, but it will have enough servers and infrastructure to be ready to process as quickly as possible.  The applications may or may not be already installed on the servers.  And data is usually expected to be restored from backup media.

Where things get messy is with just how much preparation is being done at the warm site.  Remember, disaster recovery sites are only in-scope if they process, store or transmit cardholder data.  And that is where some QSAs get in trouble; they neglect to ask the critical question of whether or not the warm site has cardholder data.  A lot of organizations are now replicating data between their production data center and their warm site.  It is cheaper, safer and faster to have a replica available than to try and recover data from backup media.  Particularly when you have invested in storage area network (SAN) farms at both locations.  SANs make the replication process easy and seamless with the only limitation being the time it takes to replicate over the connection between the two sites.

Regardless of what the PCI DSS states, I believe that all disaster recovery sites need to be assessed for PCI compliance, and here is why.  The PCI DSS states that disaster recovery sites are not in-scope unless they process, store or transmit cardholder data.  However, in the same breath, the PCI DSS states that once a disaster recovery site is activated, the site is in-scope and is required to comply with the PCI DSS requirements just as the production data center complied.  So, how does one know that their disaster recovery cold or warm site will comply with the PCI DSS if it is never assessed?  You do not know it will comply unless you assess it.  How is that for a Catch-22?  That is why I contend that all disaster recovery sites should be assessed whether they process, store or transmit cardholder data or not.

The bottom line here is that if you are not in compliance when you activate your disaster recovery site, you can be fined for non-compliance with the PCI DSS.  And if you think that you can get a ‘bye’ on compliance by saying you were in the midst of recovering from a disaster, think again, particularly if a breach occurs during your recovery.  So, I highly recommend that all organizations make sure that their disaster recovery sites are and will be PCI compliant once they are active and that they are assessed by your QSA.

UPDATE:  A number of people pointed out to me that it might be unreasonable to assess the disaster recovery site annually and I would agree.  However, you should assess it at least once, or whenever any changes occur that might affect the control environment at the site.


Louisiana Restaurants Follow Heartland’s Example

If you missed it, a half dozen restaurant companies based in Louisiana have sued their point of sale (POS) vendor and the POS vendor’s reseller over a security breach that was identified in 2008.  There are a number of troubling facts involved in this complaint that I think deserve to be discussed.  Unfortunately, these merchants are taking a chapter out of the Heartland breach playbook by saying it is entirely someone else’s fault.

My first problem with this case is the spelling errors and misrepresentations of some of the facts.  It is tough to take a lawsuit seriously when the law firm involved did not take the time to spell check and proof what they were writing or ensure that the historical facts are accurate.  With that said, let us take a look at this wonderful legal mess.

From the filing we get the start of their terrible tale of software gone awry.

“Notably, Visa identified [vendor]’s POS software as a payment application that stored prohibited data, such as full magnetic stripe data (including card verification and PIN data), after a transaction authorization was completed.

Around the same time, [vendor] was advertising that the [vendor]’s POS system was enhanced to comply with the PCI Data Security standards.

Plaintiffs all purchased the [vendor]’s POS systems from [reseller].”

Later in the complaint, they state:

“Plaintiffs would not have purchased the [vendor]’s POS systems had they known of the security defects contained in the software.”

They already knew that the software was not compliant and they admit it in the first part of the complaint.  Visa is the certification body and had listed the software from the vendor as non-compliant.  What more could you want?  Yet they still purchased the software.  This is why you conduct a formal software selection, so as to avoid these types of problems.  Buying software is a sophisticated process requiring research and documentation of one’s requirements and matching those requirements to the various vendor offerings.  The first key requirement in this case should have been: is the software PCI compliant?  If not, it is not included in the selection process.  This just shows terrible due diligence and lousy business acumen on the merchants’ part.  Strike one against the merchants.

But it just gets better.  In 2008, these restaurants all get contacted by their local law enforcement agencies and are told that they are the source of a credit card compromise.

“Thereafter, Plaintiffs undertook their own research to determine whether their systems had been compromised.

Plaintiffs ultimately learned that keyloggers had been installed on their POS stations.”

As they say down South, “This dog just don’t hunt” and it does not hunt for a number of reasons.

First, after they did not do proper due diligence, I am supposed to suspend disbelief and believe that these merchants had the technological prowess to figure out that a keylogger had been installed on their systems?  The reason I have a problem with this is that later on in the filing it states that they hired outside consultants to remove the keylogger software.  Since it is not explicitly stated, I have to assume that no outside assistance was obtained in identifying the keylogger.  Bottom line – the merchants violated a key incident response principle by not getting a forensic examiner involved immediately.  Strike two against the merchants.

However, this points out another troubling issue.  Does this sound like a group of organizations that have an incident response plan in place as required by the PCI DSS?  Based on the complaint, it sure does not read that way.  So, guess who is not PCI compliant?  Strike three against the merchants.  Unlike baseball, PCI players are not out after three strikes, the strikes just continue to accumulate.

Let us assume that the vendor’s software these merchants installed was truly PABP or PA-DSS compliant and the keylogger claim is also accurate.  If the application stores cardholder data encrypted, then the only way that the keylogger could have allowed anyone access to the data is if one or more of the accounts had access to the data encryption keys.  There are a few things that could have caused this: (1) the reseller used the same administrative password for all systems they install, (2) the application is designed such that administrators have access to the encryption keys, and/or (3) the merchants assigned a number of users as administrators who had access to the encryption keys.  Everyone could be at fault here, and without more information there is no way to know who is to blame.

Another thing that troubles me is the fact that every plaintiff had a keylogger on their POS system, and I am assuming it was the same keylogger since the complaint does not say differently.  It just seems a little too coincidental that they all were attacked the same way.  If I were a betting man, I would say this is an inside job, because the facts all point in that direction and the statistics say that it is the most likely scenario.  By insider, I mean someone who knew that all the systems were configured the same.

Finally, where is law enforcement in all of this?  All of these systems are technically a crime scene, yet nowhere does the complaint mention anything about law enforcement seizing the compromised systems for a forensic examination.  You would have expected law enforcement to have seized these systems and the reseller having to supply new systems to replace them.  Apparently that never happened.  Strike one against the law enforcement agencies for not treating this for what it was, a crime.

As if it is not bad enough, things get worse.  The reseller was unable to remove the keylogger, so the merchants had to go out and get help.

“These consultants discovered that [reseller]’s standard practices violated numerous provisions of the PCI Data Security Standards.

In addition to incurring substantial costs for additional technicians, consultants, software, and network connections, news of the credit card number thefts were reported by local media, which resulted in loss of business to Plaintiffs.”

Well, here is a shock, the ‘hired guns’ found that the reseller was not complying with PCI requirements.  This confirms an earlier strike against the reseller, so they get another strike as well for this lapse.

Here is something that any reseller or vendor that is providing managed services should worry about.

“Defendants failed to notify Plaintiffs that the [vendor]’s system was breached”

I am not sure how the software vendor was to have known this.  But, if the reseller was also providing managed security and/or network monitoring services to the merchants, then they are really on the hook for this one.  However, if the reseller was not providing any managed services, I would seriously question how a breach would be known by the reseller.

Finally, the complaint states:

“After the problems were identified and corrected, Plaintiffs were fined by credit card companies for failing to comply with the PCI Data Security Standards and/or their Payment Application Best Practices (PABP).”

At a minimum, these merchants were likely covered by SAQ C.  Requirement 12.5.3 in SAQ C very clearly states that the merchant is required to have developed a documented incident response plan.  As a result, these merchants were also likely not compliant with the PCI DSS.

The complaint also seems to indicate that the vendor’s software solution was not really PABP compliant.  This is another strike against the merchant for their lack of due diligence.  It may also be a strike against the reseller and/or vendor if they misrepresented the PCI compliance of the software.

In the end, the merchant wants the reseller and the vendor to take all of the blame for their poor decision making process.  If that sounds familiar, it is.  This is exactly the same rationale that Robert Carr used to justify the Heartland breach.  It was everyone else’s fault.  As with the Heartland incident, there is more than enough culpability for everyone involved.  But I leave the bulk of the blame to the merchants who created their own problem in the first place.

So, what are the lessons learned from all of this?

  • Conduct a formal software selection regardless of how ‘simple’ you may think the solution and process may be.  Formally document and prioritize your requirements.  If there are any ‘drop dead’ requirements, make sure that you use those to qualify all of your candidates before including them in your process.  Since most organizations do not regularly conduct formal software selections, this is a good project for an outside consultant who does such projects regularly.  Yes, it costs real money to hire an expert, but it pays dividends down the road by keeping you from making poor decisions that you will regret later.
  • Never take anyone’s word when dealing with compliance.  If the certification body’s documentation indicates that a given product or service is not compliant, then it is not compliant.  If you really must, get the vendor to provide you with a copy of their documentation that they have filed with the certification body and when it was filed.  Then contact the certification body and confirm that the documentation was received from the vendor and when the certification body will likely act.  If the granting of certification is too far out or you cannot go through this process, then the certification does not exist.
  • Make sure your reseller is complying with all relevant requirements when installing and maintaining your hardware and software.  This involves not only getting it in your contract with the reseller, but also setting up milestones that require your review and approval before the reseller goes to the next step.
  • Make sure you have an incident response plan.  It seems like ‘make do’ work until you really need it.  Incident response plans are very much like disaster recovery and business continuity plans, they are insurance policies against bad decision making during a crisis.
  • Obtain experienced assistance when you need it.  There is a lot of truth in the old adages, “You can pay me now or pay me later” and “Penny wise but pound foolish.”  I am sure these merchants thought that they were saving themselves a ton of money by skipping a formal software selection process.  But in the end, it came back in spades when they suffered the breach and the resulting drop in business.

Six Sigma, PCI And Security

First, this is not a security metrics article.  So, if you are looking for that sort of thing, this is not it.

Do you remember Six Sigma?  It has gone a bit underground, but it is still big in manufacturing and distribution.  Six Sigma is defined as executing a given business process with no more than 3.4 defects or errors per million executions of the process.  That is 99.99966% accuracy or better.  I have stated in previous postings that security requires 100% compliance.  To come as close to 100% as possible, security is structured in layers (i.e., defense in depth) so that, as long as each layer operates at Six Sigma levels or better and the layers overlap, you should be able to achieve close to 100%.
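The layering math is straightforward.  Assuming, for illustration, that the layers fail independently and each fails at the Six Sigma rate, the chance of every layer failing at once shrinks geometrically:

```python
# Six Sigma failure rate: 3.4 defects per million opportunities.
p_layer = 3.4e-6

# Probability that every one of N independent layers fails at once.
for layers in (1, 2, 3):
    p_total = p_layer ** layers
    print(f"{layers} layer(s): {p_total:.2e} chance of total failure")
```

Real security layers are rarely fully independent, so treat these numbers as an upper bound on what defense in depth can deliver, not a guarantee.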

One of the complaints you hear about the PCI standards is that a lot of it is focused on policies, standards and procedures and that documentation does not lead to security.  Six Sigma experts will point out that if you do not have formally documented policies, standards and procedures, there is no way to achieve the necessary levels of consistency to ensure your organization’s security.  Such documentation is the foundation on top of which you build everything else.  Without a solid foundation, Six Sigma cannot be achieved.

Then there is the training involved.  Six Sigma has taught organizations that training is another critical component if you expect to achieve these results.  If you are not training your personnel in all of your policies, standards and procedures and the rationale for why they are important, your employees are just going to blow them off.  And if you are not training them regularly, then they will very quickly forget all about them.  These people are key to the success of your security program because, for the most part, they are the root cause of security failures.  Statistics point to the fact that at least 65% of all breaches were the result of human error or other human causes.  If you are not addressing the human factor in your security program, then you are doomed to fail.

I like to use the airline industry as a prime example of how well documented policies, standards and procedures can make a significant difference.  Airlines have policies, standards and procedures for everything regarding the flying and maintenance of an airplane, and they rigidly enforce them; they have to in order to stay in business.  Over time, airlines found that human error was the reason for most of the devastating crashes.  By instituting very rigid policies, standards and procedures, the airline industry was able to make air travel safer than driving your own car.  It is the same with security.  If you create a highly documented set of policies, standards and procedures just like the airlines, and you rigidly enforce the following of that documentation, you will likely reduce your risk of suffering a breach or other security incident to almost zero.  I say almost, because there are people out there who, if they set their minds to breaching your security, will do whatever it takes to get the job done, no matter what barriers you put in their way.

So what typically causes security failures?  There are a number of issues that lead to security failures, but these seem to be the most common.

  • Someone cuts corners to get something done to meet a deadline.
  • Someone disables a security measure or misconfigures it.
  • Someone does not understand why a particular process is important and therefore just ignores it.
  • Someone encounters an incident and does not know what to do, so they wing it.

The first three issues are all the result of limited or no training.  If people do not understand their role and why that role is important, they will easily come to regard their part as inconsequential.  And while you may have defense in depth, if enough people take this attitude, the depth of your security does not matter.

This leads to the problem of keeping people engaged.  This is one of the biggest problems in security these days.  Do you realize that airports have been at the security advisory level of High since 2003?  That is over six years.  Does anyone remember why or, for that matter, care?  No, because people are no longer engaged.  This problem is particularly true for the people who monitor security alerts.  A lot of the reason that security technology initiatives fail is not the technology; it is that the technology was not tuned properly to weed out enough of the chaff so that the real alerts would shine through.  As a result, people start to ignore all the alerts because there are so many false positives to research before they get to the real issues.
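The alert-fatigue point can be made concrete with some back-of-the-envelope numbers.  The volumes and rates below are hypothetical, not drawn from any particular product or study:

```python
# Hypothetical monitoring workload; all figures are illustrative
alerts_per_day = 10_000
true_positive_rate = 0.001   # assume only 0.1% of alerts are real incidents
minutes_per_alert = 2        # assume 2 minutes of analyst triage per alert

real_incidents = alerts_per_day * true_positive_rate
false_positives = alerts_per_day - real_incidents
triage_hours = alerts_per_day * minutes_per_alert / 60

# With no tuning, analysts wade through hundreds of false alarms for
# every real incident, which is exactly how alerts start getting ignored.
fp_per_real = false_positives / real_incidents
print(f"{fp_per_real:.0f} false positives per real incident")
print(f"{triage_hours:.0f} analyst-hours per day just to triage")
```

Under these assumptions, an untuned system demands hundreds of analyst-hours a day and buries each real incident under roughly a thousand false alarms; tuning the technology to cut the false-positive rate is what makes the real alerts visible.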

The last bullet is a real sticky issue, as it is the exceptions to those well defined processes where every organization runs into trouble.  It is the lack of definitive procedures for handling every exception that causes organizations to fall apart.  The rationale you hear for this time and again is, “we cannot anticipate every possible exception.”  While that statement is very true, you can have a group of very well trained personnel who handle those exceptions on a case-by-case basis.  If this sounds familiar, it should; this is exactly how help desks are structured.  For security, Level 1 researches basic security issues such as locked accounts, denied access requests, service failures and the like.  Level 2 researches items that Level 1 is unable to resolve, and also notifies users of new threats.  And those Level 3 people are the “propeller heads” who can do anything related to your security infrastructure; typically, they are the ones who implement and maintain it.
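The three-tier structure can be sketched as a simple routing table.  The issue categories and tier assignments here are hypothetical examples, not a prescribed taxonomy:

```python
# Hypothetical mapping of security issue types to support tiers,
# mirroring the Level 1/2/3 structure described above.
TIER_1 = {"locked_account", "denied_access", "service_failure"}
TIER_2 = {"new_threat_notification", "unresolved_tier1_item"}
# Anything not covered below escalates to the Level 3 staff who
# implement and maintain the security infrastructure itself.

def route(issue_type: str) -> int:
    """Return the support tier that should handle this issue."""
    if issue_type in TIER_1:
        return 1
    if issue_type in TIER_2:
        return 2
    return 3

print(route("locked_account"))    # routine issue, handled at Level 1
print(route("firewall_rebuild"))  # falls through to Level 3
```

The design point is the catch-all at the bottom: the exceptions you could not anticipate still have a defined destination, a trained Level 3 group, instead of someone winging it.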

At the end of the day, all of your security technology is only as effective as the people who interact with it.  A lot of organizations keep searching for technological solutions to all of their security problems.  Unfortunately, they miss the human part of the equation and the fact that it is humans who are fallible and will be the most likely reason that all of their precious technology gets defeated.  So, get your documentation in order, train the staff until it hurts and enforce everything.


Framework Versus Standard

I think one of the biggest problems with the PCI DSS is that the PCI SSC chose to use the word ‘Standard’ in its name and declared itself a standards setting body.  The word standard is defined by Merriam-Webster’s Dictionary as “something established by authority, custom, or general consent as a model or example.”  Standards dictate what someone or something should do in a given situation.  Look at the IEEE, for example.  It is a true standards setting body, and the standards it issues are very prescriptive.  You are not to vary from an IEEE standard without becoming non-compliant.

When you look at the PCI DSS, it is more of a framework than a standard.  A framework is defined as “a basic conceptual structure.”  Frameworks document boundaries as to what is acceptable for addressing particular problems, but do not prescribe specific solutions.  In my opinion, the PCI DSS is a framework, not a standard.  I think that is why a lot of people and organizations struggle with complying with the PCI DSS.  If it were a true standard, it would tell them exactly what to do and where to do it.

That is the problem with security.  One size or solution does not fit all.  What works in one situation may not work in another.  Even in the same organization, you can have different security solutions for the same problem.  Over time, while an original solution may be working fine, a newer solution will be implemented to resolve a similar situation because either the original solution is no longer available or it has changed and is no longer viable for the new requirement.

So let us stop getting hung up on the word ‘standard’ and move on.  The PCI DSS is not a standard; it is a framework.  It is a framework that provides a baseline of what, at a minimum, is required to protect cardholder data.  If you can execute the framework on a consistent basis, then you will be ahead of the game.  If you cannot, then you should do everything you can to not store cardholder data.
