Archive for February, 2011

26 Feb 11

If Not The PCI Standards, Then What?

I have just read a couple of articles, as well as attended a couple of meetings, where the topic du jour was the PCI standards.  They were bash fests of the highest order.  Frustrated, I asked the participants at my last meeting, “If not the PCI standards, then what standard do you want to follow to ensure the security of cardholder data?”  Roaring silence.

This is the frustration that I and others have with people who complain about the PCI standards or any standards.  People complain and complain, yet they offer no solutions to address their complaints.  One thing I have always stressed with, and required of, the people who work with me: if you are going to complain about something, you had better have an idea for a solution.  Constructive criticism is fine, but if you do not have any ideas on how to make things better, then all you are doing is whining.  Children whine; adults have solutions.

But then you have the complainers who do offer a solution, except that solution is to allow the marketplace to address the problem.  Hello!  How long was it going to take before merchants and service providers got a clue about securing cardholder data?  If it was such a priority, why did the card brands have to come out with the standards in the first place?  For merchants and service providers, cardholder data security was not a priority; it was some other merchant’s or service provider’s problem.

The other problem with the marketplace approach is that each organization learns from its own incidents, and possibly from incidents suffered by its business partners, not from the incidents experienced by all.  Under the marketplace approach, security protection only improves as each individual organization suffers a particular incident.  As a result, organizations reinvent the wheel with the majority of incidents.

Standards allow organizations to learn from the collective experience of all organizations, not just their own organization.  For example, if your organization does not have wireless networking but decides to implement wireless, a standard provides a guideline as to how to implement wireless securely.  Without a standard, you are on your own to do the best you can.  On your own you will likely get some things right, but you will also get some things wrong.  It is those mistakes due to lack of experience that come back to bite organizations.  With a standard to follow, the chance of getting bitten after the fact is often greatly minimized.

However, standards are not a guarantee.  Going back to wireless, just look at how things went wrong with WEP.  WEP is a standard, and how to implement it (supposedly securely) was well documented.  WEP was also known to have the potential for security problems, but those problems were not widely publicized until organizations began to have security incidents.  So a stop gap standard called WPA was provided, which turned out to have its own security issues.  Ultimately, WPA was replaced by WPA2, which is the secure, long-term solution.

This is why early adopters of technology can end up getting burned.  When an organization decides to hop onboard the latest and greatest technology, there is a high risk that the security learning curve is not very far advanced.  As a result, the organization will be at a higher risk of suffering a security incident than an organization using a more tried and true approach.  As a new technology matures, its security posture typically matures with it, and the more mature the security posture, the lower the likelihood that a security incident will occur.  However, that security maturity can take quite a while to develop, and it is during that interval that organizations are at the highest risk.

Unfortunately, in some instances, a new technology gets quickly usurped by an even newer technology and the original new technology never matures.  The bad news is that the early adopters get stuck with a solution that will never have its security shortcomings addressed, leaving them to either convert to the newer technology or find another alternative.  Many a career has been ended over such technology leapfrogging events.

The PCI standards were not developed in a vacuum.  They are a consolidation of a lot of other security standards and of guidance gained through root cause analysis of security incidents over the years, with the express purpose of protecting cardholder data.  If you follow another security standard such as ISO 27K or FISMA, a lot of what is in those standards is also in the PCI standards.  But there are also a lot of requirements in the PCI standards that are not in other standards.

The bottom line is if you do not like the PCI standards, then get involved in the process to make things better and stop whining.

20 Feb 11

If They Want You, They Will Get You

Over the last few years, card brand executives have implied that the PCI standards are the ‘Holy Grail’ and that only by following these standards can cardholder data be protected.  To add insult to injury, the House of Representatives’ Subcommittee on Emerging Threats, Cybersecurity, and Science and Technology held hearings on the PCI DSS and its ineffectiveness in stopping terrorism funding.  In the end, all of this bluster just added fuel to the fire around security and in particular cardholder data security.

What all of these people have missed is that regardless of whatever security standard you follow, sensitive data, cardholder or otherwise, is always at risk.  There will always be a market for private information and there will always be someone willing to take the risk to obtain that information, regardless of the barriers put in their way.  If they want you, they will get you.

Do not believe this to be true?  Just over a week ago, it was announced that HBGary Federal, an obscure subsidiary of Internet security firm HBGary, was attacked by “Anonymous” and its internal emails and other documents were posted on the Internet.  Making matters worse, Twitter and LinkedIn accounts were also compromised and postings were made under those compromised accounts.  But the most embarrassing thing about this was that the posted documents showed that HBGary Federal is in the business of corporate espionage and discrediting corporate rivals.

What the HBGary incident highlights is how different a dedicated attacker is from your everyday, annoying attacker.  Dedicated attackers are hunters.  They research their prey, conducting detailed reconnaissance of their target.  They know about the defenses of their target and they develop plans to defeat those defenses or at least keep them at bay.  These are people skilled in their craft.  These are people who take a job as part of the night cleaning staff at the building where their prey is located and use that as an opportunity to scope out their quarry and determine where the weaknesses are.  If they need other expertise, they will go and acquire that expertise, either through training or by teaming with someone who has it.  In the end, if they want you, they will get you.

And that is where the ‘Holy Grail’ status falls apart.  Security relies on human beings to configure, manage or monitor the process.  Unfortunately, humans make mistakes, either deliberately or accidentally.  It is those mistakes that, more times than not, create the problems that result in breaches.  Decisions are made to short cut a process to save time.  Alerts or warning messages are ignored because they are generated constantly.  Commands are mis-keyed, resulting in an unforeseen configuration change that opens a hole.  Whatever it is, mistakes occur and sometimes organizations pay the price.

The late David Taylor at PCI Knowledge Base was quoted as saying, “It’s easy to find somebody to be in noncompliance if that is the primary goal.”  What Mr. Taylor is pointing out is that ‘witch-hunts’ are always successful given enough resources.  No matter how well you think your organization is run, there are always enough ‘rocks’ that can be turned over to reveal a less compliant side of the organization.  Forensic examinations are looking at the underside of all of those ‘rocks’ to determine which ones resulted in the breach.

Unfortunately, for most organizations, the forensic process becomes a witch-hunt because the media and public demand it.  Why?  Because thanks to the card brands and the PCI SSC holding out the PCI DSS as the ‘Holy Grail’, the public’s expectation is that a breach should never happen.  That is not the message that should be delivered.

What the card brands need to do is explain to the public the actual realities of the PCI standards.  Particularly the fact that even if the PCI standards are followed, breaches are still going to occur.  Now those breaches that occur should be much smaller and less costly, but they are still going to occur.  That is the stark reality of security because, as I know some of you are tired of hearing, security is not perfect.

UPDATE: After the comments I have received, I want to clarify this point.  I am not suggesting that security is a worthless endeavor just because it is not, and cannot be made, perfect.  Security is a necessary activity that all organizations need to participate in at some level.  What people need to realize is that security is not perfect; it will stop the great majority of incidents if properly implemented and managed, but it will not stop everything.  The problem is that there are sales and marketing types, as well as security “experts”, who imply that their solutions or ideas will result in a “perfect” solution.  That concerns me because the unknowing believe that they are absolutely protected, are dumbfounded when an incident occurs, and then blame the security industry for misleading them.

12 Feb 11

More On Mobile Payments

As I have found out, the definition of “mobile payment” depends on whom you are talking to.  For consumers, mobile payment means using their smartphone to pay for goods and services.  For merchants, it includes the consumer definition as well as using smartphones or similar mobile devices to process payments.

Last year I wrote a post regarding mobile payments and the use of smartphones, primarily the iPhone, as credit card terminals.  When I wrote that first post, Apple was running an advertisement for the iPhone that showed it being used to process a credit card payment with the ubiquitous tag line, “There’s an app for that.”  Shortly after that post, the advertisement dropped the iPhone as a credit card terminal.  I am not aware that the PCI SSC or any of the card brands complained about that advertisement, but I found it interesting that those images of it processing a credit card were removed, particularly given the number of security and privacy issues that were, and still are, being discussed regarding the iPhone.

That is not to say that iPhone credit card adapters have not continued to be developed.  It is just that they are nothing like the one shown in that original Apple advertisement.  The first one that I came into contact with was Verifone’s PAYware Mobile solution, which, to my surprise, is PA-DSS certified.  Whoa!  In my previous post I talked about all of the issues with the iPhone that make it almost impossible to be PCI certified.  How did Verifone create a PA-DSS certified application on the iPhone?  What Verifone did was to create a digital back for the iPhone.  All of the operations that need to comply with the various PCI standards are done through the digital back, not the iPhone.  The iPhone is just used as a display.  In the event that a credit card will not swipe through the digital back, the customer must go to a standard register.  I have also been privy to a number of similar iPhone applications.  All of them avoid the iOS interfaces, as iOS is the problem in achieving PCI compliance.

While the iPhone is the “Big Kahuna” of smartphones, that does not mean Android and Windows Phone devices are not also used for credit card payments.  Unfortunately, like the iPhone, Android and Windows Phone devices have similar issues that make it difficult, if not impossible, to have PA-DSS certified applications for them.  So from a merchant perspective, iPhone, Android and Windows Phone all have to be treated very carefully when they are used to process credit card payments.

But security concerns have not stopped merchants from rolling out mobile payments.  Starbucks recently introduced an iPhone and Android application that allows the customer to put their Starbucks cash card on their phone.  The application creates a 2D bar code containing the cash card’s number.  The Starbucks POS system reads the bar code and automatically deducts the purchase from the account’s balance.  Within a week of the application’s release, it was determined that anyone who takes a picture of the screen containing the bar code can use the account until its balance can no longer cover a purchase.  So much for secure mobile payments.

If we expect to secure payments, the traditional credit card is just not going to get the job done.  EMV, aka Chip and PIN, is a short-term technological fix and, at best, a backup payment method for where I think we are really headed.  I truly believe that the future of payments is smartphones and other mobile devices running software that generates one-time transaction codes for paying for goods and services.  Whether those codes are displayed as a 15/16-digit number or a bar code on a screen, or transmitted via Wi-Fi, Bluetooth or RFID, a consumer will not need a traditional credit card.  A 15 or 16 digit number will still be necessary so that POS systems do not have to be re-engineered to support the new payment method.  Scanners are already capable of reading bar codes from smartphone screens, so that much of the solution is already in place.  Wi-Fi, Bluetooth and RFID technology is arriving as we speak, so it is only a matter of time before the infrastructure is in place to support such a solution.  All that is needed is the software.
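
To make the idea concrete, here is a minimal sketch of how such a one-time code might be derived, assuming an HMAC-based scheme along the lines of HOTP in which the phone and the issuer share a provisioned secret and a per-transaction counter.  The function name, the secret provisioning and the 16-digit truncation are illustrative assumptions on my part, not any card brand or PCI SSC specification.

    import hmac
    import hashlib
    import struct

    def derive_transaction_code(shared_secret: bytes, counter: int, digits: int = 16) -> str:
        """Derive a one-time numeric transaction code from a device secret and a
        per-transaction counter (HOTP-like, widened to 15/16 decimal digits).
        Illustrative sketch only, not a published payment specification."""
        msg = struct.pack(">Q", counter)  # 8-byte big-endian transaction counter
        digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
        # Take 8 bytes of the digest (about 2^64 values) so a 16-digit code does
        # not lose entropy, then reduce to the requested number of decimal digits.
        value = int.from_bytes(digest[:8], "big")
        return str(value % (10 ** digits)).zfill(digits)

    # The phone and the issuer share the secret; each purchase consumes the next
    # counter value, so a photographed or intercepted code cannot be replayed.
    print(derive_transaction_code(b"provisioned-device-secret", counter=42))

Because the code is numeric and card-length, it could ride the existing POS and authorization rails while being worthless to anyone who captures it after the fact.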

Such an approach would not only secure card-present transactions, it would also tackle the security issues we face with card-not-present transactions.  If done right, mobile payments can become the solution to our PCI compliance problem.

09 Feb 11

The “Magic” Vulnerability – Revised

What started this post is that I have recently received a number of calls and messages from clients and colleagues.  The conversations have all gone basically the same way.  They were calling to tell me that their ASV had failed their vulnerability scan because the OS detected was unsupported, and they were wondering whether or not I had encountered this before.

My first question was usually along the lines of, “So, what vulnerabilities did they detect?”

“None,” was the confused answer at the other end of the line.

“What?  They must have detected at least one high, severe or critical vulnerability?  That is the only way you can fail,” I would ask, now also confused.

“Nope.  Nothing.  Just the fact that the OS is unsupported,” I was told.

Do not get me wrong.  I am not advocating the use of unsupported operating systems, particularly unsupported versions of Windows.  The risk, of course, is that one or more vulnerabilities show up that the vendor will not fix because the OS is no longer supported.  So there is good reason to avoid this situation.  However, there are also situations where you simply have no other choice, whether due to your own organization’s issues and politics or due to software vendor issues.

This situation got me thinking and doing some research, since I did not remember ever seeing or being told that an unsupported OS was an automatic vulnerability scan failure.  I no longer do external vulnerability scanning, so my recollections of training and working on the ASV side of our business are a bit fuzzy and very rusty.  However, I had never failed a client for an unsupported OS.  So when this issue came up, my first order of business was to determine what had changed.

The first thing I did was review the latest version of the PCI ASV Scanning Procedures, v1.1.  I searched for terms such as ‘old’, ‘unsupported’, ‘out of date’, ‘OS’ and ‘operating system’.  No matches.  So there is nothing in the ASV scanning procedures that fails an organization for running an unsupported OS.  Even the PCI DSS does not call out unsupported software, so, procedurally, I was thinking there is nothing explicit regarding unsupported OSes causing a failed vulnerability scan.

After I made the original posting, I got a comment from one of my readers pointing me to the ASV Program Guide.  Lo and behold, at the top of page 16 is the following:

“The ASV scan solution must be able to verify that the operating system is patched for these known exploits. The ASV scanning solution must also be able to determine the version of the operating system and whether it is an older version no longer supported by the vendor, in which case it must be marked as an automatic failure by the ASV.”

So there is no “magic” vulnerability I was missing; the PCI SSC does specify that a scan automatically fails if the OS is unsupported.

But that is not the entire story.  The key to this whole process is that the vulnerability scanner used must be able to verify the operating system.  While all vulnerability scanners attempt to identify the operating system, the reliability of this identification process is suspect at best.  I am not aware of any vendor of security testing tools that claims its product will identify an operating system 100% of the time.  This is because there are many, many things outside the tool’s control that can influence the OS signature and therefore greatly affect the tool’s ability to identify the OS, particularly when talking about external scanning.  And if an organization follows OS security hardening guidelines, a lot of unsupported OSes will not be properly or reliably identified by vulnerability scanners.  As a result, I find it hard to believe that the PCI SSC intended to have ASVs rely only on the results of a vulnerability scanner, but that seems to be the case.
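
To illustrate why relying on the scanner alone is shaky, here is a rough sketch of how a scan result might be screened so that an unsupported-OS failure is only raised when the fingerprint is actually confident.  It assumes nmap-style XML output (the osmatch/accuracy elements are how nmap reports its guesses); the accuracy threshold and the list of unsupported OS names are assumptions of mine for illustration, not anything from the ASV Program Guide.

    import xml.etree.ElementTree as ET

    # Hypothetical list of OS name fragments treated as unsupported; a real ASV
    # would maintain this from the vendors' published end-of-life information.
    UNSUPPORTED_HINTS = ("Windows NT 4.0", "Windows 2000", "Red Hat Linux 7")

    def unsupported_os_failures(nmap_xml_path: str, min_accuracy: int = 95):
        """Return (os_name, accuracy) for each host whose best OS guess is both
        confident enough and on the unsupported list; skip everything else."""
        failures = []
        root = ET.parse(nmap_xml_path).getroot()
        for host in root.findall("host"):
            matches = host.findall("./os/osmatch")
            if not matches:
                continue  # no fingerprint at all, so nothing to fail on
            best = max(matches, key=lambda m: int(m.get("accuracy", "0")))
            name, accuracy = best.get("name", ""), int(best.get("accuracy", "0"))
            if accuracy < min_accuracy:
                continue  # fingerprint too unreliable to hang a failure on
            if any(hint in name for hint in UNSUPPORTED_HINTS):
                failures.append((name, accuracy))
        return failures

    # Example usage against a scan run with OS detection enabled:
    # print(unsupported_os_failures("scan.xml"))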

With this clarification in hand, I contacted our ASV personnel and they told me that they too have been failing vulnerability scans when they run across unsupported operating systems.  I asked: if the OS signature is inconclusive, then there is no failure?  Yes, if the scan comes back and does not identify the OS, then they have nothing to go on to fail the scan and the scan passes.  Given the difficulties vulnerability scanners can have identifying the target operating systems, such as when scanning through network firewalls, Web application firewalls, load balancers and the like, I then asked whether they feel these identifications are reliable enough to fail a scan.  I was told that this is why they confirm the information with the client before issuing the report, so that the report is accurate.  So if a client is not honest, they could influence the results of their scan?  I was reluctantly told that is probably true.

Then there is the issue that not all operating systems are created equal.  Operating systems such as MVS, VMS and MCP are nowhere near as risky, if they are even risky to begin with, as Windows and Linux.  A lot of ASVs would argue that they never come across these operating systems running Web services.  However, all of them are capable of running Web services, and I personally know of a number of organizations that run their Web services from such environments.  Organizations run these older operating systems mostly because of the financial considerations of migrating to something else.  However, I can guarantee that none of the dozens of vulnerability scanners that I have used in the last 10 years would accurately identify any of these operating systems, let alone tell you the version, unless some service message header information was retrieved by the tool.  And even then, most tools do not parse the header to determine the OS, so it would take human intervention to make that determination.

Regardless of the reason for the failure, most ASVs do have a review or appeal process that allows organizations to dispute findings and to submit compensating controls to address any failures.  So organizations that cannot get rid of unsupported OSes can use a compensating control.  As with compensating controls for the PCI DSS, the organization is responsible for writing the compensating control, and the ASV needs to assess it to ensure that it addresses the issues identified by the vulnerability scan.

So, if you can fail an organization over an unsupported OS, why is it that you do not automatically fail on unsupported application software?  I went through the Program Guide and there are all sorts of other criteria for applications, but nothing regarding what to do if they too are unsupported.  Applications such as IBM Websphere and Oracle Commerce can become unsupported just as easily as their OS brethren.  And in my experience, the use of unsupported application software is even more prevalent than unsupported OSes, under the reasoning that if it is not broken and does not have vulnerabilities, why upgrade?  When I asked our ASV group if they fail organizations on unsupported applications, I got silence, and then the response that they will fail an application if the vulnerability scanner reports a high, severe or critical vulnerability.  To tell you the truth, while vulnerability scanners regularly return text header information for a lot of applications, I would be hard pressed, without doing a lot of research, to determine whether the version being reported was unsupported.  However, scanners could provide this feedback if they were programmed to provide it.
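
As a rough illustration of the kind of feedback a scanner could provide, here is a sketch that checks a Web server’s banner against an end-of-life list.  The list and its entries are invented for the example; real support status has to come from the vendors’ published lifecycle information, and many servers mask or omit the banner entirely.

    import urllib.request

    # Hypothetical end-of-life table for illustration only.
    UNSUPPORTED_BANNERS = {
        "Apache/1.3": "Apache 1.3 is no longer supported",
        "Microsoft-IIS/5.0": "IIS 5.0 shipped with Windows 2000 and is no longer supported",
    }

    def check_server_banner(url: str) -> str:
        """Fetch the Server header and report whether it matches a known
        unsupported version prefix; inconclusive banners are not failed."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            banner = resp.headers.get("Server", "")
        for prefix, reason in UNSUPPORTED_BANNERS.items():
            if banner.startswith(prefix):
                return f"FAIL: {banner} ({reason})"
        return f"PASS or inconclusive: banner '{banner or 'not disclosed'}'"

    print(check_server_banner("https://example.com/"))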

Then there are all of the conspiracy theories out there that say the PCI SSC and technology companies are working together to drive software sales by forcing organizations to upgrade, and there is a lot of anecdotal evidence that would seem to support this argument.  In reality, it is not that software companies are working together with regulators such as the PCI SSC; it is that software companies operate this way in order to focus development and support resources on fewer, more current versions.  As a result, it is just happenstance that regulations cause organizations to have to update their software.

The bottom line in all of this is that you have options to avoid a failing vulnerability scan because of an unsupported OS.  The best method, and the one I most recommend, is to not use unsupported operating systems in the first place.  However, as a former CIO, I do understand the real world and the issues IT departments face.  As a result, I recommend all of the following, which may or may not require you to develop a compensating control:

  • Implement not only a network firewall, but also a Web application firewall (WAF) and make sure that the rules are extremely restrictive for servers running unsupported operating systems.
  • Configure your firewalls to block the broadcasting of any OS signature information.  Masking the OS signature will provide the benefit of not advertising to the world that the OS running whatever application is unsupported.  This is not a perfect solution as, 9 times out of 10, the application itself will likely advertise the fact that the underlying OS is unsupported.  It is very important to note that this is only a stop gap measure and you should still be actively in the process of migrating to a supported OS.
  • Implement real-time monitoring of firewalls, servers and applications.  Define very specific alerting criteria to ensure that any suspicious activity is immediately reported and operations personnel immediately follow up on any alerts to determine whether they are a false positive.
  • Implement a host-based intrusion detection/prevention solution on any servers that run the unsupported OS.  If using a HIPS solution, you may also want to consider using its preventative capabilities for certain critical incidents.
  • Implement real-time log analysis for firewalls, servers and applications.  Define very specific alerting criteria to ensure that any suspicious activity is immediately reported and operations personnel immediately follow up on any alerts to determine whether they are false positives (see the sketch after this list).
  • Actively use your incident response procedures to address any incidents that are identified with any unsupported OS.
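
For the real-time log analysis item above, here is a minimal sketch of what “very specific alerting criteria” might look like in practice.  The patterns and labels are illustrative assumptions and would need to be tuned to the particular unsupported OS and the applications running on it.

    import re
    import sys

    # Illustrative alert criteria; tune these to the environment being protected.
    ALERT_PATTERNS = {
        "repeated authentication failures": re.compile(r"authentication failure", re.I),
        "configuration change": re.compile(r"config(uration)? (changed|modified)", re.I),
        "service stop or restart": re.compile(r"service .* (stopped|restarted)", re.I),
    }

    def watch(log_lines):
        """Scan a stream of log lines and emit an alert for each matching pattern
        so operations staff can triage it immediately, false positive or not."""
        for line in log_lines:
            for label, pattern in ALERT_PATTERNS.items():
                if pattern.search(line):
                    print(f"ALERT [{label}]: {line.rstrip()}")

    if __name__ == "__main__":
        watch(sys.stdin)  # e.g. tail -F /var/log/messages | python watch_logs.py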

03 Feb 11

Intent Of Requirements – 11.2

As I discussed in my earlier post on the intent of requirement 6.1, requirement 11.2 is another requirement where there is a hard and fast metric: four “clean” quarterly vulnerability scans.  Nice idea, but tough to execute.

To remind everyone, the intent of vulnerability scanning is to: (a) ensure that if a patch has been applied, it is in fact in place and the vulnerability has been addressed; and (b) identify any new vulnerabilities.  Requirement 11.2 is in response to at least two issues.  The first is that organizations that do not scan for vulnerabilities have no way to assess the larger risk their technology infrastructure poses to the potential loss of data.  The second is to ensure that updates and patches are in fact truly correcting vulnerabilities.  I have personally seen situations where patches have removed other patches, thus leaving a system at risk of compromise.  Granted, these situations are rare, but without vulnerability scanning, the removal of these patches would never have been known.

Requirement 11.2 requires that vulnerability scanning be performed at least quarterly.  However, given the 30-day patching rule, the fact that most vendors release patches monthly and the fact that scanning must be performed after all “significant” changes, an organization really needs to conduct monthly vulnerability scanning at a minimum just to stay compliant.  As a result, if you have an organization doing anything less than monthly scanning, you really need to question whether they understand the vulnerability scanning requirement.  However, from a ROC reporting perspective, you really only need to report the results of four quarterly scans even though 12 or more scans are available.
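
As a simple illustration of that reporting math, the sketch below selects one passing scan per calendar quarter from a hypothetical year of monthly scans; the dates and pass/fail results are made up for the example.

    from datetime import date

    # Hypothetical scan history of (scan date, passed?).  Monthly scanning gives
    # several chances per quarter to produce the one passing scan the ROC needs.
    scans = [
        (date(2010, 1, 15), False), (date(2010, 2, 12), True),  (date(2010, 3, 16), True),
        (date(2010, 4, 14), True),  (date(2010, 5, 12), False), (date(2010, 6, 15), True),
        (date(2010, 7, 14), True),  (date(2010, 8, 11), True),  (date(2010, 9, 15), True),
        (date(2010, 10, 13), True), (date(2010, 11, 10), False), (date(2010, 12, 15), True),
    ]

    def quarterly_evidence(scan_history):
        """Pick the first passing scan in each calendar quarter to report."""
        chosen = {}
        for when, passed in scan_history:
            quarter = (when.year, (when.month - 1) // 3 + 1)
            if passed and quarter not in chosen:
                chosen[quarter] = when
        return chosen

    print(quarterly_evidence(scans))  # one date for each of the four quarters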

Then there is the problem of the definition of a “clean” scan.  According to the PCI SSC, a “clean” scan is one that contains no severe, high or critical vulnerabilities.  You hear complaints about this time and again from organizations: “How do you ever get a ‘clean’ scan when new vulnerabilities are introduced almost every day?”  This is a very fair question, as most security professionals know that the odds of obtaining a “clean” scan are against them because 94% of all vulnerabilities are rated severe, high or critical.  The fact that tests for vulnerabilities can show up in scanning engines long before the vendor has a patch only exacerbates this situation.  Is it any wonder security professionals are frustrated with this requirement?

This is an area that the PCI SSC has really failed to address and bring some reality to the requirement.  For a lot of large organizations, vulnerability scans that have no severe, high or critical vulnerabilities documented cannot physically be achieved without some rare alignment of the planets, stars and moons.  They could be scanning 24×7 and still could not have a complete, truly “clean” scan of their environment.  As a result, a “clean” scan is usually all in the eye of the beholder.

But that is not the entire problem.  A lot of organizations run packages such as Oracle Commerce, IBM Websphere or MICROS Simphony.  Oracle, IBM and MICROS typically have issues with some OS or system software vendor patches, such as those for Windows, Linux or Apache.  They usually issue advisories to their customers indicating that these patches cannot be installed.  In these cases, severe, high or critical vulnerabilities may not get patched for months while these vendors develop an update to their solution as well as package the other vendors’ patches.  As a result, organizations with these solutions can have severe, critical or high vulnerabilities in their scans for quite a while.  In these cases, a QSA needs to determine whether the organization has followed the vendors’ recommendations or used any other recommendations for mitigating the risks of having these vulnerabilities.  QSAs also need to see proof that once Oracle, IBM or MICROS issue their fixes for these situations, these organizations implement them in a timely manner.

As a result of these constraints, I have developed my own definition of a “clean” scan.  Again, I am not advocating that vulnerability scans do not have to come back “clean”; I am advocating that QSAs bring reality to the process.  In my world, a “clean” scan is a scan whose severe, critical or high vulnerabilities were not documented in any of the previous scans.  If there are repeated severe, critical or high vulnerabilities, those vulnerabilities should only be the result of vendor patching issues, and the organization must have documentation that proves that management understands that they exist, what measures the organization has taken to mitigate them, and when the vendor anticipates having the vulnerabilities addressed.
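
Expressed as logic, my definition might look something like the following sketch.  The finding identifiers and data structures are hypothetical and only illustrate the rule; they are not anything the PCI SSC has published.

    # Severities that count against a "clean" scan.
    SEVERITIES = {"severe", "high", "critical"}

    def is_clean(current, previous, vendor_exceptions):
        """current/previous: iterables of (finding_id, severity) tuples;
        vendor_exceptions: finding_ids covered by documented vendor patching
        issues that management is tracking."""
        prior_ids = {fid for fid, sev in previous if sev in SEVERITIES}
        for fid, sev in current:
            if sev not in SEVERITIES:
                continue
            if fid in prior_ids and fid not in vendor_exceptions:
                return False  # a repeat finding with no documented reason
        return True

    previous = [("CVE-2010-1234", "high")]
    current = [("CVE-2010-1234", "high"), ("CVE-2011-0002", "critical")]
    print(is_clean(current, previous, vendor_exceptions={"CVE-2010-1234"}))  # True
    print(is_clean(current, previous, vendor_exceptions=set()))              # False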

Here is my rationale for this definition.  There are just too many factors working against getting what the PCI SSC defines as a “clean” vulnerability scan.  Real world experience says that getting a “clean” scan is next to impossible, particularly in large environments.  I can count on one hand the number of organizations I have worked with that truly have four quarters of “clean” scans, and none of them are very large.  Therefore, as long as I can prove that severe, high or critical vulnerabilities are being addressed in as reliable and timely a manner as possible, then I consider the scans clean.

I am sure all of you anal-retentive types are going to flame me for not following the standard.  However, what is a standard worth if it cannot be applied equitably to all?



