Archive for February, 2012


PCI DSS Compliance Certificates

In this month’s PCI SSC QSA Newsletter, the FAQ of the Month is about so-called ‘PCI DSS Compliance Certificates’.  I started hearing about these a couple of years ago, but it got really big last year when I started running into processors and acquiring banks demanding them.  I had a particularly troubling conversation with a processor who demanded that we produce one for one of our clients.  When offered the PCI DSS Attestation Of Compliance (AOC), this processor acted as though we were trying to put something over on them.  When I asked him where I was supposed to get such a certificate when it does not exist on the PCI SSC Web site, he accused me of not being a QSA because all QSAs know what the certificate looks like and where to get it.

As a result, a lot of QSAs must have submitted a question regarding these certificates like I did.  Here is the PCI SSC’s response.

“In addition to the official PCI SSC reporting forms and templates, some QSA or ASV companies provide certificates, letters or other documentation as confirmation that an organization is PCI DSS compliant. The PCI SSC does not prevent QSAs or ASVs from producing this type of documentation, as it is considered an additional service which the assessor company may elect to provide and is therefore outside of the purview of the Council.  However, in accordance with the ethical requirements for QSA and ASV companies, any such certificates, letters and other documentation must be accurate and not be in any way misleading.  Additionally, these certificates, letters and other documentation should be clearly identified as supplemental materials provided by the QSA or ASV; they should not be presented as documents endorsed by the PCI SSC, nor should they be considered replacements for the official PCI SSC templates and forms which have been approved by the payment brands.

The PCI SSC website contains reporting templates and forms which have been approved by all payment brands, including ROC templates, Attestations of Compliance, Self-Assessment Questionnaires, and Attestations of Scan Compliance for ASV scans. Compliance validation and reporting requirements are determined by the individual payment card brands and, irrespective of whether an organization is performing a self-assessment or has an onsite review completed by a QSA company, acceptance of a validation method outside of those listed on the Council website is ultimately up to the entity accepting the validation (that is, the acquiring bank or payment card brand). In many cases, certificates, letters or other documentation issued by QSA or ASV companies outside of the official PCI SSC templates may not be accepted by acquiring banks or payment card brands. ASVs and QSAs should encourage their clients to check with their acquirer or the payment brands directly to determine their compliance reporting requirements, including whether the submission of such certificates is acceptable.”

So all of you processors and acquiring banks that seem to think the only acceptable proof of PCI compliance is some mystical PCI DSS Compliance Certificate, stop demanding them.  They do not exist and never have existed.  The document you need for proof of PCI compliance is the Attestation Of Compliance (AOC), period.  All self-assessment questionnaires (SAQ) contain the AOC and there is a separate AOC form for those submitting a Report On Compliance (ROC).

And all of you QSAs and ASVs out there differentiating yourselves because you produce these nice, but essentially worthless, certificates, stop misinforming merchants, processors and acquiring banks by implying that QSAs and ASVs not producing such a certificate are somehow doing something wrong or worse, dishonest.

Now that the PCI SSC has clarified this situation, hopefully, this marketing ploy will stop.


Database 2012 Threats

I attended a Webinar recently put on by Application Security Inc. regarding the threats to databases for the coming year.  If you did not attend it, you missed a good session.  But the most disturbing thing brought up was their top 10 list of database vulnerabilities and misconfigurations.  Their top 10 list is:

  1. Default or weak passwords
  2. SQL injection
  3. Excessive user and group privileges
  4. Unnecessary DBMS features enabled
  5. Broken configuration management
  6. Buffer overflows
  7. Privilege escalation
  8. Denial of Service
  9. Unpatched RDBMS
  10. Unencrypted data

If you look at my post a while back on the 2011 Verizon Business Services’ reasons for why organizations were breached, there is a great correlation between Verizon’s report and what Application Security Inc. is saying.

Their first point about weak or default passwords is very clear and should not need to be discussed.  In this day and age, we should all be ashamed that this is even on the list, let alone the first item on the list.  The bottom line here is that, if you use default or weak passwords, you deserve to be breached.
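
Their point about default passwords lends itself to a quick automated check.  Here is a minimal sketch in Python that screens candidate credentials against a short list of well-known vendor defaults; the list and the function name are my own illustrations, and a real check would use a far larger dictionary.

```python
# A minimal, illustrative sketch: screen credentials against a short list of
# well-known vendor default accounts. A production check would use a far
# larger dictionary (and would never handle plaintext passwords casually).
KNOWN_DEFAULTS = {
    ("sa", ""),          # historic SQL Server default
    ("scott", "tiger"),  # historic Oracle sample account
    ("root", "root"),
    ("admin", "admin"),
}

def is_default_credential(user: str, password: str) -> bool:
    """Return True if the pair matches a known vendor default."""
    return (user.lower(), password.lower()) in KNOWN_DEFAULTS

print(is_default_credential("SCOTT", "tiger"))               # True
print(is_default_credential("app_svc", "S0m3L0ngRand0m!"))   # False
```

Running a sweep like this against every database account is a cheap way to prove the first item on the list does not apply to you.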

They brought up an interesting point about SQL injection attacks that a lot of organizations miss or underestimate: internal SQL injection.  Most organizations are so focused on the external threat that they forget about the threat from the inside.  Worse yet, most security professionals and DBAs are unaware of the threat SQL injection poses even without the Web.  Since most of today’s attacks are perpetrated once past the perimeter, protection from the inside attack is very relevant and very important, because once an attacker is on the inside, it is relatively trivial to use SQL injection or other techniques to obtain data.  More and more organizations are beginning to understand the insider threat and are firewalling all of their database servers away from the general user community as well as minimizing the number of users that have direct SQL access to those servers.
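
The injection threat, internal or external, is easy to demonstrate.  Here is a small, self-contained Python sketch using an in-memory SQLite database with a hypothetical customer table: concatenating input into the query lets the classic `' OR '1'='1` trick return every row, while a parameterized query treats the same input as plain data.

```python
import sqlite3

# Hypothetical table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

malicious = "x' OR '1'='1"

# Vulnerable: user input concatenated straight into the SQL string.
vulnerable = conn.execute(
    "SELECT name FROM customers WHERE name = '" + malicious + "'"
).fetchall()

# Safer: a parameterized query treats the input as data, not as SQL.
parameterized = conn.execute(
    "SELECT name FROM customers WHERE name = ?", (malicious,)
).fetchall()

print(vulnerable)     # the injected OR clause returns every row
print(parameterized)  # the same input matches no rows
```

The same discipline applies to internal reporting tools and batch jobs, not just Web front ends.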

Excessive privileges cannot always be addressed at the DBMS level.  In today’s packaged software world, a lot of the rights are managed and maintained at the application level and that security matrix is maintained in a database table.  The granularity that can be granted is usually where things go awry because the application’s security system only provides an “all or nothing” approach.  Application vendors are getting better with this because of SOX, HIPAA, PCI and the like.  However, organizations typically need to be on the most current release to have access to such enhanced security granularity.  Unfortunately, there are very few organizations that can afford the most current release or can implement it due to their extensive modifications.  The simplest way to address this issue is periodic review of database privileges and minimizing the number of users that have excessive privileges.  In the longer term, I expect we’ll see the return of the data dictionary with the addition of user rights and roles to manage this problem.
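
Pending better tooling, that periodic review can at least be scripted.  The sketch below uses Python with an in-memory SQLite table standing in for a hypothetical application security matrix (the table and column names are my own assumptions) and flags any user holding a blanket “ALL” grant for review.

```python
import sqlite3

# Stand-in for an application's security matrix table; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_privs (username TEXT, privilege TEXT)")
conn.executemany(
    "INSERT INTO user_privs VALUES (?, ?)",
    [("alice", "READ"), ("bob", "ALL"),
     ("carol", "READ"), ("carol", "WRITE"), ("dave", "ALL")],
)

# Flag anyone holding the blanket 'ALL' privilege for periodic review.
flagged = [row[0] for row in conn.execute(
    "SELECT DISTINCT username FROM user_privs "
    "WHERE privilege = 'ALL' ORDER BY username"
)]
print(flagged)  # ['bob', 'dave']
```

Run against the real security matrix on a schedule, a report like this gives management a short, concrete list of accounts to justify or trim.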

Unnecessary features enabled are a vendor and DBA issue.  In some cases, vendors make changing features impossible or nearly impossible once the RDBMS is installed.  In some cases, there are technical reasons why a feature must be enabled at installation.  However, there are also instances where features could be enabled or even disabled at any time, but because the vendor only supports doing that at installation, that is the only time you can deal with the feature.  This results in a lot of DBAs installing the RDBMS with every feature/function available, whether it is needed or not, just in case they might need it later on.  Do not get me wrong, I understand the drivers for this practice.  In today’s “I needed it yesterday” world, it is tough to be responsive when something will require an entire re-install of the RDBMS and migration of existing data in order to get something done.  It is time for IT people as a whole to start explaining to non-IT people that some tasks take time to do properly no matter how quickly anyone needs them completed.  Our infrastructure has become susceptible to attack in large part because of this rapid response mindset.  If we intend to make things secure, we need to stop and think things through before creating even larger issues.

The previous issue feeds directly into the next: broken configuration management.  Configuration management is broken because the vendors make it virtually impossible not to break it.  And even when configuring and changing the configuration is easy and manageable, DBAs are not always as structured as other IT operational disciplines.  As a result, whether talking about the configuration of the RDBMS or the client that loads on the workstation, configurations are too broad because of the “just in case” factor.  I know it is a pain to only enable what needs to be enabled and then six months later have to reinstall everything just for a particular feature, but that is the correct way to do things if you intend to be secure.

Buffer overflows, privilege escalation and denial of service are all common vulnerabilities that organizations will have differing levels of success in mitigating.  I will tackle the easiest to address first: privilege escalation.  If there is any area where security can always be addressed, it is privilege escalation.  The reason privilege escalation exists is because someone, usually a developer, created the issue by allowing users to perform a task that users should not be allowed to perform; if they were allowed to perform the function, their privileges would not need to be escalated to perform it.  The easiest thing to do is to disable those functions that require privilege escalation.  However, in some cases, that approach will create operational issues that are unacceptable.  In those cases, monitor the daylights out of those functions so that you can be sure the privilege escalation was not used for anything other than its intended purpose.

In a lot of cases, there is little that can be done to address a denial of service (DoS) attack short of blocking the offender(s).  Denial of service does not compromise information; it just makes the information stored in the database unavailable.  And for most organizations, that is an important distinction.  If no information has been or can be compromised, then DoS is an annoyance and should be treated as such.  However, some DoS attacks can be used to defeat security measures in the RDBMS by causing the RDBMS to fall back to a basic operational state.  It is in these situations that one has to be careful because information can be lost.  The easy fix is to put a firewall in front of the database and enable DoS attack protections.
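
The kind of protection such a firewall enforces can be sketched as simple connection rate limiting.  The following Python sketch is illustrative only (the class name and thresholds are made up, not any particular product’s behavior): it blocks a client once it exceeds a request budget within a sliding time window.

```python
from collections import deque

# Illustrative sliding-window rate limiter, the sort of DoS protection a
# firewall in front of a database might apply. Thresholds are hypothetical.
class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = {}  # client -> deque of request timestamps

    def allow(self, client, now):
        q = self.hits.setdefault(client, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over budget: treat as a flood and block
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
# Five rapid-fire requests from one client, 0.1 seconds apart.
results = [limiter.allow("10.0.0.5", now=0.1 * i) for i in range(5)]
print(results)  # [True, True, True, False, False]
```

Real firewalls add SYN-flood and protocol-level protections on top of this, but the budget-per-window idea is the core of it.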

Buffer overflows are the most insidious attacks because, in some cases, there is little that can be done to stop them.  A lot of security professionals make the success of buffer overflow attacks sound like they are all the result of sloppy coding practices.  And while there is some truth to that view, the amount of which depends on the skills of your programmers, the success of buffer overflow attacks is also the result of embedding too much flexibility into our applications and leveraging the capabilities of the RDBMS.  In today’s world of open constructs such as SQL and RegEx, we have effectively made everyone a potential database programmer, all in the name of expediency.  Yes, customer service is highly improved, but at what cost?  Web application firewalls can minimize buffer overflows by “learning” how your SQL calls are typically structured, but they are not a complete answer nor do they completely remove the risks.  The way to fix the problem is to reduce functionality and make applications more complicated and difficult to use.  For most organizations that is not an option.  As a result, we must minimize the risks but be willing to accept the risks that remain as a result of our desire for ease of use and flexibility.  Minimizing the risk may mean implementing that Web application firewall internally as well as externally.

While I was glad to see unpatched RDBMS software ranked low on the top 10 list, I was very disappointed that it was still in the top 10.  One would think, with all of the discussion about the importance of patching software, it would not appear in the top 10 at all.  I understand the issues of compatibility and testing that make patching difficult, but really?  Maybe you need to invest in more than one or two instances of the RDBMS.  This is the cost of doing business the correct way.  If you are not doing things the correct way, then do not complain when you have a breach.  While you saved yourself money on licensing costs on the front end, you likely paid for that cost savings a hundredfold on the back end.

I also understand the issues and fears with encryption.  For a lot of people, encryption is this mystical science that only certain “geeks” practice and practice well.  For others, the problem with encryption is the perceived loss of ready access to their data.  As time goes on, I would say that unencrypted data will rise to the top of the top 10 list.  Why?  Because the information age is all about the control of information.  The more information you control and can use to your advantage, the more power and control you have.  If your information can be readily obtained through public sources or the lax security surrounding your information systems, then you have little, if any, power or control.  The next 10 years will likely be spent by most organizations figuring out what information is critical to their business model and implementing the necessary protections around that information.  Critical information will be protected like the gold at Fort Knox because, to that organization, it is their “gold” and must be protected accordingly.  And that protection will likely involve encryption for some or all of it.

I know that people have a lot on their plates these days.  However, if you are a security person or a DBA, you need to leverage these surveys to your advantage and address the top 10 issues.  If more companies did this, less data would be breached.


Cannot Say It Better

If you read nothing else this week, you need to read this posting by Daniel E. Geer, Jr., Sc.D.

People in the Loop: Are They a Failsafe or a Liability?


Why The Push For EMV Adoption In The United States?

Have you noticed all of the press lately regarding the Europay, MasterCard and Visa (EMV) card coming out of Visa?  It has been very hard to miss.  As a result, I started wondering about the purpose of this full court press for EMV.

Before getting into my post, I need to be clear that EMV only refers to the chip in the EMV card.  In the past I have gotten a lot of feedback from Visa when I referred to EMV as “chip and PIN” even though the world almost universally refers to EMV as “chip and PIN.”

With that disclaimer, since last August, Visa USA has been making a concerted effort to get merchants to adopt EMV.  Just a week or so ago, there was another push by Visa USA to entice merchants to support EMV.  So what is the driver behind this push?  That is the $64,000 question and the more you talk to processors and merchants, the more confusing it gets.

Merchants are just as puzzled as I am regarding Visa USA’s EMV push.  In the case of a number of large merchants I have spoken with, they do not get it as they refreshed their card terminals and POS equipment over the last three years and there is no way they are going to swap all of that new gear for EMV-capable equipment.  These merchants are not even looking at contactless terminals.  Such an equipment swap this soon would not be cost effective.

But merchants question what EMV would do for them.  EMV was developed in response to the fall of the Iron Curtain when fraud ran rampant in Europe.  Credit cards were being cloned at an obscene rate and card present fraud was huge.  When EMV was fully implemented, card present fraud in Europe went to levels close to or a little lower than in the United States and EMV card present fraud has remained around those rates since.  Given where card present fraud rates are currently in the United States, introducing EMV would have a limited effect on card present fraud and that would not be enough to offset the costs of implementing EMV or contactless terminals.

So if it is not card present fraud, it must be card not present fraud that Visa USA wants to address, right?  Card not present fraud, particularly on eCommerce Web sites, is running almost out of control.  I would like to say that this increasing fraud rate is the reason for Visa USA’s push.  However, EMV does nothing to address the rapidly rising rate of card not present fraud.  The reason is that, in order for EMV to address card not present fraud, there would have to be some sort of interface that produces codes, single-use transaction numbers or similar that could be used by the consumer online.  But no such solution exists, so card not present fraud cannot be the driver either.

Back in August, Visa USA announced that merchants using EMV or contactless could avoid filing a PCI Report On Compliance (ROC) with Visa USA, so that must be the reason for the push.  However, at this year’s PCI Community Meeting in Phoenix, Arizona, PCI SSC General Manager Bob Russo made it very clear that, regardless of what Visa USA was saying about filing a ROC, all merchants are still required to prove that they are in compliance with the PCI DSS.  Other card brands reinforced this by reaffirming that they still require the merchant’s ROC and/or AOC as proof of compliance.  As a result, merchants save themselves very little by not having to file a ROC/AOC with Visa USA alone.

What about EMV being more secure?  While that is typically true for small and mid-sized merchants, large merchants that switch their own credit card transactions would still likely have card data in their switch systems if not elsewhere in their computer systems.  So claims by some, including at times Visa USA, that PCI compliance is easier with EMV are not totally true.  Large merchants in Europe will back this up.

So after 15 years of EMV, what is Visa USA trying to prove with this push of EMV?  Apparently only Visa USA can tell us because, for the rest of us, there are no business cases we can construct to justify the switch to EMV.  Obviously, Visa USA knows something that the rest of us do not.  Or do they?  I have consistently said that, without any card not present fraud solution, EMV is just a solution looking for a problem.

But wait, maybe there is something here that we have been missing.  Is it possible that Google Wallet and similar current and future applications make Visa USA feel threatened?  There may be some factual basis in that statement.

At the PCI Community Meeting last fall, I spoke with a number of processors that seemed to have an idea of why Visa USA was finally pushing EMV.  These processors indicated that the EMV push was being driven by Visa USA to get EMV into the United States market before Google Wallet and similar applications could take the advantages of EMV away.  After all, the United States is the largest credit card transaction market in the world and if EMV was not in the United States, there is no driver to get worldwide adoption pushed.

When I quizzed these processors about the supposed “advantages” of EMV, they said that was the real problem.  With the advent of smartphones and applications such as Google Wallet, EMV has no advantages.  As a result, merchants and banks have no incentive to implement EMV with these new technologies just on the horizon.

When I went back and talked to a couple of key merchants, they all said that they are waiting out the technology race to see what wins from a smartphone perspective.  If Google Wallet and the contactless approach win, then that is where they will head.  However, a lot of merchants are betting on one-time use transaction codes displayed as bar codes to win out, as they do not typically require any technology changes at their POS.  American Express went down the one-time use transaction code road (a 15-digit number that appears like a credit card number) around five years ago, but had only limited success with it for online transactions.  However, maybe the time has come for another try.

In the end, it is the consensus of merchants and processors that Visa USA has missed the window for EMV in the United States.  Most organizations believe that if Visa USA wanted EMV in the United States, they should have pushed it long ago.

