Posts Tagged ‘PA-DSS’



03 Apr 13

Merchant Beware – New Mobile Payment Solution Out In The Wild

Merchants need to be aware of a new mobile payment solution – Square from Square Inc.  A colleague pointed me to the Square site with the question, “Is this PCI compliant?”

Square appears to be a hardware/software solution for iPhones, iPads and Android devices.  It has a cute, square magnetic stripe reader for swiping cards, but it also appears to provide the capability to manually enter cardholder data through these devices’ virtual keyboards.  This all appears to be similar to the payment setup that used to appear in the first Apple iPhone commercials and that, for reasons that will become obvious, magically disappeared from those commercials very quickly and quietly.  It is also why Apple no longer uses iPhones or iPod Touches in their stores to process payments.

In referencing the PCI SSC’s PTS certification database, I could not find Square’s certification to the PTS standard.  Given the pictures on Square’s Web site, I really did not expect to find it certified, as there is no way it could meet the PTS standard.  Has Square submitted its solution for PTS certification?  It may have, but since the PCI SSC PTS certification database only lists devices that have completed the certification process, there is no way for anyone to know whether Square has been submitted until it is certified.  However, since the use of PTS certified devices is a requirement of all of the card brands, until Square is PTS certified, using a Square device to process credit cards violates a merchant’s merchant agreement.  Game over.

While not complying with the PTS standard is, in my opinion, a deal breaker, it is not the only PCI compliance issue.  In referencing the PCI SSC’s PA-DSS certification database, I also could not find the Square software application listed.  That situation was also not unexpected, as the PCI SSC announced in a press release on June 24, 2011 that it was suspending the PA-DSS certification review of all mobile payment applications indefinitely.  As a result, there is no way Square’s software will be PA-DSS certified for the foreseeable future, whether it has been submitted for PA-DSS certification or not.  The lack of PA-DSS certification is not itself a deal breaker for merchants wanting to use the Square software, but it means that merchants using it to process payments will have to have the Square software assessed to ensure it meets all of the PCI DSS requirements regarding payment applications.

And knowing what I know about all of these devices, I can guarantee that the Square software will not be PCI DSS compliant because all of these devices will store the cardholder data unencrypted for an untold amount of time until it is written over.  Even if Square’s software encrypts the data, the underlying OS will also collect the data in cleartext.  Forensic examinations of these devices have shown time and again that, regardless of what the software vendor did, the data still existed in memory unencrypted.  And that unencrypted data can exist in these devices’ memory for days, weeks or even months depending on transaction volume and the other applications loaded on the device.  It is this surreptitious OS data collection activity, the security issues with other applications and other security concerns that caused the PCI SSC to suspend its PA-DSS certification activities for these applications.

There is only one solution that uses an iPhone or iPod Touch that is PTS and PA-DSS certified at this time and it is from Verifone.  The reason that Verifone’s PAYware solution is certified is that: (1) Verifone submitted it for the PCI certifications prior to the June 24 suspension and, the bigger reason in my book, (2) it relies on a digital back, separate from the iPhone/iPod, that performs the card swipe and all of the card data processing and transmission in a secure manner.  The iPhone or iPod Touch is used only as a display and as a cellular/Wi-Fi conduit for network connectivity.

The only other mobile payment solutions I am aware of that are PTS compliant are purpose-built credit card terminals using Wi-Fi or cellular communications.  These are considered terminals by the PCI SSC, so their underlying software is not required to be PA-DSS certified at this time, but they are required to be PTS certified.  In addition, these terminals have been in use in Europe for quite some time, so they are a proven secure solution.

The bottom line is that it is the merchant’s responsibility to ask vendors the right questions and weed out non-PCI compliant solutions.  The card brands and the PCI SSC are not in the business of regulating vendors, they leave that to the marketplace.

If you are looking for a PCI compliant mobile payment solution, talk to Verifone, Equinox, Ingenico or other recognized card terminal manufacturers as those are going to be your only PCI certified mobile payment processing options at this time.

01 Jan 13

How The PCI Standards Will Really Die

Welcome to the new year.  I hope the holidays have been treating you well and the coming year is good as well.

There have been a number of articles written about why and how the PCI compliance process will die.  It is not that I look forward to the PCI standards dying, as they have brought a needed visibility to information security and privacy, not to mention the fact that PCI keeps me gainfully employed.  However, if things stay on their current trajectory, the PCI standards will eventually die, but not for the reasons being quoted in today’s articles.  The real killers of the PCI compliance process will be the card brands and the PCI Security Standards Council.  Yes, the very folks that brought us the PCI standards will bring about the ultimate demise of their precious set of standards.

The first death knell I see is that it is very easy to issue edicts from on high when you do not have to implement them.  Over the years, clarifications have been issued, quality assurance reviews performed, forensic examinations conducted and a host of other activities have resulted in “enhancements” to how the PCI standards are assessed and enforced.  Do not get me wrong, a lot of what has been done was needed and appreciated.

However, by the same token, some of what has come down has been a nightmare to implement.  Any QSAC not using some sort of automated system to conduct their PCI assessments will find it impossible to meet the current and any future documentation and tracking standards now required by the PCI SSC’s QA process.  Under the current standards, QSACs need to document who they interviewed and what the persons were interviewed about as well as tying documentation and observations to the tests performed.  Without some sort of automated process, these requirements are just too intensive to perform manually.

Documentation received and reviewed needs to have its file name, date of issue and a description of its purpose in the PCI assessment process documented.  The basic PCI DSS requires a minimum of around 200 discrete documents for the PCI assessment process.  The average we see for most of our engagements is over 600 documents, which include not only policies, standards and procedures, but configuration files, interview notes and observations such as screen shots, log files and file dumps.  You really have to question any QSAC that tells you they manually manage the process.  Either they have an amazing and magically efficient project management process, they have very, very inexpensive staff (i.e., overseas labor), or they are shortcutting the process, producing a work product that does not comply with the PCI SSC QA program and have yet to be assessed by the PCI SSC (the most likely scenario).
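To give a sense of what that tracking entails, here is a minimal sketch of the metadata expected for each piece of evidence.  The Python structure and the sample document names are purely hypothetical; the point is simply that file name, issue date, purpose and the tests a document supports get recorded together rather than living in someone’s head.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvidenceDocument:
    """One piece of assessment evidence: a policy, configuration file, screen shot, etc."""
    file_name: str
    issued: date
    purpose: str                                       # why it was collected
    requirements: list = field(default_factory=list)   # PCI DSS tests it supports

# Hypothetical example: tie a firewall configuration export to the tests it evidences.
doc = EvidenceDocument(
    file_name="fw-core-config-2012-11-01.txt",
    issued=date(2012, 11, 1),
    purpose="Core firewall ruleset reviewed for the requirement 1 firewall tests",
    requirements=["1.1.6", "1.2.1", "1.3"],
)
print(doc.file_name, doc.issued, doc.requirements)
```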

Even simple SharePoint or Lotus Notes solutions are not cheap when you consider the cost of the server(s) and the storage of all of the documentation collected, which can run around 5 to 10GB per project, as well as all of the requisite system maintenance.  Servers and storage may be cheap, but it all adds up the more clients you assess.  And speaking of the storage of documentation, the PCI SSC requires that documentation related to PCI assessments be stored for at least three years.  For those of us with electronic work paper management systems, this is not a problem.  However, given the amount of paper generated by these projects, those QSACs using traditional paper filing methods will find a lot of shelf space taken up by their PCI engagements if they are truly following the procedures required by the PCI SSC.

All of this drives up the cost of a proper PCI assessment, more than I think the card brands and the PCI SSC are willing to admit.  It is not that I think the card brands and PCI SSC do not care about this situation, but more that they do not understand the operational ramifications of their edicts.  The card brands and PCI SSC tread a very fine line here and, to this point, they have been heavy handed in issuing their edicts.  Going forward, the PCI SSC needs to ask the QSACs, Participating Organizations and ASVs to assess the cost and time impacts of these edicts so that they can be weighed against their benefits, versus what is done now, which is more of a procedural and proofing review.  If this is not done, there will soon come a point where merchants and service providers push back hard and refuse to go through the process due to the cost and the amount of time involved to be assessed.

The next death knell is the inane process that is called the PCI Report On Compliance (ROC).  When the PCI SSC did not have access to the QSACs’ work papers, the current ROC writing process made some sense, as there was no other way for the PCI SSC or the processors and acquiring banks to know if the QSACs had really done the work they said they had done.  However, all of that changed a number of years ago when the PCI SSC required QSACs to add a disclaimer to their contracts stating that the PCI SSC had the right to review all work products.  Yet even with this change, we continue to have to write an insanely detailed ROC, typically running to a minimum of 300+ pages for even the most basic of assessments.

Unfortunately, there are QSACs out there that apparently have not been through the PCI SSC QA process and that most dreaded of all states – Remediation.  As a result, they have much lower costs because they are not documenting their assessment work as completely as they need to and are not sampling, observing or interviewing like QSACs that have been through the QA process.  In addition, based on some work products we have seen, they also do not care about the quality of the resulting ROC, as it looks entirely like a ‘find and replace’ of a template and makes no sense when you read it.  In talking to other large QSACs that have been through the QA process multiple times, the PCI SSC has indicated that it is monitoring the large QSACs more than the little QSACs because there is more risk with the large QSACs.  While true to an extent, we have encountered a number of smaller QSACs that perform assessments for large clients due to their much lower cost structure and their willingness to ‘overlook’ compliance issues.  If the PCI SSC does not go after these QSACs soon, there will likely be a number of breaches that occur due to those QSACs’ lack of diligence in performing their assessments.

I know of a number of QSACs that would like to see Bob Russo and the representatives of the various card brands actually work as staff on a few PCI assessment engagements so that they can better appreciate the inordinate amount of work involved in generating a ROC.  I think they would be shocked at the amount of work effort they have driven into a process that is already too complicated and prone to error.

As it stands today, the ROC writing, review and proofing process is probably 50% to 60% of a good QSAC’s project costs.  To address this, the PCI SSC QA group tells QSACs to develop one or more templates for writing the ROC which, from what we have seen from some other QSACs, means a lot of mass ‘find and replace’ to speed the ROC writing process.  For the last few years, a number of QSACs have brought the ROC writing process up at the Community Meetings.  However the card brands continue to shoot down any sort of changes to the process.  As a result, the cost of producing a ROC is driven by the size and complexity of the merchants’ or service providers’ cardholder data environment (CDE).  These costs will only continue to rise as long as the PCI SSC does not allow QSACs to mark items as ‘In Place’ with only a check box and rely on the QSAC’s work papers versus the verbosity required now.  If this sort of process can work for financial auditors, it can work here as well.

A third death knell is the PCI SSC and card brands continuing to claim that the majority of breaches are the result of organizations not complying with the PCI DSS.  In discussions with a number of the PCI forensic examination companies, I am hearing that the card brands cannot believe that more and more organizations were PCI compliant at the time of their breach.  The PCI SSC and card brands have apparently convinced themselves that the PCI standards are “perfect” and they cannot imagine that an organization could be breached unless that organization was not complying with the PCI standards.  There is no security standard that I am aware of that totally prevents breaches.  So while the PCI standards are good baseline security standards, the card brands and PCI SSC seem to have forgotten that security is not perfect and that any security standard only minimizes the damage done when a breach occurs, and only if the standard is truly followed.

And as organizations have gotten the PCI “religion,” the effort required to compromise them from the outside via traditional attacks has increased significantly.  As a result, successful attackers have changed strategy and work on social engineering their way past the bulk of an organization’s security measures.  The PCI DSS only has a little bit on social engineering in requirement 12.6 regarding security awareness training.  And even those organizations with the most robust of security awareness programs will tell you that, even after extensive security awareness training, human beings are still fallible and that some people still do very questionable things that continue to put organizations at risk, sometimes significant risk.  Even when you have the most diligent of employees, they still make mistakes in judgment from time to time.

Until the human element can be totally removed, there will always be a certain amount of risk that will never go away.  Again, the PCI SSC and card brands seem to not want to acknowledge the failings of the human element and appear to believe that technology is the savior based on the focus of the PCI standards.  However time and again, every security professional has seen very sophisticated security technologies circumvented by human error or just plain apathy towards security (i.e., “it always happens to someone else, not my organization” or “we’re too small to be a target”).

Until the PCI SSC and the card brands drop the “holier than thou” attitude toward the PCI standards and stop the public pillorying of organizations that have been breached, there will continue to be editorial commentary regarding the pointlessness of the standards and ever more serious pushback against complying with them.

These are the reasons why the PCI SSC and the card brands will be the ones that kill the PCI standards.  At the moment, they are so far removed from the process that they do not understand how complicated and expensive it has become, which is why merchants and service providers are complaining about the ever increasing costs and effort related to the PCI assessment process.

The PCI SSC and card brands also seem to have forgotten that QSACs have to make money doing these assessments and, when you pile on clarifications and edicts that do nothing to streamline and simplify the process, you are only driving the costs of the process higher.  And higher costs only make merchants and service providers, who are on thin margins to begin with, even more incentivized to use the much lower cost QSACs, driving the diligent QSACs out of the market and thus increasing the likelihood of breaches.

Again, it is not that I want the PCI standards to go away as I think they have brought a real benefit.  However, if these issues are not addressed, the PCI standards will end up going away.  I fear that, with them gone, there will be no carrot to ensure the security of cardholder information and we will end up back where we were before the PCI standards existed.

24 Jul 12

PA-DSS Validation Clarification

On July 23, 2012 we received the following communication from James Barrow, Director of AQM Programs, with the PCI Security Standards Council.  I found it worthy of posting so that everyone understands the procedures their QSA needs to follow regarding applications that are supposedly PA-DSS validated.

The Council has recently received inquiries related to the Validation of Payment Applications process and there seems to be some confusion related to the PCI SSC listing of validated applications.  The Council’s website is the authoritative listing of applications that have been accepted by the Council.  This is the listing that should be checked by the assessors during each engagement with a merchant.  If the merchant’s application (both name and version number) are not on this list, it cannot be considered validated.

There are some instances where a merchant might provide you with a document (not issued by the Council) stating that the application has undergone some type of review and has been deemed compliant.  However, if the payment application is not listed on the PCI SSC website, it cannot be considered validated.  If such an instance of this arises during one of your engagements, you as the assessor must perform your due diligence in determining if the application is capable of meeting all of the DSS requirements.

For the PA-DSS community we realize that some applications are not applicable to the PA-DSS program.  The eligibility for the program is contained in the document entitled “Which Applications are Eligible for PA-DSS Validation?  A Guiding Checklist” available at the Council’s document library.  If an application is not eligible for the program, it does not preclude you from performing an assessment.  However, at the end of the assessment you cannot communicate to your client that per the assessment the application has been “validated”, nor can the client (vendor) expect the assessment to have any bearing on a merchant’s ability to achieve DSS compliance.

Following the above guidance should help to remove any miscommunication or misunderstanding in the payment ecosystem as to what applications are considered validated, and the steps that need to be taken should a non-validated application be identified in the field.

The key here is that if the version of your payment application is not on the PCI SSC’s PA-DSS list, it is not considered PA-DSS validated and your QSA must assess it accordingly.  I cannot tell you how many merchants we encounter where they have a different version of the application, yet the merchant insists that we treat it the same as the version that is PA-DSS validated.

We also run into software vendors that insist that the version the merchant is running is not significantly different from the version that is PA-DSS validated.  While this could be an accurate statement, the vendor needs to have submitted the version to a PA-QSA for validation of that fact.  The PA-DSS has a procedure that the PA-QSA can follow to determine that version changes have not affected cardholder data processing and the application’s PA-DSS validation.  Without that validation, as a QSA, our hands are tied and we must conduct a full assessment of the application under the PCI DSS.

Much to the chagrin of a lot of merchants, a PA-DSS validation does not imply that they are PCI DSS compliant.  There is also this mistaken belief by merchants that a PA-DSS validation implies that the QSA does not have to assess the application.  Under the PCI DSS, a QSA still must assess the application’s implementation and ensure that it was implemented per the vendor’s instructions to maintain its PA-DSS validation.  The trouble is that this implementation assessment may not save much, if any, time for the QSA.

28 Apr 12

Is Security Broken? And How I Propose To Fix It

Dennis Fisher has a blog post entitled ‘The Security Game Needs To Change’ out on ThreatPost.  The premise of this post is that the practice of securing networks and applications is broken.  Then we have the CEO of RSA, Art Coviello, saying that security models are inadequate.

While I think Mr. Fisher and Mr. Coviello are correct in stating that security is broken, I think they have missed the point as to why it is broken and how to fix it.  Mr. Fisher quotes Jeff Jones of Microsoft’s Trustworthy Computing Initiative for his suggested solution.  Mr. Jones states, “What we really need is to get more smart people thinking about the problems we haven’t solved yet.”  Really?  Anyone remember the episode of ‘The Big Bang Theory’ where the guys try to help Penny build the multimedia system from IKEA?  Talk about available brain power.  Yet rather than assist Penny with the assembly of the unit, they go off on a tangent developing an over-engineered and sophisticated solution for a non-existent problem.

That is where I believe information security is today.  We seem to be like Don Quixote, off on tangents such as understanding the motivations of the enemy, anticipating the next attack and other windmill tilting.  We keep trying to adapt military approaches to a problem that is being conducted in a very non-military way.  In a true war, organizations would be investing in creating an offensive capability of cyber-armies to go into cyber-battle with the enemy.  And while there are discussions about organizations having offensive capabilities, security professionals are still in a defensive posture protecting the organization.

If we are going to fix security, then what we need is a serious paradigm shift.  If we will always be in a defensive posture, then the paradigm we should be using is the Fort Knox approach.  We focus on what information is important to our organization and go about the business of building a ‘Fort Knox’ to protect that information.  Once we begin focusing our efforts on protecting our organization’s critical information, we will find that the rest of our security tasks become much easier.  After all, Fort Knox is predicated on a defensive posture, not an offensive one.

I am sure a lot of you are asking, “So doing all of this will perfectly protect my information?”  Not even close.  As I consistently say, security is not perfect and never will be.  No matter how much we try, there will always be people involved somewhere in the process and people are fallible.  The concept is that if an incident does occur, you will recognize it quickly, stop it in its tracks and minimize its impact.  Will you lose information?  Hopefully not, but any information loss will not be significant because you recognized the problem almost immediately and dealt with it.

If you are frustrated with security, change your approach.  Until you do that, you will continue to have a broken security model.

17 Mar 12

When Will The PCI SSC And Card Brands Stop The Mobile Payment Insanity?

This week PayPal introduced Here, their mobile payment processing application for Apple iOS and Android devices.  The good news is that PayPal Here at least appears to encrypt cardholder data, but that appears to be the extent of the good news.

If you pay attention to the latest Suzie’s Lemonade Verizon Wireless advertisement, you will see Intuit’s GoPayment solution processing a payment on a tablet computer.  In another bit of good news, the Intuit GoPayment solution also encrypts cardholder data.

But what is most disconcerting is that this is just another in a string of mobile payment processing solutions to be introduced with no way of knowing whether or not the security claims are accurate or can even be trusted.  This is because the PCI Security Standards Council last year halted the certification of these mobile payment processing solutions under the PIN Transaction Security (PTS) and Payment Application Data Security Standard (PA-DSS) programs.  Do not get me wrong, the Council was in a tough spot and did not have a plan as to how to deal with these solutions, so they called a halt while they went back and reassessed things.

However, the marketplace is not waiting.  So while the marketplace continues to deliver solutions, merchants are left to their own devices to determine whether any of these mobile payment processing solutions can be trusted.  And what I am fearful of is that small merchants, who are the marketing target of these solutions, will be put out of business should a device somehow be compromised or lost.

So what are the questions surrounding the PayPal Here?

First, what is the encryption algorithm that is used?  I am guessing that they are using DUKPT, as that is common with these sorts of devices, but it is something that should be explicitly identified in their FAQ.  What PayPal currently says is, “… the PayPal Here card reader is encrypted and is backed by over a dozen years of experience of providing buyers and sellers the highest levels of security.”  If they are using DUKPT, can the key be changed?  Most of the mobile solutions I have seen do not allow for the key change under DUKPT which could be a problem.

Then there is the fact that you can manually enter a cardholder’s information through the iOS/Android application.  Given that these systems are notorious for having keyboard loggers of industrial strength, any manual input will be recorded on the device.  I am guessing that would include cardholder name, PAN and expiration date.  However, if PayPal Here collects the CVV/CVC/CID value in manual entry, that would be an extremely bad thing, as the device will retain that value until it is overwritten months later.  Again, there is no documentation to determine what is allowed to be manually entered, so we cannot assess how much risk this might introduce.

But the scariest feature of the PayPal Here is unrelated to credit cards; it is the remote capture of checks.  PayPal Here allows the merchant to scan a check and then submit it to their financial institution.  Again, given the data retention of these devices, I can only guess that the check images processed through the PayPal Here application will remain on the device until the space is needed which could be months.  The problem with all of this is that if people think credit card security was questionable, check processing security is nonexistent.  As a result, anyone with a modicum of knowledge could use the check information on one of these devices to drain every bank account stored.

Let us hope that the PCI Security Standards Council and the card brands quickly get back to certifying these applications so that merchants are not using insecure payment solutions.

29 Feb 12

PCI DSS Compliance Certificates

In this month’s PCI SSC QSA Newsletter, the FAQ of the Month is about so called ‘PCI DSS Compliance Certificates’.  I started to hear about these a couple of years ago, but it got really big last year when I started running into processors and acquiring banks demanding them.  I had a particularly troubling conversation with a processor who demanded that we produce one for one of our clients.  When offered the PCI DSS Attestation Of Compliance (AOC), this processor acted as though we were trying to put something over on them.  When I asked him where I was supposed to get such a certificate when it does not exist on the PCI SSC Web site, he accused me of not being a QSA because all QSAs know what the certificate looks like and where to get it.

As a result, a lot of QSAs must have submitted a question regarding these certificates like I did.  Here is the PCI SSC’s response.

“In addition to the official PCI SSC reporting forms and templates, some QSA or ASV companies provide certificates, letters or other documentation as confirmation that an organization is PCI DSS compliant. The PCI SSC does not prevent QSAs or ASVs from producing this type of documentation, as it is considered an additional service which the assessor company may elect to provide and is therefore outside of the purview of the Council.  However, in accordance with the ethical requirements for QSA and ASV companies, any such certificates, letters and other documentation must be accurate and not be in any way misleading.  Additionally, these certificates, letters and other documentation should be clearly identified as supplemental materials provided by the QSA or ASV; they should not be presented as documents endorsed by the PCI SSC, nor should they be considered replacements for the official PCI SSC templates and forms which have been approved by the payment brands.

The PCI SSC website contains reporting templates and forms which have been approved by all payment brands, including ROC templates, Attestations of Compliance, Self-Assessment Questionnaires, and Attestations of Scan Compliance for ASV scans. Compliance validation and reporting requirements are determined by the individual payment card brands and, irrespective of whether an organization is performing a self-assessment or has an onsite review completed by a QSA company, acceptance of a validation method outside of those listed on the Council website is ultimately up to the entity accepting the validation (that is, the acquiring bank or payment card brand). In many cases, certificates, letters or other documentation issued by QSA or ASV companies outside of the official PCI SSC templates may not be accepted by acquiring banks or payment card brands. ASVs and QSAs should encourage their clients to check with their acquirer or the payment brands directly to determine their compliance reporting requirements, including whether the submission of such certificates is acceptable.”

So all of you processors and acquiring banks that seem to think the only acceptable proof of PCI compliance is some mystical PCI DSS Compliance Certificate, stop demanding them.  They do not exist and never have existed.  The document you need for proof of PCI compliance is the Attestation Of Compliance (AOC), period.  All self-assessment questionnaires (SAQ) contain the AOC and there is a separate AOC form for those submitting a Report On Compliance (ROC).

And all of you QSAs and ASVs out there differentiating yourselves because you produce these nice, but essentially worthless, certificates, stop misinforming merchants, processors and acquiring banks by implying that QSAs and ASVs not producing such a certificate are somehow doing something wrong or worse, dishonest.

Now that the PCI SSC has clarified this situation, hopefully, this marketing ploy will stop.

17 Feb 12

Database 2012 Threats

I attended a Webinar recently put on by Application Security Inc. regarding the threats to databases for the coming year.  If you did not attend it, you missed a good session.  But the most disturbing thing brought up was their top 10 list of database vulnerabilities and misconfigurations.  Their top 10 list is:

  1. Default or weak passwords
  2. SQL injection
  3. Excessive user and group privileges
  4. Unnecessary DBMS features enabled
  5. Broken configuration management
  6. Buffer overflows
  7. Privilege escalation
  8. Denial of Service
  9. Unpatched RDBMS
  10. Unencrypted data

If you look at my post from a while back on the 2011 Verizon Business Services findings on why organizations were breached, there is a great correlation between Verizon’s report and what Application Security Inc. is saying.

Their first point about weak or default passwords is very clear and should not need to be discussed.  In this day and age, we should all be ashamed that this is even on the list, let alone the first item on the list.  The bottom line here is that, if you use default or weak passwords, you deserve to be breached.

They brought up an interesting point about SQL injection attacks that a lot of organizations miss or underestimate, and that is internal SQL injection.  Most organizations are so focused on the external threat that they forget about the threat from the inside.  Worse yet, most security professionals and DBAs are unaware of the threat SQL injection poses even without the Web.  Since most of today’s attacks are perpetrated once past the perimeter, protection from the inside attack is very relevant and very important, because once an attacker is on the inside, it is relatively trivial to use SQL injection or other techniques to obtain data.  More and more organizations are beginning to understand the insider threat and are firewalling all of their database servers away from the general user community as well as minimizing the number of users that have direct SQL access to those servers.
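The defense is the same whether the caller is a Web application or an internal tool: bind user input as data, never paste it into the SQL string.  Here is a minimal sketch using Python’s built-in sqlite3 module; the table and values are made up for illustration, and the same pattern applies to any DB-API driver.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (id INTEGER PRIMARY KEY, holder TEXT, last4 TEXT)")
conn.execute("INSERT INTO cards (holder, last4) VALUES (?, ?)", ("J SMITH", "1234"))

user_input = "J SMITH' OR '1'='1"   # hostile input from an internal user or application

# Vulnerable pattern: string concatenation lets the input rewrite the query.
# conn.execute("SELECT holder, last4 FROM cards WHERE holder = '" + user_input + "'")

# Safe pattern: the driver binds the value as data, so it can never become SQL.
cursor = conn.execute("SELECT holder, last4 FROM cards WHERE holder = ?", (user_input,))
print(cursor.fetchall())   # [] - the injection attempt returns nothing
```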

Excessive privileges cannot always be addressed at the DBMS level.  In today’s packaged software world, a lot of the rights are managed and maintained at the application level and that security matrix is maintained in a database table.  The granularity that can be granted is usually where things go awry because the application’s security system only provides an “all or nothing” approach.  Application vendors are getting better with this because of SOX, HIPAA, PCI and the like.  However, organizations typically need to be on the most current releases to have access to such enhanced security granularity.  Unfortunately, there are very few organizations that can afford the most current release or can implement the most current release due to their extensive modifications.  The simplest way to address this issue is the periodic reviews of database privileges and minimizing those users that have excessive privileges.  In the longer term, I expect we’ll see the return of the data dictionary with the addition of user rights and roles that will manage this problem.
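Those periodic privilege reviews can be at least partially automated.  The sketch below assumes a PostgreSQL database and the psycopg2 driver; the connection string and role names are placeholders.  The idea is simply to pull table-level grants from the standard information_schema views and flag the broad ones for follow-up.

```python
import psycopg2

# Placeholder connection string; point this at the database under review.
conn = psycopg2.connect("dbname=appdb user=auditor")

# information_schema.role_table_grants is standard SQL and lists table-level grants.
sql = """
    SELECT grantee, table_name, privilege_type
    FROM information_schema.role_table_grants
    WHERE grantee NOT IN ('postgres')
    ORDER BY grantee, table_name
"""

with conn.cursor() as cur:
    cur.execute(sql)
    for grantee, table_name, privilege in cur.fetchall():
        # Flag write privileges for review with the application owner.
        if privilege in ("INSERT", "UPDATE", "DELETE"):
            print(f"review: {grantee} has {privilege} on {table_name}")
```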

Unnecessary features enabled are a vendor and DBA issue.  In some cases, vendors make changing features impossible or nearly impossible once the RDBMS is installed.  In some cases, there are physical reasons why a feature must be enabled at installation.  However, there are also instances where features could be enabled or even disabled at any time, but because the vendor only wanted to handle that at installation, that is the only time you can deal with the feature.  This results in a lot of DBAs installing the RDBMS with every feature/function available, whether it is needed or not, just in case they might need it later on.  Do not get me wrong, as I understand the drivers for this practice.  In today’s “I needed it yesterday” world, it is tough to be responsive when something will require an entire re-install of the RDBMS and migration of existing data in order to get something done.  It is time for IT people as a whole to start explaining to non-IT people that there are just some tasks that take time to do properly, no matter how quickly anyone needs them completed.  Our infrastructure has become susceptible to attack in large part because of this rapid response desire.  If we intend to make things secure, we need to stop and think things through before creating even larger issues.

The previous issue feeds directly into the next: broken configuration management.  Configuration management is broken because the vendors make it virtually impossible not to break it.  And even when configuring and changing the configuration is easy and manageable, DBAs are not always as structured as other IT operational disciplines.  As a result, whether we are talking about the configuration of the RDBMS or the client that loads on the workstation, configurations are too broad because of the “just in case” factor.  I know it is a pain to only enable what needs to be enabled and then six months later have to reinstall everything just for a particular feature, but that is the correct way to do things if you intend to be secure.

Buffer overflows, privilege escalations and denial of service are all common vulnerabilities that organizations will have differing levels of success in mitigating.  I will tackle the easiest to address first: privilege escalation.  If there is any area where security can always be addressed, it is privilege escalation.  The reason privilege escalation exists is that someone, usually a developer, created the issue by deciding to allow users to perform a task that those users should not be allowed to perform; if they were allowed to perform the function, they would not need their privileges escalated to perform it.  The easiest thing to do is to disable those functions that require privilege escalation.  However, in some cases, that approach will create operational issues that will be unacceptable.  In those cases, monitor the daylights out of things so that you can be sure the privilege escalation was not used for anything beyond its intended purpose.

In a lot of cases, there is little that can be done to address a denial of service (DoS) attack short of blocking the offender(s).  Denial of service does not compromise information; it just makes the information stored in the database unavailable.  And for most organizations, that is an important distinction.  If no information has been or can be compromised, then DoS is an annoyance and should be treated as such.  However, some DoS attacks can be used to defeat security measures in the RDBMS by causing the RDBMS to fall back to a basic operational state.  It is in these situations that one has to be careful, because information can be lost.  The easy fix is to put a firewall in front of the database and enable DoS attack protections.

Buffer overflows are the most insidious attacks because, in some cases, there is little that can be done to stop them.  A lot of security professionals make the success of buffer overflow attacks sound like it is all the result of sloppy coding practices.  And while there is some truth to that view, the amount of which depends on the skills of your programmers, the success of buffer overflow attacks is also the result of embedding too much flexibility into our applications and leveraging the capabilities of the RDBMS.  In today’s world of open constructs such as SQL and RegEx, we have effectively made everyone a potential database programmer, all in the name of expediency.  Yes, customer service is highly improved, but at what cost?  Web application firewalls can minimize buffer overflows by “learning” how your SQL calls are typically structured, but they are not a complete answer nor do they completely remove the risks.  The way to fix the problem is to reduce functionality and make applications more complicated and difficult to use.  For most organizations that is not an option.  As a result, we must minimize the risks but be willing to accept the risks that remain as a result of our desire for ease of use and flexibility.  Minimizing the risk may mean implementing that Web application firewall internally as well as externally.

While I was glad to see unpatched RDBMS software low on the top 10 list, I was very disappointed that it was still in the top 10.  One would think that, with all of the discussions about the importance of patching software, this would not appear in the top 10 at all.  I understand the issues of compatibility and testing that make patching difficult, but really?  Maybe you need to invest in more than one or two instances of the RDBMS.  This is the cost of doing business the correct way.  If you are not doing things the correct way, then do not complain when you have a breach.  So while you saved yourself money on licensing costs on the front end, you likely paid for that cost savings a hundredfold on the back end.

I also understand the issues and fears with encryption.  For a lot of people, encryption is this mystical science that only certain “geeks” practice and practice well.  For others, the problem with encryption is the perceived loss of ready access to their data.  As time goes on, I would say that unencrypted data will rise to the top of the top 10 list.  Why?  Because the information age is all about the control of information.  The more information you control and can use to your advantage, the more power and control you have.  If your information can be readily obtained through public sources or the lax security surrounding your information systems, then you have little, if any, power or control.  The next 10 years will likely be spent by most organizations figuring out what information is critical to their business model and implementing the necessary protections around that information.  Critical information will be protected like the gold at Fort Knox because, to that organization, it is their “gold” and it must be protected accordingly.  And that protection will likely involve encryption for some or all of it.

I know that people have a lot on their plates these days.  However, if you are a security person or a DBA, you need to leverage these surveys to your advantage and address the top 10 issues.  If more companies did this, less data would be breached.

01 Jan 12

Encryption Basics

NOTE: This is a revised version of my original post to reflect readers concerns regarding statements made that do not reflect best practices surrounding encryption key management.  A big thank you to Andrew Jamieson who reviewed and commented on this revised posting.

During the last couple of years, I have run into more and more questions regarding encryption and encryption key management than I thought existed.  As a result, I have come to the realization that, for most people, encryption is some mystical science.  The stories of the Enigma machine and Bletchley Park have only seemed to add to that mysticism.  Over the years, I have collected my thoughts based on all of the questions and developed this distilled and very simplified version of guidance for those of you struggling with encryption.

For the security and encryption purists out there, I do not represent this post in any way, shape, or form as the “be all, end all” on encryption.  Volumes upon volumes of books and Web sites have been dedicated to encryption, which is probably why it gets the bad reputation it does, as the vast majority of these discussions are about as esoteric as they can be.

In addition, this post is written in regards to the most common method of encryption used in encrypting data stored in a database or file and that is the use of an encryption algorithm against a column of data or an entire file.  It does not cover public key infrastructure (PKI) or other techniques that could be used.  So please do not flame me for missing your favorite algorithm, other forms of encryption or some other piece of encryption minutiae.

There are all sorts of nuances to encryption methods and I do not want to cloud the basic issues so that people can get beyond the mysticism.  This post is for educating people so that they have a modicum of knowledge to identify hyperbole from fact.

The first thing I want to clarify to people is that encryption and hashing are two entirely different methods.  While both methods obscure information, the key thing to remember is that encryption is reversible and hashing is not reversible.  Even security professionals get balled up interchanging hashing and encryption, so I wanted to make sure everyone understands the difference.
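As a quick illustration of the difference, here is a short Python sketch using the standard library’s hashlib for hashing and the third-party cryptography package for encryption (the package is assumed to be installed; the PAN is a test value).

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

pan = b"4111111111111111"   # test card number

# Hashing: one-way.  There is no key, and the digest cannot be turned back into the PAN.
digest = hashlib.sha256(pan).hexdigest()

# Encryption: reversible, but only for whoever holds the key.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, pan, None)
recovered = AESGCM(key).decrypt(nonce, ciphertext, None)

print(digest)              # obscured and irreversible
print(recovered == pan)    # True: decryption restores the original value
```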

The most common questions I get typically revolve around how encryption works.  Non-mathematicians should not need to know how an encryption algorithm works; that is for the experts that develop the algorithms and prove that they work.  In my opinion, unless you are a mathematician studying cryptography, I recommend that you trust the research conducted by the experts regarding encryption algorithms.

That is not to say you should not know strong cryptography from weak cryptography.  I am just suggesting that the underlying mathematics that defines a strong algorithm can be beyond even some mathematicians, so why we expect non-mathematicians to understand encryption at this level is beyond me.  My point is that the algorithms work.  How they work is not, and should not be, a prerequisite for management or even security professionals to use encryption.

This leads me to the most important thing people need to know about encryption.  If you only take away one thing from this post, it would be that strong encryption comes down to four basic principles.

  • The algorithm used;
  • The key used;
  • How the key is managed; and
  • How the key is protected.

If you understand these four basic principles you will be miles ahead of everyone else that is getting twisted up in the details and missing these key points.  If you look at PCI requirement 3, the tests are structured around these four basic principles.

On the algorithm side of the equation, the best algorithm currently in use is the Advanced Encryption Standard (AES).  AES was selected by the United States National Institute of Standards and Technology (NIST) in 2001 as the official encryption standard for the US government.  AES replaced the Data Encryption Standard (DES) that was no longer considered secure.  AES was selected through a competition where 15 algorithms were evaluated.  While the following algorithms were not selected as the winner of the NIST competition, Twofish, Serpent, RC6 and MARS were finalists and are also considered strong encryption algorithms.  Better yet, for all of you in the software development business, AES, Twofish, Serpent and MARS are open source.  Other algorithms are available, but these are the most tested and reliable of the lot.

One form of DES, Triple DES (3DES) with a 168-bit key, is still considered strong encryption.  However, how long that will remain the case is up for debate.  I have always recommended staying away from 3DES 168-bit unless you have no other choice, which can be the case with older devices and software.  If you are currently using 3DES, I would highly recommend you develop a plan to migrate away from it.

This brings up another key take away from this discussion.  Regardless of the algorithm used, they are not perfect.  Over time, encryption algorithms are likely to be shown to have flaws or be breakable by the latest computing power available.  Some flaws may be annoyances that you can work around or you may have to accept some minimal risk of their continued use.  However, some flaws may be fatal and require the discontinued use of the algorithm as was the case with DES.  The lesson here is that you should always be prepared to change your encryption algorithm.  Not that you will likely be required to make such a change on a moment’s notice.  But as the experience with DES shows, what was considered strong in the past, is no longer strong or should not be relied upon.  Changes in computing power and research could make any algorithm obsolete thus requiring you to make a change.

Just because you use AES or another strong algorithm does not mean your encryption cannot be broken.  If there is any weak link in the use of encryption, it is the belief by many that the algorithm is the only thing that matters.  As a result, we end up with a strong algorithm using a weak key.  Weak keys, such as a key composed of the same character, a series of consecutive characters, an easily guessed phrase or a key of insufficient length, are the reasons most often cited for why encryption fails.  In order for encryption to be effective, encryption keys need to be strong as well.  Encryption keys should be a minimum of 32 characters in length.  However, in the encryption game, the longer and more random the characters in a key the better, which is why you see organizations using 64 to 256 character long random key strings.  When I use the term ‘character’ that can mean printable characters: upper and lower case alphabetic as well as numeric and special characters.  But ‘character’ can also include hexadecimal values if your key entry interface allows hexadecimal values to be entered.  The important thing to remember is that the values you enter for your key should be as hard to guess or brute force as the maximum key size of the algorithm allows.  For example, using a seven character password to generate a 256-bit AES key does not provide the full strength of that algorithm.

This brings us to the topic of encryption key generation.  There are a number of Web sites that can generate pseudo-random character strings for use as encryption keys.  To be correct, any Web site claiming to generate a “random” string of characters is only pseudo-random; because the character generator is a mathematical formula, it is by its very nature not truly random.  My favorite Web site for this purpose is operated by Gibson Research Corporation (GRC).  It is my favorite because it runs over SSL and is set up so that it is not cached or indexed by search engines, to better guarantee security.  The GRC site generates 63 character long hexadecimal strings, alphanumeric strings and printable ASCII strings, not the numerical strings provided by other random and pseudo-random number generator sites.  Using such a site, you can generate keys or seed values for key generators.  You can combine multiple results from these Web sites to generate longer key values.
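If you have a reasonably modern language runtime you do not need a Web site at all.  A minimal sketch using Python’s secrets module, which draws from the operating system’s cryptographically secure random source, generating full-strength key material:

```python
import secrets

# A full-strength 256-bit key, straight from the operating system's secure random source.
key = secrets.token_bytes(32)

# The same idea rendered as 64 hexadecimal characters, convenient for key-entry
# interfaces that accept hexadecimal values.
key_hex = secrets.token_hex(32)

print(len(key), key_hex)
```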

In addition, you can have multiple people individually go to the Web site, obtain a pseudo-random character string and then have each of them enter their character string into the system.  This is also known as split key knowledge as individuals only know their input to the final value of the key.  Under such an approach, the key generator system asks each key custodian to enter their value (called a ‘component’) separately and the system allows no key custodian to come into contact with any other custodian’s component value.  The key is then generated by combining the entered values in such a way that none of the individual inputs provides any information about the final key.  It is important to note that simply concatenating the input values to form the key does not provide this function, and therefore does not ensure split knowledge of the key value.
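One simple way to combine components so that no single custodian learns anything about the final key is a bitwise XOR of equal-length components.  The following is a sketch of the principle only, not a reviewed key-management implementation:

```python
import secrets

def combine_components(*components: bytes) -> bytes:
    """XOR equal-length key components.  Unlike concatenation, any single
    component reveals nothing about the resulting key."""
    key = bytes(len(components[0]))
    for component in components:
        assert len(component) == len(key), "components must be the same length"
        key = bytes(a ^ b for a, b in zip(key, component))
    return key

# Each custodian independently generates and enters a 32-byte component.
custodian_a = secrets.token_bytes(32)
custodian_b = secrets.token_bytes(32)
custodian_c = secrets.token_bytes(32)

final_key = combine_components(custodian_a, custodian_b, custodian_c)
print(len(final_key))   # 32 bytes: a 256-bit key that no single custodian knows
```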

Just because you have encrypted your data does not mean your job is over.  Depending on how your encryption solution is implemented, you may be required to protect your encryption keys as well as periodically change those keys.  Encryption key protection can be as simple as storing the key components on separate pieces of paper in separate, sealed envelopes or as high tech as storing them on separate encrypted USB thumb drives.  Each of these would then be stored in separate safes.

You can also store encryption keys on a server not involved in storing the encrypted data.  This should not be any ordinary server, as it needs to be securely configured and have very limited access.  Using this approach is where those key encryption keys (KEK) come into play.  The way this works is that each custodian generates a KEK and encrypts their component with the KEK.  Those encrypted components can then be placed in an encrypted folder or zip file for which computer operations holds the encryption key.  This is where you tend to see PGP used for encryption, as multiple decryption keys can be used so that, in an emergency, operations can decrypt the archive and then the key custodians or their backups can decrypt their key components.
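Purely to illustrate the wrapping step described above, here is a sketch that uses AES-GCM from the cryptography package rather than PGP, with the component and KEK generated in place; a real implementation would follow the organization’s approved key-management procedures.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The custodian's key component and their personal key encryption key (KEK).
component = os.urandom(32)
kek = AESGCM.generate_key(bit_length=256)

# Wrap the component under the KEK; only the nonce and ciphertext are stored.
nonce = os.urandom(12)
wrapped = nonce + AESGCM(kek).encrypt(nonce, component, None)

# Later, the custodian (or a backup custodian holding the KEK) unwraps it.
unwrapped = AESGCM(kek).decrypt(wrapped[:12], wrapped[12:], None)
assert unwrapped == component
```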

Finally, key changes are where a lot of organizations run into issues.  This is because key changes can require that the information be decrypted using the old key and then encrypted with the new key.  That decrypt/encrypt process can take days, weeks or even years depending on the volume of data involved.  And depending on the time involved and how the decrypt/encrypt process is implemented, cardholder data can potentially be decrypted or exposed because of a compromised key for a long period of time.
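The mechanics are easy to state even if the run time is not.  A sketch of rotating a data encryption key over a set of records, where the dictionary stands in for a real database table:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt(key: bytes, blob: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

old_key = AESGCM.generate_key(bit_length=256)
new_key = AESGCM.generate_key(bit_length=256)

# Stand-in for a table of encrypted values; in practice this loop can run for
# days or weeks, which is where the exposure window comes from.
records = {1: encrypt(old_key, b"4111111111111111")}

for record_id, blob in records.items():
    records[record_id] = encrypt(new_key, decrypt(old_key, blob))
```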

The bottom line is that organizations can find out that key changes are not really feasible or introduce more risk than they are willing to accept.  As a result, protection of the encryption keys takes on even more importance because key changes are not feasible.  This is another reason why sales of key management appliances are on the rise.

That is encryption in a nutshell, a sort of “CliffsNotes” for the non-geeky out there.  In future posts I intend to go into PKI and other nuances to encryption and how to address the various PCI requirements in requirements 3 and 4.  For now, I wanted to get a basic educational foundation out there for people to build on and to remove that glassy eyed look that can occur when the topic of encryption comes up.

22 Dec 11

Google Wallet

Good news for anyone considering using Google Wallet.  Google Wallet encrypts the PAN and only stores the last four digits of the PAN in clear text, according to a forensic assessment conducted by ViaForensics.  The other piece of good news from ViaForensics’ examination was that drive-by attacks against Wi-Fi or near field communications (NFC) to intercept Google Wallet communications appear to fail.

Based on ViaForensics’ analysis, it appears that Google Wallet would likely comply with the PA-DSS.  The full PAN is encrypted and communication of the PAN via Wi-Fi or NFC is secured.  Granted, there are a lot of other PA-DSS requirements that we do not have a window into that may or may not be met.  But on the whole, I would have to believe that Google Wallet could be PA-DSS certified.  So, why is Google Wallet not PA-DSS certified?

First and foremost, in the eyes of the PCI SSC and the card brands, Google Wallet and similar applications store pre-authorization data and are just an electronic representation of someone’s traditional wallet.  The PCI SSC does not certify traditional wallets, so why would it certify electronic wallets?  As a result, the PA-DSS does not apply.  Should Google and other vendors of electronic wallets ensure the security of cardholder data?  No doubt about it.

But a more important reason that the PCI SSC is backing away from certification is related to the other findings in ViaForensics’ report.  Their analysis also uncovered some not so good news in that Google Wallet stores a lot of personally identifiable information (PII) unencrypted.  This PII includes such information as cardholder name, expiration date, credit limit and account balance.  I think most people would now start to understand why the PCI SSC backed away from certifying such applications.

The PCI SSC does not want to be on the hook for the unsecured PII.  If the PCI SSC were to certify Google Wallet as PA-DSS compliant, I am sure their lawyers informed them that such a certification would drag them into lawsuits involving the exposure of the unsecured PII even though the PA-DSS does not cover PII outside of the PAN.  Their lawyers probably advised them that a PA-DSS certification would likely imply to users of these electronic wallet applications that their PII was included in the PA-DSS certification.  As a result, the PCI SSC and card brands would likely have to launch a massive and costly educational campaign to explain to the public that the PA-DSS certification was only related to protection of a cardholder’s PAN and nothing else.  And even with such a campaign, the PCI SSC would likely still get dragged into lawsuits over peoples’ PII being exposed.

And the likelihood of such lawsuits is very high.  Smartphones are regularly lost and the security protecting them is almost non-existent, if security is even enabled.  A hacker can easily take a lost smartphone and obtain enough information to perform identity theft.  Hackers could even decrypt the PAN given the high likelihood that the PIN to decrypt the PAN could be derived from other information on the smartphone.  The nightmare scenario would be development of malware delivered through the smartphone’s application store that harvests the PII.

At the end of the day, there is just too much risk involved in certifying such applications because there is just no way to manage the risk.  So those of you thinking the PCI SSC should certify these applications need to rethink your position.

FYI  This is my 200th post.  I never guessed that this blog would last this long and I want to take this time to thank all of you for keeping this going.

05 Sep 11

It Is Time To Address PCI Compliance Reporting

It is QSA quality assurance assessment season at work.  I found out through our QSAC key contact person that we are being assessed again by the PCI SSC to see if our Reports On Compliance (ROCs) are written correctly.  This is a rather timely topic given the recent news that the PCI SSC revoked the QSAC and PA-QSAC status of an organization.

If the PCI compliance program has a flaw, this is the spot.  In the immortal words of Billy Crystal from his Saturday Night Live skit ‘Fernando’s Hideaway’, “It is better to look good than to feel good.”  And that is exactly what the Scorecard, now known as the Reporting Instructions, basically promotes.  I have written about this topic before, but it is time to remind people of how ridiculous this part of the PCI compliance process is.

To any QSAs that have been through the QA process, it all comes down to having used the correct language in responding to the requirements of the ROC, rather than whether or not you actually assessed the right things.  And to add insult to injury, the PCI SSC advises QSACs to develop a template for the ROC with all the correct language written and proofed to ensure that ROCs are written to the standard the PCI SSC requires.  Technically, this allows a QSA to just fill in the blanks so that the ROC can be correctly filled out.  

Ironically, on August 3, 2011, this may be exactly what happened to Chief Security Officers (CSO) and why they were stripped of their QSAC and PA-QSAC statuses.  CSO may have had the greatest templates for Reports On Compliance (ROC) and Reports On Validation (ROV), but without the supporting documentation, they could have been just filling in the blanks with the right type of information without actually ensuring that the information supported the conclusions of the report.  While the FAQ issued by the PCI SSC does not explicitly state the reason for CSO’s QSAC and PA-QSAC status revocation, it does imply that this was likely the case when it says, “CSO’s status as a QSA and PA-QSA was revoked only after careful review of reports and evidence submitted as part of the quality assurance program …”

It is not like the PCI SSC cannot determine this fact; it is just that they likely do not have the resources to go through a proper assessment of a QSAC or PA-QSAC.  We have been repeatedly told over the years that the whole reason all of that verbiage is required in the ROCs and ROVs was that the PCI SSC and the acquiring banks only had that language to give them an idea of what was performed, how it was performed and what the results were.  However, the PCI SSC has had the right to review work papers as well as the ROCs and ROVs for over two years now.  And what, exactly, the acquiring banks glean from the verbiage in the ROCs is debatable, as I rarely ever hear back from any institution with questions.  As a result, in my humble opinion, there is no good reason for all of the verbiage in the ROCs/ROVs.  As long as the PCI SSC has access to any project’s work papers as evidence, there is no reason to document all of the fieldwork in the ROC or ROV.  And to take things to their logical conclusion, unless there are compliance issues for a particular requirement, is there really a need for acquiring banks to get anything more than the AOC?

In the past when I have brought this up, it has been rebuffed by the PCI SSC, card brands and processors because they point out that the ROC and ROV are the only pieces of documentation that proves the QSA or PA-QSA did their job.  Really?  Telling your assessors to have a template and fill in the blanks is better?  Seriously?  This all comes down to an ability to trust the assessors are doing their job.  And if you cannot trust your assessors, then what is the point?  Coupling the QA program with an independent assessment of a QSAC’s/PA-QSAC’s work papers should be more than adequate to determine if the evidence exists and the appropriate work is being performed.

Reviewing work papers is a tough process.  In the public accounting world, we have internal and external reviews of our work.  Internal reviews are typically referred to as inter-office inspections as senior personnel from one office examine another office’s work papers for a sample of engagements to confirm that the work papers support our conclusions and opinions.  External reviews are conducted in a similar fashion, but by another accounting firm.  Inter-office reviews can occur as often as necessary.  External reviews typically occur every three to five years.  While all of this can appear a bit self serving, I can tell you from going through numerous inter-office and external inspections that they are anything but easy and typically bring out a number of areas that require improvement or changes in procedures.  I would highly recommend to the PCI SSC that they consider the self assessment and independent assessment approach for QSACs and PA-QSACs to supplement the existing PCI SSC QA process.

There would be all sorts of winners if we brought sanity to the ROC and ROV.  The first would be the organizations being assessed, as they would likely see lower costs for their assessments.  I believe this because, in my limited analysis of engagement costs, 30% to 50% of the cost of an assessment seems to be attributable to writing the report to meet the requirements documented in the Reporting Instructions.  QSAs would be able to create ROCs and ROVs much faster, as the only items that would require detailed documentation would be those that are Not In Place.  QSAs would also win because they would not have to put forth an inordinate amount of effort generating 200+ page tomes.  Acquiring banks and processors would win because they would not have to read through those 200+ pages to figure out if there are issues and where they exist.

I intend to bring this topic up again at the PCI Community Meeting in September.  Hopefully we can fix this problem and bring some rationality to the PCI compliance reporting process.



