Archive for the 'Requirement 4 – Encrypt transmission of cardholder data' Category



14 Nov 15

Small And Mid-Sized Businesses

At this year’s PCI Community Meeting, the push was to address the security issues faced by small and mid-sized businesses, otherwise referred to as SMBs. However, in my opinion, the approaches being suggested are still too complex. Great security results from simplicity, not complexity. As a result, I propose the following approach for SMBs, because SMB executives typically have little time to fully educate themselves in information security, let alone PCI. And while I am of the opinion that executives should have such knowledge, it is just not happening.

There Are No “Silver Bullet” Solutions

First and foremost, there are no “silver bullet” solutions that will entirely remove your organization from PCI scope. Any vendor telling you that their solution removes your organization from PCI scope is lying to you. If you hear such a statement, the vendor does not know what they are talking about and their statements regarding PCI should no longer be trusted. The bottom line is that, if your organization accepts credit/debit cards as payment for goods or services, the organization will always have some PCI scope. The least amount of scope an organization can achieve is complying with the requirements listed in SAQ A. There is nothing less. Anyone telling you otherwise does not know what they are talking about.

DO NOT STORE CARDHOLDER DATA (CHD)

This is probably the biggest single thing an SMB can do. In this day and age, there is no reason that any organization needs to retain CHD. Period. The most common business justification is that the organization does recurring transactions and that is the reason to retain CHD. Processors have a solution for that situation and many others. So I say it again. There is no valid business reason for any organization to retain CHD. None. Nada. Zip.

The first question out of an SMB executive’s mouth to a payment solution vendor should be, “Does your solution store cardholder or sensitive authentication data?” If the answer is anything other than an immediate and definitive “NO”, the meeting or telephone call is over, done, complete. There is nothing more to discuss. SMBs must stop being an easy target for attacks. The easiest way to do that is not having the CHD in the first place.

The second question that a payment vendor should be asked is, “How does your solution minimize my organization’s PCI scope?” If the vendor cannot provide you with a whitepaper on this subject, run away. If the documentation provided by the vendor leaves you with more questions than answers for PCI compliance, you also need to run away. In all likelihood, if this is what you encounter, the vendor’s PCI compliance is questionable, complex or requires too much effort on your part to be PCI compliant. This question should result in a one to three page whitepaper on PCI and how the vendor’s solution minimizes your organization’s scope.

So what solutions reduce scope to the minimum?

If you are a traditional brick and mortar retailer, you want end-to-end encryption (E2EE) from the card terminal, also known as the point of interaction (POI), to the transaction processor. PCI has a validation program called point-to-point encryption (P2PE). P2PE solutions are independently validated to ensure that they are secure. Solutions such as Shift4’s Dollars on the Net, First Data’s TransArmor and Verifone’s VeriShield are E2EE solutions that could meet the P2PE standard, but for various reasons the providers chose not to validate them to that standard. The key capability of any such solution is that it encrypts the CHD/SAD immediately when it is read from the card, and none of your organization’s technology can decrypt the information and therefore read it.

If your organization does eCommerce, then you want to use a redirect or iFrame to process transactions in order to reduce PCI scope. The best example of a redirect is when a merchant uses PayPal for processing payments. The merchant’s Web site has a PayPal button that sends the customer to PayPal, which then processes the customer’s payment transaction. At no time does the sensitive authentication data (SAD) encounter the merchant’s Web site. One of the concerns from merchants about redirects is the myth that customers abandon their shopping carts because they are redirected to a different site for payment. While this was true in the early days of eCommerce, with the increased use of PayPal and similar payment services, customers seem to have gotten over that concern and abandoned carts are no longer an issue. But if this is still a concern, use it as a teaching moment and educate your customer base that you do the redirect to ensure the security of their SAD.

An iFrame is essentially a Web page within a Web page. But the key thing from a PCI compliance perspective is that the iFrame is produced and managed by a third party, not the merchant. An iFrame can be a Web page, but more often than not it is a series of fields that gather the SAD for conducting a payment transaction. As with the redirect, the SAD never comes into contact with the merchant’s Web site.

Both of these solutions take your organization’s Web site out of scope so you do not need external and internal vulnerability scans and penetration tests. However, just because your Web site does not have to go through the rigors of PCI compliance, you still need to ensure its security. See my post on SAQ A and SAQ A-EP for a more detailed discussion on this topic.

Tokenization

Tokenization is the act of replacing the primary account number (PAN) with an encrypted or randomly generated surrogate value (a token) so that, when it is returned to the merchant for storage, it has no value to anyone if it is disclosed. Tokenization can occur at the time a card is swiped or dipped at the terminal, or it can be done by the transaction processor at the back end of the transaction. Regardless of where the tokenization occurs, paired with E2EE or P2PE, it further minimizes PCI scope.

If your organization needs to perform recurring transactions such as with subscriptions or automatic reorders, tokens can be generated by the processor so that they can be used just like a PAN. While a token is not a PAN, in situations where they can be reused for future transactions, it is incumbent upon the merchant to protect access to the token so that it cannot be sent to the processor for fraudulent charges.
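
To make the idea concrete, here is a minimal sketch, from the merchant’s point of view, of storing only a processor-issued token and reusing it for recurring charges. The FakeProcessor class and its charge() call are made-up stand-ins for a real processor’s SDK, not any actual API.

    # Hypothetical illustration of processor-side tokenization.  The
    # FakeProcessor class stands in for a real processor's SDK.
    import secrets

    class FakeProcessor:
        def __init__(self):
            self._vault = {}  # token -> PAN, held only by the processor

        def charge(self, amount, pan=None, token=None):
            if pan is not None:                 # first transaction sends the PAN
                token = secrets.token_hex(8)    # processor issues a surrogate value
                self._vault[token] = pan
            # ... the processor authorizes the charge against the vaulted PAN ...
            return {"approved": True, "token": token}

    processor = FakeProcessor()

    # Merchant side: the PAN is sent once; only the token is ever stored.
    first = processor.charge(amount=19.95, pan="4111111111111111")
    stored_token = first["token"]               # this is what goes in the merchant database

    # A recurring charge months later references the token, never the PAN.
    renewal = processor.charge(amount=19.95, token=stored_token)
    print(renewal["approved"])

As the paragraph above notes, a reusable token still needs access controls around it, because anyone who can submit it to the processor can generate charges.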

And that is it. Not storing CHD, E2EE/P2PE and tokenization will reduce an organization’s PCI compliance footprint to the absolute minimum. It really is that simple. The work comes in finding the solutions that bring all of that to the table. However, any SMB that asks the right questions of its vendors can put together a solution that minimizes its scope and protects CHD/SAD as well as the big boys do.

31 Oct 15

SSL Is Not Going To Go Quietly

A lot of organizations are finding out that just turning off SSL is not an option. This is particularly true of merchants running eCommerce sites predominantly used by mobile customers or customers running older operating systems. To the surprise of a lot of IT people, it turns out that many older mobile browsers do not support TLS. And while most Western PC users have reasonably current browser software, the rest of the world does not, and turning off SSL will remove a significant portion of some merchants’ customer base. As a result, for some organizations going “cold turkey” on SSL is just not an option without suffering significant consequences.

But there is a larger problem with SSL lurking inside almost every data center. That is with appliances and data center management software that have SSL baked into them for their Web-based management interfaces. A lot of vendors availed themselves of OpenSSL and other open source SSL solutions to secure communications with their appliances and solutions. To remediate these solutions, an organization might be lucky enough to upgrade the firmware/software. Unfortunately, a lot of organizations are finding that replacement is the only option offered by vendors to address these situations.

The bottom line is that, because of these situations, SSL and early TLS will not be addressed by just disabling them and moving on. When the PCI SSC asked Qualified Security Assessor Companies and Participating Organizations what it would take to address the SSL/early TLS situation, they were told about these issues, and the Council therefore set a deadline of June 30, 2016 to provide time to address them.

While organizations have until June 30, 2016 to address SSL and early TLS, that does not mean an organization can just sit by and do nothing until that deadline. Here are some things your organization should be doing to address SSL and early TLS if you are unable to just turn it off.

  • Get a copy of NIST Special Publication 800-52 Revision 1 titled ‘Guidelines for the Selection, Configuration and Use of Transport Layer Security (TLS) Implementations’. This publication is the Bible for how to minimize and mitigate the risks of SSL and early TLS.
  • Identify all instances where SSL or TLS is used and the versions supported (see the sketch after this list for one way to probe which protocol versions a given host will accept). It is not just the instances that need to be remediated, but all instances. The reason is that TLS v1.3 is in draft specification and its release is likely just around the corner in 2016. That is why a complete inventory is needed, so that when TLS v1.3 is available you will know which remaining instances will potentially need to be updated, upgraded or possibly even replaced.
  • Implement TLS_FALLBACK_SCSV to minimize the chance of an SSL/TLS protocol downgrade. This option was developed to address the issue created by POODLE. However, be aware that only certain versions of browsers support this option, so it is not a perfect solution.
  • Monitor your external Web sites for SSL and early TLS usage. Track statistics of how many sessions are using SSL or early TLS so that you can determine usage of those protocols and therefore know the actual impact of any decision regarding those protocols. These statistics will also allow you to know when you might be able to pull the plug on SSL and early TLS with minimal impact.
  • Modify any external Web sites to present a message to anyone using SSL or early TLS to warn them that you will be no longer supporting SSL/early TLS as of whatever date your organization chooses to drop that support.
  • Where possible, configure the Web site to use SSL or early TLS only as the absolute last resort. Unfortunately, a lot of vendors modified their SSL implementations to not allow this sort of change, so do not be surprised if this is not an option.
  • Develop a migration plan for your remaining instances where SSL or early TLS are used. Contact vendors involved and document what their plans are for dropping SSL and early TLS.
  • Be prepared to create compensating controls for the SSL and early TLS instances that you will not be able to remediate by the deadline. Unfortunately, I have a sneaking suspicion that some vendors will miss the June 30, 2016 deadline, as will some merchants that are unable to turn off SSL by then. As a result, those organizations will have to put compensating controls in place to maintain PCI compliance. These compensating controls will likely be messy and complex, as enhanced monitoring will likely be the only control available.
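
For the inventory and monitoring steps above, here is a minimal sketch of probing a single host to see which protocol versions it will negotiate, using Python’s standard ssl module. Which versions the script can even offer depends on how the local OpenSSL library was built (SSLv3 is often compiled out entirely), so treat the output as a starting point, not a definitive answer; the host name is a placeholder.

    # Probe one host/port for the TLS versions it will accept.  Versions the
    # local OpenSSL build refuses to offer will simply show up as "rejected".
    import socket
    import ssl

    HOST, PORT = "www.example.com", 443   # placeholder: the system being inventoried

    CANDIDATES = {
        "TLSv1.0": ssl.TLSVersion.TLSv1,
        "TLSv1.1": ssl.TLSVersion.TLSv1_1,
        "TLSv1.2": ssl.TLSVersion.TLSv1_2,
    }

    for name, version in CANDIDATES.items():
        try:
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE          # we only care about the handshake
            ctx.minimum_version = version
            ctx.maximum_version = version
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=HOST):
                    print(f"{HOST}:{PORT} accepted {name}")
        except (ssl.SSLError, ValueError, OSError):
            print(f"{HOST}:{PORT} rejected {name}")
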
21 Feb 15

Incidental Contact

I have had a number of questions recently regarding how to deal with the occasional customer that sends cardholder data (CHD) or sensitive authentication data (SAD) to the merchant via email or instant messaging in blatant disregard to security.

Most people point to requirement 4.2 in the PCI DSS v3 and say that such communications are not allowed for PCI compliance. However, that is wrong. Requirement 4.2 states:

“Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, chat, etc.).”

The operative word is “send”.  Requirement 4.2 does not say a merchant or service provider cannot receive PANs by end-user messaging technologies, only that they cannot send them by those same messaging technologies.

The Council has always recognized that there were always going to be a small percentage of people that would ignore security and will send their CHD/SAD via any number of insecure methods all in the name of expediency or convenience.  As a result, the PCI DSS has been structured to allow for those occurrences, something a lot of QSAs refer to as “incidental contact”.  What is important to a QSA is how you handle incidental contact.

The first important point to make is that once CHD/SAD is received via an end-user messaging technology, the merchant or service provider cannot then forward the information on using email or similar technologies.  The merchant or service provider must break the chain of that communication as soon as possible.

Security purists will point to the fact that deleting such messages from their sources is not secure. In some cases a message could exist overnight and therefore end up on backup tapes of some technologies. While this is all true, we are not talking about a consistent flow of CHD/SAD, we are talking about an occasional occurrence. Organizations will have to accept the risk that their end-user messaging systems will have some CHD/SAD in them but that the amount is trivial because of how they deal with such occurrences. If your organization is not willing to accept this risk, then you will have to come up with an approach that allows you to stop such occurrences.

The other key point to make is that incidental contact does not necessarily bring the end-user messaging technology into scope for PCI compliance. In my opinion, what a merchant or service provider needs to prove to their QSA is that such occurrences are not condoned by the organization (i.e., by policy, such exchanges are discouraged), that employees are trained to handle such exchanges securely, and that the exchanges occur only occasionally. The term “occasionally” is the tough one and is up to the organization to define for the QSA. I have dealt with large organizations that could receive around 50 such messages a day on bad days, but the annual total of incidental contact was well below 1% of the total number of transactions. The rule of thumb I use is that as long as the volume of transactions received over end-user messaging never exceeds 1% of the total, I consider that incidental contact. I could see acceptable arguments for a 2% threshold based on the type of customers the organization has, but going higher than that would, in my opinion, be too great.
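
For what it is worth, the rule of thumb above is simple arithmetic. A quick back-of-the-envelope check might look like the following sketch, where both input figures are purely illustrative and the 1% threshold is a judgment call, not a number written into the PCI DSS.

    # Back-of-the-envelope "incidental contact" check.  Both inputs are
    # illustrative; the 1% threshold is a rule of thumb, not a PCI DSS value.
    emailed_chd_messages = 50 * 260        # e.g. ~50 bad messages per business day
    total_transactions = 5_000_000         # annual card transaction count

    ratio = emailed_chd_messages / total_transactions
    print(f"{ratio:.2%} of transactions arrived over end-user messaging")
    print("incidental contact" if ratio <= 0.01 else "more than incidental contact")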

With that stated, what is an organization to do with such messages?

Some organizations prefer to not act on any end-user messaging that contains CHD/SAD.  They prefer to record the sender’s communication account information, delete the message and then send a message back to the sender explaining that they cannot accept CHD/SAD through the communication method and tell the sender to use one of their approved methods for communicating CHD/SAD.

Other organizations are all about customer service and will reluctantly accept such communications. They will print out the communication and delete the original message. Once they have processed the transaction, they redact the CHD/SAD, take a copy of the redacted original and then securely destroy the original. I recommend redaction using a Sharpie marker or similar. The reason for retaining a copy rather than the redacted original is that, when the marked-up original is held up to a light, the redacted digits can still be read; a photocopy does not have that problem.

Some organizations will use the transaction confirmation process as an opportunity to remind their customer that the sending of CHD/SAD via the end-user messaging technology should be avoided in the future.

We live in an imperfect world where people are not necessarily as security conscious as the world sometimes demands.  As a result, merchants and service providers need to be flexible in how they approach situations where their customers communicate with them through insecure channels.  Hopefully I have given you some ideas as to how to approach these situations and deal with them in as secure a manner as possible.

18 Feb 15

Council Surveys QSAs On SSL

This message popped into my inbox late yesterday.

[Screenshot: 20150217-PCISSCemailMsg]

The survey in question contains the following questions.

[Screenshot: 20150217-PCISSCSurvey]

All of my clients have gotten rid of SSL on their public facing Web sites.

The dilemma we have is that while SSL is dead, it is baked into so many products and appliances.  My clients are therefore stuck with appliances and software products that have SSL hard coded into them.  As a result, they will be dependent on their vendors to convert to TLS.

That said, what is the risk of using SSL internally?  Not a good practice, but truthfully, what is the risk?

In my opinion, using SSL internally for the next 12 to 24 months would not be the end of the world as long as it does not become a significant attack vector.

It will be interesting to hear the results of this survey.

07 Feb 15

SSL Is Officially Declared Dead

On January 30, 2015, QSAs received the latest edition of the Council’s Assessor Newsletter.  Buried in that edition was the following statement.

“Notice: PCI DSS and PA-DSS v3.1 Revisions Coming

In order to address a few minor updates and clarifications and one impacting change, there will be a revision for PCI DSS and PA-DSS v3.0 in the very near future. The impacting change is related to several vulnerabilities in the SSL protocol. Because of this, no version of SSL meets PCI SSC’s definition of “strong cryptography,” and updates to the standards are needed to address this issue. (Highlighting emphasis added by the PCI Guru)

We are working with industry stakeholders to determine the impact and the best way to address the issue. While we do not have the final publication date, our goal is to keep you apprised of the progress and to provide you with advanced notification for these pending changes. We are also preparing several FAQs that will accompany release of the revised standards.

Should you have any questions, please contact your Program Manager.”

Because the announcement was titled about the coming v3.1 revisions to the PCI DSS and PA-DSS standards, I am sure a lot of QSAs missed this pronouncement.

Not that this should be a surprise to any QSA as the POODLE vulnerability effectively killed SSL.  The Council has now officially announced that SSL is no longer deemed to be strong cryptography.

Therefore, those of you still using SSL to secure transmissions containing cardholder data (CHD) need to stop that practice as soon as possible and convert to TLS or IPSec.

UPDATE: On February 13, 2015, the PCI SSC issued an update to their original announcement in the Assessor Newsletter.

08 Oct 14

Do Not Jump To Conclusions

A QSA apparently posed a question to the Council regarding the scope of wireless headsets used in a client’s call centers.  In this case, the headsets rely on DECT technology.  The response from the Council was as follows:

“Although DECT is not specifically referenced in PCI DSS v3, it is a digital wireless telephone technology and given the scenario you are describing, PCI DSS requirement 4.1 and 4.1.1 would apply.”

The resulting LinkedIn discussion centered on whether the DECT headsets are in scope, which, of course, they are. However, the implication of the discussion was that, if in scope, could the DECT headsets be considered PCI compliant? Let us walk through this issue and develop a position on whether DECT headsets are a risk and whether they can be considered PCI compliant.

For those of us that do not have the PCI DSS memorized, requirement 4.1 states:

“Use strong cryptography and security protocols (for example, SSL/TLS, IPSEC, SSH, etc.) to safeguard sensitive cardholder data during transmission over open, public networks, including the following:

– Only trusted keys and certificates are accepted.

– The protocol in use only supports secure versions or configurations.

– The encryption strength is appropriate for the encryption methodology in use.”

Requirement 4.1.1 states:

“Ensure wireless networks transmitting cardholder data or connected to the cardholder data environment, use industry best practices (for example, IEEE 802.11i) to implement strong encryption for authentication and transmission.”

For those of you not up on DECT, it does not rely on strong encryption as defined by NIST and other recognized sources.  The encryption used is 64-bit, almost as lame as DES.  But it gets worse; the protocol does not require the use of a secure authentication method to pair devices to their base station.  As a result, it is relatively easy to force authentication to a rogue base station.  To add to the threat, the theoretical transmission distance is 500m or around a third of a mile.  So it has the capability of transmitting fairly long distances.

Sounds like a PCI and general security train wreck does it not?

Now before we all go off and tell every one of our call center clients that DECT is no longer allowed, let us all take a big deep breath and look at this issue clearly.

The first question that should always be asked is what the real world likelihood of such an attack is.  In this case, would an attack on 20, 50, 100 or more DECT headsets make sense?  Probably not and here is why I believe that to be the case.

You would need as many rogue devices as actual headsets to surreptitiously pair with each individual headset in order to get the conversations.  This would require a large van with racks of notebooks in order to accomplish such an attack.  And that assumes that the transmission distance quoted in the standard can be relied upon.  However, based on the use of my own DECT phones at my home, I can tell you that my phones have issues 30’ away from my house, let alone a third of a mile away.

If that isn’t enough, the DECT cards required are no longer manufactured. If you are lucky, you may be able to get them on eBay from Europe for about €25 or $30 USD. I would take this as a good indication that DECT hacking never became a big thing. But it gets worse; the cards use the PCMCIA interface (superseded in 2003) and, according to the limited number of eBay sellers, do not work reliably for hacking DECT when using the requisite adapter cables to connect them to modern computers via USB. As a result, the hack would also require a large number of old notebooks to execute.

The final nail in this coffin is that the known software exploit, ‘deDECTed’, appears to have languished in development (most likely because of the situation with the PCMCIA cards) and was only included in one distribution of BackTrack, now Kali Linux.  You can still download it, but without the requisite hardware, you are pretty much at a standstill.

While all of the tools exist, is this threat realistic? Why would someone go through all of this effort when, in all likelihood, it would be a thousand times easier to hack the call recording system? Hacking the call recording system would skip all of the rigmarole of surreptitiously going after the headsets and go straight to searching the recordings.

In my opinion, while there is a threat, the risk of that threat occurring is low.  Based on this analysis, I would feel comfortable judging these DECT headsets as being PCI compliant and would provide this analysis in my work papers so that reviewers could understand my rationale.

However, this is me talking from my willingness to accept this risk.  Other people and organizations might not be quite so willing and may decide to not allow DECT headsets or phones.  That is their decision but it should be made with information and discussion such as was provided here and not in a vacuum as a “knee jerk” response.

By the way, this technique of capturing people’s conversations is much easier to do with Bluetooth and such tools exist in Kali Linux to accomplish that attack.  However, the same issue of one rogue device to one Bluetooth device still exists.  Good news there, Kali Linux is available for smartphones, so you only need a lot of smartphones to execute the attack.  That is mitigated by the fact that the distance for Bluetooth is only 30’ or 9m.  So as long as a call center enforces a policy of no personal or foreign technology on the call center floor, then any headsets should be safe.

The take away from this post is to think through the implications of the Council’s directives before you go off advising organizations that certain technologies are not PCI compliant.  While I agree with the Council’s answer to the question, it did not immediately mean that the technology was now verboten just because the technology’s basic characteristics appeared to make it non-compliant.  QSAs and organizations need to assess the threat, the risk of the threat occurring and then make a decision as to whether or not that threat is something to be managed or avoided.

24 Aug 14

P2PE Versus E2EE

I have been encountering a lot of organizations that are confused about the difference between the PCI SSC’s point-to-point encryption (P2PE) certified solutions and end-to-end encryption (E2EE).  This is understandable as even those in the PCI community are confused as well.

E2EE is the generic terminology used by the IT industry to describe any solution that encrypts communications from one endpoint to another endpoint.  Key management of the encryption can be done by any party that has an endpoint such as a merchant or a service provider.  Examples of E2EE include IPSec, SSL and TLS.

One of the most common E2EE approaches used by merchants is derived unique key per transaction (DUKPT), also known as “duck putt”. DUKPT is commonly used in the convenience store and gas station industries to encrypt sensitive authentication data (SAD) from the gas pump to the merchant or processor. DUKPT uses the 56-bit Data Encryption Standard (DES) or triple DES (3DES) algorithms. While 56-bit DES and 112-bit 3DES are no longer considered secure, DUKPT uses a unique key for every transaction, which means every transaction has to be individually broken to gain access to the data. While cloud computing could be leveraged to do this rapidly, it would be too costly an effort for the data retrieved. As a result, DUKPT is still considered a secure method of encryption.
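
To illustrate why unique keys per transaction matter, here is a small sketch. To be clear, this is not the ANSI X9.24 DUKPT key-derivation algorithm and it does not use DES/3DES; it simply shows the principle that each transaction gets its own key derived from a base key, so breaking one transaction’s key gains an attacker nothing for the next one. The base key and key serial number values are made up.

    # Illustration of the unique-key-per-transaction principle only.
    # This is NOT the ANSI X9.24 DUKPT derivation; it uses HMAC-SHA256
    # purely to show that each transaction counter yields a different key.
    import hashlib
    import hmac

    BASE_DERIVATION_KEY = bytes.fromhex("0123456789ABCDEFFEDCBA9876543210")  # made up
    KEY_SERIAL_NUMBER = "FFFF9876543210E0"                                   # made up

    def transaction_key(counter: int) -> bytes:
        """Derive a fresh key for one transaction from the base key and counter."""
        message = f"{KEY_SERIAL_NUMBER}:{counter:08d}".encode()
        return hmac.new(BASE_DERIVATION_KEY, message, hashlib.sha256).digest()

    # Every swipe or dip gets its own key; none of them reveal the base key.
    for counter in range(1, 4):
        print(counter, transaction_key(counter).hex()[:16], "...")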

P2PE is a subset of E2EE. The major difference between the two is that P2PE does not allow the merchant to be a manager of the encryption keys. Under the P2PE standard, only the transaction processor or another third party is allowed to perform key management; the merchant never is. As a result, DUKPT can be used by both P2PE and E2EE solutions, but under P2PE the key management must be done by a third party, not the merchant.

While third-party key management is typically acceptable for small merchants, it does not work for merchants that switch their own transactions to various processors, as mid-sized and large merchants do. That does not mean that E2EE solutions are not acceptable for reducing PCI scope. As with PA-DSS certified applications, P2PE certified solutions can be accepted by a QSA as long as they are implemented according to the P2PE implementation guide, which can reduce the amount of testing a QSA is required to perform. In my experience, the difference in testing effort between P2PE and E2EE is typically negligible, so any so-called savings are limited at best.

The huge downside to P2PE for merchants is that once you decide on a given P2PE solution, you are pretty much stuck with it and the processor providing it.  That is because most processors offering P2PE are only offering one P2PE solution.  As a result, if a better deal comes along for processing your transactions, you will likely have to replace your terminals and possibly other equipment to switch to the new processor.  For some merchants, that could be a costly proposition and make any switch not worth the effort.

So if your organization is looking at P2PE versus E2EE, I would not necessarily give an advantage to P2PE over E2EE.  Just because an E2EE solution is not P2PE certified does not mean it is not secure.  It only means that the vendor did not believe that the P2PE certification was worth the effort.

18 May 14

Adventures In Finding Cardholder Data

On page 10 of the PCI DSS v3 under the heading of ‘Scope of PCI DSS Requirements’, second paragraph, is the following sentence.

 “At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

Under the first bullet after that paragraph is the following.

 “The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined CDE.”

In the past, organizations would rely on their database and file schemas along with their data flow diagrams and consider the job done. However, the Council has come back and clarified its expectations for the search for cardholder data (CHD), primarily the primary account number (PAN). The Council has stated that this search needs to be more extensive to prove that PANs have not ended up on systems where they are not expected.

Data Loss Prevention

To deal with requirement 4.2, a lot of organizations invested in data loss prevention (DLP) solutions.  As a result, organizations with DLP have turned those DLP solutions loose on their servers to find PANs and to confirm that PANs do not exist outside of their cardholder data environment (CDE).

Organizations that do this quickly find out three things: (1) the scope of their search is too small, (2) their DLP solution is not capable of looking into databases, and (3) their DLP tools are not as good at finding PANs at rest as they are at finding PANs in motion, such as in an email message.

On the scope side of the equation, it’s not just servers that are in scope for this PAN search, it’s every system on the network including infrastructure.  However, for most infrastructure systems such as firewalls, routers and switches it is a simple task to rule them out for storing PANs.  Where things can go awry is with load balancers, proxies and Web application firewalls (WAF) which can end up with PANs inadvertently stored in memory and/or disk due to how they operate.

Then there is the scanning of every server and PC on the network.  For large organizations, the thought of scanning every server and PC for PANs can seem daunting.  However, the Council does not specify that the identification of CHD needs to be done all at once, so such scanning can be spread out.  The only time constraint is that this scanning must be completed before the organization’s PCI assessment starts.

The second issue that organizations encounter with DLP is that their DLP has no ability to look into their databases.  Most DLP solutions are fine when it comes to flat files such as text, Word, PDF and Excel files, but the majority of DLP solutions have no ability to look into databases and their associated tables.

Some DLP solutions have add-on modules for database scanning, but that typically requires a license for each database instance to be scanned and can thus quickly become cost prohibitive for some organizations. DLP solutions that scan databases typically cover the more common databases such as Oracle, SQL Server and MySQL. But legacy enterprise databases such as DB2, Informix, Sybase and even Oracle in a mainframe environment are only supported by a limited number of DLP solutions.

Another area where DLP solutions can have issues is with images.  Most DLP solutions have no optical character recognition (OCR) capability to seek out PANs in images such as images of documents from scanners and facsimile machines.  For those DLP solutions that can perform OCR, the OCR process slows the scanning process down considerably and the false positive rate can be huge particularly when it comes to facsimile documents or images of poor quality.

Finally, there is the overall issue of identifying PANs at rest. It has been my experience that using DLP solutions for identifying PANs at rest is haphazard at best. I believe the reason is that most DLP solutions rely on regular expressions (RegEx), often the same widely shared patterns, to find the PANs. As a result, they all suffer from the same shortcomings of RegEx and their false positive rates end up being very similar.

The biggest reason for the false positive rate is the fact that most of these solutions using RegEx do not conduct a Luhn check to confirm that the number found is likely to be a PAN.  That said, I have added a Luhn check to some of the open source solutions and it has amazed me how many 15 and 16 digit combinations can pass the Luhn check and yet not be a PAN based on further investigation.  As a result, having a Luhn check to confirm a number as a potential PAN reduces false positives, but not as significantly as one might expect.

The next biggest reason RegEx has a high false positive rate is that RegEx looks at data both at a binary level and character level.  As a result, I have seen PDFs flagged as containing PANs.  I have also seen images that supposedly contained PANs when I knew that the tool being used had no OCR capability.

I have tried numerous approaches to reduce the level of false positive results, but have not seen significant reductions from varying the RegEx expressions.  That said, I have found that the best results are obtained using separate expressions for each card brand’s account range versus a single, all-encompassing expression.
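
Putting those two observations together, a bare-bones PAN search along these lines might look like the sketch below: one regular expression per card brand plus a Luhn check to weed out many, though by no means all, false positives. The brand patterns are simplified for illustration; real account ranges are broader and change over time.

    # Minimal PAN search: per-brand regular expressions plus a Luhn check.
    # The patterns are deliberately simplified and will miss some ranges.
    import re

    BRAND_PATTERNS = {
        "Visa":       re.compile(r"\b4\d{15}\b"),
        "MasterCard": re.compile(r"\b5[1-5]\d{14}\b"),
        "Amex":       re.compile(r"\b3[47]\d{13}\b"),
        "Discover":   re.compile(r"\b6(?:011|5\d{2})\d{12}\b"),
    }

    def luhn_valid(number: str) -> bool:
        """Return True if the digit string passes the Luhn check."""
        total = 0
        for i, ch in enumerate(reversed(number)):
            d = int(ch)
            if i % 2 == 1:          # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_candidate_pans(text: str):
        """Yield (brand, candidate PAN) pairs that match a pattern and pass Luhn."""
        for brand, pattern in BRAND_PATTERNS.items():
            for match in pattern.finditer(text):
                if luhn_valid(match.group()):
                    yield brand, match.group()

    sample = "notes: card 4111111111111111 exp 12/18, ref 4111111111111112"
    for brand, pan in find_candidate_pans(sample):
        print(brand, pan)    # only the first number passes the Luhn check

Even with the Luhn check, anything a sketch like this flags still needs human review, which is exactly the false positive problem described above.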

Simple Solutions

I wrote a post a while back regarding this scoping issue when it was introduced in v2.  It documents all of the open source solutions available such as ccsrch, Find SSNs, SENF and Spider.  All of these solutions run best when run locally on the system in question.  For small environments, this is not an issue.  However, for large organizations, having to have each user run the solution and report the results is not an option.

In addition, the false positive rates from these solutions can also be high. Then there is the issue of finding PANs in local databases such as SQLite, Access or MySQL. None of these simple solutions is equipped to find PANs in a database. As a result, PANs could be on these systems and you will not know it using these tools.

The bottom line is that while these techniques are better than doing nothing, they are not that much better.  PANs could be on systems and may not be identified depending on the tool or tools used.  And that is the reason for this post, so that everyone understands the limitations of these tools and the fact that they are not going to give definitive results.

Specialized Tools

There are a number of vendors that have developed tools specifically to find PANs. While these tools are typically cheaper than a full DLP solution and some of them provide for the scanning of databases, it has been my experience that these tools are no better or worse than OpenDLP, the open source DLP solution.

Then there are the very specialized tools that were developed to convert data from flat files and older databases to new databases or other formats. Many of these vendors have added modules to these tools in the form of proprietary methods to identify all sorts of sensitive data, including PANs. While this proprietary approach significantly reduces false positives, it unfortunately makes these tools very expensive, starting at $500K and going ever higher based on the size of the environment in which they will run. As a result, organizations looking at these tools will need more than just their need for PAN search capability to justify the cost.

The bottom line is that searching for PANs is not as easy as the solution vendors portray.  And even with extensive tuning of such solutions, the false positive rate is likely going to make the investigation into your search results very time consuming.  If you want to significantly reduce your false positive rate, then you should expect to spend a significant amount of money to achieve that goal.

Happy hunting.

26 Apr 14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI informed, sales people pushing redirected ecommerce solutions by claiming that it put the merchant entirely out of scope.  This was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The requirement 1 requirements are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101 and I would doubt that any merchant would not have a firewall protecting all of their Internet facing servers.  Routers have always been optional and if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.

Requirement 4 is provided to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.

Requirement 10 is important for any forensic investigation should the redirect be manipulated, so that it can be determined when that event occurred and therefore what the scope of any compromise might be.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue that point. Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved. If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
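
To illustrate the intent of 11.5 in this context, here is a minimal sketch that baselines a hash of the file containing the redirect and alerts when it changes. A real file integrity monitoring product does far more than this (scheduling, alerting, protecting the baseline itself from tampering); the file paths below are placeholders, not a recommendation.

    # Minimal illustration of critical file monitoring for a redirect page.
    # Paths are placeholders; real FIM tools do much more than this.
    import hashlib
    import json
    from pathlib import Path

    WATCHED = Path("/var/www/html/checkout/redirect.php")   # hypothetical redirect page
    BASELINE = Path("/var/lib/fim/baseline.json")           # hypothetical baseline store

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def check() -> None:
        current = sha256_of(WATCHED)
        if BASELINE.exists():
            baseline = json.loads(BASELINE.read_text())
            if baseline.get(str(WATCHED)) == current:
                print("redirect page matches the baseline")
            else:
                print(f"ALERT: {WATCHED} has changed since the baseline was recorded")
        else:
            BASELINE.parent.mkdir(parents=True, exist_ok=True)
            BASELINE.write_text(json.dumps({str(WATCHED): current}))
            print("baseline recorded")

    if __name__ == "__main__":
        check()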

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

10 Jan 14

The Economics Of EMV

There are a lot of people out there that have apparently taken big swigs of the EMV Kool Aid and think that merchants and banks in the United States are all idiots for not believing in EMV.  Well folks, here is EMV by the numbers.  Unfortunately, the best set of complete numbers I could get are from 2009, but I know that the fraud percentages have not radically changed since 2009.

As this example will illustrate, EMV in the US is a non-starter, not because we do not like EMV, but because it makes no financial sense. While I am using Target as the example, these numbers are pretty much what most retailers (large or small) are looking at as they evaluate going to EMV.

  • Target had around $65B USD in revenue for 2009 as reported in their Annual Report.
  • For 2009, card fraud amounted to 0.11% according to a US Federal Reserve Bank of Kansas City report on EMV adoption. For comparison, card fraud in the UK (the best in Europe and the best among EMV countries) is 0.08%, a 0.03% improvement over the US.
  • We know that not all of Target’s revenue comes from card transactions, but I will estimate that 70% of revenue was on cards (around $45.5B USD). That means Target had around $50M USD in losses related to card fraud for the year at 0.11%. Therefore, assuming a 0.03% improvement in fraud due to implementing EMV, Target would save around $13.5M USD a year.
  • Estimating between $50M and $100M USD to replace the POS (possibly), terminals and software to support true EMV (for comparison, Target is already spending an estimated $25M to $30M just on new terminals), Target gets a payback on that $13.5M USD annual savings in around four to seven years (the arithmetic is sketched after this list).

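Here is the arithmetic behind the bullets above, using the same 2009 figures, so the payback range can be checked directly.

    # Payback math from the figures above (2009 numbers, all USD).
    revenue = 65e9                  # Target annual revenue
    card_share = 0.70               # assumed share of revenue paid by card
    us_fraud_rate = 0.0011          # 0.11% card fraud (US)
    uk_fraud_rate = 0.0008          # 0.08% card fraud (UK, EMV)

    card_revenue = revenue * card_share                                  # ~$45.5B
    annual_fraud_loss = card_revenue * us_fraud_rate                     # ~$50M
    annual_emv_savings = card_revenue * (us_fraud_rate - uk_fraud_rate)  # ~$13.65M

    for emv_cost in (50e6, 100e6):  # estimated cost of the EMV conversion
        years = emv_cost / annual_emv_savings
        print(f"${emv_cost/1e6:.0f}M conversion -> payback in {years:.1f} years")
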
I can tell you from experience that, if a merchant cannot get a three year or less payback, they will not even consider the investment. A two year or less payback is actually preferred and the only sure way for any project to get management’s consideration and approval.

But while the financials for EMV do not add up, there are also other factors that are causing retailers to question a conversion to EMV.

One of the largest is the fact that EMV does nothing to stem the fraud losses from card not present (CNP) transactions. Since most retailers are viewing eCommerce as their next new retail opportunity, the exponentially increasing losses due to CNP fraud do not improve the case for converting to EMV. And with that larger focus on eCommerce and on maintaining brick and mortar margins, the reluctance to invest significantly in changes to brick and mortar operations also holds retailers back from transitioning to EMV.

Another consideration is that a lot of retailers just upgraded their terminals a few years back to comply with the PCI PTS requirement. Most retailers like to get at least seven to ten years out of their technology investments. Had Visa and MasterCard played their cards right and coordinated their EMV push with the PTS changes, the US likely would have converted to EMV.

Finally, there are concerns about EMV even surviving given the advent of new payment technologies such as eWallets as well as Bitcoin and other new forms of payments. As a result, a lot of retailers are sitting on the sidelines while technology and payment methods sort themselves out before considering making any investments in new payment process capabilities.

Those, my friends, are the cold, hard facts of why EMV is currently dead on arrival in the US.




