

P2PE Versus E2EE

I have been encountering a lot of organizations that are confused about the difference between the PCI SSC’s point-to-point encryption (P2PE) certified solutions and end-to-end encryption (E2EE).  This is understandable, as even those in the PCI community are confused.

E2EE is the generic terminology used by the IT industry to describe any solution that encrypts communications from one endpoint to another endpoint.  Key management of the encryption can be done by any party that has an endpoint such as a merchant or a service provider.  Examples of E2EE include IPSec, SSL and TLS.
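As a concrete example, configuring one endpoint of a TLS channel in Python takes only a few lines.  This is a minimal sketch: it only builds the client-side context (no connection is made), and pinning TLS 1.2 as a floor is a hardening choice reflecting that SSL and early TLS are now considered insecure.

```python
import ssl

# Build a client-side TLS context with certificate verification enabled.
ctx = ssl.create_default_context()

# SSL and early TLS are no longer considered secure; require TLS 1.2 or later.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() verifies the server certificate and hostname,
# authenticating the far endpoint before any application data is encrypted.
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
print(ctx.check_hostname)                     # True
```

The point of the sketch is that the security of the channel hinges entirely on what happens at the two endpoints, which is exactly where the E2EE versus P2PE distinction matters.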

One of the most common E2EE solutions used by merchants is derived unique key per transaction (DUKPT), pronounced “duck putt”.  DUKPT is commonly used in the convenience store and gas station industries to encrypt sensitive authentication data (SAD) from the gas pump to the merchant or processor.  DUKPT uses the 56-bit data encryption standard (DES) or triple DES (3DES) algorithms.  While 56-bit DES and 112-bit 3DES are no longer considered secure on their own, DUKPT uses a unique key for every transaction, so every transaction has to be individually broken to gain access to the data.  Cloud computing could be leveraged to do this rapidly, but the effort would cost more than the data retrieved is worth.  As a result, DUKPT is still considered a secure method of encryption.
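The “unique key per transaction” idea can be sketched as follows.  This is a simplified illustration only: the real DUKPT scheme is specified in ANSI X9.24 and derives keys using TDES operations on a base derivation key (BDK) and a key serial number (KSN), not the HMAC shown here, and all values below are made up.

```python
import hashlib
import hmac

def derive_transaction_key(bdk: bytes, ksn_base: bytes, counter: int) -> bytes:
    """Derive a unique per-transaction key (simplified DUKPT-style sketch).

    The KSN combines a fixed device identifier with a transaction counter,
    so every transaction yields a different key from the same BDK.
    """
    ksn = ksn_base + counter.to_bytes(3, "big")  # key serial number with counter
    return hmac.new(bdk, ksn, hashlib.sha256).digest()[:16]

# Made-up example values for illustration.
bdk = bytes.fromhex("0123456789ABCDEFFEDCBA9876543210")
ksn_base = bytes.fromhex("FFFF9876543210")

k1 = derive_transaction_key(bdk, ksn_base, 1)
k2 = derive_transaction_key(bdk, ksn_base, 2)
assert k1 != k2  # every transaction gets its own key
```

Because each transaction is protected under a different key, compromising one transaction key reveals nothing about the next one, which is the property the paragraph above relies on.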

P2PE is a subset of E2EE.  The major difference is that P2PE does not allow the merchant to manage the encryption keys; under the P2PE standard, only the transaction processor or another third party may perform key management.  DUKPT can therefore be used by both P2PE and E2EE solutions, but under P2PE the key management must be done by a third party, never the merchant.

While third party key management is typically acceptable for small merchants, it does not work for merchants that switch their own transactions among various processors, as mid-sized and large merchants do.  That does not mean that E2EE solutions are not acceptable for reducing PCI scope.  As with PA-DSS certified applications, P2PE certified solutions can be accepted by a QSA as long as they are implemented according to the P2PE implementation guide, which can reduce the amount of testing a QSA is required to perform.  In my experience, the difference in testing effort between P2PE and E2EE is typically negligible, so any so-called savings are limited at best.

The huge downside to P2PE for merchants is that once you decide on a given P2PE solution, you are pretty much stuck with it and the processor providing it.  That is because most processors offering P2PE are only offering one P2PE solution.  As a result, if a better deal comes along for processing your transactions, you will likely have to replace your terminals and possibly other equipment to switch to the new processor.  For some merchants, that could be a costly proposition and make any switch not worth the effort.

So if your organization is looking at P2PE versus E2EE, I would not necessarily give an advantage to P2PE over E2EE.  Just because an E2EE solution is not P2PE certified does not mean it is not secure.  It only means that the vendor did not believe that the P2PE certification was worth the effort.


Why The Paradigm Must Change

The Target and Neiman Marcus breaches, and the other retailer breaches likely to come, should be a learning moment for all of us to demand that the card brands change their business paradigm to one that is more secure.

Bolt-Ons Do Not Cut It

For all intents and purposes, how a credit card works has not changed since the late 1950s when they were introduced.  Yes, there have been advancements such as EMV, 3D Secure and end-to-end encryption (E2EE), but those are all things that just bolt onto the original concept.  The trouble is that, given today’s technologies and their capabilities, the card and the bolt-ons no longer provide the security they once did.

With the Target breach there has been a call to finally get the US to convert to EMV.  The trouble is that EMV would still have leaked enough information for fraud to be committed, so it is not an answer.

Trade association spokespeople trotted out 3D Secure and other methods of securing online transactions.  The trouble is that most merchants eschew 3D Secure and its kind.  In addition, there are known vulnerabilities with these supposedly secure payment methods so they also have potential issues that could be exploited.

Then there is E2EE, also known as point-to-point encryption (P2PE) from a PCI perspective.  These solutions can also be exploited.  It may be more difficult, but to an attacker determined to gain access to sensitive information, that does not matter.

After the release of the PCI DSS in 2008, a lot of retailers implemented a variety of E2EE solutions.  Unfortunately, the endpoint at the retail location was the POS register and not the terminal.  This was not due to merchants’ negligence; this was due to how their POS applications operated.  This allowed for attacks such as that used in the Target breach to succeed.  All the attacker has to do is insert their malware into the POS process so that the malware can “see” the cardholder data before it gets encrypted.

Even solutions that do E2EE/P2PE to the terminal can be defeated by taking the same approach and inserting the malware into the terminal process before the terminal can encrypt the data.  Worse yet, if the terminal is breached, the attacker can also capture PINs if they have malware that captures the keystrokes on the terminal before the PIN is encrypted.  There are a number of methods to minimize these risks at the terminal, but if the terminal supply chain is compromised, as it was over a year ago in the Barnes & Noble breach, there is little a merchant can do to stop such attacks.

The bottom line is that all of these solutions are bolt-ons to the existing card paradigm and all still have risks that a breach could occur.

Using Complexity Against Us

Brian Krebs and others have wondered aloud how a sophisticated organization such as Target, with information security and forensic resources second possibly only to the government, could have been compromised, particularly after the 2007 compromise by Albert Gonzalez, after which Target totally revamped and increased its security posture to minimize the likelihood of another event.

The first clue to me came when I read the iSIGHT PARTNERS report on the Target breach.  The theme that comes through loud and clear is that the attackers are using the complexity of Target’s technology infrastructure against Target.  I mean how could FTP activity and huge data transfers (internal and external) go so unnoticed?

Actually, that was likely fairly easy.  The attackers used existing network traffic to mask their own.  They sought out servers that already carried large volumes of traffic and placed their data collection server on one of them.  Better yet, they chose a server that was already running as an FTP server.  As a result, even with diligent monitoring, the increase in traffic likely did not raise any alarms.

People assume that such breaches are like a “snatch and grab” in the real world.  The attackers break into an organization’s network, quickly take what they can off of the computers they encounter and leave.  That was the modus operandi (MO) in the past, but not today.  Sophisticated and organized attackers such as those that breached Target, do what they can to remain unseen while they learn more about their victim.  They take their time mapping out the network and determining what devices they want to compromise to further their efforts to gain access to the sensitive information they seek.  Because of this, it is highly likely that the Target attackers encountered the Target customer database during their investigation of the Target network and took it first so that they would have at least something for all of their efforts.

The most insidious thing I think the attackers did was use Target’s software distribution system to disseminate their malware.  Given the number of POS systems compromised (around 51,000), I find it hard to believe that the attackers manually installed their malware on those POS systems.  That would have placed their operation at extreme risk, likely resulting in its discovery.  By using Target’s software distribution system, the attackers got the added benefit of legitimacy because Target itself did the installation.  As such, the malware would appear valid because Target’s software management system initiated the change.

Now What?

All of this brings up an interesting conundrum.  If attackers are stepping up their game and using such techniques, how do we detect them?  It is a very good question with no good answers.  The iSIGHT report offers methods to stop and eradicate this particular attack.  However, the next attack and the attack after that will all likely use different malware and different techniques to get the data out of your network.

We are in a war of escalation with no end in sight.  Merchants step up their efforts to stop such attacks, and the attackers adapt and adopt new techniques to breach organizations and gain access to their sensitive information.  What we need is a solution that stops the escalation and gets us out of this vicious circle.

That is why I am pushing the 15 to 16 character single-use transaction code as that solution.  My reasons are as follows.

  • The algorithms already exist, as a number of the card brands experimented with them a decade or more ago.
  • It will work with existing POS technology and applications.
  • It will work with existing eCommerce sites.
  • It can be implemented into eWallet applications.
  • It can be processed, stored and transmitted without encryption.
  • It can be generated by PCs, smartphones, tablets, credit card sized devices and any other devices that have computational capabilities.
  • It can be displayed on devices in a character format for manual entry or as 1D or 2D bar codes for scanning.
  • It can be transmitted via swipe, EMV, near field communication (NFC), Wi-Fi or even Bluetooth.
  • And best of all, it is secure by the very nature that it can only be used once.
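To illustrate the “use once” property, here is a minimal sketch of how a processor might issue and redeem such codes.  This is a hypothetical design for illustration; the actual algorithms the card brands experimented with are not public in this form, and the names and code format below are made up.

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits

def generate_transaction_code(length: int = 16) -> str:
    """Generate a random single-use transaction code (hypothetical format)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

class CodeRegistry:
    """Processor-side registry that rejects any reuse of a code."""

    def __init__(self) -> None:
        self._issued: set[str] = set()
        self._redeemed: set[str] = set()

    def issue(self) -> str:
        code = generate_transaction_code()
        self._issued.add(code)
        return code

    def redeem(self, code: str) -> bool:
        # A code is honored only if it was issued and has never been used.
        if code in self._issued and code not in self._redeemed:
            self._redeemed.add(code)
            return True
        return False

registry = CodeRegistry()
code = registry.issue()
assert registry.redeem(code) is True
assert registry.redeem(code) is False  # second use is rejected
```

Because a stolen code has either already been spent or is rejected on its second presentation, there is nothing of lasting value for an attacker to exfiltrate, which is the whole argument of the list above.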

There will be some changes that would be required at the transaction processors and acquiring banks to handle such a solution.  But given that some of the card brands already have experience with this solution, there is a body of knowledge that already exists as to how it needs to be implemented.

Let the discussion begin on how we move ahead with a better, more secure solution.


P2PE Revisited

David Froud is on a roll.  Tenable’s Jeffrey Man wrote a post regarding point-to-point encryption (P2PE) and it apparently got the juices flowing.

I have discussed P2PE (known to almost everyone else as end-to-end encryption or E2EE) a number of times (see my Post Series References page).  Also see my post on What Happens Once Merchants Get Rid Of Cardholder Data to understand how the risks will shift and that the terminal becomes a large attack point.  This is why we need to get to some sort of single use code for a payment which is easily handled with today’s smartphones.

Read the posts and decide.  I think you will find that they both make compelling cases for why P2PE certification is a non-starter and is really not needed.


How The PCI Standards Will Really Die

Welcome to the new year.  I hope the holidays have been treating you well and the coming year is good as well.

There have been a number of articles written about why and how the PCI compliance process will die.  It is not that I look forward to the PCI standards dying; they have brought needed visibility to information security and privacy, and they keep me gainfully employed.  However, if things stay on their current trajectory, the PCI standards will eventually die, but not for the reasons being quoted in today’s articles.  The real killers of the PCI compliance process will be the card brands and the PCI Security Standards Council.  Yes, the very folks that brought us the PCI standards will bring about the ultimate demise of their precious set of standards.

The first death knell I see is that it is very easy to issue edicts from on high when you do not have to implement them.  Over the years, clarifications have been issued, quality assurance reviews performed, forensic examinations conducted and a host of other activities have resulted in “enhancements” to how the PCI standards are assessed and enforced.  Do not get me wrong, a lot of what has been done was needed and appreciated.

However, by the same token, some of what has come down has been a nightmare to implement.  Any QSAC not using some sort of automated system to conduct their PCI assessments will find it impossible to meet the current and any future documentation and tracking standards now required by the PCI SSC’s QA process.  Under the current standards, QSACs need to document who they interviewed and what the persons were interviewed about as well as tying documentation and observations to the tests performed.  Without some sort of automated process, these requirements are just too intensive to perform manually.

Documentation received and reviewed needs to have its file name, date of issue and a description of its purpose in the PCI assessment process documented.  The basic PCI DSS requires a minimum of around 200 discrete documents for the assessment process.  The average we see for most of our engagements is over 600 documents, which include not only policies, standards and procedures, but also configuration files, interview notes and observations such as screen shots, log files and file dumps.  You really have to question any QSAC that tells you they manage the process manually.  Either they have an amazing and magically efficient project management process, they have very, very inexpensive staff (i.e., overseas labor), or they are shortcutting the process and producing a work product that does not comply with the PCI SSC QA program and has yet to be assessed by the PCI SSC (the most likely scenario).

Even simple SharePoint or Lotus Notes solutions are not cheap when you consider the cost of the server(s), the storage of all of the documentation collected, which can run 5 to 10GB per project, and all of the requisite system maintenance.  Servers and storage may be cheap, but it all adds up the more clients you assess.  And speaking of storage, the PCI SSC requires that documentation related to PCI assessments be retained for at least three years.  For those of us with electronic work paper management systems, this is not a problem.  However, given the amount of paper generated by these projects, those QSACs using traditional paper filing methods will find a lot of shelf space taken up by their PCI engagements if they are truly following the procedures required by the PCI SSC.

All of this drives up the cost of a proper PCI assessment, more than I think the card brands and the PCI SSC are willing to admit.  It is not that the card brands and PCI SSC do not care about this situation; rather, they do not understand the operational ramifications of their edicts.  The card brands and PCI SSC tread a very fine line here, and to this point they have been heavy handed in issuing their edicts.  Going forward, the PCI SSC needs to ask the QSACs, Participating Organizations and ASVs to assess the cost and time impacts of these edicts so that they can be weighed against their benefits, versus what is done now, which is more of a procedural and proofing review.  If this is not done, there will soon come a point where merchants and service providers push back hard and refuse to go through the process due to the cost and the amount of time involved in being assessed.

The next death knell is the inane process that is called the PCI Report On Compliance (ROC).  When the PCI SSC did not have access to the QSACs’ work papers, the current ROC writing process made some sense, as there was no other way for the PCI SSC or the processors and acquiring banks to know whether the QSACs had really done the work they claimed.  However, all of that changed a number of years ago when the PCI SSC required QSACs to add a disclaimer to their contracts stating that the PCI SSC had the right to review all work products.  Yet even with this change, we continue to have to write an insanely detailed ROC, typically running to a minimum of 300-plus pages for even the most basic of ROCs.

Unfortunately, there are QSACs out there that apparently have not been through the PCI SSC QA process and that most dreaded of all states: Remediation.  As a result, they have much lower costs because they are not documenting their assessment work as completely as they need to and are not sampling, observing or interviewing like QSACs that have been through the QA process.  In addition, based on some work products we have seen, they also do not care about the quality of the resulting ROC, as it looks entirely like a ‘find and replace’ of a template and makes no sense when you read it.  In talking to other large QSACs that have been through the QA process multiple times, the PCI SSC has indicated that it monitors the large QSACs more than the small ones because there is more risk with the large QSACs.  While true to an extent, we have encountered a number of smaller QSACs that perform assessments for large clients due to their much lower cost structure and their willingness to ‘overlook’ compliance issues.  If the PCI SSC does not go after these QSACs soon, there will likely be a number of breaches that occur due to those QSACs’ lack of diligence in performing their assessments.

I know of a number of QSACs that would like to see Bob Russo and the representatives of the various card brands actually work as staff on a few PCI assessment engagements so that they can better appreciate the inordinate amount of work involved in generating a ROC.  I think they would be shocked at the amount of work effort they have driven into a process that is already too complicated and prone to error.

As it stands today, the ROC writing, review and proofing process is probably 50% to 60% of a good QSAC’s project costs.  To address this, the PCI SSC QA group tells QSACs to develop one or more templates for writing the ROC which, from what we have seen from some other QSACs, means a lot of mass ‘find and replace’ to speed the ROC writing process.  For the last few years, a number of QSACs have brought the ROC writing process up at the Community Meetings.  However the card brands continue to shoot down any sort of changes to the process.  As a result, the cost of producing a ROC is driven by the size and complexity of the merchants’ or service providers’ cardholder data environment (CDE).  These costs will only continue to rise as long as the PCI SSC does not allow QSACs to mark items as ‘In Place’ with only a check box and rely on the QSAC’s work papers versus the verbosity required now.  If this sort of process can work for financial auditors, it can work here as well.

A third death knell is the PCI SSC and card brands continuing to claim that the majority of breaches are the result of organizations not complying with the PCI DSS.  In discussions with a number of the PCI forensic examination companies, I am hearing that the card brands cannot believe that more and more organizations were PCI compliant at the time of their breach.  The PCI SSC and card brands have apparently convinced themselves that the PCI standards are “perfect” and cannot imagine that an organization could be breached unless it was not complying with the standards.  There is no security standard that I am aware of that totally prevents breaches.  So while the PCI standards are a good baseline, the card brands and PCI SSC seem to have forgotten that security is not perfect and that any security standard only minimizes the damage done when a breach occurs, and only if the standard is truly followed.

And as organizations have gotten the PCI “religion,” the effort required to compromise them from the outside via traditional attacks has increased significantly.  As a result, successful attackers have changed strategy and work on social engineering their way past the bulk of an organization’s security measures.  The PCI DSS only has a little bit on social engineering in requirement 12.6 regarding security awareness training.  And even those organizations with the most robust of security awareness programs will tell you that, even after extensive security awareness training, human beings are still fallible and that some people still do very questionable things that continue to put organizations at risk, sometimes significant risk.  Even when you have the most diligent of employees, they still make mistakes in judgment from time to time.

Until the human element can be totally removed, there will always be a certain amount of risk that will never go away.  Again, the PCI SSC and card brands seem to not want to acknowledge the failings of the human element and appear to believe that technology is the savior based on the focus of the PCI standards.  However time and again, every security professional has seen very sophisticated security technologies circumvented by human error or just plain apathy towards security (i.e., “it always happens to someone else, not my organization” or “we’re too small to be a target”).

Until the PCI SSC and the card brands drop the “holier than thou” attitude toward the PCI standards and stop the public pillorying of organizations that have been breached, there will continue to be editorial commentary regarding the pointlessness of the standards and ever more serious push back against complying with them.

These are the reasons why the PCI SSC and the card brands will be the ones that kill the PCI standards.  At the moment, they are so far removed from the process that they do not understand how complicated and expensive it has become, which is why merchants and service providers are complaining about the ever increasing costs and effort related to the PCI assessment process.

The PCI SSC and card brands also seem to have forgotten that QSACs have to make money doing these assessments and, when you pile on clarifications and edicts that do nothing to streamline and simplify the process, you only drive the costs of the process higher.  And higher costs only make merchants and service providers, who are on thin margins to begin with, even more incentivized to use the much lower cost QSACs, driving the diligent QSACs out of the market and thus increasing the likelihood of breaches.

Again, it is not that I want the PCI standards to go away as I think they have brought a real benefit.  However, if these issues are not addressed, the PCI standards will end up going away.  I fear that, with them gone, there will be no carrot to ensure the security of cardholder information and we will end up back where we were before the PCI standards existed.


End-To-End Encryption – The Rest Of The Story

Step right up folks.  I have something that will cure all of your problems with credit card processing.  It is called end-to-end encryption.  Yes, folks, it is the be-all and end-all of security.  It will cure all that ails you, particularly those nasty data breaches.  Don’t be shy, just step right up and get your own version while supplies last.

Gee, when end-to-end encryption (E2EE) is put that way, it sounds great, almost too good to be true.  And you would be right; it is too good to be true.  But if you listen to the statements of the proponents of E2EE, they make it sound like once E2EE is in place, it is like the Ronco Showtime Oven, “Just set it and forget it.”

Now, do not get me wrong.  E2EE is not a bad thing, but it does have its own set of risks.  And it is those risks that do not get discussed that concern me.  The reason for my concern is that if you discuss E2EE with any merchant, most see it as this panacea, something that will get them out of the PCI compliance game altogether.  However, nothing could be further from the truth.  If anything, E2EE may make PCI compliance even more daunting than it is today.

The first thing everyone seems to forget is that E2EE only removes those systems and networks that are between the endpoints.  That is because the data stream between the endpoints is encrypted and, therefore, out of scope for PCI compliance.  However, for a merchant, that means that the device that accepts the credit card is still in-scope for PCI compliance.  Bring this fact up to most merchants and they start complaining like no tomorrow.

That device might be as “simple” as a credit card terminal or as complex as an integrated point-of-sale (POS) workstation on a network.  However, since this device is an endpoint, the merchant or the merchant’s QSA needs to ensure that the endpoint is properly secured and cannot end up being a breach point.  Depending on the complexity of that device, that assessment might be very straightforward or very time consuming.  The reason the endpoint needs to be assessed is that security is only as good as its weakest link.  In the case of E2EE, the weakest links are the endpoints at which the data is encrypted and decrypted.

The next thing that seems to slip people’s minds is the fact that since the merchant has an endpoint, that endpoint is still a target.  Worse yet, because it is an endpoint, the level of sophistication required to compromise it goes up exponentially, meaning that any successful attack will likely be beyond the average merchant’s capability to readily detect.  The PCI DSS addresses this threat fairly well by requiring network monitoring, daily log reviews, anti-virus, anti-malware, firewalls and the like.  However, I can tell you from personal experience that your average merchant is not going to be equipped to deal with this new threat.

And what is the new threat?  The new threat is tampered-with hardware and software.  If you think this is farfetched, think again; it has already happened on a limited scale.  The doctoring of hardware is fairly straightforward both to accomplish and to detect.  Detection only takes looking inside the device and noticing something that does not belong.  However, doctored software is another story.  The concept of doctored software has been a concern in the health care industry since computerization was first used in heart pacemakers.  While the health care industry has developed rigorous testing and certification procedures, the rest of the software industry has said there is no need.  That is, until now.  As the world further automates, the need for reliable, safe and secure software only increases because of the reliance people and organizations place on that software.

So what can an organization do to stem this new threat after implementing E2EE?  Here are some thoughts.

  • Purchase your credit card processing equipment only from your acquiring bank or a reputable vendor.  This is not a perfect solution to the problem, but it is better than buying a used unit off of eBay or from Joe’s Guaranteed Card Equipment.  Yes, you may save a few bucks, but is that worth having every one of your customers that uses a credit card compromised?  Probably not.
  • Ask your supplier of terminals or POS workstations about what they do to test these systems to ensure that they operate as expected and are not routing cardholder data to Timbuktu as well as your bank.  Ask them to provide those procedures in writing and review them to ensure they appear adequate.
  • Use serialized tamper-evident tape on the seams and doors of your terminals and POS workstations.  Require the incoming manager at every shift change to log their review of the devices, inventory the devices and note whether any have been tampered with.  If a device does appear to have been tampered with, it should be taken out of service until a new, secure device can replace it.
  • If using self-checkout systems, make sure to have those systems under both video and employee monitoring.
  • Upgrade your card processing devices to the latest devices.  Over the last few years, some of these devices have seen significant changes in their design that improves their tamper resistance.  This is particularly true of fuel pumps and certain types of terminals.
  • Review video monitoring whenever a manager notes that a device may have been tampered with, to determine if you can identify possible suspects.
  • Patch your devices as soon as possible to minimize their susceptibility to attack or compromise.
  • If the vendor of the equipment will perform updates, make sure that you or someone in your organization schedules the updates.  If anyone shows up at a location to “update” your equipment and it was not scheduled by your organization, contact law enforcement.
  • If updates will be done by the vendor remotely, make sure that someone from your organization initiates the remote access and they observe the remote update process.  At the end of the update process, the person should terminate the remote session of the vendor.
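The shift-change inspection log described above could be as simple as the following sketch.  The record format and field names are hypothetical, invented purely to illustrate the process; any real implementation would follow the merchant’s own inventory and incident procedures.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DeviceCheck:
    """One shift-change inspection record for a payment device (hypothetical format)."""
    device_serial: str
    manager: str
    tamper_seal_intact: bool
    checked_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def devices_to_pull(checks: list[DeviceCheck]) -> list[str]:
    """Return the serial numbers of devices that must be taken out of service."""
    return [c.device_serial for c in checks if not c.tamper_seal_intact]

# Example shift-change review: one intact seal, one broken seal.
checks = [
    DeviceCheck("TRM-0001", "A. Smith", True),
    DeviceCheck("TRM-0002", "A. Smith", False),  # broken seal: possible tampering
]
assert devices_to_pull(checks) == ["TRM-0002"]
```

The value of keeping such records, as the post notes, is less the technology than the evidence of due diligence they provide.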

Even implementing these processes will not remove all of the risk, particularly the risk of having modified software introduced into your environment.  However, these processes will show a court that you conducted due diligence and tried to keep your equipment secure.


Is “End-To-End Encryption” Realistic? Part 3

Hopefully by this point I have pointed out that encryption, end-to-end or otherwise, is not a silver bullet.  It is just another tool to minimize the risk of data loss.  But why has it become the topic du jour?  That is what I hope to examine in this post.

There is also the issue of whether end-to-end encryption is even feasible.  As I pointed out in my last post, while it is feasible, it may not be as secure as Mr. Carr and others desire.  In some cases, it may not be implementable given the technology used by all merchants.  Merchants live on very thin margins, even Target and Wal-Mart, so the investment required to make changes may put some merchants out of business.  In today’s economic climate, the resulting job losses would far outweigh the monetary losses from fraud.  Until the economy picks up, merchants will likely fight to minimize any expense to change their systems and networks.

Speaking of monetary losses: based on the latest statistics I could find, 7.5% of Americans (almost 23 million people) have suffered from financial fraud.  While that is a fairly large number of people, total monetary losses to fraud versus total credit card charges are still well below 1%.  Until that percentage gets higher, we will likely see the card brands and merchants continue to accept this loss as a cost of doing business.

The fact that the US House of Representatives looked at this issue in the Committee on Homeland Security speaks volumes.  There is an assumption that card fraud funds terrorism, since the bulk of fraud is now committed by criminal organizations.  I do not discount the possibility that some of this fraud money flows to terrorists, but the amount is likely so small that it is inconsequential.  Then there is the fact that Internet access in known terrorist countries, and the number of attacks coming from those countries, just does not support the conclusion that fraud funds terrorism.  Granted, a lot of attacks and fraud are conducted by surrogates on behalf of others.  However, based on everything I have read, no correlation between the attackers and terrorists has been established.  Until it is, this is just a smoke screen in my book.

In her statement during the House hearings, Representative Yvette Clarke (D-NY) held out Chip and PIN as one of the keys to securing credit card transactions.  As I pointed out in my Chip and PIN post, this technology is not a silver bullet.  In fact, it has its own security issues, the largest being that the encryption it offers is weak at best.

Unfortunately, I think this issue is being discussed because the people discussing it believe that encryption solves the data breach problem.  If properly implemented, encryption will reduce the risk of successful data breaches, but it will not eliminate them.  It will just make them more difficult to execute.  After all, banks and art museums are still robbed despite all of the security measures they have implemented.  What makes anyone think that data breaches will stop because of encryption?  That is the point, they will not.  Data breaches will continue to occur with or without encryption.  It is how successful those breaches are that will change.


Is “End-To-End Encryption” Realistic? Part 2

Let us examine what Robert Carr, CEO of Heartland, possibly means by ‘end-to-end encryption’.  In the Heartland press release it says, “For the past year, Carr has been a strong advocate for industry adoption of end-to-end encryption – which protects data at rest as well as data in motion – as an improved and safer standard of payments security.”

One of the keys to defining end-to-end encryption is the fact that Mr. Carr refers to it as protecting data at rest as well as data in motion.  As a former telecommunications and networking person, ‘end-to-end’ to me means from the initial point of contact with the network to the point on the network where the transmission terminates.  However, Mr. Carr’s definition also includes the point at which the cardholder data is retrieved (i.e., from the card, smartphone, etc.) as well as the point at which it is ultimately stored.  It is from this definition that we will work.  Technically, what Mr. Carr describes seems possible.  However, there are obstacles and limitations in the technology in place today that will make this re-engineering difficult, and possibly impossible, at least for the immediate future.

Getting true end-to-end encryption requires that the data on the credit card itself be encrypted.  Guess what?  We have that technology today in the Chip and PIN credit card.  If you remember from my post on Chip and PIN, this is not a technology without its flaws.  As I pointed out in that post, the chip on the Chip and PIN card is encrypted using DES, 3DES, RSA or SHA.  Since DES is no longer considered a secure encryption method, the card brands should no longer recommend its use.  The larger problem with Chip and PIN encryption is that the encryption key is a four-digit number (the PIN), which does not create a very secure cipher.  Essentially, we are talking about something in the neighborhood of 13-bit encryption versus the more robust 128-bit or better encryption required by the PCI DSS.  As a result, regardless of the encryption method used, it is the weak key (the PIN) that creates weak encryption at the card.  If Mr. Carr thinks that Chip and PIN gives him part of his end-to-end solution, he needs to think again.
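To see where the 13-bit figure comes from: a four-digit PIN has only 10,000 possible values, and log2(10,000) is roughly 13.3 bits of entropy.  A quick sketch of the arithmetic (the side-by-side comparison to a 128-bit key is my own illustration):

```python
import math

# A four-digit PIN yields only 10**4 possible keys.
pin_keyspace = 10 ** 4
pin_bits = math.log2(pin_keyspace)   # about 13.29 bits of entropy

# PCI DSS "strong cryptography" expects 128-bit or better keys.
strong_bits = 128

print(f"PIN entropy: {pin_bits:.2f} bits")
print(f"Keys an attacker must try to exhaust a PIN: {pin_keyspace:,}")
print(f"A 128-bit key space is 2**{strong_bits - round(pin_bits)} times larger")
```

Ten thousand trial decryptions is trivial work for any modern computer, which is why a strong algorithm keyed from a PIN is still weak encryption.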

Then we have the encryption from the swiping device to the processor for the approval or decline of the charge.  A number of you may be asking, what about the POS system?  Hold that thought; I will discuss it in a bit.  Let us first talk about stand-alone terminals, which may or may not be integrated into a POS system.  To get end-to-end encryption, the encryption must occur from the terminal that accepts the card all the way through to the ultimate storage location of the transaction.  The good news is that the terminal is typically capable of using the latest encryption algorithms, so the transmission from the terminal to the processor can be properly secured.  The problem is that this encryption is currently implemented as an encrypted tunnel, meaning that any network devices between the terminal and the processor are unable to act on the message.  For true stand-alone terminals, this is not a problem.  For terminals integrated with a POS solution, implementing end-to-end encryption requires a separate connection from the terminal to the POS that transmits the approval/decline code and transaction amount back to the POS.  All of this is available today from some POS solutions.
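As a rough illustration of the terminal-to-processor leg, the authorization message can be wrapped in a TLS session so that nothing between the two endpoints sees anything but ciphertext.  This is only a sketch, not any processor's actual interface; the host name and message format are hypothetical:

```python
import socket
import ssl

def send_authorization(host: str, payload: bytes, port: int = 443) -> bytes:
    """Send an authorization request over a TLS tunnel and return the reply.

    Everything between the terminal and the processor sees only ciphertext,
    which is exactly why intermediate network devices cannot act on the message.
    """
    context = ssl.create_default_context()  # validates the processor's certificate
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(payload)
            return tls.recv(4096)

# Hypothetical usage:
# reply = send_authorization("gateway.example-processor.com", b"AUTH|4111...|25.00")
```

The same opacity that protects the data from eavesdroppers is what prevents intermediate devices from routing or inspecting the message, which is the tension discussed below for MPLS networks.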

However, end-to-end encryption gets trickier when it is provided as part of an integrated POS solution.  This is because these solutions typically integrate the terminal with the POS hardware and software.  The trickiness comes from the fact that we are relying on hardware and software to provide our security.  Since most of today’s POS solutions are based on some form of Microsoft Windows, security can be haphazard at best depending on how Windows has been implemented.  All that is required to compromise this solution is a piece of malware that positions itself between the reading and decryption of the credit card and the application that processes the transaction.  Based on what has been published, this appears to be exactly what happened in the Hannaford breach.  Therefore, the POS solution must be rigorously hardened and sufficiently monitored to ensure that it is not compromised.  Alternatively, if it is compromised, an alert is generated almost immediately to notify management of the compromise so that it can be addressed as soon as possible.  All of this technology exists today in the form of anti-virus, anti-malware and critical file monitoring solutions.  However, additional controls may also be needed to ensure that POS solution ghost images are not tampered with (use of hashing and periodic examination of images to ensure they hash properly) and that critical file monitoring is actually monitoring the correct files.
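The image-tampering control mentioned above can be as simple as recording a cryptographic hash of each approved POS ghost image and periodically re-hashing it for comparison.  A minimal sketch, assuming SHA-256 and local file access (both my assumptions, not a specific product's approach):

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks to handle large images."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def image_is_untampered(path: Path, known_good_digest: str) -> bool:
    """Compare the image's current hash against the recorded baseline."""
    return file_sha256(path) == known_good_digest
```

Run the comparison on a schedule and alert on any mismatch; a changed digest means the image no longer matches the approved baseline.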

Those of you thinking of using Kerberos for end-to-end encryption had better think again.  Kerberos encrypts between devices and/or applications.  Therefore, Kerberos only ensures encryption from the POS application/device to the next application/device it connects to, most likely just another application/device on your own network.  So the threat of malware still exists; it is just spread further out to other devices/applications.  The other problem with Kerberos is interoperability outside of your own environment, in this case with your processor.  While Kerberos supports such capability, in my experience very few organizations get Kerberos implemented properly on their own networks, let alone working properly with an outside network.

Going back to the encryption between the terminal or POS system and the processor: I brought up the fact that current network encryption solutions (IPSec, PPTP, L2TP, etc.) create a tunnel to which network devices between the two endpoints have no access.  For those of you using MPLS (multiprotocol label switching), this is an issue because your traffic cannot be rerouted by MPLS if it is tunneled.  Supposedly, this is being addressed by an IEEE committee developing an encrypted tunnel that encrypts only packet payloads, leaving the information necessary for MPLS to operate in clear text.  When that standard will be released is anyone’s guess.  Until it is, MPLS networks remain a potential problem for encrypted tunnels.

As you can see, end-to-end encryption is feasible, but its true value may not be what Mr. Carr believes or has been led to believe.

Final post on end-to-end encryption.


Is “End-To-End Encryption” Realistic? Part 1

A January 26 Digital Transactions News article discusses Robert Carr’s call for end-to-end encryption.  The Heartland press release that drove all of the media stories around that time states that, “For the past year, Carr has been a strong advocate for industry adoption of end-to-end encryption – which protects data at rest as well as data in motion – as an improved and safer standard of payments security. While he believes this technology does not wholly exist on any payments platform today, Heartland has been working to develop this solution and is more committed than ever to deploying it as quickly as possible.”  Mr. Carr’s call for encryption appears to have been unknown to the public, as the earliest reference to encryption a Google search could uncover was in Heartland’s 2008 annual report.  Given Heartland’s fiscal year ends on December 31, it is likely the statement on encryption in the annual report was written around the same time as the press release.

In response to Mr. Carr’s call for end-to-end encryption, Kevin Nixon argues in a March 24 article that end-to-end encryption is “bad medicine.”  So, what is meant by end-to-end encryption, the “one brief point” where cardholder data is exposed and why is this discussion creating such a stir?

In this post, I would like to discuss the “one brief point” comment.  Based on my experience with credit card processing, depending on where you are in the process, there can be more than just one “brief point.”  In fact, “brief” may also be a misnomer.  Depending on the process, there could be numerous points where cardholder data is exposed.

Let us look at this from the merchant’s perspective.  For merchants using dial-up terminals, there is truly only one potential “brief point” of exposure between the terminal and the connection with the processor.  However, this exposure requires that the dial-up line be wire tapped and the electronic transmissions be recorded.  Given that a merchant with dial-up does not have a high volume of credit card transactions and that wire-tapping is a very serious federal felony, the risk of this occurring is low as the payback is also very low.

For merchants that use integrated POS, the first exposure point is between the credit card terminal and the POS solution.  This point exists because of the multitude of terminals that could possibly be connected to the POS.  Since POS vendors do not always have the resources to develop interfaces for every possible credit card terminal, they develop interfaces to the most popular terminals following the 80/20 rule.  This compatibility issue has become less of a problem over the years as terminal vendors adopted USB and other standard connectivity solutions to connect terminals to the POS.  However, most POS vendors have made the assumption that the connection between the terminal and the POS is secure, which may or may not be true.

Then there is the connection between the integrated POS and the processor.  For large merchants that perform their own transaction switching, the connection between the POS and the internal switch can sometimes be unencrypted and that can be another point of exposure.  Interestingly enough, the connection between the merchant and the processor can also be an exposure point.  I cannot tell you the number of large processors that even up to a year ago were still struggling with getting connections to merchants encrypted.  And if a merchant’s connection to their processor is a private circuit, it is likely not encrypted because the PCI DSS does not require it.

Another exposure point is settlement.  While settlement is usually conducted over private circuits, it is typically done using FTP to transfer files between the merchant and the processor.  Some processors have moved to transferring settlement files via secure FTP, but have not necessarily moved all of their merchants to secure FTP.  As a result, there is potential risk in the fact that standard FTP is used during the settlement process.
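Moving settlement off plain FTP does not require exotic tooling; FTPS (FTP over explicit TLS) is a near drop-in replacement.  A sketch of transferring a settlement file that way; the host, credentials and file names are hypothetical:

```python
from ftplib import FTP_TLS

def upload_settlement(host: str, user: str, password: str,
                      local_path: str, remote_name: str) -> None:
    """Upload a settlement file over FTP with explicit TLS (FTPS)."""
    with FTP_TLS(host) as ftps:
        ftps.login(user=user, passwd=password)
        ftps.prot_p()  # encrypt the data channel too, not just the control channel
        with open(local_path, "rb") as fh:
            ftps.storbinary(f"STOR {remote_name}", fh)

# Hypothetical usage:
# upload_settlement("settle.example-processor.com", "merchant42", "secret",
#                   "batch.txt", "batch.txt")
```

Note the `prot_p()` call: without it, some FTPS setups encrypt only the login exchange while the settlement file itself still crosses the wire in clear text.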

These are just the obvious risks to cardholder data security.  Based on all of the custom solutions implemented by merchants and processors, there are unique risks present throughout the cardholder data flow.  As a result, each instance presents its own unique challenges to provide adequate security.  This is why securing cardholder data can be daunting.

In my next post, I will examine the definition of ‘end-to-end encryption’.
