08 Aug 14

Requirement 10.6.2 Clarification

As a refresher, requirement 10.6.2 states:

“Review logs of all other system components periodically based on the organization’s policies and risk management strategy, as determined by the organization’s annual risk assessment.”

The argument in PCI circles is over the definition of “all other system components”.  Some of us believed it meant systems other than those in scope.  Others believed it could only refer to in-scope systems such as a user workstation.  As a result, I asked the PCI SSC to clarify this requirement, and this is the response I got back.

“In PCI DSS v2.0, logs for all in-scope systems were required to be reviewed daily. However it was recognized that for larger or more complex environments, there could be lower risk systems that were in scope for PCI DSS that could warrant less frequent log reviews. As such, PCI DSS v3.0 defines a number of events and system types that require daily log reviews, and allows the organization to determine the log review frequency for all other in-scope events and systems that do not fall into those categories.

For some environments, such as those designed specifically for the purposes of PCI DSS, then it is possible that all in-scope systems fall under the system categories defined in Requirement 10.6.1, meaning that daily log reviews are required for all in-scope systems. In other environments, there may be many different types of system that are considered in-scope, but which are not critical systems and neither store, process or transmit CHD nor provide security services to the CDE. Some possible examples could be stock- control or inventory-control systems, print servers (assuming there is no printing of CHD) or certain types of workstations. For these events or systems, the entity, as part of its annual risk assessment process, is expected to define the frequency for reviews based on the risk to its specific environment.

The intent of this update is not to apply PCI DSS Requirements to out-of-scope systems. We realize that the current wording is causing confusion and will address this in the next revision.”

So there we have it.  Not the first time my interpretation was wrong.  The requirement means that in-scope systems which, based on an assessment of risk, are at lower risk of compromise can have their logs reviewed less frequently.

But that means you need an accurate risk assessment to support your argument.  So those of you who have not explicitly assessed the risk of your Category 2 systems will have to break them out in your risk assessment to support a reduced log review frequency.

08 Aug 14

The Dilemma Of PCI Scoping – Part 3

In Part 2 we discussed the criticality of a risk assessment and started implementing the framework by fixing monitoring and alerting so that we can properly manage the risk we will be accepting.  In this part I will deal with Category 2 and 3 systems and how to manage their risk.

One of the big problems I have with the security purists is their arguments over Category 3 systems.  Do not get me wrong, I understand the reasons for their statements.  However, to me their arguments seem to imply that Category 3 systems are somehow not secure, or not as secure as other systems, and therefore cannot be trusted.  This further implies that organizations should implement some sort of multi-tier security structure, which should not be the case.  Taking such an approach would create an operational and support nightmare, not to mention increase the risk of compromise.

In today’s world, all devices need a certain basic level of security comprising, at a minimum, a hardened OS, minimized ports and services, access control, anti-virus and anti-malware (as appropriate and necessary), a personal firewall (if mobile), logging and other similar security measures as necessary.  This is something I like to refer to as “Security 101”.  If you are not doing Security 101 across the board, then you likely have bigger issues than just PCI compliance.  But if you are following a Security 101 approach, then all of the devices under your direct control should have a solid, basic level of security and should be able to be trusted when on your network.  As a result, with the proper logical access controls in place, you should be able to trust Category 3 systems coexisting with certain Category 2 systems.
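To make the “minimized ports and services” piece of Security 101 concrete, here is a minimal sketch in Python that compares the TCP ports actually listening on a host against an approved baseline and flags anything unexpected.  The baseline shown is a hypothetical example; yours would come from your build standard.

    # Sketch: flag listening TCP ports that are not on the approved baseline.
    # The APPROVED_PORTS set is a hypothetical example of a build standard.
    import socket

    APPROVED_PORTS = {22, 443}
    PORTS_TO_CHECK = range(1, 1025)   # well-known ports

    def listening_ports(host="127.0.0.1"):
        """Return the set of TCP ports accepting connections on the host."""
        found = set()
        for port in PORTS_TO_CHECK:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.2)
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    found.add(port)
        return found

    for port in sorted(listening_ports() - APPROVED_PORTS):
        print("Port %d is listening but is not on the approved baseline" % port)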

Notice I said “certain Category 2 systems” not “all” Category 2 systems.

Category 2b systems that have inbound connectivity to the CDE are the riskiest because of that direct inbound connectivity.  As such, should any Category 2b system become compromised, the CDE is also compromised by default.  Most Category 2b systems are gateways to the CDE, such as out-of-band management systems, “jump boxes” for network and system administration, and Citrix or other virtual desktop technologies used to grant business users access to CDE applications.

These gateways need to be configured such that if they become compromised they are immediately taken out of service until they can be rebuilt and/or reconfigured as secure.  This is typically accomplished not just with anti-virus, but also through file integrity monitoring, application whitelisting and other similar measures.  This needs to be monitored at the virtual desktop infrastructure level (i.e., the Citrix, VMware, Terminal Services, etc. servers) as well as at the virtual desktop image level.  If a file that should not have changed does change in the virtual environment or virtual desktop image, access to the virtual desktop environment is suspended until the infrastructure or image can be recertified as secure.  That requires very tight monitoring of the critical files; however, it should be relatively easy to do as long as the virtual desktop image is restricted to only a single cardholder data (CHD) application.
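To illustrate the kind of critical-file monitoring I am describing, here is a minimal sketch that records SHA-256 hashes of a set of critical files when the image is certified and reports any file that later differs from that baseline.  The file list and baseline location are hypothetical; a commercial FIM or whitelisting product obviously does far more than this.

    # Sketch: compare SHA-256 hashes of critical files against a recorded baseline.
    # CRITICAL_FILES and the baseline path are hypothetical examples.
    import hashlib, json, pathlib

    CRITICAL_FILES = ["/etc/passwd", "/opt/app/payment.cfg"]
    BASELINE = pathlib.Path("fim_baseline.json")

    def file_hash(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def record_baseline():
        # Run once when the virtual desktop image is certified as secure.
        BASELINE.write_text(json.dumps({f: file_hash(f) for f in CRITICAL_FILES}))

    def check_against_baseline():
        # Run on a frequent schedule; an alert here is the trigger to suspend
        # access to the virtual desktop environment pending recertification.
        for f, expected in json.loads(BASELINE.read_text()).items():
            if file_hash(f) != expected:
                print("ALERT: %s has changed from the certified baseline" % f)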

The other thing that needs to happen with virtual desktops is that organizations must be extremely diligent about who has access to them.  This means very timely user management, something a lot of organizations are lax about.  The PCI DSS states in requirement 8.1.4 that organizations must “Remove/disable inactive user accounts at least every 90 days.”  Ninety days is a very long time when you look at what attackers are able to do in such a timeframe.  As a result, organizations need to step up their game in this regard: when someone’s job function changes, a help desk ticket needs to be generated to change that person’s access.  A 90-day review process is fine, but if you are relying on that sort of thing as a control, you are not being diligent enough in managing your users.
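As a sketch of the kind of tightening I mean, the following flags accounts whose last logon is older than a threshold set well inside the 90-day PCI maximum.  The account list and the 30-day threshold are assumptions for illustration; in practice the data would come from your directory and the output would feed your help desk.

    # Sketch: flag accounts that have not logged on within a chosen threshold.
    # The accounts list is a hypothetical stand-in for a directory export.
    from datetime import date, timedelta

    INACTIVITY_THRESHOLD = timedelta(days=30)   # tighter than the 90-day PCI maximum

    accounts = [
        {"user": "jsmith", "last_logon": date(2014, 7, 30)},
        {"user": "contractor1", "last_logon": date(2014, 4, 2)},
    ]

    today = date.today()
    for account in accounts:
        if today - account["last_logon"] > INACTIVITY_THRESHOLD:
            # Generate a help desk ticket to disable or remove the account.
            print("Review/disable needed for %s" % account["user"])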

But what about Category 2a systems, you might ask?  Category 2a systems are directory servers, authentication servers, DNS and DHCP servers, anti-virus master servers, system update servers and other network and system management/monitoring support systems.  These are systems to which only system or network administrators should ever have direct access.  They should be configured as “bastion” servers, security hardened so that they are not easily compromised through vulnerabilities.  Most vendors of these systems understand this and have set up their installation routines accordingly – restricting services, allowing only necessary applications, etc.  The bottom line with Category 2a systems is that, if any of them become compromised, it is game over because the attacker has the privileges of an administrator.

I am not a fan of separate directory systems.  These systems double as the access control management system, and the more complicated access control becomes, the more likely it is to become less reliable and less secure.  As such, I prefer a common access control system managed through an integrated directory system such as Microsoft’s Active Directory (AD).  Over the years, the ports required to make AD work between domain controllers have become much more manageable and easier to implement.  In addition, starting with Windows Server 2008 you can even limit and control the dynamic ports used.  As such, I recommend a common domain approach rather than separate domains with trust relationships, as it makes management so much easier and more reliable.  Whether or not you choose to put domain controllers inside your CDE is up to you to decide.

However, there is a larger consideration here that needs to be put on the table.  Think about what the security purists are saying with their statements about Category 3 systems.  If an attacker can jump from a Category 3 system to owning a Category 2a system such as a directory server, DNS server, DHCP server, etc., what does that say about the organization’s security posture to begin with?  If an attacker gains control of a system/network administrator’s credentials, it is game over.  That is not a device failure issue, that is a social engineering failure.  As a result, you can have all the separation controls you want, but in the end, if an administrator gets owned, what did all of that separation do for you?  Nothing.  The best we can do is manage and mitigate the risks, monitor for any attempts to exploit them, and deal with any attacks as quickly as possible.

At the end of the day, organizations need to assess the risk that their Category 2 systems pose to the CDE.  If that risk is higher than desired, those systems need to be put into the CDE.  The rest can share networks with other Category 2 and 3 systems if necessary.

There are obviously enhancements you can make to further improve on this strategy.  But this is just what I would consider the minimum to get by, which seems to be what a lot of you desire.

01 Aug 14

The Dilemma Of PCI Scoping – Part 2

In Part 1 I discussed how isolation is an answer, but probably not a viable one for all but the most security-conscious organizations such as the military, defense contractors or those that can afford that sort of painstaking luxury.  So unless you consider everything in scope for PCI compliance, is there a viable way to reduce scope?

Before we get to that though, we need a quick discussion on risk management, as the solution is predicated on the identification and management of risk.  If you cannot do an appropriate risk assessment, then the only choice you really have is to consider everything in scope, and I know the vast majority of you do not like that approach.

Assessing Risk

In order for my proposed solution to have a chance of working properly, an organization needs to understand its risks, which risks will be accepted and managed, and what it will take to mitigate the residual risks.  A risk assessment is the way to do that, but most organizations avoid such an assessment for a variety of reasons.  The most common reasons I have heard are:

  • The risk assessment framework is too complex,
  • We tried this once before and never got any meaningful results,
  • We were never able to agree on the risks and their scores, or my personal favorite,
  • This spreadsheet is our risk assessment (it is not, but see Tom Benhave’s blog post on the topic, as he does a much better job of explaining it than I could).

The reason risk is such a tough topic is that everyone has their own perspective on risk; good, bad or otherwise.  There are numerous articles and presentations on this phenomenon, but my favorite is from Lance Spitzner of SANS, who wrapped his presentation around security awareness training and opens it by describing why people are such poor judges of risk.  He uses various statistics about events that can happen in people’s lives to illustrate the point.  My personal favorite is that people have a greater chance of dating a supermodel than of winning the PowerBall lottery.  Granted, both are extremely long odds, but dating a supermodel is still significantly more likely than winning the PowerBall.

The bottom line is that, without a decent risk assessment, an organization has no way to know how much risk it is willing to accept and how it will manage that risk.  The Council has repeatedly said that PCI compliance is supposed to consider risk and take a “risk-based” approach.  However, the problem is that we each have our own opinion of risk and which risks we are willing to take on.  At the end of the day, no matter what an organization does, there is going to be risk.  The question is, “Is my organization willing to take on these risks?”  That question can only be answered by a risk assessment and an understanding of how risks can be managed and mitigated.

Which risks your organization chooses to accept, and how it manages the remaining risks, are up to your organization to decide.  This is why the PCI DSS and all other security frameworks require an annual risk assessment to be performed.  The risk assessment process provides a framework for an organization to document its risks, understand them (size, frequency of occurrence, costs, etc.) and how they can be managed or mitigated, and then agree on which risks it will take on and how it will manage and/or mitigate them.

From here on we will assume that the organization has a valid risk assessment and that they are willing to take on the risks presented by the example I will discuss.

Managing Risk

Today’s integrated and connected world just does not lend itself to an isolationist approach due to the volume of information involved, business efficiencies lost and/or the operational costs such an approach incurs.  As a result, organizations need to take a hybrid approach of heavily protecting some components and taking on and managing the risks inherent to such an approach.

When it comes to the IT side of risk management and mitigation, most organizations rely on some form of near real-time monitoring of collected system/event log data and other sources to watch their environment(s).  Unfortunately, where this approach comes up short is that there are too many alerts to follow up on, so alerts go unaddressed.  Almost every QSA can tell you about a discussion with operations personnel where the statement, “Oh, that’s a false positive alert, so I don’t have to worry about it,” has been made.

This is the first problem you must address, and you must make sure this attitude never creeps back into the people who monitor alerts.  If anyone in operations “knows” an alert is a false positive, either (1) that person needs re-education, or (2) your organization needs to seriously re-tune its alerting mechanism(s).  All you have to do is read the Target and Neiman Marcus press reports if you need examples of how bad things can get when personnel blow off alerts because they believe they are not accurate.

In my experience, a lot of these problems are the result of bad or incomplete implementations of these systems.  Unfortunately, there are a lot of people out there that think that these solutions are more like a Ronco Rotisserie Oven where, as they famously say in the ads, “you can set it and forget it.”  Yes these solutions may be “appliances”, but that is where the comparison ends.

Security information and event management (SIEM) systems require fairly constant tuning and tweaking, beyond their own software and signature updates, to minimize false positive alerts in response to changes to an organization’s networks and systems.  Yet time and again, I encounter monitoring and alerting systems that were put in place years ago (typically to meet PCI compliance) and have not been adjusted or changed since, while all around them changes have been occurring that affect their operation.

When interviewing the people responsible for these systems, I hear statements such as, “Yeah, that alert started to appear when we implemented [name of change].  We were told to just ignore it.”  When asked why they have not tuned it out of the SIEM, the answers are either that they do not have time, they do not know how, they do not have the rights to do it or, my personal favorite, that the head of security or the security committee will not let them change it.

The reason this issue does not get addressed is that it has no visibility, since alerts are tucked away in the various monitoring tools.  So the best way to address this situation is to give it visibility by automatically feeding all alerts into the organization’s help desk system.  This gives all alerts immediate visibility by putting them into an automated tracking and escalation process.  It also allows triage and investigation activities to be documented and, based on the results of those activities, the alert to be assigned to the right people/groups to address it.
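As a minimal sketch of what that feed could look like, the following takes an alert and opens a ticket through a hypothetical help desk REST API.  The endpoint, fields and authentication are assumptions; your ticketing system’s API will differ.

    # Sketch: turn a monitoring alert into a help desk ticket so that it gets
    # tracked, triaged and escalated.  The URL and fields are hypothetical.
    import json
    import urllib.request

    HELPDESK_URL = "https://helpdesk.example.com/api/tickets"

    def open_ticket_for_alert(alert):
        ticket = {
            "summary": "SIEM alert: %s" % alert["rule"],
            "description": alert["details"],
            "priority": alert.get("severity", "medium"),
            "queue": "security-triage",
        }
        req = urllib.request.Request(
            HELPDESK_URL,
            data=json.dumps(ticket).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Example alert as it might arrive from the SIEM (fields are illustrative).
    open_ticket_for_alert({"rule": "Repeated failed admin logons",
                           "details": "14 failures for 'admin' on the jump box in 5 minutes",
                           "severity": "high"})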

“Whoa, let’s not get crazy here,” I am sure some of you are YELLING at the screen.  There is no doubt this is a very brave step to take, because it will potentially uncover things you probably did not want to advertise given the state of your existing alerting.  But that is typically only a short-term problem.  Unfortunately, it may be the only way to get the underlying problem of tuning and tweaking the alerting systems completed and constantly addressed.

But taking such a step is not entirely a bad thing, at least in the long run.  A side benefit is that it will focus the organization on triage activities for classifying the urgency of alerts.  Not all alerts need immediate action; a lot of them require immediate investigation and can then be put on a back burner.  It will also give visibility to the number of alerts being dealt with on a daily basis after triage.  That typically helps identify and justify why more staff are required to deal with the onslaught of alerts that need to be researched.

Another change organizations should make is adding a section to their change control form that indicates what impact a change will have on the SIEM environment.  At a minimum, these three basic questions need to be answered in regard to the SIEM or other monitoring systems (a sketch of how this might be captured follows the list).

  • Do new alerts need to be added and, if so, what do they need to monitor and what are the alerting thresholds?
  • Do existing alerts need to be modified and, if so, what modifications are needed?
  • Are there alerts that are no longer needed?
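For illustration, here is a sketch of how those three questions could be captured as a structured section of the change record.  The field names and sample values are hypothetical.

    # Sketch: a SIEM-impact section for a change control record.
    from dataclasses import dataclass, field

    @dataclass
    class SiemImpact:
        new_alerts: list = field(default_factory=list)        # what to monitor, thresholds
        modified_alerts: list = field(default_factory=list)   # existing alerts needing changes
        retired_alerts: list = field(default_factory=list)    # alerts no longer needed

    change_12345 = SiemImpact(
        new_alerts=["Failed logons > 5/minute on the new VPN concentrator"],
        modified_alerts=["Raise web farm error threshold to account for the added server"],
        retired_alerts=["Old VPN concentrator heartbeat alert"],
    )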

If you address these areas, you should have monitoring and alerting taken care of, with a built-in feedback loop to keep it that way.

In Part 3, I am going to wrap up my discussion on PCI scoping with a discussion of Category 2 and 3 systems.

27 Jul 14

The Dilemma Of PCI Scoping – Part 1

Based on the email comments of late, there are apparently a lot of you out there who really do not like the Open PCI Scoping Toolkit.  I am not sure exactly which post mentioning the Toolkit got you all wound up, but I have definitely hit a nerve.  From the comments in these messages, it is painfully obvious that the reason the SIG failed was that none of us agree on how much risk we are willing to accept.  And that is why no two PCI assessments are ever the same: organizations, and even QSAs from the same QSAC, can have different levels of risk tolerance.

I, too, have to admit that I think the Toolkit needs some work, but it is the best framework we have to start a discussion on the topic.  And that is the problem, the topic.  Until the Toolkit appeared, scoping discussions had no real framework and everyone had their own definitions.  And as I have pointed out before, while there are a lot of people out there that might not know the nuances of the PCI DSS, it seems that everyone “knows” what is in scope and what is out of scope.

As a result, QSAs have found out through the “School of Hard Knocks” that everyone has their own view of scope and that there was no good guide to explain how or why to draw the line, let alone, in some cases, how to discuss the topic civilly.  I view the Toolkit as the bare minimum.  If an organization wants to get even more restrictive and have more categories, great, that is their prerogative.  However, if they want to go less restrictive than the Toolkit, in my very humble opinion, they can do it without me.  The bottom line is, regardless of whether you are using the Toolkit or have your own approach, document the definitions of your categories and provide examples so that everyone can understand your rationale, and then discuss the impacts on your organization’s PCI scope.  Without such a document, we are not going to have productive discussions on scope.  That is why I lean toward the Toolkit; it gives me a starting point for a productive discussion.

We seem to all be able to agree on the Category 1 and 3 systems, because those are clear and easy to identify.  Category 1 systems are always in the cardholder data environment (CDE) because they directly process, store or transmit cardholder data or define the CDE and are therefore always in-scope.  Category 3 systems never, ever process, store or transmit cardholder data and are therefore always out of scope.

It’s those pesky Category 2 systems (the ones that connect in some way to/from the CDE) that get everyone’s undies in a bunch.  The group that developed the Toolkit did their best to break them out in ways that made sense but were still understandable and easy to use.  The more that I have thought about it, the more I think they came up with the best compromise. In my opinion, if you start adding any more categories or sub-categories to the existing definitions you will lose almost everyone due to complexity, including security people.  However, I also don’t believe that simplifying Category 2 is an answer either.

But if the discussion about Category 2 is tough, the fact that the Toolkit allows Category 3 systems to exist on networks with Category 2 systems sends some security purists right over a cliff.  Their rationale is that Category 3 systems could be easily attacked and therefore provide a beachhead for compromising Category 2 systems.  While this is true, the idea of totally isolating Category 2 systems is not realistic for most organizations because of the ramifications of such a decision.

Why Isolation Is Not An Option

Security purists seem to think isolation of the CDE is the answer.  From an outsourcing perspective, that would provide isolation.  But in my experience, even outsourcing is not as isolated as one would think.  Here is why I think that isolation does not work whether doing it internally or through outsourcing.

Isolation means physically and logically separate directory systems with no trust relationships between the isolated environment and normal production.  I have seen all sorts of technical gymnastics to secure directory systems inside the CDE that can still leave too many holes in firewalls so that the trust relationship can exist.  If you are truly serious about isolation, then you need to have true isolation and that means physically and logically separate directory systems.  This also means duplication of credential management and introducing the possibility of errors when provisioning accounts.

The idea of leveraging your existing solutions for network and application management must be rethought as well.  This means separate security information and event management (SIEM) solutions, separate network management and monitoring, separate application management and monitoring, etc.  I think you get the idea.

Of course separate firewalls, routers, switches, intrusion detection/prevention, load balancers and other infrastructure are also going to be needed.  If you use RADIUS or TACACS+ for authentication, you will have to have separate systems for authentication to the infrastructure as well.  You will also need separate DNS and DHCP servers if you intend to provide those services inside the CDE.  Of course all of this duplicated infrastructure adds to the likelihood that mistakes will be made in configuration changes that could result in a breach of that isolation.

There is no “out of band” or virtual terminal access into your pristine isolated environment.  So you will need to provide separate PCs for operations and network personnel so that they have access to the isolated environment, plus another physically separate system for your organization’s normal work environment.  Internal users with access to cardholder data (CHD) will also be required to have physically separate PCs for accessing the CDE.  This also means securing the network switches inside the CDE by using MAC filtering or “sticky” MAC approaches to ensure that only the PCs that should have access to the CDE actually do.  And of course wireless networking is totally out of the question.

But wait, you will also have to invest in some sort of portable media solution so that you can get data from the isolated environment to the “normal” production environment and vice versa.  No connected databases or application integration because that will require holes into and out of the isolated environment.  This is where outsourcing for isolation also comes up short.  But without application and data integration, the economies of scale shrink almost exponentially as more and more data must be manually moved between environments.  This drives the cost of isolation almost through the roof and typically makes isolation too expensive for most organizations.

Various government entities have all tried this approach with mixed results as far as breaches are concerned.  So in practice, the isolation approach will still leave your organization with some amount of risk that must be managed.

So if isolation is not the answer what is the answer?  In Part 2 I’ll discuss what I think works.

24 Jul 14

Keeping It Simple – Part 2

In Part 1, I explained that the key to keeping things as simple as possible is to avoid any storage of cardholder data.  Period.  End of discussion.

I also covered mobile payments, because more and more small merchants are going to those solutions for the cheap interchange fees on transactions.  However, few small merchants truly understand the risks these solutions present, because they are offered by seemingly trustworthy providers.

So what else can you do to keep things simple?

Outsource

When consultants mention outsourcing, huge amounts of money typically float past people’s eyes.  However, PayPal is outsourcing, and its fees are not too bad.  There are a number of similar eCommerce services out there for conducting payments.  There used to be concerns about abandoned shopping carts because of the separate pop-up window, but as online shoppers have gotten used to PayPal-like services, that has diminished.

Your bank may also have a partnership with one or more payment vendors, so it is worth it to talk to them first to find out what options they may have to offer.  If they do partner with a payment solution, a lot of times they can offer reduced fees/expenses and other incentives to use their partner(s).

The key thing to understand is that even when you invoke PayPal or similar services from your Web site, you still have a responsibility to ensure the security of your Web site.  This is because your Web site is the system that directs your customer to the payment service you have chosen.  If an attacker changes that redirect to “The Guru’s Payment Process”, that is not PayPal’s problem, that is your problem.  This was clarified in the PCI SSC’s Information Supplement on eCommerce back in January 2013 when the Council declared that even Web sites that redirected to a payment service still were in scope, albeit a very reduced scope.  The ramifications of this supplement have been discussed repeatedly with QSAs since then, but in my experience that message is still not being consistently presented to QSAs’ customers.
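To make the point concrete, here is a minimal sketch of the kind of redirect I am describing.  The framework (Flask) and the payment URL are assumptions for illustration; the point is that the redirect target lives in your code and configuration, which is exactly why your site stays in scope even though the payment page is hosted elsewhere.

    # Sketch: the merchant's site hands the shopper off to the payment service.
    # The framework and payment URL are hypothetical choices for illustration.
    from flask import Flask, redirect

    app = Flask(__name__)

    PAYMENT_SERVICE_URL = "https://payments.example.com/checkout"

    @app.route("/checkout/<order_id>")
    def checkout(order_id):
        # If an attacker alters this target (in code, in configuration or by
        # compromising the web server), shoppers are silently sent to a rogue
        # payment page - and that is the merchant's problem, not the processor's.
        return redirect("%s?order=%s" % (PAYMENT_SERVICE_URL, order_id))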

No Choice But To Store Cardholder Data

So you have found a solution that does exactly what your business needs, but it stores cardholder data (CHD).  The first question you should ask the vendor is, “How does your solution secure the stored cardholder data?”

For vendors that use data encryption, the answer to this question should be either triple DES (3DES) with 168-bit keys or the Advanced Encryption Standard (AES) with 128-, 192- or 256-bit keys.  DES, and 3DES with 56- or 112-bit keys, are no longer considered secure, so any vendor using those is not meeting the PCI encryption requirements.  For anything else, you should consult a QSA as to whether or not the encryption algorithm is acceptable.
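As an illustration only, here is a minimal sketch of storage encryption using AES with a 256-bit key in an authenticated mode.  It uses the widely available Python “cryptography” package; key management, which is the subject of the next question, is deliberately left out.

    # Sketch: AES-256-GCM encryption of a PAN before storage.
    # Key handling is deliberately omitted - that is what requirements 3.5/3.6 cover.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # in practice, supplied by key management
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # must be unique for every encryption
    ciphertext = aesgcm.encrypt(nonce, b"4111111111111111", None)

    # Decryption requires the same key and nonce.
    pan = aesgcm.decrypt(nonce, ciphertext, None)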

For vendors using encryption, the next question you should ask is, “How does your solution perform key management?”  Preferably, someone outside your organization, such as the vendor, manages the encryption keys.  However, I have seen some solutions where key management is done by the merchant through a sophisticated key generation capability in the application, so that the merchant never actually knows or has access to the encryption key(s).  The questions you need answered are all in requirements 3.5 and 3.6 of the PCI DSS.  If the vendor cannot address those requirements, then you need to find a solution vendor that can.

For vendors that use one-way hashing, the preferred algorithms are SHA-2 or SHA-3 with a salt value.  MD5 and SHA-0 have known security issues that make them no longer secure.  SHA-1 has a potential security issue that could also leave data insecure, but a lot of software vendors still use SHA-1 with a salt value.  If you are going to use a solution that relies on SHA-1, the vendor should have additional controls in place to monitor access to the data and alert on any bulk access.  If not, find another vendor.
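For reference, a minimal sketch of salted hashing with a SHA-2 algorithm.  The salt handling is simplified here; a real implementation also has to protect the salt and, as noted above, monitor for bulk access.

    # Sketch: one-way hashing of a PAN with SHA-256 and a random salt.
    import hashlib, os

    def hash_pan(pan, salt):
        return hashlib.sha256(salt + pan.encode()).hexdigest()

    salt = os.urandom(16)                    # generated and protected by the application
    stored_value = hash_pan("4111111111111111", salt)

    # Later, an incoming PAN can be compared without ever storing it in the clear.
    matches = stored_value == hash_pan("4111111111111111", salt)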

Tokenization

Do not give up hope if your preferred solution stores cardholder data (CHD).  You may be able to use a processor that tokenizes the PAN.  Once tokenized, the PAN is no longer considered CHD, so the solution would not be storing CHD.

Proper tokenization differs from encryption and hashing in that the token has no mathematical relationship to the PAN it replaces.  However, this is a fact you need to confirm with the transaction processor to ensure that their tokenization process is secure.  If they will not represent their tokens as secure, you need to move on to another processor.
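A minimal sketch of why a proper token is safe for the merchant to store.  The dictionary here is just a stand-in for the processor’s token vault; in reality the vault and the PAN mapping never leave the processor.

    # Sketch: a token is a random value mapped back to the PAN only inside the
    # processor's vault.  The dictionary stands in for the processor's service.
    import secrets

    vault = {}   # token -> PAN, held only by the processor

    def tokenize(pan):
        token = secrets.token_hex(16)   # random; not derived from the PAN in any way
        vault[token] = pan
        return token                    # this is all the merchant ever stores

    def detokenize(token):
        return vault[token]             # only the processor can do this

    token = tokenize("4111111111111111")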

Tokenization can occur at the swipe/dip of the card at the point of interaction (POI) or as the byproduct of the successful completion of a transaction (the most common occurrence).

Tokens can also be generated for eWallet situations where you want to encourage customers to come back and shop at your eCommerce site and to store their cardholder information for ease of subsequent transactions.  In these instances, the processor generating the token also stores the CHD; when your system submits the token, the processor translates it back to the stored CHD and sends the CHD for payment approval.

This is how ExxonMobil Speedpass and open road toll collection systems work.  The RFID device is the token and the payment system recognizes the serial number of the RFID device and translates that to the customer’s stored CHD for payment processing.

There are more than enough ideas presented in these two posts to vastly simplify any merchant’s PCI compliance exposure.  If these solutions will not solve your issues, then you are either making your business too complex or your business model truly needs some expert advice to address your compliance challenges.

20 Jul 14

Keeping It Simple – Part 1

Apparently, I struck a nerve with small business people trying to comply with PCI.  In an ideal world, most merchants would be filling out SAQ A, but we do not live in an ideal world.  As a result, I have collected some ideas on how merchants can make their lives easier.

Do Not Store Cardholder Data

It sounds simple, but it amazes me how many small businesses are storing cardholder data (CHD).  In most cases, it is not like they wanted to store CHD, but the people in charge just did not ask vendors that one key question, “Does your solution store cardholder data?”  If a vendor answers “Yes”, then you should continue your search for a solution that does not store CHD.

Even when the question is asked of vendors, you may not get a clear answer.  That is not necessarily because the vendor is trying to hide something, but more likely because the salespeople have never been asked this question before.  As a result, do not be surprised if the initial answer is, “I’ll have to get back to you on that.”  If you never get an answer or the answer is not clear, then you should move on to a different vendor that does provide answers to such questions.

If your organization cannot find a solution that does not store CHD, then at least you are going into a solution with your eyes open.  However, in today’s payment processing application environment, most vendors are doing all that they can to avoid storing CHD.  If the vendors you are looking at for solutions are still storing CHD, then you may need to get creative to avoid storing CHD.

That said, even merchants that only use points of interaction (POI) such as card terminals can end up with CHD being stored.  I have encountered a number of POIs that were delivered from the processor configured such that the POI was storing the full PAN.  Apparently, some processors feel it is the merchant’s responsibility to configure the POI securely, even though they provide no instructions indicating that fact.  As a result, you should contact your processor and have them walk you through the configuration of the POI to ensure that it is not storing the PAN or any other sensitive information.

Then there are the smartphone and tablet solutions from Square, Intuit and a whole host of other mobile solution providers.  While the PCI SSC has indicated that such solutions will never be considered PCI compliant, mobile POIs continue to proliferate among small businesses.  The problem with most of these solutions arises when a card will not work through the swipe/dip and the CHD is manually keyed into the device.  It is at that point that the smartphone/tablet keyboard logger software captures the CHD, which will remain on the device until it is overwritten, which can be three to six months down the road.  In the case of EMV, the device can capture the PIN if it is entered through the screen, thanks to the built-in keyboard logger.  As a result, most EMV solutions use a signature and not a PIN.  The reason Square, Intuit and the like get away with peddling these non-compliant POI solutions is that they also serve as the merchant’s acquiring bank and accept the risk of the merchant using a non-compliant POI.

The bottom line here is that merchants need to understand these risks and then make appropriate decisions about which risks they are willing to accept in regard to the explicit or implicit storage of CHD.

Mobile Payment Processing

The key thing to know about these solutions is that the PCI Security Standards Council has publicly stated that they will never be considered PCI compliant.  Yes, you heard that right; they will never be PCI compliant.  That is mostly because of the PCI PTS standard regarding the security of the point of interaction (POI) for PIN entry and the fact that smartphones and tablets have built-in keyboard loggers that record everything entered into them.  There are secure solutions such as the Verifone PAYware line of products.  However, those products only use the mobile device as a display; no cardholder data is allowed to be entered into the mobile device.

So why are these solutions even available if they are not PCI compliant?  It is because a number of the card brands have invested in the companies producing these solutions.  As a result, the card brands have a vested interest in allowing them to exist.  And since the companies offering the solutions are also acting as the acquiring bank for the merchant, they explicitly accept the risk that these solutions present.  That is the beauty of the PCI standards: if a merchant’s acquiring bank approves of something, then the merchant is allowed to do it.  However, very few merchants using these solutions understand the risk these solutions present to them.

First is the risk presented by the swipe/dip device.  Some of these devices encrypt the data at the swipe/dip but not all.  As a result, you should ask the organization if their swipe/dip device encrypts the information.  If it does encrypt, then even if the smartphone/tablet comes in contact with the information, it cannot read it.  If it is not encrypted, I would move on to the next mobile payments solution provider.

The second risk is the smartphone/tablet keyboard logger.  This feature is what allows your mobile device to guess what you want to type, know what songs you like and provide a whole host of other conveniences.  However, these keyboard loggers also remember anything typed into the device, such as primary account numbers (PAN), driver’s license numbers and any other sensitive information they come into contact with.  They retain this information as long as it is not overwritten in the device’s memory.  Depending on how much memory a device has, this can be anywhere from weeks to months.  One study a few years back found that information could be found on mobile devices for as long as six months, with an average of three months.

While encrypting the data at the swipe/dip will remove the risk that the keyboard logger has CHD, if you manually key the PAN into the device, then the keyboard logger will record it.  As a result, if you are having a high failure rate with swiping/dipping cards, you will have a lot of PANs contained in your device.

The bottom line is that if you ever lose your mobile device or trade it in, you risk exposing CHD if you do not properly wipe the device.  It is not that these solutions should not be used, but the purveyors of these solutions should be more forthcoming about the risks of using them so that merchants can make informed decisions beyond the cheap interchange fees.

There are more things merchants can do to keep it simple and I will discuss those topics in a future post.

01 Jul 14

The Flaw In Requirement 8.5.1

Today it was unceremoniously announced that a number of major restaurant chains’ franchisees had been potentially hacked between February 28, 2014 and April 18, 2014 because their point of sale (POS) vendor’s remote access account had been compromised.  I say franchisees because I know a couple of these restaurant chains’ corporate operations and they were not using a third party to manage POS.

In a nutshell, the attackers gained access to the POS vendor’s LogMeIn account.  LogMeIn, like a lot of similar remote access facilities, has an address book where you can store remote access credentials.  So with access to LogMeIn, the attackers by default had access to the address book and the credentials it contained for customer environments (likely all customers, but possibly not).

To remind everyone, requirement 8.5.1 of the PCI DSS v3 states:

 “Additional requirement for service providers: Service providers with remote access to customer premises (for example, for support of POS systems or servers) must use a unique authentication credential (such as a password/phrase) for each customer.

Note: This requirement is not intended to apply to shared hosting providers accessing their own hosting environment, where multiple customer environments are hosted.

Note: Requirement 8.5.1 is a best practice until June 30, 2015, after which it becomes a requirement.”

The PCI SSC guidance for requirement 8.5.1 states:

 “To prevent the compromise of multiple customers through the use of a single set of credentials, vendors with remote access accounts to customer environments should use a different authentication credential for each customer. Technologies, such as two-factor mechanisms, that provide a unique credential for each connection (for example, via a single-use password) could also meet the intent of this requirement.”

It is likely that the vendor was trying to get a jump on complying with requirement 8.5.1 in the PCI DSS v3.  However, this vendor may have been using such an approach all along to manage customer remote access, which is also not uncommon among technology companies.

The first thing to note is that requirement 8.5.1 is a best practice until June 30, 2015 after which it becomes a full requirement.  However, as I pointed out in an earlier post, a lot of vendors will likely have to start rolling out a remote access solution as soon as possible to minimize service level agreement (SLA) issues.

One of the most likely ways vendors are addressing compliance with 8.5.1 is through services such as LogMeIn, GoToMyPC and the like.  These are inexpensive services available to any organization or individual.  There are also enterprise solutions, such as those from Bomgar, that purport to have better security.  However, all of these solutions share the concept of an address book to make gaining remote access easier for the vendors’ users who rely upon them.  And that is their Achilles’ heel.  If an attacker gains access to the remote access service, they gain access to the address book and, therefore, to the customers’ credentials stored in it.  Game over.

It is important to note, though, that what this vendor was doing complies with the letter of requirement 8.5.1.  But even though this service provider was complying with the intent of 8.5.1, the implementation was flawed.  This is just another example of how PCI compliance does not mean that security issues cannot still occur.

How easily can this happen?  Think of a spear phishing attack against any vendor that does remote support and maintenance.  Regardless of the customer credential management solution (in-house or cloud based), once access to that solution is compromised, any concept of customer security is over.

So what should vendors be doing to mitigate this situation?  Exactly what our breached vendor did: implement two-factor authentication on the credential management system.  Spear phishing attacks will not succeed because, even with the credentials to LogMeIn or similar, the attacker will still need the second factor.  Yes, the attacker can still compromise the support person’s desktop, but they will not have access to customer credentials.

Trouble is, some vendors will want a cheap two-factor solution, meaning something that sends out codes via SMS, email or telephone, versus solutions such as RSA SecurID or SafeNet, to name a few.  Codes sent over SMS, telephone and email have a variety of known vulnerabilities and can easily be intercepted or even redirected.  In the case of LogMeIn, they indicate that they only support SecurID.

Regardless, all of you service providers out there that have remote access to your customers managed by some enterprise credential management solution, please implement a strong two-factor authentication solution on your customer credential management solution before you too become a newspaper headline.

I would like to believe that this vendor thought they were doing the right thing and got burned because of how they implemented their solution.  At least they stepped up and fixed it the right way.  Unfortunately, this is how we sometimes learn, from our mistakes.




