
August 1, 2014

The Dilemma Of PCI Scoping – Part 2

In Part 1 I discussed how isolation is an answer, but probably not a viable one for all but the most security-conscious of organizations such as the military, defense contractors or those that can afford that sort of painstaking luxury.  So unless you consider everything in scope for PCI compliance, is there a viable way to reduce scope?

Before we get to that though, we need a quick discussion on risk management, because the solution is predicated on the identification and management of risk.  If you cannot do an appropriate risk assessment, then the only choice you really have is to consider everything in scope, and I know the vast majority of you do not like that approach.

Assessing Risk

In order for my proposed solution to have a chance at working properly, an organization needs to understand its risks, what risks will be accepted and managed, and what it will take to mitigate the residual risks.  Doing a risk assessment is the way to do that, but most organizations avoid such an assessment for a variety of reasons.  The most common reasons I have heard are:

  • The risk assessment framework is too complex,
  • We tried this once before and never got any meaningful results,
  • We were never able to agree on the risks and their scores, or my personal favorite,
  • This spreadsheet is our risk assessment (it is not, but see Tom Benhave’s blog post on the topic as he does a much better job than I could explaining this topic).

The reason why risk is such a tough topic is that everyone has their own perspective on risk; good, bad or otherwise.  There are numerous articles and presentations on this phenomenon, but my favorite one is from Lance Spitzner of SANS, who wrapped his presentation around security awareness training and opens by describing why people are such poor judges of risk.  He uses various statistics regarding events that can happen in people’s lives to illustrate this fact.  My personal favorite example of just such a statistic is that people have a greater chance of dating a supermodel than of winning the Powerball lottery.  Granted, both are extremely unlikely, but dating a supermodel is still significantly more likely than winning the Powerball.

The bottom line is that, without a decent risk assessment, an organization has no way to know the amount of risk it is willing to accept and how it will manage that risk.  The Council has repeatedly said that PCI compliance is supposed to consider risk and take a “risk-based” approach.  However, the problem is that we each have our own opinion of risk and what risks we are willing to take on.  But at the end of the day, no matter what an organization does, there is going to be risk.  The question is, “Are these risks ones my organization is willing to take on?”  That question can only be answered by a risk assessment and an understanding of how risks can be managed and mitigated.

Which risks your organization chooses to accept and how it manages the remaining risks are decisions only your organization can make.  This is why the PCI DSS and all other security frameworks require an annual risk assessment to be performed.  The risk assessment process provides a framework for an organization to document its risks, understand those risks (size, frequency of occurrence, costs, etc.) and how they can be managed or mitigated, and then agree on which risks it will take on and how it will manage and/or mitigate them.

From here on we will assume that the organization has a valid risk assessment and that they are willing to take on the risks presented by the example I will discuss.

Managing Risk

Today’s integrated and connected world just does not lend itself to an isolationist approach due to the volume of information involved, business efficiencies lost and/or the operational costs such an approach incurs.  As a result, organizations need to take a hybrid approach of heavily protecting some components and taking on and managing the risks inherent to such an approach.

When it comes to the IT side of risk management and mitigation, most organizations rely on some form of near real time monitoring through collected system/event log data and other sources to monitor their environment(s).  Unfortunately, where this approach comes up short is that there are too many alerts to follow up on, so alerts go unaddressed.  Almost every QSA can tell you about a discussion with operations personnel where the statement, “Oh, that’s a false positive alert, so I don’t have to worry about it” has been made.

This is the first problem you must address; make sure that this attitude never creeps back into the people that monitor alerts.  Anyone in operations who “knows” an alert is a false positive either: (1) needs re-education, or (2) is telling you that your organization needs to seriously re-tune its alerting mechanism(s).  All you have to do is read the Target and Neiman Marcus press reports if you need examples of how bad things can get when personnel blow off alerts because they believe they are not accurate.

In my experience, a lot of these problems are the result of bad or incomplete implementations of these systems.  Unfortunately, there are a lot of people out there that think that these solutions are more like a Ronco Rotisserie Oven where, as they famously say in the ads, “you can set it and forget it.”  Yes, these solutions may be “appliances”, but that is where the comparison ends.

Security information and event management (SIEM) systems require fairly constant tuning and tweaking, beyond their own software and signature updates, to minimize false positive alerts in response to the changes to an organization’s networks and systems.  Yet time and again, I encounter monitoring and alerting systems that were put in place years ago (typically to meet PCI compliance) and have not been adjusted/changed since then, while all around them changes have been occurring that affect their operation.

When interviewing the people responsible for these systems I hear statements such as, “Yeah, that alert started to appear when we implemented [name of change].  We were told to just ignore it.”  When asked why they have not tuned it out of the SIEM, you get answers such as: they do not have time, they do not know how, they do not have the rights to do that or, my personal favorite, the head of security or the security committee will not let them change it.

The reason this issue does not get addressed is that it has no visibility, since alerts are tucked away in the various monitoring tools.  So, the best way to address this situation is to give it visibility by automatically feeding all alerts into an organization’s help desk system.  This gives all alerts immediate visibility by putting them in an automated tracking and escalation process.  It also allows triage and investigation activities to be documented and, based on the results of those activities, the alert to be assigned to the right people/groups to address it.

“Whoa, let’s not get crazy here,” I am sure some of you are YELLING at the screen.  There is no doubt this is a very brave step to take because it will potentially uncover something you probably did not want to advertise given the state of your existing alerting.  But that is typically only a short term problem.  Unfortunately, it may be the only way to get the underlying problem of tuning and tweaking of the alerting systems completed and constantly addressed.

But taking such a step is not entirely a bad thing, at least in the long run.  A side benefit of doing it is that it will focus an organization on triage activities for classifying the urgency of the alerts.  Not all alerts need immediate action, but a lot of them require immediate investigation and can then be put on a back burner.  It will also give visibility to the number of alerts being dealt with on a daily basis after triage.  That typically results in identifying and justifying why more staff are required to deal with the onslaught of alerts that need to be researched.
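As a rough illustration of what that alert-to-ticket feed can look like, here is a minimal sketch that takes an alert coming out of a SIEM and opens a ticket through a generic help desk REST API.  The endpoint URL, field names and API token are hypothetical placeholders; a real integration would use your SIEM’s and ticketing system’s actual forwarding connectors or APIs.

```python
import json
import urllib.request

HELPDESK_URL = "https://helpdesk.example.com/api/tickets"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                                   # hypothetical credential

def open_ticket_for_alert(alert: dict) -> None:
    """Turn a SIEM alert into a help desk ticket so it gets tracked and escalated."""
    ticket = {
        "summary": f"[SIEM] {alert['rule_name']} on {alert['host']}",
        "description": json.dumps(alert, indent=2),
        "priority": alert.get("severity", "medium"),
        "queue": "Security Triage",   # route every alert through a triage queue first
    }
    req = urllib.request.Request(
        HELPDESK_URL,
        data=json.dumps(ticket).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("Ticket created:", resp.status)

# Example alert as it might arrive from the SIEM's forwarding rule
open_ticket_for_alert({"rule_name": "Multiple failed logins",
                       "host": "pos-server-01", "severity": "high"})
```

The point is not the specific code but the design choice: once every alert lands in the same tracking and escalation process as every other operational issue, ignored alerts become visible and measurable.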

Another change that organizations should make is adding a box to their change control form that indicates what impact a change will have on the SIEM environment.  At a minimum, the following three basic questions need to be answered in regard to the SIEM or other monitoring systems, as sketched in the example after the list.

  • Do new alerts need to be added and, if so, what do they need to monitor and what are the alerting thresholds?
  • Do existing alerts need to be modified and, if so, what modifications are needed?
  • Are there alerts that are no longer needed?
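Here is the sketch referred to above: a hypothetical way to embed those three questions directly in the change ticket so they are hard to skip.  The structure and field names are mine for illustration, not taken from any particular change management product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SiemImpactAssessment:
    """SIEM/monitoring impact section of a change control ticket (illustrative only)."""
    change_id: str
    new_alerts_needed: bool = False
    new_alert_details: List[str] = field(default_factory=list)   # what to monitor, thresholds
    existing_alerts_modified: bool = False
    modification_details: List[str] = field(default_factory=list)
    alerts_to_retire: List[str] = field(default_factory=list)    # alerts no longer needed

    def requires_siem_work(self) -> bool:
        return (self.new_alerts_needed or self.existing_alerts_modified
                or bool(self.alerts_to_retire))

# Example: a firewall rule change that needs one new alert and retires an old one
assessment = SiemImpactAssessment(
    change_id="CHG-2014-0815",
    new_alerts_needed=True,
    new_alert_details=["Alert on denied outbound traffic from CDE VLAN, threshold 10/min"],
    alerts_to_retire=["Legacy VPN concentrator heartbeat alert"],
)
print(assessment.requires_siem_work())  # True -> route the ticket to the SIEM team
```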

If you address these areas, you should have monitoring and alerting taken care of with a built in feedback loop to keep it that way.

In Part 3, I am going to wrap up my discussion on PCI scoping with a discussion of Category 2 and 3 systems.

July 27, 2014

The Dilemma Of PCI Scoping – Part 1

Based on the email comments of late, there are apparently a lot of you out there that really do not like the Open PCI Scoping Toolkit.  I am not sure exactly which post mentioning the Toolkit got you all wound up, but I have definitely hit a nerve.  From the comments in these messages, it is painfully obvious that the reason the SIG failed was that none of us are in agreement about how much risk we are willing to accept.  And that is why no PCI assessment is ever the same, because organizations and even QSAs from the same QSAC can have different levels of risk tolerance.

I, too, have to admit that I think the Toolkit needs some work, but it is the best framework we have to start a discussion on the topic.  And that is the problem, the topic.  Until the Toolkit appeared, scoping discussions had no real framework and everyone had their own definitions.  And as I have pointed out before, while there are a lot of people out there that might not know the nuances of the PCI DSS, it seems that everyone “knows” what is in scope and what is out of scope.

As a result, QSAs have found out through the “School of Hard Knocks” that everyone has their own view of scope and there was no good guide to explain how or why to draw the line, let alone discuss the topic civilly in some cases.  I view the Toolkit as the bare minimum.  If an organization wants to get even more restrictive and have more categories, great, that is their prerogative.  However, if they want to go less than the Toolkit, in my very humble opinion, they can do it without me.  The bottom line is, regardless of whether you are using the Toolkit or have your own approach, document the definitions of your categories and provide examples so that everyone can understand your rationale and then discuss the impacts on your organization’s PCI scope.  Without such a document, we are not going to have productive discussions on scope.  That is why I lean toward the Toolkit: it gives me a starting point for a productive discussion.

We seem to all be able to agree on the Category 1 and 3 systems, because those are clear and easy to identify.  Category 1 systems are always in the cardholder data environment (CDE) because they directly process, store or transmit cardholder data or define the CDE and are therefore always in-scope.  Category 3 systems never, ever process, store or transmit cardholder data and are therefore always out of scope.

It’s those pesky Category 2 systems (the ones that connect in some way to/from the CDE) that get everyone’s undies in a bunch.  The group that developed the Toolkit did their best to break them out in ways that made sense but were still understandable and easy to use.  The more that I have thought about it, the more I think they came up with the best compromise. In my opinion, if you start adding any more categories or sub-categories to the existing definitions you will lose almost everyone due to complexity, including security people.  However, I also don’t believe that simplifying Category 2 is an answer either.

But if the discussion about Category 2 is tough, the fact that the Toolkit allows for Category 3 systems to exist on networks with Category 2 systems sends some security purists right over a cliff.  Their rationale is that Category 3 systems could be easily attacked and therefore provide a beachhead for compromising Category 2 systems.  While this is true, the idea of totally isolating Category 2 systems is not realistic for most organizations because of the ramifications of such a decision.

Why Isolation Is Not An Option

Security purists seem to think isolation of the CDE is the answer.  From an outsourcing perspective, that would provide isolation.  But in my experience, even outsourcing is not as isolated as one would think.  Here is why I think that isolation does not work whether doing it internally or through outsourcing.

Isolation means physically and logically separate directory systems with no trust relationships between the isolated environment and normal production.  I have seen all sorts of technical gymnastics used to secure directory systems inside the CDE, but they can still leave too many holes in firewalls so that the trust relationship can exist.  If you are truly serious about isolation, then you need true isolation, and that means physically and logically separate directory systems.  This also means duplication of credential management and introducing the possibility of errors when provisioning accounts.

The idea of leveraging your existing solutions for network and application management must be rethought as well.  This means separate security information and event management (SIEM) solutions, separate network management and monitoring, separate application management and monitoring, etc.  I think you get the idea.

Of course separate firewalls, routers, switches, intrusion detection/prevention, load balancers and other infrastructure are also going to be needed.  If you use RADIUS or TACACS+ for authentication, you will have to have separate systems for authentication to the infrastructure as well.  You will also need separate DNS and DHCP servers if you intend to provide those services inside the CDE.  Of course all of this duplicated infrastructure adds to the likelihood that mistakes will be made in configuration changes that could result in a breach of that isolation.

There is no “out of band” or virtual terminal access into your pristine isolated environment.  So you will need to provide separate PCs for operations and network personnel so that they have access to the isolated environment and then another physically separate system for your organization’s normal work environment.  Internal users with access to cardholder data (CHD) will also be required to have physically separate PCs for accessing the CDE.  This will also mean ensuring the security of network switches inside the CDE by using MAC filtering or “sticky” MAC approaches to ensure that only the PCs that should have access to the CDE do have access.  And of course wireless networking is totally out of the question.

But wait, you will also have to invest in some sort of portable media solution so that you can get data from the isolated environment to the “normal” production environment and vice versa.  No connected databases or application integration because that will require holes into and out of the isolated environment.  This is where outsourcing for isolation also comes up short.  But without application and data integration, the economies of scale shrink almost exponentially as more and more data must be manually moved between environments.  This drives the cost of isolation almost through the roof and typically makes isolation too expensive for most organizations.

Various government entities have all tried this approach with mixed results as far as breaches are concerned.  So in practice, the isolation approach will still leave your organization with some amount of risk that must be managed.

So if isolation is not the answer what is the answer?  In Part 2 I’ll discuss what I think works.

July 24, 2014

Keeping It Simple – Part 2

In Part 1, I explained that the key to keeping things as simple as possible is to avoid any storage of cardholder data.  Period.  End of discussion.

I also covered mobile payments, because more and more small merchants are going to those solutions for the cheap interchange fees on transactions.  However, few small merchants truly understand the risks that these solutions present because they are offered by seemingly trustworthy providers.

So what else can you do to keep things simple?

Outsource

When consultants mention outsourcing, huge amounts of money typically float past people’s eyes.  However, PayPal is outsourcing and its fees are not too bad.  There are a number of similar services for eCommerce out there for conducting payments.  There used to be concerns about abandoned shopping carts because of a separate window pop-up, but as online shoppers have gotten used to PayPal-like services, that concern has diminished.

Your bank may also have a partnership with one or more payment vendors, so it is worth it to talk to them first to find out what options they may have to offer.  If they do partner with a payment solution, a lot of times they can offer reduced fees/expenses and other incentives to use their partner(s).

The key thing to understand is that even when you invoke PayPal or similar services from your Web site, you still have a responsibility to ensure the security of your Web site.  This is because your Web site is the system that directs your customer to the payment service you have chosen.  If an attacker changes that redirect to “The Guru’s Payment Process”, that is not PayPal’s problem, that is your problem.  This was clarified in the PCI SSC’s Information Supplement on eCommerce back in January 2013 when the Council declared that even Web sites that redirected to a payment service still were in scope, albeit a very reduced scope.  The ramifications of this supplement have been discussed repeatedly with QSAs since then, but in my experience that message is still not being consistently presented to QSAs’ customers.
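To see why the redirect keeps the merchant’s Web site in scope, consider this minimal sketch of the handoff (Flask is assumed here purely for illustration, and the payment service URL is a placeholder).  The redirect target lives in the merchant’s code and configuration, so an attacker who can alter it can silently send customers to a look-alike payment page.

```python
from flask import Flask, redirect

app = Flask(__name__)

# The merchant controls this value.  If an attacker changes it to a look-alike
# site, customers are sent there instead of the real payment service.
PAYMENT_SERVICE_URL = "https://www.paypal.com/checkout?merchant=EXAMPLE"  # placeholder

@app.route("/checkout")
def checkout():
    # The merchant site never touches card data, but it decides where the
    # customer goes to enter it, which is why the Council keeps it in scope.
    return redirect(PAYMENT_SERVICE_URL, code=302)
```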

No Choice But To Store Cardholder Data

So you have found a solution that does exactly what your business needs, but it stores cardholder data (CHD).  The first question you should ask the vendor is, “How does your solution secure the stored cardholder data?”

For vendors that use data encryption, the answer to this question should be either triple DES (3DES) 168-bit or advanced encryption standard (AES) 128-, 192- or 256-bit encryption.  DES and 3DES 56- and 112-bit are no longer considered secure, so any vendor using those is not meeting the PCI encryption requirements.  Anything else and you should consult with a QSA as to whether or not the encryption algorithm is acceptable.
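For reference, here is a minimal sketch of what acceptable symmetric encryption of stored data can look like, using AES-256 in GCM mode via the Python cryptography library.  This is illustrative only; a vendor’s real implementation also has to satisfy the key management questions discussed next.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256 (a 256-bit key) meets the PCI DSS expectation for strong cryptography.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

pan = b"4111111111111111"     # test PAN, not a real card number
nonce = os.urandom(12)        # a unique nonce must be used for every encryption

ciphertext = aesgcm.encrypt(nonce, pan, None)
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == pan       # round trip works; store nonce + ciphertext, protect the key
```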

For vendors using encryption, the next question you should ask them is, “How does your solution perform key management?”  Preferably, someone outside your organization, such as the vendor, manages the encryption keys.  However, I have seen some solutions where the key management is done by the merchant through a sophisticated key generation capability in the application so that the merchant never actually knows or has access to the encryption key(s).  The questions you need answered are all in requirements 3.5 and 3.6 of the PCI DSS.  If the vendor cannot address these requirements, then you need to find a new solution vendor that can provide answers.

For vendors that use one-way hashing, the preferred algorithms are SHA-2 or SHA-3 with a salt value.  MD5 and SHA-0 have known security issues that make them no longer secure.  SHA-1 has a potential security issue that could also leave data insecure, but a lot of software vendors still use SHA-1 with a salt value.  If you are going to use a solution that relies on SHA-1, the vendor should have additional controls in place to monitor access to the data and alert on any access where the data is being accessed in bulk.  If not, find another vendor.
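As a point of reference, here is a minimal sketch of salted SHA-256 hashing of a PAN using Python’s standard hashlib and secrets modules.  The point is simply that the salt is random, stored alongside the hash, and that SHA-2 (here SHA-256) is used rather than MD5, SHA-0 or bare SHA-1.

```python
import hashlib
import secrets
from typing import Optional, Tuple

def hash_pan(pan: str, salt: Optional[bytes] = None) -> Tuple[bytes, str]:
    """Return (salt, hex digest) for a PAN using salted SHA-256."""
    if salt is None:
        salt = secrets.token_bytes(16)          # random per-record salt
    digest = hashlib.sha256(salt + pan.encode("ascii")).hexdigest()
    return salt, digest

salt, stored_hash = hash_pan("4111111111111111")      # test PAN
# To compare a later value, re-hash it with the stored salt and compare digests.
_, candidate = hash_pan("4111111111111111", salt)
print(candidate == stored_hash)                        # True
```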

Tokenization

Do not give up hope if your preferred solution stores cardholder data (CHD).  You may be able to use a processor that tokenizes the PAN.  Once tokenized, the PAN is no longer considered CHD, so the solution would not be storing CHD.

Unlike encryption or hashing, proper tokenization means the token does not have any true one-to-one mathematical relationship with the PAN it replaces.  However, this is a fact you need to confirm with the transaction processor to ensure that their tokenization process is secure.  If they cannot represent their tokens as secure, you need to move on to another processor.

Tokenization can occur at the swipe/dip of the card at the point of interaction (POI) or as the byproduct of the successful completion of a transaction (the most common occurrence).

Tokens can also be generated for eWallet situations where you want to encourage customers to come back and shop at your eCommerce site and store their cardholder information for ease of subsequent transactions.  In these instances, the processor generating the token also stores the CHD, translates the token back to that stored CHD when your system submits it, and then sends the CHD on for payment approval.

This is how ExxonMobil Speedpass and open road toll collection systems work.  The RFID device is the token and the payment system recognizes the serial number of the RFID device and translates that to the customer’s stored CHD for payment processing.
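A toy sketch of the token vault idea described above: the token is a random value with no mathematical relationship to the PAN, and only the processor holds the mapping needed to translate the token back when a transaction or eWallet purchase is submitted.  Real token vaults are hardened, audited services; this is only meant to show the concept.

```python
import secrets

class TokenVault:
    """Illustrative processor-side token vault mapping random tokens to stored PANs."""

    def __init__(self):
        self._vault = {}   # token -> PAN (a real vault encrypts and tightly controls this)

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(16)   # random value, no relationship to the PAN
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the processor can translate the token back to the PAN for authorization.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")   # merchant systems store only this token
print(token)                                 # a 32-character random hex string, useless to a thief
print(vault.detokenize(token))               # processor recovers the PAN to authorize payment
```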

There are more than enough ideas presented in these two posts to vastly simplify any merchant’s PCI compliance exposure.  If these solutions will not solve your issues, then you are either making your business too complex or your business model truly needs some expert advice to address your compliance challenges.

July 20, 2014

Keeping It Simple – Part 1

Apparently, I struck a nerve with small business people trying to comply with PCI.  In an ideal world, most merchants would be filling out SAQ A, but we do not live in an ideal world.  As a result, I have collected some ideas on how merchants can make their lives easier.

Do Not Store Cardholder Data

It sounds simple, but it amazes me how many small businesses are storing cardholder data (CHD).  In most cases, it is not like they wanted to store CHD, but the people in charge just did not ask vendors that one key question, “Does your solution store cardholder data?”  If a vendor answers “Yes”, then you should continue your search for a solution that does not store CHD.

Even when the question is asked of vendors, you may not get a clear answer.  That is not necessarily because the vendor is trying to hide something, but more likely because the salespeople have never been asked this question before.  As a result, do not be surprised if the initial answer is, “I’ll have to get back to you on that.”  If you never get an answer or the answer is not clear, then you should move on to a different vendor that does provide answers to such questions.

If your organization cannot find a solution that does not store CHD, then at least you are going into a solution with your eyes open.  However, in today’s payment processing application environment, most vendors are doing all that they can to avoid storing CHD.  If the vendors you are looking at for solutions are still storing CHD, then you may need to get creative to avoid storing CHD.

That said, even merchants that only use points of interaction (POI) such as card terminals can also end up with CHD being stored.  I have encountered a number of POIs that were delivered from the processor configured such that the POI was storing full PAN.  Apparently, some processors feel it is the responsibility of the merchant to configure the POI securely even though no such instructions were provided indicating that fact.  As a result, you should contact your processor and have them walk you through the configuration of the POI to ensure that it is not storing the PAN or any other sensitive information.

Then there are the smartphone and tablet solutions from Square, Intuit and a whole host of other mobile solution providers.  While the PCI SSC has indicated that such solutions will never be considered PCI compliant, mobile POIs continue to proliferate with small businesses.  The problem with most of these solutions is when a card will not work through the swipe/dip and the CHD is manually keyed into the device.  It is at that point when the smartphone/tablet keyboard logger software captures the CHD and it will remain in the device until it is overwritten which can be three to six months down the road.  In the case of EMV, the device can capture the PIN if it is entered through the screen thanks to the built in keyboard logger.  As a result, most EMV solutions use a signature and not a PIN.  The reason Square, Intuit and the like get away with peddling these non-compliant POI solutions is that they also serve as the merchant’s acquiring bank and are accepting the risk of the merchant using a non-compliant POI.

The bottom line here is that merchants need to understand these risks and then make appropriate decisions on what risks they are willing to accept in regards to the explicit or implicit storage of CHD.

Mobile Payment Processing

The key thing to know about these solutions is that the PCI Security Standards Council has publicly stated that these solutions will never be considered PCI compliant.  Yes, you heard that right; they will never be PCI compliant.  That is mostly because of the PCI PTS standard regarding the security of the point of interaction (POI) for PIN entry and the fact that smartphones and tablets have built in keyboard loggers that record everything entered into these devices.  There are secure solutions such as the Verifone PAYware line of products.  However, these products only use the mobile device as a display.  No cardholder data is allowed to be entered into the mobile device.

So why are these solutions even available if they are not PCI compliant?  It is because a number of the card brands have invested in the companies producing these solutions.  As a result, the card brands have a vested interest in allowing them to exist.  And since the companies offering the solutions are also acting as the acquiring bank for the merchant, they explicitly accept the risk that these solutions present.  That is the beauty of the PCI standards, if a merchant’s acquiring bank approves of something, then the merchant is allowed to do it.  However, very few merchants using these solutions understand the risk these solutions present to them.

First is the risk presented by the swipe/dip device.  Some of these devices encrypt the data at the swipe/dip but not all.  As a result, you should ask the organization if their swipe/dip device encrypts the information.  If it does encrypt, then even if the smartphone/tablet comes in contact with the information, it cannot read it.  If it is not encrypted, I would move on to the next mobile payments solution provider.

The second risk presented is the smartphone/tablet keyboard logger.  This feature is what allows your mobile device to guess what you want to type, what songs you like and a whole host of convenience features.  However, these keyboard loggers also remember anything typed into them such as primary account numbers (PAN), driver’s license numbers and any other sensitive information they come into contact with.  They can remember this information as long as it is not overwritten in the device’s memory.  Depending on how much memory a device has, this can be anywhere from weeks to months.  One study a few years back found that information could be found on mobile devices for as long as six months, with an average of three months.

While encrypting the data at the swipe/dip will remove the risk that the keyboard logger has CHD, if you manually key the PAN into the device, then the keyboard logger will record it.  As a result, if you are having a high failure rate with swiping/dipping cards, you will have a lot of PANs contained in your device.

The bottom line is that if you ever lose your mobile device or trade it in, you risk exposing CHD if you do not properly wipe the device.  It is not that these solutions should not be used, but the purveyors of these solutions should be more forthcoming about the risks of using them so that merchants can make informed decisions beyond the cheap interchange fees.

There are more things merchants can do to keep it simple and I will discuss those topics in a future post.

July 1, 2014

The Flaw In Requirement 8.5.1

Today it was unceremoniously announced that a number of major restaurant chains’ franchisees had been potentially hacked between February 28, 2014 and April 18, 2014 because their point of sale (POS) vendor’s remote access account had been compromised.  I say franchisees because I know a couple of these restaurant chains’ corporate operations and they were not using a third party to manage POS.

In a nutshell, the attackers gained access to the POS vendor’s LogMeIn account.  LogMeIn, like a lot of similar remote access facilities, has an address book where you can store remote access credentials.  So with access to LogMeIn, by default, the attackers had access to the address book that contained credentials for any customer environments in the address book (likely all customers, but possibly not).

To remind everyone, requirement 8.5.1 of the PCI DSS v3 states:

 “Additional requirement for service providers: Service providers with remote access to customer premises (for example, for support of POS systems or servers) must use a unique authentication credential (such as a password/phrase) for each customer.

Note: This requirement is not intended to apply to shared hosting providers accessing their own hosting environment, where multiple customer environments are hosted.

Note: Requirement 8.5.1 is a best practice until June 30, 2015, after which it becomes a requirement.”

The PCI SSC guidance for requirement 8.5.1 states:

 “To prevent the compromise of multiple customers through the use of a single set of credentials, vendors with remote access accounts to customer environments should use a different authentication credential for each customer. Technologies, such as two-factor mechanisms, that provide a unique credential for each connection (for example, via a single-use password) could also meet the intent of this requirement.”

It is likely that the vendor was trying to get a jump on complying with requirement 8.5.1 in the PCI DSS v3.  However, this vendor may have been using such an approach all along to manage customer remote access which is also not uncommon with technology companies.

The first thing to note is that requirement 8.5.1 is a best practice until June 30, 2015 after which it becomes a full requirement.  However, as I pointed out in an earlier post, a lot of vendors will likely have to start rolling out a remote access solution as soon as possible to minimize service level agreement (SLA) issues.

One of the most likely ways vendors are addressing compliance with 8.5.1 is through services such as LogMeIn, GoToMyPC and similar services.  These are inexpensive services available to any organization or anyone.  There are also enterprise solutions such as those from Bomgar and the like that purport to have better security.  However, all of these solutions share the concept of an address book to make gaining remote access easier for the vendors’ users that rely upon them.  And that is their Achilles’ heel.  If an attacker gains access to the remote access service, they gain access to the address book and, therefore, to the customers’ credentials stored in that address book.  Game over.

It is important to note though that what this vendor was doing fully complies with requirement 8.5.1.  But even though this service provider was complying with the intent of 8.5.1, the implementation was flawed.  This is just another example of how PCI compliance does not mean that security issues cannot still occur.

How easy is this to happen?  Think a spear phishing attack against any vendor that does remote support and maintenance.  Regardless of the customer credential management solution (in-house or cloud based), once access to the credential management solution is compromised any concept of customer security is over.

So what should vendors be doing to mitigate this situation?  Exactly what our vendor who was breached did: implement two-factor authentication on the credential management system.  Spear phishing attacks will not be successful because even with the credentials to LogMeIn or similar, the attacker will need the second factor.  Yes, the attacker can still compromise the support person’s desktop, but they will not have access to customer credentials.

Trouble is, some vendors will want a cheap two-factor solution, meaning something that sends out codes via SMS, email or telephone, versus solutions such as RSA SecurID or SafeNet, to name a few.  Codes sent over SMS, telephone and email have a variety of known vulnerabilities and can easily be intercepted or even redirected.  In the case of LogMeIn, they indicate that they only support SecurID.
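As one example of an app-based second factor that does not rely on SMS, email or telephone (my example, not something specifically named in the post), here is a minimal sketch of time-based one-time password (TOTP) verification, assuming the open source pyotp library.

```python
import pyotp

# Enrollment: generate a per-user secret and provision it to the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="support.tech@example.com",
                            issuer_name="Remote Support Portal"))

# Login: the user supplies the current six-digit code along with their password;
# the portal verifies it before exposing any customer credentials or address book.
code = totp.now()                  # in practice, typed in by the user from the app
print("Code accepted:", totp.verify(code))
```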

Regardless, all of you service providers out there that have remote access to your customers managed by some enterprise credential management solution, please implement a strong two-factor authentication solution on your customer credential management solution before you too become a newspaper headline.

I would like to believe that this vendor thought they were doing the right thing and got burned because of how they implemented their solution.  At least they stepped up and fixed it the right way.  Unfortunately, this is how we sometimes learn, from our mistakes.

June 22, 2014

Keep It Simple

I keep reading articles and blog posts about all sorts of security solutions and how to secure an organization from the onslaught of network attacks.  However, all of these discussions seem to assume that everyone is a Fortune 500 company.  Threat intelligence, hack backs, APT, etc. are discussed as though everyone has the ability to implement the recommendations presented.

An important statistic that all of us in the security profession need to remember is that the vast majority of organizations are not Fortune 500 organizations.  As of 2008 (the latest statistics I could find from the US Census Bureau), there are almost 6 million businesses in the United States.  The Fortune 500 therefore comprises 0.0084% of the total businesses in the US.  To make matters worse, organizations that employ fewer than 100 employees make up 98.1644% of all employers in the US.  I would guess that these statistics are relatively consistent around the world.

The reason these statistics are important is that security professionals need to pull their collective heads out of their posteriors and stop making security so hard that it is impossible for the 98.1644% to implement.

Do you now understand the frustration of most business people?

They do not have a security staff of tens or even hundreds to tackle security issues.  They are lucky if they have an IT person or two.  If they can afford it, they outsource and do everything possible to make themselves secure, but they only can do so much and their resources are extremely limited.  Particularly so given the Great Recession that they just survived.

Margins for small businesses are very slim.  You can argue all you want that today’s businesses are only competitive if they leverage technology and that technology comes with prerequisites.  However, have we created an environment where the hurdle to be in business is now so high that small businesses are just going to be targets regardless?

As a result, I challenge the security community to come up with realistic solutions to security.  We need to develop common sense, simple but effective security methods so that the 98.1644% of organizations are reasonably protected.  Granted, security is not perfect, but we have got to stop discussing security and privacy as though every business is a Fortune 500.  They are not and our solutions and recommendations need to reflect that fact.

This brings me back to the PCI DSS.  If all of the requirements of the PCI DSS could be executed 99.9999% of the time every day, would it keep an organization secure (recognizing that security is not perfect)?  I believe it would.  But it’s that consistency of execution that is the problem regardless of organizational size.

So let us refocus our priorities and help the vast majority of the world get secure.

I have tried to do that with this blog, but I too have been seduced by the big dollars that the Fortune 500 toss my direction.  In my very humble opinion, I think we need to get back to our roots and do more for the vast majority that are struggling with security.

That said, in the process of simplifying, maybe there will be some opportunities for the Fortune 500 to find solutions that are less complex.  It could be a win-win for everyone.

May 18, 2014

Adventures In Finding Cardholder Data

On page 10 of the PCI DSS v3 under the heading of ‘Scope of PCI DSS Requirements’, second paragraph, is the following sentence.

 “At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

Under the first bullet after that paragraph is the following.

 “The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined CDE.”

In the past, organizations would rely on their database and file schemas along with their data flow diagrams and consider the job done.  However, the Council has come back and clarified its expectations for the search for cardholder data (CHD), primarily the primary account number (PAN).  The Council has stated that this search needs to be more extensive to prove that PANs have not ended up on systems where they are not expected.

Data Loss Prevention

To deal with requirement 4.2, a lot of organizations invested in data loss prevention (DLP) solutions.  As a result, organizations with DLP have turned those DLP solutions loose on their servers to find PANs and to confirm that PANs do not exist outside of their cardholder data environment (CDE).

Organizations that do this quickly find out three things: (1) the scope of their search is too small, (2) their DLP solution is not capable of looking into databases, and (3) their DLP tools are not as good at finding PANs at rest as they are at finding PANs in motion, such as in an email message.

On the scope side of the equation, it is not just servers that are in scope for this PAN search; it is every system on the network, including infrastructure.  However, for most infrastructure systems such as firewalls, routers and switches it is a simple task to rule them out for storing PANs.  Where things can go awry is with load balancers, proxies and Web application firewalls (WAF), which can end up with PANs inadvertently stored in memory and/or on disk due to how they operate.

Then there is the scanning of every server and PC on the network.  For large organizations, the thought of scanning every server and PC for PANs can seem daunting.  However, the Council does not specify that the identification of CHD needs to be done all at once, so such scanning can be spread out.  The only time constraint is that this scanning must be completed before the organization’s PCI assessment starts.

The second issue that organizations encounter with DLP is that their DLP has no ability to look into their databases.  Most DLP solutions are fine when it comes to flat files such as text, Word, PDF and Excel files, but the majority of DLP solutions have no ability to look into databases and their associated tables.

Some DLP solutions have add-on modules for database scanning, but that typically requires a license for each database instance to be scanned and thus can quickly become cost prohibitive for some organizations.  DLPs that scan databases typically scan the more common databases such as Oracle, SQL Server and MySQL.  But legacy enterprise databases such as DB/2, Informix, Sybase and even Oracle in a mainframe environment are only supported by a limited number of DLP solutions.

Another area where DLP solutions can have issues is with images.  Most DLP solutions have no optical character recognition (OCR) capability to seek out PANs in images such as images of documents from scanners and facsimile machines.  For those DLP solutions that can perform OCR, the OCR process slows the scanning process down considerably and the false positive rate can be huge particularly when it comes to facsimile documents or images of poor quality.

Finally, there is the overall issue of identifying PANs at rest.  It has been my experience that using DLP solutions for identifying PANs at rest is haphazard at best.  I believe the reason is that most DLP solutions rely on the same openly available regular expressions (RegEx) to find the PANs.  As a result, they all suffer from the same shortcomings of RegEx and therefore their false positive rates end up being very similar.

The biggest reason for the false positive rate is the fact that most of these solutions using RegEx do not conduct a Luhn check to confirm that the number found is likely to be a PAN.  That said, I have added a Luhn check to some of the open source solutions and it has amazed me how many 15 and 16 digit combinations can pass the Luhn check and yet not be a PAN based on further investigation.  As a result, having a Luhn check to confirm a number as a potential PAN reduces false positives, but not as significantly as one might expect.
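For anyone who wants to add the same check to their own scanning scripts, here is a minimal Luhn check in Python.  A candidate number that fails it can be discarded immediately, while one that passes still needs further investigation, for exactly the reason described above.

```python
def luhn_valid(candidate: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    digits = [int(c) for c in candidate if c.isdigit()]
    if not 13 <= len(digits) <= 19:          # PANs are 13 to 19 digits long
        return False
    checksum = 0
    # Double every second digit from the right; subtract 9 when the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111111111111111"))   # True  - passes, but still only a *potential* PAN
print(luhn_valid("4111111111111112"))   # False - safe to discard as a false positive
```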

The next biggest reason RegEx has a high false positive rate is that RegEx looks at data both at a binary level and character level.  As a result, I have seen PDFs flagged as containing PANs.  I have also seen images that supposedly contained PANs when I knew that the tool being used had no OCR capability.

I have tried numerous approaches to reduce the level of false positive results, but have not seen significant reductions from varying the RegEx expressions.  That said, I have found that the best results are obtained using separate expressions for each card brand’s account range versus a single, all-encompassing expression.
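Here is a rough sketch of what “separate expressions for each card brand’s account range” can look like, using the commonly published prefixes for the major brands.  These patterns are illustrative only and do not cover every issued range; any hit would still go through the Luhn check above and manual review.

```python
import re

# Commonly published brand prefixes; illustrative, not an exhaustive list of ranges.
BRAND_PATTERNS = {
    "Visa":       re.compile(r"\b4[0-9]{12}(?:[0-9]{3})?\b"),
    "MasterCard": re.compile(r"\b5[1-5][0-9]{14}\b"),
    "Amex":       re.compile(r"\b3[47][0-9]{13}\b"),
    "Discover":   re.compile(r"\b6(?:011|5[0-9]{2})[0-9]{12}\b"),
}

def find_candidate_pans(text: str):
    """Return (brand, match) pairs; every hit still needs a Luhn check and review."""
    hits = []
    for brand, pattern in BRAND_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((brand, match))
    return hits

sample = "invoice ref 4111111111111111 shipped 2014-07-01"
print(find_candidate_pans(sample))   # [('Visa', '4111111111111111')]
```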

Simple Solutions

I wrote a post a while back regarding this scoping issue when it was introduced in v2.  It documents all of the open source solutions available such as ccsrch, Find SSNs, SENF and Spider.  All of these solutions run best when run locally on the system in question.  For small environments, this is not an issue.  However, for large organizations, having to have each user run the solution and report the results is not an option.

In addition, the false positive rates from these solutions can also be high.  Then there is the issue of finding PANs in local databases such as SQLite, Access or MySQL.  None of these simple solutions are equipped to find PANs in a database.  As a result, PANs could be on these systems and you will not know it using these tools.

The bottom line is that while these techniques are better than doing nothing, they are not that much better.  PANs could be on systems and may not be identified depending on the tool or tools used.  And that is the reason for this post, so that everyone understands the limitations of these tools and the fact that they are not going to give definitive results.

Specialized Tools

There are a number of vendors that have developed tools specifically to find PANs.  While these tools are typically cheaper than a full DLP solution and some of them provide for the scanning of databases, it has been my experience that these tools are no better or worse than OpenDLP, the open source DLP solution.

Then there are the very specialized tools that were developed to convert data from flat files and older databases to new databases or other formats.  Many of these vendors have added modules to these tools in the form of proprietary methods to identify all sorts of sensitive data such as PANs.  While this proprietary approach significantly reduces false positives, it unfortunately makes these tools very expensive, starting at $500K and going ever higher based on the size of the environment they will run in.  As a result, organizations looking at these tools will need more than just their need for PAN search capability to justify the cost.

The bottom line is that searching for PANs is not as easy as the solution vendors portray.  And even with extensive tuning of such solutions, the false positive rate is likely going to make the investigation into your search results very time consuming.  If you want to significantly reduce your false positive rate, then you should expect to spend a significant amount of money to achieve that goal.

Happy hunting.





