


08 Aug 14

Requirement 10.6.2 Clarification

As a refresher, requirement 10.6.2 states:

“Review logs of all other system components periodically based on the organization’s policies and risk management strategy, as determined by the organization’s annual risk assessment.”

The argument in PCI circles is over the definition of “all other system components”.  Some of us believed that it meant systems other than those in scope.  Others believed that it could only refer to in-scope systems such as user workstations.  As a result, I asked the PCI SSC to clarify this requirement, and this is the response I got back.

“In PCI DSS v2.0, logs for all in-scope systems were required to be reviewed daily. However it was recognized that for larger or more complex environments, there could be lower risk systems that were in scope for PCI DSS that could warrant less frequent log reviews. As such, PCI DSS v3.0 defines a number of events and system types that require daily log reviews, and allows the organization to determine the log review frequency for all other in-scope events and systems that do not fall into those categories.

For some environments, such as those designed specifically for the purposes of PCI DSS, then it is possible that all in-scope systems fall under the system categories defined in Requirement 10.6.1, meaning that daily log reviews are required for all in-scope systems. In other environments, there may be many different types of system that are considered in-scope, but which are not critical systems and neither store, process or transmit CHD nor provide security services to the CDE. Some possible examples could be stock- control or inventory-control systems, print servers (assuming there is no printing of CHD) or certain types of workstations. For these events or systems, the entity, as part of its annual risk assessment process, is expected to define the frequency for reviews based on the risk to its specific environment.

The intent of this update is not to apply PCI DSS Requirements to out-of-scope systems. We realize that the current wording is causing confusion and will address this in the next revision.”

So there we have it.  It is not the first time my interpretation has been wrong.  The requirement means that in-scope systems that, based on an assessment of risk, are at lower risk of compromise can have their logs reviewed less frequently.

But that means you need an accurate risk assessment to support your argument.  So those of you who have not explicitly assessed the risk of your Category 2 systems will have to break them out in that assessment to support a reduced log review frequency.

08 Aug 14

The Dilemma Of PCI Scoping – Part 3

In Part 2 we discussed the criticality of a risk assessment and started implementing the framework by fixing monitoring and alerting so that we can properly manage the risk we will be accepting.  In this part I will deal with Category 2 and 3 systems and how to manage their risk.

One of the big problems I have with the security purists is their arguments over Category 3 systems.  Do not get me wrong, I understand the reasons for their statements.  However, to me, their arguments seem to imply that Category 3 systems are somehow not secure, or not as secure as other systems, and therefore cannot be trusted.  This further implies that organizations are implementing some sort of multi-tier security structure, which should not be the case.  Taking such an approach would create an operational and support nightmare, not to mention increase the risk of compromise.

In today’s world, all devices need to have a certain basic level of security consisting of, at a minimum, a hardened OS, minimized ports and services, access control, anti-virus and anti-malware (as appropriate and necessary), a personal firewall (if mobile), logging and other similar security measures as necessary.  All of this is something I like to refer to as “Security 101”.  If you are not doing Security 101 across the board, then you likely have bigger issues than just PCI compliance.  But if you are following a Security 101 approach, then all of the devices under your direct control should have a solid, basic level of security and should be able to be trusted when on your network.  As a result, with the proper logical access controls in place, you should be able to trust Category 3 systems coexisting with certain Category 2 systems.

Notice I said “certain Category 2 systems” not “all” Category 2 systems.

Category 2b systems that have inbound connectivity to the CDE are the riskiest because of that direct inbound connectivity.  As such, should any Category 2b system become compromised, the CDE is also compromised by default.  Most Category 2b systems are gateways to the CDE, such as out-of-band management systems or “jump boxes” for network and system administration, and Citrix or other virtual desktop technologies used to grant business users access to CDE applications.

These gateways need to be configured such that if they become compromised they are immediately taken out of service until they can be rebuilt and/or reconfigured as secure.  This is typically accomplished not just with anti-virus, but also through file integrity monitoring, application whitelisting and other similar measures.  But this needs to be monitored at the virtual desktop infrastructure level (aka the Citrix, VMware, Terminal Services, etc. servers), as well as at the virtual desktop image level.  If a file changes in the virtual environment or virtual desktop image that should not have changed, access to the virtual desktop environment is suspended until the infrastructure or image can be recertified as secure.  That means very tight monitoring of the critical files; however, this should be relatively easy to do as long as the virtual desktop image is restricted to only a single cardholder data (CHD) application.

The other thing that needs to happen with virtual desktops is that organizations must be extremely diligent about who has access to them.  This means very timely user management, something a lot of organizations are very lax about.  The PCI DSS states in requirement 8.1.4 that organizations must “Remove/disable inactive user accounts at least every 90 days.”  Ninety days is a very long time when you look at what attackers are able to do in such a timeframe.  As a result, organizations need to step up their game in this regard, and when someone’s job function changes, a help desk ticket needs to be generated to change that person’s access.  A 90 day review process is fine, but if you are relying on that sort of thing as a control, you are not being diligent enough in managing your users.
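
To make the point concrete, here is a minimal sketch of what more diligent user management could look like, assuming you can export a list of virtual desktop users with their last login dates.  The file name, column names and 30 day threshold are all hypothetical and would need to match whatever your directory or virtual desktop gateway actually produces.

```python
import csv
from datetime import datetime, timedelta

# Hypothetical export from your directory or virtual desktop gateway:
# columns "username" and "last_login" in ISO format (YYYY-MM-DD).
EXPORT_FILE = "vdi_users.csv"
THRESHOLD = timedelta(days=30)  # tighter than the 90-day PCI DSS ceiling


def stale_accounts(path, threshold):
    """Return usernames whose last login is older than the threshold."""
    stale = []
    today = datetime.utcnow()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last_login = datetime.strptime(row["last_login"], "%Y-%m-%d")
            if today - last_login > threshold:
                stale.append(row["username"])
    return stale


if __name__ == "__main__":
    for user in stale_accounts(EXPORT_FILE, THRESHOLD):
        print(f"Review/disable virtual desktop access for: {user}")
```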

But what about Category 2a systems, you might ask?  Category 2a systems are directory servers, authentication servers, DNS and DHCP servers, anti-virus master servers, system update servers and other network and system management/monitoring support systems.  These are systems to which only system or network administrators should ever have direct access.  These systems should be configured as “bastion” servers, security hardened so that they are not easily compromised by vulnerabilities.  Most vendors of these systems understand this and have set up their installation routines accordingly – restricting services, allowing only necessary applications, etc.  The bottom line with Category 2a systems is that, if any of them become compromised, it is game over because the attacker has the privileges of an administrator.

I am not a fan of separate directory systems.  These systems double as the access control management system, and the more complicated access control becomes, the less reliable and secure it becomes.  As such, I prefer a common access control system managed through an integrated directory system such as Microsoft’s Active Directory (AD).  Over the years, the ports required to make AD work between domain controllers have become much easier to manage and implement.  Plus, starting with Windows Server 2008, there is the ability to limit and control the dynamic ports used.  As such, I recommend using a common domain approach rather than separate domains with trust relationships, as it just makes management so much easier and more reliable.  Whether or not you choose to put domain controllers inside your CDE is up to you to decide.

However, there is a larger consideration here that needs to be put on the table.  Think about what the security purists are saying with their statements about Category 3 systems.  If an attacker can jump from a Category 3 system to owning a Category 2a system such as a directory server, DNS server, DHCP server, etc., what does that say about an organization’s security posture to begin with?  If an attacker gains control of a system/network administrator’s credentials, it is game over.  That is not a device failure issue, that is a social engineering failure.  As a result, you can have all the separation controls you want, but in the end, if an administrator gets owned, what did all of that separation do for you?  Nothing.  The best we can do is to manage and mitigate the risks, monitor for any attempts to exploit those risks and deal with any attacks as quickly as possible.

At the end of the day, organizations need to assess the risk that their Category 2 systems pose to the CDE.  If it is higher than desired, those systems need to be put into the CDE.  The others can share networks with other Category 2 and 3 systems if necessary.

There are obviously enhancements that you can make to further improve on this strategy.  But this is just what I would consider the minimum to get by which seems to be what a lot of you desire.

01 Aug 14

The Dilemma Of PCI Scoping – Part 2

In Part 1 I discussed how isolation is an answer, but probably not a viable one for all but the most security-conscious of organizations such as the military, defense contractors or those that can afford that sort of painstaking luxury.  So unless you consider everything in scope for PCI compliance, is there a viable way to reduce scope?

Before we get to that though, we need a quick discussion on risk management, as the solution is all predicated on the identification and management of risk.  If you cannot do an appropriate risk assessment, then the only choice you really have is to consider everything in scope, and I know the vast majority of you do not like that approach.

Assessing Risk

In order for my proposed solution to have a chance at working properly, an organization needs to understand its risks, which risks will be accepted and managed, and what it will take to mitigate the residual risks.  Doing a risk assessment is the way to do that, but most organizations avoid such an assessment for a variety of reasons.  The most common reasons I have heard are:

  • The risk assessment framework is too complex,
  • We tried this once before and never got any meaningful results,
  • We were never able to agree on the risks and their scores, or my personal favorite,
  • This spreadsheet is our risk assessment (it is not, but see Tom Benhave’s blog post on the topic as he does a much better job than I could explaining this topic).

The reason why risk is such a tough topic is that everyone has their own perspective on risk; good, bad or otherwise.  There are numerous articles and presentations on this phenomenon, but my favorite is from Lance Spitzner of SANS, who wrapped his presentation around security awareness training and opens by discussing why people are such poor judges of risk.  He uses various statistics regarding events that can happen in people’s lives to illustrate this fact.  My personal favorite example of just such a statistic is that people have a greater chance of dating a supermodel than of winning the PowerBall lottery.  Granted, both are long shots, but the chance of dating a supermodel is still significantly better than the chance of winning the PowerBall.

The bottom line is that, without a decent risk assessment, an organization has no way to know the amount of risk it is willing to accept and how it will manage that risk.  The Council has repeatedly said that PCI compliance is supposed to consider risk and take a “risk-based” approach.  However, the problem is that we each have our own opinion of risk and what risks we are willing to take on.  But at the end of the day, no matter what an organization does, there is going to be risk.  The question is, “Are these risks ones my organization is willing to take on?”  That question can only be answered by a risk assessment and an understanding of how risks can be managed and mitigated.

Which risks your organization chooses to accept and how it manages the remaining risks are up to your organization to decide.  This is why the PCI DSS and all other security frameworks require an annual risk assessment to be performed.  The risk assessment process provides a framework for an organization to document its risks, understand those risks (size, frequency of occurrence, costs, etc.) and how they can be managed or mitigated, and then agree on which risks it will take on and how it will manage and/or mitigate them.

From here on we will assume that the organization has a valid risk assessment and that they are willing to take on the risks presented by the example I will discuss.

Managing Risk

Today’s integrated and connected world just does not lend itself to an isolationist approach due to the volume of information involved, business efficiencies lost and/or the operational costs such an approach incurs.  As a result, organizations need to take a hybrid approach of heavily protecting some components and taking on and managing the risks inherent to such an approach.

When it comes to the IT side of risk management and mitigation, most organizations rely on some form of near real time monitoring through collected system/event log data and other sources to monitor their environment(s).  Unfortunately, where this approach comes up short is that there are too many alerts to follow up on, so alerts go unaddressed.  Almost every QSA can tell you about a discussion with operations personnel where the statement, “Oh, that’s a false positive alert, so I don’t have to worry about it” has been made.

This is the first problem you must address, and you must make sure that this attitude never creeps back into the people who monitor your alerts.  Anyone in operations who “knows” an alert is a false positive either needs re-education or your organization needs to seriously re-tune its alerting mechanism(s).  All you have to do is read the Target and Neiman Marcus press reports if you need examples of how bad things can get when personnel blow off alerts because they believe they are not accurate.

In my experience, a lot of these problems are the result of bad or incomplete implementations of these systems.  Unfortunately, there are a lot of people out there that think that these solutions are more like a Ronco Rotisserie Oven where, as they famously say in the ads, “you can set it and forget it.”  Yes these solutions may be “appliances”, but that is where the comparison ends.

Security information and event management (SIEM) systems require fairly constant tuning and tweaking, beyond their own software and signature updates, to minimize false positive alerts in response to the changes to an organization’s networks and systems.  Yet time and again, I encounter monitoring and alerting systems that were put in place years ago (typically to meet PCI compliance) and have not been adjusted/changed since then, while all around them changes have been occurring that affect their operation.

When interviewing the people responsible for these systems, I hear statements such as, “Yeah, that alert started to appear when we implemented [name of change].  We were told to just ignore it.”  When asked why they have not tuned it out of the SIEM, the answers are that they do not have time, they do not know how, they do not have the rights to do that or, my personal favorite, that the head of security or the security committee will not let them change it.

The reason this issue does not get addressed is that it has no visibility, since alerts are tucked away in the various monitoring tools.  So, the best way to address this situation is to give it visibility by automatically feeding all alerts into the organization’s help desk system.  This gives all alerts immediate visibility by putting them into an automated tracking and escalation process.  It also allows triage and investigation activities to be documented and, based on the results of those activities, the alert to be assigned to the right people/groups to address it.
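
As an illustration of what feeding alerts into a help desk system might look like, here is a minimal sketch that turns a parsed SIEM alert into a ticket via a generic REST call.  The endpoint, authentication token, field names and response format are all hypothetical and would need to be replaced with whatever your ticketing system (ServiceNow, Jira, RT, etc.) actually exposes.

```python
import json
import urllib.request

# Hypothetical help desk REST endpoint and token; substitute whatever your
# ticketing system actually provides.
HELPDESK_URL = "https://helpdesk.example.com/api/tickets"
API_TOKEN = "replace-me"


def alert_to_ticket(alert: dict) -> int:
    """Turn a SIEM alert (already parsed into a dict) into a help desk ticket."""
    ticket = {
        "summary": f"SIEM alert: {alert.get('rule', 'unknown rule')}",
        "description": json.dumps(alert, indent=2),
        "priority": "high" if alert.get("severity", 0) >= 7 else "normal",
        "queue": "security-triage",
    }
    req = urllib.request.Request(
        HELPDESK_URL,
        data=json.dumps(ticket).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Assumes the ticketing system returns JSON containing a ticket_id.
        return json.load(resp)["ticket_id"]


if __name__ == "__main__":
    sample = {"rule": "Repeated failed admin logons", "severity": 8,
              "host": "jump-box-01"}
    print("Created ticket", alert_to_ticket(sample))
```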

“Whoa, let’s not get crazy here,” I am sure some of you are YELLING at the screen.  There is no doubt this is a very brave step to take, because it will potentially uncover something you probably did not want to advertise given the state of your existing alerting.  But that is typically only a short term problem.  Unfortunately, it may be the only way to get the underlying problem of tuning and tweaking the alerting systems addressed and kept addressed.

But taking such a step is not entirely a bad thing, at least in the long run.  A side benefit is that it will focus the organization on triage activities for classifying the urgency of alerts.  Not all alerts need immediate action; a lot of them require immediate investigation but can then be put on a back burner.  It will also give visibility to the number of alerts being dealt with on a daily basis after triage.  That typically results in identifying and justifying why more staff are required to deal with the onslaught of alerts that need to be researched.

Another change that organizations should make is adding a box to their change control form that indicates what impact a change will have on the SIEM environment.  At a minimum, these three basic questions need to be answered in regard to the SIEM or other monitoring systems.

  • Do new alerts need to be added and, if so, what do they need to monitor and what are the alerting thresholds?
  • Do existing alerts need to be modified and, if so, what modifications are needed?
  • Are there alerts that are no longer needed?

If you address these areas, you should have monitoring and alerting taken care of with a built in feedback loop to keep it that way.

In Part 3, I am going to wrap up my discussion on PCI scoping with a discussion of Category 2 and 3 systems.

24 Jul 14

Keeping It Simple – Part 2

In Part 1, I explained that the key to keeping things as simple as possible is to avoid any storage of cardholder data.  Period.  End of discussion.

I also covered mobile payments, because more and more small merchants are moving to those solutions for their cheap interchange fees on transactions.  However, few small merchants truly understand the risks that these solutions present because they are offered by seemingly trustworthy providers.

So what else can you do to keep things simple?

Outsource

When consultants mention outsourcing, visions of huge amounts of money typically float past people’s eyes.  However, PayPal is outsourcing and its fees are not too bad.  There are a number of similar services out there for conducting eCommerce payments.  There used to be concerns about abandoned shopping carts because of the separate window pop-up, but as online shoppers have gotten used to PayPal-like services, that concern has diminished.

Your bank may also have a partnership with one or more payment vendors, so it is worth it to talk to them first to find out what options they may have to offer.  If they do partner with a payment solution, a lot of times they can offer reduced fees/expenses and other incentives to use their partner(s).

The key thing to understand is that even when you invoke PayPal or similar services from your Web site, you still have a responsibility to ensure the security of your Web site.  This is because your Web site is the system that directs your customer to the payment service you have chosen.  If an attacker changes that redirect to “The Guru’s Payment Process”, that is not PayPal’s problem, that is your problem.  This was clarified in the PCI SSC’s Information Supplement on eCommerce back in January 2013 when the Council declared that even Web sites that redirected to a payment service still were in scope, albeit a very reduced scope.  The ramifications of this supplement have been discussed repeatedly with QSAs since then, but in my experience that message is still not being consistently presented to QSAs’ customers.

No Choice But To Store Cardholder Data

So you have found a solution that does exactly what your business needs, but it stores cardholder data (CHD).  The first question you should ask the vendor is, “How does your solution secure the stored cardholder data?”

For vendors that use data encryption, the answer to this question should be either triple DES (3DES) 168-bit or advanced encryption standard (AES) 128-, 192- or 256-bit encryption.  DES and 3DES 56- and 112-bit are no longer considered secure, so any vendor using those is not meeting the PCI encryption requirements.  Anything else and you should consult with a QSA as to whether or not the encryption algorithm is acceptable.

For vendors using encryption, the next question you should ask them is, “How does your solution perform key management?”  Preferably, someone outside your organization, such as the vendor, manages the encryption keys.  However, I have seen some solutions where key management is done by the merchant through a sophisticated key generation capability in the application, so that the merchant never actually knows or has access to the encryption key(s).  The questions you need answered are all in requirements 3.5 and 3.6 of the PCI DSS.  If the vendor cannot address these requirements, then you need to find a new solution vendor that can provide answers.
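
For those wondering what “AES 256-bit with proper key management” implies in practice, here is a minimal sketch using the Python cryptography package and AES-256-GCM.  It is an illustration of the concept, not any particular vendor’s implementation, and in a real solution the key would come from a key management system rather than being generated in the application.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography


def encrypt_pan(pan: str, key: bytes) -> bytes:
    """Encrypt a PAN with AES-256-GCM; the fresh nonce is stored with the ciphertext."""
    nonce = os.urandom(12)  # 96-bit nonce, never reused with the same key
    ciphertext = AESGCM(key).encrypt(nonce, pan.encode(), None)
    return nonce + ciphertext


def decrypt_pan(blob: bytes, key: bytes) -> str:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()


if __name__ == "__main__":
    # In a real solution the key comes from a key management system (HSM,
    # vendor-managed KMS, etc.), not from the application or a config file.
    key = AESGCM.generate_key(bit_length=256)
    blob = encrypt_pan("4111111111111111", key)
    assert decrypt_pan(blob, key) == "4111111111111111"
    print("Encrypted blob length:", len(blob))
```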

For vendors that use one-way hashing, the preferred algorithms are SHA-2 or SHA-3 with a salt value.  MD5 and SHA-0 have known security issues that make them no longer secure.  SHA-1 has known weaknesses that could also leave data insecure, but a lot of software vendors still use SHA-1 with a salt value.  If you are going to use a solution that relies on SHA-1, the vendor should have additional controls in place to monitor access to the data and alert on any access where the data is being accessed in bulk.  If not, find another vendor.
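
To clarify what “SHA-2 with a salt value” means in practice, here is a minimal sketch of salted SHA-256 hashing of a PAN.  It is purely illustrative, and the 16-byte salt length is an assumption.

```python
import hashlib
import os


def hash_pan(pan, salt=None):
    """Return (salt, hex digest) for a salted SHA-256 hash of a PAN."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + pan.encode()).hexdigest()
    return salt, digest


if __name__ == "__main__":
    salt, digest = hash_pan("4111111111111111")
    # The same PAN hashed with a different salt yields a different digest,
    # which is why the salt must be stored alongside the hash.
    print(digest)
    print(hash_pan("4111111111111111")[1])
```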

Tokenization

Do not give up hope if your preferred solution stores cardholder data (CHD).  You may be able to use a processor that tokenizes the PAN.  Once tokenized, the PAN is no longer considered CHD, so the solution would not be storing CHD.

Proper tokenization, like strong encryption or hashing, ensures the token cannot be reversed into the PAN it replaces; there is no true one-to-one mathematical relationship between the two.  However, this is a fact you need to confirm with the transaction processor to ensure that their tokenization process is secure.  If they cannot represent that their tokens are secure, you need to move on to another processor.

Tokenization can occur at the swipe/dip of the card at the point of interaction (POI) or as the byproduct of the successful completion of a transaction (the most common occurrence).

Tokens can also be generated for eWallet situations where you want to encourage customers to come back and shop at your eCommerce site and store their cardholder information for ease of subsequent transactions.  In these instances, the processor generating the token also stores the CHD and then translates the token to that stored CHD when your system submits it and then sends the CHD for approval for payment.

This is how ExxonMobil Speedpass and open road toll collection systems work.  The RFID device is the token and the payment system recognizes the serial number of the RFID device and translates that to the customer’s stored CHD for payment processing.

There are more than enough ideas presented in these two posts to vastly simplify any merchant’s PCI compliance exposure.  If these solutions will not solve your issues, then you are either making your business too complex or your business model truly needs some expert advice to address your compliance challenges.

20 Jul 14

Keeping It Simple – Part 1

Apparently, I struck a nerve with small business people trying to comply with PCI.  In an ideal world, most merchants would be filling out SAQ A, but we do not live in an ideal world.  As a result, I have collected some ideas on how merchants can make their lives easier.

Do Not Store Cardholder Data

It sounds simple, but it amazes me how many small businesses are storing cardholder data (CHD).  In most cases, it is not like they wanted to store CHD, but the people in charge just did not ask vendors that one key question, “Does your solution store cardholder data?”  If a vendor answers “Yes”, then you should continue your search for a solution that does not store CHD.

Even when the question is asked of vendors, you may not get a clear answer.  That is not necessarily because the vendor is trying to hide something, but more likely because the salespeople have never been asked this question before.  As a result, do not be surprised if the initial answer is, “I’ll have to get back to you on that.”  If you never get an answer or the answer is not clear, then you should move on to a different vendor that does provide answers to such questions.

If your organization cannot find a solution that does not store CHD, then at least you are going into a solution with your eyes open.  However, in today’s payment processing application environment, most vendors are doing all that they can to avoid storing CHD.  If the vendors you are looking at for solutions are still storing CHD, then you may need to get creative to avoid storing CHD.

That said, even merchants that only use points of interaction (POI) such as card terminals can also end up with CHD being stored.  I have encountered a number of POIs that were delivered from the processor configured such that the POI was storing full PAN.  Apparently, some processors feel it is the responsibility of the merchant to configure the POI securely even though no such instructions were provided indicating that fact.  As a result, you should contact your processor and have them walk you through the configuration of the POI to ensure that it is not storing the PAN or any other sensitive information.

Then there are the smartphone and tablet solutions from Square, Intuit and a whole host of other mobile solution providers.  While the PCI SSC has indicated that such solutions will never be considered PCI compliant, mobile POIs continue to proliferate with small businesses.  The problem with most of these solutions arises when a card will not work through the swipe/dip and the CHD is manually keyed into the device.  It is at that point that the smartphone/tablet keyboard logger software captures the CHD, and it will remain in the device until it is overwritten, which can be three to six months down the road.  In the case of EMV, the device can capture the PIN if it is entered through the screen, thanks to the built-in keyboard logger.  As a result, most EMV solutions use a signature and not a PIN.  The reason Square, Intuit and the like get away with peddling these non-compliant POI solutions is that they also serve as the merchant’s acquiring bank and are accepting the risk of the merchant using a non-compliant POI.

The bottom line here is that merchants need to understand these risks and then make appropriate decisions on what risks they are willing to accept in regard to the explicit or implicit storage of CHD.

Mobile Payment Processing

The key thing to know about these solutions is that the PCI Security Standards Council has publicly stated that these solutions will never be considered PCI compliant.  Yes, you heard that right; they will never be PCI compliant.  That is mostly because of the PCI PTS standard regarding the security of the point of interaction (POI) for PIN entry and the fact that smartphones and tablets have built in keyboard loggers that record everything entered into these devices.  There are secure solutions such as the Verifone PAYware line of products.  However, these products only use the mobile device as a display.  No cardholder data is allowed to be entered into the mobile device.

So why are these solutions even available if they are not PCI compliant?  It is because a number of the card brands have invested in the companies producing these solutions.  As a result, the card brands have a vested interest in allowing them to exist.  And since the companies offering the solutions are also acting as the acquiring bank for the merchant, they explicitly accept the risk that these solutions present.  That is the beauty of the PCI standards, if a merchant’s acquiring bank approves of something, then the merchant is allowed to do it.  However, very few merchants using these solutions understand the risk these solutions present to them.

First is the risk presented by the swipe/dip device.  Some of these devices encrypt the data at the swipe/dip but not all.  As a result, you should ask the organization if their swipe/dip device encrypts the information.  If it does encrypt, then even if the smartphone/tablet comes in contact with the information, it cannot read it.  If it is not encrypted, I would move on to the next mobile payments solution provider.

The second risk presented is the smartphone/tablet keyboard logger.  This feature is what allows your mobile device to guess what you want to type, what songs you like and a whole host of other conveniences.  However, these keyboard loggers also remember anything typed into them, such as primary account numbers (PAN), driver’s license numbers and any other sensitive information they come into contact with.  They can remember this information as long as it is not overwritten in the device’s memory.  Depending on how much memory a device has, this can be anywhere from weeks to months.  One study a few years back found that information could be found on mobile devices for as long as six months, with an average of three months.

While encrypting the data at the swipe/dip will remove the risk that the keyboard logger has CHD, if you manually key the PAN into the device, then the keyboard logger will record it.  As a result, if you are having a high failure rate with swiping/dipping cards, you will have a lot of PANs contained in your device.

The bottom line is that if you ever lose your mobile device or trade it in, you risk exposing CHD if you do not properly wipe the device.  It is not that these solutions should not be used, but the purveyors of these solutions should be more forthcoming about the risks of using them so that merchants can make informed decisions beyond the cheap interchange fees.

There are more things merchants can do to keep it simple and I will discuss those topics in a future post.

22 Jun 14

Keep It Simple

I keep reading articles and blog posts about all sorts of security solutions and how to secure an organization from the onslaught of network attacks.  However, all of these discussions seem to assume that everyone is a Fortune 500 company.  Threat intelligence, hack backs, APT, etc. are discussed as though everyone has the ability to implement the recommendations presented.

An important statistic that all of us in the security profession need to remember is that the vast majority of organizations are not Fortune 500 organizations.  As of 2008 (the latest statistics I could find from the US Census Bureau), there are almost 6 million businesses in the United States.  The Fortune 500 therefore comprises 0.0084% of the total businesses in the US.  To make matters worse, organizations that employ fewer than 100 employees make up 98.1644% of all employers in the US.  I would guess that these statistics are relatively consistent around the world.

The reason these statistics are important is that security professionals need to pull their collective heads out of their posteriors and stop making security so hard that it is impossible for the 98.1644% to implement.

Do you now understand the frustration of most business people?

They do not have a security staff of tens or even hundreds to tackle security issues.  They are lucky if they have an IT person or two.  If they can afford it, they outsource and do everything possible to make themselves secure, but they only can do so much and their resources are extremely limited.  Particularly so given the Great Recession that they just survived.

Margins for small businesses are very slim.  You can argue all you want that today’s businesses are only competitive if they leverage technology and that technology comes with prerequisites.  However, have we created an environment where the hurdle to be in business is now so high that small businesses are just going to be targets regardless?

As a result, I challenge the security community to come up with realistic solutions to security.  We need to develop common sense, simple but effective security methods so that the 98.1644% of organizations are reasonably protected.  Granted, security is not perfect, but we have got to stop discussing security and privacy as though every business is a Fortune 500.  They are not and our solutions and recommendations need to reflect that fact.

This brings me back to the PCI DSS.  If all of the requirements of the PCI DSS could be executed 99.9999% of the time every day, would it keep an organization secure (recognizing that security is not perfect)?  I believe it would.  But it’s that consistency of execution that is the problem regardless of organizational size.

So let us refocus our priorities and help the vast majority of the world get secure.

I have tried to do that with this blog, but I too have been seduced by the big dollars that the Fortune 500 toss my direction.  In my very humble opinion, I think we need to get back to our roots and do more for the vast majority that are struggling with security.

That said, in the process of simplifying, maybe there will be some opportunities for the Fortune 500 to find solutions that are less complex.  It could be a win-win for everyone.

18 May 14

Adventures In Finding Cardholder Data

On page 10 of the PCI DSS v3 under the heading of ‘Scope of PCI DSS Requirements’, second paragraph, is the following sentence.

 “At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

Under the first bullet after that paragraph is the following.

 “The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined CDE.”

In the past, organizations would rely on their database and file schemas along with their data flow diagrams, and the project was done.  However, the Council has come back and clarified its expectations for the search for cardholder data (CHD), primarily the primary account number (PAN).  The Council has stated that this search needs to be more extensive to prove that PANs have not ended up on systems where they are not expected.

Data Loss Prevention

To deal with requirement 4.2, a lot of organizations invested in data loss prevention (DLP) solutions.  As a result, organizations with DLP have turned those DLP solutions loose on their servers to find PANs and to confirm that PANs do not exist outside of their cardholder data environment (CDE).

Organizations that do this quickly find out three things: (1) the scope of their search is too small, (2) their DLP solution is not capable of looking into databases, and (3) their DLP tools are not as good at finding PANs at rest as they are at finding PANs in motion, such as in an email message.

On the scope side of the equation, it is not just servers that are in scope for this PAN search; it is every system on the network, including infrastructure.  However, for most infrastructure systems such as firewalls, routers and switches, it is a simple task to rule them out for storing PANs.  Where things can go awry is with load balancers, proxies and Web application firewalls (WAF), which can end up with PANs inadvertently stored in memory and/or on disk due to how they operate.

Then there is the scanning of every server and PC on the network.  For large organizations, the thought of scanning every server and PC for PANs can seem daunting.  However, the Council does not specify that the identification of CHD needs to be done all at once, so such scanning can be spread out.  The only time constraint is that this scanning must be completed before the organization’s PCI assessment starts.

The second issue that organizations encounter with DLP is that their DLP has no ability to look into their databases.  Most DLP solutions are fine when it comes to flat files such as text, Word, PDF and Excel files, but the majority of DLP solutions have no ability to look into databases and their associated tables.

Some DLP solutions have add-on modules for database scanning, but that typically requires a license for each database instance to be scanned and thus can quickly become cost prohibitive for some organizations.  DLPs that scan databases typically cover the more common databases such as Oracle, SQL Server and MySQL.  But legacy enterprise databases such as DB/2, Informix, Sybase and even Oracle in a mainframe environment are only supported by a limited number of DLP solutions.

Another area where DLP solutions can have issues is with images.  Most DLP solutions have no optical character recognition (OCR) capability to seek out PANs in images such as images of documents from scanners and facsimile machines.  For those DLP solutions that can perform OCR, the OCR process slows the scanning process down considerably and the false positive rate can be huge particularly when it comes to facsimile documents or images of poor quality.

Finally there is the overall issue of identifying PANs at rest.  It has been my experience that using DLP solutions for identifying PANs at rest is haphazard at best.  I believe the reason is that most DLP solutions rely on the same openly available regular expressions (RegEx) to find the PANs.  As a result, they all suffer from the same shortcomings of RegEx and therefore their false positive rates end up being very similar.

The biggest reason for the false positive rate is the fact that most of these solutions using RegEx do not conduct a Luhn check to confirm that the number found is likely to be a PAN.  That said, I have added a Luhn check to some of the open source solutions and it has amazed me how many 15 and 16 digit combinations can pass the Luhn check and yet not be a PAN based on further investigation.  As a result, having a Luhn check to confirm a number as a potential PAN reduces false positives, but not as significantly as one might expect.

The next biggest reason RegEx has a high false positive rate is that RegEx looks at data both at a binary level and character level.  As a result, I have seen PDFs flagged as containing PANs.  I have also seen images that supposedly contained PANs when I knew that the tool being used had no OCR capability.

I have tried numerous approaches to reduce the level of false positive results, but have not seen significant reductions from varying the RegEx expressions.  That said, I have found that the best results are obtained using separate expressions for each card brand’s account range versus a single, all-encompassing expression.
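
For those wanting to experiment with this approach, here is a minimal sketch combining brand-specific expressions with a Luhn check.  The prefixes and lengths are deliberately simplified assumptions, as real account ranges are broader and change over time.

```python
import re

# Simplified, illustrative prefixes/lengths per brand -- treat these
# patterns as assumptions, not authoritative account ranges.
BRAND_PATTERNS = {
    "visa":       re.compile(r"\b4\d{15}\b"),
    "mastercard": re.compile(r"\b5[1-5]\d{14}\b"),
    "amex":       re.compile(r"\b3[47]\d{13}\b"),
    "discover":   re.compile(r"\b6011\d{12}\b"),
}


def luhn_ok(number: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    total, parity = 0, len(number) % 2
    for i, ch in enumerate(number):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def find_candidate_pans(text: str) -> dict:
    """Scan text with per-brand patterns, keeping only Luhn-valid matches."""
    hits = {}
    for brand, pattern in BRAND_PATTERNS.items():
        valid = [m for m in pattern.findall(text) if luhn_ok(m)]
        if valid:
            hits[brand] = valid
    return hits


if __name__ == "__main__":
    sample = "order 4111111111111111 ref 4111111111111112 invoice 1234567890123456"
    # Only the Luhn-valid Visa test number survives the two-stage filter.
    print(find_candidate_pans(sample))
```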

Simple Solutions

I wrote a post a while back regarding this scoping issue when it was introduced in v2.  It documents all of the open source solutions available such as ccsrch, Find SSNs, SENF and Spider.  All of these solutions run best when run locally on the system in question.  For small environments, this is not an issue.  However, for large organizations, having to have each user run the solution and report the results is not an option.

In addition, the false positive rates from these solutions can also be high.  Then there is the issue of finding PANs in local databases such as SQLite, Access or MySQL.  None of these simple solutions are equipped to find PANs in a database.  As a result, PANs could be on these systems and you will not know it using these tools.

The bottom line is that while these techniques are better than doing nothing, they are not that much better.  PANs could be on systems and may not be identified depending on the tool or tools used.  And that is the reason for this post, so that everyone understands the limitations of these tools and the fact that they are not going to give definitive results.

Specialized Tools

There are a number of vendors that have developed tools specifically to find PANs.  While these tools are typically cheaper than a full DLP solution and some of them provide for the scanning of databases, it has been my experience that these tools are no better or worse than OpenDLP, the open source DLP solution.

Then there are the very specialized tools that were developed to convert data from flat files and older databases to new databases or other formats.  Many of these vendors have added modules to these tools in the form of proprietary methods to identify all sorts of sensitive data such as PANs.  While this proprietary approach significantly reduces false positives, it unfortunately makes these tools very expensive, starting at $500K and going ever higher based on the size of the environment in which they will run.  As a result, organizations looking at these tools will need more than just their need for PAN search capability to justify the cost.

The bottom line is that searching for PANs is not as easy as the solution vendors portray.  And even with extensive tuning of such solutions, the false positive rate is likely going to make the investigation into your search results very time consuming.  If you want to significantly reduce your false positive rate, then you should expect to spend a significant amount of money to achieve that goal.

Happy hunting.

26 Apr 14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, PCI-uninformed salespeople pushing redirected ecommerce solutions by claiming that they put the merchant entirely out of scope.  This was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The items from requirement 1 are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101 and I would doubt that any merchant would not have a firewall protecting all of their Internet facing servers.  Routers have always been optional and, if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.

Requirement 4 is provided to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.

Requirement 10 is important for any forensic research should the redirect be manipulated, so that it can be determined when that event occurred and therefore what the scope of any compromise was.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue about them.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
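
As a simple illustration of what critical file monitoring of the redirect boils down to, here is a minimal sketch that compares the current hash of the page issuing the redirect against a known-good baseline.  The file paths are hypothetical, and a commercial file integrity monitoring product would obviously cover far more than a single file.

```python
import hashlib
import sys

# Hypothetical paths: the page (or template/config) that issues the redirect
# and a file holding its known-good SHA-256 digest.
REDIRECT_FILE = "/var/www/checkout/redirect.php"
BASELINE_FILE = "/var/www/checkout/redirect.php.sha256"


def sha256_of(path: str) -> str:
    """Hash a file in chunks so large files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


if __name__ == "__main__":
    current = sha256_of(REDIRECT_FILE)
    baseline = open(BASELINE_FILE).read().strip()
    if current != baseline:
        # In practice this alert would feed the help desk/on-call process,
        # and the page would be taken out of service until recertified.
        print(f"ALERT: {REDIRECT_FILE} has changed (expected {baseline}, got {current})")
        sys.exit(1)
    print("Redirect page matches baseline.")
```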

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

01 Mar 14

How Did It Happen?

This is just my supposition on how the Target breach occurred, but it is based on what has been released to date plus what limited knowledge I have of Target’s environment, the environments of other large retailers and my many years of penetration testing.

Fazio Mechanical Services

According to the latest reporting, Fazio Mechanical Services (Fazio) is believed to be the starting point of the Target breach.  From what has been reported, a Phishing attack on Fazio yielded access to Fazio’s computer systems and network.  In their statement regarding the breach, Fazio says:

 “Fazio Mechanical does not perform remote monitoring or control of heating, cooling or refrigeration systems for Target.”

“Our data connection with Target was exclusively for electronic billing, contract submission and project management, and Target is the only customer for whom we manage these processes on a remote basis. No other customers have been affected by the breach.”

If we take Fazio at their word, Fazio did not have direct access to Target’s network.  That means if Fazio was breached, that breach did not result in a direct path to Target’s network.  Brian Krebs reported that he spoke with an ex-Target employee who told him that Target uses the Ariba Supplier Management solution for managing its external vendors.  The Ariba system is available publicly on the Internet but it requires credentials in order to gain access to the application.

Based on these facts, my guess is that the Fazio attackers were likely separate from the Target attackers.  Therefore, the Fazio breach is like most breaches; the attackers get in, probe around and then leave if nothing of value can be identified.  That is not to say that Fazio could not have been targeted as a way into Target, but I find it unlikely that Fazio was compromised specifically for the Target breach.

The Fazio attackers likely advertised the information and credentials that they gathered to other attackers on the Internet “underground” and sold them to whoever was willing to pay including the Target attackers.

The Russians

In my opinion, the Russians that eventually sold the card information were probably not the actual attackers that retrieved the cardholder data from Target.  However, they likely could have been behind the attack as the folks that instigated it and funded it.  Other than selling the cardholder information, until these individuals admit their role, we will probably never know if they were just a fence for the information retrieved or if they were behind the attack.

In my scenario, the Russians began scoping out likely candidates for compromise and picked Target because they found information on the Internet “underground” and determined that it was likely possible to successfully get in and get information.  Once the research was done, they then assembled a team to get the actual attack done.

The Malware

In reading the various news accounts, the Secret Service indicated that the attack was sophisticated.  According to the infamous Microsoft case study, Target had implemented Microsoft Windows Server Update Services (WSUS), now part of Microsoft System Center Operations Manager (SCOM), at all of their stores so that they could deploy updates to their stores in the smallest possible time frame.  In the retail business, IT people get very small windows of opportunity to perform updates, so this architecture would provide IT with the ability to stage updates and then deploy them as quickly as possible.

A lot of people have commented throughout the numerous discussions of the breach on Google+, LinkedIn and Twitter questioning how the attackers could have compromised so many POS systems so quickly.  It is my opinion that this was done through SCOM.

But there is a huge problem with using SCOM when the software is not Microsoft’s – SCOM can be somewhat to very temperamental when it comes to deploying non-Microsoft software and updates.  Over the years it has gotten better with some non-Microsoft solutions, but deploying malware via SCOM and having it work right the first time requires knowledge of not only SCOM but also the Target computing environment.

This brings me to the fact that I believe an insider had to have been involved in the breach.  Not necessarily an actual Target employee, although that cannot necessarily be ruled out, but more likely a knowledgeable contractor.  Like all large corporations, Target outsources development to contractors that have offices and staff located all over the world.  Those contractors also have their own contractors that are located all over the world.  It is my opinion that the Russians compromised one or more contractors with development knowledge of Target’s POS application and deployment of the POS software.  This was required to develop the malware from the BlackPOS code and develop a one-time successful deployment capability using SCOM.  Whether or not these individuals were actually part of the attack team is debatable.  They would only be needed to develop the solution and the SCOM deployment scripts and possibly procedures to avoid Target’s QA process.

Outsourced contractors in third world countries can be readily bought.  People in the West forget that these developers can be making anywhere from cents per hour to only a few dollars an hour.  That is why development work is outsourced to them, as it is more cost effective than using developers who make one hundred dollars per hour or even more.

But that brings up an interesting conundrum in this breach.  If a contractor was compromised, could they not still be involved in Target’s development efforts and just deliver the malware directly as part of their deliverable?  I think that could have been a possibility, but it would have risked being discovered in Target’s code review, quality assurance and testing processes which is probably why the malware was not delivered by that method.

The Attackers

The attackers could have come from anywhere, but most likely are from Russia or one of the former Russian states such as Ukraine or Belarus.  The reason this is most likely is that the people that sold the Target cardholder data were Russians and they would want people with their same background to execute the attack as well as having some amount of control over the attack team.

The Attack

The attackers that broke into Target likely went shopping for ways into Target and found the Fazio Ariba credentials for Target as well as probably other credentials to other publicly available Target applications.  The attackers either bought those credentials or had their Russian bosses purchase those credentials.

I had to put my penetration testing hat on to figure out how the Ariba credentials came into play.  The reason is that if Ariba is available from the Internet to anyone, why would an attacker need credentials?  Then it dawned on me.  They needed the credentials in order to compromise Target quietly.

My rationale for this is that Target does a decent job at securing publicly facing applications, particularly since their 2007 breach.  Assuming the Ariba application was properly implemented, doing an attack without the credentials would have alerted Target’s information security personnel and it would have been game over.

As a result, the attackers needed the credentials so that they could gain access to Ariba and then compromise it with a cross site scripting attack, SQL injection or whatever they used to gain access to one or more of the Ariba servers, and from there breach the rest of Target’s network, specifically the SCOM system(s).  The reason this approach would be more likely to go unnoticed is that the attackers would have valid credentials and any anomalous activity would likely be written off by Target personnel.

This brings us to the next reason I believe an insider is involved.  The timeline discussed thus far gives the impression that the breach was a fairly quick operation.  The only way the breach could have been conducted so quickly is if the attackers had knowledge of where they needed to go to compromise the SCOM system.

That said, the database of Target guests that was also retrieved was likely collateral damage in that it was encountered during the attack and was taken so that the attackers did not walk away empty handed.  The other possibility is that the database was used to test the data exfiltration process to ensure it would go undetected.

Once the attackers owned the Ariba system, they would then have had access to the Ariba administrators.  The insider would have given the attackers an idea of where the SCOM system was located and probably who had access to it.  It then became a process of compromising one of those administrators to gain access to SCOM.  Because the attackers were inside Target’s network, the administrators were likely compromised using an extremely targeted phishing attack delivered through the internal email system.  As a result, the phishing message would have looked even more legitimate because it was internally generated and delivered.  The message likely contained some sort of Word or Excel document carrying backdoor software that would not be detected by the anti-virus solution.

However, another option could have been used once the attackers were inside.  They could have approached any of the administrators, pretended to be a contractor and asked for access to SCOM in the test environment.  From there they could have staged their malware and then pushed it through the QA process.  Regardless of how they gained access to SCOM, the attackers had to have used the SCOM system to deploy their malware with the speed at which it was deployed.

Creating the data dispersal server was a straightforward problem.  With the insider’s knowledge, the attackers knew where FTP was implemented and merely compromised that server to act as their own collection point so as not to arouse suspicion.  To get the data out of Target they used DNS, since every system needs access to DNS.  A lot of people have argued that Target should have seen the exfiltration of the data via DNS and have pilloried Target for their ineptitude.  However, if the attackers were as sophisticated as they have been portrayed, they likely constructed their exfiltration system to mimic the size of valid DNS packets, leaving traffic volume as the only possible trigger.
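
To make that detection point concrete, here is a minimal, hypothetical sketch of the kind of volume-based review a monitoring team could run against exported DNS logs.  The file layout, column names and threshold are my assumptions for illustration only; this is not anything Target actually used, nor anything the PCI DSS prescribes.

# Hypothetical volume-based DNS review.  Assumes a CSV export of DNS logs with
# columns: timestamp, source_ip, query_name, bytes.  Threshold is illustrative only.
import csv
from collections import defaultdict
from datetime import datetime

BYTES_PER_HOUR_THRESHOLD = 5 * 1024 * 1024  # flag any host sending over 5 MB of DNS traffic in an hour

def flag_noisy_dns_hosts(log_path):
    totals = defaultdict(int)  # (source_ip, hour) -> total bytes of DNS traffic
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            hour = datetime.fromisoformat(row["timestamp"]).strftime("%Y-%m-%d %H:00")
            totals[(row["source_ip"], hour)] += int(row["bytes"])
    return [(ip, hour, total) for (ip, hour), total in totals.items()
            if total > BYTES_PER_HOUR_THRESHOLD]

if __name__ == "__main__":
    for ip, hour, total in flag_noisy_dns_hosts("dns_log.csv"):
        print(f"Review {ip}: {total} bytes of DNS traffic during {hour}")

The point is not the specific threshold but that, without some form of volume analysis like this, exfiltration hidden inside normally sized DNS packets is very hard to spot.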

Is this scenario correct?  We will not know unless a final report that gives actionable information is ever released.

That said, I am sure a lot of you reading this are shaking in your boots based on this scenario.  That fear is likely based on the realization of how ill equipped your organization is to deal with this sort of attack.  And you should be scared.  This is a war of escalation that we are waging.  Organizations step up their game and the attackers up the ante on their side.  The famous saying, “When chased by a bear, I only have to outrun the last person to save myself,” is very true in this situation as well.  Your organization’s security game only has to be better than the other organizations’.  But when an organization like Target, considered to be at the top of the security game, is breached, what chance does an organization with mediocre security have?

The only saving grace might be that your organization is flying under the radar.  I say “might be” because, according to the majority of reports on the state of information security, most organizations have no idea that they have been compromised.  That is because people rely on anti-virus and other technologies that have a poor track record of identifying malware and sophisticated attacks.  And then, as we learned in this past week’s report on the Neiman Marcus breach, information security personnel can write off discovered malware as false positives and let it re-infect systems for months without investigating or even worrying about what is going on.

It is easy to pillory the guy that got breached.  However, a lot of you should look inside your own organizations before tossing stones.  I would guess that most of you tossing those stones would not fare any better, and likely worse, than Target should your organization be breached.

08
Feb
14

Pre-Authorization Data

After a number of interactions with a variety of people over the last few weeks, it has become obvious that the concept of pre-authorization data is not clear to a lot of people.  Just because data is pre-authorization data does not mean it is exempt from protection.  The Council has made it very clear that it is to be protected with the same rigor as post-authorization data.

Pre-authorization is defined as the period when an organization has a customer’s sensitive authentication data (SAD) but has not yet processed it for payment.  For most merchants, the time between collecting the SAD and processing it is measured in seconds.  For card present transactions, the SAD can be track or chip data.  For card not present transactions, it typically includes the cardholder name, primary account number (PAN), expiration date and CVV2/CVC2/CID2.

Here are some situations where things do not happen that quickly.

Phone, facsimile and mail orders can extend the time between receipt of the SAD and processing by seconds, minutes or even hours.  On the bizarre side of things, at one client I encountered people sending their physical cards through the mail so that their mail orders could be processed.

At the fuel pump, when a customer swipes their card, the system that manages the payment process will pre-authorize the purchase and then store the SAD until the customer finishes pumping fuel and hangs up the hose to complete the transaction.  That could be five to ten minutes depending on fuel tank size.

In some industries the time of pre-authorization can be weeks, months and in some cases even a year or more.  Where this typically occurs is in the airline and hospitality industries.

When a customer makes an airline reservation, the airline will likely pre-authorize the airfare but may not actually charge the card until 7, 14, 60 or 90 days before check in, or even until the customer checks in.  This can result in the SAD being stored for weeks or even months.

In the hospitality industry, a reservation typically does not cause a charge until a customer checks out, even though a card is required on file to hold the reservation.  When a customer checks into the property, the hotel’s billing system records the SAD and may also pre-authorize charges, but the actual card transaction is not processed until the customer checks out.  I have encountered SAD in hospitality systems that had been stored for more than a year due to reservations for special occasions such as graduations, birthdays, family reunions and anniversaries.  New versions of hospitality management systems encrypt pre-authorization data; however, older systems did not, and the security of that pre-authorization data may not be as rigorous as it should be.

eWallet applications are another great example of pre-authorization data.  eWallets are just an electronic version of a consumer’s physical wallet.  eWallets can be applications on a smartphone or tablet, or a specialized device such as Coin.  eWallets are not regulated by the Council or the card brands and never will be, just as traditional wallets are not regulated.  That said, developers of eWallet applications should follow the PA-DSS to build them securely.

The most confusion over pre-authorization data typically occurs over SAD that an organization receives via email.  A lot of QSAs get their “undies in a bunch” over this and point to requirement 4.2 as the reason why this is unacceptable.  Requirement 4.2 states:

 “Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, chat, etc.).”

The operative word in 4.2 is “send”.  Requirement 4.2 says nothing about receiving PANs by these methods, and the reason is that such data is pre-authorization data.  That does not mean the Council recommends receiving PANs via email, IM or similar methods.  It is only recognition of what goes on in the real world.  There will always be a small percentage of people who will send their cardholder data insecurely, and there is little an organization can do to stop it.  But before you even start the argument, you need to acknowledge that this data is pre-authorization data because a transaction has yet to be processed.

To keep your email system out of scope, the Council has told QSAs to make sure that organizations have a process to remove the SAD from the system(s) in question as soon as possible.  That typically involves printing the message, deleting the message from the system and then securely destroying the printed message once the transaction is processed.  This is all considered “incidental contact” in the eyes of the Council, and the QSA can then consider the system out of scope as long as they can satisfy themselves that the manual process is reliable and consistently executed.
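
As a sketch of how organizations sometimes support that manual process, the following hypothetical snippet flags candidate PANs in inbound message text using a simple pattern match plus a Luhn check, so offending messages can be printed, deleted and destroyed per the process above.  The function names and pattern are my assumptions for illustration; the Council does not prescribe any particular tooling.

# Hypothetical helper for spotting candidate PANs in inbound message text so the
# print-delete-destroy process can be triggered.  Illustrative only.
import re

PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    # Standard Luhn mod-10 check used to weed out random digit strings.
    checksum = 0
    for i, digit in enumerate(reversed([int(d) for d in number])):
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        checksum += digit
    return checksum % 10 == 0

def find_candidate_pans(text: str) -> list:
    # Return the digit strings in the text that look like PANs (13-19 digits, Luhn valid).
    hits = []
    for match in PAN_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits

A check like this only identifies likely cardholder data; the documented manual process still has to do the actual removal and destruction.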

Web sites that store customers’ SAD for future purchases are also dealing in pre-authorization data.  Merchants providing such a service should contractually acknowledge their responsibility to protect that data with the same rigor as post-authorization data and ensure that the stored data is used only for purchases made by the customer on the merchant’s Web site.

My final example of pre-authorization data is when an organization’s travel department stores employees’ card data for company travel reservations.  The systems used by these departments typically store the cardholder name, PAN, expiration date and CVV2/CVC2/CID2.  When the PCI DSS was originally published, the card brands instructed QSAs that these sorts of systems were not in scope for an organization’s Report On Compliance (ROC).  However, as time has passed, some of the card brands now want those systems included in the ROC, so organizations need to contact the card brands to determine whether such systems belong in their ROC.  These systems can be notable for how little protection they provide for SAD, so regardless of whether they are included in the organization’s assessment, the organization should review the security of the application against the PCI DSS and mitigate any gaps.  As with the aforementioned Web sites that store pre-authorization data, an organization storing SAD for its employees should legally acknowledge its responsibility to protect that data as well as document how it may be used.

The bottom line is that all of these situations involve pre-authorization data, and pre-authorization data can include everything recorded on a card’s magnetic stripe or chip.  Merchants can store pre-authorization data until a payment transaction is processed.  If they do store it, they are obligated under the PCI DSS to protect it with the same rigor as post-authorization data, meaning encryption, extremely limited access, logging, monitoring and so on.
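
As an illustration of that “same rigor” point, here is a minimal sketch of encrypting pre-authorization data before it is written to storage.  It assumes the widely used third-party Python cryptography package; key management, access control, logging and monitoring would all still need to be addressed separately, and nothing here is an official PCI SSC recipe.

# Minimal sketch of encrypting pre-authorization data at rest.
# Assumes the third-party "cryptography" package; key management is out of scope here.
from cryptography.fernet import Fernet

def new_key() -> bytes:
    # In practice the key belongs in an HSM or key management service, not on disk.
    return Fernet.generate_key()

def encrypt_pan(key: bytes, pan: str) -> bytes:
    return Fernet(key).encrypt(pan.encode())

def decrypt_pan(key: bytes, token: bytes) -> str:
    return Fernet(key).decrypt(token).decode()

if __name__ == "__main__":
    key = new_key()
    token = encrypt_pan(key, "4111111111111111")  # industry test PAN, not real cardholder data
    assert decrypt_pan(key, token) == "4111111111111111"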

Hopefully we are now all on the same page.



