Archive for May, 2012

29 May 12

The Failure Of PCI?

Steve Sommers of Shift4 has posted an interesting take on the PCI DSS and its value to merchants and service providers, particularly in the wake of the Global Payments breach.  Steve has asked me to comment on his post and here is my take.

This quote speaks to the frustration a lot of merchants and service providers feel.

“The only thing PCI did was round up a bunch of existing security best practices, compile them into lists, and publish these lists as “guidance” documents.”

No doubt about it, the PCI DSS is an amalgam of various security “best practices” bundled together and published by the PCI SSC.  I remember back in 2003 when the PCI DSS was the Visa CISP, fresh off the presses as an Excel spreadsheet, still embedded with the review comments from Visa and the consultants that created the CISP.

But what are the card brands to do?  Breaches were occurring all over the place when they started down this road.  The media reports that so many Visa or MasterCard accounts were breached and then talks about the merchant or service provider involved.  Visa and MasterCard are trying to protect their brand names because studies show that the public remembers the card brand names but quickly forgets the names of the merchants or service providers breached.  As a result, they feel the need to develop guidelines to protect cardholder data and minimize the number and size of breaches.

Was life better when Visa, MasterCard, American Express, Discover and JCB all had their own individual standards?  You all complain now about one ROC or SAQ; how would you like to fill out a different form for each card brand every year?  I would guess your answer is a huge ‘No’.

But the bigger question is, if not the PCI DSS, then what standard?  ISO 27002?  FISMA?  FFIEC?  HIPAA HITECH?  I have yet to find a security standard that everyone can agree on, let alone agree to follow.  People complain about every one of the information security standards.  And then there is that ugly mantra of “compliance is not security,” but I have already covered that ground.  So see my posts on why I think that saying is just a dodge.

All security standards are just a starting point or ante into the game.  A number of friends of mine have remarked that their security programs only have to be a little bit better than everyone else’s to keep them off the target list.  That, however, is the key; you need to be better than the other guy.  But will the other guy tell you what he is doing when he has the same strategy?  Standards like the PCI DSS give you that benchmark to start from so you know where you need to be better than the rest.

But the biggest problem with standards all comes down to the fact that humans are really averse to being measured or assessed against a standard.  Why?  It makes people responsible and accountable for what they do and few people want that sort of accountability – we all much prefer “wiggle room” in how our jobs are assessed.

“Then the card brands attached fines and penalties to punish merchants if they failed to comply with PCI “guidance” 100% of the time.”

“To me, the issue is this: PCI SSC promotes their work as “best practices” or “guidance”, and then the card brands turn around and flog merchants for not following them when they are breached.”

Steve is right on the mark with these statements.  As he stated earlier in his post and what I frequently state here, security is not perfect.  All security does is reduce the risk of a breach but does not remove that risk in its entirety.  As a result, even with PCI DSS compliance, breaches will still occur, but the goal is to make them much less frequent and a number of orders of magnitude smaller in the amount of data released.

This brings us to the heavy-handedness in how the card brands handle breaches.  All I can say is that some of my friends in the forensics field tell me there are a number of breaches they have investigated that were not the result of PCI non-compliance.  So Visa and PCI SSC GM Russo need to back off on their statement that “No compromised entity to date has been found to be in compliance with PCI DSS at the time of the breach.”  According to my sources, this is patently not true anymore and the card brands are not happy about that fact.

The card brands, Visa in particular, seem to reject the premise that security is not perfect and keep pushing the idea that the PCI DSS, if followed to the letter, is the solution to breaches.  None of this is true, and security professionals know it.  As a result, we end up with media quotes from the card brands, PCI SSC representatives and security professionals that are at times out and out asinine.  Until we can all come to grips with these facts, we will continue playing a game of spin.  And spin gets us nowhere.

“I personally believe that PCI is written in such a way – and interpretations among QSAs vary so much – as to make it impossible for anyone to be 100% compliant 100% of the time.”

The flexibility in the PCI DSS is there because security professionals and their employers would not have it any other way.  Would you prefer a dictatorial standard that specifically calls out solutions and vendors?  What would people be saying if only Cisco and Juniper firewalls, routers and switches were allowed?  What would Microsoft say if Windows was not allowed?  What would other vendors say if only MICROS POS solutions were approved?  What if only VLANs with specific ACLs were the only allowed method of network segmentation?  Can you say lawsuits?

The bigger problem that the PCI SSC needs to address is QSA/ISA training.  A lot of QSAs are great technologists, but would not know a good or bad control environment if it bit them in the posterior.  Fewer QSAs and most ISAs know controls, but would not know a proper firewall or router configuration to save their lives.  And finally, there are very, very few QSAs and ISAs that know both the technology and the controls.  Unfortunately, the PCI SSC has not found a way to winnow out the QSAs and ISAs so that only the ones that know both technology and controls remain.

But even in such a perfect world, each QSAC has its own tolerance of risk.  As a result, what is PCI DSS compliant to one QSAC is not necessarily going to be acceptable to another QSAC because of the risk they are being asked to accept.  Firms like mine are fairly risk averse, so we are going to be more demanding when it comes to what is PCI compliant than other QSACs.  But by the same token, I do not believe we are unreasonable in what we demand for PCI compliance.

At the end of the day, while the PCI DSS is not perfect, it does provide the following benefits to merchants and service providers.

  • It provides a way to help everyone learn from the other guy’s mistakes.  As attack tactics change, so do the PCI standards to address tactics that might not be covered.
  • It gives everyone the baseline of what they need to execute 24x7x365 if they want even a better than average chance at security.

Prior to the PCI DSS, Visa CISP and the other standards, it was a crap shoot as to whether or not an organization’s security was going to be up to snuff.  I do not think that is where anyone wants to go.

Steve, I understand your frustration and the frustration and pain of merchants and service providers.  But if what I have stated here is not a net positive, I do not know what is.  Is it perfect?  Nothing in the world is perfect.  But there are some changes that would improve the program and make it seem much less painful and frustrating.  We just need to continue to work on the PCI SSC and the card brands to see the light and make the necessary changes.

25 May 12

What To Do About Insiders

The first posting I did on this subject was to provide an understanding that, despite the news stories, the insider threat is a very real threat and needs to be addressed.  However, what is an organization to do?  Employees and others need to have access to certain information in order to get their jobs done.  What steps should an organization take to minimize the insider threat?

First, I need to be very clear about this.  Even when you do all of what I recommend, you are only minimizing the insider threat.  The insider threat can never be totally mitigated.  Insiders must have access to information that the general public or even your business partners do not have access to.  As a result, should an employee get sloppy with controls or go “rogue,” you can expect to lose whatever information that person had access to.  Remember my mantra – security is not perfect.

I posted some ideas a while back on controls for automation.  Here are my minimum recommendations for manual controls to put into place to minimize the insider threat.

  • Management needs to recognize the importance of management controls.  The “tone at the top” really does mean something when it comes to controls.  However, management needs to understand that these sorts of controls are no absolute guarantee of avoiding issues.  Properly implemented, monitored and adjusted as necessary, such a control environment will confirm to the rest of the organization that management believes that controls are important.  If management does not know what to do regarding management controls, then they should consult with a public accounting firm as they are very aware of control environments and can assist in the design of a control environment.
  • Preventive controls.  Preventive controls, as their name implies, put something in place to prevent a problem.  A prime example of a manual preventive control is requiring a minimum of two signatures on checks.  The larger the amount on the check, the more people that have to sign off on it.  Under such an approach, multiple people have to collude to defraud the system (a minimal code sketch of this dual-control idea follows this list).  This sort of approach can also be taken for report reviews of inventory, cash on hand and any other metrics that are important to the survival of the organization.  The idea is to ensure that at least two people are involved in these reviews, that they physically sign off on their review, and that they document and investigate any irregularities.
  • Detective controls.  As the name implies, detective controls are controls used to detect problems.  Following the example in preventive controls, the other people signing off on a check or reviewing a critical metric report are acting as a detective control.  If the reviewer feels that something is not right with what they are reviewing, they are obligated to notify their immediate supervisor of the issue and ask the submitter to physically document the situation.  Once documented, the reviewer can then either sign off and accept the explanation, or refuse and investigate further.
  • Corrective controls.  Corrective controls are those controls used to ensure that the preventative and detective controls are focused on the right problems and are going to be able to be relied upon going forward.  Keeping to the theme, in the event of an irregularity being identified, management should then institute a root cause analysis and determine what caused the situation and make the necessary changes to the preventative and detective controls to ensure that people do not try to circumvent the control environment.
  • Hold employees responsible for the control environment.  Management may be responsible for establishing controls, but it is the employees that make the control environment actually work.  Employees should have their key controls evaluated at least annually to reinforce the importance of controls.  In our check example, the people signing off on checks should be evaluated on how many problem checks they signed off on.
  • Solicit control improvement ideas from employees.  The problem most organizations have with management controls is keeping them relevant.  A common example we see is a problem that occurred ten years ago has been addressed by automated controls in a new computer system, yet management continues to require the manual control to be followed.  Most of the time, employees know exactly what needs to be done, but management does not want to recognize that fact.
  • Have a third party periodically assess your controls.  In addition to employees providing ideas, organizations should periodically invite a third party, such as their accounting firm, to assess the control environment and recommend changes.  A number of years ago I worked with a large organization where we discovered that, because of the way one of their computer systems had recently been modified, checks could be generated that bypassed approvals and oversight.
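
To make the dual-control idea from the preventive controls bullet concrete, here is a minimal sketch of what a two-signature check approval might look like if automated.  The dollar thresholds, names and functions are purely illustrative assumptions on my part, not anything the PCI DSS or accounting standards prescribe.

```python
# Minimal sketch of a two-signature (dual control) preventive control.
# Thresholds, names and functions are illustrative assumptions only.

APPROVAL_TIERS = [
    (10000, 2),    # checks up to $10,000 need two signers
    (100000, 3),   # checks up to $100,000 need three signers
]

def signers_required(amount):
    """Return how many distinct approvers a check of this amount needs."""
    for limit, count in APPROVAL_TIERS:
        if amount <= limit:
            return count
    return 4  # anything larger needs an additional senior sign-off

def check_approved(amount, signers, requester):
    """Block issuance unless enough people other than the requester sign off."""
    independent = set(signers) - {requester}
    return len(independent) >= signers_required(amount)

# A $25,000 check requested by "alice" with only "alice" and "bob" signing is blocked.
print(check_approved(25000, {"alice", "bob"}, "alice"))          # False
print(check_approved(25000, {"bob", "carol", "dave"}, "alice"))  # True
```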

For those of you that are going to recommend these minimum controls, my heart goes out to you.  The road ahead is likely to be very bumpy and contentious if your organization has a mediocre control environment.

Something to share with management as you push this sort of project is that there are very measurable benefits to implementing controls.  Every organization that I have worked with over the years has found that a byproduct of their controls projects has been fewer customer complaints and fewer employee screw ups.  Avoiding problems, or making them smaller and less impactful on customers, can add up to serious savings in time and money.

If you have a mature control environment, take a look at how you can make it better, more effective and more relevant.  If you do not have a mature control environment, then take baby steps.  Look to your accounting area as they will likely have the most robust control environment.  Grab one of those accountants and use them to help you look at other areas that may have problems that controls can address.

Best of luck to all of you on your journey.

10 May 12

More On Scoping

Everyone that is going through the PCI compliance process tries to get systems, processes, whatever, out of scope.  And while getting things out of scope is a good thing, it does not mean that they do not need to be assessed.  And this is one of the most contentious points of a PCI compliance assessment.

One of the biggest misconceptions about the PCI compliance assessment process is that anything an organization declares out of scope does not have to be examined.  The PCI compliance assessment process is all about trust, but verify.  So, when an organization says that a particular element is out of scope, it is up to their QSA to confirm that the item is, in fact, out of scope.

Take for example network segmentation that is used to delineate an organization’s cardholder data environment (CDE).  A QSA is required to confirm that the network segmentation implemented does in fact keep the CDE logically or physically separated from the rest of an organization.  That confirmation process will likely review firewall rules, access control lists and other controls on the network to prove that the CDE is segregated.  And going through these items can sometimes result in a lot of QSA effort, particularly as network complexity increases.
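
As an illustration of one small piece of that confirmation, here is a minimal sketch of a segmentation spot check: run from a host on an out-of-scope segment, it attempts connections to CDE addresses and flags anything that answers.  The addresses and ports are assumptions for illustration; a real confirmation still includes reviewing the firewall rules and ACLs themselves.

```python
# Minimal sketch of a segmentation spot check run from an out-of-scope segment.
# The CDE addresses and ports below are illustrative assumptions.
import socket

CDE_HOSTS = ["10.10.1.10", "10.10.1.11"]     # hypothetical CDE addresses
PORTS = [22, 80, 443, 1433, 3306, 3389]      # common management/database ports

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds from this segment."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in CDE_HOSTS:
    open_ports = [p for p in PORTS if reachable(host, p)]
    if open_ports:
        print("SEGMENTATION GAP: %s reachable on %s" % (host, open_ports))
    else:
        print("%s: no response on tested ports" % host)
```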

Another area where the out of scope effort can be messy is applications and whether they process, store or transmit cardholder data.  Proving that an application does not store cardholder data is typically fairly straightforward.  The QSA just examines the data schemas for files and databases looking for fields named credit card number or any 16 character fields.  A QSA will also typically run queries against the database looking for 16 digit numbers that start with known BINs.  I have been involved in a number of assessments where we have found cardholder data being stored in text and comment fields through our queries.  Determining whether an application is processing or transmitting cardholder data is more complicated and problematic.  It can take quite a lot of effort to determine using an organization’s Quality Assurance or Testing facilities, but it can be accomplished.
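
A minimal sketch of that kind of database check follows.  The database file, table and column names are hypothetical, and sqlite3 simply stands in for whatever DB-API driver the application’s database actually requires; the point is pulling free-text columns and testing them against 16-digit numbers that start with known BINs.

```python
# Minimal sketch of querying free-text columns for 16-digit numbers with known BINs.
# Database, table and column names are hypothetical; sqlite3 stands in for any DB-API driver.
import re
import sqlite3

# Visa (4...), MasterCard (51-55...) and Discover (6011...) style 16-digit patterns.
PAN_PATTERN = re.compile(r"\b(?:4\d{15}|5[1-5]\d{14}|6011\d{12})\b")

conn = sqlite3.connect("orders.db")                      # hypothetical application database
rows = conn.execute("SELECT id, comments FROM orders")   # comment fields are a common hiding spot

for row_id, comments in rows:
    if comments and PAN_PATTERN.search(comments):
        print("Possible PAN in orders.comments, row", row_id)
```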

The biggest clarification for v2.0 of the PCI DSS is that it is the responsibility of the organization being assessed to prove that their CDE is in fact accurate.  This had always been the implicit case, but with v2.0 of the PCI DSS, the PCI SSC has explicitly stated this fact.  Page 11 of the PCI DSS states:

“At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

As a result, the organization being assessed should provide proof to their QSA that they have examined all of their processes, automated and manual, and have determined what is in scope and out of scope.  The results of this self-examination are used by the QSA to confirm that the CDE definition, as documented by the organization, is accurate.

This clarification has resulted in a lot of questions.  The primary one is along the lines of, “How am I supposed to prove that I have assessed my entire environment and made sure the CDE is the only place where cardholder data exists?”  While the implications of this question are obvious for the Wal*Marts and Best Buys of the world, even small and midsized merchants can have difficulties meeting this requirement.  And I can assure you that even the “big boys” with their data loss prevention and other solutions are not hyped on scanning every server and workstation they have for cardholder data (CHD).

For determining whether or not CHD is present in flat files on computers, there are a number of open source (i.e., “free”) solutions.  At the simplest are the following tools.

  • ccsrch – (http://ccsrch.sourceforge.net/) – If this is not the original credit card search utility, it should be.  ccsrch identifies unencrypted and numerically contiguous primary account numbers (PAN) and credit card track data on Windows or UNIX operating systems.  One of the biggest shortcomings of ccsrch is that it will not run over a network, so scanning multiple computers is a chore.  The other big shortcoming of ccsrch is that unless the data is in clear text in the file, ccsrch will not identify it.  As a result, file formats such as PDF, Word and Excel could contain CHD and may not necessarily be recognized.  In my experience, ccsrch also tosses back a high number of false positives because of these file format limitations, flagging data that is not a PAN as a PAN.
  • Find_SSNs – (http://security.vt.edu/resources_and_information/find_ssns.html) – While the file name seems to imply it only searches for social security numbers, it also searches for PANs and will do so for a variety of file formats such as Word, Excel, PDFs, etc.  Find_SSNs runs on a variety of Windows and UNIX platforms, but as with ccsrch, it does not run over a network; it must be run machine by machine.  Find_SSNs seems to have a very low false positive rate.
  • SENF – (https://senf.security.utexas.edu/) – Sensitive Number Finder (SENF) is a Java application developed at the University of Texas.  If a computer runs Java, it will run SENF so it is relatively platform independent and supports many file formats similar to Find_SSNs.  That said, as with the previous tools, SENF will not run over a network, it must run on each individual machine.  I have found SENF to have a much lower false positive rate than ccsrch, but not as low as either Find_SSNs or Spider.
  • Spider – (http://www2.cit.cornell.edu/security/tools/) – This used to be my favorite utility for finding PANs.  Spider will scan multiple computers over a network, albeit slowly, and it has a propensity for crashing when run over the network.  However, it also seems to have a low false positive rate that is comparable to Find_SSNs.

I still use Spider and Find_SSNs for scanning log and debug files for PANs as I have yet to find anything as simple, fast and accurate when dealing with flat text files.  And yes, I use both as checks against each other to further reduce the false positive rate.  It amazes me, as well as my clients, how much incidental and occasional CHD we find in log and debug files due to misconfigured applications and vendors who forget to turn off debugging mode after researching problems.
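
For those comfortable rolling their own, a few lines of script plus a Luhn check go a long way toward cutting the false positives that differentiate the tools above.  Here is a minimal sketch; the log directory, file extensions and the 15-16 digit candidate pattern are assumptions for illustration.

```python
# Minimal sketch of a flat-file PAN scanner with a Luhn check to cut false positives.
# The directory, file extensions and candidate pattern are illustrative assumptions.
import os
import re

PAN_CANDIDATE = re.compile(r"\b\d{15,16}\b")

def luhn_ok(digits):
    """Luhn mod-10 check; most random digit strings fail it."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_file(path):
    with open(path, errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            for match in PAN_CANDIDATE.findall(line):
                if luhn_ok(match):
                    print("%s:%d: possible PAN %s******%s" % (path, lineno, match[:6], match[-4:]))

for root, _dirs, files in os.walk("/var/log/app"):   # hypothetical log directory
    for name in files:
        if name.endswith((".log", ".txt", ".dbg")):
            scan_file(os.path.join(root, name))
```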

But I am sure a lot of you are saying, “Flat files?  Who stores anything in flat files these days?”  And that is the biggest issue with the aforementioned open source solutions; none of them will scan a database from a table schema perspective.  If the database data store happens to store clear text PANs as legible text, the aforementioned utilities will find it, but that is pretty rare due to data compression, indexing and other issues with some database management systems.  As such, if you wanted to stay with open source, you had to be willing to use their code as a base and adapt it to scan a particular database and its table schemas, unless you were willing to go to a commercial solution.  That is, until OpenDLP (http://code.google.com/p/opendlp/).

OpenDLP is my personal open source favorite now for a number of reasons.  First, it uses Regular Expressions (RegEx) so you can use it to look not only for PANs, but a whole host of other information as long as it conforms to something that can be described programmatically such as social security numbers, driver’s license numbers, account numbers, etc.  Secondly, it will also scan Microsoft SQL Server and MySQL databases.  And finally, it will scan reliably over the network without an agent on Windows (over SMB) and UNIX systems (over SSH using sshfs).

At least I have gotten fewer client complaints over OpenDLP than I have for Spider for network scanning.  That said, OpenDLP can still tie up a server or workstation while it scans it remotely and it will really tie up a server running SQL Server or MySQL.  As such, you really need to plan ahead for scanning so that it is done overnight, after backups, etc.  And do not expect to scan everything all at once unless you have only a few systems to scan.  It can take a week or more for even small organizations.

But what if you have Oracle, DB/2, Sybase or some other database management system?  Unless you are willing to take the OpenDLP source code and modify it for your particular data base management system, I am afraid you are only left with commercial solutions such as Application Security Inc.’s DbProtect, Identity Finder DLP, ControlCase Data Discovery, Orbium Software’s Schema Detective or Symantec Data Loss Prevention.  Not that these solutions handle every database management system, but they do handle more than one database vendor and some handle most of them.

You should now have some ideas of how to scope your CDE so that you are prepared for your next PCI assessment.

06 May 12

How Many People Does It Take?

There are a lot of jokes that start with the phrase, “How many people does it take …”  But this post is no joke.  I have been taking some heat over my comment that you do not need to know who is attacking you, you only need to focus on what you need to protect.  As such, I felt the need to further explain myself.

The first complaint I get is that it is important for security professionals to know the tactics used by the attacker.

So my first question to all of you is, “How many people does it take to analyze attack vectors?”

We have forensic security organizations such as Verizon, Trustwave and Security Metrics that analyze attacks.  We have security companies such as IBM/ISS, Symantec, Kaspersky and McAfee that analyze attacks.  We have hardware/software vendors such as Checkpoint, Microsoft, Cisco and Palo Alto that analyze attacks.  I would venture to say there are hundreds of reliable sources for the analysis of attacks.  And yet, I am taken to task that you need to have your own analysis of attacks.  These hundreds of other sources just are not enough for you to rely on?  Really?  If you are doing the correct analysis of your vulnerability scanning and penetration testing reports, your attack vector risks should be known and you should have either patched or developed mitigations for those risks.

And while they might be put together in a slightly different sequence, DDoS is still DDoS and a SQL Injection is still a SQL Injection.  The bottom line is that the library of exploits available to an attacker is essentially finite.  This is borne out by the statistics that the forensic firms publish year after year.  As such, you should be able to monitor for all of these attacks fairly easily because they are all known quantities.  Yes, there is the rare Zero-Day that turns up every so often.  But even those can be picked up if you have things configured and implemented properly.  If you think about it, unless an attacker is someone that can develop their own exploit code (and 99% do not), they are limited to whatever exploits are available in the public domain, and that is a known quantity.  Take an inventory of what is available in Metasploit or Core Impact at any fixed point in time and you will see what I mean.

Then there is the group that argues that if you do not do analysis of the attacker, you cannot understand why you are being attacked.

So my second question is, “How many people does it take to give you an idea of why you are being attacked?”

This is pretty straightforward to figure out without some extensive and intensive analysis.  In 99% of cases, you are being attacked for one or more of the following reasons.

  • Your organization has sensitive information such as credit card numbers, bank account numbers, intellectual property or customer information that the attackers want.
  • Your organization has produced a product or service that is perceived to be a safety hazard, overpriced or otherwise a detriment to society.
  • Your organization or an employee has publicly taken a stance on some issue(s) that has irritated some group(s) of people.
  • Your organization has donated money, time, products or services to an organization viewed by some group(s) of people as questionable.

Read the reports published by the forensic firms.  Read the news reports in the media.  If you distill down that information, the reasons for attacks break down into these four basic reasons.  Yet, security professionals continue to worry about the motivations of the attacker.  If you think your attack is unique, you are wasting your time.  The likelihood of your attack not being covered by these four primary reasons is slim to none.

I think these complaints just come down to the fact that doing the actual grunt work of security is just not very sexy work.  There is no doubt about that fact.  Ensuring the security of networks 24x7x365 is very, very monotonous work.  And it is that very monotony that is one of the primary reasons why organizations get breached.  People get bored with the monotony and they start to cut corners on procedures because, in their view, nothing is going on and, therefore, nothing will go on.  Only rotation of people and tasks will address the monotony, but that only works for so long.

This is why security professionals turn to automated tools to minimize reliance on people to flag potential anomalies.  Without tools, people get bored very quickly searching for the “needle in the haystack” through all of the data produced by all of the devices on your network.  However, even with all of the necessary tools, correlation of information still requires people to bring all of the anomalies recognized by the tools together and determine if all of these anomalies warrant further investigation.
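
To illustrate that correlation step, here is a minimal sketch that pulls alerts from several hypothetical tools together and surfaces any source that trips more than one tool within a short window.  The alert format, window and threshold are assumptions; real SIEM correlation is far richer, but the idea is the same.

```python
# Minimal sketch of correlating alerts from multiple tools by source within a time window.
# The alert tuples, window and "more than one tool" threshold are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

alerts = [  # (timestamp, tool, source_ip) - normally fed from IDS, AV, log monitoring, etc.
    (datetime(2012, 5, 6, 2, 14), "ids", "192.0.2.77"),
    (datetime(2012, 5, 6, 2, 16), "av", "192.0.2.77"),
    (datetime(2012, 5, 6, 3, 5), "ids", "198.51.100.9"),
]

WINDOW = timedelta(minutes=15)

by_source = defaultdict(list)
for ts, tool, src in alerts:
    by_source[src].append((ts, tool))

for src, events in by_source.items():
    events.sort()
    tools = {tool for ts, tool in events if ts - events[0][0] <= WINDOW}
    if len(tools) > 1:
        print("%s: flagged by %s within %s - investigate" % (src, sorted(tools), WINDOW))
```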

Even with the necessary tools, you are not out of the woods.  One of the more common problems that we encounter is that organizations have not completely implemented those tools.  How many of you invested in the cool intrusion prevention system and still run it in notification mode?  Even then, those organizations that do completely implement the tools do not always keep up on the “care and feeding” of the tools to ensure that the tools recognize the anomalies.  The tools are current and up to date, but anomalies are not recognized because the tools are not properly configured and tuned to the organization’s current network configuration.  Networks are not the static environments that a lot of people think they are.  As a result, either the number of false positives is so high that personnel ignore the voluminous number of alerts generated, or anomalies are just never identified by the tools.

It is not until someone recognizes an anomaly as a breach that things get interesting.  Then they become very interesting in a hurry.  Unfortunately, the statistics from the forensic firms point to the fact that, if an anomaly does get recognized, it is often many months to years down the road from the original compromise.

And that is where security professionals need to get better.  If you look at how long it took TJX to recognize their breach (years) versus how long it took Global Payments (months, but still counting), we are headed in the right direction.  But when it takes attackers only minutes, hours or even days to get your information, months still does not cut it.  We need to get to days or, better yet, minutes.  That is the challenge security professionals face and that is where we need to focus our efforts.

The PCI DSS is a good foundation, but the requirements of the PCI DSS are not going to get us to our goal.  We must go beyond the PCI DSS to get to our goal and that is a message that the PCI SSC and the card brands have consistently delivered.  The PCI DSS is only a security baseline, the ante into the game.  If you really want to be the best, you need to take your security game beyond the PCI DSS.

So let us start using the PCI DSS properly.  If your organization can execute the requirements of the PCI DSS 24x7x365 at almost 100% compliance, then you are ready to take things to the next level.  If you cannot achieve almost 100% compliance, then you need to work with your organization to get to that level.  Breaches and data loss are never going to go away, but if all organizations followed this approach, the number of breaches and amount of data lost would significantly drop.



