Archive for September, 2010

29 Sep 10

Secure Coding And Application Vulnerability Scanning

Based on some of the mail I am getting these days, there is a lot of confusion regarding secure coding standards and application vulnerability scanning, that is, requirements 6.5 and 6.6.

First, let us talk about the intent of these requirements.  The overall intent of both requirements is to stop insecure applications from being placed into production.  The intent of requirement 6.5 is to ensure that secure coding techniques are part of the system development lifecycle (SDLC) and that the most obvious errors, currently represented by the OWASP Top 10, have been addressed during development.  The intent of requirement 6.6 is to ensure that either code reviews are conducted or an application firewall is used to protect applications.

The most common question I get regarding requirement 6.6 is: since it does not specify what should be tested, does that imply that only the OWASP Top 10 needs to be looked for when conducting the code review?

When will you people learn?  When the PCI DSS does not specify something, you always assume that you need to test everything.  In the case of requirement 6.6, you need to conduct application vulnerability scanning for all potential vulnerabilities, not just the OWASP Top 10.  This will become more important under PCI DSS v2.0 when they add other application vulnerability standards into the mix.  The bottom line is that if all you are testing for is the OWASP Top 10, you are not doing enough testing.

Another area where people get things wrong is that they conduct application vulnerability testing just like they do network vulnerability testing, that is, after the application is in production.  Wrong!  Unfortunately, the PCI SSC has only trained QSAs on this fact, so only merchants and service providers that have been through ISA training are likely to know about this requirement.  Because of this, QSAs get beat up all of the time by merchants and service providers when they mandate application vulnerability testing and remediation before an application goes into production.  However, if you think about it, this has always been implicit in these requirements.  Remember, the intent of these requirements is to avoid putting vulnerable applications into production.  That is why you need to conduct your scanning as part of your QA processes before the application goes into production.  If any high, critical or severe vulnerabilities are discovered as part of that testing, they need to be either remediated or compensated for before the application is placed into production.
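To make the idea of a pre-production gate concrete, here is a minimal sketch of a QA step that runs an application vulnerability scan and refuses to promote the release if any high, critical or severe findings remain.  The scanner command ("appscan"), the JSON report layout and the severity names are hypothetical placeholders; substitute whatever scanning tool and report format your organization actually uses.

    import json
    import subprocess
    import sys

    # Hypothetical scanner CLI and report format -- substitute your own tool.
    SCAN_COMMAND = ["appscan", "--target", "https://qa.example.com", "--output", "scan.json"]
    BLOCKING_SEVERITIES = {"critical", "high", "severe"}

    def run_scan():
        """Run the application vulnerability scan against the QA environment."""
        subprocess.run(SCAN_COMMAND, check=True)
        with open("scan.json") as report:
            return json.load(report)["findings"]

    def main():
        findings = run_scan()
        blockers = [f for f in findings if f["severity"].lower() in BLOCKING_SEVERITIES]
        for finding in blockers:
            print(f"BLOCKER: {finding['severity']} - {finding['title']}")
        if blockers:
            print("Release blocked: remediate or compensate before promoting to production.")
            sys.exit(1)
        print("No high, critical or severe findings; release may proceed.")

    if __name__ == "__main__":
        main()

The point is not the particular tool; it is that the scan runs, and can stop the release, before the application ever reaches production.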

The final issue we consistently see is that secure coding techniques and code reviews are nowhere to be found in the SDLC.  A lot of organizations point QSAs to various coding Web sites for their SDLC.  They assume that these sites have already embedded secure coding techniques in their SDLC, and that may or may not be the case.  A lot of SDLCs document how to build application security features, but say little or nothing about secure coding techniques.  As a result, these organizations are shocked when the QSA comes back and says that secure coding techniques are not in place.  What this points out is that the organization does not actually use the SDLC, because had they used it, they would have known that it did not address secure coding and code reviews.
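To show what a "secure coding technique" looks like in practice, here is a small sketch of one of the most basic ones: parameterized queries to prevent SQL injection, an OWASP Top 10 item.  The table and column names are invented for the illustration; the point is that the SDLC should mandate the second form and a code review should reject the first.

    import sqlite3  # stand-in for whatever database driver the application uses

    def find_customer_insecure(conn, email):
        # Vulnerable: user input is concatenated directly into the SQL statement,
        # so input such as "' OR '1'='1" changes the meaning of the query.
        query = "SELECT id, name FROM customers WHERE email = '" + email + "'"
        return conn.execute(query).fetchall()

    def find_customer_secure(conn, email):
        # Parameterized: the driver binds the value, so it is treated as data,
        # never as SQL.  This is the form a secure coding standard should require.
        query = "SELECT id, name FROM customers WHERE email = ?"
        return conn.execute(query, (email,)).fetchall()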

The lessons you should have learned are as follows.

  • While requirement 6.5 only calls out the OWASP Top 10, you also need to worry about all of the other application vulnerabilities that could exist.
  • SDLCs are meant to be used, not just offered as a way to meet a requirement.
  • Secure coding techniques need to be documented as part of the SDLC and need to be followed.
  • Requirement 6.6 requires you to scan for all application vulnerabilities, not just the OWASP Top 10.
  • Application vulnerability scanning is performed before an application goes into production.
  • If high, critical or severe application vulnerabilities are identified by scanning, those vulnerabilities must be fixed before the application goes into production.
26 Sep 10

Zero Trust Security – The Technical Discussion

With the cultural issues out of the way, let us discuss some technical details.  Given the state of security technology and where security leadership sits these days, I question whether Zero Trust can be implemented.

Essentially, with a ‘Zero Trust’ approach, we are talking about DMZs.  However, instead of only our usual externally facing DMZs, we are also talking about DMZs that are internally facing.  These are no ordinary DMZs; these are highly monitored and controlled DMZs with IDS/IPS, NAC, full logging and everything else required to ensure security.  These technologies are not for the faint of heart, as they require a lot of planning to get right.
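As a rough sketch of what "highly monitored and controlled" internal segmentation implies, the example below models a deny-by-default policy between internal zones, where every permitted flow has to be spelled out explicitly.  The zone and service names are invented purely for the illustration.

    # Deny-by-default policy between internal zones: a flow is permitted only if
    # it is explicitly listed.  Zone and service names are illustrative only.
    ALLOWED_FLOWS = {
        ("pos_terminals", "payment_dmz", "tcp/443"),
        ("payment_dmz", "processor_gateway", "tcp/443"),
        ("admin_workstations", "payment_dmz", "tcp/22"),
    }

    def is_permitted(src_zone, dst_zone, service):
        """Return True only for flows explicitly allowed; everything else is denied."""
        return (src_zone, dst_zone, service) in ALLOWED_FLOWS

    # Example: a workstation trying to reach the processor gateway directly is denied.
    print(is_permitted("admin_workstations", "processor_gateway", "tcp/443"))  # False

Maintaining a list like this for every internal zone is exactly the kind of ongoing work that makes Zero Trust a staffing question as much as a technology question.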

Where a lot of organizations get things wrong is that they believe that all of these security technologies are like a Ronco Showtime Rotisserie oven, you just “Set it and forget it.”  If only security worked that way, but it does not.  As a result, one of the first stumbling blocks organizations interested in Zero Trust face is staffing since Zero Trust will require a significant amount of attention both from a security perspective and from their help desk.  I do not think that we are talking about a significant increase in security and help desk personnel, but the existing staffing levels are likely to be insufficient in a Zero Trust environment.

The next issue I see is with the technology itself.  Most security technology is designed for Internet-facing use, not internal use.  While these solutions can be used internally, they tend to create issues when used internally because of their severe responses to any perceived attack.  As a result, in order to use these solutions, security professionals have to turn off or turn down certain features or functions because they get in the way of getting business done.  Then there are the applications themselves.  I cannot tell you how frustrated I get with vendor and in-house developers who cannot tell you, from a networking perspective, how their applications work.  As a result, security professionals are required to do extensive research to figure out what ports/services an application requires, if they even do such research.  That results in what we typically see on internal networks with internal DMZs: lots of ports/services open into the DMZ because no one wants the application to break.  In a Zero Trust approach, this is not acceptable.
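When developers cannot tell you what their application needs, one pragmatic (if imperfect) approach is to observe the application host itself.  The sketch below assumes the third-party psutil package is installed on the server; it lists the ports actually being listened on and the processes behind them, which can serve as the starting point for a least-privilege DMZ rule set instead of "open everything."

    import psutil  # assumes the third-party psutil package is installed

    def listening_services():
        """Return (port, process name) pairs for every listening TCP socket."""
        services = set()
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status == psutil.CONN_LISTEN and conn.pid:
                services.add((conn.laddr.port, psutil.Process(conn.pid).name()))
        return sorted(services)

    if __name__ == "__main__":
        # Run this on the application server during QA or load testing, not in
        # production, and validate the output with the application owner.
        for port, process in listening_services():
            print(f"tcp/{port:<5} {process}")

This only catches what the application does while you are watching, so it supplements, rather than replaces, getting real answers from the developers.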

Then there is logging and the management and maintenance of log data.  It still amazes me how much push back I receive on logging and the management of log data.  Security professionals and managers complain and complain about the amount of data that needs to be retained and the length of time it needs to be retained.  Hello!  This is the only way you will ever know what went wrong and how it went wrong so that you can fix it.  But the security information and event management (SIEM) industry has not helped things by delivering solutions that can cost as much as a large Beverly Hills mansion and are as easy to implement as an ERP system.  While there are open source solutions, the usability of these solutions is questionable at best.  Meanwhile, the PCI DSS mandates that log data be reviewed at least daily, and merchants either cannot afford the tools or do not have the time to invest to meet this requirement.  As a result, there is a lot of frustration that what merchants are being asked to do cannot be done.  Yet log information capture and review is possibly one of the most important aspects of an organization’s security posture, because if you do not stop an attack with your firewall and IPS, the only way you will know that is from your log data.  Damned if you do, damned if you do not.
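For organizations without a SIEM budget, even a crude daily review script beats no review at all.  The sketch below assumes a standard syslog-style authentication log; the file path and patterns are assumptions you would adapt to your own environment, and the output is only meant to tell a human reviewer where to look first.

    import re
    from collections import Counter

    LOG_FILE = "/var/log/auth.log"  # assumed path; adjust for your platform

    # Patterns worth a human's attention; extend with IDS/IPS and firewall events.
    SUSPICIOUS = {
        "failed_login": re.compile(r"Failed password for (invalid user )?(\S+) from (\S+)"),
        "sudo_failure": re.compile(r"sudo: .*authentication failure"),
    }

    def daily_review(path=LOG_FILE):
        """Count suspicious events in the day's log for a reviewer to follow up on."""
        counts = Counter()
        with open(path, errors="replace") as log:
            for line in log:
                for label, pattern in SUSPICIOUS.items():
                    if pattern.search(line):
                        counts[label] += 1
        return counts

    if __name__ == "__main__":
        for label, count in daily_review().items():
            print(f"{label}: {count} events - review and document the follow-up")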

So a merchant implements all of the necessary technologies and procedures to make Zero Trust a reality.  Is that merchant more secure?  If a merchant makes such an investment, the reward will likely be improved security.  But it will take continuous effort to keep Zero Trust running and that is where all organizations run into trouble with security initiatives.  It takes consistent execution to make security work and people and organizations these days lose interest in things they think are fixed and so security gets swept to the back burner.  As a result, it takes strong leadership to keep security off of the back burner.  Without that leadership, security will fall into a rut and an incident will occur that will make security a front burner topic again.

So while I think Zero Trust is probably the approach we should all work towards, it will take a lot of effort to make it a reality.

21 Sep 10

Zero Trust Security – The Cultural Discussion

There is a great motto on the SR-71 Blackbird flight crew badges, “In God we trust, all others we verify.”  John Kindervag of Forrester Research has written a paper titled ‘No More Chewy Centers: Introducing The Zero Trust Model Of Information Security’ that takes this motto into the information security realm.  The premise of this paper is what if you treat everything as untrusted on your network, internal or external?  This paper is a great read and is worth the cost to obtain a copy.

This concept may sound a bit extreme and, for some, may even seem an odd approach.  But you have to ask yourself, can you really trust all of your users?  That is exactly the point John is making.  He points to 26 data security breaches in the first half of 2010 that were the result of “trusted” personnel deliberately or accidentally releasing information.  John’s advice: if you cannot trust your users, then you need to treat them and their network traffic as untrusted.

As a security professional, this approach sounds appropriate given today’s computing environment.  However, as a former senior IT executive, I have to say it sends chills down my spine.  For what this approach requires is that you tell your employees that they cannot be trusted.  If that does not scare the daylights out of you, it will sure scare it out of your human resources executives and probably a few, if not all, of the rest of your senior managers.

Then there is the process of selling such an approach.  And let us face it, it will be quite a sales job to get such an approach sold to senior management.  To exacerbate the problem, surveys of senior managers portray security professionals as too technical and unable to explain why security is necessary in business terms.  With that sort of disconnect, the concept of Zero Trust is going to be almost impossible for most security professionals to sell to their organizations.  In my opinion, the only way such an approach will ever be implemented is if it is suggested and driven by senior management, not IT or information security.

Then there is the fact that Zero Trust is not going to totally solve the security problem.  Remember my mantra, security is not perfect.  Zero Trust is only going to minimize risk, but it will likely reduce that risk to about as low as it can go.  Senior managers are going to be skeptical about spending the money it will take to get to this level.  For the financial services and health care industries, the cost will be worth the peace of mind.  Other industries will likely struggle with justifying the expense.  But in the end, I think this is probably the only route to as secure an environment as one can have.

In a future post, I will discuss the technological ramifications of Zero Trust.

19 Sep 10

The Reinvigoration Of Social Engineering

Social engineering did not go away, but it seems to have taken a backseat to other attack techniques over the last few years.  With the publication of the results of the social engineering contest at Defcon this year, the participants in the contest have shown that social engineering is still alive and well and a very successful attack technique.  The following quote from the report on the contest says it all.

“Targeting people has become the most cost efficient attack vector in many situations, and all indications point to this trend continuing to increase.”

Social engineering is one of the most insidious attack techniques around.  Unfortunately, organizations do little to address social engineering and have only made social engineering easier over the years.  Customer service methodologies and training over the last 30+ years have done a great disservice to organizations.  For example, organizations trip all over themselves to be the JD Power customer service leader.  Employees are assessed on their ability to solve a problem on the first customer contact.  Yet in my experience, these sorts of activities typically focus organizations on blindly providing customer service at the expense of the organization’s security.

The organizers of the contest defined 32 objectives, or flags, that contestants could obtain over a 25 minute call to the target.  These flags were assigned point values based on the perceived difficulty in obtaining them.  While the flags were not considered to be highly sensitive information, they were such that one has to wonder whether even more sensitive information would have been easily obtained had the contestants been allowed to go after it.

Prior to the contest, contestants were required to develop dossiers and attack scenarios on their targets that were also graded and given a value that became part of their score.  In the 25 minutes, contestants could call their target once or multiple times.

The statistics gathered as a result of the contest bear out the effectiveness of social engineering.  Of the 15 organizations targeted, 14 gave up at least one flag.  More troubling is the fact that if a contestant encountered difficulty in obtaining information, all it took to get it was to hang up, call back and get a different employee.

Another area of concern is the amount of information the contestants were able to obtain through their dossier development.  The use of Google, Google Earth and Google StreetView provided an amazing amount of information for the contestants.  Also used were social media sites such as Facebook, MySpace and LinkedIn.  While Facebook, MySpace and similar sites have garnered the most attention from the media, it was LinkedIn that provided the most information, in a few cases giving the contestants the ability to develop an organization chart for the target.

Security is only as good as the weakest link.  As this contest points out, an organization’s weakest link is probably its employees – the likely cause of which is a lack of, or only a cursory focus on, security awareness.  The contest magnifies the fact that organizations have done little or nothing to protect themselves from information leakage by employees.  As I constantly like to remind everyone, security is not perfect.  While you may have a fairly good security awareness program, you are still at risk from social engineering.  As PT Barnum liked to say, “There’s a sucker born every minute.”  Humans are fallible and, as much as we try, everyone has their moments, but some people have a lot more moments than others.

If you think this is all just a nice exercise and does not present a strong enough threat, then go back over the last six months and read all of the news clippings about data breaches and other exploits.  The majority of those attacks were either social engineering based or had a very strong social engineering component.

I highly recommend that you visit the Social-Engineer.org Web site and obtain a copy of their report.  Share the report with your executives, particularly the leader of your customer service area.  Hopefully they will get a clue regarding the amount of information that is inadvertently leaving your organization.

18 Sep 10

The 2010 PCI Community Meeting

It is that time of the year.  Time for another get together with the PCI SSC, the card brands, participating organizations and QSAs.  This year’s meetings are in Orlando and Barcelona.  Unfortunately, I am not going to be in attendance due to scheduling conflicts.  Since I will not be able to attend, I thought I would provide a topic for discussion.  I want to get the PCI SSC to repeal their inane Report On Compliance (ROC) report writing standard.  This standard has become onerous and, in the end, amounts to little more than make-work.

To understand this situation, you need a bit of history.  Until last year, the only way the PCI SSC and the acquiring banks could prove a QSA had done their job properly was to read the ROC.  The ROC was required to contain references to all of the documentation, interviews and procedures the QSA had observed to ensure that an organization was complying with the PCI DSS.  This led the PCI SSC to develop an extensive grading and scoring spreadsheet that is used to determine whether a ROC covers everything it is required to cover.  Each test may have any of the following components.

  • Observation;
  • Interview;
  • Documentation;
  • Process/action/state; and
  • Network monitoring.

Each of these components may be assessed one to four scoring points depending on the number of occurrences that may be contained in the given test.  A ROC must score better than 75% of the possible points to avoid remediation, but the PCI SSC expects that a ROC should score no lower than 95% of the possible points.
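To illustrate the arithmetic, the sketch below totals per-test component points and compares the percentage against the two thresholds.  The requirement numbers, component names and point values here are invented for the example; only the 75% and 95% thresholds come from the description above.

    # Illustrative only: (points earned, points possible) per component, per test.
    roc_scores = {
        "1.1.5": {"observation": (2, 2), "interview": (1, 1), "documentation": (3, 4)},
        "8.5.9": {"observation": (1, 1), "documentation": (2, 2), "process": (0, 1)},
    }

    earned = sum(e for test in roc_scores.values() for e, _ in test.values())
    possible = sum(p for test in roc_scores.values() for _, p in test.values())
    score = 100.0 * earned / possible

    print(f"ROC score: {score:.1f}%")
    print("Remediation required" if score <= 75 else
          "Below PCI SSC expectation" if score < 95 else "Meets expectation")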

The PCI SSC has instructed QSAs that each test in the ROC must be able to stand on its own.  This means that one test is not allowed to reference another test.  As a result, QSAs must replicate a lot of information throughout the report.  This obviously introduces the potential for errors and omissions in the report as well as making the report unnecessarily long.

To make the report writing process even more questionable, the PCI SSC recommends that QSACs develop pre-written templates so that all of the components get covered for each test.  While a template speeds the report writing process, I would still estimate that report writing consumes at least one-third to one-half of a PCI assessment’s budget.  Not only does it take time to write, it takes a lot of time to proof and review.

As I stated earlier, last year the PCI SSC began requiring language in all QSA contracts that grants the PCI SSC the right to examine any QSA’s work papers.  As a result, one would think that this report writing standard would no longer be needed, but it is still in place.

Because a lot of our clients use hosting services, we get to see a lot of ROCs that have been prepared by other QSAs.  You can really tell which QSAs have not been through the PCI SSC QA process by the fact that their ROCs are very short and lack detail.  But for those QSAs that have been through the QA process, based on my review of their ROCs, the grading scale seems to have caused them to worry more about how the ROC is written than about the actual assessment of their client’s security practices.  A lot of the writing is about meeting the scoring template, not about the controls.  In some cases, the writing makes you wonder whether the control is really in place.

ROCs can become inordinately long because of the replication of the same information over and over.  During our QA remediation, we were told that the average ROC runs around 180 to 200 pages; however, I have yet to see one produced by us that is under 250 pages, and we seem to average around 350 to 400 pages.  I have heard from some reviewers at acquiring banks that the only worthwhile information in these tomes is anything that is not in place and any compensating controls.  If that is all that is getting read, then what is the point of all of the other information that is being ignored?  The point is that it remains the way the PCI SSC ensures that QSAs are doing their job.  And as I stated earlier, if the writing makes you question whether the control is in place, then what is the quality of all of this writing?

Now that the PCI SSC has the right to review a QSA’s work papers, there is no reason to require all of this pointless verbiage in the ROC.  QSAs should be able to have one column for each requirement in the report labeled ‘Status’ and the entry in the column is either ‘In Place’ or ‘Not In Place’.  If something is not in place, then the column next to it, labeled ‘Comments’, should document what is being done to bring a requirement into compliance and when that will occur.

If the PCI SSC is not comfortable with this approach, then maybe they have the wrong organizations as QSACs and they need to get rid of those that cannot conduct the work to professional standards.  This approach works for financial auditors; there is no reason it cannot work here.

Have a good time in Orlando or Barcelona.

12 Sep 10

What Happens Once Merchants Get Rid Of Cardholder Data?

I started thinking about this a couple of months ago.  I think this is one of the problems we have in our industry as well as society as a whole.  We do not take the time to think about what our actions might result in.  If we did, we might not continue to end up with ever larger problems.

There appears to be this belief that once merchants get rid of cardholder data, life will be so much better and safer.  But is that really what will happen?  What does happen once merchants get rid of cardholder data?  Do the clouds part?  Is there sunshine forever?

Granted, this is all supposition, but I think it fairly portrays what will happen once cardholder data is out of merchants’ systems.

Merchants have been led to believe that attackers will have to move their target to where the data has moved, which would be service providers, processors and acquiring banks.  But merchants are not out of the woods once they no longer store cardholder data.  In their efforts to get to the service providers, processors and acquiring banks, attackers will take whatever route they have to in order to achieve their objective.  Merchants may no longer store cardholder data, but they will still transmit it and possibly still process it.

Merchants have to connect to service providers, processors and/or acquiring banks, so they are still part of the transmission of cardholder data.  As security professionals like to say, “Security is only as good as its weakest link.”  Where is the weakest link?  Unfortunately, it will be merchants.  Even though they no longer store cardholder data, they are still a target and will need to continue investing in security so that they protect their business partners.  If you think it was tough selling merchants on securing cardholder data, imagine selling them on securing their business partners after they stop storing cardholder data.

Since merchants will still come in contact with credit cards in order to obtain payment, they will need something like end-to-end encryption or other security measures so that when a customer pays with their credit card, the connection between the card and the processor is secured.  That now makes the credit card terminal or the integrated POS workstation the prime target to intercept cardholder data.  Therefore, criminals will move their focus to supplying merchants or their equipment suppliers with doctored terminals or integrated POS software to intercept cardholder data.  There have already been documented incidents of this happening, so one has to assume that these sorts of incidents will just increase in occurrence.
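As a sketch of the idea behind end-to-end (point-to-point) encryption, the example below encrypts card data the moment it is captured so that everything between the capture device and the processor handles only ciphertext.  It uses the third-party cryptography package as a stand-in; real solutions keep keys in tamper-resistant hardware and rotate them per transaction (for example with DUKPT) rather than using a static key like this.

    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    # In a real terminal the key lives in secure hardware and is rotated per
    # transaction; a static key here is purely for illustration.
    PROCESSOR_KEY = Fernet.generate_key()

    def capture_and_encrypt(pan: str, expiry: str) -> bytes:
        """Encrypt card data at the point of capture; only ciphertext leaves the device."""
        return Fernet(PROCESSOR_KEY).encrypt(f"{pan}|{expiry}".encode())

    def processor_decrypt(token: bytes) -> str:
        """Only the processor, holding the key, can recover the card data."""
        return Fernet(PROCESSOR_KEY).decrypt(token).decode()

    ciphertext = capture_and_encrypt("4111111111111111", "12/14")
    print(ciphertext)                     # what the merchant's systems see
    print(processor_decrypt(ciphertext))  # what the processor recovers

Of course, this is exactly why the capture device itself becomes the prime target: whoever controls the terminal sees the data before it is encrypted.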

Chip and PIN can resolve some of this, but as some security researchers recently showed, Chip and PIN can also bring a new set of problems.  Everyone looked at this exploit as too difficult to pull off.  However, if you truly read the researchers’ report, you see that it would only take the doctoring of a terminal to execute.  But the PCI SSC says that terminals are “dumb.”  Yet a lot of the terminals being used these days have the processing capability of a netbook.

To exacerbate the situation with the terminal, you have the problem of what to do when the terminal cannot connect to the service provider, processor or acquiring bank.  Even in this age of high network availability, there will always be the occasional knocked over utility pole or network failure.  In these instances, there has to be a way to conduct the transaction, as merchants are not going to deny sales just because the network is down.  There are a couple of ways to deal with this situation.  The first is to fall back to the good old “knuckle-buster” and paper forms.  You then need to deal with the security of the forms, but that can usually be handled the same way a merchant secures their cash.

The second option is to put a form of intelligence in the terminal or integrated POS solution to conduct the transaction without the network.  However, this involves the temporary storage of the cardholder data in the device until the network is available.  Where this typically goes wrong is that the device does not properly clear the data once it has been transmitted.  Most people would say, “So what?  The attackers would have to know when the network was down.”  True.  But what if the attackers doctored the terminal or POS software to periodically not allow a certain number of transactions to process?  Do you think people would notice?  They would probably write it off as the technology just acting up.
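The sketch below shows the store-and-forward idea done the way this paragraph argues it should be: transactions captured while the network is down are held only in encrypted form and removed the moment the processor acknowledges them, so nothing lingers on the device.  The queue and the send function are hypothetical, and the static key is again only for illustration.

    from collections import deque
    from cryptography.fernet import Fernet

    DEVICE_KEY = Fernet.generate_key()   # in practice, held in secure hardware
    offline_queue = deque()              # encrypted transactions awaiting the network

    def store_offline(card_data: str, amount: str):
        """While the network is down, hold only an encrypted copy of the transaction."""
        offline_queue.append(Fernet(DEVICE_KEY).encrypt(f"{card_data}|{amount}".encode()))

    def forward_when_online(send_to_processor):
        """Replay queued transactions, clearing each one as soon as it is acknowledged."""
        while offline_queue:
            token = offline_queue[0]
            if not send_to_processor(token):   # hypothetical transport function
                break                          # network still down; try again later
            offline_queue.popleft()            # acknowledged: remove it from the device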

In the end, merchants are only a little better off than when they stored cardholder data.  Until a new system is developed, we need to mitigate the risks of the existing system.  That is what the PCI standards are all about.  They were developed to mitigate the risks presented by the current credit card processing system.  They are not perfect, but they do reduce the risks to an acceptable level if they are followed.

02 Sep 10

Writing A Compensating Control

This is a very popular topic these days as more and more organizations have to rely on compensating controls to comply with the PCI DSS.  With the exception of requirement 3.2 (do not retain track data), any of the other PCI DSS requirements can be met with a compensating control.

First, let us get familiar with what is required for a compensating control.  For v1.2 of the PCI DSS, there are seven elements to the compensating control.

  • Identification of the PCI DSS requirement that the compensating control is addressing.
  • Identifying the business constraint(s) as to why the organization cannot meet the PCI DSS requirement.
  • Defining the objective(s) of the original PCI DSS requirement and the objective(s) that this compensating control addresses.
  • Identification of any additional risk presented by having the organization rely on the compensating control.
  • Identification and definition of the compensating controls.
  • Procedures followed to validate the compensating controls.
  • Procedures followed to maintain the compensating controls.

The first rule is that a compensating control should only address one PCI DSS requirement.  However, in practice, I write some compensating controls to address a closely related group of PCI requirements, such as 8.4, where you have (a) and (b) components.  For requirements like 8.5, where you really have separate requirements, I write those as separate compensating controls.  It is up to your QSA to determine whether or not multiple requirements can be covered by a single compensating control.

Identifying the business constraint should be the easiest part of a compensating control.  However, I have seen too many constraints that indicate the only reason there is a compensating control is that the business was unwilling to implement the requirement.  That just does not cut it.  You need to document valid business reasons as to why a compensating control is needed.  The fact that your organization does not have the backbone to implement a PCI DSS requirement is not a valid reason.

However, the biggest areas where things go awry are sections 4, 5 and 6 of the compensating control form.  Section 4 is where the organization documents the controls that compensate for the original requirement not being implemented.  Section 5 is where the organization documents the processes for validating that the controls in section 4 are operating effectively.  And section 6 is where the organization documents the processes that maintain the controls documented in section 4.

The rule that people seem to miss is that if you document a control in section 4, then you need corresponding discussions in sections 5 and 6 to provide the processes that validate and maintain that control.  I have seen too many instances where a lot of great controls are documented in section 4 and then in sections 5 and 6 those same controls are never discussed.

To avoid this situation, I tell my clients to list out all of their compensating controls in section 4.  Then, in the same order as in section 4, go through sections 5 and 6 and document what was done to validate and maintain each control.

As an example, section 4 might say, “Hardening standards for Linux go beyond the vendor standard and all services are removed other than those absolutely needed for the Linux server to function as a Web server.”  Then in section 5, I would expect to see, “Validated that the client’s Linux hardening standard goes beyond the vendor’s by comparing the two standards.  Compared the hardening standard to the configuration of a sample of x of y Linux servers and confirmed that the standard was in fact implemented.”  And in section 6, I would want to see, “The client’s Linux system engineers evaluate each security patch for its applicability to their environment and whether it enables unnecessary services.  Patches are implemented in a test environment and evaluated with port and vulnerability scanners to ensure that the patch does not enable services that are not necessary.”
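The validation work described in section 5 lends itself to simple tooling.  The sketch below, with an invented allowlist of services, compares what is actually listening on a sampled Linux server against the hardening standard, which is the kind of evidence the section 5 language above describes.  It assumes a Linux host with the ss utility available.

    import subprocess

    # Services the hardening standard permits on a Web server (illustrative list).
    ALLOWED_PORTS = {22, 80, 443}

    def listening_ports():
        """Parse `ss -lnt` output for the TCP ports the server is actually listening on."""
        output = subprocess.run(["ss", "-lnt"], capture_output=True, text=True, check=True).stdout
        ports = set()
        for line in output.splitlines()[1:]:      # skip the header line
            local_address = line.split()[3]       # e.g. "0.0.0.0:443" or "[::]:22"
            ports.add(int(local_address.rsplit(":", 1)[1]))
        return ports

    unexpected = listening_ports() - ALLOWED_PORTS
    if unexpected:
        print(f"Hardening standard violated; unexpected ports: {sorted(unexpected)}")
    else:
        print("Listening services match the hardening standard.")

Running something like this against the sample of servers, and keeping the output, gives you the documentation trail that sections 5 and 6 are asking for.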

Hopefully this clarifies for all of you how to write a proper compensating control.




