

Secure Coding And Application Vulnerability Scanning

Based on some of the mail I am getting these days, there is a lot of confusion regarding secure coding standards and application vulnerability scanning, that is, requirements 6.5 and 6.6.

First, let us talk about the intent of these requirements.  The overall intent of both requirements is to stop insecure applications from being placed in production.  The intent of requirement 6.5 is to ensure that secure coding techniques are part of the system development lifecycle (SDLC) and that the most obvious errors (at the moment, the OWASP Top 10) have been addressed during development.  The intent of requirement 6.6 is to ensure that either code reviews are conducted or an application firewall is used to protect applications.

The most common question I get regarding requirement 6.6 is whether, since it does not specify what should be tested, only the OWASP Top 10 needs to be looked for when conducting the code review.

When will you people learn?  When the PCI DSS does not specify something, you always assume that you need to test everything.  In the case of requirement 6.6, you need to conduct application vulnerability scanning for all potential vulnerabilities, not just the OWASP Top 10.  This will become more important under PCI DSS v2.0 when they add other application vulnerability standards into the mix.  The bottom line is that if all you are testing for is the OWASP Top 10, you are not doing enough testing.

Another area where people get things wrong is that they conduct application vulnerability testing just as they do network vulnerability testing, that is, after the application is in production.  Wrong!  Unfortunately, the PCI SSC has only trained QSAs to understand this fact, so only merchants and service providers that have been through ISA training are likely to know about this requirement.  Because of this, QSAs get beat up all of the time by merchants and service providers when they mandate application vulnerability testing and remediation before an application goes into production.  However, if you think about it, this has always been implicit in these requirements.  Remember, the intent of these requirements is to avoid putting vulnerable applications into production.  That is why you need to conduct your scanning as part of your QA processes before the application goes into production.  If any high, critical or severe vulnerabilities are discovered as part of the testing, they need to be either remediated or compensated for before the application is placed into production.

The final issue we consistently see is that secure coding techniques and code reviews are nowhere to be found in the SDLC.  A lot of organizations point QSAs to various coding Web sites for their SDLC, assuming that those sites have embedded secure coding techniques in the SDLC, which may or may not be the case.  A lot of SDLCs document how to build application security features but say little or nothing about secure coding techniques.  As a result, these organizations are shocked when the QSA comes back and says that secure coding techniques are not in place.  What this points out is that the organization does not actually use the SDLC; had they used it, they would have known that it did not address secure coding and code reviews.

The lessons you should have learned are as follows.

  • While requirement 6.5 only calls out the OWASP Top 10, you also need to worry about all of the other application vulnerabilities that could exist.
  • SDLCs are meant to be used, not just offered as a way to meet a requirement.
  • Secure coding techniques need to be documented as part of the SDLC and need to be followed.
  • Requirement 6.6 requires you to scan for all application vulnerabilities, not just the OWASP Top 10.
  • Application vulnerability scanning is performed before an application goes into production.
  • If high, critical or severe application vulnerabilities are identified by scanning, those vulnerabilities must be fixed before the application goes into production.
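
The pre-production gate in the lessons above can be sketched as a simple QA check.  This is only an illustration; the report structure, field names and severity labels are hypothetical and would need to be adapted to whatever your scanning tool actually produces:

```python
# Sketch of the pre-production gate described above: block the release if
# the application vulnerability scan reports any unremediated high,
# critical or severe findings.  The report structure below is hypothetical;
# adapt it to whatever your scanning tool actually emits.

BLOCKING_SEVERITIES = {"high", "critical", "severe"}

def release_approved(scan_findings):
    """Return True only if no blocking-severity finding remains open."""
    blockers = [f for f in scan_findings
                if f["severity"].lower() in BLOCKING_SEVERITIES
                and not f.get("remediated", False)]
    return not blockers

findings = [
    {"id": "XSS-01", "severity": "High", "remediated": True},
    {"id": "INFO-07", "severity": "Low", "remediated": False},
]
print(release_approved(findings))  # True: the one high finding was remediated
```

The point of the sketch is the process, not the code: the check runs in QA, and a blocking finding stops the promotion to production rather than becoming a post-deployment ticket.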

Code Review

Requirement 6.6 of the PCI DSS discusses the concept of code reviews or the implementation of an application firewall to protect Internet facing applications.  For code reviews, requirement 6.6 states:

“Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods, at least annually and after any changes”

The confusion regarding code reviews is exacerbated by the fact that most organizations have only read the PCI DSS and not the information supplements that further clarify the PCI DSS requirements.  In April 2008, the PCI SSC issued “Information Supplement: Requirement 6.6 Code Reviews and Application Firewalls Clarified.”  Pages 2 and 3 go into detail regarding what the PCI SSC deems as appropriate for conducting code reviews.

The first thing that organizations get wrong about meeting 6.6 is conducting their application vulnerability assessment after the application is in production.  Typically, this is done to save time and money as most organizations are already conducting vulnerability scans and penetration testing to meet requirements 11.2 and 11.3.  The supplement is very clear that this is not acceptable when it states:

“The reviews or assessments should be incorporated into the SDLC and performed prior to the application’s being deployed into the production environment. The SDLC must incorporate information security throughout, per Requirement 6.3.”

The supplement continues to state:

“… it is recommended that reviews and scans also be performed as early as possible in the development process.”

Further clarification provided during QSA re-certification training indicates that the PCI SSC really believes that the reviews or assessments MUST be incorporated into the SDLC, not merely that they should be.  As a result, the PCI SSC is instructing QSAs to ensure that application vulnerability assessments are done before the application is placed into production and that any critical, high or severe vulnerabilities are addressed prior to the application entering production.  The idea is that applications should go into production free of any known critical, high or severe vulnerabilities.

Code reviews can be done manually or using automated tools.  However, if an organization is using one or more automated tools, the code review is not all about the tool.  There must be processes in place that address the vulnerabilities identified and those vulnerabilities that are critical, high or severe must be addressed prior to the application being placed into production.  Most organizations conduct this sort of testing as part of their quality assurance process.

Tools such as IBM Rational AppScan have the ability to integrate into the developer’s workbench and conduct vulnerability testing while the code is being developed.  However, while that helps ensure that specific code modules are secure, it does not ensure that all of the modules that make up the application are secure as a whole.  So a vulnerability scan of the completed application should also be performed to ensure that the application as a whole is secure.

The next misunderstanding is related to having an “independent organization” conduct the code review.  This has been interpreted to mean that code reviews must be conducted by third-party application assessors.  The PCI SSC did not help this interpretation when it stated in the supplement:

“While the final sign-off/approval of the review/scan results must be done by an independent organization …”

However, the PCI SSC has indicated in QSA training that independent is defined as anyone not associated with the development of the code being reviewed.  A lot of organizations have a quality assurance group separate from their developers and so the quality assurance group is responsible for conducting the code reviews.  In organizations with very small IT organizations, as long as you have a developer that was not involved in developing the code being reviewed, they can be the independent party that conducts the code review.

Finally, code reviews are only required for code developed by the organization, not for purchased software that is PABP or PA-DSS certified.  However, if the purchased software is not PABP or PA-DSS certified, then the software must be assessed under PCI DSS requirements 6.3 through 6.6.  If the software vendor will not cooperate with such an assessment or provide a copy of their own PCI DSS assessment under requirements 6.3 through 6.6, those requirements must be judged as not in place on the organization’s PCI assessment.


The Purpose Of Penetration Testing

I have received a number of questions regarding my penetration testing post.  All of the questions seem to concern why penetration testing is required.  Penetration testing has always been a bone of contention within the network security community.  So it should be expected that questions would arise regarding why it is required under the PCI DSS and why the requirement was expanded to internal testing under v1.2.

As I pointed out in my original post, penetration testing can leave the target compromised if the testing is not conducted properly.  Conducting penetration testing properly usually involves using non-production systems configured the same as the production systems.  Unfortunately, a lot of organizations, including some large ones, do not have such environments, and penetration testing must be performed against the production environment.  Either way, once the penetration testing is completed, the penetration tester needs to “clean up” after the testing, if possible, by removing any exploits from the devices tested.  Unfortunately, a lot of tools do not allow for satisfactory clean up, and the only alternative is to rebuild the device that was compromised.  It is because of this that a lot of IT professionals do not like penetration testing.

Unfortunately, penetration testing is a necessary evil to ensure that an organization’s network infrastructure and servers are secure.  If all you ever conduct is vulnerability scanning, how do you know that the vulnerabilities identified cannot be used to further someone’s attack against your infrastructure?  And that is the problem.  Without conducting a penetration test to attempt to leverage the vulnerabilities discovered, you have no way of knowing the true risk presented to your network by those vulnerabilities.

The worst problem with vulnerability scanning is that non-security professionals assume that because a vulnerability is rated medium or low, it is no big deal.  They get a false sense of security because the vulnerabilities are not considered serious.  Without the penetration test, there is no way to show them that low and/or medium risk vulnerabilities can be leveraged to compromise their network and potentially gain access to their cardholder data environment.  I have seen many an example where networks were compromised through supposedly low risk vulnerabilities that ultimately gave the penetration tester a beachhead from which to launch even more sophisticated attacks from inside the network.  And if you think this is unrealistic because you have properly secured your network, there are people out there successfully attacking just such networks.

With v1.2 of the PCI DSS, the penetration testing requirement was expanded to include the internal cardholder data environment.  The reason for expanding penetration testing to the internal network was in response to the breaches that have occurred.  Verizon Business Services and Trustwave had analyzed the breaches they had forensically investigated and came to the conclusion that around 70% of those breaches were the result of insider human error.  And while the initial compromise did not necessarily directly involve the cardholder data environment, once the attacker was inside, there were limited barriers to compromising the cardholder data environment.  As a result, the PCI SSC revised the PCI DSS requirements and mandated penetration testing as well as quarterly vulnerability testing for external and internal assets in the cardholder data environment.


What Is Penetration Testing?

This question comes up more than I would like so it is probably a good discussion topic.  And it pairs up nicely with my previous post regarding passing vulnerability scans.

First, let us get the obvious out of the way.  A penetration test is not a vulnerability scan, and a vulnerability scan is not a penetration test.  Yes, I know that the lines between vulnerability scanning and penetration testing are blurring with certain tools.  However, a true penetration test is one where you take the output of a vulnerability scan and attempt to leverage the vulnerabilities identified to obtain control of a device and ultimately gain control of your objective.  In the case of PCI compliance, the objective is to obtain cardholder data.

Next, penetration testing is not all about the tools.  I do not care whether you use tools like Metasploit, SAINTexploit or Core Impact.  The best penetration testers use very few tools.  The one tool that all good penetration testers share and rely on most is the most powerful computer on the face of the Earth: their brain.  Tools can help, but the key to successful penetration testing is being able to connect the dots based on what vulnerabilities you have available.  You have to change your mindset from being a “good guy” to asking, “What kind of opportunity do I have and how far can I take it?”  In most cases, the target has very few or even no vulnerabilities, but other devices around the target may have vulnerabilities that ultimately lead to the target being compromised.  It is the penetration tester’s ability to put that path together that is important, as the path to a compromise is never a straight line.

Here is a real world example of what I am talking about.

I was working with a company and one of the things they had not performed was an internal penetration test.  As their QSA, I obviously asked them to conduct one.  The first question their Director of Security asked was whether it was legal for his group to conduct such a test.  This is the best first question to ask, whether you are a consultant or an employee.  Either way, you should always have a letter signed by an officer of the organization stating that you are allowed to conduct the penetration test.  Under federal law, it is a felony to conduct such testing without that permission.  There are a number of examples where well-meaning people have been arrested and jailed because they did not have such approval.

The next question from the Director was what tool they should use.

I stopped the Director right there and said, “Your next question should have been, ‘Is my staff qualified to conduct a penetration test?’”  Just because people have certifications such as CISSP or CISM does not mean they qualify as penetration testers.  If a person has a current GIAC Certified Penetration Tester (GPEN) certification, I would consider that person qualified.  By the same token, being a great hacker does not necessarily qualify someone to be a penetration tester either.  A good penetration tester needs not only the skills but also the discipline to document everything about how they got in.  I worked with a great hacker a number of years ago who was probably one of the best at getting into whatever system he put his mind to.  However, he was a lousy penetration tester because he failed to document how he compromised systems.  As a result, once he was done, he had nothing to show for all of his work other than the compromised target and a very vague memory of how he got there.  It is the documentation of the compromise that is worth its weight in gold and what you need as the result of a penetration test.  Without such documentation, there is no way to address the security shortcomings that were exploited to compromise the target.  In reviewing the Director’s staff, he had a couple of people that I deemed qualified, so we moved forward.

Back to the tool question.  The first tool out of the Director’s mouth was Nessus.  Nessus is a great vulnerability scanner and can do some penetration-like testing, but it is not a true penetration testing tool.  So we used Nessus to get a list of potential vulnerabilities on the in-scope PCI devices and systems, and they downloaded a copy of Metasploit to use as their penetration testing tool.  One of the things I do not like about Metasploit is that not all exploits are necessarily available under it.  Another thing that troubles me is that a lot of Metasploit exploits are “live” exploits that, if successful, compromise the target.  If you want to “neuter” Metasploit exploits, it is up to you and your programming expertise to identify the problem areas, remove them and still have a valid test.  Regardless, once a target gets compromised, the only option to correct the problem is to rebuild that device.  As a result, another set of vulnerability scans and penetration tests has to be done, which could cause the whole process to start over again.  Some of the commercial penetration testing tools use “widgets” that get installed in memory to conduct their compromises.  Since these “widgets” are only memory resident, the target only needs to be rebooted to clear them out of the system.  The key thing to note, though, is that regardless of approach, once a penetration test is done, there is clean up afterwards that must be done in order to ensure security.

Planning an attack is very important.  Anyone can use a tool and get nowhere.  The art in penetration testing is in how an attack is constructed.  There needs to be a good analysis of the vulnerability scan to see what opportunities are available.  In the case of my client, there were very few vulnerabilities with which to work, but a couple of low rated ones showed some promise.  One of the penetration testers asked, “Low rated vulnerabilities, how can those be used?”  It all depends on what those low rated vulnerabilities are.  In this case, there were a couple of SMB and NetBIOS vulnerabilities that, while rated low, could be used to escalate privileges.  And that is exactly where I suggested they start.  It took the penetration testers a couple of days, but ultimately they leveraged those original vulnerabilities to escalate their privileges to the point where they penetrated a server that stored cardholder data.

The first piece of good news is that the data stored in the compromised server was encrypted and the testers could not get to the keys, so the data remained secure even though the server was compromised.  Another piece of good news is that these penetration testers kept good notes on everything they did and had plenty of information on what needed to be fixed to improve the security posture.  The final piece of good news was that no devices were harmed (i.e., crashed) during the conduct of the penetration test.  There were, however, two devices that required replacement after the test because the escalation of privileges left them in an insecure state.  This was not too onerous since the environment is redundant and the backups were used for testing.

But there was also bad news during the penetration test.  The worst piece of bad news was that even though the penetration test set off a number of alerts, those alerts were not addressed by the network administration group.  The common refrain we heard during the debriefing was that they knew it was a penetration test and therefore just ignored the alerts.  When asked how they knew it was the penetration test and not a valid alert, we got looks of incredulity, as though it was just painfully obvious.  However, given that the attack was launched from outside of the organization, management found it hard to believe that these people knew it was not a real attack.  As a result, management is taking a hard look at this group as well as looking at ways to make the alerting more effective and to require that all alerts be addressed.

So, what are the lessons learned from this exercise?

  • Vulnerability scanning is not penetration testing.  Even if the vulnerability scanner allows for the use of credentials and can correlate internal and external vulnerabilities, it does not replace a properly conducted penetration test.
  • Not everyone can be a penetration tester.  Certifications do not necessarily matter.
  • Penetration testing is not a license to crash every device you test.  Penetration testing is to prove that a compromise can occur and that an objective can be achieved, not that devices can be crashed.  Crashing devices only proves that a denial of service can be conducted, and anyone can prove that.
  • Regardless of the risk rating on vulnerabilities, you cannot discount their value in a penetration test.
  • Planning a penetration test is like planning a campaign during a war.  Not only does the path to the ultimate objective need to be plotted, but contingencies need to be planned in the event the path to the objective is blocked at any point.  Planning also includes considering any likely points where the compromise might be noticed.
  • Penetration testing will likely result in service outages and those outages should be taken into consideration during the planning process.  If possible, the penetration test should be conducted in a replica of the production environment.  If the penetration test is conducted in production, then management needs to understand and approve of the likely service outages that will occur as a result of this testing.
  • Penetration testing is not just running some tools and producing a report.  Even with great tools, penetration testing takes time and can take a lot of time compared to vulnerability scanning.  Patience is required.  However, if you are spending more than five days conducting a penetration test, you are likely taking too much time.
  • Regardless of whether you inform others in the organization that a penetration test is being conducted, you should expect that any alerts generated are addressed and that management is notified just as your incident response plan requires.

UPDATE: Here is a great article on 10 tips for successful penetration testing.


“Passing” Vulnerability Scans

What is a “passing” scan?  The PCI DSS requirement 11.2.b defines a passing scan as a scan that does not report any urgent, critical, or high vulnerabilities, i.e., any vulnerability with a CVSS base score of 4.0 or greater.  So what is the big deal and why is a passing scan so difficult to obtain?

The National Vulnerability Database (NVD) maintained by the National Institute of Standards and Technology (NIST) has 40,029 vulnerabilities cataloged from 1999 through the end of 2009.  Of those vulnerabilities, almost 94% (37,523) have a CVSS base score of 4.0 or greater.  As a result, statistics say that there are going to be times when the vulnerability scan comes back with an urgent, critical or high vulnerability.  While requirement 11.2.b allows a minimum of quarterly scanning, requirement 6.1.b requires that all urgent, critical or high patches be applied within a month.  As a result, once a vulnerability is identified by your scan, you essentially have 30 days to apply a patch, and you must rescan to ensure that the patch has been applied.
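
The pass/fail rule from requirement 11.2.b is simple enough to express directly.  A minimal sketch (the function and constant names are mine, not PCI terminology):

```python
# The "passing scan" rule from requirement 11.2.b, expressed directly:
# any finding with a CVSS base score of 4.0 or greater fails the scan.

FAILING_THRESHOLD = 4.0

def scan_passes(cvss_base_scores):
    """A scan passes only if every finding scores below 4.0."""
    return all(score < FAILING_THRESHOLD for score in cvss_base_scores)

print(scan_passes([2.1, 3.9]))       # True: nothing at 4.0 or above
print(scan_passes([2.1, 4.0, 7.5]))  # False: findings at 4.0 or above
```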

Under a quarterly scanning program, when a non-passing scan occurs, you must then schedule a remediation scan to prove that the urgent, critical or high vulnerability was patched.  Given that statistics say 94% of all vulnerabilities have a CVSS base score of 4.0 or greater, it is highly likely that you will have to scan at least eight times during the year: the four quarterly scans plus four remediation scans.  However, given those same statistics, it is also highly likely that those four remediation scans will reveal new vulnerabilities, meaning that you will have to scan at least four more times.  That means at least 12 scans, possibly more.  This is why a lot of organizations just do monthly scans.
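
That "at least 12 scans" arithmetic can be sanity-checked with a quick simulation.  This is a back-of-the-envelope sketch under simplifying assumptions of my own (independent scans, the roughly 94% figure from the NVD statistics above, and a cap of two rescans per quarter), not anything the PCI DSS prescribes:

```python
import random

# Back-of-the-envelope simulation of the scan-count arithmetic above.
# Assumptions (mine, not the PCI DSS's): scans are independent, each scan
# turns up at least one CVSS 4.0+ vulnerability with the ~94% likelihood
# from the NVD statistics, and remediation rescans are capped at two per
# quarter to keep the model simple.

P_FAIL = 0.94      # chance a given scan reports a CVSS 4.0+ vulnerability
MAX_RESCANS = 2    # simplifying cap, not a PCI rule

def scans_in_a_year(rng):
    total = 0
    for _quarter in range(4):
        total += 1                                 # scheduled quarterly scan
        rescans = 0
        while rescans < MAX_RESCANS and rng.random() < P_FAIL:
            total += 1                             # remediation rescan
            rescans += 1
    return total

rng = random.Random(42)                            # fixed seed for repeatability
average = sum(scans_in_a_year(rng) for _ in range(10_000)) / 10_000
print(round(average))  # about 11 scans a year under these assumptions
```

Even with the rescans capped, the expected count lands right around the dozen scans per year estimated above, which is exactly why monthly scanning ends up being the simpler schedule.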

But this is not the entire patching story.  Most of the time, vendors have a patch within days or a week or two of a vulnerability’s identification.  However, there are instances where vendors have taken months or even years to deliver a patch, so in certain cases a patch may simply not be available.  In some rare and bizarre scenarios, we have even seen vendor patches remove earlier patches, thus reintroducing old vulnerabilities.  When the systems were reviewed, the system patch records still indicated that all patches had been applied; however, the vulnerability had reappeared and had to be patched again.

Addressing vulnerabilities can get even more delayed when we talk about packaged software.  For organizations running packaged solutions, they typically do not have the option to patch their software within the 30 day window required in 6.1.b.  This is because packaged software vendors need to test operating system patches and other system patches with their software prior to telling their customers that the patch is compatible with the packaged solution.  In some cases, the software vendor issues their own service packs on a quarterly, semi-annual or other periodic basis that contain compatible system patches as well as any updates to their own software.

This is where the experience of the QSA comes into play.  An experienced QSA understands the realities and that scans showing new vulnerabilities are a fact of life.  As a result, I recommend the following guidelines to determine if an organization is meeting their PCI compliance obligations regarding patching and scanning.

  • Determine that vulnerability scanning, penetration testing and patch management processes are documented.  Obtain and review all policies, standards and procedures related to vulnerability scanning, penetration testing and patch management.
  • Determine that there is proof that the patch management process works as documented.  Proof that a patch management process is working includes reports from tools such as Microsoft WSUS, BigFix, Lumension, GFI LANguard, Shavlik HFNetChkPro and the like, as well as reviews of system patching records from the systems themselves and the vulnerability scanning and penetration testing reports.
  • Determine that the vulnerability scanning and penetration testing processes are functioning by reviewing all available reports from those processes for the PCI compliance reporting period.  Confirm that any new vulnerabilities identified are either addressed in the 30 day window or are documented as to why they were not addressed.  Determine that rescanning and retesting are performed after any patching has been completed.  Remember, only your external quarterly scans need to be done by an ASV.  Any other scanning done can be done by qualified internal resources, so you do not have to incur additional costs of an ASV for scans outside of the quarterly scans.
  • Review change control records and determine if any significant changes have been made to either PCI in-scope applications or the in-scope networks.  If significant changes have occurred, match the completion dates of those changes to vulnerability scans and penetration tests to ensure that scanning and testing was performed after those significant changes were implemented.
  • If a vulnerability is not patched, obtain documentation explaining why it is not patched, the risk presented by the vulnerability, what has been implemented to mitigate any additional risks it presents and, if possible, when the vulnerability is expected to be addressed.  Determine that management has approved leaving the vulnerability unpatched and accepts any additional risk presented.
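
The documentation in that last guideline lends itself to a simple structured record.  A sketch, with field names of my own invention rather than anything mandated by the PCI DSS:

```python
from dataclasses import dataclass, field
from typing import Optional

# Sketch of a risk-acceptance record for an unpatched vulnerability, per
# the last guideline above.  Field names are illustrative inventions, not
# anything mandated by the PCI DSS; capture whatever your QSA needs to see.

@dataclass
class RiskAcceptance:
    vulnerability_id: str                    # e.g., a CVE identifier
    reason_not_patched: str
    risk_description: str
    mitigations: list = field(default_factory=list)
    expected_fix_date: Optional[str] = None  # may legitimately be unknown
    approved_by: Optional[str] = None        # management sign-off

    def is_complete(self) -> bool:
        """Management approval is the one non-negotiable element."""
        return bool(self.approved_by)

record = RiskAcceptance(
    vulnerability_id="CVE-2009-0001",
    reason_not_patched="Vendor patch breaks the packaged application",
    risk_description="Privilege escalation on the batch server",
    mitigations=["Host IDS monitoring", "Restricted local accounts"],
    approved_by="CISO",
)
print(record.is_complete())  # True: management has signed off
```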

If an organization can provide all of this documentation and proof, in my opinion they are meeting their PCI compliance obligations regardless of what their vulnerability scans and penetration tests document.


Wireless Security – Random Thoughts On How To Fix

This has possibly been the hardest post yet to write, mainly because I am at a loss for answers.  There just do not seem to be a lot of solutions out there that address real wireless attacks.  So, I have done my best to come up with some thoughts on how to conduct a wireless assessment that will provide a reasonable level of assurance that your network is not compromised.  Note, I said ‘reasonable,’ as I do not think there is a way to get absolute assurance that your network cannot be compromised when wireless is involved.

  • Document the business reasons for implementing a wireless network.  Just because you can does not mean you should.  In a significant number of situations, you will find that the only reason for implementing wireless is the convenience it offers.  Does your organization really need wireless ‘guns’ that update inventory in real time, or can you use guns that record inventory and then upload it in batch when the ‘gun’ is placed in a cradle?  In most situations, the cradle works just as well as the wireless solution.  That is not to say that there are no situations that warrant a wireless solution; I have a number of clients that use wireless terminals and handhelds in innovative ways to improve customer service.  However, until there is a real business purpose with a real return on investment, do what you can to push back and not implement wireless.  But be advised: since some vendors are now only producing wireless solutions, finding a hard-wired alternative may not be possible.
  • Architect your wireless network to be secure from the start.  There are ways to do this that are not as onerous as you might think.  Primarily, the wireless network needs to be isolated from the rest of your network.  The reason is that no matter what security you implement, wireless uses the public airwaves to transmit, the key word being ‘public.’  As a public network, attackers can eavesdrop on your wireless whenever they want, they can and will attempt to crack your security all they want, and there is nothing you can do to stop it.  Once your wireless network is isolated, treat it as the public network it is and implement firewalls, IDS/IPS and any other security measures on your wireless network segment.  Make sure that you create a consistent configuration so that you minimize the potential for introducing a mistake.  One of the best methods is to use one of the centralized, managed wireless solutions versus individual wireless access points.
  • The PCI SSC needs to change requirement 11.1 to address the realities of the real world.  First, I question the usefulness of wireless scanning in the first place and would highly recommend that it be dropped.  But assuming it is here to stay, for all but the very smallest of merchants, scanning with a wireless analyzer quarterly is a pipe dream.  I would recommend that quarterly testing be required only where it is possible.  For all other merchants that wish to perform wireless testing with an analyzer, requirement 11.1 should suggest a sampling approach that ensures each facility is tested whenever significant network changes are implemented there, or at least once every three to four years.  Let us face facts here: there is no way Best Buy, Wal*Mart or Target are going to test their hundreds or thousands of stores on a quarterly basis.  It is just physically impossible.  They do not even conduct individual store financial audits that often, so who thought they would get wireless scans done that often?  Next, the PCI SSC has to provide in requirement 11.1 some additional alternative solutions besides an IDS/IPS on the wireless network segment.  Based on my experience, almost all of my clients that use wireless are writing a compensating control to satisfy requirement 11.1.  If the majority of organizations with wireless need a compensating control to meet the requirement, then the requirement itself needs to be rewritten.
  • If your organization has decided to use wireless scanning with an analyzer, admit that wireless scanning requires a technical expertise that your organization likely does not have.  This is a perfect project for a qualified network security consultant to perform.  The costs for such projects are easy to control as they are driven by the location and number of facilities you need scanned.  If your facilities are widely scattered, you may want to go with a consulting firm that better covers your locations so that you can minimize travel costs.  You can also control costs by using a consistent configuration for your wireless.  That way you can use a sample of facilities versus scanning every facility.  However, since building construction usually varies from location to location, that may require making sure that all your facilities are scanned within a one or two year period.
  • Don’t be buffaloed by a consultant’s certifications.  Customers are usually baffled by all the letters following a consultant’s name (even I have a boatload of letters after mine).  While certifications are good, it is a consultant’s practical experience with security and wireless that counts.  Nine times out of ten, the consultant that meets with you will not be the one that does the work.  So, make sure that you and someone from your technical staff review the biographies of the consultants who will actually work on your project and that you personally talk to them, either face-to-face or by phone.  Ask them about the wireless assessment engagements they have done.  Have them describe the process and make sure that it matches the process the salesperson described.  Ask them about the typical findings that result from such projects and make sure that they can explain those findings to both technical and non-technical personnel.  And of course, make sure that you are not buying the flawed process I discussed earlier.
  • Don’t buy supposedly sophisticated-looking tools.  Regardless of whether you are doing it yourself or getting a consultant to assist, don’t buy based on tools.  A lot of people do good work with NetStumbler or Kismet and the right wireless card.  Some of these commercial tools are just expensive solutions using the same techniques as the person with shareware tools.  So when evaluating wireless security solutions, ask the vendor tough questions about how their solution discovers rogue access points and get them to address my earlier points on why wireless scanning is flawed.  In most situations, you will find that these vendors are offering a solution no better than the one you can get for free.  When talking to consultants, be wary of the consultant that talks about their tools and does not talk much about their process.  Consultants that talk ad nauseam about their tools typically do not have the experience to deliver the results that you desire.  They are typically going to be no better than anyone else with a scanner.
  • Get a good understanding of the consultant’s process.  Ask the consultant to describe their wireless security assessment process.  Experienced consultants will have a number of service offerings in this area, from basic scanning (essentially what I described earlier, but with a much more robust analysis of the results) to a full-out wireless assessment that can resemble something out of a good spy movie.  Obviously, the more sophisticated it gets, the higher the cost.  However, for some clientele, such as DoD contractors and the like, a very detailed and sophisticated analysis of all things wireless is what they require in order to satisfy contractual requirements.  For most merchants, what they need is something towards the lower end of the cost scale that will provide them with reasonable assurance that their network is secure.  For most processors, the wireless assessment will likely be a bit more robust than a merchant’s because of the added risk they carry due to the data they retain.

I have taken up a lot of bandwidth on this topic, possibly too much.  However, I think you can start to see that wireless is not as simple a technology to secure as some of the security standards portray.  Wireless is not a technology that you just “add on” when you need it.  In the end, the most critical point about wireless is that it requires significant forethought before being added to a network.


The Shortcomings Of Wireless IDS/IPS

In my first post, I discussed the wireless analyzer approach to complying with requirement 11.1.  I documented where I think the current techniques fall short and give organizations a false sense of security.  In this post, I am going to give you what I think are the shortcomings of wireless intrusion detection/prevention systems.

Wireless IDS/IPS solutions seem to break down into two types: ones that work like the wireless analyzer approach from my previous post on this subject, and ones that work like a traditional IDS/IPS.  Let us discuss the analyzer IDS/IPS first.

Of the analyzer-style IDS/IPS products I have had demonstrated, most work essentially the same way as the wireless analyzer methods I discussed in my last post.  These products typically pair wireless sensors connected to your network with a central server that also provides analysis of the wired network when a suspect rogue AP is discovered.  The wireless sensor is used as a wireless spectrum analyzer to locate potential rogue APs, the idea being that multiple sensors can triangulate on the rogue AP and provide its location.  The inability of these sensors to accurately locate APs outside of a 15-foot radius can make things dicey and potentially expensive.  Therefore, for large facilities, you can expect to spend a lot on sensors for full protection.  For example, an average Wal*Mart is around 100,000 square feet.  In order to provide adequate coverage, the average Wal*Mart store would require approximately 445 sensors.
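
The sensor count above can be reproduced with a back-of-the-envelope calculation.  This is a sketch of one reading of the numbers, treating each sensor as reliably covering roughly a 15-foot by 15-foot cell (a conservative interpretation of the 15-foot figure once walls, shelving and triangulation overlap are factored in):

```python
import math

def sensors_needed(floor_area_sqft: float, cell_side_ft: float = 15.0) -> int:
    """Estimate the wireless sensors required to blanket a facility.

    Assumes each sensor reliably covers one cell_side_ft x cell_side_ft
    cell; a real deployment survey would also account for building
    construction and the overlap triangulation requires.
    """
    coverage_per_sensor = cell_side_ft ** 2  # 225 square feet per sensor
    return math.ceil(floor_area_sqft / coverage_per_sensor)

print(sensors_needed(100_000))  # prints 445
```

At roughly 225 square feet per sensor, a 100,000 square-foot store needs about 445 of them, which is why full coverage gets expensive fast.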

On the wired side of things, these analyzer IDS/IPS solutions along with their exclusively wired solutions are looking for rogue network traffic, ICMP response, MAC address and/or SNMP information that indicates the device is a rogue AP.  In the end, they sound sophisticated, but they still rely on the fact that the rogue access point is configured to be discovered.
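
At their core, those wired-side checks amount to comparing attributes harvested from the network against an inventory of authorized access points.  A minimal sketch of that comparison, using entirely hypothetical MAC addresses and vendor OUI prefixes:

```python
# Hypothetical inventory of authorized AP MAC addresses (BSSIDs).
AUTHORIZED_APS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

# OUI prefixes of wireless vendors treated as suspicious on a wired
# port (hypothetical values, for illustration only).
WIRELESS_OUIS = {"00:1a:2b", "f8:e7:1e"}

def flag_rogues(observed_macs):
    """Return MACs that look like wireless gear but are not in inventory."""
    rogues = []
    for mac in observed_macs:
        mac = mac.lower()
        oui = mac[:8]  # the first three octets identify the manufacturer
        if oui in WIRELESS_OUIS and mac not in AUTHORIZED_APS:
            rogues.append(mac)
    return rogues

print(flag_rogues(["00:1A:2B:3C:4D:5E", "F8:E7:1E:11:22:33"]))
```

And this is exactly where the evasions come in: an attacker who spoofs the MAC address of a printer, or clones an authorized BSSID, sails right past this kind of check.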

Attackers know how these solutions operate and configure their rogue APs to deter or even avoid these identification techniques.  As a result, these more sophisticated solutions are also blind to the truly rogue AP.

In addition to these obvious issues, false positives can be quite a problem for those solutions that conduct monitoring of the wireless spectrum.  This is particularly true in situations where APs are added on a regular basis outside of your facilities.  And with wireless becoming more and more common, that can keep your security team quite occupied while they sort through the false positives to find the real potential threats.

And then there is the assumption that 802.11 devices are the only avenue of compromise.  If an attacker is going to go to the length of compromising your network, why would they not use cellular technology and avoid 802.11 altogether?  With 3G cellular networking all the rage, the speed of these cellular solutions is no longer a limiting factor.  None of these solutions truly addresses the cellular issue, so there is still a vulnerability.  Unfortunately, the security vendors, PCI SSC and card brands seem to only react to incidents, not think ahead.  So, until a breach occurs involving cellular, we will likely not see anything to address this risk.

And what about other forms of wireless such as Bluetooth and satellite?  Before you write them off as either not having the transmission distance or being too complicated and expensive, it is that shortsightedness that will get you in trouble.  Believe it or not, there are Bluetooth USB adapters with ranges of up to 350 feet.  In addition, pairing and security codes are well documented by vendors, so attaching to almost any Bluetooth device is an easy proposition.  Bluetooth can be used to load malware on a system and begin the compromise process.  And if you think satellite is the last safe wireless solution, at this year’s Black Hat, Adam Laurie discussed hacking not just satellite TV but also satellite data transmissions.

In the end, the important thing to remember is that the public airwaves are just that – ‘public’.  And you must treat them as public or you will get burned.

In a future post, I will discuss my thoughts on how I think the PCI DSS should address these shortcomings.


Vulnerability Scanning and Penetration Testing

For whatever reason, vulnerability scanning and penetration testing are areas that just seem to continue to confuse people, even information technology personnel.

Vulnerability scanning is the act of identifying potential vulnerabilities in network devices such as firewalls, routers, switches, servers and applications. The operative word is ‘potential’. Vulnerability scanners merely identify potential vulnerabilities; they do not always assess the ability to exploit the vulnerability. To conduct a vulnerability scan requires the use of a vulnerability scanning tool such as Qualys, Internet Scanner, SAINT or Nessus. Moreover, while almost anyone with networking experience can run a vulnerability scanner, it requires someone with significant networking and security experience to interpret the results from a vulnerability scanner.

External vulnerability scans are required quarterly, or whenever significant changes are made to the network or applications, and must be performed by an ASV against any PCI in-scope systems. Operative word: ‘in-scope’. We have seen many instances where an organization has no Internet presence whatsoever and yet is conducting external vulnerability scans. While not a bad practice, there is no PCI compliance reason to conduct external vulnerability scanning if the organization does not process, store or transmit cardholder data via the Internet. Internal vulnerability scans are also required quarterly or whenever significant changes are made to a network or applications. However, internal vulnerability scanning can be done by anyone that is deemed qualified. Results from vulnerability scanning are to be addressed as soon as possible. This used to be 30 days, but that was found to be a problem because a lot of organizations use off-the-shelf solutions that require vendors to modify their software, and that typically does not occur in 30 days or less.
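
Whoever performs the scans, interpreting the results usually starts with triage: sorting findings by severity so the criticals and highs get remediated first. A toy sketch of that step, assuming findings exported as simple records (the field names are mine, not any particular scanner's):

```python
# CVSS-style severity buckets, most urgent first.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritize(findings):
    """Sort scan findings by severity bucket, then by CVSS score (descending)."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_ORDER.get(f["severity"], 99), -f["cvss"]),
    )

findings = [
    {"host": "10.0.0.5", "severity": "medium", "cvss": 5.3, "title": "TLS 1.0 enabled"},
    {"host": "10.0.0.7", "severity": "critical", "cvss": 9.8, "title": "Outdated OpenSSL"},
    {"host": "10.0.0.5", "severity": "high", "cvss": 7.5, "title": "Weak SSH ciphers"},
]

for f in prioritize(findings):
    print(f["severity"], f["host"], f["title"])
```

Remember, though, that sorting is the easy part; deciding which findings are real is where the experienced analyst earns their keep.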

So, what then is penetration testing? Penetration testing takes the results of a vulnerability scan and then the penetration tester, using one or more tools, attempts to use the vulnerabilities identified to compromise the devices with the vulnerabilities. Penetration testing requires the use of tools, sometimes a lot of tools. But it also requires an extremely experienced person to conduct the penetration testing. And yes, penetration testing does have a higher than average chance of causing outages. However, the goal of vulnerability scanning and penetration testing should never be to deliberately put an organization’s online assets out of business.
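
To make the distinction concrete: where a scanner merely infers that a service is potentially vulnerable, the penetration tester goes on to verify it. A harmless toy of the inference half of that process, parsing a service banner and checking it against a vulnerable version range (the version cutoff here is hypothetical, not tied to a real advisory):

```python
import re

def banner_is_vulnerable(banner: str, fixed_in=(7, 4)) -> bool:
    """Return True if the banner advertises a version older than fixed_in.

    This is the kind of version inference a scanner stops at; the
    penetration tester would go further and actually attempt an exploit
    to prove the vulnerability is real.
    """
    match = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if not match:
        return False  # cannot infer a version from this banner
    version = (int(match.group(1)), int(match.group(2)))
    return version < fixed_in

print(banner_is_vulnerable("SSH-2.0-OpenSSH_7.2p2 Ubuntu"))  # True
print(banner_is_vulnerable("SSH-2.0-OpenSSH_8.9p1"))         # False
```

That gap between "the banner says it might be vulnerable" and "I actually compromised it" is exactly the gap between a vulnerability scan and a penetration test.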

Penetration testing tools include such software as Metasploit, Core Impact, SAINTexploit and Canvas. Penetration testing tools are much more sophisticated than vulnerability scanners and require a significant amount of experience to use effectively. Most require a good amount of knowledge regarding the exploits that will be used and the environment they target. Some can directly import vulnerability scan results to simplify their use. However, they still require a lot of experience to ensure that they do not create more problems than they solve. The reason? These tools are designed to compromise systems. Metasploit is open source and is used by penetration testers and hackers alike. For the most part, the exploits for Metasploit are the real McCoy, written by hackers and penetration testers. As a result, if you do not know what you are doing, you could leave behind software that keeps the system compromised. Commercial tools typically run ‘sanitized’ exploits that do not fully compromise the system, but they, too, can leave behind software that leaves a system at high risk of compromise. It takes experience with the exploits, the operating systems and other relevant knowledge to clean up after these tools and ensure that a system will not suffer a higher than expected risk of compromise.

For PCI compliance, external and internal penetration testing is required at least annually or whenever significant changes are made to a network or applications. Penetration testing can be performed by any qualified individual. As with vulnerability scanning, the results of penetration testing need to be addressed as soon as possible.

Hopefully, I have clarified what these two methodologies are and are not as well as improved your understanding as to the results they provide.


Requirement 6.6 – Part 3

Because an application firewall is not an option for every application or every organization, the PCI DSS allows for code reviews conducted either manually or using automated tools.  Where I see most trouble with this is that people treat their applications like their networks and use their application vulnerability assessment tool after the application is put into production.  But that is not what the PCI SSC stated in their April 15, 2008 Information Supplement.

“The reviews or assessments should be incorporated into the SDLC and performed prior to the application’s being deployed into the production environment. The SDLC must incorporate information security throughout, per Requirement 6.3. Change control processes must ensure that software developers are not able to bypass the code review/application assessment step and deploy new software directly into the production environment. Change control processes must also enforce the correction and retesting of vulnerabilities before implementation.”

The reason the PCI DSS wants code reviews or application vulnerability testing done before an application is put into production is that, unlike the network, application changes can take significant time to implement.  If you do your code reviews or vulnerability testing after the fact, it could be quite a while before a fix is implemented to close a security hole.

So, to ensure that code reviews and vulnerability testing get done at the appropriate time, these activities need to be an integrated part of your organization’s system development lifecycle (SDLC).  Most organizations conduct code reviews or application vulnerability testing as part of the quality assurance (QA) process before the application goes into production.  If any issues are discovered during the QA process, they can be sent back to the developers for remediation before the application goes into production.
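
Enforcing the "no highs into production" rule is straightforward to automate as a gate in the QA or build process. A minimal sketch, assuming the assessment tool can export its findings with an identifier and a severity level (the field names and severity labels are illustrative):

```python
# Severities that block a release, per the intent of requirements 6.5/6.6.
BLOCKING_SEVERITIES = {"critical", "high", "severe"}

def release_gate(findings, accepted_risks=frozenset()):
    """Raise if any unremediated blocking finding remains.

    accepted_risks holds finding IDs covered by an approved compensating
    control, so they do not block the release.
    """
    blockers = [
        f for f in findings
        if f["severity"] in BLOCKING_SEVERITIES and f["id"] not in accepted_risks
    ]
    if blockers:
        ids = ", ".join(f["id"] for f in blockers)
        raise RuntimeError(f"Release blocked by unremediated findings: {ids}")
    return True
```

Wired into change control, a failing gate sends the build back to the developers instead of into production, which is exactly what the Information Supplement quoted above demands.
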

Some application vulnerability tools integrate with developer platforms such as Microsoft’s Visual Studio.  And while these tools will notify the developer of potential security issues, it is still good practice to fully assess the completed application during the QA process to be safe.

And speaking of application vulnerability assessment tools, remember that they are typically only useful against browser-based applications.  These tools typically do nothing for middleware or applications that do not interact through a browser.  This is where vulnerability tools that integrate with the developer’s platform can play a role, as can code reviews.  And do not forget network vulnerability scanners and penetration testing tools for testing non-browser-based applications.  Even these applications still communicate over the network, so such tools can assist in identifying any security issues they have.

Hopefully this three-part series has described the importance of requirement 6.6 and what you need to do to comply.


Requirement 6.6 – Part 2

Regarding the options you have in protecting your applications, let’s talk about option 2 of requirement 6.6 first: the Web application firewall (WAF).  For those that have the budget, this is where they typically go to achieve compliance.

The first thing that people get wrong is that they assume that their existing firewall will also serve as an application firewall.  Wrong!  What most organizations have is a network firewall that works at OSI layers 1 through 4 (remember the OSI seven-layer model?).  Application firewalls work at OSI layers 4 through 7.  And while some network firewall vendors do have application firewall add-ons, experience with these add-ons indicates that you really need a dedicated application firewall to ensure performance.  That is because, like their network firewall cousins, application firewalls perform stateful inspection, and that requires significant computational horsepower.  Even then, an application firewall can create issues for Web sites such as those that have extremely high transaction volumes or deliver streaming content, so an application firewall is not the answer for everyone.
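
For a flavor of what that layer 4 through 7 inspection looks like in practice, here is a sketch of a rule for the open-source ModSecurity WAF.  The rule ID is arbitrary, and a production deployment would start from a maintained rule set such as the OWASP Core Rule Set rather than hand-written rules:

```apache
# Inspect all request arguments for SQL injection probes using the
# libinjection-based @detectSQLi operator, and block matching
# requests with a 403 while logging the event.
SecRuleEngine On
SecRule ARGS "@detectSQLi" \
    "id:100001,phase:2,deny,status:403,log,msg:'SQL injection attempt'"
```

Note that even this one rule operates on parsed HTTP arguments, something a layer 1 through 4 network firewall never sees.
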

Where I see application firewalls fall down the most is that they are not appropriately monitored.  And they are not monitored because they are not properly configured and maintained and therefore generate too many false positives.  Another big difference is that application firewalls need a significant amount of periodic ‘tweaking’, particularly after updates, in order to keep false positives to a minimum.  And since most organizations do not periodically ‘tweak’ their application firewalls, the alerts generated take too much time to validate as real or false, and people naturally end up ignoring them all.

Unlike network firewalls, application firewalls require knowledge of how browser-based applications work at OSI layers 4 through 7.  Such knowledge is not something the typical network administrator has and, to be fair, many application developers also do not understand how their applications work at these layers.  Therefore, organizations need to find people with such knowledge to assist in configuring and maintaining their application firewalls.  This is likely best handled by a qualified consultant on a retainer basis.

And just because you chose the application firewall route, that does not get you off the hook regarding code reviews or the use of an application vulnerability testing tool.  I would highly recommend that you conduct those processes as well.  An application firewall is not justification to toss just any solution into production.  Even an application firewall has limits and cannot protect poorly constructed applications.

In my final posting, I will discuss the use of code reviews and application vulnerability testing tools as methods of compliance.
