Posts Tagged ‘penetration testing’

02 Aug 15

Pushing The Limits

Here are some situations that QSAs encounter way too often.

  • Organizations that conduct their annual penetration test 30 days before the deadline to file their self-assessment questionnaire (SAQ) or Report On Compliance (ROC).
  • Organizations that conduct their final quarter vulnerability scan 30 days before the deadline to file their SAQ or ROC.
  • Organizations that decide to implement a compensating control worksheet (CCW) within 30 days before the deadline to file their SAQ or ROC.

Why are these situations a problem?

For the first two conditions, it is because if either results in findings that require remediation, you must either (a) remediate the findings and retest before filing your SAQ/ROC, or (b) put compensating controls in place and test them to ensure you are mitigating the risk of not remediating. Either situation can easily cause an organization to miss its compliance filing date.

For any CCW, it is a problem because you need to test all of the controls you are using to compensate for not being able to comply with a requirement and prove they are functioning as designed. In a lot of cases, those controls will be new and will take time to implement and then test.

While this “fire drill” is going on, your QSA sits and waits for you to complete remediation or implement your compensating controls so that they can test things and ensure that you are in compliance. Unfortunately, while your QSA waits, the stress of getting things done accumulates and people lash out at the QSA for their organization’s poor planning with the expectation that the QSA will just look the other way.

As a QSA, I would really like to help you. But as the old adage goes, poor planning on your part does not constitute an emergency on my part. Unfortunately, clients never see it that way when they are trying to hit a deadline, but it is still true.

So the way to minimize these situations is to plan ahead. Make sure that your annual penetration test is conducted no later than the start of the fourth quarter of your reporting period. This will give you around 90 days to address any issues and retest. For the same reason, I would also highly recommend conducting your final quarterly vulnerability scan no later than the start of the fourth quarter of your reporting period. Both points are particularly important if your organization has a history of not getting passing results.

For CCWs, you want your QSA to identify those as soon as possible so that you can work on getting the controls in place and functioning. As a result, you should schedule your assessment to start no later than the third quarter of your reporting period and as early in that quarter as possible. If your organization is large, you may even want to start in the second quarter of your reporting period.

If you have CCWs in place and will keep them in place for your coming assessment, you should conduct your own testing prior to your QSA's arrival to make sure you can fix any out-of-compliance situations.

So before you chew off the head of your QSA over some self-inflicted wound, think about how you got into this predicament. Next year, plan better and your assessment will likely go more smoothly.

03 Apr 15

PCI Issues Penetration Testing Information Supplement

On Thursday, March 26, the PCI SSC issued their latest information supplement. This one is focused on the topic of penetration testing. Not surprisingly, I have a number of comments based on what I have read. There are some good things that have come out of this effort. But in my opinion, the committee missed the boat in a few areas and those will lead to even more confusion between penetration testers, QSAs and clients.

Scoping

Talk about making a mess of a mess: I believe the writers accomplished that amazingly well in their discussion of scoping. This is probably the area that creates the most confusion and will likely create the most problems.

The first bullet on page 5 is a case in point. It took me a number of readings to even get what I think they are trying to communicate.

“The scope of the internal penetration test is the internal perimeter of the CDE from the perspective of any out-of-scope LAN segment that has access to a unique type of attack on the CDE perimeter. Critical systems or those systems that may impact the security of the CDE should also be included in the scope. Testing must include both application-layer and network-layer assessments.”

Out-of-scope LAN segments are out of scope because there are no rules that allow them access to the CDE. I agree that you need to test that these LAN segments are truly out of scope, but if there is a unique type of attack available from such a LAN segment into the CDE, that would seem to imply that the LAN segment was not out of scope in the first place. This is confusing and likely to cause a lot of pen testers to perform unnecessary testing, fishing for bizarre issues on out-of-scope network segments.

The next sentence, paired with the first, just adds to the confusion because it again implies that the out-of-scope LANs are not out of scope. These should have been two separate bullet points in my opinion.

In the end, what I think the writers are saying is that, even if deemed out of scope, all LANs and VLANs need to be confirmed to be out of scope. However, the way they have said it, in my opinion, leads to a lot of confusion.

If the last one did not cause your head to spin around, this one probably will. It is the third bullet at the top of page 5.

“To be considered out of scope for PCI DSS, a system component must be isolated (segmented) from the CDE, such that even if the out-of-scope system component was compromised it could not impact the security of the CDE. Therefore, the penetration test may include systems not directly related to the processing, transmission or storage of cardholder data to ensure these assets, if compromised, could not impact the security of the CDE.”

Correct me if I’m wrong, but this seems to imply that everything is in scope for penetration testing. I know the writers of this supplement have used the word ‘MAY’, but we also know how ‘MAY’ tends to be interpreted by the Council (i.e., MUST) and how it will be interpreted by a lot of our clients (i.e., do not do it). Until a few penetration testing reports are reviewed by the Council, the jury will be out on the correct approach. However, given the theme created by these bullets, I am guessing that the committee could not agree on how to explain the scoping of a penetration test.

But if those bullets were not confusing enough on scoping, the writers add insult to injury. After the bulleted list at the top of page 5, the first paragraph states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

An internal pen test does not require testing from within the cardholder data environment (CDE)? This sounds like the “Three Hop Rule” all over again. But then the second sentence twists things further, somehow trying to justify the first.

All this diatribe does is further obfuscate scoping. What should simply have been said is that everything in scope for PCI compliance is to be penetration tested. Instead, we got a lot of words that, in my very humble opinion, do nothing to clarify scoping.

PA-DSS

On page 6 under 2.3.2 PA-DSS Compliant Applications we get the following guidance.

“If a payment application has been PA-DSS validated, the application’s functionality does not need to be tested as part of the entity’s PCI DSS compliance validation. However, the implementation of the application does need to be tested. This includes both the operating system and any exposed services, but not the payment application’s functionality (e.g., authentication, key management, transaction processing, etc.) since this was validated as part of the PA-DSS application validation.”

Under the PCI DSS, even PA-DSS validated applications must still be tested to prove that their implementations were completed per the vendor’s PA-DSS implementation guide. Yet for penetration testing, a PA-DSS validated application’s functionality is exempt?

I hope the acquiring banks are geared up for the fights this is going to cause between the penetration testers and their clients. For example, in the restaurant and hospitality industries, I can see the situation where penetration testing only occurs on firewalls, routers and switches because everything else is PA-DSS validated.

I do not believe this is what the Council wanted, but that is what this supplement is stating. If we are truly talking about security of our networks and systems, this is not how you achieve it.

Social Engineering

I was pleasantly surprised that this topic was included in this supplement. However, as I read through the section, I grew more concerned that it had been included at all.

For section 2.5 Social Engineering on page 7, we encounter the following.

“Social engineering is the attempt to gain information, access, or introduce unauthorized software into the environment through the manipulation of end users. PCI DSS v3.0 reconfirms testing by requiring industry accepted penetration testing approaches (many of which include social engineering as part of their approach) and to have an approach to penetration testing that “considers the threats and vulnerabilities experienced by merchants in the last 12 months.” This may include social-engineering attacks as a method used for introducing malware into the environment.”

This makes social engineering attacks legitimate test methods if the client has been breached by such methods in the past year. But does this include physical as well as logical social engineering? Since it is not explicitly called out, I would assume so, but it should have been defined. Regardless of it not being explicitly called out, this is a great thing to see in this document.

On page 8, the last paragraph of the discussion on social engineering states:

“Social-engineering testing may not be appropriate or provide a meaningful result for all organizations. Although social-engineering testing is not a requirement of PCI DSS, an organization may consider documenting the reason(s) for foregoing social-engineering testing and include applicable documentation with the internal and external penetration test reports, particularly if social-engineering attacks were encountered in the last 12 months.”

What started with such promise ends on a very sour note. This last paragraph gives organizations an out. So I guess we can look forward to a lot of organizations issuing inventive explanations to avoid social engineering testing.

Some Miscellaneous Issues

On page 2, in the seventh bullet under Terminology, we have the following.

“National Vulnerability Database (NVD): The U.S. government repository of standards based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance (e.g., FISMA).”

I know they just grabbed the description from the National Vulnerability Database (NVD) Web site, but that does not mean it is appropriate for use when discussing penetration testing. The Common Vulnerabilities and Exposures (CVE) database maintained by the MITRE Corporation would have been the better reference, since the NVD uses the CVE as its catalog of vulnerabilities.

I have no idea how the Federal Information Security Management Act (FISMA) ties into the NVD. If they had truly pulled the content exactly from the NVD Web site, they would have grabbed the fact that it ties to the Security Content Automation Protocol (SCAP), not FISMA.

My last comment on this topic regards the claim that the NVD (or the CVE) enables automation of vulnerability management. Had they brought in the reference to SCAP, I might have had an easier time going along with things. The NVD and CVE do not contain the actual exploits; they are merely catalogs of the vulnerabilities. Cataloging all of the vulnerabilities is the value, but in my opinion, citing either of these as automating the vulnerability management process is a stretch. They are a foundation that allows for automation, but they do not in and of themselves actually enable it.

Under 2.3.3 Web Applications on page 6, we have the following.

“It is common for an environment to host a web application that was not specifically coded for the organization such as commercial, off-the-shelf web-mail interfaces, document-sharing tools, file-transfer services, network-device administrative interfaces, etc. In these instances, the web application does not typically need an application-layer penetration test as the entity is not responsible for the source code of this type of software. Instead, the tester should perform a network-layer test and ensure the software was implemented, configured, and is currently being maintained in a secure manner (disabling or uninstalling unused services, blocking unused ports, applying current updates, etc.).”

I can see it now: organizations arguing over the pen testing of IBM WebSphere, Oracle eCommerce or similar Web application frameworks because they are packaged software. These are frameworks for which you must still write your own application, but penetration testers and their clients will get into arguments over that distinction.

A Few Good Things

Yes, I have complaints, but there are some good things that are contained in this supplement.

Probably the best thing about this information supplement is the checklist at the end of the document for evaluating penetration testing reports. The Council should have a similar item for vulnerability scanning reports as well.

The discussion of the penetration testing methodology in section 4 of the document is a good read for both penetration testers and the people that contract for penetration testing services. But keep in mind it is a framework, not the entire rule/procedure set for conducting a proper penetration test. Yes, there are some good details in areas, but it is not the detailed roadmap that some people will portray it to be.

Section 3 has a good discussion on how to evaluate the qualifications of a penetration tester. It was nice to see that while they document the common certifications a penetration tester may hold, they caution readers that certifications alone do not make a penetration tester qualified.

The case studies in section 6 probably do more to clarify than the previous five sections. They are good examples of properly conducted penetration tests.

Finally, there is a good discussion on the definition of a “significant change”. However, I think this discussion, as well as a discussion about the words “periodic” and “periodically”, belongs in a discussion of risk management and assessment rather than here. I wrote a post on this topic a while back to provide some guidance to people, and it was nice to see that my thoughts jibe with what they wrote in the supplement.

What Was Missed

The biggest item missed, I think, was how to handle the penetration testing of huge environments. Case study 6.3 discusses the concept of sampling somewhat, but it deals with a retailer with only six locations, not thousands of locations. There are also veiled references to sampling in section 2.3.4 in discussing a separate testing environment. But there is no deliberate and complete discussion of this very important topic.

Retailers such as Wal-Mart, Target and Home Depot have more than a thousand locations with 15 or more checkout lanes each, meaning there are, at a minimum, 15,000 devices in the field that technically need to be penetration tested. How does an organization penetration test such a large number of devices annually? The answer is that they do not test them all. It is physically impossible. However, this information supplement does not address that situation or provide guidance as to what the Council deems acceptable.

While I use some of the largest retailers in the world as my example, it is not just them as there are mid-sized retailers as well as hospitality organizations that face this same issue. I had hoped that how to approach these situations would have been explicitly addressed in greater detail but it was not.

The writers also missed a huge topic regarding social engineering testing. Unlike vulnerability scanning and penetration testing, in my experience it is virtually impossible for any organization to “pass” social engineering testing. So what would be considered “passing” in a social engineering test? That is not discussed, but should have been. Since this document broaches the subject, social engineering is probably a good topic to propose for this year’s Special Interest Group (SIG) recommendations so that it gets covered in its own information supplement rather than as a sidebar to penetration testing.

It is not that this information supplement is an entirely bad document. But the advice it provides needs to be taken with a bit of caution. As usual, the shortcomings will be leveraged by all of the “check boxers” to do as little as possible so that you get that box checked for 11.3.

21 Jan 11

Why Stuxnet Matters To PCI Compliance

There is an article in the Sunday, January 16, 2011, New York Times that says the American and Israeli governments were behind Stuxnet, confirming a rumor that has been running around ever since Stuxnet was identified.  The consensus of the experts is that Stuxnet is to cyber warfare what the airplane was to conventional warfare, a radical game changer.

So why does Stuxnet matter to merchants, processors, acquiring banks and the card brands?  A whole lot if you think about it.

First, Stuxnet proves beyond a shadow of a doubt that governments are investing in cyber war and that in cyber war anything on the Internet is fair game.  While Stuxnet was built to target the centrifuges Iran uses to refine uranium, there were a number of incidents of “collateral damage.”  This “collateral damage” was the result of Stuxnet attacking anything that used the same Siemens SCADA controllers.  Granted, Stuxnet was not as nasty to devices that were not centrifuges, but it still caused problems.  Imagine if an entity wrote an attack for a common device or protocol hoping to target one particular organization.  Do you think your organization could become “collateral damage” in such an attack?  I would say it is highly likely.

Second, imagine the damage that could be done if a terrorist group or a government decided to go after another country’s financial systems using a Stuxnet type of attack.  Does such an attack sound unrealistic?  It should not, given that the quickest way to bring any country to its knees is through its financial systems.  What makes things doubly worse is that, thanks to outsourcing, most banks use a very limited number of application solutions, making such an attack all the easier.  Given the reliance of countries on their economic capabilities, such an attack would likely only be carried out by a rogue nation such as North Korea, which has nothing to lose, or by any other country provoked long and hard enough.

But what if the attack was directed against credit card terminals?  While a lot of people would say that is farfetched, it is not as wild as it might seem.  All you need is someone on the inside at Ingenico, VeriFone and the like to doctor the card terminals’ software to do whatever you want it to do.  Even large merchants do not necessarily monitor their card terminals, so such an attack could go on for quite a while before it was noticed, if it ever was noticed.  Criminal gangs have been producing limited numbers of doctored terminals for the last four to five years.  Imagine this done on a large scale and you start to understand how nasty a threat this could be.  If doctored terminals were introduced at the manufacturer and flowed into the distribution stream, there would be no way of knowing that you had been compromised unless you were monitoring your network properly, which most organizations do not do.

Finally, there is the doctoring of integrated point of sale (POS) solutions or similar applications.  Again, not as farfetched as you might think.  There have been a number of instances over the years where software was written to provide backdoors or other openings in systems that allowed information to be leaked.  This is why certain governments have gone into the software business.  This is also why there are now valid concerns about how you confirm that your software is only doing what it is supposed to be doing.

The bottom line in all of this is that these concerns are no longer the ramblings of the paranoid among us.  These once seemingly imaginary scenarios have actually come to pass, and we need to address how to mitigate them.  So from a PCI perspective, what should an organization be doing?  While all of the PCI DSS provides methods to protect an organization, the following are what I consider the most important regarding inside attacks.

  • Monitor your internal network – This is the single most important way to protect your network from doctored devices and applications.  A doctored device or application must transfer the information it has collected, either in real time or in a batch process.  That transfer may be outbound, or it may be pulled via an inbound connection.  Outbound monitoring should be the easiest because most organizations know what external IP addresses to allow.  Inbound communications are always disputed by most organizations.  But if you remind people of how GoToMyPC and others of its ilk operate, they begin to understand how their networks could be easily compromised from the outside.  A minimal sketch of this monitoring idea follows this list.
  • Analyze your logs – This is probably the next area where a lot of organizations are not doing a good enough job.  Most organizations do a good job collecting log data, but then do a mediocre or poor job analyzing that data to find exceptions.  The reason for this poor performance is a failure to keep attack definitions current.  While the initial implementation may have done a good job creating definitions, as time goes on, the log analyses are not enhanced or updated to reflect changes in attacks and new attacks.
  • Tighten internal controls – Once inside most organizations’ security perimeters, security gets pretty loose and free, if it even exists at all beyond a logon.  Unfortunately, attackers understand this fact, which is why they focus on getting inside.  Once an attacker is inside, it is pretty much a cakewalk to get whatever they want.  This is why locking down ports, reviewing firewall and ACL rules, disabling or removing unused services and disabling or removing unused user accounts become so important.  The fewer attack points you provide on your internal network, the more resilient it will be should an attacker get inside.
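
To make the monitoring point concrete, here is a minimal sketch in Python of the outbound check described in the first bullet.  It assumes a hypothetical CSV export of firewall logs with src_ip, dst_ip and dst_port columns and a hand-maintained allowlist of expected external destinations; real firewalls and log management tools each have their own formats, so treat this as an illustration of the concept rather than a drop-in tool.

    import csv

    # Hypothetical allowlist of external destinations the organization expects.
    ALLOWED_DESTINATIONS = {"203.0.113.10", "198.51.100.25"}

    def flag_unknown_outbound(log_path):
        """Return outbound log entries whose destination is not on the allowlist."""
        suspicious = []
        with open(log_path, newline="") as f:
            # Assumed columns: src_ip, dst_ip, dst_port
            for row in csv.DictReader(f):
                if row["dst_ip"] not in ALLOWED_DESTINATIONS:
                    suspicious.append(row)
        return suspicious

    for hit in flag_unknown_outbound("fw_outbound.csv"):
        print("Review:", hit["src_ip"], "->", hit["dst_ip"], "port", hit["dst_port"])

Even something this crude addresses the batch-style transfers described above, because a doctored device eventually has to talk to an address you did not expect.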

Remember, while the city of Troy was protected by supposedly insurmountable walls, Troy fell because it was attacked from the inside, an attack vector that Troy felt was not realistic or possible.  Troy’s short-sightedness was the result of arrogance and a failure to understand that an enemy determined to defeat you will find a way to overcome the insurmountable.  Learn from Troy’s mistake.

21 Nov 10

The Threat Landscape Is Changing – Cloud Cracking

There was an article published on Threat Post this past week regarding a German security researcher who used a new feature of the Amazon.com EC2 cloud computing environment to crack SHA1 password hashes.  I am sure a lot of you are asking yourselves, “So what?  This is just another of those Chicken Little warnings that come out all of the time.”  I would agree that publications use highly threatening headlines to hype these sorts of articles to attract readers.  But if you read these articles closely, ignoring the hype factor, and think through the concepts they are discussing, you can understand the threat they might bring to your environment.

The thing that caught my eye about this threat is that it cost the researcher less than an hour of their time and a whole $2.10 to crack 160-bit SHA1 hashes of six-character passwords.  Even more disconcerting was that all of the necessary hardware and software was readily available through the EC2 service.  And had the researcher desired, they could have used even more GPUs to shorten the time to crack these hashes.  Granted, the researcher only cracked 14 of these hashes, but what if those hashes were for one or more administrator accounts?
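
To see why six-character hashes fall so cheaply, here is a minimal, single-threaded sketch of the attack in Python.  The target hash is one I generated for illustration, not one from the research; the researcher’s GPU-backed attack is essentially this same loop run massively in parallel.

    import hashlib
    import itertools
    import string

    # Illustrative target: the SHA1 hash of a six-character lowercase password.
    target = hashlib.sha1(b"secret").hexdigest()

    def crack(target_hex, charset=string.ascii_lowercase, length=6):
        # 26**6 is roughly 309 million candidates; adding upper case and digits
        # (62**6) is roughly 57 billion, still trivial work for rented GPUs.
        for candidate in itertools.product(charset, repeat=length):
            word = "".join(candidate).encode()
            if hashlib.sha1(word).hexdigest() == target_hex:
                return word.decode()
        return None

    # Pure Python is slow, so this demonstrates the concept, not the speed.
    print(crack(target))  # eventually prints "secret"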

I am sure a lot of you are now saying, “Yeah, but, this is all theory and not a ‘real’ threat.”  No doubt about it.  I too have been known to toss out my famous, “In theory, theory works,” at this point.  But the only way to determine if the research really is a real threat is to read the article or research paper and then determine if the threat can truly be applied in the real world.  Based on what I have read about this threat, I would say that there is a great potential for misuse of this EC2 service for all sorts of encryption attacks, not just SHA1 hashes.

A lot of you are now probably pointing to the fact that these were all six-character long items that were hashed.  I would agree, but then also point out that they were 160-bit hashes, not less than 128-bit.  A lot of security professionals mistakenly believe that if they get hashes or encryption above 128 bits, everything is secure.  However, the number of bits is not the only factor; there is also the strength of the password or key being protected.  If the input is not very long or is easy to guess, then it does not matter how many bits the algorithm uses.

A lot of security professionals blow off threats because they just assume that if they are scanning regularly, any new threat will be caught by their scanning.  Unfortunately, scanning only looks for vulnerabilities in devices based on known attack vectors, not a threat like this one.  This threat comes into play with any encrypted data or transmissions that an attacker can come across and may have little to do with vulnerabilities.

You can argue that because you scan, comply with the PCI DSS and do not have any vulnerabilities with a CVSS score of 4 or higher, you are therefore secure.  However, as any network security professional that conducts penetration testing will attest, vulnerabilities with a CVSS score of less than 4 can be put together and used to compromise a network.  So, just because you do not have any vulnerabilities with a CVSS score of 4 or higher does not necessarily mean that you are secure.

The real threat here is that should an attacker get hold of your encrypted data or data flows, they can just load it up and use Amazon.com’s EC2 cloud to attempt to break the encryption.  As a result, all of those claims over the years by security pundits that attackers would need access to supercomputers have apparently been realized with the advent of cloud computing.

But again, this is all theory, right?  You wish.  Even more shocking was a tidbit tucked away in the article.  There is a site named WPACracker.com that offers a 400 CPU cluster and a 284 million word dictionary and promises to crack WPA passwords in an average of 20 minutes for $17.  As a result, all of you relying on WPA to secure your wireless should consider upgrading to WPA2 as soon as you can.

The next thing that rolled through my mind is what if some enterprising individual decided to conduct some “research” for themselves using the Berkeley Open Infrastructure for Network Computing (BOINC) platform?  This is the platform that runs SETI@home, Einstein@home and other worthwhile scientific research projects that need lots of computing power.  Based on the BOINC home page, that would put the potential of over 4,000 teraflops at the command of our “researcher.”  However, like all things technology based, our enterprising researcher would need to find a vulnerability in BOINC and leverage that to gain access to all of that computing power.  Either that or make their research project look enticing to the BOINC user population so that they add that workload to their systems.  But given that BOINC is open source, it is also possible that attackers could create their own BOINC network for the purpose of cracking encryption.  I could imagine botnets being put to this purpose.

This whole threat plays out best when the attacker has inside access to an organization’s network and data.  Reading the latest reports from Verizon Business Services and Trustwave, the majority of breaches have an inside component, so it is not too farfetched that an insider would have access.  So, if you are not monitoring your network and sensitive data and not strictly controlling access to your data, then it is anyone’s guess as to whether or not someone has taken your data and is now attempting to decrypt it.

If you learn anything at all from this post, it should be that attackers are leveraging cloud computing just like the rest of us.  The unfortunate aspect is that attackers are leveraging the cloud to continue their questionable and sometimes illegal activities.  And in leveraging this new technology, their potential for success is only going to go up.

26 Mar 10

The Purpose Of Penetration Testing

I have received a number of questions regarding my penetration testing post.  All of the questions seem to concern why penetration testing is required.  Penetration testing has always been a bone of contention within the network security community.  So it should be expected that questions would arise regarding why it is required under the PCI DSS and why the requirement was expanded to internal testing under v1.2.

As I point out in my original post, penetration testing can leave the target compromised if testing is not conducted properly.  Conducting penetration testing properly usually involves using non-production systems configured the same as the production systems.  Unfortunately, a lot of organizations, including some large ones, do not have such environments, and penetration testing must be performed against the production environment.  Either way, once the penetration testing is completed, the penetration tester needs to “clean up,” if possible, by removing any exploits from the devices tested.  Unfortunately, a lot of tools do not allow for satisfactory clean up, and the only alternative is to rebuild the device that was compromised.  It is because of this that a lot of IT professionals do not like penetration testing.

Unfortunately, penetration testing is a necessary evil to ensure that an organization’s network infrastructure and servers are secure.  If all you ever conduct is vulnerability scanning, how do you know that the vulnerabilities identified cannot be used to further someone’s attack against your infrastructure?  And that is the problem.  Without conducting a penetration test to attempt to leverage the vulnerabilities discovered, you have no way of knowing the true risk presented to your network by those vulnerabilities.

The bigger problem with vulnerability scanning is that non-security professionals assume that because a vulnerability is rated medium or low, it is no big deal.  They get a false sense of security because the vulnerabilities are not considered serious.  Without the penetration test, there is no way to show them that low and/or medium risk vulnerabilities can be leveraged to compromise their network and potentially gain access to their cardholder data environment.  I have seen many an example where networks were compromised through supposedly low risk vulnerabilities that ultimately gave the penetration tester a beachhead from which to launch even more sophisticated attacks from inside the network.  And if you think this is unrealistic because you have properly secured your network, rest assured there are people successfully attacking just such networks.

With v1.2 of the PCI DSS, the penetration testing requirement was expanded to include the internal cardholder data environment.  The reason for expanding penetration testing to the internal network was in response to the breaches that have occurred.  Verizon Business Services and Trustwave analyzed the breaches they had forensically investigated and came to the conclusion that around 70% of those breaches were the result of insider human error.  And while the initial compromise did not necessarily directly involve the cardholder data environment, once the attacker was inside, there were limited barriers to compromising the cardholder data environment.  As a result, the PCI SSC revised the PCI DSS requirements and mandated penetration testing as well as quarterly vulnerability scanning for external and internal assets in the cardholder data environment.

21 Feb 10

What Is Penetration Testing?

This question comes up more than I would like so it is probably a good discussion topic.  And it pairs up nicely with my previous post regarding passing vulnerability scans.

First, let us get the obvious out of the way.  A penetration test is not a vulnerability scan, and a vulnerability scan is not a penetration test.  Yes, I know that the lines are blurring between vulnerability scanning and penetration testing with certain tools.  However, by its true definition, a penetration test is one where you take the output of a vulnerability scan and attempt to leverage the vulnerabilities identified to obtain control of a device and ultimately gain control of your objective.  In the case of PCI compliance, the objective is to obtain cardholder data.

Next, penetration testing is not all about the tools.  I do not care whether you use tools like Metasploit, SAINTexploit or Core Impact.  The best penetration testers use very few tools.  But the one tool that all good penetration testers share in common and rely on most is the most powerful computer on the face of the Earth, their brain.  Tools can help, but the key to successful penetration testing is being able to connect the dots based on the vulnerabilities you have available.  You have to change your mindset from one of being a “good guy” to one of “what kind of opportunity do I have and how far can I take it?”  In most cases, the target has very, very few or even no vulnerabilities, but other devices around the target may have vulnerabilities that ultimately lead to the target being compromised.  It is the ability of the penetration tester to put the path together that is important, as the path to a compromise is never a straight line.

Here is a real world example of what I am talking about.

I was working with a company, and one of the things they had not performed was an internal penetration test.  As their QSA, I obviously asked them to conduct one.  The first question their Director of Security asked was whether it was legal for his group to conduct such a test.  This is the best first question to ask and something you should always ask whether you are a consultant or an employee.  Either way, you should always have a letter signed by an officer of the organization stating that you are allowed to conduct the penetration test.  Under federal law, it is a felony to conduct such testing without that permission.  There are a number of examples where well-meaning people have been arrested and put in jail because they did not have such approval.

The next question from the Director was what tool they should use.

I stopped the Director right there and said, “Your next question should have been, is my staff qualified to conduct a penetration test?”  Just because people have certifications such as CISSP or CISM does not mean they qualify as penetration testers.  If a person has a current GIAC Certified Penetration Tester (GPEN) certification, I would consider that person qualified.  By the same token, just because someone is a great hacker does not necessarily qualify them to be a penetration tester either.  A good penetration tester needs to not only have the skills, but also needs to document everything about how they got in.  I worked with a great hacker a number of years ago who was probably one of the best at getting into whatever system he put his mind to.  However, he was a lousy penetration tester because he failed to document how he compromised systems.  As a result, once he was done, he had no documentation to show for all of his work other than the compromised target and a very vague memory of how he got there.  It is the documentation of the compromise that is worth its weight in gold and what you need as a result of a penetration test.  Without such documentation, there is no way to address the security shortcomings that were used to compromise the target.  In reviewing the Director’s staff, he had a couple of people that I deemed qualified, so we moved forward.

Back to the tool question.  The first tool out of the Director’s mouth was Nessus.  Nessus is a great vulnerability scanner and can do some penetration-like testing, but it is not a true penetration testing tool.  So we used Nessus to get a list of potential vulnerabilities for the in-scope PCI devices and systems, and they downloaded a copy of Metasploit to use as their penetration testing tool.  One of the things I do not like about Metasploit is that not all exploits are necessarily available under it.  Another thing that troubles me is that a lot of the Metasploit exploits are “live” exploits and, if successful, compromise the target.  If you want to “neuter” Metasploit exploits, it is up to you and your programming expertise to identify the problem areas and remove them while still having a valid test.  Regardless, once a target gets compromised, the only option to correct the problem is to rebuild that device.  As a result, another set of vulnerability scans and penetration tests has to be done, which could cause the whole process to start over again.  Some of the commercial penetration testing tools use “widgets” that get installed in memory for conducting their compromises.  Since these “widgets” are only memory resident, the target only needs to be rebooted to clear them out of the system.  The key thing to note, though, is that regardless of approach, once a penetration test is done, there is clean up afterwards that must be done in order to ensure security.
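
As an aside, the handoff from the Nessus scan to the penetration testing tool is mostly manual triage.  Here is a hedged sketch of that step in Python, assuming a hypothetical CSV export of scan findings; the column names are my own invention and real Nessus exports differ, but the idea of shortlisting findings by host so attack paths can be planned is the same.

    import csv

    def shortlist_findings(scan_csv):
        """Group scan findings by host so the testers can plan attack paths.
        Low-rated findings are deliberately kept; as discussed below, they
        can become the beachhead."""
        plan = {}
        with open(scan_csv, newline="") as f:
            # Assumed columns: host, port, finding, risk
            for row in csv.DictReader(f):
                plan.setdefault(row["host"], []).append(
                    (row["port"], row["finding"], row["risk"]))
        return plan

    for host, findings in shortlist_findings("nessus_export.csv").items():
        print(host)
        for port, finding, risk in findings:
            print(f"  {port}: {finding} [{risk}]")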

Planning an attack is very important.  Anyone can use a tool and get nowhere.  The art in penetration testing is how an attack is constructed.  There needs to be a good analysis of the vulnerability scan to see what opportunities are available.  In the case of my client, there were a very limited number of vulnerabilities with which to work.  There were a couple of low-rated vulnerabilities that showed some promise.  One of the penetration testers asked, “Low-rated vulnerabilities, how can those be used?”  It all depends on what those low-rated vulnerabilities are.  In this case, there were a couple of SMB and NetBIOS vulnerabilities that, while rated low, could be used to escalate privileges.  And that is exactly where I suggested they start.  It took the penetration testers a couple of days, but ultimately they were able to leverage those original vulnerabilities to escalate their privileges to the point where they penetrated a server that stored cardholder data.

The first piece of good news is that the data stored on the compromised server was encrypted and the testers could not get to the keys, so the data remained secure even though the server was compromised.  Another piece of good news is that these penetration testers kept good notes on everything they did and had plenty of information on what needed to be fixed to improve the organization’s security posture.  The final piece of good news was that no devices were harmed (i.e., crashed) during the penetration test.  Two devices did require replacement after the test because the escalation of privileges left them in an insecure state, but this was not too onerous since the environment is redundant and the backups were used for testing.

But there was also bad news during the penetration test.  The worst piece of bad news was that even though the penetration test set off a number of alerts, those alerts were not addressed by the network administration group.  The common refrain we heard during the debriefing was that they knew it was a penetration test and therefore just ignored the alerts.  When asked how they knew it was the penetration test and not a valid alert, we got looks of incredulity, as though it was just painfully obvious.  However, given that the attack was launched from outside the organization, management found it hard to believe that these people knew it was not a real attack.  As a result, management is taking a hard look at this group as well as looking at ways to make the alerting more effective and require that all alerts be addressed.

So, what are the lessons learned from this exercise?

  • Vulnerability scanning is not penetration testing.  Even if the vulnerability scanner allows for the use of credentials and can correlate internal and external vulnerabilities, it does not replace a properly conducted penetration test.
  • Not everyone can be a penetration tester.  Certifications do not necessarily matter.
  • Penetration testing is not a license to crash every device you test.  Penetration testing is to prove that a compromise can occur and that an objective can be achieved, not that devices can be crashed.  Crashing devices only proves that a denial of service can be conducted, and anyone can prove that.
  • Regardless of the risk rating on vulnerabilities, you cannot discount their value in a penetration test.
  • Planning a penetration test is like planning a campaign during a war.  Not only does the path to the ultimate objective need to be plotted, but contingencies must be planned in the event the path to the objective is blocked at any point.  Planning includes considering any likely points where the compromise might be noticed.
  • Penetration testing will likely result in service outages and those outages should be taken into consideration during the planning process.  If possible, the penetration test should be conducted in a replica of the production environment.  If the penetration test is conducted in production, then management needs to understand and approve of the likely service outages that will occur as a result of this testing.
  • Penetration testing is not just running some tools and producing a report.  Even with great tools, penetration testing takes time and can take a lot of time compared to vulnerability scanning.  Patience is required.  However, if you are spending more than five days conducting a penetration test, you are likely taking too much time.
  • Regardless of whether or not you inform others in the organization that a penetration test is being conducted, you should expect that any alerts generated are addressed and that management is notified just as your incident response plan requires.

UPDATE: Here is a great article on 10 tips for successful penetration testing.  http://www.csoonline.com/article/636040/penetration-tests-10-tips-for-a-successful-program

14 Feb 10

“Passing” Vulnerability Scans

What is a “passing” scan?  PCI DSS requirement 11.2.b defines a passing scan as one that does not report any urgent, critical or high vulnerabilities, i.e., any vulnerability with a CVSS base score of 4.0 or greater.  So what is the big deal, and why is a passing scan so difficult to obtain?
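
The pass/fail logic itself is trivial.  Here is a minimal sketch in Python, assuming the findings and their CVSS base scores have already been extracted from the scanner’s report; the data below is made up for illustration.

    # Requirement 11.2.b: a scan "passes" only if no finding has a
    # CVSS base score of 4.0 or greater.
    findings = [
        {"host": "10.1.1.5", "finding": "Outdated OpenSSH", "cvss": 7.5},
        {"host": "10.1.1.8", "finding": "Weak TLS cipher", "cvss": 4.3},
        {"host": "10.1.2.3", "finding": "ICMP timestamp disclosure", "cvss": 2.1},
    ]

    failing = [f for f in findings if f["cvss"] >= 4.0]

    if failing:
        print("Scan FAILS - remediate and rescan:")
        for f in failing:
            print(f"  {f['host']}: {f['finding']} (CVSS {f['cvss']})")
    else:
        print("Scan passes: nothing at CVSS 4.0 or greater.")

As the statistics below show, the hard part is not the arithmetic; it is that almost every scan will turn up something at 4.0 or above.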

The National Vulnerability Database (NVD) maintained by the National Institute of Standards and Technology (NIST) has 40,029 vulnerabilities cataloged from 1999 through the end of 2009.  Of those vulnerabilities, almost 94% (37,523) have a CVSS base score of 4.0 or greater.  As a result, statistics say that there are going to be times when the vulnerability scan comes back with an urgent, critical or high vulnerability.  While requirement 11.2.b allows a minimum of quarterly scanning, requirement 6.1.b requires that all urgent, critical or high patches must be applied within a month.  As a result, once a vulnerability is identified by your scan, you essentially have 30 days to apply a patch and you must rescan to ensure that the patch has been applied.

Under a quarterly scanning program, when a non-passing scan occurs, you must schedule a remediation scan at least 30 days later to prove that the urgent, critical or high vulnerability was patched.  Given that statistics say 94% of all vulnerabilities have a CVSS base score of 4.0 or greater, it is highly likely that you will have to scan at least eight times during the year: the four quarterly scans plus four more remediation scans.  It is also highly likely that those four remediation scans will reveal new vulnerabilities, meaning you will have to scan at least four more times.  That means at least 12 scans, possibly more.  This is why a lot of organizations just do monthly scans.

But this is not the entire patching story.  Most of the time, vendors have a patch within days or a week or two of identification.  However, there are instances where vendors have taken months or even years to deliver a patch.  As a result, in certain instances, patches may simply not be available from a given vendor.  In some rare and bizarre scenarios, we have seen vendor patches remove earlier patches, thus reintroducing old vulnerabilities.  When systems were reviewed, the system patch records still indicated that all patches had been applied; however, the vulnerability had reappeared and had to be patched again.

Addressing vulnerabilities can get even more delayed when we talk about packaged software.  For organizations running packaged solutions, they typically do not have the option to patch their software within the 30 day window required in 6.1.b.  This is because packaged software vendors need to test operating system patches and other system patches with their software prior to telling their customers that the patch is compatible with the packaged solution.  In some cases, the software vendor issues their own service packs on a quarterly, semi-annual or other periodic basis that contain compatible system patches as well as any updates to their own software.

This is where the experience of the QSA comes into play.  An experienced QSA understands the realities and that scans showing new vulnerabilities are a fact of life.  As a result, I recommend the following guidelines to determine if an organization is meeting their PCI compliance obligations regarding patching and scanning.

  • Determine that vulnerability scanning, penetration testing and patch management processes are documented.  Obtain and review all policies, standards and procedures related to vulnerability scanning, penetration testing and patch management.
  • Determine that there is proof that supports that the patch management process works as documented.  Proof that a patch management process is working includes reports from tools such as Microsoft WSUS, BigFix, Lumension, GFI LANguard, Shavlik HFNetChkPro and the like, as well as reviews of system patching records from the systems themselves and the vulnerability scanning and penetration testing reports.
  • Determine that the vulnerability scanning and penetration testing processes are functioning by reviewing all available reports from those processes for the PCI compliance reporting period.  Confirm that any new vulnerabilities identified are either addressed in the 30 day window or are documented as to why they were not addressed.  Determine that rescanning and retesting are performed after any patching has been completed.  Remember, only your external quarterly scans need to be done by an ASV.  Any other scanning done can be done by qualified internal resources, so you do not have to incur additional costs of an ASV for scans outside of the quarterly scans.
  • Review change control records and determine if any significant changes have been made to either PCI in-scope applications or the in-scope networks.  If significant changes have occurred, match the completion dates of those changes to vulnerability scans and penetration tests to ensure that scanning and testing was performed after those significant changes were implemented.
  • If a vulnerability is not patched, obtain documentation explaining why the vulnerability is not patched, the risk presented by the vulnerability, what has been implemented to mitigate any additional risks presented by the vulnerability and, if possible, when is the vulnerability expected to be addressed.  Determine that management has approved that the vulnerability has not been patched and they accept any additional risk presented.

If an organization can provide all of this documentation and proof, in my opinion they are meeting their PCI compliance obligations regardless of what their vulnerability scans and penetration tests document.

13 Jun 09

Wireless Security – Random Thoughts On How To Fix

This has possibly been the hardest post yet to write, mainly because I am at a loss for answers.  There just do not seem to be a lot of solutions out there to address real wireless attacks.  So, I have done my best to come up with some thoughts on how to conduct a wireless assessment that will provide some reasonable level of assurance that your network is not compromised.  Note, I said ‘reasonable’ as I do not think there is a way to get absolute assurance that your network cannot be compromised when wireless is involved.

  • Document the business reasons for implementing a wireless network.  Just because you can does not always mean you should.  In a significant number of situations, you will find that the only reason for implementing wireless is the convenience it offers.  Does your organization really need wireless ‘guns’ that update inventory in real time, or can you use guns that record inventory and then upload it in batch when the ‘gun’ is placed in a cradle?  In most situations, the cradle works just as well as the wireless solution.  That is not to say that there are not situations that warrant a wireless solution.  I have a number of clients that use wireless terminals and handhelds in innovative ways to improve customer service.  However, until there is a real business purpose with a real return on investment, do what you can to push back and not implement wireless.  But be advised, since some vendors are now only producing wireless solutions, finding a hard-wired alternative may not be possible.
  • Architect your wireless network to be secure from the start.  There are ways to do this that are not as onerous as you might think.  Primarily, the wireless network needs to be isolated from the rest of your network.  The reason is that no matter what security you implement, wireless uses the public airwaves to transmit, the key word being ‘public’.  As a public network, attackers can eavesdrop on your wireless whenever they want, they can and will make attempts to crack your security all they want, and there is nothing you can do to stop it.  Once your wireless network is isolated, treat it as the public network it is and implement firewalls, IDS/IPS and any other security measures on your wireless network segment.  Make sure that you create a consistent configuration so that you minimize the potential for introducing a mistake; a minimal sketch of a configuration drift check follows this list.  One of the best methods is to use those centralized, managed wireless solutions versus individual wireless access points.
  • The PCI SSC needs to change requirement 11.1 to address the realities of the real world.  First, I question the usefulness of wireless scanning in the first place and I would highly recommend that it be dropped.  But assuming it is here to stay, for all but the very smallest of merchants, scanning with a wireless analyzer quarterly is a pipe dream.  I would recommend that quarterly testing is only a requirement when it is possible.  For all other merchants that wish to perform wireless testing with an analyzer I would recommend that requirement 11.1 suggest a sampling approach to ensure that all facilities are tested when significant network changes are implemented at the facility or at least once every three to four years.  Let us face facts here, there is no way Best Buy, Wal*Mart or Target are going to test their hundreds or thousands of stores on a quarterly basis.  It is just physically impossible.  They do not even conduct individual store financial audits that often, so who thought they would get wireless scans done that often?  Next, the PCI SSC has to provide in requirement 11.1 some additional alternative solutions besides an IDS/IPS on the wireless network segment.  Based on my experience, almost all of my clients that are using wireless are creating a compensating control to satisfy requirement 11.1.  It seems to me that if the majority of organizations with wireless are using a compensating control to meet the requirement, then the PCI SSC needs to create a requirement that does not require the majority of organizations to use a compensating control to satisfy the requirement.
  • If your organization has decided to use wireless scanning with an analyzer, admit that wireless scanning requires a technical expertise that your organization likely does not have.  This is a perfect project for a qualified network security consultant to perform.  The costs for such projects are easy to control as they are driven by the location and number of facilities you need scanned.  If your facilities are widely scattered, you may want to go with a consulting firm that better covers your locations so that you can minimize travel costs.  You can also control costs by using a consistent configuration for your wireless.  That way you can use a sample of facilities versus scanning every facility.  However, since building construction usually varies from location to location, that may require making sure that all your facilities are scanned within a one or two year period.
  • Don’t be buffaloed by a consultant’s certifications.  Customers are usually baffled by all the letters following a consultant’s name (even I have a boatload of letters after my name).  While certifications are good, it is a consultant’s practical experience with security and wireless that counts.  Nine times out of ten, the consultant that meets with you will not be the one that does the work.  So, make sure that you and someone from your technical staff review the biographies of the consultants who will actually work on your project and that you personally talk to them either face-to-face or by phone.  Ask them about the wireless assessment engagements they have done.  Have them describe the process and make sure that it matches the process the sales person described.  Ask them about the typical findings that result from such projects and make sure that they can explain their findings to both technical and non-technical personnel.  And of course, make sure that you are not buying the process that I have discussed earlier.
  • Don’t buy supposedly sophisticated looking tools.  Regardless of whether you are doing it yourself or getting a consultant to assist, don’t buy based on tools.  A lot of people do good work with NetStumbler or Kismet and the right wireless card.  Some of the expensive tools are just costly solutions using the same techniques as the person with shareware tools.  So when evaluating wireless security solutions, ask the vendor tough questions about how their solution discovers rogue access points and get them to address my earlier points on why wireless scanning is flawed.  In most situations, you will find that these vendors are offering a solution no better than the one you can get for free.  When talking to consultants, be wary of the consultant that talks about their tools and does not talk much about their process.  Consultants that talk ad nauseam about their tools typically do not have the experience to deliver the results that you desire.  They are typically going to be no better than anyone else with a scanner.
  • Get a good understanding of the consultant’s process.  Ask the consultant to describe their wireless security assessment process.  Experienced consultants will have a number of service offerings in this area, from basic scanning (essentially what I described earlier, but with a much more robust analysis of the results) to a full-blown wireless assessment that can resemble something out of a good spy movie.  Obviously, the more sophisticated it gets, the higher the cost.  For some clientele, such as DoD contractors and the like, a very detailed and sophisticated analysis of all things wireless is required to satisfy contractual obligations.  For most merchants, what is needed is something toward the lower end of the cost scale that provides reasonable assurance that their network is secure.  For most processors, the wireless assessment will likely be a bit more robust than a merchant’s because of the added risk of the data they retain.
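
To make the sampling idea in the requirement 11.1 bullet concrete, here is a minimal sketch in Python of a quarterly rotation schedule.  The store count and the three-year window are hypothetical assumptions on my part, not anything mandated by the PCI DSS.

```python
# Rotate wireless scans across many stores so every facility gets
# analyzed at least once within a few years, one batch per quarter.
# The store list and 3-year window below are illustrative assumptions.

def quarterly_sample(stores, years=3):
    """Split the store list into one batch per quarter so the whole
    estate is covered once within `years` years (years * 4 quarters)."""
    quarters = years * 4
    batch_size = -(-len(stores) // quarters)  # ceiling division
    return [stores[i:i + batch_size] for i in range(0, len(stores), batch_size)]

stores = [f"store-{n:04d}" for n in range(1, 1201)]  # hypothetical 1,200 stores
for q, batch in enumerate(quarterly_sample(stores), start=1):
    print(f"Quarter {q}: scan {len(batch)} stores ({batch[0]}..{batch[-1]})")
```

The point of the design is simply that every facility lands in exactly one quarterly batch, so the entire estate is covered within the chosen window without ever trying to scan thousands of stores in a single quarter.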

I have taken up a lot of bandwidth on this topic, possibly too much.  However, I think you can start to see that wireless is not as simple a technology to secure as some of the security standards portray.  Wireless is not a technology that you just “add on” when you need it.  In the end, the most critical point is that wireless requires significant forethought before being added to a network.

06
Jun
09

The Shortcomings Of Wireless IDS/IPS

In my first post, I discussed the wireless analyzer approach to complying with requirement 11.1.  I documented where I think the current techniques fall short and give organizations a false sense of security.  In this post, I am going to give you what I think are the shortcomings of wireless intrusion detection/prevention systems.

Wireless IDS/IPS solutions seem to break down into two types: those that work like the wireless analyzer approach from my previous post on this subject, and those that work like a traditional IDS/IPS.  Let us discuss the analyzer style first.

Of the analyzer-style IDS/IPS products I have had demonstrated to me, most work essentially the same way as the wireless analyzer methods I discussed in my last post.  These products typically pair wireless sensors connected to your network with a central server that also analyzes the wired network when a suspect rogue AP is discovered.  The wireless sensor acts as a wireless spectrum analyzer to locate potential rogue APs, the idea being that multiple sensors can triangulate on the rogue AP and provide its location.  The inability of these sensors to accurately locate APs outside of roughly a 15-foot radius can make things dicey and potentially expensive.  Therefore, for large facilities, you can expect to spend a lot on sensors for full coverage.  For example, an average Wal*Mart is around 100,000 square feet; at one sensor per 15-foot grid cell, providing adequate coverage for the average store would require approximately 445 sensors.
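
For the curious, here is that back-of-the-envelope arithmetic in Python.  The 15-foot figure comes from the sensor limitation above; the assumption that each sensor can only be trusted to cover one 15-foot by 15-foot grid cell is mine, and it happens to land on the roughly 445 sensors cited.

```python
import math

# Back-of-the-envelope sensor count for full rogue-AP coverage.
# Assumption (mine): each sensor reliably covers one 15 ft x 15 ft
# grid cell, i.e. 225 sq ft per sensor.

def sensors_needed(floor_area_sqft, grid_spacing_ft=15):
    cell_area = grid_spacing_ft ** 2  # area each sensor is trusted to cover
    return math.ceil(floor_area_sqft / cell_area)

print(sensors_needed(100_000))  # -> 445 for an average 100,000 sq ft store
```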

On the wired side of things, these analyzer IDS/IPS solutions, along with the exclusively wired solutions, look for rogue network traffic, ICMP responses, MAC addresses and/or SNMP information that indicate a device is a rogue AP.  In the end, they sound sophisticated, but they still rely on the rogue access point being configured in a way that allows it to be discovered.
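
As a rough illustration of the wired-side technique, here is a minimal sketch that flags MAC addresses whose vendor prefix (OUI) belongs to a known access point maker.  The OUI list is a hypothetical placeholder rather than a real vendor database, and the sketch makes the weakness obvious: spoof the MAC and the check sees nothing.

```python
# Sketch of the wired-side detection idea: flag MAC addresses seen on
# the network whose OUI (first three octets) matches a known AP vendor.
# The OUI set below is an illustrative placeholder, not a real database.
# An attacker who spoofs the MAC address walks straight past this check.

AP_VENDOR_OUIS = {"00:14:6C", "00:18:39", "00:24:B2"}  # illustrative only

def looks_like_ap(mac):
    """Return True if the MAC's OUI matches a known AP vendor prefix."""
    return mac.upper()[:8] in AP_VENDOR_OUIS

seen_on_switch = ["00:14:6C:AB:CD:EF", "D4:3D:7E:12:34:56"]  # example input
for mac in seen_on_switch:
    if looks_like_ap(mac):
        print(f"possible rogue AP: {mac}")
```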

Attackers know how these solutions operate and configure their rogue APs to deter or even avoid these identification techniques.  As a result, these more sophisticated solutions are also blind to the truly rogue AP.

In addition to these obvious issues, false positives can be quite a problem for solutions that monitor the wireless spectrum.  This is particularly true in situations where new APs regularly appear outside your facilities.  And with wireless becoming more and more common, sorting through the false positives to find the real threats can keep your security team quite occupied.

And then there is the whole issue of treating 802.11 devices as the only source of compromise.  If an attacker is going to go to the length of compromising your network, why would they not use cellular technology and avoid 802.11 altogether?  With 3G cellular networking all the rage, the speed of these cellular solutions is no longer a limiting factor.  None of these solutions truly addresses the cellular issue, so the vulnerability remains.  Unfortunately, the security vendors, PCI SSC and card brands seem to only react to incidents, not think ahead.  So, until a breach occurs involving cellular, we will likely not see anything that addresses this risk.

And what about other forms of wireless such as Bluetooth and satellite?  Before you write them off as lacking transmission distance or being too complicated and expensive, it is that short-sightedness that will get you in trouble.  Believe it or not, there are Bluetooth USB adapters with ranges of up to 350’.  In addition, pairing and security codes are well documented by vendors, so attaching to almost any Bluetooth device is an easy proposition.  Bluetooth can be used to load malware on a system and begin the compromise process.  And if you think satellite is the last safe wireless medium, at this year’s Black Hat, Adam Laurie discussed hacking not just satellite TV but satellite data transmissions as well.

In the end, the important thing to remember is that the public airwaves are just that – ‘public’.  And you must treat them as public or you will get burned.

In a future post, I will discuss my thoughts on how I think the PCI DSS should address these shortcomings.

08
Mar
09

Vulnerability Scanning and Penetration Testing

For whatever reason, vulnerability scanning and penetration testing are areas that just seem to continue to confuse people, even information technology personnel.

Vulnerability scanning is the act of identifying potential vulnerabilities in network devices such as firewalls, routers, switches, servers and applications. The operative word is ‘potential’. Vulnerability scanners merely identify potential vulnerabilities; they do not always assess the ability to exploit the vulnerability. To conduct a vulnerability scan requires the use of a vulnerability scanning tool such as Qualys, Internet Scanner, SAINT or Nessus. Moreover, while almost anyone with networking experience can run a vulnerability scanner, it requires someone with significant networking and security experience to interpret the results from a vulnerability scanner.
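
To illustrate why the word ‘potential’ matters, here is a minimal sketch of the simplest thing a scanner does: grab a service banner and match it against versions with published advisories.  The host name and the “known bad” version list are illustrative assumptions; nothing here verifies that the flaw can actually be exploited.

```python
import socket

# Minimal illustration of why scan results are only *potential*
# vulnerabilities: this check matches a banner string against versions
# with published advisories but never attempts any exploitation.
# The host, port and "known bad" list are illustrative assumptions.

FLAGGED_BANNERS = ("SSH-2.0-OpenSSH_4.3",)  # hypothetical flagged versions

def grab_banner(host, port=22, timeout=3):
    """Connect to the service and return whatever banner it announces."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

banner = grab_banner("scanme.example.com")  # hypothetical in-scope host
if any(banner.startswith(bad) for bad in FLAGGED_BANNERS):
    print(f"POTENTIAL vulnerability: {banner} has published advisories")
```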

External vulnerability scans are required quarterly, or whenever significant changes are made to the network or applications, and must be performed by an ASV against any PCI in-scope systems. Operative word: ‘in-scope’. We have seen many instances where an organization has no Internet presence whatsoever and yet is conducting external vulnerability scans. While not a bad practice, there is no PCI compliance reason to conduct external vulnerability scanning if the organization does not process, store or transmit cardholder data via the Internet. Internal vulnerability scans are also required quarterly or whenever significant changes are made to a network or applications. However, internal vulnerability scanning can be done by anyone who is deemed qualified. Results from vulnerability scanning are to be addressed as soon as possible. This used to be 30 days, but that proved a problem because a lot of organizations use off-the-shelf solutions that require the vendor to issue fixes, and that typically does not happen in 30 days or less.

So, what then is penetration testing? Penetration testing takes the results of a vulnerability scan, and the penetration tester, using one or more tools, attempts to use the identified vulnerabilities to actually compromise the devices that have them. Penetration testing requires tools, sometimes a lot of tools, but it also requires an extremely experienced person to conduct the testing. And yes, penetration testing does have a higher-than-average chance of causing outages. However, the goal of vulnerability scanning and penetration testing should never be to deliberately put an organization’s online assets out of business.
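
By contrast, here is a hedged sketch of the pen-testing half of that equation: taking a single scanner finding, in this case a report that anonymous FTP may be enabled, and actually attempting the login to confirm the exposure is real.  The target host is a placeholder, and it should go without saying that you only ever run something like this against systems you are authorized to test.

```python
from ftplib import FTP, error_perm

# Pen-test flavor of a scanner finding: the scanner only reports that
# port 21 is open and anonymous FTP *may* be enabled; the tester
# actually attempts the login to confirm the exposure is real.
# The target host is a hypothetical placeholder.
# Only run this against systems you are authorized to test.

def confirm_anonymous_ftp(host, timeout=5):
    """Return True only if an anonymous FTP login actually succeeds."""
    try:
        with FTP(host, timeout=timeout) as ftp:
            ftp.login()  # ftplib defaults to the 'anonymous' user
            return True
    except (error_perm, OSError):
        return False

if confirm_anonymous_ftp("ftp.example.com"):  # hypothetical target
    print("CONFIRMED: anonymous FTP login accepted")
```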

Penetration testing tools include such software as Metasploit, Core Impact, SAINTexploit and Canvas. These tools are much more sophisticated than vulnerability scanners and require a significant amount of experience to use effectively. Most demand a good amount of knowledge about the exploits being used and the environment they target. Some can directly import vulnerability scan results to simplify their use. However, they still require a lot of experience to ensure they do not create more problems than they solve. The reason? These tools are designed to compromise systems. Metasploit is open source and is used by penetration testers and hackers alike. For the most part, its exploits are the real McCoy, written by hackers and penetration testers. As a result, if you do not know what you are doing, you could leave behind software that keeps the system compromised. Commercial tools typically run ‘sanitized’ exploits that do not fully compromise the system, but they, too, can leave behind software that leaves a system at high risk of compromise. It takes experience with the exploits, the operating systems and other relevant knowledge to clean up after these tools and ensure that a system is not left at higher-than-expected risk of compromise.

For PCI compliance, external and internal penetration testing is required at least annually or whenever significant changes are made to a network or applications. Penetration testing can be performed by any qualified individual. As with vulnerability scanning, the results of penetration testing need to be addressed as soon as possible.

Hopefully, I have clarified what these two methodologies are and are not, and improved your understanding of the results each provides.



