Archive for the 'Requirement 11 – Regularly test security systems and processes' Category



03
Apr
15

PCI Issues Penetration Testing Information Supplement

On Thursday, March 26, the PCI SSC issued their latest information supplement. This one is focused on the topic of penetration testing. Not surprisingly, I have a number of comments based on what I have read. There are some good things that have come out of this effort. But in my opinion, the committee missed the boat in a few areas, and those misses will lead to even more confusion among penetration testers, QSAs and clients.

Scoping

Talk about making a mess of a mess; the writers accomplished that amazingly well in their discussion of scoping. This is probably the area that creates the most confusion and will likely create the most problems.

The first bullet on page 5 is a prime example of the confusion. It took me a number of readings to even get what I think they are trying to communicate.

“The scope of the internal penetration test is the internal perimeter of the CDE from the perspective of any out-of-scope LAN segment that has access to a unique type of attack on the CDE perimeter. Critical systems or those systems that may impact the security of the CDE should also be included in the scope. Testing must include both application-layer and network-layer assessments.”

Out-of-scope LAN segments are out of scope because there are no rules that allow them access to the CDE. I agree that you need to test that these LAN segments are truly out of scope, but if there is a unique type of attack available from such a LAN segment into the CDE, that would seem to imply that the LAN segment was not out of scope in the first place. This is confusing and likely to push a lot of penetration testers into unnecessary testing and fishing for bizarre issues on out-of-scope network segments.

The next sentence paired with the first seems to just add to the confusion because it implies again that the out of scope LANs are not out of scope. These should have been two separate bullet points in my opinion.

In the end, what I think the writers are saying is that even if deemed out of scope, all LANs and VLANs need to be confirmed to be out of scope. However, the way they have said it, in my opinion, leads to a lot of confusion.
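For what it is worth, confirming that a supposedly out-of-scope segment really is out of scope does not require anything exotic. Here is a minimal sketch of the idea in Python, using purely hypothetical CDE addresses and ports; a real segmentation test obviously covers far more ports and protocols, but the principle is the same: if anything in the CDE answers, the segment is not out of scope.

```python
#!/usr/bin/env python3
"""Sketch: run from a host on a supposedly out-of-scope segment and confirm
that nothing in the CDE answers. The addresses and ports below are
hypothetical examples, not values taken from the supplement."""
import socket

CDE_HOSTS = ["10.10.1.10", "10.10.1.11"]   # hypothetical CDE addresses
PORTS = [22, 443, 1433, 3389]              # common administrative/database ports

def can_connect(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    leaks = [(h, p) for h in CDE_HOSTS for p in PORTS if can_connect(h, p)]
    if leaks:
        print("Segment is NOT out of scope; reachable CDE services:", leaks)
    else:
        print("No CDE services reachable from this segment.")
```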

If the last one did not cause your head to spin around, this one probably will. It is the third bullet at the top of page 5.

“To be considered out of scope for PCI DSS, a system component must be isolated (segmented) from the CDE, such that even if the out-of-scope system component was compromised it could not impact the security of the CDE. Therefore, the penetration test may include systems not directly related to the processing, transmission or storage of cardholder data to ensure these assets, if compromised, could not impact the security of the CDE.”

Correct me if I’m wrong, but this seems to imply that everything is in scope for penetration testing. I know the writers of this supplement have used the word ‘MAY’, but we also know how ‘MAY’ tends to be interpreted by the Council (i.e., MUST) and how it will be interpreted by a lot of our clients (i.e., do not do it). Until a few penetration testing reports are reviewed by the Council, the jury will be out on the correct approach. However, coupled with the theme created by these bullets, I am guessing that the committee could not agree on how penetration test scoping should be expressed.

But if those bullets were not confusing enough on scoping, the writers add insult to injury. After the bulleted list at the top of page 5, the first paragraph states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

An internal pen test does not require testing from within the cardholder data environment (CDE)? This sounds like the “Three Hop Rule” all over again. But then the second sentence twists things further by somehow trying to justify the first.

All this diatribe does is further obfuscate scoping. What simply should have been said is that everything in scope for PCI compliance is to be penetration tested. Instead, we got a lot of words that, in my very humble opinion, do nothing to clarify scoping.

PA-DSS

On page 6, under 2.3.2 PA-DSS Compliant Applications, we get the following guidance.

“If a payment application has been PA-DSS validated, the application’s functionality does not need to be tested as part of the entity’s PCI DSS compliance validation. However, the implementation of the application does need to be tested. This includes both the operating system and any exposed services, but not the payment application’s functionality (e.g., authentication, key management, transaction processing, etc.) since this was validated as part of the PA-DSS application validation.”

Under the PCI DSS, even PA-DSS validated applications still must be tested to prove that their implementations were completed per the vendor’s PA-DSS implementation guide. Yet for penetration testing, a PA-DSS validated application’s functionality is exempt? In practice, clients will read that exemption as covering the application, its operating system and its exposed services.

I hope the acquiring banks are geared up for the fights this is going to cause between the penetration testers and their clients. For example, in the restaurant and hospitality industries, I can see the situation where penetration testing only occurs on firewalls, routers and switches because everything else is PA-DSS validated.

I do not believe this is what the Council wanted, but that is what this supplement is stating. If we are truly talking about security of our networks and systems, this is not how you achieve it.

Social Engineering

I was pleasantly surprised that this topic was included in the supplement. However, as I read through the section, I became concerned about how it was handled.

For section 2.5 Social Engineering on page 7, we encounter the following.

“Social engineering is the attempt to gain information, access, or introduce unauthorized software into the environment through the manipulation of end users. PCI DSS v3.0 reconfirms testing by requiring industry accepted penetration testing approaches (many of which include social engineering as part of their approach) and to have an approach to penetration testing that “considers the threats and vulnerabilities experienced by merchants in the last 12 months.” This may include social-engineering attacks as a method used for introducing malware into the environment.”

This makes social engineering testing legitimate if the client has been breached by such methods in the past year. But does that include physical as well as logical social engineering? Since it is not explicitly called out, I would assume so, but it should have been defined. Regardless, it is a great thing to see addressed in this document.

On page 8, the last paragraph of the discussion on social engineering states:

“Social-engineering testing may not be appropriate or provide a meaningful result for all organizations. Although social-engineering testing is not a requirement of PCI DSS, an organization may consider documenting the reason(s) for foregoing social-engineering testing and include applicable documentation with the internal and external penetration test reports, particularly if social-engineering attacks were encountered in the last 12 months.”

What started with such promise ends on a very sour note. This last paragraph gives organizations an out. So I guess we can look forward to a lot of organizations issuing inventive explanations to avoid social engineering testing.

Some Miscellaneous Issues

On page 2, the seventh bullet down in the Terminology section, we have the following.

“National Vulnerability Database (NVD): The U.S. government repository of standards based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance (e.g., FISMA).”

I know they just grabbed the description from the National Vulnerability Database (NVD) Web site, but that does not mean that it is appropriate for use when discussing penetration testing. The Common Vulnerabilities and Exposures (CVE) database maintained by Mitre Corporation would have been the better reference since the NVD uses the CVE as the catalog of vulnerabilities.

I have no idea how the Federal Information Security Management Act (FISMA) ties into the NVD. If they had truly pulled the content exactly from the NVD Web site, they would have grabbed the fact that it ties to the Security Content Automation Protocol (SCAP), not FISMA.

My last comment on this topic is regarding the claim that the NVD (or the CVE) enables automation of vulnerability management. Had they brought in the reference to SCAP, I might have had an easier time going along with things. The NVD and CVE do not contain the actual exploits; they are merely catalogs of vulnerabilities. As a result, I question the value of either the NVD or the CVE in automating vulnerability management. Yes, the CVE catalogs all of the vulnerabilities, but that is the extent of its value. In my opinion, crediting either of these with automating the vulnerability management process is a stretch. They are a foundation that allows for automation, but they do not, in and of themselves, actually enable automation.
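To illustrate that point, here is a hedged sketch that simply looks up one CVE and reports its CVSS score. It assumes the public NVD CVE API 2.0 endpoint and the requests library; the JSON field names are my best understanding of that API and may need adjusting.

```python
#!/usr/bin/env python3
"""Sketch: look up a single CVE in the NVD and print its CVSS base score.
Assumes the public NVD CVE API 2.0 endpoint and the requests library; the
JSON field names are my best understanding and may need adjusting."""
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cvss_base_score(cve_id):
    """Return the first CVSS base score found for cve_id, or None."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return None
    metrics = vulns[0].get("cve", {}).get("metrics", {})
    for key in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
        if metrics.get(key):
            return metrics[key][0]["cvssData"]["baseScore"]
    return None

if __name__ == "__main__":
    # Shellshock, mentioned later in this archive, makes a convenient example.
    print("CVE-2014-6271 base score:", cvss_base_score("CVE-2014-6271"))
```

All this gets you is the catalog entry; finding, prioritizing and fixing the vulnerability on your own systems is still entirely on you, which is exactly the point.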

Under 2.3.3 Web Applications on page 6, we have the following.

“It is common for an environment to host a web application that was not specifically coded for the organization such as commercial, off-the-shelf web-mail interfaces, document-sharing tools, file-transfer services, network-device administrative interfaces, etc. In these instances, the web application does not typically need an application-layer penetration test as the entity is not responsible for the source code of this type of software. Instead, the tester should perform a network-layer test and ensure the software was implemented, configured, and is currently being maintained in a secure manner (disabling or uninstalling unused services, blocking unused ports, applying current updates, etc.).”

I can see it now: organizations arguing over the penetration testing of IBM WebSphere, Oracle eCommerce or similar Web application frameworks because they are a “package”. These are frameworks on top of which you must still write your own application, but penetration testers will end up arguing that point with their clients.

A Few Good Things

Yes, I have complaints, but there are some good things that are contained in this supplement.

Probably the best thing about this information supplement is the checklist at the end of the document for evaluating penetration testing reports. The Council should have a similar item for vulnerability scanning reports as well.

The discussion of the penetration testing methodology in section 4 of the document is a good read for both penetration testers and the people that contract for penetration testing services. But keep in mind it is a framework and not the entire rule/procedure set for conducting a proper penetration test. Yes, there are some good details in areas, but it is not the detailed roadmap that some people will portray it to be.

Section 3 has a good discussion on how to evaluate the qualifications of a penetration tester. It was nice to see that while they document the common certifications a penetration tester may hold, they caution readers that certifications alone do not make a penetration tester qualified.

The case studies in section 6 probably do more to clarify than the previous five sections. They are good examples of properly conducted penetration tests.

Finally, there is a good discussion on the definition of a “significant change”. However, I think this discussion, along with one about the words “periodic” and “periodically”, belongs in a discussion of risk management and assessment rather than here. I wrote a post on this topic a while back to provide some guidance to people, and it was nice to see that my thoughts jibe with what they wrote in the supplement.

What Was Missed

The biggest item that I think was missed is how to handle the penetration testing of huge environments. Case study 6.3 discusses the concept of sampling somewhat, but it deals with a retailer with only six locations, not thousands of locations. There are also veiled references to sampling in section 2.3.4 in the discussion of a separate testing environment. But there is no deliberate and complete discussion of this very important topic.

Retailers such as Wal-Mart, Target and Home Depot have more than a thousand locations with 15 or more checkout lanes each, meaning there are, at a minimum, 15,000 devices out in the field that technically need to be penetration tested. How does an organization penetration test such a large number of devices annually? The answer is that they do not test them all. It is physically impossible. However, this information supplement does not address that situation or provide guidance as to what the Council deems acceptable.

While I use some of the largest retailers in the world as my example, it is not just them as there are mid-sized retailers as well as hospitality organizations that face this same issue. I had hoped that how to approach these situations would have been explicitly addressed in greater detail but it was not.

The writers also missed a huge topic regarding social engineering testing. Unlike vulnerability scanning and penetration testing, in my experience it is virtually impossible for any organization to “pass” social engineering testing. So what would be considered “passing” in a social engineering test? That is not discussed but should have been. Since this document broaches the subject, it is probably a good topic to propose for this year’s SIG recommendations so that social engineering gets covered in its own information supplement rather than as a sidebar to penetration testing.

It is not that this information supplement is an entirely bad document. But the advice it provides needs to be taken with a bit of caution. As usual, the shortcomings will be leveraged by all of the “check boxers” to do as little as possible so that you get that box checked for 11.3.

01
Jan
15

The Three Hop Rule

At the 2014 Community Meeting, the PCI SSC responded to a question about network segmentation with what has come to be termed the “Three Hop Rule”.  The statement was made that if a device/system was “three hops or more” away from the cardholder data environment (CDE), then it was out of scope.  A lot of us in the room were taken aback by this statement.  And based on some questions of late regarding this subject, there is a lot of confusion out there regarding what the Council was trying to say.

First, the term “hop” is not a network security term nor does it even have any security implications.  The term “hop” is defined as:

“Data packets pass through routers and gateways on the way.  Each time packets are passed to the next device, a hop occurs.”

The count of three therefore is the number of hops or “hop count” between devices.  Hop count is defined as:

“Each router along the data path constitutes a hop, as the data is moved from one Layer 3 network to another.  Hop count is therefore a basic measurement of distance in a network.”
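To see just how mechanical a “hop” is, here is a minimal sketch of a do-it-yourself traceroute using Scapy, with a purely hypothetical target address. It requires root privileges and assumes the target answers ICMP; the point is only that a hop count is a routing measurement, nothing more.

```python
#!/usr/bin/env python3
"""Sketch: count Layer 3 hops to a target by incrementing the IP TTL, the
same way traceroute does. The target address is hypothetical and the script
assumes ICMP is answered; requires root/administrator privileges."""
from scapy.all import IP, ICMP, sr1

TARGET = "10.10.1.10"   # hypothetical address, not taken from the post

def hop_count(target, max_hops=30):
    """Return the number of routed hops to target, or None if never reached."""
    for ttl in range(1, max_hops + 1):
        reply = sr1(IP(dst=target, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is None:
            continue            # this hop did not answer; keep probing
        if reply.type == 0:     # ICMP echo-reply: the target itself answered
            return ttl
        # any other reply (e.g., time-exceeded) is an intermediate router
    return None

if __name__ == "__main__":
    print("Hops to", TARGET, ":", hop_count(TARGET))
```

Whether that number comes back as one or ten says nothing about whether traffic is actually permitted; that is the job of the ACLs discussed below.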

Nowhere in these definitions is there any statement that hops, the number of hops between devices or the hop count correlate to any form of security.  That is why a lot of us were so concerned about the statement, and likely why there is so much confusion and discussion resulting from the comment.

What we believe the Council was getting at was the number of network segments there are between a device/system and the CDE.  However, having three network layers between the CDE and devices/systems is also no guarantee of security.

What provides security at Layer 3 are the access control lists (ACL) or rules that allow or deny packets to traverse particular paths of the network.  ACLs can be implemented to control what devices and/or ports and services can communicate between various networks.  But the mere presence of ACLs at each hop is no guarantee that the number of hops between devices secures those devices.

This is why requirement 1 of the PCI DSS requires that the QSA review all relevant ACLs to ensure that the network is truly segmented.  It is also why, in v3, requirement 11.3 requires that penetration testing prove that the network is truly segmented.  As a result, the number of hops between the CDE and a device should not be considered a guarantee, and never will be a guarantee, that a device is out of scope.

The bottom line is that, in order to be truly out of scope, there needs to be ZERO hops between a device and the CDE.

21
Dec
14

Forensic Examinations And Facts

I am watching the news reports on the Sony breach and laughing at all of the “facts” that are being bandied about.  I want to use the Sony breach as a teachable moment and explain that the “facts” may not be as factual as represented by the media, forensic examiners or even the FBI.  I have done a number of forensic investigations and from my own experience there is a lot of effort required to prove conclusively that a particular device or actor is the actual attacker.

So let us take a look at the “evidence” we have at this point and see if the conclusions drawn should be treated as facts.

My first issue is how quickly the FBI and Mandiant have come out with the “fact” that North Korea is behind the attack.  According to the timelines I have seen, it was on November 21, 2014 that Sony was told by the attackers, GOP, that Sony had been hacked.  So in around three weeks’ time the FBI and Mandiant have figured out, definitively, that it was North Korea behind the attack.  Granted, Mandiant and the Bureau could have been investigating this long before, but given the way the news reports were written, I have to believe that Sony had no idea anything was wrong until November 21.

Why do I find this timeline spurious?  It took Mandiant over three years to trace things back to the Chinese for their APT1 report last year, and we are to believe that the FBI has the skill and manpower to trace a “sophisticated attack” (Kevin Mandia’s words to Sony) back to North Korea in a matter of weeks?  I find that hard to believe.  Not because the Bureau and Mandiant are not skilled, but because it is just impossible to cram a year’s worth of investigation into a few weeks, regardless of the manpower tossed at the investigation.

In my own experience, I typically had ideas as to what and how things happened within a few weeks, but then the difficult work of determining exactly how things went down began.  It can take months or even years to figure out an attack, if it is ever figured out.  It is why NTSB investigations of airplane crashes take at least a year before a report is issued.  Any attack may not be as simple or uncomplicated as you initially think.  With that in mind, let us walk through the FBI’s stated evidence.

“Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed. For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.”

We do know for a fact that hackers reuse other attackers’ code.  Why reinvent the wheel if you do not need to?  Hence the variants of attack code built not only to evade anti-virus but also to enhance or improve techniques and methods.  Just because there are similarities in some lines of code, algorithms, methods, etc., does not mean that the North Koreans were the actual actors.  It just means that the attackers used code attributed to North Korea.  Key word, “attributed”.  To me, a far better piece of evidence would have been if the code’s comments and strings had been written in Korean or a North Korean dialect.

“The FBI also observed significant overlap between the infrastructure used in this attack and other malicious cyber activity the U.S. government has previously linked directly to North Korea. For example, the FBI discovered that several Internet protocol (IP) addresses associated with known North Korean infrastructure communicated with IP addresses that were hardcoded into the data deletion malware used in this attack.”

Hard coded IP addresses are evidence?  So does that mean that everyone is guilty if I write their telephone number on a napkin and that napkin turns up as evidence?  No.  A far better piece of evidence would have been log data that actually ties those IP addresses to the data that was exfiltrated out of Sony.  Just because IP addresses are hardcoded in a piece of code does not necessarily mean that those endpoints were the attacker’s actual location.  Hackers regularly own other organizations’ and governments’ servers to obfuscate where they really are.  A hardcoded IP address just means that a device could be involved.

“Separately, the tools used in the SPE attack have similarities to a cyber attack in March of last year against South Korean banks and media outlets, which was carried out by North Korea.”

The attack on certain South Korean banks and TV stations in 2013 was never definitively pinned on North Korea; it was just suspected.  The prime piece of evidence was a Chinese IP address that was assumed to implicate North Korea.  So using the South Korean attack as though it were definitively proved to be the work of North Korea is not stating a fact.

While I had some issues with the Mandiant report on China and their investigation methods, the information being offered as “facts” that North Korea is behind the Sony breach is positively appalling.  People want an answer immediately and so one is given, regardless of accuracy or even believability.  However, this is a technology issue, and so it is easy to feed the public supposed “facts” since only the true technology people in the world will know the difference.

Unfortunately a breach such as the one at Sony will take time, probably a lot of time.  I would not be surprised if we end up with a lot of “suspicions” and “assumptions” when a final analysis is done and released, if we ever get a definitive answer.  The reason I believe that is that I do not think Sony had the kind of security implemented and working that would leave behind solid forensic evidence, given the amount of information that was supposedly gathered by the attackers.  The other clue is that Sony apparently did not know it had been breached until the attackers announced it on November 21.

The key take away here is that forensic examinations very rarely prove WHO the bad actor was that caused the breach.  This is particularly true when the attacker is outside the organization.  There are just too many ways that an attacker can obfuscate their actual identity/location.

What forensic examinations do provide is a road map of improvements and enhancements in an organization’s security measures and procedures to minimize future attacks.  Note that I did not say “prevent” future attacks.  I use minimize because security is never an absolute.  Anyone with an extreme desire to attack an organization will do so regardless of how well your security program is constructed and executed.

Bruce Schneier points out this very fact about determined attackers in his post on the Sony breach.  I have always referred to this as the ‘98-2 Rule’.  Properly implemented and managed information security keeps 98% of attackers out.  However, it is the remaining 2% that are determined enough to figure out how to work around even the best security.  All any organization can do about that remaining 2% is to put controls in place so that, when the 2% get through, they are detected as soon as possible and their impact minimized.  This is why security frameworks are so important: they provide organizations with guidance as to what it takes to only have the 2% to worry about.

Given the limited evidence provided thus far, could it be that this is all a sophisticated marketing ruse that went sideways?  Would it not be apropos if Seth Rogen and his production company staged the attack as a promotional stunt, and the attackers they hired found out that Sony was ripe for such an attack and then went further than they were supposed to?

Something to think about.

06
Nov
14

The ASV Process Is Broken – Part 3

So what are my ideas on fixing the ASV process?

Modify The ASV Program

The conditions that drove the original ASV process made sense.  Vulnerability scanning tools were predominantly open source, and just about anyone was doing vulnerability scanning.  The results produced by the open source tools could be highly questionable at best, and the reporting was haphazard and at times about as trustworthy as a three dollar bill.  Even in large organizations, the people doing the vulnerability scanning did not necessarily have networking, security or even IT backgrounds.  Then there was the tremendously high false positive rate out of the open source tools.  As a result, most organizations ignored the results they received because they found that they could not be trusted.

The purpose of the ASV program was to bring some sanity and professionalism to the vulnerability scanning process.  MasterCard invented the ASV program (it was not called ASV then) back in 2005.  A test network was built and prospective ASVs were required to run their vulnerability scanners against this network and produce results which were then reviewed by MasterCard.  It was as much a test of the vulnerability scanning tool as it was of the person running the tool.  When the program transitioned to the PCI SSC, the Council added a multiple choice test to the process, but the virtual network testing and report review is still part of the process.

The trouble with this process is that the vulnerability scanning tool is no longer the problem.  Every ASV uses a commercial vulnerability scanning tool from Tenable, Qualys, Saint, Tripwire or a similar commercial tool vendor these days because they cannot afford to do otherwise.  Since these tool vendors are also ASVs, requiring a vulnerability scan for ASV certification has become a truly pointless exercise.  Other than the possibility of not properly entering the IP addresses to be scanned or running the wrong scanning policy, there really is very little that someone can screw up with a scanning tool.

The skill in vulnerability scanning today is in reviewing the results, dealing with false positive results, working to address results with compensating controls and, with the Council’s new edict on combining reports, working to get passing quarterly scans.

Therefore, in my opinion, training and testing of ASVs should be focused on the following.

  • Determining the scope of vulnerability scanning.
  • Vulnerability scanning methodology.
  • Interpreting vulnerability scanning reports to confirm knowledge of the process and the meaning of the results.
  • What constitutes a false positive result and how to document a false positive result.
  • Development and documentation of an appropriate compensating control for a vulnerability.
  • Process for how to produce an acceptable passing scanning report from multiple reports (a rough sketch of which follows this list).
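As a very rough sketch of what that last bullet involves in practice, consider combining several scan exports from a quarter, keeping only the most recent result for each host and vulnerability, and checking whether anything at CVSS 4.0 or above remains open. The CSV columns and directory below are hypothetical; real ASV exports will differ.

```python
#!/usr/bin/env python3
"""Sketch: combine several vulnerability scan exports from one quarter, keep
only the most recent result for each host/vulnerability pair, and report
whether anything scoring CVSS 4.0 or above is still open. The CSV columns
(scan_date, host, vuln_id, cvss, status) and the directory are assumptions."""
import csv
from pathlib import Path

def load_latest_findings(paths):
    """Return the most recent row for each (host, vuln_id) across all exports."""
    latest = {}
    for path in paths:
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                key = (row["host"], row["vuln_id"])
                # ISO-formatted scan_date strings sort chronologically
                if key not in latest or row["scan_date"] > latest[key]["scan_date"]:
                    latest[key] = row
    return latest

def failing_findings(latest, threshold=4.0):
    """Return findings still open with a CVSS score at or above the threshold."""
    return [row for row in latest.values()
            if row["status"].lower() == "open" and float(row["cvss"]) >= threshold]

if __name__ == "__main__":
    exports = sorted(Path("scans").glob("*.csv"))   # hypothetical export directory
    open_fails = failing_findings(load_latest_findings(exports))
    print("Passing" if not open_fails
          else f"Failing: {len(open_fails)} findings at CVSS 4.0 or above")
```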

And let us not limit ASV certification to just independent consulting firms.  As with the internal security assessor (ISA) program, open the ASV program to internal personnel as well.  Most large companies have independent vulnerability scanning teams that are as capable as, or more capable than, their ASV brethren.  There is no longer any reason that these internal people cannot do the ASV scans, particularly if they meet the same standards and qualifications.

Approved Vulnerability Scanning Tools

I am not suggesting that the Council needs to develop a certification process for these tools as there are already plenty of sources that assess such tools.

The Council would publish a list based on the criteria developed by one or more independent tool assessment sources.  This list would define those tools acceptable to use for ASV vulnerability scanning.  The PCI DSS should then require that the QSA confirm that the vulnerability scanner used by the ASV is on the list in addition to confirming scope and the scanning policy used.

Require A Vulnerability Scanning Methodology

With the PCI DSS v3, the Council now requires penetration testers to use a documented and industry accepted penetration testing methodology.  Yet there is no such requirement for vulnerability scanning.

Most vulnerability scanning is done using what I call the “toss it against the wall and see what sticks” approach.  Basically, every possible vulnerability is run against every device.  Most commercial vulnerability scanners interpret banners, signatures and other markers to trim the list of vulnerabilities to be tested based on what they believe the target to be.  However, when you are scanning an external network blind, scanners cannot always properly interpret what device an IP address resolves to because of the mix of responses that they receive.  As a result, scanners do not necessarily trim tests, which increases false positive results, or they trim too much and the testing is not complete.

Then there is the automated nature of today’s vulnerability scanning.  While I understand the desire to reduce the costs of vulnerability scanning, the “point and click” nature of today’s ASV scanning has made it flawed.  And it gets worse once organizations obtain passing scans.  As a QSA, I cannot tell you how many passing scans I have reviewed where an organization could be hacked six ways to Sunday with the remaining vulnerabilities.  As a security professional, it scares me to death.  But as a QSA, while I can bring these up, they get no play because they do not have a CVSS of 4.0 or greater.  You hope that these vulnerabilities get picked up in the organization’s penetration test.

But there is no guarantee of that happening because the penetration tester’s vulnerability scanner may or may not pick up the same vulnerabilities.  As a result, part of the penetration testing methodology should include a review of all vulnerabilities found since the last penetration test and those should be tested for in the current penetration test to ensure they have been addressed.

Obviously, I have a preference for the methodology I discussed back in Part 2.  But there are a number of methodologies posted out on the Internet from a variety of good sources.  All I ask is that the vulnerability scanning methodology be integrated with the penetration testing methodology so that there are no gaps in coverage.

Require Monthly External Vulnerability Scanning

Before everyone panics, I am not asking that ASV scans be run monthly.  Although, if the ASV program is modified as I suggest, that becomes a possibility for organizations with internal ASVs.  I would still require the quarterly ASV scan, but I would add monthly scans run by anyone deemed qualified, as is allowed for internal vulnerability scans.

My primary rationale for this recommendation is driven by a simple fact.  When the dominant solution vendor releases patches on the second Tuesday of every month and the vast majority of those fixes have a CVSS score of 4.0 or greater, anyone that thinks quarterly scanning keeps them secure is seriously kidding themselves.  Not that many security professionals ever bought into the quarterly vulnerability scanning requirement, even as a bare minimum.  But without the standard requiring more, a QSA has no leg to stand on other than to intimidate and shame people into doing monthly scanning.

Even if you are not Microsoft centric in your external environment, with the breaches that have occurred and the revelations of Shellshock and Poodle, it is painfully obvious that the quarterly requirement is not going to keep organizations secure.  I got a lot of calls after both of these vulnerabilities were announced with clients asking if their passing scans were no longer valid.  I was a bit schizophrenic in my thoughts.  On the one hand, I was glad they were at least thinking about the security implications of these vulnerabilities.  But their concern about their passing scans just highlights the importance placed on meeting a PCI requirement and passing the PCI assessment versus being secure.  Because, while I only got a few calls, you know that there are too many people congratulating themselves on dodging the bullets of Shellshock and Poodle because of the fortuitous timing of their quarterly scans, and that they got an additional 30, 60 or even 90 days to address them.

Then there are those organizations that run solutions such as IBM’s WebSphere or Oracle’s eCommerce suites.  Both of these vendors not only patch their own application frameworks, but they also release patches for the underlying operating systems that are compatible with their application frameworks.  But worse, these vendors do not release patches monthly; they do patch releases on quarterly, semi-annual or even annual bases.  As a result, there is a high likelihood that some operating system patches could be left out of these releases due to compatibility or timing issues.  The workaround is to mitigate any remaining vulnerabilities through additional logging, additional monitoring, changes in firewall rules, changes in IDS/IPS rules, etc.  The additional vulnerability scanning could help organizations identify these issues and address them more quickly than quarterly scanning would.

A side benefit of monthly scanning will be improving the ability of organizations and their QSAs to determine whether an organization’s patching and mitigation processes are working according to requirement 6.1.  Quarterly scans typically document a lot of vulnerabilities, mostly those under a CVSS of 4.0.  As a result, determining whether or not an organization is properly managing its environment can be very difficult and time consuming, leading to missed items that should be addressed.  Having reports more often can facilitate getting these issues addressed sooner rather than later and keep the volume lower and less daunting.

The bottom line in all of this is that monthly scanning is required to even have a chance at being secure these days.  Yet the vast majority of organizations are only doing quarterly scans and thinking they are secure.  That practice must change.

So there we have it.  My thoughts on the ASV process and how I would go about fixing it.

01
Nov
14

The ASV Process Is Broken – Part 2

The next reason I believe the process is broken lies with the automated scanning processes.  They do not seem to accurately assess the security of Web servers, firewalls, routers and other externally facing devices that form an organization’s perimeter.  In my opinion, in our drive to bring down the cost, we have created a false sense of security with the concept of a “passing” scan and by only requiring scanning on a quarterly basis.

It is hard to believe that external vulnerability scanning came into vogue more than a decade ago.  There were no automated solutions.  No popping out to a Web site, entering IP addresses and scheduling a scan.  It was all manual.  A person was required to configure the scanning solution and then launch the scanner.

There was a proven methodology used for approaching vulnerability scanning.  That methodology involved approaching the organization as an attacker would.  The security professional would go out to domain registrars and other official sources to obtain potential IP address ranges, registered domains and other public information.  Then the professional would develop a profile of the organization’s external security posture using Nmap and similar tools, as well as running non-intrusive scans to identify systems and potential entry points.  But even more insidious, the security professional would do what came to be known as “Google Hacking” to find out what, if any, information was available on the Internet that might facilitate the attack.  Log data, firewall configurations and even authentication credentials could be obtained through Google Hacking.  Even today, while Google does its best to minimize what such searches expose, it is amazing the amount of information that can still be obtained with specially crafted Google searches.
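For readers who never saw that era, the profiling step was not magic; it was mostly patient, non-intrusive use of tools like Nmap.  Here is a minimal sketch of that kind of discovery pass, assuming the nmap binary is installed and using a hypothetical, documentation-only address range (only scan networks you own or are engaged to test).

```python
#!/usr/bin/env python3
"""Sketch: the kind of non-intrusive discovery pass described above.
Assumes the nmap binary is installed; the address range is the hypothetical
documentation range, not a real target. SYN scans require root privileges."""
import subprocess

TARGET_RANGE = "192.0.2.0/24"   # hypothetical documentation range

def discovery_scan(target):
    """Run a SYN scan of common ports with version detection and return the output."""
    cmd = [
        "nmap",
        "-sS",                   # SYN scan (requires root)
        "-Pn",                   # skip ping; perimeter devices often drop ICMP
        "-sV",                   # banner/version detection to help profile devices
        "--top-ports", "100",    # keep the pass light
        "-oG", "-",              # greppable output to stdout
        target,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(discovery_scan(TARGET_RANGE))
```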

Once the preliminaries were done, it was time to actually scan.  The most accepted approach used at that time was to run two scans, one fully external with security measures enabled and one with testing done inside the DMZ.  The scan done with the security measures enabled was to confirm that an organization’s firewalls, load balancers or other front ends were functioning as configured.  Essentially this scan showed what an attacker would discover in their scanning of the organization’s external presence.  There were two purposes of the second scan.  The first was to provide an inventory of vulnerabilities that were being obfuscated by the security measures so that they could be addressed or mitigated.

The second purpose of the DMZ scan was for what some would today call a form of “threat intelligence”.  The second scan provided a view of what an attacker would encounter if they were able to circumvent the organization’s external security.  It answered the question of how large the attack surface in the DMZ would be should the perimeter be compromised.  What else could be subverted and used to further an attacker’s goal of getting even deeper into the network?  The point of this part of the exercise was to provide the organization with an idea of what to look for should anomalies begin to appear in log data.

Throughout this scanning process, separate scans were run for firewalls, routers, load balancers, Windows devices, Linux devices, etc.  The rationale for that was to obtain clear results based on the type of device.  No reason to toss a bunch of Windows vulnerabilities against a Cisco firewall or F5 load balancer and then have to sift through the results to remove the false positive entries generated by the Windows devices they protect.

All of this information would be collated, analyzed and an actionable report produced for management and security professionals to absorb and understand the organization’s true external security posture.  Once the draft report was delivered there would be one or more calls or meetings with the organization’s security personnel to tweak the message of the results, develop action plans to address the results and then a final meeting with management and security personnel to deliver the final report.  At the end of this process, management and security personnel knew exactly where they stood from a security perspective and what needed to be done to ensure that they remained secure going forward.

Obviously, such an approach is fairly comprehensive and that costs money to produce.  And it was the cost that drove the automation of the process so that it was less labor intensive.

In the mad dash to the lowest possible cost, the following is what typically happens today.

Organizations subscribe to an ASV vulnerability scanning service.  If the organization is lucky, they go through a scoping call with a human being at the ASV service to confirm the IP addresses that are in scope for PCI compliance.  Having been on a number of these calls with ASVs, it is more of a questionnaire approach, and there is very little done by the ASV to actually confirm the scope of the scanning.  It is more or less a conversation that explains how to enter the IP addresses into the scanning tool’s Web interface and how to schedule a scan, not a diligent conversation on whether or not a device is in scope.  There is typically a brief discussion on how to use the Web site to dispute scanning results that caused a non-passing scan.  In total, the call takes 20 to 30 minutes, at which point customers are essentially on their own.

Scans are run quarterly.  Scans may be rerun if vulnerabilities are identified that result in a non-passing scan.  Once a passing scan is obtained, a passing certificate is generated and the organization moves on to the next quarter.  Every now and then a result may be disputed, but those situations are rare.  The bottom line is that the ASV process is fairly automated with very little, if any, human intervention.

From a tool perspective, the vulnerability scanner is tossing every vulnerability at every IP address it is given.  Firewalls, load balancers, switches and routers are tested for Windows and Linux vulnerabilities.  False positive results can be rampant, but most scanners weed out the obvious ones based on OS and device signatures.  But even if a few false positives remain, as long as they have a CVSS score of less than 4.0, no one cares.  And the scanner is configured to run as quickly as possible because it needs to be available for the next customer, so that the ASV does not have to have an individual scanner for every customer.

All of this is driven by customers’ desire to minimize cost as much as possible.  Thus, in order to stay in business, the ASV scanning service providers have done an admirable job of driving down cost.  But at what other costs?

While I think we can all agree that the current approach is flawed, the Council would point to the fact that the quarterly ASV scanning requirement is only a minimum and that organizations should be doing their own external vulnerability scanning more often than quarterly.  And that is what a lot of organizations miss: they need to do more than what the PCI DSS requires in order to be secure.  No matter how loudly that message is repeated, it seems to get missed over and over again.

However, a lot of organizations do not do anything more than the PCI DSS requires.  We have this “bare minimum” mentality: if it is not mandated in the standard, then doing only what is required must mean we are secure.  This results in the false sense of security that organizations have when they only do what the PCI DSS, or any other security framework for that matter, mandates.

In addition, I cannot tell you how many organizations totally write off vulnerabilities that have a CVSS of less than 4.0 as “trivial” and representing no threat.  There is this mistaken belief that, because the CVSS score does not warrant immediate patching, it is not a threat.  However, when you have them in relative abundance and of certain types, those “trivial” vulnerabilities can be put together in a particular sequence and used to compromise a network.  To the shock and horror of a lot of organizations, their security is breached in moments during their penetration test using those “trivial” vulnerabilities.

Then we have the limitations of the tools themselves.  Their vulnerability checks are constructed for a general audience, not a specific network.  As a result, depending on security measures and configurations, a particular vulnerability check for, say, HTTP or FTP might not register due to a load balancer or other devices in the communication path.  Yet with a minor tweak, the same vulnerability could actually be exploited.  And that was another point of doing that second scan inside the DMZ back in the “old” days.  Taking the two scans allowed you to correlate that, while HTTP or FTP looked clean from the outside, it was in fact vulnerable because the vulnerability showed up on the internal DMZ scan.  As a result, you could take additional steps to ensure that the vulnerability could not be exploited from the Internet.
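The correlation itself was never complicated.  Here is a minimal sketch, using hypothetical findings keyed by host, port and vulnerability identifier, one set from the external scan and one from inside the DMZ.

```python
#!/usr/bin/env python3
"""Sketch: correlate an external scan with a DMZ scan of the same hosts.
Findings are (host, port, vuln_id) tuples; all of the sample data below is
hypothetical and only illustrates the set arithmetic."""

external = {                       # what an attacker sees from the Internet
    ("www1", 443, "CVE-2014-3566"),
}
dmz = {                            # what actually exists behind the perimeter controls
    ("www1", 443, "CVE-2014-3566"),
    ("www1", 21,  "FTP-ANON-LOGIN"),
    ("www1", 443, "CVE-2014-0160"),
}

exposed = external & dmz           # visible from the Internet today
obfuscated = dmz - external        # hidden by firewalls/load balancers, but still there

print("Directly exposed:      ", sorted(exposed))
print("Obfuscated but present:", sorted(obfuscated))
```

The “obfuscated” list is exactly the inventory the old methodology produced: findings that are one rule change or one perimeter compromise away from being exploitable.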

The bottom line in all of this is that ASV scans have become a pointless piece of the compliance assessment process.  It is not that the ASVs are bad; it is that the process they are forced to follow no longer does what it did at the beginning.  It does not help that merchants and service providers treat ASV scans as a “check the box” activity and believe that because they do something quarterly they are secure.

If the process is broken, what should be done to fix it?  That is the subject of my next post.

19
Oct
14

The ASV Process Is Broken – Part 1

The topic of ASV scanning came up as usual at the 2014 PCI Community Meeting.  The questions all seemed to revolve around how to obtain a passing scan.  What the Council representatives suggested is that multiple scans can be put together to create a passing scan.  Unfortunately, what the Council keeps suggesting as the solution is impossible to implement and here is why.

In a typical environment, an ASV customer logs onto their account with the ASV and schedules their ASV scans of their PCI in-scope assets.  The customer may also add or subtract the number of IP addresses that are scanned as the scope of their external environment may change.  Depending on a number of factors, there may be one scan or multiple scans.  The vulnerability scans are executed on the schedule and the results are returned to the customer.

If there are false positive results or results with which the customer does not agree, they can go back to the ASV to have those results removed.  If there are actual vulnerabilities, the customer can contact the ASV with how they have mitigated the vulnerabilities, and the ASV can either accept those mitigations and give the customer a passing scan or allow the results to stand.

So where are the problems?

Whether the Council acted on facts that cheating was occurring or on anecdotal evidence is unknown.  But because of the potential for cheating by customers, the Council mandated a number of years ago that ASVs lock down their scanning solutions so that customers cannot modify anything regarding testing other than the IP addresses involved.  The ASV Program Guide v2.0, on page 11, states:

“However, only an authorized ASV employee is permitted to configure any settings (for example, modify or disable any vulnerability checks, assign severity levels, alter scan parameters, etc), or modify the output of the scan.  Additionally, the ASV scan solution must not provide the ability for anyone other than an authorized ASV employee to alter or edit any reports, or reinterpret any results.”

So right off the bat, the Council’s recommendation of “putting together multiple reports” is not as easily accomplished given their earlier directives.  That is because it will require the ASV’s customer to get the ASV to agree to put together multiple reports so that the customer can achieve a passing scan.  That assumes the ASV’s solution can even accommodate such a request, and then the ASV still has to be willing to do it.  Based on the Council’s concerns regarding manipulation of scanning results and the threat of the Council putting ASVs in remediation, I do not believe the ASVs will be agreeable to combining reports, as that would clearly be manipulating results to achieve a passing scan.

But it gets worse.  As a lot of people have experienced, you can scan one day and get a passing scan and then scan a day or even hours later and get a failing scan.  The reason this happens is that the vulnerability scanning vendors are adding vulnerabilities to their signature sets as soon as they can, sometimes even before vendors have a patch.  As a result, it is very easy to encounter different results from scan to scan, including failing due to a vulnerability that does not yet have a solution or for which the vendor only just provided a patch.

But if that is not enough, it gets even worse.  Statistically, the odds of getting a passing scan are extremely slim and get even worse if you are only doing quarterly scanning.  A review of the National Vulnerability Database (NVD) shows that 94% of vulnerabilities from 2002 to 2014 have a common vulnerability scoring system (CVSS) score of 4.0 or greater.  That means that it is almost impossible to obtain a passing vulnerability scan, particularly if you are only scanning quarterly, when vulnerabilities are announced almost daily and vendors such as Microsoft are coming out monthly with patches.  Those of you scanning monthly can attest that even on a 30 day schedule, a passing scan is nearly impossible to get.
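To put some very rough numbers on that, assume, purely for illustration, that a handful of new vulnerabilities applicable to your perimeter surface between scans and that each independently has the 94% chance cited above of scoring 4.0 or higher.  This is a back-of-the-envelope sketch, not a rigorous model.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope illustration only: the chance that none of n newly
applicable vulnerabilities scores CVSS 4.0 or above, using the 94% figure
cited above and (unrealistically) assuming independence."""
P_BELOW_4 = 1 - 0.94

for n in (1, 3, 5, 10):
    print(f"{n:2d} new vulnerabilities -> chance of a naturally clean scan: {P_BELOW_4 ** n:.6f}")
```

Even with only a few applicable findings per quarter, the odds of a naturally passing scan are vanishingly small, which is why remediation, rescans and compensating controls end up doing the heavy lifting.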

For an organization that has only one Web site, this situation is likely not a problem.  But when an organization has multiple Web sites, which a lot of organizations large and small do, getting passing scans across all of them can be a real struggle.

But let us add insult to injury.  A lot of organizations have their eCommerce environments running on multiple platforms such as Oracle eCommerce or IBM Websphere.  In those examples, this situation becomes a nightmare.

Platforms such as those from Oracle and IBM may run on Windows or Linux, but Oracle and IBM do not allow the customer to patch those underlying OSes as they choose.  These vendors ship, quarterly, semi-annually or on some other schedule, a full update that patches not only their eCommerce frameworks, but also the underlying OS.  The vendors test the full compatibility of their updates to ensure that the update will not break their frameworks.  In today’s 24x7x365 world, these vendors can run into serious issues if eCommerce sites stop functioning due to an update.  However, that also means there is the possibility that critical patches may be left out of an update due to compatibility and stability reasons.  As a result, it is not surprising that in some updates vulnerabilities may still be present, both new ones and ones that have been around for a while.

But if Oracle and IBM are not patching on 30 day schedules, there is a high likelihood that the scans will not be passing.  This means that the customer must go to their ASV with compensating control worksheets (CCWs) to mitigate these vulnerabilities in order to obtain passing scans.

The bottom line is that the deck is stacked against an organization obtaining a passing scan.  While the Council and the card brands do not recognize this, the rest of the world sure has come to that determination.

In Part 2, I will discuss the whole ASV approach and how I believe the drive to be the cheapest has turned the ASV process into a mess.

12
Oct
14

Lawyer Or Security Professional?

“It depends upon what the meaning of the word ‘is’ is. If ‘is’ means ‘is and never has been’ that’s one thing – if it means ‘there is none’, that was a completely true statement.” –President of The United States of America, William Clinton

It has been an interesting time as the December 31, 2014 deadline approaches and version 2 of the PCI DSS comes to its end of life.  I have started to notice that there are a lot of security professionals and others that are closet lawyers based on the discussions I have had with some of you regarding compliance with the PCI DSS.

The first thing I want to remind people of is that if you do not want to comply with one or more of the PCI DSS requirements, all you have to do is write a position paper that explains, for each requirement you find onerous, why it is not relevant or not applicable to your environment, and then get your management and acquiring bank to sign off on that paper.  But stop wasting your QSA’s or ISA’s time with your arguments.  It is not that we do not care, but without such approval from your management and acquiring bank, QSAs and ISAs cannot let you off the hook for any requirement.

With that said, the first lawyerly argument we are dealing with these days revolves around the December deadline.  We continue to get into arguments over what the deadline actually means.

It appears that the PCI SSC and card brands repeatedly saying that version 2 is done as of December 31, 2014 was not clear enough for some of you.  And further clarifications from them that any reports submitted after that date must be under version 3 are also apparently too much for some of you to handle.  I do not know how there could be any misinterpretation of ‘DEADLINE’, ‘DONE’ or ‘AFTER THAT DATE’, but apparently there are a lot of people out in the world that do not understand such words and phrases.  Then there are the amazing contortions that some people will go through in a twisted dance to the death to get around this deadline.

Where have you been?  How could you have missed this deadline?  It has been known since the PCI SSC announced the change in when standard updates would be issued, back with the release of the PCI DSS v2 more than three years ago.  But even assuming you were not involved back then, the PCI SSC announced the deadline over a year ago with the release of PCI DSS v3.  Either way, it certainly should not have been a surprise as there has been plenty of warning.

But then do not take this out on your QSA.  QSAs are just the messenger in this process and had nothing to do with setting the deadline.  The PCI SSC and the card brands set that deadline.  You have a problem with the deadline, complain to them.  But if you are willing to listen, I can save you that discussion.  They will politely tell you the deadline is the deadline.  You are out of luck.  If you do not like that answer, then stop taking credit/debit cards for payment for your organization’s goods and services.

The next lawyerly argument is around the June 30, 2015 deadlines for requirements 6.5.10, 8.5.1, 9.9, 11.3 and 12.9.  Again, it is as though these dates were kept from you, which they were not.  I even wrote a post about these requirements titled ‘Coming Attractions’ back in September 2013.

For those that are calendar challenged, June 30, 2015 is practically just around the corner in business terms.  If you had years to get ready for the PCI DSS v3, what makes you think that you can just turn something on in a year and a half?  Yet we continually see people arguing that until that date, they are not going to address any of these requirements.  All as though, like a light switch, something magical will occur on July 1, 2015 that will meet those requirements.

For merchants, requirements 9.9 and 11.3 are going to be huge issues, particularly for those of you with large networks and lots of retail outlets.  If you have not gotten started on these requirements by now, there is no way you will be compliant by July 1.  Both of these requirements take thought, planning and training.  They cannot just be started overnight with compliance as the result.

For requirement 11.3, the new approach required for penetration testing is resulting in vulnerabilities being uncovered.  Organizations that did not want to get caught flat footed are finding that their network segmentation is not as segmented as they once believed.  They are also finding new “old” vulnerabilities because of these network segmentation issues.  The bottom line is that these early adopters are scrambling to address their penetration testing issues.  In some cases ACLs need to be adjusted, but I have a few clients that have found they need to re-architect their networks in order to get back to compliance.  Obviously the latter is not an overnight kind of fix.

Requirement 9.9 is all about ensuring the security of points of interaction (POI), as card terminals are now referred to.  Because of all of the POI tampering and hacks that have occurred, the Council has added the requirements in 9.9 to minimize that threat.  The biggest problem early adopters are running into is getting their retail management and cashiers trained so that they understand the threats and know how to deal with them.  This requires creating new procedures for daily or more frequent inventorying of the POIs and visually inspecting them to ensure they have not been tampered with.  Companies are rolling out serialized security tape that must be applied to the seams of POIs so that any opening of the case can be visually determined.  Locking cradles are being installed for every POI to secure them to the counter.  Then there is implementing the new procedures for doing at least daily inspections, what to do if you suspect tampering and how to inform corporate of potential issues.  Again, not something that just happens and works on day one.

For service providers, besides 11.3, requirement 8.5.1 is going to be their biggest issue.  This requires the service provider to use different remote access credentials for every customer.  This is in response to the breaches that occurred at a number of restaurants in Louisiana a few years ago as well as more recent breaches.

The problem that early adopters of 8.5.1 are finding is that implementing enterprise-wide credential vaults is not as simple as it appears.  The biggest impact with these implementations is that service providers start missing their service level agreements (SLA).  Missing SLAs typically costs money.  So these service providers are not only incurring the costs related to implementing the credential vault solution, but they are suffering SLA issues that just pile on the injuries.
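At its core, 8.5.1 boils down to one credential per customer, never shared.  The following is a minimal sketch, assuming a hypothetical customer list, of generating unique per-customer remote access credentials; a real deployment would generate and store these inside the credential vault and rotate them on a schedule rather than handling them in a script.

    """Illustration only: one unique remote access credential per customer,
    which is the heart of requirement 8.5.1.  The customer names are
    hypothetical; real implementations generate, store and rotate these
    inside a credential vault."""
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits

    def new_credential(length: int = 24) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    customers = ["acme-restaurants", "bobs-burgers", "cafe-luna"]   # hypothetical
    credentials = {customer: new_credential() for customer in customers}

    for customer, credential in credentials.items():
        # Never log or print credentials in the clear outside of an example.
        print(customer, "->", credential[:4] + "********************")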

But the final straw is all of the people that closely parse the PCI DSS and only the DSS.  You saw this with some of the questions asked at the latest Community Meeting.  You also see it in the questions I get on this blog and from the prospects and clients I deal with daily.  These people are hunting for a way to get around complying with a particular requirement.

This occurs because people only read the DSS and not the Glossary, the information supplements and the other documents provided by the Council.  At least with v3 of the DSS the Council included Guidance for each of the requirements.  Not that adding Guidance makes a whole lot of difference based on the arguments laid out by some people.  The Council could do us all a favor by making the Reporting Template generally available along with all of the other documents.  Not so much because people would necessarily read it, but because it would give QSAs and ISAs more ammunition to use when these discussions come up.

Successful security professionals understand the purpose of security frameworks.  These frameworks exist to share collective knowledge and lessons learned so that everyone has a leg up in detecting and mitigating threats.  Successful security professionals use these frameworks to get things done, not to waste their time developing scholarly legal arguments or twisting the English language to explain why they do not need to meet some security requirement.  They put their heads down, review the frameworks, develop plans to implement the changes necessary to improve security, work the plan and deliver results.  Do those plans always meet requirement deadline dates?  Not always, but they are close, or as close as they can get given other business issues.

The bottom line is that security professionals are not lawyers, and good security professionals certainly do not sound like lawyers.  But if you constantly find yourself sounding like a lawyer, digging deep to split legal hairs, then in my very humble opinion you really need to re-examine your career, or lack thereof.  I say lack thereof because, in my experience, security professionals that operate like lawyers do not have long careers.  They move around a lot because, once people realize that they cannot deliver, they are forced to move on.  Eventually a reputation develops, and after that point these people end up having to find a new career because the security community knows their modus operandi.

26
Apr
14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and heard in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI-informed salespeople pushing redirected ecommerce solutions and claiming they put the merchant entirely out of scope.  That was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.
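To make that risk concrete, here is a bare-bones sketch of the merchant-side redirect in question, using Python’s standard library and a hypothetical processor URL.  The single value an attacker needs to change is the Location target; everything else about the site can remain untouched.

    """A stripped-down illustration of the merchant-side redirect that SAQ A-EP
    is concerned with.  The processor URL is hypothetical; the point is that an
    attacker who can change that one value controls where customers enter
    their card data."""
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The value an attacker would target.  In a real compromise this is
    # changed to point at the attacker's look-alike payment page.
    PROCESSOR_PAYMENT_URL = "https://payments.example-processor.com/checkout"

    class CheckoutRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/checkout"):
                self.send_response(302)
                self.send_header("Location", PROCESSOR_PAYMENT_URL)
                self.end_headers()
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), CheckoutRedirect).serve_forever()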

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The requirement 1 items are all there to ensure that there is a firewall protecting the server that performs the redirect.  This is Security 101, and I doubt that any merchant would not have a firewall protecting all of their Internet-facing servers.  Routers have always been optional, and if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.
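One way to produce that proof is a simple data discovery pass over logs, trace files and history files.  The sketch below, with a hypothetical search path and a basic regular expression, flags PAN-like numbers and uses a Luhn check to reduce false positives; commercial discovery tools do this far more thoroughly, but the evidence principle is the same.

    """Minimal data discovery sketch: scan files for PAN-like numbers and use a
    Luhn check to cut down on false positives.  The search path and pattern are
    illustrative only."""
    import re
    from pathlib import Path

    PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

    def luhn_ok(digits: str) -> bool:
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def scan_file(path: Path) -> None:
        text = path.read_text(errors="ignore")
        for match in PAN_CANDIDATE.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_ok(digits):
                print(f"possible PAN in {path}, ending in {digits[-4:]}")

    if __name__ == "__main__":
        for path in Path("/var/log").rglob("*.log"):   # hypothetical location
            scan_file(path)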

Requirement 4 is provided to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.
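For the 4.1 evidence, it helps to capture the certificate details straight from the processor endpoint rather than relying on what the processor’s sales material says.  Here is a minimal sketch using Python’s standard ssl module against a hypothetical processor hostname; the subject, issuer, validity dates and negotiated protocol version are the items worth keeping in the evidence file.

    """Capture the processor's certificate details for the requirement 4.1
    evidence.  The hostname is hypothetical."""
    import socket
    import ssl

    PROCESSOR_HOST = "payments.example-processor.com"   # hypothetical

    context = ssl.create_default_context()
    with socket.create_connection((PROCESSOR_HOST, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=PROCESSOR_HOST) as tls:
            cert = tls.getpeercert()
            print("protocol :", tls.version())
            print("subject  :", dict(pair[0] for pair in cert["subject"]))
            print("issuer   :", dict(pair[0] for pair in cert["issuer"]))
            print("notBefore:", cert["notBefore"])
            print("notAfter :", cert["notAfter"])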

Requirement 10 is important for forensic research should the redirect be manipulated.  Logging makes it possible to determine when that event occurred and, therefore, the scope of any compromise.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be considered Security 101 and self-explanatory, you would be surprised at how many merchants argue that point.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
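Commercial file integrity monitoring products are built for this, but the underlying idea is simple enough to sketch.  Assuming hypothetical file and baseline locations, the check below hashes the file that issues the redirect and raises an alert the moment the hash no longer matches the recorded baseline; real products add scheduling, tamper-resistant baselines and alert routing.

    """Bare-bones illustration of the 11.5 idea for the redirect: hash the
    watched file(s) and alert when a hash changes.  Paths are hypothetical."""
    import hashlib
    import json
    from pathlib import Path

    WATCHED = [Path("/var/www/checkout/redirect.py")]   # hypothetical
    BASELINE = Path("/var/lib/fim/baseline.json")       # hypothetical

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def check() -> None:
        baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
        for path in WATCHED:
            current = sha256(path)
            known = baseline.get(str(path))
            if known is None:
                baseline[str(path)] = current
                print(f"no baseline for {path}; recorded {current[:12]}")
            elif current != known:
                print(f"ALERT: {path} has changed; investigate the redirect immediately")
        BASELINE.parent.mkdir(parents=True, exist_ok=True)
        BASELINE.write_text(json.dumps(baseline, indent=2))

    if __name__ == "__main__":
        check()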

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

07
Dec
13

POS Botnets

Just in time for the holidays.

An article came out this past week regarding botnets that are specifically targeting point of sale (POS) systems.  The reason I bring this up is because of this quote.

“StarDust developers have intimate knowledge of the inner workings of PoS applications such as Clearview PoS.  As a result, the malware can ferret out where in computer memory sensitive data, in some cases in cleartext form, is stored.  StarDust can also sniff network traffic and is able to extract Track1 and Track2 card data.  To remain covert, the software transfers card details only when the terminal is inactive and the screensaver is on. It also uses the RC4 cipher to encrypt data before sending it to the control server.”

Obviously, if your organization uses Clearview POS software you should probably be examining your systems and networks to ensure that they have not been compromised by StarDust.

However, the larger issue is that most merchants do not see themselves as targets of such attacks, let alone have they constructed a secure environment for their POS systems.  Some of this is not entirely the merchant’s fault.  A lot of merchants outsource the maintenance and management of their POS systems to a value added reseller (VAR) and that VAR is the one responsible for the POS network configuration.  Regardless of responsibility, a merchant needs to be aware of these threats and take appropriate action either internally or with their VAR to address these threats and minimize risk.

Regardless of whether it is StarDust or a similar threat, here are some steps you can take to minimize and detect such threats.

  • Segment your POS network from the rest of your internal network and limit POS segment traffic to only your processor and your internal system support and operations systems.  This will require network access rules so that traffic can only reach those destinations, which limits the number of systems that could compromise your POS environment.
  • Monitor your POS network segment for any traffic directed to an external network other than your processor or your system support and operations systems.  Your firewall rules should only allow secure connections between your POS network and those destinations; traffic going anywhere else should be blocked and reported for further investigation (a minimal sketch of this kind of egress check follows this list).
  • Monitor your POS systems for any file or configuration changes.  Most anti-virus solutions can provide this capability, but there are also solutions that are specifically engineered for this task.  Regardless of which you choose, configure the tool to alert you as soon as it identifies a potential change to files or configuration of the POS system.  If approved changes were not made to the POS systems and you received an alert, you likely have been compromised.
  • Develop an incident response plan should you receive an alert indicating that your POS systems have been compromised.  An incident response plan provides the organization with a “battle plan” should a compromise occur.  This type of plan is key to minimize the potential reputational impact to the organization should such an attack be confirmed.  A good incident response plan can keep you from making mistakes as you navigate the mine field that is the media circus that comes with a breach.
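As promised in the second bullet, here is a minimal sketch of that kind of egress check.  The connection log format, its location and the allowlisted address ranges are all hypothetical; in practice this data comes from your firewall or netflow tooling, and the allowlist is your processor’s published ranges plus your own support and operations segment.

    """Flag POS segment connections to destinations other than the processor or
    the support/operations systems.  Log format and addresses are hypothetical."""
    import csv
    import ipaddress

    ALLOWED_DESTINATIONS = [
        ipaddress.ip_network("203.0.113.0/28"),   # hypothetical processor range
        ipaddress.ip_network("10.20.0.0/24"),     # hypothetical support/ops segment
    ]

    def allowed(dest: str) -> bool:
        addr = ipaddress.ip_address(dest)
        return any(addr in net for net in ALLOWED_DESTINATIONS)

    with open("pos_connections.csv", newline="") as f:      # hypothetical export
        for row in csv.DictReader(f):                       # columns: src,dst,dport
            if not allowed(row["dst"]):
                print(f"INVESTIGATE: {row['src']} -> {row['dst']}:{row['dport']}")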

Three straightforward steps that can minimize the threat of StarDust, plus a documented incident response process should you unfortunately be breached.

Security does not have to be rocket science.

09
Nov
13

Removing The Drama Of A PCI DSS Assessment

I had to prepare a presentation for a client a while back giving them some tips on how to prepare for and get through a PCI assessment as easily as possible.  I thought it might be good to share those thoughts.

Trust But Verify

This famous quote from US President Ronald Reagan is the mantra of a PCI assessment.

The PCI DSS is based on the “trust” that organizations are complying with the PCI DSS.  However, self-assessment processes and QSAs are used to “verify” that the organization is, in fact, complying with the PCI DSS.  As a result, the organization being assessed not only has to produce documentation to that effect, but the QSA must also observe that the PCI DSS requirements are being followed.

The net is that, just because you say something is fact, your QSA must substantiate your statements so that they, too, will treat them as fact.  If you remember nothing else but this simple truth, you will understand why a QSA must do what they do.

Scope

If PCI assessments go wrong for any reason, this is probably the primary reason.  It fascinates me that people often profess ignorance of the PCI DSS, yet somehow become experts on the subject when it comes to scoping.

Remember point number one: trust but verify.  Under that premise, the PCI SSC makes it a QSA’s primary responsibility to confirm the scope of the PCI assessment as they verify the facts.  As a result, in order to confirm that scope, the QSA must look at everything and then, through investigation and evaluation, determine that the areas you deem out of scope are, in fact, truly out of scope.

Let your QSA ask their questions and conduct their observations without arguing with them about scope.  They are only doing this because they are required to confirm the facts, and your fighting with them about scope is only going to make them wonder what you are trying to hide.  The bottom line is that arguing with your QSA about scope only makes your assessment all the more painful and time consuming.

If you truly want to avoid arguing over scoping, get a copy of the Open Source PCI Scoping Toolkit.  Go through your environment and determine the categories of all of your systems and networks.  This is a good annual exercise because you need to prove your scope every year.

Applicability

According to the PCI SSC, there are five PCI DSS requirements that can never, ever be marked as ‘Not Applicable’: 1.2.3, 3.2.1, 3.2.2, 3.2.3 and 11.1.  I have discussed these all before but they deserve another quick discussion here.

Clients will argue ad nauseam that wireless is not implemented or is out of scope and therefore refuse to discuss wireless.  For requirement 1.2.3, a QSA is required to document the procedures they followed to rule wireless in or out of scope.  That of course means the QSA must investigate any wireless networks and evaluate if the controls are rigorous enough to keep wireless out of scope.  For requirement 11.1, the QSA must investigate and evaluate if the organization’s controls surrounding the detection of rogue wireless are appropriate regardless of whether or not the organization has implemented wireless networking.

3.2.1, 3.2.2 and 3.2.3 are all related to the securing of cardholder data when it is stored.  Even if an organization is not storing cardholder data on their systems, a QSA must document the procedures they used to confirm that cardholder data is not stored on the organization’s systems.  This usually involves a review of flat files and database schemas and the running of utilities and queries against those systems and databases looking for cardholder data.
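If you want to see roughly what that looks like, here is an illustrative sketch against a SQLite database with hypothetical table and database names; the same idea applies to any DBMS, usually via the vendor’s own query tools or a commercial data discovery product, and a Luhn check like the one in the file-scanning sketch earlier on this page would be layered on to weed out false positives.

    """Walk every table in a database and flag values that look like a PAN.
    The database name is hypothetical and SQLite is used only because it is
    in the standard library."""
    import re
    import sqlite3

    PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

    conn = sqlite3.connect("orders.db")   # hypothetical database
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]

    for table in tables:
        for row in conn.execute(f'SELECT * FROM "{table}"'):
            for value in row:
                if isinstance(value, str) and PAN_CANDIDATE.search(value):
                    print(f"possible PAN-like value found in table {table}")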

The bottom line is do not argue about something being ‘Not Applicable’ and then hinder the QSA’s investigation to prove it is ‘Not Applicable’.  Do not get me wrong, you need to keep your QSA on point, but remember that QSAs are required to evaluate the situation and then document the process used to determine that a particular requirement is ‘Not Applicable’.  All you do by complicating that investigation is add more time to your assessment and, potentially, cause a requirement to be marked as ‘Not In Place’ instead of ‘Not Applicable’.

Yes, I Did Kind Of Ask That Earlier

Like security, the PCI DSS also works from a ‘defense in depth’ approach.  A lot of the questions QSAs ask are very similar, just asked from a different perspective.  The people that develop assessment and audit programs will tell you that this is the most effective way to uncover the level of compliance with a given program.  The reason is that organizations that have not integrated a compliance program into their day-to-day operations will typically provide inconsistent or confusing answers to those similar questions.  Not that this is a perfect technique, mind you, but it does work the majority of the time.

Please be patient with your QSA.  They did not write these procedures, but they are required to execute them.

Answer The Question

Most people, yours truly included, suck when being questioned, particularly in a legal proceeding.  Lawyers always instruct anyone that will be called to testify in a legal proceeding to take their time, focus on the question being asked and only answer that question.  Never, ever provide any information outside of the question, i.e., do not elaborate.  The trouble is that lawyers know that silence is a vacuum and it is human nature to fill that vacuum with extraneous information.  Hence the typically long pauses between their questions.

QSAs and auditors tend to operate under the same principle as a lawyer.  People get into trouble when they start talking about things that are outside of the question, out of scope or not relevant to the assessment.  Such responses will at first confuse the QSA for a moment as they try to reconcile your remarks.  But then, the QSA may question whether they truly understand the environment and, possibly, the scope of the assessment.  It is then that they may start quizzing you and your staff as they go back and reconfirm their understanding of the environment.  All of this takes time, time away from the assessment process as you cover old ground while the QSA re-verifies the facts.

The lesson to be learned here is that there is nothing wrong with saying, “I do not know.”  Or “I will have to look into that question and get back to you.”  The worst thing you can do is try and “tap dance” around the question or never really answer the question.  If you do not have the answer, then find out who does have the answer and point the QSA to that person.

Prepare

And finally, the best thing you can do to avoid all of these issues is to walk through the PCI assessment process and requirements with those of your staff that will be interviewed/observed and make sure they understand the questions to be asked and how they should be answered.

If you really want to know what the QSA will ask, why they will ask and the evidence they will require, get a copy of the PCI DSS ROC Reporting Instructions from the PCI SSC Document Library.  The Reporting Instructions document is the “Bible” for QSAs as it documents how they will be assessed in a PCI SSC Quality Assurance review.  Reviewing and understanding this document will go a long way to minimizing the “What do you need that for?” questions that all QSAs encounter.

For each requirement’s tests, the Reporting Instructions will tell you:

  • What observations, if any, need to be performed and documented.
  • What documents, if any, need to be collected and reviewed and what information needs to be identified in those documents.
  • What people, if any, need to be interviewed and about what topic(s).
  • What processes, actions taken or states of equipment, if any, need to be observed and documented.
  • Whether or not sampling can be used.

Using the Reporting Instructions, you can also gather a lot of the observations ahead of time.  Your QSA will still have to conduct some observations themselves, such as confirming that default passwords are not used, that timeouts occur, that change management operates and the like.  But gathering screen shots and documenting what you used as testing conditions will go a long way toward making your assessment go more smoothly and quickly.

Hopefully this discussion will help you get through your next PCI assessment without all of the associated drama that can come from such an exercise.



