Posts Tagged ‘security’

14 Mar 15

The 2015 Verizon PCI Report

A lot has been written about this year’s Verizon PCI Compliance Report, particularly about how 80% of organizations cannot maintain their compliance. And at the very end of the report are a number of issues raised by Verizon regarding why maintaining compliance is so difficult for most organizations. It is those issues that I would like to discuss.

Scale and Complexity of Requirements

“I just don’t understand why this ERP upgrade is going to take 18 months to complete. Can’t we just put the DVD in the drive and upgrade it like Microsoft Office?” – Anonymous Executive to IT Management

The same could be said about any security framework. If organizations are struggling with PCI compliance, imagine how they are struggling with HIPAA, FISMA or ISO 27K compliance. Compliance with any of the security frameworks is not easy.

I disagree with Verizon’s claim that it is related to the fact that most organizations do not know the PCI DSS. After six years and three versions, I rarely run into an organization today that does not have a basic, overall understanding of the PCI DSS. These organizations may have some interesting ideas on what sections and requirements of the DSS mean, but they have definitely studied it and read about it. Therefore the idea that organizations are ignorant on the subject is far from the truth in my experience.

In my opinion, where the problem lies is that most organizations have not truly managed their technology environments, thanks to the disruption of mergers and acquisitions, partially implemented applications, bring your own device (BYOD), the Cloud and the plethora of other changes that complicate organizations. Today, IT is a very important part of any organization, but it is not managed like it was in the “good old days”. There are too many stakeholders, and the consumerization of technology has not helped the situation by making everyone an IT “expert”.

Most organizations’ IT operations these days are a hodge-podge of technologies, applications and networks. I would equate it to the technological equivalent of a house’s attic and garage combined. We all know we should clean and straighten them out, but that project always sits on the back burner as there are other, more important or fun things to do.

As a result, for most organizations, there is just no easy way to simplify, segregate and isolate cardholder data (CHD) and comply with the PCI DSS without making the environment even more complex. Starting over is not an option for a lot of organizations.

That said, I have encountered a few very brave organizations that have done just that: started over. Management at these organizations came to the realization that fixing the problem was too complex and expensive and that starting over was the cheaper, safer and easier way to go.

Uncertainty about Scope and Impact

“I don’t know much about PCI, but I do know my scope.” – Anonymous Manager to QSA

When application developers cannot explain how their applications work on a technical level. When anyone in any department can be in the IT business. When security personnel are order takers for firewall configuration changes reviewed and approved by management that has no clue as to the implications of those changes. When network people provide a communications utility but have no idea how the traffic actually traverses the network.

Is it any wonder we have no idea how to scope a PCI assessment?

But there are larger problems as to why scoping is difficult. The root cause of why scoping is such an issue is that everyone’s risk tolerance is different. I drive race cars at very obscene speeds on race tracks (mostly) that I am sure a lot of people would view as insane. However, I think that people that skydive and do rock climbing are the insane ones. All of this points to everyone’s acceptance and avoidance of risk based on their own views.

There is a sidebar in the Verizon report calling on the PCI SSC to provide guidance about scoping. Good luck with that. The Council had a scoping SIG a number of years ago that imploded due to the aforementioned issues with everyone’s risk tolerance. The result was a small band of people from the SIG that published the PCI Open Scoping Toolkit. The PCI Open Scoping Toolkit is not perfect, but it provides a framework for having an intelligent discussion about how to go about scoping and determining what is in scope and why.

The key to solving the scoping issue resides with the organization, not their QSA, acquiring bank or any other external entity. Organizations need to use the PCI Open Scoping Toolkit to come up with their scoping framework and definitions. Once that has been agreed, then an organization needs to map out their applications and networks to determine their true scope. This is where tools from vendors such as Tufin, FireMon, SolarWinds and the like can provide assistance by documenting the network and then simulating data flows over the network.
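To make that last point concrete, here is a very rough sketch of the kind of check a flow-mapping exercise enables. The segment names and flows are entirely made up and this is not any particular vendor’s tool; the point is simply that once flows are documented, flagging what touches the CDE becomes a mechanical exercise.

```python
# Hypothetical example: flag network segments that talk to the CDE based on observed
# flows. Segment names and flows are invented for illustration.
CDE_SEGMENTS = {"cde-app", "cde-db"}

# (source_segment, destination_segment, port) tuples gathered from firewall rules
# or a flow-simulation tool's export.
observed_flows = [
    ("store-pos", "cde-app", 443),
    ("corp-workstations", "corp-file", 445),
    ("corp-monitoring", "cde-db", 1433),
    ("guest-wifi", "internet", 443),
]

def connected_segments(flows, cde_segments):
    """Return every non-CDE segment with a direct connection into or out of the CDE."""
    connected = set()
    for src, dst, _port in flows:
        if dst in cde_segments and src not in cde_segments:
            connected.add(src)
        if src in cde_segments and dst not in cde_segments:
            connected.add(dst)
    return connected

if __name__ == "__main__":
    for segment in sorted(connected_segments(observed_flows, CDE_SEGMENTS)):
        print(f"{segment}: connected to the CDE - review as potentially in scope")
```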

With that approach, it is incumbent on QSAs and other auditors to accept these definitions for their assessment unless there is some significant or gross error in the organization’s definitions. This will address the complaint that organizations have with QSAs. How often have we heard something such as, “The last QSA told us this was compliant.” If we all play by the same risk definitions the client has provided, then statements like that should go away.

Once an organization truly understands and has defined its scope, it can then understand the impact of existing operations and any changes.

The Compliance Cycle

This is what the Council is attempting to address with business as usual (BAU). The idea is that with security practices and monitoring embedded within an organization’s operations, security issues can be quickly identified and addressed before they become serious.

However, for this to work, organizations need to know their scope as well as how their IT environment actually works. Without that knowledge, embedding the PCI DSS into the organization is a futile exercise.

Lack of Resources

Every organization is running “lean and mean” these days. Cost control is king. As a result, resources are stretched, sometimes to the point that any additional activities just cannot be accommodated without hiring someone. And hiring is not allowed. So implementing BAU is not going to go well if it goes at all.

On the information security front, finding qualified people is nearly impossible, even for consultancies. Organizations are finding that most information security professionals are heading to consultancies because the pay is better. Since security is hard on both the mind and the body, most people want to be compensated as much as possible for their efforts. As a result, most organizations cannot pay for in-house security resources. And then, even if they do ante up, typically the person that takes the position either gets bored once they fix everything, or gets frustrated when the organization refuses to make the changes required to ensure or enhance security.

Enter the managed security services provider or MSSP. The concept is that the MSSP provides the security talent at a more reasonable price yet organizations get the quality personnel needed to enhance and stabilize their security.

Where this goes wrong is that the MSSP and the customer are not on the same page as to each other’s responsibilities. This is from a mixture of sales people over promising as well as prospective customers hearing what they want to hear. Never mind that it is all documented in a contract.

To address this situation, the PCI SSC has come up with a new requirement, 12.8.5, which states:

“Verify the entity maintains information about which PCI DSS requirements are managed by each service provider, and which are managed by the entity.”

Under the v3 Attestation Of Compliance (AOC) form, this will not be as big a problem for an organization to maintain. However, if an organization has a lot of service providers and/or the service providers have v2 AOCs, this could be a very daunting task.
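The easiest way I have seen to keep that from becoming overwhelming is to keep the responsibility mapping in some simple, maintainable form and flag anything without an owner. The sketch below is purely illustrative; the service provider names and the handful of requirements are made up.

```python
# Hypothetical responsibility matrix for requirement 12.8.5. The service provider
# names and the requirements chosen are invented for illustration.
responsibility_matrix = {
    "Req 9.5 (physically secure media)": {"entity": False, "Acme Offsite Storage": True},
    "Req 10.7 (retain audit trail history)": {"entity": True, "Acme Managed SIEM": True},
    "Req 11.2 (vulnerability scans)": {"entity": False},
    "Req 12.10 (incident response plan)": {"entity": True},
}

def unassigned(matrix):
    """Return requirements that nobody has taken responsibility for."""
    return [req for req, owners in matrix.items() if not any(owners.values())]

if __name__ == "__main__":
    for req, owners in responsibility_matrix.items():
        names = [name for name, responsible in owners.items() if responsible]
        print(f"{req}: {', '.join(names) if names else 'UNASSIGNED'}")
    for req in unassigned(responsibility_matrix):
        print(f"WARNING: no one is documented as responsible for {req}")
```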

Lack of Insight in Existing Business Processes

“I’ve only been in this position for [2, 3 or 4] months. So I’m not fully up to speed on everything we do.” – Anonymous Manager to QSA

“I’d give you an organization chart, but it would be out of date by the time I printed it.” – Anonymous Human Resources Manager to QSA

In today’s fast changing business world, people get shuffled out of departments and divisions faster than people can manage the changes. As a result, finding anyone with any sort of insight into an organization’s business processes can be extremely difficult, if not impossible.

Then we go back to my earlier comment about the lack of IT management. With the advent of the Cloud, some business divisions and departments have totally sidestepped the formal IT organization and set up their own operations in the Cloud. Did they know what they were doing? No! But that is beside the point; they at least now have IT solutions, never mind whether they are secure or implemented properly. The only way to find these rogue operations is to quiz everyone in the organization about how they operate and what they use to operate.

Even then, I have run into situations where a new payment channel pops out of the woodwork at the last moment. It becomes next year’s assessment issue, or we will never get the one we are currently doing out the door.

Misplaced Confidence in Existing Information Security Maturity

A lot of organizations that have been doing IT for years and years get caught in this trap. Just because you have been doing IT for an eternity does not mean that you have been doing it right for the same amount of time or that you are doing it correctly now.

In a lot of IT organizations it is an unfortunate fact of life that areas such as special projects, business continuity planning or information security were used as those “safe” places to put the former IT Vice President or Manager out to pasture so they could retire. It did not matter if the individual could handle the job; it was a place to park someone and provide a gentle way out of the organization.

A rare few individuals made the transition and actually took up the challenge of mastering their new responsibilities. However, the vast majority just checked out, collected their pay check and then retired. This left the organization with a very immature security operation compared to the rest of IT’s operations. Add into the mix the changing landscape of IT with business divisions and departments doing their own thing unbeknownst to anyone and you can see how the maturity of information security could be easily misunderstood.

Then along comes the QSA to do the PCI gap analysis and it all comes to a head as the organization comes to the rude awakening that all is not as good as they thought and that significant gaps exist. To add insult to injury, the organization finds that fixing the gaps is going to take a lot longer than the 90 days they had set aside for that activity so that they could get their Report On Compliance (ROC) done in the same year.

The Verizon report is a great read and provides a lot of insights. Everyone should get a copy and read it, take it to heart and address your organization’s security shortcomings.

15 Feb 15

New PCI Compliance Study

Dr. Branden Williams and the Merchants Acquirer Committee (MAC) have issued a new report on PCI compliance and the impact of breaches on merchants and MAC members.  I had the pleasure of getting a preview of the survey results from Dr. Williams a few weeks before its publication.  Based on some of the online chatter I have seen, the study is being both applauded and chastised for its results.

First, who is the MAC?

“The MAC community includes acquirers/merchant banks, processors, independent sales organizations (ISOs), and others. MAC membership exceeds 500 firms.”

What was the response rate for the study?

“Approximately 20% of MAC members participated in the survey (although not all survey responses could be used in the analysis due to incomplete responses).”

While 20% might seem an awfully low response rate for a survey, for those of us that conduct surveys, 20% is actually quite good.

One set of facts missing from the survey that I felt was important: how many merchants do the 100+ survey respondents cover, and what is their breakdown by merchant level? Branden very kindly ran a query and sent me back the following.

Level 1 Merchants: 73

Level 2 Merchants: 153

Level 3 Merchants: 3,832

Level 4 Merchants: 1,140,623

Total: 1,144,681

Based on this information, I would say that it reasonably represents the breakdown of merchant levels out in the real world.

The biggest finding of the study and what most people are pointing to is the low compliance percentages across the MAC members’ merchants.  Level 1, 2 and 3 merchants are only compliant around 67% to 69% of the time during their assessments.  However, most troubling is that Level 4 merchants are only 39% compliant.

Depending on the merchant level, these figures are not even close to what Visa last reported back in 2011.  Back then, Visa was stating that 98% of Level 1 merchants were reported as compliant.  Level 2 merchants were reported to be at 91% compliance.  Level 3 merchants were reported at 57% compliance.  As is Visa’s practice, it only reported that Level 4 merchants were at a “moderate” level of compliance.

So how do we square the difference in compliance percentages between the MAC and Visa numbers?  We do not because the numbers are like comparing apples to oranges.

The purpose of the study was to examine breaches and their impact on merchants.  As such, the study’s numbers indicate not only PCI compliance but also the number of organizations breached that were deemed PCI compliant, hence the much lower PCI compliance rates.

Visa’s numbers are based on filings of PCI Attestation Of Compliance (AOC) forms with processors and acquiring banks who then report those statistics up to Visa.  Visa, or any card brand for that matter, has never shared the complete equation of the number of merchants that were breached but filed an AOC indicating they were PCI compliant.  As a result, the figures posted by Visa are not representative of the study’s results and vice versa.

I think this study provides a much better look into PCI compliance than we have had from the card brands.  It shows that merchants have a significant amount of work to do maintaining PCI compliance.  I would highly recommend you download a copy of the report and share it with your management.

15 Nov 14

Security Or Checking A Box?

“Better to remain silent and be thought a fool than to speak out and remove all doubt.” Abraham Lincoln

What is your organization interested in?  Security or checking a box?

Not surprisingly, most people answer “security” and then go on to prove with their actions and words that they are only interested in checking a box.

All of you out there that argue ad nauseam about the meaning of PCI DSS testing requirements and the requisite documentation are interested in one thing and one thing only: checking a box. I am not talking about the few that have honest differences of opinion on a few of the requirements and how a QSA is interpreting and assessing them. I am talking about those of you that fight constantly with your QSA or acquiring bank over the process as a whole.

If you were to step back and listen to your arguments, you would hear someone that is splitting hairs in a vain attempt to avoid having to do something that would improve your organization’s security posture.  In essence, you want to only be judged PCI compliant, not actually be secure.

To add insult to injury, these are also typically the people that argue the most vehemently over the fact that the PCI DSS is worthless because it does not make an organization secure.  Wow!  Want to have your cake and eat it too!  Sorry, but you cannot have it both ways.

Everyone, including the Council, has been very clear that the PCI DSS is a bare minimum for security, not the “be all to end all” for securing an organization.  Organizations must go beyond the PCI DSS to actually be secure.  This is where these people and their organizations get stumped because they cannot think beyond the standard.  Without a detailed road map, they are totally and utterly lost.  And heaven forbid they should pay a consultant for help.

But I am encountering a more insidious side to all of this.  As you listen to the arguments, a lot of you arguing about PCI compliance appear to have no interest in breaking a sweat and doing the actual work that is required.  More and more I find only partially implemented security tools, only partially implemented monitoring and only partially implemented controls.  And when you dig into it as we must do with the PCI assessment process, it becomes painfully obvious that when it got hard is when the progress stopped.

“It’s supposed to be hard. If it wasn’t hard, everyone would do it.” Jimmy Duggan – A League Of Their Own

Security guru Bruce Schneier was speaking at a local ISSA meeting recently and when asked about why security is not being addressed better he stated that one of the big reasons is that it is hard and complex at times to secure our technology.  And he is right, security is hard.  It is hard because of our poor planning, lack of inclusion, pick the reason and I am sure there is some truth to it.  But he went on to say that it is not going to get any easier any time soon.  Yes, we will get better tools, but the nature of what we have built and implemented will still make security hard.  We need to admit it will be hard and not sugar coat that fact to management.

Management also needs to clearly understand that security is not perfect.  The analogy I like to use is banks.  I point out to people the security around banks.  They have one or more vaults with time locks.  They have video cameras.  They have dye packs in teller drawers.  Yet, banks still get robbed.  But the banks only stock their teller drawers with a minimal amount of money, so a robber can only get a few thousand dollars in one robbery.  Therefore, to be successful, a robber has to rob many banks to make a living, which increases the likelihood they will get caught.  We need to do the same thing with information security and recognize that breaches will still occur, but have controls in place that minimize the amount or type of information an attacker can obtain.
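To put the bank analogy in card terms, the equivalent of a lightly stocked teller drawer is keeping only what you actually need of the PAN. A minimal sketch of that idea, nothing more:

```python
# Minimal sketch: store only a truncated PAN (first six / last four digits) so that a
# breach of this data yields nothing an attacker can use. This illustrates the
# "minimize what can be obtained" idea; it is not a complete tokenization solution.
def truncate_pan(pan: str) -> str:
    digits = "".join(ch for ch in pan if ch.isdigit())
    return f"{digits[:6]}{'*' * (len(digits) - 10)}{digits[-4:]}"

if __name__ == "__main__":
    print(truncate_pan("4111 1111 1111 1111"))  # prints 411111******1111
```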

“There’s a sucker born every minute.” David Hannum

Finally, there is the neglected human element.  It is most often neglected because security people are not people, people.  A lot of people went into information security so that they did not have to interact a lot with people – they wanted to play with the cool tools.  Read the Verizon, Trustwave, etc. breach analysis reports and time and again, the root cause of a breach comes down to human error, not a flaw in one of our cool tools.  Yet what do we do about human error?  Little to nothing.  The reason given is that security awareness training supposedly does not work.  Security awareness training does not work because we try to achieve success doing it only once per year, not continuously.

To prove a point, I often ask people how long it took them to get their spouse, partner or friend to change a bad habit of say putting the toilet seat down or not using a particular word or phrase.  Never in my life have I ever gotten a response of “immediately”, “days” or “months”, it has always been measured in “years”.  And you always get comments about the arguments over the constant harping about changing the habit.  So why would any rational person think that a single annual security awareness event is going to be successful in changing any human habits?  It is the continuous discussion of security awareness that results in changes in people’s habits.

Not that you have to harp or drone on the topic, but you must keep it in the forefront of people’s mind.  The discussion must be relevant and explain why a particular issue is occurring, what the threat is trying to accomplish and then what the individual needs to do to avoid becoming a victim.  If your organization operates retail outlets, explaining a banking scam to your clerks is pointless.  However, explaining that there is now a flood of fraudulent coupons being generated and how to recognize phony coupons is a skill that all retail clerks need to know.

  • Why are fraudulent coupons flooding the marketplace? Because people need to reduce expenses and they are using creative ways to accomplish that including fraudulent ways.
  • What do the fraudulent coupons do to our company? People using fraudulent coupons are stealing from our company.  When we submit fraudulent coupons to our suppliers for reimbursement, they reject them and we are forced to absorb that as a loss.
  • What can you do to minimize our losses? Here are the ways to identify a fraudulent coupon.  [Describe the characteristics of a fraudulent coupon]  When in doubt, call the store manager for assistance.

Every organization I know has more than enough issues that coming up with a topic for these sorts of messages at least once a week is easy.  Information security personnel need to work with their organization’s Loss Prevention personnel to identify those issues and then write them up so that all employees can act to prevent becoming victims.

Those of you closet box checkers need to give it up.  You are doing your organizations a huge disservice because you are not advancing information security; you are advancing a check in a box.

06 Nov 14

The ASV Process Is Broken – Part 3

So what are my ideas on fixing the ASV process?

Modify The ASV Program

The conditions that drove the ASV process originally made sense.  Vulnerability scanning tools were predominately open source and anyone could do scanning and just about anyone was doing vulnerability scanning.  The results produced out of the open source tools could be highly questionable at best and the reporting was haphazard and about as trustworthy at times as a three dollar bill.  Even in large organizations, the people doing the vulnerability scanning did not necessarily have networking, security or even IT backgrounds.  Then there was a tremendously high false positive rate out of the open source tools.  As a result, most organizations ignored the results they received because they found that they could not be trusted.

The purpose of the ASV program was to bring some sanity and professionalism to the vulnerability scanning process.  MasterCard invented the ASV program (it was not called ASV then) back in 2005.  A test network was built and prospective ASVs were required to run their vulnerability scanners against this network and produce results which were then reviewed by MasterCard.  It was as much a test of the vulnerability scanning tool as it was of the person running the tool.  When the program transitioned to the PCI SSC, the Council added a multiple choice test to the process, but the virtual network testing and report review is still part of the process.

The trouble with this process is that the vulnerability scanning tool is no longer the problem.  Every ASV uses a commercial vulnerability scanning tool from either Tenable, Qualys, Saint, Tripwire or similar commercial tool vendor these days because they cannot afford to do otherwise.  Since these tool vendors are also ASVs, requiring a vulnerability scan for ASV certification has become a truly pointless exercise.  Other than the possibility of not properly entering the IP addresses to be scanned and running the wrong scanning policy, there really is very little that someone can screw up with a scanning tool.

The skill in vulnerability scanning today is in reviewing the results, dealing with false positive results, working to address results with compensating controls and, with the Council’s new edict on combining reports, working to get passing quarterly scans.
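To illustrate those last two points, the logic behind combining reports and handling false positives is essentially a merge and a filter. This is only a rough sketch with invented findings, not the Council’s prescribed process:

```python
# Rough sketch only: look at the most recent scan run in the quarter, drop documented
# false positives (each of which would carry a written justification), and judge what
# remains against the 4.0 CVSS passing threshold. All findings are invented.
documented_false_positives = {"CVE-2014-0001"}

scan_runs = [
    # initial quarterly scan
    [{"host": "203.0.113.10", "id": "CVE-2014-0001", "cvss": 5.0},
     {"host": "203.0.113.10", "id": "CVE-2014-0224", "cvss": 6.8}],
    # re-scan after remediation work
    [{"host": "203.0.113.10", "id": "CVE-2014-0001", "cvss": 5.0}],
]

def remaining_findings(runs, false_positives):
    """Findings still present in the most recent run, minus documented false positives."""
    return [f for f in runs[-1] if f["id"] not in false_positives]

if __name__ == "__main__":
    failing = [f for f in remaining_findings(scan_runs, documented_false_positives)
               if f["cvss"] >= 4.0]
    print("PASS" if not failing
          else f"FAIL: {len(failing)} finding(s) at CVSS 4.0 or higher remain")
```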

Therefore, in my opinion, training and testing of ASVs should be focused on the following.

  •  Determining the scope of vulnerability scanning.
  • Vulnerability scanning methodology.
  • Interpreting vulnerability scanning reports to confirm knowledge of the process and the meaning of the results.
  • What constitutes a false positive result and how to document a false positive result.
  • Development and documentation of an appropriate compensating control for a vulnerability.
  • Process for how to produce an acceptable passing scanning report from multiple reports.

And let us not limit ASV certification to just independent consulting firms.  As with the internal security assessor (ISA) program, open the ASV program to internal personnel as well.  Most large companies have independent vulnerability scanning teams that are as capable as, if not more capable than, their ASV brethren.  There is no longer any reason that these internal people cannot do the ASV scans, particularly if they meet the same standards and qualifications.

Approved Vulnerability Scanning Tools

I am not suggesting that the Council needs to develop a certification process for these tools as there are already plenty of sources that assess such tools.

The Council would publish a list based on the criteria developed by one or more independent tool assessment sources.  This list would define those tools acceptable to use for ASV vulnerability scanning.  The PCI DSS should then require that the QSA confirm that the vulnerability scanner used by the ASV is on the list in addition to confirming scope and the scanning policy used.

Require A Vulnerability Scanning Methodology

With the PCI DSS v3, the Council now requires penetration testers to use a documented and industry accepted penetration testing methodology.  Yet, there is no such requirement for vulnerability testing.

Most vulnerability scanning is done using what I call the “toss it against the wall and see what sticks” approach.  Basically, every possible vulnerability is run against every device.  Most commercial vulnerability scanners interpret banners, signatures and other markers to trim the list of vulnerabilities to be tested based on what they believe the target to be.  However, when you are scanning an external network blind, scanners cannot always properly interpret what device an IP address resolves to because of the mix of responses that they receive.  As a result, scanners either do not trim their tests, increasing false positive results, or they trim them too much and the testing is incomplete.

Then there is the automated nature of today’s vulnerability scanning.  While I understand the desire to reduce costs of vulnerability scanning, the “point and click” nature of today’s ASV scanning has made it flawed.  And it gets worse as organizations get passing scans.  As a QSA, I cannot tell you how many passing scans I have reviewed where an organization could be hacked six ways to Sunday with the remaining vulnerabilities.  As a security professional, it scares me to death.  But as a QSA, while I can bring these up, they get no play because they do not have a CVSS of 4.0 or greater.  You hope that these vulnerabilities get picked up in an organization’s penetration test.

But there is no guarantee of that happening because the penetration tester’s vulnerability scanner may or may not pick up the same vulnerabilities.  As a result, part of the penetration testing methodology should include a review of all vulnerabilities found since the last penetration test and those should be tested for in the current penetration test to ensure they have been addressed.
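A bare-bones sketch of that idea, with invented findings, would be nothing more than filtering everything found since the last test into the new test plan:

```python
# Hypothetical sketch: build a retest list for the next penetration test from all
# vulnerability scan findings recorded since the last test. Findings are invented.
from datetime import date

last_pentest = date(2014, 6, 1)

scan_findings = [
    {"found": date(2014, 5, 15), "host": "192.0.2.20", "issue": "weak SSL ciphers"},
    {"found": date(2014, 8, 3),  "host": "192.0.2.20", "issue": "OpenSSL heartbeat disclosure"},
    {"found": date(2014, 9, 20), "host": "192.0.2.31", "issue": "default SNMP community string"},
]

# Everything found after the last penetration test goes into the next test plan.
retest_list = [f for f in scan_findings if f["found"] > last_pentest]

for finding in retest_list:
    print(f"Retest {finding['host']}: {finding['issue']} (first seen {finding['found']})")
```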

Obviously, I have a preference for the methodology I discussed back in Part 2.  But there are a number of methodologies posted out on the Internet from a variety of good sources.  All I ask is that the vulnerability scanning methodology be integrated with the penetration testing methodology so that there are no gaps in coverage.

Require Monthly External Vulnerability Scanning

Before everyone panics, I am not asking that ASV scans be run monthly.  Although if the ASV program is modified, for organizations with internal ASVs that is a possibility.  I would still require the quarterly ASV scan, but I would add in monthly scans run by anyone deemed qualified as is allowed for internal vulnerability scans.

My primary rationale for this recommendation is driven by this simple fact.  When the dominant solution vendor releases patches on the second Tuesday of every month and the vast majority of those fixes have a CVSS score of 4.0 or greater, anyone that thinks quarterly scanning keeps them secure is seriously kidding themselves.  Not that a lot of security professionals bought into the quarterly vulnerability scanning requirement, even as a bare minimum.  But without the standard requiring it, a QSA has no leg to stand on other than to intimidate and shame people into doing monthly scanning.

Even if you are not Microsoft centric in your external environment, with the breaches that have occurred and the revelations of Shellshock and Poodle, it is painfully obvious that the quarterly requirement is not going to keep organizations secure.  I got a lot of calls after both of these vulnerabilities were announced with clients asking if their passing scans were no longer valid.  I was a bit schizophrenic in my thoughts.  On the one hand, I was glad they were at least thinking about the security implications of these vulnerabilities.  But their concern about their passing scans just highlights the importance of meeting a PCI requirement and passing their PCI assessment versus being secure.  Because, while I only got a few calls, you know that there are too many people that are congratulating themselves on dodging the bullets of Shellshock and Poodle because of the fortuitous timing of their  quarterly scans and that they got an additional 30, 60 or even 90 days to address them.

Then there are those organizations that run solutions such as IBM’s Websphere or Oracle’s eCommerce suites.  Both of these vendors not only patch their own application frameworks, but they also release those patches to the underlying operating systems that are compatible with their application frameworks.  But worse, these vendors do not release monthly patch releases, they do patch releases on quarterly, semi-annual or even annual bases.  As a result, there is a high likelihood that some operating system patches could be left out of these releases due to compatibility or timing issues.  The work around is to mitigate any remaining vulnerabilities through additional logging, additional monitoring, changes in firewall rules, changes in IDS/IPS rules, etc.  The additional vulnerability scanning could help organizations identify these issues and address them quicker than quarterly.

A side benefit of monthly scanning will be improving the ability of organizations and their QSAs to determine if an organization’s patching and mitigation processes are working according to requirement 6.1.  Quarterly scans typically document a lot of vulnerabilities, mostly those under a CVSS of 4.0.  As a result, whether or not an organization is properly managing their environment can be very difficult and time consuming leading to missing items that should be addressed.  Having reports more often can facilitate getting these issues addressed sooner rather than later and keeping the volume lower and less daunting.
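As a simple illustration of that side benefit, monthly results make the trend obvious: anything that keeps reappearing month after month points to a patching or mitigation process that is not working. The sample data below is made up:

```python
# Hypothetical sketch: compare consecutive monthly scans to see whether the patching
# and mitigation process (requirement 6.1) is closing findings or letting them linger.
monthly_scans = {
    "2014-09": {"CVE-2014-0224", "CVE-2014-3566"},
    "2014-10": {"CVE-2014-3566"},
    "2014-11": {"CVE-2014-3566", "CVE-2014-6271"},
}

months = sorted(monthly_scans)
for prev, curr in zip(months, months[1:]):
    lingering = monthly_scans[prev] & monthly_scans[curr]
    new = monthly_scans[curr] - monthly_scans[prev]
    print(f"{curr}: {len(new)} new finding(s), {len(lingering)} carried over from {prev}")
    for cve in sorted(lingering):
        print(f"  still open: {cve} - ask why the patch or mitigation has not landed")
```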

The bottom line in all of this is that monthly scanning is required to even have a chance at being secure these days.  Yet the vast majority of organizations are only doing quarterly scans and thinking they are secure.  That practice must change.

So there we have it.  My thoughts on the ASV process and how I would go about fixing it.

01 Nov 14

The ASV Process Is Broken – Part 2

The next reason I believe the process is broken is with the automated scanning processes.  They do not seem to be accurately assessing the security of Web servers, firewalls, routers and other externally facing devices that form an organization’s perimeter.  In my opinion, it seems that in our drive to bring down the cost, we have created a false sense of security by having the concept of a “passing” scan and only requiring scanning on a quarterly basis.

It is hard to believe that external vulnerability scanning came into vogue more than a decade ago.  There were no automated solutions.  No popping out to a Web site, entering IP addresses and scheduling a scan.  It was all manual.  A person was required to configure the scanning solution and then launch the scanner.

There was a proven methodology used for approaching vulnerability scanning.  That methodology involved approaching the organization as an attacker would.  The security professional would go out to domain registrars and other official sources to obtain potential IP address ranges, registered domains and other public information.  Then the professional would develop a profile of the organization’s external security posture using Nmap and similar tools as well as running non-intrusive scans to identify systems and potential entry points.  But even more insidious, the security professional would do what came to be known as “Google Hacking” to find out what, if any, information was available on the Internet that might facilitate the attack.  Log data, firewall configurations and even authentication credentials could be obtained through Google Hacking.  Even today, while Google does its best to minimize the information they collect, it is amazing the amount of information that can still be obtained with specially crafted Google searches.

Once the preliminaries were done, it was time to actually scan.  The most accepted approach used at that time was to run two scans, one fully external with security measures enabled and one with testing done inside the DMZ.  The scan done with the security measures enabled was to confirm that an organization’s firewalls, load balancers or other front ends were functioning as configured.  Essentially this scan showed what an attacker would discover in their scanning of the organization’s external presence.  There were two purposes of the second scan.  The first was to provide an inventory of vulnerabilities that were being obfuscated by the security measures so that they could be addressed or mitigated.

The second purpose of the DMZ scan was for what some would call today a form of “threat intelligence”.  The second scan provided a view of what an attacker would encounter if they were able to circumvent the organization’s external security.  It answered the question of how large the attack surface was in the DMZ should it be compromised?  What else could be subverted and used to further an attacker’s goal of getting even deeper into a network?  The point of this part of the exercise was to provide the organization with an idea of what to look for should anomalies begin to appear in log data.
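A crude sketch of what that comparison produces, with invented findings, is just a diff of the two result sets: what the perimeter is masking versus what is actually sitting in the DMZ.

```python
# Hypothetical sketch of the old two-scan approach: diff the blind external scan
# against the scan run inside the DMZ to see what the perimeter is masking.
external_scan = {"203.0.113.10": {"tcp/443 open"}}
dmz_scan = {"203.0.113.10": {"tcp/443 open", "tcp/8080 admin console", "weak SSH ciphers"}}

for host, inside in dmz_scan.items():
    outside = external_scan.get(host, set())
    masked = inside - outside  # present in the DMZ but hidden from the outside
    if masked:
        print(f"{host}: masked by perimeter controls but present in the DMZ:")
        for finding in sorted(masked):
            print(f"  - {finding} (address or monitor in case the perimeter is bypassed)")
```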

Throughout this scanning process, separate scans were run for firewalls, routers, load balancers, Windows devices, Linux devices, etc.  The rationale for that was to obtain clear results based on the type of device.  No reason to toss a bunch of Windows vulnerabilities against a Cisco firewall or F5 load balancer and then have to sift through the results to remove the false positive entries generated by the Windows devices they protect.

All of this information would be collated, analyzed and an actionable report produced for management and security professionals to absorb and understand the organization’s true external security posture.  Once the draft report was delivered there would be one or more calls or meetings with the organization’s security personnel to tweak the message of the results, develop action plans to address the results and then a final meeting with management and security personnel to deliver the final report.  At the end of this process, management and security personnel knew exactly where they stood from a security perspective and what needed to be done to ensure that they remained secure going forward.

Obviously, such an approach is fairly comprehensive and that costs money to produce.  And it was the cost that drove the automation of the process so that it was less labor intensive.

In the mad dash to the lowest possible cost, the following is what typically happens today.

Organizations subscribe to an ASV vulnerability scanning service.  If the organization is lucky, they go through a scoping call with a human being at the ASV service to confirm the IP addresses that are in scope for PCI compliance.  Having been on a number of these calls with ASVs, it is more of a questionnaire approach and there is very little done by the ASV to actually confirm the scope of the scanning.  It is more or less a conversation that explains how to enter the IP addresses into the scanning tool’s Web interface and how to schedule a scan, not a diligent conversation on whether or not a device is in scope.  There is typically a brief discussion on how to use the Web site to dispute scanning results that caused a non-passing scan.  In total, the call takes 20 to 30 minutes, at which point customers are essentially on their own.

Scans are run on a quarterly time period.  Scans may be rerun if vulnerabilities are identified that result in a non-passing scan.  Once a passing scan is obtained, a passing certificate is generated and the organization moves on to the next quarter.  Every now and then a result may be disputed, but those situations are typically rare.  The bottom line is that the ASV process is fairly automated with very little, if any, human intervention.

From a tool perspective, the vulnerability scanner is tossing every vulnerability at every IP address they are given.  Firewalls, load balancers, switches and routers are tested for Windows and Linux vulnerabilities.  False positive results can be rampant but most scanners weed out the obvious false positive results based on OS and device signatures.  But even if a few false positives remain, as long as they have a CVSS score of less than 4.0, no one cares.  And the scanner is configured to run as quickly as possible because it needs to be available for the next customer so that the ASV does not have to have an individual scanner for every customer.

All of this is driven by customers’ desire to minimize cost as much as possible.  Thus, in order to stay in business, the ASV scanning service providers have done an admirable job of driving down cost.  But at what other costs?

While I think we can all agree that the current approach is flawed, the Council would point to the fact that the quarterly ASV scanning requirement is only a minimum and that organizations should be doing their own external vulnerability scanning more often than quarterly.  And that is what is missed by a lot of organizations: they need to do more than what the PCI DSS requires to be secure.  And no matter how loud that message is repeated, it seems to get missed over and over again.

However, a lot of organizations do not do anything more than the PCI DSS requires.  We have this “bare minimum” mentality because, if it is not mandated in the standard, then we must be secure if we do only what is required.  This results in the false sense of security that organizations have when they only do what the PCI DSS or any other security framework for that matter mandates.

In addition, I cannot tell you how many organizations totally write off vulnerabilities that have a CVSS of less than 4.0 as “trivial” representing no threat.  There is this mistaken belief that, because the CVSS score does not warrant immediate patching, it is not a threat.  However, when you have them in relative abundance and of certain types, those “trivial” vulnerabilities can be put together in a particular sequence and used to compromise a network.  To the shock and horror of a lot of organizations, their security is breached in moments during their penetration test using those “trivial” vulnerabilities.

Then we have the limitations of the tools themselves.  Their vulnerability tests are constructed for the general audience, not a specific network.  As a result, depending on security measures and configurations, a particular vulnerability for, say, HTTP or FTP might not register due to a load balancer or other devices in the communication path.  Yet with a minor tweak, the same vulnerability could actually work.  And that was another point of doing that second scan inside the DMZ back in the “old” days.  Taking the two scans allowed you to correlate that, while HTTP or FTP did not register a vulnerability from the outside, it was still vulnerable because that vulnerability showed up on the internal DMZ scan.  As a result, you could take additional steps to ensure that the vulnerability could not be exploited from the Internet.

The bottom line in all of this is that ASV scans have become a pointless piece of the compliance assessment process.  It is not because the ASVs are bad; it is that the process they are forced to follow no longer does what it did at the beginning.  It does not help that merchants and service providers treat ASV scans as a “check the box” activity and believe that because they do something quarterly they are secure.

If the process is broken, what should be done to fix it?  That is the subject of my next post.

12 Oct 14

Lawyer Or Security Professional?

“It depends upon what the meaning of the word ‘is’ is. If ‘is’ means ‘is and never has been’ that’s one thing – if it means ‘there is none’, that was a completely true statement.” –President of The United States of America, William Clinton

It has been an interesting time as the December 31, 2014 deadline approaches and version 2 of the PCI DSS comes to its end of life.  I have started to notice that there are a lot of security professionals and others that are closet lawyers based on the discussions I have had with some of you regarding compliance with the PCI DSS.

The first thing I want to remind people of is that if you do not want to comply with one or more of the PCI DSS requirements, all you have to do is write a position paper defining for each requirement you find onerous, why it is not relevant or not applicable for your environment and get your management and acquiring bank to sign off on that paper.  But stop wasting your QSA’s or ISA’s time with your arguments.  It is not that we do not care, but without such approval from your management and acquiring bank, QSAs and ISAs cannot let you off the hook for any requirement.

With that said, the first lawyerly argument we are dealing with these days revolves around the December deadline.  We continue to get into arguments over what the deadline actually means.

It appears that the PCI SSC and card brands’ repeatedly saying that version 2 is done as of December 31, 2014 was not clear enough for some of you.  And further clarifications from them that any reports submitted after that date must be under version 3 are also apparently too much for some of you to handle.  I do not know how there could be any misinterpretation of ‘DEADLINE’, ‘DONE’ or ‘AFTER THAT DATE’ but apparently, there are a lot of people out in the world that do not understand such words and phrases.  Then there are the amazing contortions that some people will go to in a twisted dance to the death to get around this deadline.

Where have you been?  How could you have missed this deadline?  It has been known since the PCI SSC announced the change to when standard updates would be issued, back with the release of the PCI DSS v2 more than three years ago.  But even assuming you were not involved back then, the PCI SSC announced the deadline over a year ago with the release of PCI DSS v3.  Either way, it certainly should not have been a surprise as there has been plenty of warning.

But then do not take this out on your QSA.  QSAs are just the messenger in this process and had nothing to do with setting the deadline.  The PCI SSC and the card brands set that deadline.  You have a problem with the deadline, complain to them.  But if you are willing to listen, I can save you that discussion.  They will politely tell you the deadline is the deadline.  You are out of luck.  If you do not like that answer, then stop taking credit/debit cards for payment for your organization’s goods and services.

The next lawyerly argument is around the June 30, 2015 deadlines for requirements 6.5.10, 8.5.1, 9.9, 11.3 and 12.9.  Again, it is as though these dates were kept from you, which they were not.  I even wrote a post about these requirements titled ‘Coming Attractions’ back in September 2013.

For those that are calendar challenged, June 30, 2015 is practically just around the corner in business terms.  If you had years to get ready for the PCI DSS v3, what makes you think that you can just turn something on in a year and a half?  Yet we continually see people arguing that until that date, they are not going to address any of these requirements.  All as though, like a light switch, something magical will occur on July 1, 2015 that will meet those requirements.

For merchants, requirements 9.9 and 11.3 are going to be huge issues particularly for those of you with large networks and lots of retail outlets.  If you have not gotten started on these requirements now, there is no way you will be compliant with these requirements by July 1.  Both of these require thought, planning and training.  They cannot just be started overnight resulting in compliance.

For requirement 11.3, the new approach required for penetration testing is resulting in vulnerabilities being uncovered.  Organizations that did not want to get caught flat footed are finding that their network segmentation is not as segmented as they once believed.  They are also finding new “old” vulnerabilities because of these network segmentation issues.  The bottom line is that these early adopters are scrambling to address their penetration testing issues.  In some cases ACLs need to be adjusted, but I have a few that have found they need to re-architect their networks in order to get back to compliance.  Obviously the latter is not an overnight kind of fix.

Requirement 9.9 is all about ensuring the security of points of interaction (POI), as card terminals are now referred to.  Because of all of the POI tampering and hacks that have occurred, the Council has added the requirements in 9.9 to minimize that threat.  The biggest problem early adopters are running into is getting their retail management and cashiers trained so that they understand the threats and know how to deal with those threats.  This requires creating new procedures for daily or more frequent inventorying of the POIs and visually inspecting them to ensure they have not been tampered with.  Companies are rolling out serialized security tape that must be applied to the seams of POIs so that any opening of the case can be visually determined.  Locking cradles are being installed for every POI to secure them to the counter.  Then there are the new procedures themselves: doing at least daily inspections, knowing what to do if you suspect tampering and how to inform corporate of potential issues.  Again, not something that just happens and works day one.
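What that daily inventory boils down to is a record per device and an alert when an inspection fails. The sketch below is illustrative only; the serial numbers and inspection fields are made up, not a prescribed format.

```python
# Hypothetical daily POI inspection record for requirement 9.9. Device details and
# inspection fields are invented for illustration.
from datetime import date

poi_inventory = [
    {"location": "Store 114, lane 3", "serial": "PT-000123", "security_tape_intact": True,
     "cradle_locked": True, "inspected_by": "A. Clerk", "inspected_on": date.today()},
    {"location": "Store 114, lane 4", "serial": "PT-000124", "security_tape_intact": False,
     "cradle_locked": True, "inspected_by": "A. Clerk", "inspected_on": date.today()},
]

for poi in poi_inventory:
    if not (poi["security_tape_intact"] and poi["cradle_locked"]):
        print(f"ALERT {poi['serial']} at {poi['location']}: possible tampering - "
              "take the terminal out of service and notify corporate")
```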

For service providers, besides 11.3, requirement 8.5.1 is going to be their biggest issue.  This requires the service provider to use different remote access credentials for every customer.  This is in response to the breaches that occurred at a number of restaurants in Louisiana a few years ago as well as more recent breaches.

The problem that early adopters of 8.5.1 are finding is that implementing enterprise-wide credential vaults is not as simple as it appears.  The biggest impact with these implementations is that service providers start missing their service level agreements (SLA).  Missing SLAs typically costs money.  So these service providers are not only incurring the costs related to implementing the credential vault solution, but they are suffering SLA issues that just pile on the injuries.
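The idea behind 8.5.1 is simple even if the enterprise rollout is not: a distinct remote access credential per customer, pulled from a vault rather than shared. Here is a toy sketch of that logic; a real implementation would sit on a hardened credential vault product, not a dictionary in memory.

```python
# Toy sketch of requirement 8.5.1: a distinct remote access credential per customer,
# retrieved from a "vault" at connection time. The vault here is just a dict; a real
# deployment would use a hardened enterprise credential vault.
import secrets

credential_vault = {}

def credential_for(customer_id: str) -> str:
    """Return the customer's unique credential, creating one on first use."""
    if customer_id not in credential_vault:
        credential_vault[customer_id] = secrets.token_urlsafe(24)
    return credential_vault[customer_id]

if __name__ == "__main__":
    a = credential_for("restaurant-001")
    b = credential_for("restaurant-002")
    assert a != b  # no credential shared across customers
    print("per-customer credentials issued:", len(credential_vault))
```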

But the final straw is all of the people that closely parse the PCI DSS and only the DSS.  You saw this with some of the questions asked at the latest Community Meeting.  You also see it in the questions I get on this blog and from the prospects and clients I deal with daily.  These people are hunting for a way to get around complying with a particular requirement.

This occurs because people only read the DSS and not the Glossary, information supplements and other documents provided by the Council.  At least with v3 of the DSS the Council included the Guidance for each of the requirements.  Not that adding Guidance makes a whole lot of difference based on the arguments laid out by some people.  The Council could do us all a favor if they generally published the Reporting Template with all of the other documents.  Not so much that people would necessarily read it, but it would give QSAs and ISAs more ammunition to use when these discussions come up.

Successful security professionals understand the purpose of security frameworks.  These frameworks are meant to share the collective knowledge and lessons learned regarding security with everyone so that everyone can have a leg up and know ways of detecting and mitigating threats.  Successful security professionals use these frameworks to get things done, not waste their time developing scholarly legal arguments or twisting the English language as to why they do not need to meet some security requirement.  They put their heads down, review the frameworks, develop plans to implement the changes necessary to improve security, work the plan and deliver results.  Do those plans always meet requirement deadline dates?  Not always, but they are close or as close as they can get given other business issues.

The bottom line is that security professionals are not lawyers and good security professionals certainly do not sound like lawyers.  But if you constantly find yourself sounding like a lawyer digging so deep to split legal hairs, in my very humble opinion, you really need to re-examine your career or lack thereof.  I say lack thereof because, in my experience, security professionals that operate like lawyers do not have long careers.  They move around a lot because once people realize that they cannot deliver, they are forced to move on.  Eventually a reputation is developed and after that point these people end up forced to find a new career because the security community knows their modus operandi.

08 Aug 14

The Dilemma Of PCI Scoping – Part 3

In part 2 we discussed the criticality of a risk assessment and started on implementing the framework with fixing monitoring and alerting so that we can properly manage the risk we will be accepting.  In this part I will deal with Category 2 and 3 systems and how to manage their risk.

One of the big problems I have with the security purists is with their arguments over Category 3 systems.  Do not get me wrong, I understand their reasons for their statements.  However to me, their arguments seem to imply that Category 3 systems are somehow not secure or not as secure as other systems and therefore cannot be trusted.  This further implies that organizations are implementing some sort of multi-tier security structure which should not be the case.  Taking such an approach would create an operational and support nightmare, let alone increase the risk of compromise.

In today’s world, all devices need to have a certain basic level of security comprised of, at a minimum, hardened OS, minimized ports and services, access control, anti-virus and anti-malware (as appropriate and necessary), personal firewall (if mobile), logging and other similar security measures as necessary.  All of this is something I like to refer to as “Security 101”.  If you are not doing Security 101 across the board, then you likely have bigger issues than just PCI compliance.  But if you are following a Security 101 approach, then all of your devices under your direct control should have a solid, basic level of security and should be able to be trusted when on your network.  As a result, with the proper logical access controls in place, you should be able to trust Category 3 systems coexisting with certain Category 2 systems.

Notice I said “certain Category 2 systems” not “all” Category 2 systems.

Category 2b systems that have inbound connectivity to the CDE are the most risky because of that direct inbound connectivity.  As such, should any Category 2b system become compromised, the CDE is also compromised by default.  Most Category 2b systems are gateways to the CDE, such as out-of-band management systems or “jump boxes” for network and system administration, and Citrix or other virtual desktop technologies used to grant business users access to CDE applications.

These gateways need to be configured such that if they become compromised they are immediately taken out of service until they can be rebuilt and/or reconfigured as secure.  This is typically accomplished not just with anti-virus, but also through file integrity monitoring, application whitelisting and other similar measures.  But this needs to be monitored at the virtual desktop infrastructure level (aka Citrix, VMware, Terminal Services, etc. servers), as well as the virtual desktop image level.  If a file changes in the virtual environment or virtual desktop image that should not have changed, access to the virtual desktop environment is suspended until the infrastructure or image can be recertified as secure.  That means very tight monitoring of the critical files; however, this should be relatively easy to do as long as the virtual desktop image is restricted to only a single cardholder data (CHD) application.
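The mechanics of that file monitoring are not complicated, even if the discipline is. Here is a bare-bones sketch, with placeholder paths, of hashing a short list of critical files and comparing them against the baseline captured when the image was certified. A real deployment would use a proper FIM product.

```python
# Bare-bones file integrity check for a virtual desktop image: hash a small set of
# critical files and compare to a known-good baseline. Paths and baseline values are
# placeholders for illustration only.
import hashlib
from pathlib import Path

CRITICAL_FILES = ["C:/CHDApp/app.exe", "C:/CHDApp/app.config"]  # hypothetical paths
baseline = {}  # path -> sha256 hex digest recorded when the image was last certified

def sha256(path: str) -> str:
    """Hash a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def image_still_trusted(files, known_good) -> bool:
    """Return False (suspend access) if any critical file differs from the baseline."""
    for path in files:
        if known_good.get(path) != sha256(path):
            print(f"{path} changed - suspend access until the image is recertified")
            return False
    return True
```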

The other thing that needs to happen with virtual desktop is that organizations must be extremely diligent about who has access to the virtual desktop.  This means very timely user management, something a lot of organizations are very lax about.  The PCI DSS states in requirement 8.1.4 that organizations must “Remove/disable inactive user accounts at least every 90 days.”  90 days is a very long time when you look at what attackers are able to do in such a timeframe.  As a result, organizations need to step up their game in this regard and when someone’s job function changes, that needs to have a help desk ticket generated for changing that person’s access.  A 90 day review process is fine, but if you are relying on that sort of thing as a control, you are not being diligent enough in managing your users.
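Tightening that up is largely a reporting problem. Here is a sketch, with fabricated user records and a deliberately tighter window than the 90 day floor, of flagging accounts with no recent logon:

```python
# Hypothetical sketch: flag virtual desktop users who have not logged on recently,
# using a window tighter than the 90-day minimum in requirement 8.1.4. User records
# are fabricated; in practice they would come from AD or the VDI broker.
from datetime import date, timedelta

INACTIVITY_WINDOW = timedelta(days=30)

users = [
    {"name": "jsmith", "last_logon": date(2015, 4, 28)},
    {"name": "contractor12", "last_logon": date(2015, 1, 9)},
]

today = date(2015, 5, 14)
for user in users:
    if today - user["last_logon"] > INACTIVITY_WINDOW:
        print(f"disable {user['name']}: no logon since {user['last_logon']}")
```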

But what about Category 2a systems you might ask?  Category 2a systems are directory servers, authentication servers, DNS and DHCP servers, anti-virus master servers, system update servers and other network and system management/monitoring support systems.  These are systems to which only system or network administrators should ever have direct access.  These systems should be configured as “bastion” servers, security hardened so that they are not easily compromised by vulnerabilities.  Most vendors of these systems understand this and have set up their installation routines accordingly – restricting services, allowing only necessary applications, etc.  The bottom line with Category 2a systems is that, if any of them become compromised, it is game over because the attacker has the privileges of an administrator.

I am not a fan of separate directory systems.  These systems double as the access control management system, and the more complicated access control becomes, the more likely that access control becomes less reliable and secure.  As such, I prefer a common access control system managed through an integrated directory system such as Microsoft’s Active Directory (AD).  Over the years, the port requirements for making AD work between domain controllers have gotten much better and easier to implement.  Plus, starting with Windows Server 2008, there is even the ability to limit and control the dynamic ports used.  As such, I recommend using a common domain approach rather than separate domains with trust relationships as it just makes management so much easier and more reliable.  Whether or not you choose to put domain controllers inside your CDE is up to you to decide.

However, there is a larger consideration here that needs to be put on the table.  Think about what the security purists are saying with their statements about Category 3 systems.  If an attacker can jump from a Category 3 system to owning a Category 2a system such as a directory server, DNS server, DHCP server, etc., what does that say about an organization’s security posture to begin with?  If an attacker gains control of a system/network administrator’s credentials, it is game over.  That is not a device failure issue, that is a social engineering failure.  As a result, you can have whatever controls of separation you want, but in the end, if an administrator gets owned, what did all of that separation do for you?  Nothing.  The best we can do is to manage and mitigate the risks, monitor for any attempts to leverage those risks and deal with any attacks as quickly as possible.

At the end of the day, organizations need to assess the risk that their Category 2 systems provide to the CDE.  If it is higher than desired, those systems need to be put into the CDE.  Others can share networks with other Category 2 and 3 systems if necessary.

There are obviously enhancements that you can make to further improve on this strategy.  But this is just what I would consider the minimum to get by which seems to be what a lot of you desire.




