Archive for November, 2014

23 Nov 14

Face It, You Are A Poor Judge Of Risk

“The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown.” HP Lovecraft

We have a pop quiz today.

  1. Are you more likely to die from an alligator attack or a shark attack?
  2. Are you more likely to win the PowerBall lottery jackpot or become a movie star?
  3. Are you more likely to die in a vending machine accident or from a lightning strike?
  4. Are you more likely to be elected President of the United States or to date a supermodel?
  5. Are you more likely to die from influenza or from drowning?
  6. Are you more likely to catch influenza or Ebola?

The purpose of this pop quiz is to demonstrate how poorly we humans evaluate and understand risks. I have to admit I got caught on a couple of these as I did the research.

If anything, the Ebola discussion has brought this issue of risk judgment to the forefront, given the unfounded fear people have of the disease. As a mathematician by schooling, it has fascinated me to watch media reports and government officials cave in to the spread of fear over something extremely unlikely to happen to anyone in the general population.

Do not get me wrong. If I were a health care worker anywhere in the world, I would have concerns about my risk of catching Ebola. Health care workers are on the front line, and Ebola has around a 50% fatality rate. Add to that the informative, but frightening, video that Dr. Sanjay Gupta of CNN did on the difficulty of removing a containment suit without potentially infecting yourself, and it confirms the threat a health care worker should feel when confronted with a symptomatic Ebola patient.

But for anyone outside of health care, there is little if any reason to be concerned. Yet a good percentage of the public is irrational when it comes to Ebola, despite the fact that infection requires contact with a symptomatic person’s bodily fluids. Unlike someone with influenza, a person who is contagious with Ebola does not have the mobility required to come into contact with people unless those people come to them. As a result, all of the mental gymnastics people go through about catching an infection on a bus or the subway are silly, because a contagious Ebola patient would look worse than a zombie off of ‘The Walking Dead’, assuming they could even walk at that point.

I am sure you are all saying that this is all well and good, but what is the point here with regard to PCI?

Glad you asked. I bring this up because the PCI DSS is increasingly being driven by risk and the assessment of that risk. Yet, as I hope my quiz questions have shown, people and their organizations are poor at understanding and determining risks. Organizations need to get much better at performing risk assessments (if they perform them at all) so that they can truly understand and manage risk. That said, a risk assessment does not have to be, nor should it be, a huge “death march” of a project. A proper risk assessment should answer the following questions.

  • What are the risks to the organization? This does not have to be the exhaustive, all-inclusive list you find in the various risk assessment methodology frameworks, but it should include all of the most likely risks. For PCI compliance, the risk assessment only needs to address risks to those things that are in scope for the assessment. However, most organizations need the risk assessment for other reasons, so it often contains all risks, not just PCI risks. If it does contain risks outside of PCI, you should add columns for your other requirements so you can filter the list by PCI, HIPAA, GLBA, FISMA or any other framework.
  • What is the likelihood of the risk occurring? Typically, I use a scale of 1 to 5, where 1 means the risk occurs infrequently and 5 means it occurs often. If something never occurs, it should be removed from the list.
  • If the risk occurs, what is the impact on the organization? Here I use a scale of 1 to 3 where 1 is low, 2 is moderate and 3 is high.
  • Multiply the likelihood by the impact and you get the risk rating.
  • Sort the risk ratings from highest to lowest and your risk assessment rating is complete (a short worked sketch follows this list).
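
To make the arithmetic concrete, here is a minimal sketch of that scoring approach in Python. The risks, likelihood values and impact values below are hypothetical; only the 1-to-5 likelihood scale, the 1-to-3 impact scale and the likelihood-times-impact rating come from the approach described above.

```python
# Minimal sketch of the risk rating approach described above.
# Likelihood: 1 (occurs infrequently) to 5 (occurs often).
# Impact: 1 (low), 2 (moderate), 3 (high).
# The example risks and their values are hypothetical.

risks = [
    {"risk": "Lost or stolen laptop",         "likelihood": 4, "impact": 2},
    {"risk": "POS malware infection",         "likelihood": 2, "impact": 3},
    {"risk": "Paper receipts left unsecured", "likelihood": 5, "impact": 1},
]

for r in risks:
    r["rating"] = r["likelihood"] * r["impact"]  # ratings range from 1 to 15

# Sort from highest to lowest rating to complete the risk assessment rating.
risks.sort(key=lambda r: r["rating"], reverse=True)

for r in risks:
    print(f'{r["rating"]:>2}  {r["risk"]}')
```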

But hold on, you are not done just yet. Now you need to set your organization’s risk threshold. This will likely be a very contentious discussion, as you will find that people within the organization have widely differing views on the level of risk they are willing to accept. However, it is important to capture the highlights of this discussion so that you have documentation to refer back to when you review future risk assessment results and reset the organization’s risk threshold.

Risks that fall below a certain rating are accepted, and management formally agrees to accept them. For those above that level, you develop methods of mitigating and managing the risk. Under my rating system, the lowest possible score is 1 and the highest is 15. A lot of organizations might say that any risk with a total score below 4 is accepted. For some organizations, a better approach is to accept only those risks that have an impact of ‘Low’ (i.e., equal to 1), so that all moderate and high impact risks are mitigated and managed.
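
Continuing the hypothetical sketch above, applying either acceptance rule is just a filter over the rated risks. The threshold of 4 and the “accept only Low impact” rule are the examples from the preceding paragraph, not requirements from the PCI DSS.

```python
# Continues the sketch above: each entry in "risks" now has "rating" and "impact".
THRESHOLD = 4  # example acceptance threshold

# Approach 1: accept anything rated below the threshold; mitigate and manage the rest.
accepted = [r for r in risks if r["rating"] < THRESHOLD]
managed  = [r for r in risks if r["rating"] >= THRESHOLD]

# Approach 2 (stricter): accept only risks with a Low impact (impact == 1).
accepted_low_only = [r for r in risks if r["impact"] == 1]
managed_low_only  = [r for r in risks if r["impact"] > 1]
```

Whichever rule is used, the accepted list is what management formally signs off on, and the managed list feeds the mitigation and monitoring work described next.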

Once you have your analysis done you will have a list of risks that require mitigation and management through monitoring and other methods.

Answers

  1. According to the Florida Museum of Natural History, between 1948 and 2005 there were 391 alligator attacks resulting in 18 fatalities, whereas there were 592 shark attacks with 9 fatalities. That makes the alligator fatality rate roughly three times as high as the shark fatality rate (a quick check of the math follows this list).
  2. The odds of winning the PowerBall jackpot are around one in 175M. While still incredibly long, the odds of becoming a movie star are far better at one in 1.5M.
  3. Lightning is more deadly but do not underestimate that vending machine. According to the US National Oceanic and Atmospheric Administration (NOAA), the odds of being hit by lightning in the US are one in 1.9M. According to the US National Safety Council, there is a one in 112M chance of dying in a vending machine accident.
  4. The odds are in your favor if you are interested in dating a supermodel; they are even better than the odds of becoming a movie star. You have a one in 88K chance of dating a supermodel according to Ask the Odds, while the odds of being elected President are a slim one in 10M.
  5. The US Centers for Disease Control (CDC) estimate that the odds of drowning are one in 31.4. The CDC estimates that the odds of dying from influenza are around one in 345K.
  6. The CDC estimates that one in eight people will catch the flu in any given year and as seen in a previous answer, there is a one in 345K chance that a person will die as a result. Given the population of the US is around 315M and only four people have actually caught the Ebola virus in the US, there is around a one in 78M chance of catching Ebola in the US but that could change slightly if more infected people enter the US.
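
For those who want to check the math, here is a quick back-of-the-envelope verification of a couple of the figures quoted above; it uses only the numbers already given in the answers.

```python
# Answer 1: alligator vs. shark fatality rates.
alligator_rate = 18 / 391   # about 4.6% of attacks were fatal
shark_rate = 9 / 592        # about 1.5% of attacks were fatal
print(alligator_rate / shark_rate)   # roughly 3.0 times as high

# Answer 6: 4 Ebola cases out of a US population of about 315M.
print(315_000_000 / 4)               # 78,750,000, i.e. about one in 78M
```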
15 Nov 14

Security Or Checking A Box?

“Better to remain silent and be thought a fool than to speak out and remove all doubt.” Abraham Lincoln

What is your organization interested in?  Security or checking a box?

Not surprisingly, most people answer “security” and then go on to prove with their actions and words that they are only interested in checking a box.

All of you out there that argue ad nauseam about the meaning of the PCI DSS testing requirements and the requisite documentation are interested in one thing and one thing only: checking a box.  I am not talking about the few that have honest differences of opinion on a few of the requirements and how a QSA is interpreting and assessing them.  I am talking about those of you that fight constantly with your QSA or acquiring bank over the process as a whole.

If you were to step back and listen to your arguments, you would hear someone that is splitting hairs in a vain attempt to avoid having to do something that would improve your organization’s security posture.  In essence, you want to only be judged PCI compliant, not actually be secure.

To add insult to injury, these are also typically the people that argue the most vehemently over the fact that the PCI DSS is worthless because it does not make an organization secure.  Wow!  Want to have your cake and eat it too!  Sorry, but you cannot have it both ways.

Everyone, including the Council, has been very clear that the PCI DSS is a bare minimum for security, not the “be-all and end-all” for securing an organization.  Organizations must go beyond the PCI DSS to actually be secure.  This is where these people and their organizations get stumped, because they cannot think beyond the standard.  Without a detailed road map, they are totally and utterly lost.  And heaven forbid they should pay a consultant for help.

But I am encountering a more insidious side to all of this.  As you listen to the arguments, a lot of you arguing about PCI compliance appear to have no interest in breaking a sweat and doing the actual work that is required.  More and more I find only partially implemented security tools, only partially implemented monitoring and only partially implemented controls.  And when you dig into it as we must do with the PCI assessment process, it becomes painfully obvious that when it got hard is when the progress stopped.

“It’s supposed to be hard. If it wasn’t hard, everyone would do it.” Jimmy Duggan – A League Of Their Own

Security guru Bruce Schneier was speaking at a local ISSA meeting recently and, when asked why security is not being addressed better, he stated that one of the big reasons is that securing our technology is hard and at times complex.  And he is right, security is hard.  It is hard because of poor planning, lack of inclusion, pick your reason, and I am sure there is some truth to it.  But he went on to say that it is not going to get any easier any time soon.  Yes, we will get better tools, but the nature of what we have built and implemented will still make security hard.  We need to admit that it will be hard and not sugar coat that fact to management.

Management also needs to clearly understand that security is not perfect.  The analogy I like to use is banks.  I point out to people the security around a bank: one or more vaults with time locks, video cameras, dye packs in teller drawers.  Yet banks still get robbed.  But banks stock their teller drawers with only a minimal amount of money, so a robber can get away with no more than a few thousand dollars in one robbery.  To make a living, a robber therefore has to rob many banks, which increases the likelihood they will get caught.  We need to do the same thing with information security: recognize that breaches will still occur, but have controls in place that minimize the amount or type of information an attacker can obtain.

“There’s a sucker born every minute.” David Hannum

Finally, there is the neglected human element.  It is most often neglected because security people are not “people people.”  A lot of people went into information security so that they did not have to interact much with others; they wanted to play with the cool tools.  Read the Verizon, Trustwave and other breach analysis reports and, time and again, the root cause of a breach comes down to human error, not a flaw in one of our cool tools.  Yet what do we do about human error?  Little to nothing.  The reason given is that security awareness training supposedly does not work.  Security awareness training does not work because we try to achieve success by doing it once per year instead of continuously.

To prove a point, I often ask people how long it took them to get their spouse, partner or friend to change a bad habit, say putting the toilet seat down or not using a particular word or phrase.  Never in my life have I gotten a response of “immediately”, “days” or “months”; it has always been measured in years.  And you always get comments about the arguments over the constant harping about changing the habit.  So why would any rational person think that a single annual security awareness event is going to succeed in changing any human habits?  It is the continuous discussion of security awareness that changes people’s habits.

Not that you have to harp or drone on the topic, but you must keep it in the forefront of people’s minds.  The discussion must be relevant and explain why a particular issue is occurring, what the threat is trying to accomplish and what the individual needs to do to avoid becoming a victim.  If your organization operates retail outlets, explaining a banking scam to your clerks is pointless.  However, explaining that there is now a flood of fraudulent coupons and how to recognize a phony coupon is a skill that every retail clerk needs.

  • Why are fraudulent coupons flooding the marketplace? Because people need to reduce expenses and they are using creative ways to accomplish that including fraudulent ways.
  • What do the fraudulent coupons do to our company? People using fraudulent coupons are stealing from our company.  When we submit fraudulent coupons to our suppliers for reimbursement, they reject them and we are forced to absorb that as a loss.
  • What can you do to minimize our losses? Here are the ways to identify a fraudulent coupon.  [Describe the characteristics of a fraudulent coupon]  When in doubt, call the store manager for assistance.

Every organization I know has more than enough issues to make coming up with a topic for these sorts of messages easy at least once a week.  Information security personnel need to work with their organization’s Loss Prevention personnel to identify those issues and then write them up so that all employees can act to avoid becoming victims.

Those of you closet box checkers need to give it up.  You are doing your organizations a huge disservice because you are not advancing information security; you are advancing a check in a box.

06 Nov 14

The ASV Process Is Broken – Part 3

So what are my ideas on fixing the ASV process?

Modify The ASV Program

The conditions that drove the ASV process originally made sense.  Vulnerability scanning tools were predominantly open source, and just about anyone could do, and was doing, vulnerability scanning.  The results produced by the open source tools could be highly questionable at best, and the reporting was haphazard and at times about as trustworthy as a three dollar bill.  Even in large organizations, the people doing the vulnerability scanning did not necessarily have networking, security or even IT backgrounds.  Then there was the tremendously high false positive rate of the open source tools.  As a result, most organizations ignored the results they received because they found they could not be trusted.

The purpose of the ASV program was to bring some sanity and professionalism to the vulnerability scanning process.  MasterCard invented the ASV program (it was not called ASV then) back in 2005.  A test network was built and prospective ASVs were required to run their vulnerability scanners against it and produce results, which were then reviewed by MasterCard.  It was as much a test of the vulnerability scanning tool as of the person running it.  When the program transitioned to the PCI SSC, the Council added a multiple choice test to the process, but the virtual network testing and report review is still part of the process.

The trouble with this process is that the vulnerability scanning tool is no longer the problem.  Every ASV uses a commercial vulnerability scanning tool from Tenable, Qualys, Saint, Tripwire or a similar vendor these days because they cannot afford to do otherwise.  Since these tool vendors are themselves ASVs, requiring a vulnerability scan for ASV certification has become a truly pointless exercise.  Other than entering the wrong IP addresses to be scanned or running the wrong scanning policy, there is very little someone can screw up with a scanning tool.

The skill in vulnerability scanning today is in reviewing the results, dealing with false positive results, addressing findings with compensating controls and, with the Council’s new edict on combining reports, producing passing quarterly scans.

Therefore, in my opinion, training and testing of ASVs should be focused on the following.

  • Determining the scope of vulnerability scanning.
  • Vulnerability scanning methodology.
  • Interpreting vulnerability scanning reports to confirm knowledge of the process and the meaning of the results.
  • What constitutes a false positive result and how to document a false positive result.
  • Development and documentation of an appropriate compensating control for a vulnerability.
  • Process for how to produce an acceptable passing scanning report from multiple reports.

And let us not limit ASV certification to independent consulting firms.  As with the internal security assessor (ISA) program, open the ASV program to internal personnel as well.  Most large companies have independent vulnerability scanning teams that are as capable as, if not more capable than, their ASV brethren.  There is no longer any reason these internal people cannot do the ASV scans, particularly if they meet the same standards and qualifications.

Approved Vulnerability Scanning Tools

I am not suggesting that the Council needs to develop a certification process for these tools as there are already plenty of sources that assess such tools.

The Council would publish a list based on the criteria developed by one or more independent tool assessment sources.  This list would define those tools acceptable to use for ASV vulnerability scanning.  The PCI DSS should then require that the QSA confirm that the vulnerability scanner used by the ASV is on the list in addition to confirming scope and the scanning policy used.

Require A Vulnerability Scanning Methodology

With the PCI DSS v3, the Council now requires penetration testers to use a documented and industry accepted penetration testing methodology.  Yet, there is no such requirement for vulnerability testing.

Most vulnerability scanning is done using what I call the “toss it against the wall and see what sticks” approach: every possible vulnerability is run against every device.  Most commercial vulnerability scanners interpret banners, signatures and other markers to trim the list of vulnerabilities tested based on what they believe the target to be.  However, when you are scanning an external network blind, scanners cannot always properly determine what device an IP address resolves to because of the mix of responses they receive.  As a result, scanners either do not trim the tests, increasing false positive results, or they trim them too much and the testing is incomplete.

Then there is the automated nature of today’s vulnerability scanning.  While I understand the desire to reduce costs of vulnerability scanning, the “point and click” nature of today’s ASV scanning has made it flawed.  And it gets worse as organizations get passing scans.  As a QSA, I cannot tell you how many passing scans I have reviewed where an organization could be hacked six ways to Sunday with the remaining vulnerabilities.  As a security professional, it scares me to death.  But as a QSA, while I can bring these up, they get no play because they do not have a CVSS of 4.0 or greater.  You hope that these vulnerabilities get picked up in an organization’s penetration test.

But there is no guarantee of that happening because the penetration tester’s vulnerability scanner may or may not pick up the same vulnerabilities.  As a result, part of the penetration testing methodology should include a review of all vulnerabilities found since the last penetration test and those should be tested for in the current penetration test to ensure they have been addressed.

Obviously, I have a preference for the methodology I discussed back in Part 2.  But there are a number of methodologies posted on the Internet from a variety of good sources.  All I ask is that the vulnerability scanning methodology be integrated with the penetration testing methodology so that there are no gaps in coverage.

Require Monthly External Vulnerability Scanning

Before everyone panics, I am not asking that ASV scans be run monthly, although if the ASV program is modified as I suggest, that would be a possibility for organizations with internal ASVs.  I would still require the quarterly ASV scan, but I would add monthly scans run by anyone deemed qualified, just as is allowed for internal vulnerability scans.

My primary rationale for this recommendation is driven by one simple fact.  When the dominant solution vendor releases patches on the second Tuesday of every month and the vast majority of those fixes have a CVSS score of 4.0 or greater, anyone who thinks quarterly scanning keeps them secure is seriously kidding themselves.  Not that many security professionals ever bought into the quarterly vulnerability scanning requirement, even as a bare minimum.  But without the standard requiring more, a QSA has no leg to stand on other than to intimidate and shame people into doing monthly scanning.

Even if you are not Microsoft centric in your external environment, with the breaches that have occurred and the revelations of Shellshock and Poodle, it is painfully obvious that the quarterly requirement is not going to keep organizations secure.  I got a lot of calls after both of these vulnerabilities were announced with clients asking if their passing scans were no longer valid.  I was a bit schizophrenic in my thoughts.  On the one hand, I was glad they were at least thinking about the security implications of these vulnerabilities.  But their concern about their passing scans just highlights the importance placed on meeting a PCI requirement and passing the PCI assessment versus being secure.  Because, while I only got a few calls, you know there are too many people congratulating themselves on dodging the Shellshock and Poodle bullets because of the fortuitous timing of their quarterly scans, which gave them an additional 30, 60 or even 90 days to address them.

Then there are those organizations that run solutions such as IBM’s WebSphere or Oracle’s eCommerce suites.  These vendors not only patch their own application frameworks, they also release the underlying operating system patches that are compatible with their frameworks.  Worse, these vendors do not release patches monthly; they release them on quarterly, semi-annual or even annual cycles.  As a result, there is a high likelihood that some operating system patches get left out of these releases due to compatibility or timing issues.  The workaround is to mitigate any remaining vulnerabilities through additional logging, additional monitoring, changes to firewall rules, changes to IDS/IPS rules, etc.  Additional vulnerability scanning would help organizations identify these gaps and address them more quickly than a quarterly cycle allows.

A side benefit of monthly scanning would be improving the ability of organizations and their QSAs to determine whether an organization’s patching and mitigation processes are working according to requirement 6.1.  Quarterly scans typically document a lot of vulnerabilities, mostly those under a CVSS of 4.0.  As a result, determining whether an organization is properly managing its environment can be very difficult and time consuming, leading to missed items that should be addressed.  Having reports more often helps get these issues addressed sooner rather than later and keeps the volume lower and less daunting.
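
One simple way to get value from more frequent reports, sketched below with made-up findings, is to diff consecutive months’ results to see what was remediated, what is new and what is still outstanding.  This is only an illustration of the kind of trend evidence that supports requirement 6.1, not a prescribed procedure.

```python
# Hypothetical findings from two consecutive monthly scans,
# keyed by (host, vulnerability identifier).
october = {("203.0.113.10", "CVE-2014-0160"),
           ("203.0.113.10", "CVE-2014-3566"),
           ("203.0.113.20", "CVE-2014-6271")}
november = {("203.0.113.10", "CVE-2014-3566"),
            ("203.0.113.30", "CVE-2014-3513")}

remediated = october - november    # fixed since last month
new_issues = november - october    # newly introduced or newly disclosed
outstanding = october & november   # still open; evidence of a patching lag

print("Remediated:", remediated)
print("New:", new_issues)
print("Still open:", outstanding)
```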

The bottom line in all of this is that monthly scanning is required to even have a chance at being secure these days.  Yet the vast majority of organizations are only doing quarterly scans and thinking they are secure.  That practice must change.

So there we have it.  My thoughts on the ASV process and how I would go about fixing it.

01 Nov 14

The ASV Process Is Broken – Part 2

The next reason I believe the process is broken is the automated scanning process itself.  It does not seem to accurately assess the security of Web servers, firewalls, routers and other externally facing devices that form an organization’s perimeter.  In my opinion, in our drive to bring down cost, we have created a false sense of security with the concept of a “passing” scan and by only requiring scanning on a quarterly basis.

It is hard to believe that external vulnerability scanning came into vogue more than a decade ago.  There were no automated solutions.  No popping out to a Web site, entering IP addresses and scheduling a scan.  It was all manual.  A person was required to configure the scanning solution and then launch the scanner.

There was a proven methodology for approaching vulnerability scanning, and it involved approaching the organization as an attacker would.  The security professional would go out to domain registrars and other official sources to obtain potential IP address ranges, registered domains and other public information.  Then the professional would develop a profile of the organization’s external security posture using Nmap and similar tools, as well as running non-intrusive scans to identify systems and potential entry points.  More insidiously, the security professional would do what came to be known as “Google Hacking” to find out what information, if any, was available on the Internet that might facilitate an attack.  Log data, firewall configurations and even authentication credentials could be obtained through Google Hacking.  Even today, while Google does its best to minimize the information it collects, it is amazing how much can still be obtained with specially crafted Google searches.

Once the preliminaries were done, it was time to actually scan.  The accepted approach at the time was to run two scans: one fully external with security measures enabled, and one from inside the DMZ.  The scan with security measures enabled confirmed that the organization’s firewalls, load balancers and other front ends were functioning as configured; essentially, it showed what an attacker would discover in scanning the organization’s external presence.  The second scan served two purposes.  The first was to provide an inventory of the vulnerabilities being obfuscated by those security measures so that they could be addressed or mitigated.

The second purpose of the DMZ scan was what some today would call a form of “threat intelligence”.  It provided a view of what an attacker would encounter if they were able to circumvent the organization’s external security.  It answered the questions: how large is the attack surface in the DMZ should the perimeter be compromised, and what else could be subverted and used to further an attacker’s goal of getting even deeper into the network?  The point of this part of the exercise was to give the organization an idea of what to look for should anomalies begin to appear in log data.

Throughout this scanning process, separate scans were run for firewalls, routers, load balancers, Windows devices, Linux devices and so on.  The rationale was to obtain clean results based on the type of device; there is no reason to toss a bunch of Windows vulnerabilities against a Cisco firewall or F5 load balancer and then have to sift through the results to remove the false positives generated by the Windows devices they protect.

All of this information would be collated, analyzed and an actionable report produced for management and security professionals to absorb and understand the organization’s true external security posture.  Once the draft report was delivered there would be one or more calls or meetings with the organization’s security personnel to tweak the message of the results, develop action plans to address the results and then a final meeting with management and security personnel to deliver the final report.  At the end of this process, management and security personnel knew exactly where they stood from a security perspective and what needed to be done to ensure that they remained secure going forward.

Obviously, such an approach is fairly comprehensive, and that comprehensiveness costs money to produce.  It was that cost that drove the automation of the process to make it less labor intensive.

In the mad dash to the lowest possible cost, the following is what typically happens today.

Organizations subscribe to an ASV vulnerability scanning service.  If the organization is lucky, they go through a scoping call with a human being at the ASV to confirm the IP addresses that are in scope for PCI compliance.  Having been on a number of these calls, I can say it is more of a questionnaire approach, and very little is done by the ASV to actually confirm the scope of the scanning.  It is more or less a walkthrough of how to enter the IP addresses into the scanning tool’s Web interface and schedule a scan, not a diligent conversation about whether a device is in scope.  There is typically a brief discussion of how to use the Web site to dispute results that cause a non-passing scan.  In total, the call takes 20 to 30 minutes, at which point customers are essentially on their own.

Scans are run quarterly.  Scans may be rerun if vulnerabilities are identified that result in a non-passing scan.  Once a passing scan is obtained, a passing certificate is generated and the organization moves on to the next quarter.  Every now and then a result may be disputed, but those situations are rare.  The bottom line is that the ASV process is fairly automated, with very little, if any, human intervention.

From a tool perspective, the vulnerability scanner tosses every vulnerability at every IP address it is given.  Firewalls, load balancers, switches and routers are tested for Windows and Linux vulnerabilities.  False positives can be rampant, though most scanners weed out the obvious ones based on OS and device signatures.  But even if a few false positives remain, as long as they have a CVSS score of less than 4.0, no one cares.  And the scanner is configured to run as quickly as possible, because it needs to be available for the next customer so that the ASV does not have to dedicate a scanner to every customer.

All of this is driven by customers’ desire to minimize cost as much as possible.  Thus, in order to stay in business, the ASV scanning service providers have done an admirable job of driving down cost.  But at what other costs?

While I think we can all agree that the current approach is flawed, the Council would point to the fact that quarterly ASV scanning is only a minimum and that organizations should be doing their own external vulnerability scanning more often.  And that is what a lot of organizations miss: they need to do more than what the PCI DSS requires to be secure.  No matter how loudly that message is repeated, it seems to get missed over and over again.

However, a lot of organizations do not do anything more than the PCI DSS requires.  We have this “bare minimum” mentality: if it is not mandated in the standard, then surely we must be secure if we do only what is required.  This results in the false sense of security organizations have when they only do what the PCI DSS, or any other security framework for that matter, mandates.

In addition, I cannot tell you how many organizations totally write off vulnerabilities with a CVSS of less than 4.0 as “trivial”, representing no threat.  There is a mistaken belief that, because the CVSS score does not warrant immediate patching, the vulnerability is not a threat.  However, when such vulnerabilities exist in relative abundance and are of certain types, they can be put together in a particular sequence and used to compromise a network.  To the shock and horror of a lot of organizations, their security is breached in moments during their penetration test using those “trivial” vulnerabilities.
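
To make that pass/fail line concrete, here is a minimal sketch of the triage logic, assuming each scan finding carries a CVSS base score.  The 4.0 cut-off is the ASV passing threshold discussed throughout these posts; the findings and field names are made up for illustration.

```python
# Hypothetical external scan findings with CVSS base scores.
findings = [
    {"host": "203.0.113.10", "issue": "Outdated OpenSSL version", "cvss": 7.5},
    {"host": "203.0.113.10", "issue": "TLS certificate expiring soon", "cvss": 2.6},
    {"host": "203.0.113.20", "issue": "Verbose server banner", "cvss": 0.0},
]

ASV_FAIL_THRESHOLD = 4.0  # an ASV scan fails on any finding scored 4.0 or higher

failing = [f for f in findings if f["cvss"] >= ASV_FAIL_THRESHOLD]
written_off = [f for f in findings if f["cvss"] < ASV_FAIL_THRESHOLD]

print("Scan result:", "FAIL" if failing else "PASS")
# The point of the paragraph above: the "written_off" list still matters,
# because low-scored findings can be chained together in an attack.
```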

Then we have the limitations of the tools themselves.  Their vulnerability checks are constructed for a general audience, not a specific network.  As a result, depending on security measures and configurations, a check for, say, an HTTP or FTP vulnerability might not register because of a load balancer or another device in the communication path, yet with a minor tweak the same vulnerability could actually be exploited.  That was another point of doing the second scan inside the DMZ back in the “old” days.  Comparing the two scans let you see that, while HTTP or FTP looked clean from the outside, the underlying service was in fact vulnerable because the vulnerability showed up on the internal DMZ scan.  As a result, you could take additional steps to ensure that the vulnerability could not be exploited from the Internet.
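
A minimal sketch of that two-scan correlation, again with made-up findings: anything that appears on the DMZ scan but not on the external scan is being masked by the perimeter rather than fixed, and is a candidate for mitigation or patching.

```python
# Hypothetical findings keyed by (host, issue) from the two scans.
external_scan = {("203.0.113.10", "FTP anonymous login")}
dmz_scan = {("203.0.113.10", "FTP anonymous login"),
            ("203.0.113.10", "Outdated Apache httpd"),
            ("203.0.113.20", "Weak SSH ciphers")}

masked = dmz_scan - external_scan    # hidden by firewalls or load balancers, not fixed
exposed = external_scan & dmz_scan   # reachable from the Internet right now

print("Masked (address or mitigate):", masked)
print("Exposed (fix immediately):", exposed)
```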

The bottom line in all of this is that ASV scans have become a pointless piece of the compliance assessment process.  It is not that the ASVs are bad; it is that the process they are forced to follow no longer does what it did at the beginning.  It does not help that merchants and service providers treat ASV scans as a “check the box” activity and believe that because they do something quarterly they are secure.

If the process is broken, what should be done to fix it?  That is the subject of my next post.




