Archive for March, 2015

31 Mar 15

PCI Council Calls It Quits

Citing the fact that the payment channels are now secure, the PCI Security Standards Council announced in a press release this morning that it is closing down effective June 30, 2015.

[Letter: PCI SSC Disbanding, April 1, 2015]

A large sigh of relief was heard worldwide upon the announcement.

Have a happy April 1st, otherwise known as April Fools’ Day.

27 Mar 15

PCI SWOT Analysis

SWOT – strengths, weaknesses, opportunities and threats

I had someone ask me about my thoughts on this sort of analysis of the PCI DSS. While these comments are PCI focused, I found that they actually apply to all security frameworks.

Strengths

The biggest strength of any security framework, the PCI DSS included, is that they are all based on the “best practices” of a wide variety of leading experts and organizations. Essentially, security frameworks are the shared knowledge base of what it takes to have basic security. We talk today about sharing breach information better and potentially in near real time, but security frameworks are the original method of sharing such information.

Weaknesses

Unfortunately, I see a number of weaknesses with security frameworks.

The largest weakness I see with security frameworks is that most people, including a lot of security professionals, seem to believe that complying with the framework is all it takes to be secure. With the PCI DSS, a lot of this misinformation can be laid at the feet of the card brands. It was the card brands that originally marketed the PCI DSS as the “be-all, end-all” for securing the payment process.

The unfortunate fact of life for security frameworks is that they only minimize and manage security risks, they rarely ever eliminate them. Therefore, even following the PCI DSS to the letter is no guarantee that an organization could not be breached. Yet this concept of risk minimization, risk management and the fact that security is not perfect consistently gets missed by executives. So when the inevitable breach occurs, executives go after the security people for supposedly misleading them.

Another area of weakness is the time it takes to update a framework. In October 2014, the National Institute of Standards and Technology (NIST) issued a bulletin on secure sockets layer (SSL) indicating that a flaw had been found in the protocol and that NIST no longer considered the protocol secure. A few weeks later the Internet was introduced to POODLE and SSL was declared insecure. It took the PCI SSC a few months to react and officially declare that SSL could no longer be relied upon for secure communications. It took vulnerability scanners almost a month to begin flagging SSL implementations as high-severity vulnerabilities because the CVE had not yet been updated. And we were recently informed that it will be April at the earliest before we get the latest version of the PCI DSS. In the meantime, none of this administrivia stopped attackers from using POODLE to their advantage.

The final weakness I see with security frameworks is that organizations find it impossible to execute them consistently at near 100%, 24×7. In theory, the PCI DSS will provide reasonable security against all but the most dedicated attacks, such as an advanced persistent threat (APT). For an organization to achieve basic security, it would have to execute the requirements of the PCI DSS at 95% or better and would have to remediate any issues within a few days. Unfortunately, as we have seen in the recently released Merchant Acquirer Committee study, merchants are typically only compliant with the PCI DSS between 39% and 64% of the time – far from 95%+. Verizon’s recently released PCI report backs this up with its findings. The bottom line is that most organizations lack the discipline to execute any security framework consistently enough to achieve basic information security.

Opportunities

The biggest opportunity I see in the PCI DSS is that it gives organizations the impetus to simplify their environments. The biggest reason organizations fail to execute the PCI DSS consistently is that a lot of them have technology environments that mimic a Rube Goldberg cartoon. Only by simplifying that environment will an organization have a reasonable chance of securing it.

Another opportunity is the reason it gives organizations to enhance their security operations. Most organizations run bare-bones security operations, no different than other areas of the business. However, what PCI compliance assessments typically point out is that those security operations are grossly understaffed and not capable of ensuring the organization maintains its compliance at that 95%+ level.

Related to these two opportunities is what the PCI SSC calls business as usual (BAU). BAU is the embedding of the relevant PCI requirements into an organization’s business processes to make it easier to identify non-compliance as soon as possible so that the non-compliance situation can be rectified. BAU is primarily designed to address the execution weakness but can also have a significant effect on the other weaknesses.

Finally, the last opportunity is to address the failings of an organization’s security awareness program. Organizations eventually come to the realization that all it takes to defeat all of their expensive security technology is human error. The only way to address human error is extensive security awareness training. No one likes this, but in the end it is the only defense that remains once you have implemented all of the requisite security technology.

Threats

The obvious threat that will never go away is the attackers. As long as we have our interconnected and networked world, attackers will continue their attacks.

The final threat is complacency. A lot of organizations think that once they achieve PCI compliance their work is done, and that could not be further from the truth. Security is a journey, not something you achieve and then move on from. The reason is that no organization is static, so security must constantly evolve and change to address organizational change.

There are likely even more items that could be added to each of these categories. However, in my humble opinion, these are the key points.

20 Mar 15

You Make The Rules

At last year’s Community Meeting, there were a couple of instances where members of the PCI SSC reminded organizations that it is up to them to set the parameters of their PCI assessment. A lot of people in attendance took those comments to mean it is up to merchants and service providers to define the scope of their assessment, but it goes further than that.

For years organizations have complained that they receive varying advice from different QSAs, even when the QSAs are from the same firm. Obviously this situation is frustrating not only for merchants and service providers, but for the QSAs as well.

To address this situation, the Council is telling all PCI stakeholders that it is up to the organizations being assessed to define the rules of the assessment – not just the scope, but also the level of risk the organization is willing to accept. So what does that mean? That is what I intend to clarify in this post.

And to be extra clear, this is not some excuse to create a set of rules that allows you to skate by. You must show your work and document your rationale. If your QSA has honest concerns about your work, then expect some pushback and expect your acquiring bank to be brought into the discussion. If your acquiring bank agrees with your rules, get that agreement in writing and everyone should move on. But if your acquiring bank agrees with your QSA, then expect to make changes to your rules.

Scoping

Scoping is the responsibility of the organization being assessed, not your bank’s or your QSA’s responsibility. This requirement is even explicitly called out in the PCI DSS on page 10, second paragraph, second sentence.

“At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

The first step in defining scope is to define the rules of how to scope. This is the toughest part of scoping.

Complain about it all you want, but the Open PCI Scoping Toolkit is a good framework to start the discussion about how to scope a PCI assessment based on the risk presented by devices. My first recommendation is to stick with the three categories. In my experience, organizations that create more categories just end up creating more confusion and consternation. However, there is no way to go with fewer categories without putting your entire network in scope.

Categories 1 and 3 are not the ones in question. My personal opinion is that having two sub-categories in category 1 is silly. Devices and systems that directly process, store or transmit cardholder data (CHD), or that are in use within or define the cardholder data environment (CDE), are category 1 regardless. Category 3 devices/systems are those that never, ever come into contact with the CDE. These two categories are clear cut and pretty straightforward.

Where the arguments occur, or will occur, most often is over the category 2 devices and systems. You can accept the four sub-categories defined in the toolkit or come up with your own. If you define your own sub-categories for category 2, the key point to remember is that category 2 devices/systems have direct or indirect influence over the category 1 devices/systems because they have access in some way to the CDE. The sub-categories are used to define the level of risk or threat the category 2 devices/systems represent to the category 1 devices/systems based on the type of access they have to the CDE.

The difficulty with setting the category 2 sub-categories is that everyone has their own risk tolerance, and it is those differences in tolerance that create the problems. Security personnel tend to be more conservative because it is their butt on the line if something bad happens. The further people get away from security, the more risk tolerant they seem to get, because they do not have a complete understanding or appreciation of the minutiae.

Experience says that however you define category 2 devices/systems, do it as simply and clearly as possible. I would highly recommend you keep the number of sub-categories to a minimum and use examples for each sub-category so that readers understand where devices/systems fall. The key outcome of this effort is a formal document, similar to the Open PCI Scoping Toolkit, that defines your categories, documents the rationale for those categories, provides examples for each category and is approved by executive management.

Once you have defined your scoping categorization rules, it becomes an exercise in categorizing your inventory of networks, devices and systems against your criteria. Do not be surprised if, during this process, networks, devices and systems you thought were out of scope suddenly come into scope. This is not unusual, because prior to this point you were just taking a scientific wild ass guess (SWAG) at what was in scope.
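As a sketch of what that exercise can look like, assuming deliberately simplified criteria (the attribute names and category tests below are illustrative, not taken from the toolkit):

```python
# Hypothetical sketch of applying scoping criteria to an asset inventory.
# The attributes (handles_chd, connects_to_cde) are stand-ins for whatever
# tests your approved scoping document actually defines.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    handles_chd: bool       # stores, processes or transmits CHD
    connects_to_cde: bool   # has some access path into the CDE

def categorize(asset: Asset) -> int:
    """Assign a category in the style of the Open PCI Scoping Toolkit."""
    if asset.handles_chd:
        return 1  # in or defining the CDE
    if asset.connects_to_cde:
        return 2  # influences the CDE directly or indirectly
    return 3      # never comes into contact with the CDE

inventory = [
    Asset("pos-terminal-01", handles_chd=True,  connects_to_cde=True),
    Asset("ad-domain-ctrl",  handles_chd=False, connects_to_cde=True),
    Asset("marketing-wiki",  handles_chd=False, connects_to_cde=False),
]

for a in inventory:
    print(a.name, "-> category", categorize(a))
```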

Risk Assessment

Once you have your scoping categories set, you need to roll that methodology into your risk assessment so that you can properly assess your risk. By doing so most organizations find that their risk assessment shows more devices/systems at higher risk because they are now in-scope for PCI compliance.

Take your scoping categories and convert them to weights for evaluating risks. For example, category 1 devices/systems would carry the highest risk weighting available. Category 3 devices/systems would carry the lowest risk weighting allowed. Category 2 systems would carry risk weights somewhere between the highest and the lowest based on how you have defined your category 2 sub-categories.
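A minimal sketch of that conversion, assuming an invented 1-to-10 weighting scale and made-up category 2 sub-categories (none of these values come from the toolkit or the PCI DSS):

```python
# Illustrative only: the scale, sub-category labels and weights are assumptions.

CATEGORY_WEIGHTS = {
    "1":  10,  # in the CDE: the highest risk weighting available
    "2a": 8,   # e.g., direct administrative access to the CDE
    "2b": 6,   # e.g., indirect or limited access to the CDE
    "3":  1,   # never touches the CDE: the lowest risk weighting allowed
}

def risk_score(category: str, likelihood: float) -> float:
    """Combine the scoping weight with a 0.0-1.0 threat likelihood estimate."""
    return CATEGORY_WEIGHTS[category] * likelihood

print(risk_score("2a", 0.4))  # 3.2
print(risk_score("3", 0.4))   # 0.4
```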

Define Your Terminology

This is very straightforward, but it is usually missed by most organizations. Organizations need to define their terminology – in particular, what the organization means by ‘significant change’, ‘period’ and ‘periodically’. I wrote a post on this very topic a while back, so I will not bore you here with that discussion. These are very important definitions that must be set.
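As a sketch, here is one way to pin those terms down as concrete, documented definitions; the values are purely illustrative, not prescriptive:

```python
# Hypothetical glossary entries. Your definitions should be formally
# documented and approved, not buried in code; this just shows the idea
# of replacing vague terms with concrete, testable ones.

DEFINITIONS = {
    "significant change": ("Any change that alters CHD flows, adds or removes "
                           "a CDE system, or modifies a control protecting the CDE."),
    "period": "A calendar quarter, unless a specific requirement states otherwise.",
    "periodically": "At least quarterly for in-scope reviews and inspections.",
}

for term, definition in DEFINITIONS.items():
    print(f"{term}: {definition}")
```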

However, there are likely other terms that should be defined for your QSA and anyone else without intimate knowledge of your technology environment. My personal pet peeve is the lack of definitions for acronyms that are commonly used within your organization but might be easily misunderstood by anyone else. It never ceases to amaze me when people inadvertently treat an outsider as an “idiot” by speaking in acronyms when the outsider has no clue as to what was said.

My favorite example of this was a person who kept referring to the ‘HSM’ during our interview. Given this was a PCI assessment, my assumption was that they were referring to a hardware security module, but the way they used ‘HSM’ seemed to be in the wrong context. So I asked, and they confirmed my suspicion: they had been referring to a custom application called high-level system messaging. Had it been a mainframe environment, HSM could have been a reference to hierarchical storage management. This is why a glossary of terms and acronyms is a good thing to build, not just for PCI but for any newcomer to your environment.

Discuss This With Your QSA

Finally, do not keep your QSA in the dark as you work through this process. As you create your documentation and classify your devices and systems, run it all by your QSA to get their buy-in before they start your assessment. Most organizations will not run into much pushback from their QSA unless they are trying to set the bar too low in a vain attempt to make the PCI compliance process too easy, i.e., checking a box. The last thing you should do is spring this on your QSA the day they start your assessment. That includes any updates or changes to your rules between assessments.

All of this effort should result in a much more straightforward assessment because you have defined the rules and criteria against which you are to be assessed.

14 Mar 15

The 2015 Verizon PCI Report

A lot has been written about this year’s Verizon PCI Compliance Report, particularly about the finding that 80% of organizations cannot maintain their compliance. At the very end of the report, Verizon raises a number of issues regarding why maintaining compliance is so difficult for most organizations. It is those issues that I would like to discuss.

Scale and Complexity of Requirements

“I just don’t understand why this ERP upgrade is going to take 18 months to complete. Can’t we just put the DVD in the drive and upgrade it like Microsoft Office?” – Anonymous Executive to IT Management

The same could be said about any security framework. If organizations are struggling with PCI compliance, imagine how they are struggling with HIPAA, FISMA or ISO 27K compliance. Compliance with any of the security frameworks is not easy.

I disagree with Verizon’s claim that this is because most organizations do not know the PCI DSS. After six years and three versions, I rarely run into an organization today that does not have a basic, overall understanding of the PCI DSS. These organizations may have some interesting ideas about what certain sections and requirements of the DSS mean, but they have definitely studied and read it. In my experience, the idea that organizations are ignorant on the subject is far from the truth.

In my opinion, the real problem is that most organizations have not truly managed their technology environments, thanks to the disruptions of mergers and acquisitions, partially implemented applications, bring your own device (BYOD), the Cloud and the plethora of other complications organizations face. Today, IT is a very important part of any organization, but it is not managed like it was in the “good old days”. There are too many stakeholders, and the consumerization of technology has not helped the situation by making everyone an IT “expert”.

Most organizations’ IT operations these days are a hodge-podge of technologies, applications and networks. I would equate it to the technological equivalent of a house’s attic and garage combined. We all know we should clean and straighten them out, but that project always sits on the back burner because there are other, more important or more fun things to do.

As a result, for most organizations, there is just no easy way to simplify, segregate and isolate cardholder data (CHD) and comply with the PCI DSS without making the environment even more complex. Starting over is not an option for a lot of organizations.

That said, I have encountered a few very brave organizations that have done just that: started over. Management at these organizations came to the realization that fixing the problem was too complex and expensive and that starting over was the cheaper, safer and easier way to go.

Uncertainty about Scope and Impact

“I don’t know much about PCI, but I do know my scope.” – Anonymous Manager to QSA

When application developers cannot explain how their applications work on a technical level. When anyone in any department can be in the IT business. When security personnel are order takers for firewall configuration changes reviewed and approved by managers who have no clue as to the implications of those changes. When network people provide a communications utility but have no idea how traffic actually traverses their network.

Is it any wonder we have no idea how to scope a PCI assessment?

But there are larger reasons why scoping is difficult. The root cause of why scoping is such an issue is that everyone’s risk tolerance is different. I drive race cars at obscene speeds, on race tracks (mostly), which I am sure a lot of people would view as insane. However, I think people who skydive and rock climb are the insane ones. All of this points to everyone accepting and avoiding risk based on their own views.

There is a sidebar in the Verizon report calling on the PCI SSC to provide guidance about scoping. Good luck with that. The Council had a scoping SIG a number of years ago that imploded due to the aforementioned differences in risk tolerance. The result was a small band of people from the SIG who published the Open PCI Scoping Toolkit. The toolkit is not perfect, but it provides a framework for an intelligent discussion about how to go about scoping and determining what is in scope and why.

The key to solving the scoping issue resides with the organization, not its QSA, acquiring bank or any other external entity. Organizations need to use the Open PCI Scoping Toolkit to come up with their scoping framework and definitions. Once that has been agreed, the organization needs to map out its applications and networks to determine its true scope. This is where tools from vendors such as Tufin, FireMon, SolarWinds and the like can help by documenting the network and then simulating data flows over it.
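As a toy illustration of what such tools do under the hood (the hostnames and connections below are made up): model the network as a graph, then test which systems have a communications path into the CDE.

```python
# Minimal reachability sketch: a system with any path to a CDE node is
# at least category 2; a system with no path is category 3.

from collections import deque

edges = {
    "workstation-17": ["corp-lan"],
    "corp-lan":       ["jump-server"],
    "jump-server":    ["cde-firewall"],
    "cde-firewall":   ["pos-server"],   # pos-server sits in the CDE
    "marketing-wiki": ["dmz"],
    "dmz":            [],
}

def reaches_cde(start: str, cde_nodes: set) -> bool:
    """Breadth-first search: can traffic from `start` reach any CDE node?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node in cde_nodes:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reaches_cde("workstation-17", {"pos-server"}))  # True  -> in scope
print(reaches_cde("marketing-wiki", {"pos-server"}))  # False -> out of scope
```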

With that approach, it is incumbent on QSAs and other auditors to accept these definitions for their assessment unless there is some significant or gross error in the organization’s definitions. This will address the complaint organizations have with QSAs. How often have we heard something like, “The last QSA told us this was compliant”? If we all play by the same risk definitions the client has provided, then statements like that should go away.

Once an organization truly understands and has defined its scope, it can then understand the impact of existing operations and any changes.

The Compliance Cycle

This is what the Council is attempting to address with business as usual (BAU). The idea is that with security practices and monitoring embedded within an organization’s operations, security issues can be quickly identified and addressed before they become serious.

However, for this to work, organizations need to know their scope as well as how their IT environment actually works. Without that knowledge, embedding the PCI DSS into the organization is a futile exercise.

Lack of Resources

Every organization is running “lean and mean” these days. Cost control is king. As a result, resources are stretched, sometimes to the point that any additional activities just cannot be accommodated without hiring someone. And hiring is not allowed. So implementing BAU is not going to go well if it goes at all.

On the information security front, finding qualified people is nearly impossible, even for consultancies. Organizations are finding that most information security professionals head to consultancies because the pay is better. Since security is hard on both the mind and the body, most people want to be compensated as well as possible for their efforts. As a result, most organizations cannot pay for in-house security resources. And even if they do ante up, the person who takes the position typically either gets bored once they fix everything or gets frustrated when the organization refuses to make the changes required to ensure or enhance security.

Enter the managed security services provider or MSSP. The concept is that the MSSP provides the security talent at a more reasonable price yet organizations get the quality personnel needed to enhance and stabilize their security.

Where this goes wrong is when the MSSP and the customer are not on the same page as to each other’s responsibilities. This stems from a mixture of salespeople overpromising and prospective customers hearing what they want to hear. Never mind that it is all documented in a contract.

To address this situation, the PCI SSC has come up with a new requirement, 12.8.5, whose testing procedure states:

“Verify the entity maintains information about which PCI DSS requirements are managed by each service provider, and which are managed by the entity.”

Under the v3 Attestation of Compliance (AOC) form, this will not be as big a problem for an organization to maintain. However, if an organization has a lot of service providers and/or the service providers have v2 AOCs, this could be a very daunting task.
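A minimal sketch of what maintaining that information might look like; the providers and requirement assignments below are hypothetical:

```python
# Hypothetical responsibility matrix for requirement 12.8.5: who manages
# which PCI DSS requirements, the entity or a service provider.

RESPONSIBILITY_MATRIX = {
    "Req 3 (stored CHD)":            "Entity",
    "Req 9 (physical security)":     "Hosting provider (per their AOC)",
    "Req 11.2 (external scans)":     "ASV - Acme Scanning Inc. (hypothetical)",
    "Req 12.10 (incident response)": "Shared: entity leads, MSSP assists",
}

for requirement, owner in sorted(RESPONSIBILITY_MATRIX.items()):
    print(f"{requirement}: {owner}")
```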

Lack of Insight in Existing Business Processes

“I’ve only been in this position for [2, 3 or 4] months. So I’m not fully up to speed on everything we do.” – Anonymous Manager to QSA

“I’d give you an organization chart, but it would be out of date by the time I printed it.” – Anonymous Human Resources Manager to QSA

In today’s fast-changing business world, people get shuffled between departments and divisions faster than the changes can be managed. As a result, finding anyone with real insight into an organization’s business processes can be extremely difficult, if not impossible.

Then we go back to my earlier comment about the lack of IT management. With the advent of the Cloud, some business divisions and departments have totally sidestepped the formal IT organization and set up their own operations in the Cloud. Did they know what they were doing? No! But that was beside the point; they at least now have IT solutions, never mind whether they are secure or implemented properly. The only way to find these rogue operations is to quiz everyone in the organization about how they operate and what they use to operate.

Even then, I have run into situations where a new payment channel pops out of the woodwork at the last moment. That becomes next year’s assessment issue; otherwise, we would never get the current assessment out the door.

Misplaced Confidence in Existing Information Security Maturity

A lot of organizations that have been doing IT for years and years get caught in this trap. Just because you have been doing IT for an eternity does not mean that you have been doing it right for the same amount of time or that you are doing it correctly now.

In a lot of IT organizations, it is an unfortunate fact of life that areas such as special projects, business continuity planning or information security were used as the “safe” places to put a former IT vice president or manager out to pasture so they could retire. It did not matter whether the individual could handle the job; it was a place to park someone and provide a gentle way out of the organization.

A rare few individuals made the transition and actually took up the challenge of mastering their new responsibilities. However, the vast majority just checked out, collected their paycheck and then retired. This left the organization with a very immature security operation compared to the rest of IT’s operations. Add in the changing landscape of IT, with business divisions and departments doing their own thing unbeknownst to anyone, and you can see how the maturity of information security could easily be misjudged.

Then along comes the QSA to do the PCI gap analysis, and it all comes to a head as the organization gets the rude awakening that all is not as good as it thought and that significant gaps exist. To add insult to injury, the organization finds that fixing the gaps is going to take a lot longer than the 90 days it had set aside for that activity so that it could get its Report On Compliance (ROC) done in the same year.

The Verizon report is a great read and provides a lot of insights. Everyone should get a copy and read it, take it to heart and address your organization’s security shortcomings.

07 Mar 15

An Audit Versus An Assessment

A lot of people call their PCI assessment an audit. However, any certified public accountant (CPA) would tell them that there is a vast difference between the two.

An assessment is defined as:

“… to measure something or calculate a value for it. Although the process of producing an assessment may involve an audit by an independent professional, its purpose is to provide a measurement rather than to express an opinion about the fairness of statements or quality of performance.”

The key point of difference between an audit and an assessment is the “opinion”. While people would argue that a QSA is judging them PCI compliant, judging is not the same as offering an opinion. The reason is that a PCI assessment is done as of a point in time, not over a period of time. Yes, there are some tests in the PCI assessment process, such as change management and vulnerability scanning, that cover a period of time. However, the bulk of the testing for PCI compliance occurs at a given point in time, most often the time of the assessment. Such limited testing does not provide the basis for opining on any security program.

An audit is defined as:

“Audits provide third party assurance to various stakeholders that the subject matter is free from material misstatement.”

As an example, a financial audit comprises testing and sampling that is performed over the audit period, typically a period of one year.  In addition, an auditor must conduct testing such that they can provide reasonable assurance that there are no material misstatements during the audit period.

The first important phrase is “reasonable assurance” and it is defined as:

“Acknowledgment that it is not possible to assert absolutely and certainly that an event will (or will not) occur.”

Going back to our financial audit example, what reasonable assurance points out is that it is impossible for a financial auditor to essentially redo all of the work performed by an organization’s accounting staff to prove that all of the transactions performed over the audit period were processed exactly as they should have been.  As a result, an auditor creates tests of processes and controls and then generates sample sizes based on the risk and the number of transactions performed throughout the audit period such that it is likely the procedures will identify any errors or omissions.  If the testing of those samples does not result in any errors or omissions being discovered, then the auditor believes that there is reasonable assurance that there are no material misstatements.  If errors or omissions are found, then the auditor must increase their sample size to determine if the errors or omissions are systemic in nature (i.e., the process/controls are broken) or if they are true mistakes.  The bottom line about reasonable assurance is that everyone (client, auditor, auditor’s certification body) agrees that if processes/controls are broken, the auditor’s procedures for detecting those breakdowns are sufficient to identify them.
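To make the sampling idea concrete, here is a toy sketch; the base sample sizes are invented for illustration and do not come from any auditing standard:

```python
# Toy heuristic only: real sample sizes come from the auditor's methodology
# and professional judgment, not a three-line function.

def sample_size(population: int, risk: str) -> int:
    """Pick more samples for riskier processes, capped at the population size."""
    base = {"low": 25, "medium": 45, "high": 60}[risk]
    return min(population, base)

print(sample_size(12000, "high"))  # 60
print(sample_size(30, "high"))     # 30: test the whole population
```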

And now we get to what we mean by “material”.  Materiality is defined as:

“Information is material if its omission or misstatement could influence the economic decision of users taken on the basis of the financial statements. Materiality depends on the size of the item or error judged in the particular circumstances of its omission or misstatement. Thus, materiality provides a threshold or cut-off point rather than being a primary qualitative characteristic which information must have if it is to be useful.”

Materiality is a judgment call by the auditor based on an examination of risk and whether that risk could result in a misstatement of facts in the financial reports. Years ago, we were working with a large client. We relied on their external financial auditor’s Sarbanes-Oxley (SOX) audit of user management and access controls for the point of sale (POS) systems to satisfy some of the PCI requirements 7 and 8 testing. However, two years in, the external financial auditor deemed the controls surrounding the POS systems no longer material to the financial audit and stopped testing them. As a result, we were left having to assess the user management and access controls ourselves.

At this point, I am sure a lot of you are wondering: other than getting everyone to stop calling PCI assessments “audits”, what is my point?

Business as usual (BAU) is going to change how PCI assessments are performed. Since organizations will have been required to embed controls and monitoring into their business processes, the PCI assessment will likely change into a true audit. The reason is that BAU will require record keeping that allows a QSA to test for exception conditions against PCI requirements and to confirm that exceptions were corrected and how quickly.
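As a sketch of what that record keeping might look like (the controls, dates and seven-day remediation target are all made up):

```python
# Hypothetical BAU exception log: each record captures when a control fell
# out of compliance and when it was fixed, so time-to-remediate is testable.

from datetime import date

exceptions = [
    {"control": "Daily log review",     "detected": date(2015, 3, 2),
     "remediated": date(2015, 3, 4)},
    {"control": "Firewall rule review", "detected": date(2015, 3, 9),
     "remediated": date(2015, 3, 20)},
]

for e in exceptions:
    days = (e["remediated"] - e["detected"]).days
    flag = "  <-- exceeded the 7-day target" if days > 7 else ""
    print(f'{e["control"]}: remediated in {days} days{flag}')
```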

While I know a lot of organizations will complain about this sort of process, this is how a proper information security program should work in the first place. Information security controls and monitoring should be embedded into all relevant processes in an organization. Business management and information security should monitor and measure the controls and, when an out-of-compliance condition occurs, take the appropriate actions to either bring the controls back into compliance or update/change the controls to reflect changing conditions.

In rare situations, an organization might find that a control is no longer required because changes have made the control obsolete.  This is typically the case when an organization introduces new application software or new network architecture and the control environment wholly changes and controls end up as inadequate, monitoring for the wrong condition(s) or in the wrong place.

BAU is not a penalty; it is an approach to keep an organization on its security “game” by embedding controls and monitoring into the relevant business processes.  By doing so an organization then has a mechanism in place to maintain its information security compliance as close to 100% as is humanly possible.

But that will be the rub. This approach will likely see a lot of organizations discover that staying compliant is nearly impossible because of the constant out-of-compliance situations that will be brought to light. The side benefit of BAU will be to demonstrate just how important security training for all personnel is, and that the biggest cause of security issues is not security technology but human error. BAU statistics will provide the focus for the security training needed to address shortcomings. In theory, that training should minimize the security issues caused by human mistakes and make an organization’s security posture all that much better.

Implementing BAU will take time.  It is also not a silver bullet.  Like its financial audit brethren, errors and omissions can still occur under BAU, but they are more likely to be caught and addressed before they can spin out of control.

01 Mar 15

What Is A Level 3 Merchant?

This keeps coming up as an issue because of the confusing definitions on the Visa, MasterCard and Discover Web sites.

“Merchants processing 20,000 to 1 million Visa e-commerce transactions annually”

“Any merchant with more than 20,000 combined MasterCard and Maestro e-commerce transactions annually but less than or equal to one million total combined MasterCard and Maestro e-commerce transactions annually”

“All merchants processing between 20,000 and 1 million card-not-present only transactions annually on the Discover network”

In my opinion, the reason for the confusion is that the definitions only mention eCommerce or card-not-present (CNP) payment transactions and no other payment channels. As a result, people think that other payment channels do not count for Level 3 merchants, or that Level 3 merchants only do business through eCommerce or CNP payment transactions.

I have even encountered merchants that argue they are exempt from PCI compliance because, while they process more than 20,000 eCommerce or CNP payment transactions, they also process payment transactions through other payment channels and, in total, have fewer than 1 million payment transactions. Some people will argue any point to avoid PCI compliance.

So if this is not true, exactly what is a Level 3 merchant?

Based on training and from discussions with the card brands over the years, Level 3 merchants have 20,000 or more eCommerce or CNP payment transactions, but cannot exceed 999,999 payment transactions from all payment channels combined.

As examples:

  • A pure eCommerce merchant with no other payment channels can conduct up to 999,999 payment transactions through their Web site and be considered a Level 3 merchant.
  • A merchant with 20,000 or more eCommerce or CNP payment transactions that also has one or more of the following: brick and mortar, mail order, telephone order or other payment channels, cannot exceed 999,999 payment transactions from all of its payment channels to be considered a Level 3 merchant.

If an organization exceeds a total of 999,999 payment transactions across all of its payment channels, it is, by definition, classified as a Level 2 merchant. If the merchant has fewer than 20,000 eCommerce or CNP payment transactions, it would be classified as a Level 4 merchant.
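Encoded as a function, that reading of the definitions looks something like this (the 6 million transaction Level 1 threshold comes from the brands’ published criteria, and your acquiring bank always makes the final level determination):

```python
# A sketch of the merchant level rules as described in this post.

def merchant_level(ecom_cnp_txns: int, total_txns: int) -> int:
    if total_txns >= 6_000_000:
        return 1  # brand-defined Level 1 threshold
    if total_txns >= 1_000_000:
        return 2  # exceeds 999,999 across all channels
    if ecom_cnp_txns >= 20_000:
        return 3  # 20,000+ eCommerce/CNP, under 1 million total
    return 4

print(merchant_level(ecom_cnp_txns=50_000, total_txns=400_000))    # 3
print(merchant_level(ecom_cnp_txns=50_000, total_txns=1_200_000))  # 2
print(merchant_level(ecom_cnp_txns=15_000, total_txns=900_000))    # 4
```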

Hopefully we all now understand the definition of a Level 3 merchant.



