Posts Tagged ‘Consistency’

29 Aug 14

Is The PCI DSS Effective?

Brandon Williams has a great blog post on his site that answers this question.

The bottom line is that there is no organization that is going to execute the PCI DSS, or any security framework for that matter, 100% of the time, all day, every day.

Why? 

Security is NOT perfect. 

Why? 

Because it involves human beings and we are flawed.

However, that does not mean you should not try to be as close to 100% flawless as possible, because the difference between an organization that is breached and one that is not can be only a percentage point.

For all of you in the United States, have a safe holiday weekend.

27 Jul 14

The Dilemma Of PCI Scoping – Part 1

Based on the email comments of late, there are apparently a lot of you out there who really do not like the Open PCI Scoping Toolkit.  I am not sure exactly which post mentioning the Toolkit got you all wound up, but I have definitely hit a nerve.  From the comments in these messages, it is painfully obvious that the reason the SIG failed was that none of us are in agreement about how much risk we are willing to accept.  And that is why no two PCI assessments are ever the same: organizations, and even QSAs from the same QSAC, can have different levels of risk tolerance.

I, too, have to admit that I think the Toolkit needs some work, but it is the best framework we have to start a discussion on the topic.  And that is the problem, the topic.  Until the Toolkit appeared, scoping discussions had no real framework and everyone had their own definitions.  And as I have pointed out before, while there are a lot of people out there that might not know the nuances of the PCI DSS, it seems that everyone “knows” what is in scope and what is out of scope.

As a result, QSAs have found out through the “School of Hard Knocks” that everyone has their own view of scope, and there was no good guide to explain how or why to draw the line, let alone discuss the topic civilly in some cases.  I view the Toolkit as the bare minimum.  If an organization wants to get even more restrictive and have more categories, great, that is their prerogative.  However, if they want to go less than the Toolkit, in my very humble opinion, they can do it without me.  The bottom line is, regardless of whether you are using the Toolkit or have your own approach, document the definitions of your categories and provide examples so that everyone can understand your rationale, and then discuss the impacts on your organization’s PCI scope.  Without such a document, we are not going to have productive discussions on scope.  That is why I lean toward the Toolkit: it gives me a starting point for a productive discussion.

We seem to all be able to agree on the Category 1 and 3 systems, because those are clear and easy to identify.  Category 1 systems are always in the cardholder data environment (CDE) because they directly process, store or transmit cardholder data or define the CDE and are therefore always in-scope.  Category 3 systems never, ever process, store or transmit cardholder data and are therefore always out of scope.

It’s those pesky Category 2 systems (the ones that connect in some way to/from the CDE) that get everyone’s undies in a bunch.  The group that developed the Toolkit did their best to break them out in ways that made sense but were still understandable and easy to use.  The more that I have thought about it, the more I think they came up with the best compromise. In my opinion, if you start adding any more categories or sub-categories to the existing definitions you will lose almost everyone due to complexity, including security people.  However, I also don’t believe that simplifying Category 2 is an answer either.
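
To make the category definitions concrete, here is a minimal sketch, in Python, of how the three categories could be captured as a simple classification.  The function, the attribute names and the example systems are hypothetical illustrations of my own, not anything defined by the Toolkit itself.

def categorize(system):
    """Return 1, 2 or 3 for a system described by simple boolean flags."""
    if system["handles_chd"] or system["defines_cde"]:
        return 1  # Category 1: in the CDE, always in scope
    if system["connects_to_cde"]:
        return 2  # Category 2: connects to/from the CDE, in scope for assessment
    return 3      # Category 3: no CHD and no CDE connectivity, out of scope

inventory = [
    {"name": "payment-switch", "handles_chd": True,  "defines_cde": False, "connects_to_cde": True},
    {"name": "cde-firewall",   "handles_chd": False, "defines_cde": True,  "connects_to_cde": True},
    {"name": "patch-server",   "handles_chd": False, "defines_cde": False, "connects_to_cde": True},
    {"name": "hr-intranet",    "handles_chd": False, "defines_cde": False, "connects_to_cde": False},
]

for s in inventory:
    print(s["name"], "-> Category", categorize(s))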

But if the discussion about Category 2 is tough, the fact that the Toolkit allows for Category 3 systems to exist on networks with Category 2 systems sends some security purists right over a cliff.  Their rationale is that Category 3 systems could be easily attacked and therefore provide a beachhead for compromising Category 2 systems.  While this is true, the idea of totally isolating Category 2 systems is not realistic for most organizations because of the ramifications of such a decision.

Why Isolation Is Not An Option

Security purists seem to think isolation of the CDE is the answer.  From an outsourcing perspective, that would provide isolation.  But in my experience, even outsourcing is not as isolated as one would think.  Here is why I think that isolation does not work whether doing it internally or through outsourcing.

Isolation means physically and logically separate directory systems with no trust relationships between the isolated environment and normal production.  I have seen all sorts of technical gymnastics used to secure directory systems inside the CDE, but they still leave too many holes in firewalls so that the trust relationship can exist.  If you are truly serious about isolation, then you need true isolation, and that means physically and logically separate directory systems.  This also means duplicating credential management and introducing the possibility of errors when provisioning accounts.

The idea of leveraging your existing solutions for network and application management must be rethought as well.  This means separate security event and information management (SEIM) solutions, separate network management and monitoring, separate application management and monitoring, etc.  I think you get the idea.

Of course separate firewalls, routers, switches, intrusion detection/prevention, load balancers and other infrastructure are also going to be needed.  If you use RADIUS or TACACS+ for authentication, you will have to have separate systems for authentication to the infrastructure as well.  You will also need separate DNS and DHCP servers if you intend to provide those services inside the CDE.  Of course all of this duplicated infrastructure adds to the likelihood that mistakes will be made in configuration changes that could result in a breach of that isolation.

There is no “out of band” or virtual terminal access into your pristine isolated environment.  So you will need to provide operations and network personnel with one set of PCs for access to the isolated environment and another, physically separate, set for your organization’s normal work environment.  Internal users with access to cardholder data (CHD) will also be required to have physically separate PCs for accessing the CDE.  This will also mean securing network switches inside the CDE by using MAC filtering or “sticky” MAC approaches to ensure that only the PCs that should have access to the CDE do have access.  And of course wireless networking is totally out of the question.
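
On the MAC filtering point, one crude way to watch for unapproved devices on CDE switch ports is to compare the addresses the switch has learned against an approved list.  This is only a rough sketch under assumed inputs; the file name, its format and the allowlist are hypothetical and not a prescribed control.

# Hypothetical sketch: flag MAC addresses seen on CDE switch ports that are
# not on the approved allowlist.  The input file format is an assumption.

approved = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}  # PCs allowed in the CDE

# One learned MAC address per line, e.g. exported from the switch's MAC table.
with open("cde_switch_mac_table.txt") as f:
    learned = {line.strip().lower() for line in f if line.strip()}

for mac in sorted(learned - approved):
    print("ALERT: unapproved device on the CDE segment:", mac)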

But wait, you will also have to invest in some sort of portable media solution so that you can get data from the isolated environment to the “normal” production environment and vice versa.  No connected databases or application integration because that will require holes into and out of the isolated environment.  This is where outsourcing for isolation also comes up short.  But without application and data integration, the economies of scale shrink almost exponentially as more and more data must be manually moved between environments.  This drives the cost of isolation almost through the roof and typically makes isolation too expensive for most organizations.

Various government entities have all tried this approach with mixed results as far as breaches are concerned.  So in practice, the isolation approach will still leave your organization with some amount of risk that must be managed.

So if isolation is not the answer what is the answer?  In Part 2 I’ll discuss what I think works.

08 Nov 09

Credit Card Terminals And PCI Compliance

Here is a point of confusion that even I do not completely understand.  Mainly because I do not understand why there is any confusion to begin with.  I am writing about this because the PCI SSC and the card brands need to provide guidance on what applies in regards to credit card terminals and PCI compliance.  The credit card terminal industry also needs to wake up and get on board with security before they end up in the PCI compliance dog house.

There seems to be a huge disconnect between the various standards and how they apply to credit card terminals.  In a thread on the SPSP Forum, there have been discussions regarding the fact that credit card terminals are required to meet the PCI DSS standard.  Yet I have seen terminals that store primary account numbers (PAN) unencrypted and violate other PCI DSS and PA-DSS requirements.  If you ask the terminal vendors, they claim that the only standard they need to worry about is the PCI PTS.  Hello?

Requirement 3.4 of the PCI DSS is the most troubling of the lot: the storing of PANs unencrypted.  I have seen numerous terminals that store PANs unencrypted.  Press the vendors on this issue and they come back with the following.

  • The PANs can only be displayed one at a time.
  • You have to be in administration mode to view the PANs.
  • The PANs cannot be printed out.
  • The PANs are stored in memory, not on a hard drive.
  • The PANs are cleared when the end-of-day (EOD) process is run.

In a couple of instances of which I am aware, the terminal vendor has told everyone that the terminals that are storing PANs will be fixed by August 2010, but not sooner.

Okay.  So you will rely on a compensating control to meet requirement 3.4.  In my opinion, none of those aforementioned bullets are sufficient to meet the requirements of a compensating control.  Big deal that the PANs can only be displayed one at a time.  The fact that you need to be in administrative mode is nothing, as most of these devices only have two modes, end user and administrative.  And to run EOD or do anything else, you need to be in administrative mode.  Storage is storage, memory or otherwise.  Logging of access to these devices is not available.  None of these conditions rises to the level of going above and beyond, so a compensating control is not even possible.
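
For reference, requirement 3.4 is about rendering stored PANs unreadable through methods such as truncation, one-way hashing or strong cryptography.  The sketch below, in Python, illustrates masking for display and truncation and hashing for storage; it is purely illustrative and is not taken from any terminal vendor’s code.

import hashlib

def mask_pan(pan):
    """Display format: show only the first six and last four digits."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def truncate_pan(pan):
    """Storage format: keep no more than the first six and last four digits."""
    return pan[:6] + pan[-4:]

def hash_pan(pan, salt):
    """One-way hash of the full PAN (salt management is out of scope here)."""
    return hashlib.sha256(salt + pan.encode()).hexdigest()

pan = "4111111111111111"                 # test number, not a real card
print(mask_pan(pan))                     # 411111******1111
print(truncate_pan(pan))                 # 4111111111
print(hash_pan(pan, b"example-salt"))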

Then there is compliance with the PA-DSS.  This is a really sore spot with terminal vendors.  They claim that the PA-DSS does not apply to them and point to the following on page vii of the PA-DSS standard.

“Hardware terminals with resident payment applications (also called dumb POS terminals or standalone POS terminals) do not need to undergo a PA-DSS review if all of the following are true:

  • The terminal has no connections to any of the merchant’s systems or networks;
  • The terminal connects only to the acquirer or processor;
  • The payment application vendor provides secure remote 1) updates, 2) troubleshooting, 3) access and 4) maintenance; and
  • The following are never stored after authorization: the full contents of any track from the magnetic stripe (that is on the back of a card, in a chip, or elsewhere), card-validation code or value (three- or four-digit number printed on front or back of payment card), PIN or encrypted PIN block.”

First, I do not believe there is such a thing as a “dumb” credit card terminal any more.  They all have memory and software and, in most cases, have complete software development kits for application development using languages such as Java, C++ and the like.  In some cases, these terminals are as powerful as a netbook.  Yet, somehow the PCI SSC and the card brands have missed this point.

Most of these devices have only one ‘secure’ account.  And that account is shared with every support person around.  Anyone remember PCI DSS requirement 8.5.8 regarding shared accounts?  Whoops!

Then there is that first bullet, regarding the terminal having NO connection to any of the merchant’s systems or networks, which is where I run into the most problems.  We see a lot of these credit card terminals with serial or USB connections to POS solutions.  In most cases, the terminal is only retrieving the amount of the purchase from the POS solution and telling the POS solution that the transaction has been approved or declined.  But there are also a lot of instances where the data flows from the terminal through the POS to and from the processor.  And that does not even count the number of terminals that are connected to LANs for access to the processor.

The “rub” in all of this is that the software that drives these terminals is the same regardless of whether they connect to a POS solution or network.  Talk to any software engineer from any terminal vendor and they will tell you that the underlying software for each family of terminals is the same, regardless of the options used or installed.  So, if the terminals are not connected to a POS system, we can ignore the fact that these terminals are not PA-DSS compliant.  But if the terminals are connected to the POS, then all of a sudden, they need to be PA-DSS compliant.  What kind of nonsense is that?  In my opinion, they need to comply with the PA-DSS regardless, as this is cardholder data processing software.

So, where are we in all of this?

Is the software application in the terminal PA-DSS certified?  No!

Is it supposed to be certified?  Yes!

And the vendors’ responses?  You are misinterpreting the standard.

Pardon?  Exactly where have I misinterpreted the standard?

It’s BS like this that allows people to point at the PCI standards and say they are inconsistent and stupid.  Well, I hate to say it, but in this situation, it is inconsistent and a bit stupid.  All of you at the PCI SSC, the card brands and terminal vendors – get a clue before this becomes the next big exposure point.

22 Aug 09

PCI’s “Scarlet Letter”

I will forewarn you all now.  This is a rant.  I want my soapbox and my minute to get this out of my system.

Imagine you are a student and, at the beginning of the semester, you are told that for this class you will be required to write a paper on a topic of your choosing, but that you must cover a series of 100 points given to you by the teacher.  So you go and do your research, write a paper that covers the 100 points and hand in your paper for grading.  You get your paper back and find that you failed because you followed an old version of the 100 points, your topic was not an approved topic, and you were supposed to write your paper in the “Queen’s” English.  Sound farfetched?  Well, that is exactly what the PCI SSC is doing to the QSACs in the name of quality assurance.  All of those QSACs you see with a status of “In Remediation”, in my very humble opinion, got short shrift in how they were assessed under the PCI SSC’s QA program.

Now do not get me wrong.  I am all for quality assurance processes and making sure those organizations are following the stated rules.  However, the key phrase is “stated rules” and that is where things have gone terribly wrong.  Back in March or April of this year, the PCI SSC sent out to the QSACs the ‘grading scale’ that would be used to assess reports issued by the QSACs under the PCI SSC’s new quality assurance program.  This was the first time anyone outside of the PCI SSC had seen this document.  At that time, we were told that our reports would be assessed against these rules for the QA program.  That is fine except that the reports being assessed were generated prior to the issuance of the grading scale.  Not only that, but the grading scale was developed against v1.2 of the PCI DSS, and a lot of the reports being assessed were against v1.1 or even v1.0.  So, what do you think the chances are of passing?  I would say slim to none.  Based on an informal poll of those that have been through this year’s QA process, not one QSAC has passed.  Boy, there is a shock – it was impossible to pass.

I understand the PCI SSC wants to ensure that its QSACs are doing their work and that the work is supported by documentation.  Any QSAC that goes through the QA process after the grading scale was published will have the benefit of knowing what they will be assessed against.  Those organizations in the first wave were blind to the assessment rules.  The only way to make this fair would be to assess all QSACs’ reports for the time period prior to the issuance of the rules, but the chances of that happening are none.  And if that was not bad enough, they are only reviewing the reports, not the QSACs’ work papers that support the work.  In another change this year, the PCI SSC has forced all QSACs to put legal language in their proposals so that the PCI SSC can have access to a QSAC’s work papers.  Prior to this, it seems that the PCI SSC did not have the legal right to see the work papers.  I am not sure how you assess something without one of the key components, but apparently, it makes sense in their world.

A lot of QSACs’ reports have been reviewed by the card brands themselves and have been accepted.  Now, all of a sudden, there is a new Sheriff in town, and they are determined to make things “right.”  However, let us make the process fair.  Review QSAC reports and make comments regarding how we can make things better, but give people 12 months after the grading scale is released before you start holding them accountable.  It is hard to justify penalizing someone with a grading scale you only just created and distributed, for a standard that may or may not apply to the reports you are reviewing, when those reports were written well before the grading scale you are now gauging them against even existed.

So, for those QSACs that are in the first wave of the QA program, my sympathies.  Your entry on the QSA list will be flagged in RED, the PCI SSC’s version of the ‘Scarlet Letter’.  It will be interesting to see those QSACs that will be spared this indignity by having their QA review in later years so that they have time to address the grading scale.  If you do not believe there are favorites, look again.  Time and again, certain QSACs seem to dodge bullets that catch others.  It is not what you know, it is whom you know.

Oh, and to add insult to injury.  At our recertification training in April, we were told by one of the PCI SSC’s QA people that we should develop a report template around the grading scale so that we were always certain to pass the QA process.  While such an approach makes it easy for the graders to assess reports, it kind of defeats the purpose of the QA program in my book.

09 May 09

Changes Coming To A QSA Near You

As I stated in an earlier post, I had my recertification training back around the first part of April in Chicago.  One of the outcomes of this training was identification of a number of work program areas that are being significantly changed or expanded under v1.2 of the PCI DSS.  The purpose of this post is to give all my readers a ‘heads up’ on these changes as they have the potential for significantly increasing the amount of time it takes to get your Report On Compliance completed and, hence, likely increase the cost.

There is a reason for these changes to the procedures.  The PCI SSC is trying to improve consistency between QSAs as well as to obtain more accurate and reliable results from the information provided by the QSA in the Report On Compliance.  Consistency is probably one of the biggest complaints regarding QSAs, and this is just one of the ways the PCI SSC is attempting to address it.

First, we were told point blank that we would have to document and justify our sampling strategy.  We had heard for years stories of QSAs taking ridiculously small samples, such as two retail locations out of a 1,500-store retail chain, or one Windows, one Linux and one Unix server out of 2,000 total servers.  That is around 1/10th of one percent and, no matter how good your procedures are or how consistently you configure your systems, a QSA cannot justify these as good samples.  In addition, some QSAs were sampling the same locations or servers year after year, which has never been allowed, but without the PCI SSC QA program there was no way to know that this sort of thing was going on.
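
As an illustration of what a documented, defensible sampling strategy might look like, here is a small Python sketch that draws a repeatable random sample of locations at a stated percentage with a minimum floor.  The percentage, the floor and the seed handling are my own assumptions, not anything prescribed by the PCI SSC.

import random

def sample_locations(locations, pct=0.10, minimum=25, seed=None):
    """Select a documented, reproducible sample: pct of the population,
    but never fewer than `minimum` (capped at the whole population)."""
    size = max(minimum, int(len(locations) * pct))
    size = min(size, len(locations))
    rng = random.Random(seed)          # fixed seed -> repeatable selection
    return sorted(rng.sample(locations, size))

stores = [f"store-{n:04d}" for n in range(1, 1501)]   # a 1,500-store chain
print(sample_locations(stores, pct=0.10, minimum=25, seed=2014)[:10])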

One option to keep QSA sampling to a minimum is to get the client’s internal audit function to include PCI coverage in their retail outlet audit program.  Most large retailers conduct periodic on-site internal audits of their retail locations.  These audits are typically conducted on a three-year rotating schedule, so every retail location gets reviewed at least every three years, and potentially more often if issues are identified during a review.  We have added a number of basic PCI procedures to our large merchant audit programs to cover those PCI requirements that are relevant out in the retail locations.  We then review the results and conduct our own testing based on a much smaller sample size.  If issues are uncovered in our sampling, we then increase our sample size.

Another area that is being expanded is infrastructure configuration reviews.  Short of demanding that automated analysis tools be used, we were told that the analysis of firewall, router, switch and other infrastructure device configurations needs to be much, much more robust and detailed.  Essentially, QSAs are being held responsible for ensuring that the analysis of the network is complete enough to give reasonable assurance that all of the relevant network security requirements can be answered from the detailed analysis of the devices’ configuration files.  We were also told that using tools such as Cisco’s Output Interpreter is not going to be acceptable.  The PCI SSC and card brands want a very detailed analysis of the network conducted to ensure that the PCI in-scope network is truly secure and controlled.  Based on my understanding of the results they are looking for, there is no way a manual assessment will meet their requirements unless the network is very simple.  As a result, you can expect QSAs to start using a variety of network configuration analysis tools such as Secure Passage’s FireMon or Athena Security’s FirePac and Verify.  Depending on the number of devices involved, this sort of analysis can be expensive.  If your network is consistently configured to specific standards, your QSA should be able to pick a sample of devices and still obtain an end-to-end analysis of your network and, therefore, get detailed answers.  If your network is not consistent, then this analysis is going to end up being expensive.
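
The commercial tools mentioned go far deeper than this, but as a rough idea of what automated configuration analysis looks for, here is a minimal Python sketch that scans an exported rule base for obviously permissive entries.  The file name, rule syntax and keywords are assumptions for illustration; this is not how FireMon or FirePac actually work.

# Hypothetical sketch: flag overly permissive rules in an exported firewall
# rule base (one rule per line).  The syntax checked here is an assumption.

RISKY_TOKENS = ("permit ip any any", "permit tcp any any", "permit udp any any")

def review_ruleset(path):
    findings = []
    with open(path) as f:
        for number, line in enumerate(f, start=1):
            rule = " ".join(line.lower().split())
            if any(token in rule for token in RISKY_TOKENS):
                findings.append((number, rule))
    return findings

for number, rule in review_ruleset("cde_firewall_rules.txt"):
    print(f"line {number}: overly permissive rule -> {rule}")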

We were told of a number of QSAs that are under probation for issues with their Reports On Compliance uncovered during their QA review.  We were also given the grading scale that the PCI SSC uses to assess Reports On Compliance.  This grading scale is essentially the Report On Compliance with a rating and weighting scale for each requirement.  Any QSA company not scoring at least 85% on their reports is placed on probation for at least three months.  While under probation, the QSA company must submit every Report On Compliance to the PCI SSC for review before it can be issued.  Any QSA company scoring less than 75% can be removed as a QSA company by the PCI SSC.  I can tell you that my firm is going through its QA review and we are all on pins and needles.
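
To put those thresholds in perspective, the scoring amounts to a weighted percentage compared against the 85% and 75% marks.  The weights and per-requirement points below are invented for illustration only, since the actual grading scale is not public.

# Hypothetical illustration of a weighted report score against the stated
# 85% (probation) and 75% (removal) thresholds.  Weights are invented.

def weighted_score(items):
    """items: list of (points_awarded, points_possible, weight) tuples."""
    earned = sum(awarded * weight for awarded, _, weight in items)
    possible = sum(possible * weight for _, possible, weight in items)
    return 100.0 * earned / possible

report = [(4, 5, 3), (5, 5, 1), (2, 5, 2)]    # e.g. per-requirement scores
score = weighted_score(report)
print(f"score: {score:.1f}%")
if score < 75:
    print("below 75%: the QSAC can be removed")
elif score < 85:
    print("below 85%: the QSAC is placed on probation")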

Finally, there is a change coming in all QSAs’ proposals and engagement/project legal agreements.  For this year’s QA reviews, all Reports On Compliance must be redacted to ensure client confidentiality is maintained.  This can be a very painful process for a QSA company that issues a lot of Reports On Compliance.  The change is to require a statement in all engagement/project legal agreements that allows the PCI SSC to have access to Reports On Compliance for the purposes of the QA review process.  This will effectively create a non-disclosure agreement between the PCI SSC and the QSA company’s clients.

The times they are a changing.




