17 Sep 14

How Many Auditors Does It Take …

The title of this post sounds like the start of one of those bad jokes about changing light bulbs.  But this is a serious issue for all organizations because, in today’s regulatory environment, it can be a free-for-all of audit after audit after assessment after assessment: a never-ending cascade of interruptions to business operations, all in the name of ensuring controls are designed and functioning properly.

Another reason I have written this post is the number of comments I have received that seem to paint my position as an argument that QSAs are not needed to conduct PCI DSS assessments.  I want to clarify for everyone the rationale behind my position.

Besides those reasons, the larger reason this issue needs to be brought up and discussed is that the PCI SSC is pushing for organizations to adopt business as usual (BAU).  For those of you that did not read the preamble of the PCI DSS v3, BAU is the integration of relevant portions of the PCI DSS into an organization’s everyday activities.  It is a noble goal and only a recommendation at this time, but one has to believe that BAU will at some point become part of the PCI DSS in a future version.

Any organization that takes the time to implement BAU is going to want to assess their implementation of BAU.  They will do this through internal/external audit activities, automated real-time monitoring via dashboards and other internal assessment processes.  Why bother with BAU if you are not going to use it to spot control issues before they become major problems?  That is, after all, the whole point of BAU.

Which brings me back to this year’s Community Meeting and the question I asked about reliance on other auditors’/assessors’ work.  The reason for the question was to minimize, as best we can, the disruptive effect of the myriad audits/assessments some organizations are required to undergo.  The answer provided by the Council was an emphatic “NO!” followed by some backtracking after the audience apparently showed the Council members on stage its displeasure with their take-it-or-leave-it answer.

The reason for the audience’s displeasure, though, is genuine.  A lot of organizations question how many times user management controls, such as identification of generic UIDs, last password change date, last logon date and the like, need to be tested before such activities are deemed adequate.  How many times do facilities people need to be interrupted to prove that video monitoring is performed and the video is retained?  How many times do facilities have to be visited and reviewed for physical access controls?  There are numerous areas where control assessment programs cover the same ground with varying levels of detail and focus.  It is these areas of commonality where the most pain is felt and we hear the lament, “Why do I have to keep covering this ground over and over with every new auditor that comes through?”

It is not as though the PCI DSS has cornered the market on control assessments.  Organizations have to comply with ISO, HIPAA, GLBA, FISMA, NIST and a whole host of other security and privacy control audits or assessments.  All of these audits/assessments share certain common controls for user management, physical security, facilities management, etc.  What differentiates the programs is the focus of what they are trying to protect.

One easy approach to address this situation is to combine audit/assessment meetings with personnel in physical security, facilities management, user management and the like.  Each auditor/assessor can ask their specific questions and gather evidence and conduct testing as they need.  Unfortunately, due to timing of reporting requirements, having common meetings might not always be possible.

Another approach would be to have internal auditors perform testing monthly, quarterly, etc. and then have the QSA review those results during the annual PCI assessment process.  There might be some independent testing required by the QSA for areas such as device configurations, change control and application development changes, but the sample sizes of any testing could be greatly reduced because of the testing done throughout the year under BAU.

If we as QSAs work with other auditors/assessors and agree on common criteria in our respective work programs that satisfy our common controls, then we will not have to interrupt an organization to ask the same questions and alienate people as we do today.

Compliance programs succeed when they are made as unobtrusive and automatic as possible.  BAU is a great idea, but it will only succeed if the Council understands how BAU will be implemented in the real world and then adjusts its compliance programs and assessment approach to take BAU into account.  The quickest way to kill BAU is to make it painful and cumbersome, which the Council is doing very effectively at the moment.

11 Sep 14

2014 North American PCI Community Meeting

Another year has come and gone and so has another PCI Community Meeting.  There were a number of interesting events at this year’s meeting.  Some I will cover here and some I still have to digest and determine what they really mean.

Goodbye Bob

This year’s meeting is the last one for the PCI SSC’s current General Manager, Bob Russo.  Over the years, Bob has been a good sport and has been a cowboy and other characters.  This year’s community meeting was no exception.  At Wednesday night’s networking event, Bob showed up as Gene Simmons’ brother decked out in silver colored platform boots, black tights, leopard spotted top, long black hair and doing his best to show off his tongue.

A lot of us over the years have pilloried Bob for various edicts and clarifications as the leader of the Council.  However, if we step back, Bob got the PCI SSC off the ground and took on the thankless task of combining the disparate security standards of the five card brands into the common set of standards we have today, and then asked us to do our best to ensure that those standards were followed.

Even though I have been critical at times of Bob, he has always been pleasant and cheerful to me and others at the community meetings and other events where he was present.  Bob recognized that there are always some of us in the crowd that are very passionate about security and tried to assist us in channeling that passion.

Bob stated that he will be doing a “Goodbye Tour” to the other community meetings this year, so make sure to thank him for his efforts, shake his hand and say your goodbyes at whatever meeting you are able to attend.

P2PE v2

The first versions of P2PE were lambasted as pointless, and the number of solutions certified, now at six, has somewhat proven that the newest of the PCI standards needed some work.  As a result, in November 2014 we will receive version 2 of the P2PE standard.  According to people I spoke with at the meeting who have seen the new version, the new standard should be much better. Is it perfect? No. But it is supposedly a better version than the originals.

The most notable change to the standard is the approach the Council has taken.  Based on the presentation made, they seem to be abandoning the complete end-to-end model and are moving to a component approach based on how the solution will be implemented.

But the huge change to the standard is that a certified P2PE solution can be managed by a merchant without a third party.  That is, merchants can manage the encryption keys.

It will be interesting to see just how much the standard has changed since its last iteration only a year ago.  But most of all, it will be interesting to see how the new implementation approaches will work.

SAQs

The biggest clarification to come out of the community meeting on SAQs is the Council’s and card brands’ endorsement of using multiple SAQs for documenting compliance with the PCI standard versus doing an SAQ D.

This situation occurs when a merchant has multiple payment channels such as with merchants that have retail stores using traditional card terminals (SAQ B or B-IP) and an eCommerce presence that is outsourced (SAQ A or A-EP).

The other area of discussion that seemed to cause a bit of a stir related to Web sites that use redirects or iFrames for payment processing.  The contention is the result of past claims by vendors of these sorts of payment solutions that their solutions placed a merchant’s eCommerce operation out of scope for PCI.

Ever since the issuance of the eCommerce information supplement in January 2013 and with the recent issuance by Visa of their eCommerce guidance, the outsourcing world has been buzzing about the implications.  Merchants of course have been going back to their eCommerce outsourcers and complaining about the fact that their eCommerce is no longer out of scope.

Reliance On Others’ Work

My final comment relates to a question I asked at the Open Forum session on Wednesday.  We have been getting push back from our larger clients on our limited use of their internal audit work, SSAE 16 reports, ISO 27K audits and similar work, if we used it at all.  The driver is that clients want to minimize the amount of disruption to their personnel from all of the audits and assessments that are occurring these days.  This prompted me to ask the question at the Open Forum as to the Council’s advice on relying on other auditors’ work to reduce sampling.

The answer I received was, “No, absolutely not.”  Quickly followed by, “Of course, I mean other auditors, not other QSAs and PA-QSAs.”

This blunt answer apparently shocked the audience, and the people on stage reacted to that shock as well.  They then backed off, saying that the Council would have to take the issue back and discuss it.

After asking this question I was approached by a number of people thanking me for bringing up the topic.  The bottom line is that organizations are audited and assessed out.  Most feel that as soon as one audit/assessment ends, another begins.  The truly annoying thing is that certain portions of all of these audits/assessments cover the same ground over and over and over again, such as physical security, access controls and end user management.  Handled properly, reliance on others’ work would not eliminate all testing, but it would definitely reduce the amount of testing and also reduce sample sizes.

But a very telling comment came from a member of the American Institute of Certified Public Accountants (AICPA), who told me that the AICPA has repeatedly tried to meet with the PCI SSC to discuss the SSAE 16 standard and how it could be used to reduce a QSA’s work, only to be rebuffed by the Council.

Organizations would be more willing to go through PCI assessments if work done by their internal auditors as well as outside auditors could be leveraged to simplify their lives, not complicate them.  This will only become more important as the Council pushes organizations to adopt business as usual (BAU).

If I had one important takeaway for the Council, it would be to work with other standards bodies such as the AICPA, ISO, FFIEC and the like toward providing guidance to organizations on how internal and external audit reports can be used.

29 Aug 14

Is The PCI DSS Effective?

Brandon Williams has a great blog post on his site that answers this question.

The bottom line is that there is no organization that is going to execute the PCI DSS, or any security framework for that matter, 100% of the time, all day, every day.

Why? 

Security is NOT perfect. 

Why? 

Because it involves human beings and we are flawed.

However, that does not mean you should not try to be as close to 100% flawless as possible, because the difference between an organization that is breached and one that is not can be only a percentage point.

For all of you in the United States, have a safe holiday weekend.

24 Aug 14

P2PE Versus E2EE

I have been encountering a lot of organizations that are confused about the difference between the PCI SSC’s point-to-point encryption (P2PE) certified solutions and end-to-end encryption (E2EE).  This is understandable as even those in the PCI community are confused as well.

E2EE is the generic terminology used by the IT industry to describe any solution that encrypts communications from one endpoint to another endpoint.  Key management of the encryption can be done by any party that has an endpoint such as a merchant or a service provider.  Examples of E2EE include IPSec, SSL and TLS.
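
To make the E2EE idea concrete, here is a minimal sketch of one of those examples, a TLS-protected connection, using Python’s standard library.  The processor host name and the payload handling are purely illustrative assumptions, not anything tied to a specific solution.

```python
import socket
import ssl

# Hypothetical processor endpoint used purely for illustration.
PROCESSOR_HOST = "payments.example.com"
PROCESSOR_PORT = 443

def send_over_tls(payload: bytes) -> bytes:
    """Send a payload to the processor over a TLS-encrypted channel."""
    context = ssl.create_default_context()  # validates the server certificate
    with socket.create_connection((PROCESSOR_HOST, PROCESSOR_PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=PROCESSOR_HOST) as tls:
            tls.sendall(payload)    # encrypted between the two endpoints
            return tls.recv(4096)   # response is likewise encrypted
```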

One of the most common E2EE solutions used by merchants is derived unique key per transaction (DUKPT), also known as “duck putt”.  DUKPT is commonly used in the convenience store and gas station industries to encrypt sensitive authentication data (SAD) from the gas pump to the merchant or processor.  DUKPT uses the 56-bit data encryption standard (DES) or triple DES (3DES) algorithms.  While 56-bit DES and 112-bit 3DES are no longer considered secure, DUKPT uses a unique key for every transaction, so every transaction has to be individually broken to gain access to the data.  While cloud computing could be leveraged to do this rapidly, the effort would cost more than the data retrieved is worth.  As a result, DUKPT is still considered a secure method of encryption.
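
The actual DUKPT key derivation is defined in ANSI X9.24 and is considerably more involved, but the following simplified sketch illustrates the property the paragraph above relies on: each transaction gets its own key derived from a base key and a counter, so breaking one key exposes only one transaction.  The HMAC-based derivation and the variable names here are illustrative stand-ins, not the real DUKPT algorithm.

```python
import hmac
import hashlib

def per_transaction_key(base_derivation_key: bytes, ksn_counter: int) -> bytes:
    """Derive a unique key for one transaction (illustrative stand-in for DUKPT).

    Real DUKPT (ANSI X9.24) uses DES/3DES and a key serial number (KSN);
    this sketch only demonstrates the unique-key-per-transaction property.
    """
    counter_bytes = ksn_counter.to_bytes(8, "big")
    return hmac.new(base_derivation_key, counter_bytes, hashlib.sha256).digest()

# Two different transactions never share a key, so compromising one key
# exposes only that single transaction's data.
bdk = b"base-derivation-key-loaded-in-hsm"   # hypothetical value
key_txn_1 = per_transaction_key(bdk, 1)
key_txn_2 = per_transaction_key(bdk, 2)
assert key_txn_1 != key_txn_2
```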

P2PE is a subset of E2EE.  The major difference between P2PE and E2EE is that P2PE does not allow the merchant to manage the encryption keys.  Under the P2PE standard, only the transaction processor or another third party is allowed to perform key management; the merchant never is.  As a result, DUKPT can be used by both P2PE and E2EE solutions, but under P2PE the key management must be done by a third party, not the merchant.

While third party key management is typically acceptable for small merchants, it does not work for merchants that switch their own transactions to various processors, as mid-sized and large merchants do.  That does not mean that E2EE solutions are not acceptable for reducing PCI scope.  As with PA-DSS certified applications, P2PE certified solutions can be accepted by a QSA as long as they are implemented according to the P2PE implementation guide, which can reduce the amount of testing a QSA is required to perform.  In my experience, though, the difference in testing effort between P2PE and E2EE is negligible, so any so-called savings are limited at best.

The huge downside to P2PE for merchants is that once you decide on a given P2PE solution, you are pretty much stuck with it and the processor providing it.  That is because most processors offering P2PE offer only one P2PE solution.  As a result, if a better deal comes along for processing your transactions, you will likely have to replace your terminals and possibly other equipment to switch to the new processor.  For some merchants, that could be a costly proposition that makes any switch not worth the effort.

So if your organization is looking at P2PE versus E2EE, I would not necessarily give an advantage to P2PE over E2EE.  Just because an E2EE solution is not P2PE certified does not mean it is not secure.  It only means that the vendor did not believe that the P2PE certification was worth the effort.

08 Aug 14

Requirement 10.6.2 Clarification

As a refresher, requirement 10.6.2 states:

“Review logs of all other system components periodically based on the organization’s policies and risk management strategy, as determined by the organization’s annual risk assessment.”

The argument in PCI circles is over the definition of “all other system components”.  Some of us believed it meant systems other than those in scope.  Others believed it had to refer only to in-scope systems, such as a user workstation.  As a result, I asked the PCI SSC to clarify this requirement, and this is the response I got back.

“In PCI DSS v2.0, logs for all in-scope systems were required to be reviewed daily. However it was recognized that for larger or more complex environments, there could be lower risk systems that were in scope for PCI DSS that could warrant less frequent log reviews. As such, PCI DSS v3.0 defines a number of events and system types that require daily log reviews, and allows the organization to determine the log review frequency for all other in-scope events and systems that do not fall into those categories.

For some environments, such as those designed specifically for the purposes of PCI DSS, then it is possible that all in-scope systems fall under the system categories defined in Requirement 10.6.1, meaning that daily log reviews are required for all in-scope systems. In other environments, there may be many different types of system that are considered in-scope, but which are not critical systems and neither store, process or transmit CHD nor provide security services to the CDE. Some possible examples could be stock- control or inventory-control systems, print servers (assuming there is no printing of CHD) or certain types of workstations. For these events or systems, the entity, as part of its annual risk assessment process, is expected to define the frequency for reviews based on the risk to its specific environment.

The intent of this update is not to apply PCI DSS Requirements to out-of-scope systems. We realize that the current wording is causing confusion and will address this in the next revision.”

So there we have it.  Not the first time my interpretation was wrong.  The requirement means that, for in-scope systems that an assessment of risk shows are at lower risk of compromise, the frequency of log reviews can be reduced.

But that means you need an accurate risk assessment to support your argument.  So those of you that have not explicitly assessed the risk of your Category 2 systems will have to break them out to support a reduced log review frequency.
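
As a rough illustration of how a risk assessment could drive log review frequency under requirement 10.6.2, here is a minimal sketch.  The risk scores, thresholds and frequencies are hypothetical; systems that fall under requirement 10.6.1 always stay at daily review.

```python
# Hypothetical mapping from an annual risk assessment to log review frequency.
# Systems covered by requirement 10.6.1 are always reviewed daily; only the
# "all other system components" of 10.6.2 get a risk-based frequency.

def review_frequency(system: dict) -> str:
    if system["covered_by_10_6_1"]:          # CDE systems, security services, etc.
        return "daily"
    if system["risk_score"] >= 7:            # thresholds are illustrative only
        return "daily"
    if system["risk_score"] >= 4:
        return "weekly"
    return "monthly"

inventory = [
    {"name": "print-server-01", "covered_by_10_6_1": False, "risk_score": 2},
    {"name": "inventory-app",   "covered_by_10_6_1": False, "risk_score": 5},
    {"name": "cde-db-01",       "covered_by_10_6_1": True,  "risk_score": 9},
]

for system in inventory:
    print(system["name"], review_frequency(system))
```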

08 Aug 14

The Dilemma Of PCI Scoping – Part 3

In Part 2 we discussed the criticality of a risk assessment and started implementing the framework by fixing monitoring and alerting so that we can properly manage the risk we will be accepting.  In this part I will deal with Category 2 and 3 systems and how to manage their risk.

One of the big problems I have with the security purists is their arguments over Category 3 systems.  Do not get me wrong, I understand the reasons for their statements.  However, to me their arguments seem to imply that Category 3 systems are somehow not secure, or not as secure as other systems, and therefore cannot be trusted.  This further implies that organizations should implement some sort of multi-tier security structure, which should not be the case.  Taking such an approach would create an operational and support nightmare, let alone increase the risk of compromise.

In today’s world, all devices need a certain basic level of security comprising, at a minimum, a hardened OS, minimized ports and services, access control, anti-virus and anti-malware (as appropriate and necessary), a personal firewall (if mobile), logging and other similar security measures as necessary.  This is something I like to refer to as “Security 101”.  If you are not doing Security 101 across the board, then you likely have bigger issues than just PCI compliance.  But if you are following a Security 101 approach, then all of the devices under your direct control should have a solid, basic level of security and should be able to be trusted when on your network.  As a result, with the proper logical access controls in place, you should be able to trust Category 3 systems coexisting with certain Category 2 systems.

Notice I said “certain Category 2 systems” not “all” Category 2 systems.

Category 2b systems that have inbound connectivity to the CDE are the most risky because of that direct inbound connectivity.  As such, should any Category 2b system become compromised, the CDE is also compromised by default.  Most Category 2b systems are gateways to the CDE, such as out-of-band management systems or “jump boxes” for network and system administration, and Citrix or other virtual desktop technologies used to grant business users access to CDE applications.

These gateways need to be configured such that, if they become compromised, they are immediately taken out of service until they can be rebuilt and/or reconfigured as secure.  This is typically accomplished not just with anti-virus, but also through file integrity monitoring, application whitelisting and other similar measures.  This needs to be monitored at the virtual desktop infrastructure level (i.e., Citrix, VMware, Terminal Services, etc. servers) as well as at the virtual desktop image level.  If a file that should not have changed does change in the virtual environment or virtual desktop image, access to the virtual desktop environment is suspended until the infrastructure or image can be recertified as secure.  That requires very tight monitoring of the critical files, but it should be relatively easy to do as long as the virtual desktop image is restricted to a single cardholder data (CHD) application.
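
As a sketch of the file integrity monitoring idea described above, the following compares hashes of a handful of critical files in a virtual desktop image against a stored baseline and flags any unexpected change.  The file paths and baseline format are assumptions; a real deployment would use a dedicated FIM or application whitelisting product.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical list of critical files in the virtual desktop image.
CRITICAL_FILES = [
    "/opt/vdi/image/bin/payment_client",
    "/opt/vdi/image/etc/launcher.conf",
]
BASELINE_PATH = Path("/var/lib/fim/baseline.json")

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_integrity() -> list:
    """Return the critical files whose hashes no longer match the baseline."""
    baseline = json.loads(BASELINE_PATH.read_text())
    return [f for f in CRITICAL_FILES if sha256_of(f) != baseline.get(f)]

changed = check_integrity()
if changed:
    # In the approach described above, access to the virtual desktop
    # environment would be suspended until the image is recertified.
    print("ALERT - unexpected change detected:", changed)
```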

The other thing that needs to happen with virtual desktops is that organizations must be extremely diligent about who has access to them.  This means very timely user management, something a lot of organizations are very lax about.  The PCI DSS states in requirement 8.1.4 that organizations must “Remove/disable inactive user accounts at least every 90 days.”  Ninety days is a very long time when you look at what attackers are able to do in such a timeframe.  As a result, organizations need to step up their game in this regard: when someone’s job function changes, a help desk ticket needs to be generated to change that person’s access.  A 90-day review process is fine, but if you are relying on it as a control, you are not being diligent enough in managing your users.
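
Here is a minimal sketch of the kind of inactive account review described above, flagging accounts whose last logon is older than a threshold well inside the 90-day ceiling of requirement 8.1.4.  The account data and the 30-day threshold are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical export of virtual desktop accounts with their last logon dates.
accounts = [
    {"user": "jsmith", "last_logon": datetime(2014, 8, 30)},
    {"user": "mdoe",   "last_logon": datetime(2014, 5, 2)},
]

# Tighter than the 90-day ceiling in requirement 8.1.4; pick what your
# risk assessment supports.
THRESHOLD = timedelta(days=30)

def stale_accounts(accounts, as_of=None):
    """Return users whose last logon is older than the threshold."""
    as_of = as_of or datetime.now()
    return [a["user"] for a in accounts if as_of - a["last_logon"] > THRESHOLD]

for user in stale_accounts(accounts, as_of=datetime(2014, 9, 17)):
    print(f"Disable or review access for {user}")
```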

But what about Category 2a systems, you might ask?  Category 2a systems are directory servers, authentication servers, DNS and DHCP servers, anti-virus master servers, system update servers and other network and system management/monitoring support systems.  These are systems to which only system or network administrators should ever have direct access.  They should be configured as “bastion” servers, security hardened so that they are not easily compromised by vulnerabilities.  Most vendors of these systems understand this and have set up their installation routines accordingly, restricting services, allowing only necessary applications, etc.  The bottom line with Category 2a systems is that, if any of them become compromised, it is game over, because the attacker has the privileges of an administrator.

I am not a fan of separate directory systems.  These systems double as the access control management system, and the more complicated access control becomes, the more likely it is to become less reliable and secure.  As such, I prefer a common access control system managed through an integrated directory service such as Microsoft’s Active Directory (AD).  Over the years, the port requirements for making AD work between domain controllers have gotten much better and easier to implement.  Plus, starting with Windows Server 2008 there is even the ability to limit and control the dynamic ports used.  As such, I recommend using a common domain approach rather than separate domains with trust relationships, as it makes management so much easier and more reliable.  Whether or not you choose to put domain controllers inside your CDE is up to you to decide.

However, there is a larger consideration here that needs to be put on the table.  Think about what the security purists are saying with their statements about Category 3 systems.  If an attacker can jump from a Category 3 system to owning a Category 2a system such as a directory server, DNS server, DHCP server, etc., what does that say about an organization’s security posture to begin with?  If an attacker gains control of a system/network administrator’s credentials, it is game over.  That is not a device failure; that is a social engineering failure.  As a result, you can have whatever separation controls you want, but in the end, if an administrator gets owned, what did all of that separation do for you?  Nothing.  The best we can do is to manage and mitigate the risks, monitor for any attempts to exploit those risks and deal with any attacks as quickly as possible.

At the end of the day, organizations need to assess the risk that their Category 2 systems pose to the CDE.  If it is higher than desired, those systems need to be put into the CDE.  The rest can share networks with other Category 2 and 3 systems if necessary.

There are obviously enhancements that you can make to further improve on this strategy.  But this is just what I would consider the minimum to get by which seems to be what a lot of you desire.

01 Aug 14

The Dilemma Of PCI Scoping – Part 2

In Part 1 I discussed how isolation is an answer, but probably not a viable one for all but the most security-conscious of organizations such as the military, defense contractors or those that can afford that sort of painstaking luxury.  So unless you consider everything in scope for PCI compliance, is there a viable way to reduce scope?

Before we get to that, though, we need a quick discussion on risk management, as the solution is predicated on the identification and management of risk.  If you cannot do an appropriate risk assessment, then the only choice you really have is to consider everything in scope, and I know the vast majority of you do not like that approach.

Assessing Risk

In order for my proposed solution to have a chance of working properly, an organization needs to understand its risks, what risks will be accepted and managed, and what it will take to mitigate the residual risks.  Doing a risk assessment is the way to do that, but most organizations avoid such an assessment for a variety of reasons.  The most common reasons I have heard are:

  • The risk assessment framework is too complex,
  • We tried this once before and never got any meaningful results,
  • We were never able to agree on the risks and their scores, or my personal favorite,
  • This spreadsheet is our risk assessment (it is not, but see Tom Benhave’s blog post on the topic as he does a much better job than I could explaining this topic).

The reason why risk is such a tough topic is that everyone has their own perspective on risk; good, bad or otherwise.  There are numerous articles and presentations on this phenomenon, but my favorite is from Lance Spitzner of SANS, who wrapped his around security awareness training and opens by describing why people are such poor judges of risk.  He uses various statistics regarding events that can happen in people’s lives to illustrate the point.  My personal favorite example of just such a statistic is that people have a greater chance of dating a supermodel than of winning the PowerBall lottery.  Granted, both have tremendous odds against them, but dating a supermodel is still significantly more likely than winning the PowerBall.

The bottom line is that, without a decent risk assessment, an organization has no way to know how much risk it is willing to accept and how it will manage that risk.  The Council has repeatedly said that PCI compliance is supposed to consider risk and take a “risk-based” approach.  The problem, however, is that we each have our own opinion of risk and what risks we are willing to take on.  At the end of the day, no matter what an organization does, there is going to be risk.  The question is, “Are these risks ones my organization is willing to take on?”  That question can only be answered by a risk assessment and an understanding of how risks can be managed and mitigated.

Which risks your organization chooses to accept and how it manages the remaining risks are up to your organization to decide.  This is why the PCI DSS and all other security frameworks require an annual risk assessment.  The risk assessment process provides a framework for an organization to document its risks, understand them (size, frequency of occurrence, costs, etc.), determine how they can be managed or mitigated, and then agree on which risks it will take on and how it will manage and/or mitigate them.
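
As a very simple sketch of that documentation step, the following scores each risk as likelihood times impact and records whether it will be accepted or mitigated.  The scales, threshold and example risks are hypothetical; an established framework such as NIST SP 800-30 or ISO 27005 defines its own scoring.

```python
# Hypothetical risk register entries: likelihood and impact on a 1-5 scale.
risks = [
    {"risk": "Stolen admin credentials",         "likelihood": 3, "impact": 5},
    {"risk": "Unpatched Category 2 workstation", "likelihood": 4, "impact": 3},
    {"risk": "Lost backup tape",                 "likelihood": 2, "impact": 4},
]

ACCEPTANCE_THRESHOLD = 9   # illustrative: scores above this must be mitigated

for entry in risks:
    entry["score"] = entry["likelihood"] * entry["impact"]
    entry["decision"] = "mitigate" if entry["score"] > ACCEPTANCE_THRESHOLD else "accept"
    print(f'{entry["risk"]}: score {entry["score"]} -> {entry["decision"]}')
```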

From here on we will assume that the organization has a valid risk assessment and that they are willing to take on the risks presented by the example I will discuss.

Managing Risk

Today’s integrated and connected world just does not lend itself to an isolationist approach due to the volume of information involved, the business efficiencies lost and/or the operational costs such an approach incurs.  As a result, organizations need to take a hybrid approach: heavily protecting some components while taking on and managing the risks inherent in that approach.

When it comes to the IT side of risk management and mitigation, most organizations rely on some form of near real-time monitoring through collected system/event log data and other sources to monitor their environment(s).  Unfortunately, where this approach comes up short is that there are too many alerts to follow up on, so alerts go unaddressed.  Almost every QSA can tell you about a discussion with operations personnel where the statement, “Oh, that’s a false positive alert, so I don’t have to worry about it” has been made.

This is the first problem you must address, and you must make sure this attitude never creeps back into the people who monitor alerts.  Anyone in operations who “knows” an alert is a false positive either needs re-education or your organization needs to seriously re-tune its alerting mechanism(s).  All you have to do is read the Target and Neiman Marcus press reports for examples of how bad things can get when personnel blow off alerts because they believe they are not accurate.

In my experience, a lot of these problems are the result of bad or incomplete implementations of these systems.  Unfortunately, there are a lot of people out there that think that these solutions are more like a Ronco Rotisserie Oven where, as they famously say in the ads, “you can set it and forget it.”  Yes these solutions may be “appliances”, but that is where the comparison ends.

Security information and event management (SIEM) systems require fairly constant tuning and tweaking, beyond their own software and signature updates, to minimize false positive alerts in response to changes in an organization’s networks and systems.  Yet time and again, I encounter monitoring and alerting systems that were put in place years ago (typically to meet PCI compliance) and have not been adjusted/changed since, while all around them changes have been occurring that affect their operation.

When interviewing the people responsible for these systems, I hear statements such as, “Yeah, that alert started to appear when we implemented [name of change].  We were told to just ignore it.”  When asked why they have not tuned it out of the SIEM, you are told that they do not have time, they do not know how, they do not have the rights to do that or, my personal favorite, that the head of security or the security committee will not let them change it.

The reason this issue does not get addressed is that it has no visibility, since alerts are tucked away in the various monitoring tools.  So the best way to address this situation is to give it visibility by automatically feeding all alerts into the organization’s help desk system.  This gives all alerts immediate visibility by putting them into an automated tracking and escalation process.  It also allows triage and investigation activities to be documented and, based on the results of those activities, the alert to be assigned to the right people/groups to address it.
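
Here is a minimal sketch of that alert-to-ticket feed, assuming a help desk system that accepts new tickets through a simple REST endpoint.  The URL, payload fields and severity cutoff are hypothetical; the point is simply that every alert lands in a tracked, escalatable queue.

```python
import json
import urllib.request

# Hypothetical help desk API endpoint.
HELPDESK_URL = "https://helpdesk.example.com/api/tickets"

def open_ticket_for_alert(alert: dict) -> None:
    """Create a help desk ticket for a SIEM alert so it is tracked and escalated."""
    ticket = {
        "title": f'SIEM alert: {alert["rule"]}',
        "description": alert["message"],
        "priority": "high" if alert["severity"] >= 7 else "normal",
        "queue": "security-triage",
    }
    request = urllib.request.Request(
        HELPDESK_URL,
        data=json.dumps(ticket).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)

open_ticket_for_alert({"rule": "New local admin created", "message": "...", "severity": 8})
```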

“Whoa, let’s not get crazy here,” I am sure some of you are YELLING at the screen.  There is no doubt this is a very brave step to take, because it will potentially uncover things you probably did not want to advertise given the state of your existing alerting.  But that is typically only a short-term problem.  It may, unfortunately, be the only way to get the underlying tuning and tweaking of the alerting systems completed and constantly addressed.

But taking such a step is not entirely a bad thing, at least in the long run.  A side benefit is that it will focus the organization on triage activities for classifying the urgency of alerts.  Not all alerts need immediate action; a lot of them require immediate investigation and can then be put on a back burner.  It will also give visibility to the number of alerts being dealt with on a daily basis after triage.  That typically results in identifying and justifying why more staff are required to deal with the onslaught of alerts that need to be researched.

Another change organizations should make is adding a section to their change control form that indicates what impact a change will have on the SIEM environment.  At a minimum, these three basic questions need to be answered with regard to SIEM or other monitoring systems (see the sketch after this list).

  • Do new alerts need to be added and, if so, what do they need to monitor and what are the alerting thresholds?
  • Do existing alerts need to be modified and, if so, what modifications are needed?
  • Are there alerts that are no longer needed?
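
Here is a sketch of how those three questions could be captured as structured fields on a change record so the SIEM impact cannot be skipped; the field names and example values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SiemImpact:
    """SIEM/monitoring impact section of a hypothetical change control form."""
    new_alerts_needed: bool = False
    new_alert_details: str = ""        # what to monitor and the alerting thresholds
    alerts_to_modify: list = field(default_factory=list)
    alerts_to_retire: list = field(default_factory=list)

    def requires_siem_work(self) -> bool:
        return self.new_alerts_needed or bool(self.alerts_to_modify) or bool(self.alerts_to_retire)

change = SiemImpact(
    new_alerts_needed=True,
    new_alert_details="Alert on failed logins to the new jump box, threshold 5 in 10 minutes",
    alerts_to_retire=["Legacy VPN concentrator heartbeat"],
)
assert change.requires_siem_work()
```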

If you address these areas, you should have monitoring and alerting taken care of, with a built-in feedback loop to keep it that way.

In Part 3, I am going to wrap up my discussion on PCI scoping with a discussion of Category 2 and 3 systems.




Announcements

FishNet Security is looking for experienced QSAs for their PCI practice. If you are an experienced QSA and are looking for a change, go to the Web site (http://www.fishnetsecurity.com/company/careers), search for 'PCI' and apply.

If you are posting a comment, be patient, as the comments will not be published until they are approved.

If your organization has a PCI opportunity, is in need of assistance with a PCI issue or if you would like the PCI Guru to speak at your meeting, you can contact the PCI Guru at pciguru AT gmail DOT com.

I do allow vendors to post potential solutions in response to issues that I bring up in posts. However, the PCI Guru does not endorse any specific products, so "Caveat Emptor" - let the buyer beware. Also, if I feel that the response is too "sales-ee", I reserve the right to edit or not even authorize the response.
