Archive for December, 2010


The Harsh Reality Of Security

Chris Skinner has a blog entry that asks the question, “Why does the card securities council not care about card security?”  What concerns me is the title of the article as it again implies that the PCI standards do nothing to secure cardholder data.  As a result, I thought I would take a shot at answering this question.

Mr. Skinner points to a number of technologies that he feels the PCI SSC is ignoring as potential solutions to securing cardholder data.  These solutions include tokenization, end-to-end encryption (E2EE) and Chip & PIN (EMV).  I recently posted a blog entry on all of these technologies, so I will not go into all of these here.  The bottom line on all of these is that, individually, they do not solve the security problems we face.  However, used in conjunction, they will create a much more formidable barrier to breaches.  I can tell you that the Council is not ignoring these technologies; they are only doing proper research to ensure that whatever guidance they issue is not flawed resulting in a recall or wholesale rewriting of a standard.  Want to lose credibility?  Issue a standard that you have to later heavily modify or replace.  Do I have to remind everyone about Wired Equivalent Privacy (WEP)?

Then we have the dynamics of the card brands.  Just because Visa writes a whitepaper on some technology does not mean that the other four card brands have bought into Visa’s analysis.  Visa may be the 800 pound gorilla of the card brands, but as anyone in business knows, the 800 pound gorilla does not always get its way regardless of how boisterous or how much chest pounding it may do.  A prime example of this was in the late 1970s when IBM (then the 800 pound technology gorilla) tried to force Systems Network Architecture (SNA) down the International Organization for Standardization’s throat as the Open Systems Interconnection (OSI) model.  What happened was that the rest of the technology companies in the world banded together and created the OSI seven-layer model that we have today.  While it has a lot in common with SNA, it also has numerous differences.  The bottom line is that there are certain dynamics between the card brands that will preclude the Council from always following Visa’s lead, regardless of whether Visa’s analysis is right.

How about the cost of any change?  Merchants do not live on thick margins.  Most are lucky to retain 1% to 4% of total sales as their profit margin.  If you are Wal-Mart or Target, margins can be huge numerically, but still not enough to fund the kind of wholesale changes Mr. Skinner is suggesting.  Unfortunately, most merchants are nowhere near the size of Wal-Mart, so they need to be even more judicious with their expenditures.  As a result, any change that requires a significant investment is going to be tough for any merchant to swallow and will take time to get rolled out.  After all, we are in the midst of a recession, so there is even higher sensitivity to expenditures that do not enhance the bottom line.

But for a number of merchants, the cost is not so much theirs to bear as it is their merchant bank’s.  That is because a lot of merchant banks provide the entire cardholder processing environment to their merchants.  As a result, the bank will have to absorb the cost, and possibly increase fees, should new terminals or software be necessary.  Banks are not necessarily doing well either, so they too are avoiding any expenditures that are not going to positively influence the bottom line.  Since security is an intangible, banks are going to be very reluctant to spend on cardholder infrastructure that does not drive up revenue.  After all, in the United States and the United Kingdom, the banks were bailed out by their governments and are now being watched very closely by the various regulators, and the regulators are holding the purse strings.  Unless the regulators come on board, there will be no spending on what they will consider a frivolous expense such as new terminals or software.

All of these parties are intimately involved in the PCI Security Standards Council as stakeholders.  All of the card brands and a number of larger financial institutions are on the Council’s board and various work groups.  Given the economic environment and the predisposition of these parties, is it any wonder why the Council appears to not be moving forward?

Not to mention that the changes Mr. Skinner is suggesting do not eliminate the problem of security breaches; they just shift the risks.  Granted, the risks get reduced, but by how much is anyone’s guess.  In the end, there are still going to be risks.  As I always like to remind people, security is not perfect.  Yet that seems to be what the card brands and the Council want people to believe: that if everyone followed the PCI standards, breaches would not occur.  That is simply not true.  Breaches would still occur; they just would not necessarily occur every week, releasing thousands or millions of accounts.  It would be more like a release every month of tens or hundreds of accounts.

I too would like to live in a perfect world.  But the real world is always far from perfect.  Decisions get made only when the wheel is so squeaky that it needs to be replaced.  We can rant and rave all we want, but we will only get action when we can either show (i) a measurable business benefit, such as an increase in profit or improved efficiency, or (ii) someone else is doing it and they now have a competitive advantage.  Unfortunately, I see neither of these conditions satisfied at this time nor any time in the near future.  As long as the status quo remains, no one is going to move.

In the end, the PCI SSC does care about security.  It is the politics that slow things down and those politics are not going to go away any time soon.  That is the harsh reality of business and security.

UPDATE: Forrester Research has published a great white paper titled ‘How To Market Security To Gain Influence And Secure Budget’ that explains how to avoid the common mistakes that most security people commit and, as a result, why security professionals do not get the resources that they need to get the job done.


I Must Have Struck A Nerve

My last post on the PCI SSC backing off on certifying mobile payment applications sure got a lot of people in touch with me.  As a result, I would like to recap my discussions with them so that the rest of the readership can be up to speed on this topic.

Like the term “cloud computing,” “mobile payment” means a lot of different things to people.  For most people, a mobile payment refers to the use of a cell phone, smart phone or personal digital assistant as the credit/debit card.  However, for a number of my more progressive merchant clients, a mobile payment refers to the use of a mobile, wireless device as a cash register.  This is one of the reasons why I believe that the PCI SSC has pulled back on certifying mobile payment applications.  The definition is becoming too broad and confusing, creating too many issues to cover quickly.

Then there are the methods by which these mobile payments are conducted.  From the consumer side, mobile payments can be made through RFID, just like the contactless cards currently being deployed in the United States, as well as using Bluetooth or Wi-Fi.  From a merchant perspective, there are a number of large merchants that are rolling out smart phones and PDAs with software to process payments over Wi-Fi and cellular.  All of these communication methods have risks associated with them.

Then there are the devices themselves that are involved regardless of whether you have the consumer or merchant view.  When you talk of cellular devices such as cell phones and smart phones, you open a Pandora’s Box of operating environments from proprietary to Windows and a number of others in between.  PDAs offer some common operating environments with their cellular brethren, but also bring some OSes of their own to the table.  All of these operating environments have their own idiosyncrasies when it comes to security or lack thereof.

Add into the mix the variety of proprietary and open development environments for each platform.  Then there is how applications get distributed.  Apple started the application marketplace approach and all the other mobile OS vendors are following their lead.  All of this raises the question of who makes sure that payment applications are certified.  Is it the developer or the marketplace?  Does the marketplace need to make sure that payment applications have been certified before they are allowed to be pushed out?  It is issues such as these that need to be discussed and addressed before the PCI SSC can issue guidance.  And these issues surrounding distribution are not simple ones.

Ultimately we are heading towards a payment environment where there is no card in the traditional sense.  I truly believe that a software algorithm will be developed that will generate secure, single use “codes” that are used to conduct transactions between consumers and merchants.  This algorithm will be similar to the Advanced Encryption Standard (AES) and will be platform independent and therefore can be run on any “intelligent” device.
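To make the idea concrete, here is a rough sketch of how such single-use codes could be derived, using an HMAC construction in the spirit of the HOTP one-time password algorithm (RFC 4226) rather than AES.  The shared secret, the counter handling and the code length are all illustrative assumptions on my part, not anything the card brands have specified.

```python
import hashlib
import hmac
import struct

def one_time_code(secret: bytes, counter: int, digits: int = 8) -> str:
    """Derive a single-use numeric code from a shared secret and a
    per-transaction counter, in the style of HOTP (RFC 4226)."""
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha256).digest()
    offset = mac[-1] & 0x0F           # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Each transaction advances the counter, so a code captured by an
# eavesdropper is useless for the next purchase.
secret = b"shared-between-issuer-and-device"
print(one_time_code(secret, 1))
print(one_time_code(secret, 2))
```

The appeal of such a scheme is exactly what the paragraph above describes: the algorithm is platform independent, so any “intelligent” device that can compute an HMAC could generate the code.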

In the end, I am sure all of this led the PCI SSC to want to take a step back rather than blindly charge ahead, issue a standard and then have to repeal or greatly modify the standard because of knowledge gained later.  Such a step back, while inconvenient to the rush of technology, should produce a much more thoughtful result.  So let us all be patient and let the Council do their work and get it right rather than issue something that ultimately is severely flawed.


PCI SSC Nixes PA-DSS Certification For Mobile Payments Applications – For Now

In a not so widely disseminated and tough to find statement, the PCI SSC has basically put the kibosh on the certification of any mobile payment applications for the time being.  The second paragraph of the statement says it all.

“Until such time that it has completed a comprehensive examination of the mobile communications device and mobile payment application landscape, the Council will not approve or list mobile payment applications used by merchants to accept and process payment for goods and services as validated PA-DSS applications unless all requirements can be satisfied as stated.”

The statement indicates that the Council will be taking up the topic of how to certify these applications and addressing any changes to the PA-DSS program that may be required.

As I stated in a previous post, mobile payment processing, no matter how you define it, is not the easiest environment to secure.  It is interesting that the Council has seen the light and is also taking a careful approach to this environment.

Hopefully, we shall see next year if the Council comes up with a workable solution.


MasterCard SDP Revisited For Level 2 Merchants

I have had some clients contact me in the last couple of weeks regarding MasterCard’s Site Data Protection (SDP) program.  Specifically, what is MasterCard’s position regarding Level 2 merchants generating a Self Assessment Questionnaire (SAQ)?  All of these merchants intend to conduct their own SAQ but wanted to make sure that was still acceptable under the MasterCard SDP rules.  Last year there was a lot of confusion since MasterCard pulled back on their decision to require Level 2 merchants to have a QSA conduct an on-site PCI assessment.  However, I thought the new rules were straightforward and did not realize that there could be confusion until I started getting questions.

If you go to the MasterCard Web site and to the Merchant Levels Defined page, you will see the following information regarding Level 2 merchants.  Under the column “Onsite Assessment”, MasterCard states that it is under the merchant’s discretion with a reference to footnote number two.  Under the “Self Assessment” column, it says that the self assessment is required annually and also references the number two footnote.  Footnote two states the following.

“Effective 30 June 2011, Level 2 merchants that choose to complete an annual self-assessment questionnaire must ensure that staff engaged in the self-assessment attend PCI SSC-offered merchant training programs and pass any associated PCI SSC accreditation program annually in order to continue the option of self-assessment for compliance validation. Alternatively, Level 2 merchants may, at their own discretion, complete an annual onsite assessment conducted by a PCI SSC approved QSA rather than complete an annual self-assessment questionnaire.”

The key date to remember is June 30, 2011.  In my humble opinion and based on previous PCI pronouncements, if you can get your SAQ effort started before June 30, 2011, you can do it internally with any staff you choose.  However, from June 30, 2011 forward, Level 2 merchants that wish to do an SAQ must use personnel that have attended and passed the PCI SSC Internal Security Assessor (ISA) program annually.  To be safe, I would confirm your plans with your acquirer before you start just to ensure that you are not going to get in trouble when you submit your SAQ.

Where things get confusing is MasterCard’s use of the phrase “at their own discretion” in describing their alternative.  The reason I think this is confusing is that their alternative seems to be hardly discretionary.  As near as I can tell, if a merchant does not have their staff attend and pass the PCI SSC’s ISA training annually, then the merchant is required to have a QSA conduct an annual on-site assessment (code words for a Report On Compliance or ROC).  In my book, that seems hardly discretionary.  It is an either/or: you do it one way or you do it the other, but you are going to pick one.  I suppose MasterCard is trying to soften the blow by indicating that it is up to the discretion of the merchant which option they choose.

In the end, there is hardly a nice choice.  ISA training is not exactly cheap and, for most merchants, requires travel in addition to the cost of the training, let alone making sure that a merchant has staff capable of passing the ISA course and conducting such an assessment.  And that is an annual cost, not just a one-time expense.  I am also sure that ISAs will have to keep work papers and meet other requirements that QSAs are required to meet, so there is more work and cost.  Then there is the cost of a QSA conducting an on-site assessment, which is also not cheap.  It will all come down to the type of SAQ that the merchant would fill out.  However, I have to admit that most Level 2 merchants would end up with SAQ D, which is not exactly a small task to complete.  Which is why we have always referred to SAQ D as “ROC Lite.”

So it appears that Level 2 merchants that take MasterCard are damned if they do, and damned if they do not.  Pick your poison my friends.


Anatomy Of A Breach

People are always asking me why complying with the PCI standards is important as in, “What’s in it for my company?”  So I thought I would take a known, documented breach and walk through where PCI compliance would have made a difference.  And for those naysayers that point to the PCI DSS and say that compliance does not matter, I intend to show that compliance does lead to security.

The breach I am going to use is the Wal-Mart breach which was documented in an article in Wired magazine back in October 2009.  Wal-Mart has what most professionals would consider a robust control environment.  However, what this breach shows is that even with such an environment, a breach can still occur.  That is not to say that Wal-Mart did not make mistakes and it is those mistakes that I want to point out so that we can all learn.

For some background, the Wal-Mart breach occurred sometime between 2005 and November 2006, when it was discovered by Wal-Mart.  The good news, at least as far as Wal-Mart has ever publicly shared, was that no cardholder data was ever released as a result of the breach.  However, the final report issued internally by Wal-Mart was never shared outside the company, so it is anyone’s guess as to whether the claim that no cardholder data was ever released is accurate.  This was not Wal-Mart’s first cardholder data breach.  In the Fall of 2005, a small number of Sam’s Club gas station systems were accessed by intruders and around 600 credit card accounts were believed to have been compromised.

The breach was discovered by accident when a server crashed.  During the investigation to figure out what had happened to the server, one of the investigators found that L0phtcrack had been installed on the failed server and that it was L0phtcrack that had caused the server to fail.  Obviously, L0phtcrack was not an approved application to have installed.  As a result, this information caused an even larger investigation to be launched.

Before we discuss L0phtcrack, let us discuss file integrity monitoring.  This incident points out why the PCI DSS mandates file integrity monitoring in requirement 11.5.  But just monitoring known files is not enough.  This is where an organization needs to go above and beyond in order to better ensure its security.  While monitoring critical files, you also need to monitor for any new files that might be added to a system.  And alerts generated by your file integrity monitoring system need to be reconciled to all changes being made to the systems.  Any file addition, change or deletion not documented in an approved change needs to be investigated to determine its cause.  Based on the timeline, while Wal-Mart may have had critical file monitoring going on, it was either monitoring only a limited number of files and directories, not monitoring for new files, or not following up on alerts in a timely manner.
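As a rough illustration of the kind of check requirement 11.5 describes, the sketch below baselines a directory with SHA-256 hashes and then reports files that were added, changed or deleted.  Real file integrity monitoring products do far more (kernel hooks, real-time alerting, tamper protection), and all names here are illustrative.

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map every file under root to its SHA-256 hash."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Report files that appeared, changed or disappeared since baseline."""
    return {
        "added":   sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "changed": sorted(f for f in baseline.keys() & current.keys()
                          if baseline[f] != current[f]),
    }

# Every entry in the report should reconcile to an approved change
# ticket; anything left over (an unexpected L0phtcrack install, say)
# is an incident to investigate, not noise to ignore.
```

The reconciliation step is the part most organizations skip: a report like this is only useful if someone compares it against approved changes every time.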

Then there is a topic not even mentioned in the PCI DSS but just as important.  Root Cause Analysis (RCA) is something that everyone should conduct in the event of a failure and needs to be an activity conducted as part of an organization’s incident response process.  Because of their RCA process, Wal-Mart found that L0phtcrack was the cause of the server failure.  Since L0phtcrack was not an approved program and was likely installed by an attacker, Wal-Mart personnel broadened their investigation to determine if L0phtcrack was installed on other systems.

While L0phtcrack should be an obvious program that should not be installed, it is not always that easy.  This is why requirements 2.2 and 12.3.7 are important, so that when doing an investigation, the investigators know what to expect to see installed as well as what was approved so that they can quickly determine if the server was running approved software.  Again, I am certain that L0phtcrack would not have been part of those standards.

That even larger investigation led Wal-Mart to determine that over 800 systems and servers had been compromised or had been targeted for compromise.  The compromise was traced back to a remote access VPN account that had been used by a former Wal-Mart employee in Canada.  That account had been used by the intruder to enter Wal-Mart’s network and begin the compromise of their systems.  While investigating the breach, Wal-Mart personnel suspended that account and the intruder moved over to another terminated employee’s account.  When they disabled the second account, the intruder moved over to a third terminated employee’s account.

Requirement 8.5.4 states that accounts for terminated employees should be disabled or removed immediately, and this was obviously not followed in this case.  Requirement 8.5.5 states that inactive accounts should be removed if not used for 90 days or more.  Unfortunately, we do not know if any of the accounts had been inactive for more than 90 days.  We also do not know if any of these accounts were disabled.  However, in such a breach, if the attacker has any sort of administrative access, it takes almost no time to activate a disabled account.  That is why an organization needs to remove those accounts as soon as possible, particularly any account that might have administrative privileges.
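The check behind requirements 8.5.4 and 8.5.5 is simple enough to sketch.  The account records below are invented for illustration, but the logic, flagging anything terminated or idle past 90 days, is the point.

```python
from datetime import datetime, timedelta

def accounts_to_disable(accounts, now, max_idle_days=90):
    """Flag accounts that are terminated (requirement 8.5.4) or idle
    past the requirement 8.5.5 threshold.  `accounts` is a list of
    dicts with 'name', 'terminated' (bool) and 'last_logon' (datetime)."""
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(
        a["name"] for a in accounts
        if a["terminated"] or a["last_logon"] < cutoff
    )

now = datetime(2010, 12, 1)
accounts = [
    {"name": "alice", "terminated": False, "last_logon": datetime(2010, 11, 28)},
    {"name": "bob",   "terminated": True,  "last_logon": datetime(2010, 11, 30)},
    {"name": "carol", "terminated": False, "last_logon": datetime(2010, 7, 1)},
]
print(accounts_to_disable(accounts, now))  # ['bob', 'carol']
```

Running a report like this daily, and actually removing what it flags, is exactly the diligence that was missing in this breach.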

The investigation quickly focused on one particular Wal-Mart system, point-of-sale (POS).  Documentation from the investigation indicates that the intruder(s) were very focused on POS source code, executables, databases and documentation.  The intruder(s) were so focused on POS, that they even downloaded the latest technical specifications for Wal-Mart’s POS system.  As a result, investigators focused much of their efforts on POS systems at store locations and at corporate.

If not already obvious, investigators inspected log files to determine that the compromise went at least as far back as June 2005.  If you want to have a concrete example of why log information and proper time keeping are important and requirement 10 is so focused on log data and time setting, there is no better example than this breach.  Thanks to an obviously large retention of log data, Wal-Mart was able to at least figure out when and where the breach started as well as trace the actions of the intruder(s) through their network and systems.  It is implied that the time settings on servers and network devices must have been fairly closely synchronized as it is never mentioned if there were time correlation issues in the log data.  Had Wal-Mart had to rely on system and event logs that were contained only on the network devices and servers, the when, where and how of this breach might have never been known.

Unfortunately, the log data was not as complete as it could have been.  As a result, the Wal-Mart investigators were somewhat stymied in their efforts to better understand the breach.  Server logs were only configured to log unsuccessful logon attempts.  As a result, they were not able to track the successful logons of the disabled accounts that were being used by the intruder(s) and therefore trace the actions of the intruder(s) through their network and systems.  A lot of administrators save log space on internal systems by not logging all activities.  I am also guilty of doing this as I also used to believe that successful attempts internally were not a big deal to miss.  However, as the internal threat has become more and more prevalent, I have changed my opinion and now I log everything I possibly can on all systems.  Requirement 10.2.5 implies that all use of identification and authentication mechanisms is logged, but it does not specifically call out that both successful and unsuccessful attempts are to be logged.
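To illustrate why success events matter, here is a toy version of the correlation Wal-Mart’s investigators could not perform.  The log format and account names are made up; the point is simply that if only failures are logged, a query like this returns nothing.

```python
# Each record is (timestamp, account, event); the format is invented
# for illustration -- real sources would be Windows Security events,
# syslog and the like.
def suspicious_logons(events, disabled_accounts):
    """Return successful logons by accounts that should be disabled --
    exactly the trail that disappears when only failures are logged."""
    return [
        (ts, acct) for ts, acct, event in events
        if event == "LOGON_SUCCESS" and acct in disabled_accounts
    ]

events = [
    ("2005-06-17 02:11", "ex_employee1", "LOGON_SUCCESS"),
    ("2005-06-17 02:12", "svc_pos",      "LOGON_FAILURE"),
    ("2005-06-18 03:40", "ex_employee2", "LOGON_SUCCESS"),
]
print(suspicious_logons(events, {"ex_employee1", "ex_employee2"}))
```

Strip the success events from the input, as Wal-Mart’s configuration effectively did, and the intruder’s movements simply vanish from the record.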

The saddest fact of all was that none of this should have been a shock to Wal-Mart IT and security personnel.  Almost six months prior to discovering the breach, Wal-Mart’s QSA had completed their PCI assessment and had found numerous areas where Wal-Mart was not compliant.  A lot of the areas of non-compliance were the direct result of how the breach occurred.

So what are the lessons that should be learned from this incident?

  • Compliance does matter and does result in security.  I do not care whether you follow the PCI DSS, FISMA or any other well known security standard.  The purpose of all security standards is to provide guidance on how to secure hardware and software so that it is difficult to compromise.  If you comply with any of these standards, you greatly enhance your security posture.  However, the best security comes down to more than just complying with a standard.  If an organization really wants to be secure it will have to go beyond just what the standard requires.
  • Security is not perfect.  The purpose of any security program is to limit the damage of incidents when they occur so that they do not get out of control.  All we can expect to gain from a security program is minimizing the risk that an incident results in a breach of sensitive information.  A good friend of mine has a great quote on this point.  He always likes to say, “I just want my security program to be sufficient enough that it makes everyone else an easier target than my company.”  What security standards let you have is the information you need to know where the bar is set so that you can make investments to do that little bit more.
  • Most breaches are discovered by accident.  It has been my experience that even with great tools and instrumentation, the discovery of a breach or compromise all comes down to the uncovering of information that results in someone becoming curious and digging further into the incident and discovering that systems and/or data have been compromised.  This is not to say that monitoring and alerting is not worthwhile.  It is just that it is very rare that a breach or compromise is uncovered when the initial alert was issued.  It takes follow up on all of the alerts to actually uncover the breach or compromise.
  • Follow up should be the standard for all alerts, and a documented Root Cause Analysis (RCA) process should be followed as part of an organization’s incident response plan.  This is where most organizations get sloppy and miss the signs of a breach or compromise.  They do not treat all alerts consistently and do not perform the RCA process every time, and therefore earlier warnings go undiscovered until the situation gets truly serious, such as when a production server crashes.
  • If you do not have at least a year’s worth of log data, you are probably going to be in the dark about how, when and where the compromise occurred.  There is a lot of push back from organizations about hanging onto log data, particularly more than three months’ worth.  A lot of this comes down to the cost of storing such a huge amount of data.  However, had Wal-Mart only had three months’ worth of log data, they never would have known when they had been breached nor the focus of the breach.
  • What gets logged is also very important.  Wal-Mart’s breach would have been a bit easier to investigate had the log data been complete.  Just because you are on the inside of the network is not an excuse to not log everything.  As I have pointed out before, log data is IT’s version of a commercial airliner’s flight data recorder.  Without all of the data, it can be almost impossible to isolate the cause of a compromise.
  • As soon as employees and contractors are terminated, they need to be removed from the access control system.  I know that this can cause issues with some operating environments, but there are workarounds to avoid those complications.
  • And finally, there are no easy ways to ensure security.  Security requires diligence.  Extended diligence typically results in tedium which then results in diligence faltering.  As a result, organizations interested in maintaining their security need to combat tedium by rotating security and operations personnel through positions so that tedium does not set in.  This has an added benefit in improving cross training of personnel.

PA-DSS Certification “Clarification”

In the November 2010 Assessor Update newsletter from the PCI SSC, there is a clarification of what constitutes a “minor change” to PA-DSS certified applications.  Based on my reading of this clarification, there is going to have to be more clarification done.

The first thing the clarification does is define a “minor change.”  According to the PCI SSC, a “minor change” includes, but is not limited to, a change to the application that:

  • Only impacts the aesthetics of the application such as GUI enhancements, movement of buttons, color updates or changes and marketing changes.
  • Only impacts components of the application that do not relate to the authorization or settlement processes such as adding a field not related to cardholder data processing and updates to the Implementation Guide.

Changes that fall into these two categories do not require that the PA-QSA conduct a re-assessment of the application and file a new Report On Validation (ROV).  Therefore, the application continues to hold its existing PA-DSS certification.  However, the PA-QSA is required to prepare and file a Minor Update Attestation form with the PCI SSC.  Should the PCI SSC reject the Minor Update Attestation form, the PA-QSA could be placed in remediation.  So PA-QSAs will likely be very picky about signing off on any “minor” changes.

So what then are considered changes that require a new ROV?  The PCI SSC defines the following changes as those requiring that an application be reassessed and a new ROV prepared and filed.

  • Changes that directly impact components of the application that perform the authorization or settlement of the payment transaction, such as any change that can be tied to a PA-DSS requirement.
  • Changes made to how cardholder data is stored, processed, or transmitted such as adding a new authentication module or database.
  • Changes that impact the approved underlying operating system or platform.

The first two points should not surprise anyone.  However, the last point regarding changes to the operating system is interesting as I know a lot of vendors and my own clients that will now hide behind this as a way to keep from patching their systems.  I do not think this is the effect that the PCI SSC desires, but I can tell you that is exactly the effect they will get.  As a result, vendors and merchants alike will not patch because any patching other than the OS release certified will now invalidate their PA-DSS certification.  This would seem to be in direct conflict with the PCI DSS, a situation that the PCI SSC told us would not happen under v2.0 as they were aligning the two standards to complement one another.

Under such a scenario, should Microsoft issue an emergency patch to SQL Server in response to a severe threat such as SQL Slammer, that patch would likely not be applied by merchants because that act will invalidate the application’s PA-DSS certification and the vendor would also likely not recommend that the patch be applied for support issues as well because of the PA-DSS certification issue.  This will only lead to making applications less secure, not more secure.  The PCI SSC will need to further clarify this point to make sure this is not the case.

I also have to question the change to the approved underlying platform.  If a vendor only certified their application on Windows or Linux running on HP hardware, does a change to IBM hardware configured with exactly the same memory, processor, expansion slots, etc. mean the application must be reassessed?  In this day of common Intel- or AMD-based hardware, that seems a bit over the top even for the PCI SSC.  However, in the case of going from Intel to AMD or from Windows to Linux, I would agree that the AMD or Linux systems need to be assessed.  I know what the PCI SSC is trying to accomplish with this definition, but they will have to clarify this as well.  Just what constitutes a platform change?  Is it the whole platform, CPU, memory, sub-components such as NICs, modems, SAN, NAS, etc.?  How they answer will drive the costs of certification.  I think they have opened a Pandora’s box that they should have discussed further with the PA-QSAs and application vendors before issuing this “clarification.”

Then there are those of you that develop in-house custom applications that are in-scope for the PCI DSS.  Do not think that the PA-DSS does not apply to you.  While it technically does not apply, as I point out to my clients that develop applications, the PA-DSS is a great framework for secure application development.  I urge our clients’ application developers to use the PA-DSS standard as a reference and framework for their own secure development practices.  Therefore, I would recommend that all custom application developers follow the aforementioned definitions.


Interesting Announcements From The PCI SSC

For those of you that are not QSAs, the PCI SSC over the last year has tried to keep QSAs in the loop by issuing a monthly Assessor Update newsletter via email.  These usually are not noteworthy, but the November 2010 issue contains a number of items that need to be shared just in case you miss your edition or you are not a QSA.

PCI DSS Timeline Clarification

The Council apparently got the message that they did not communicate the sunset date for the PCI DSS v1.2.1 and the start date for PCI DSS v2.0 very well.  As a result, they issued a clarification in the November 2010 newsletter.  To quote the Council:

“Entities needing to comply with the PCI DSS are strongly encouraged to begin using the new standard immediately. However, version 1.2.1 will remain effective until December 31st, 2011 to allow everyone time to adopt any changes they may need to in order to maintain their PCI DSS compliance. This means that organizations assessing and reporting compliance during 2011 may validate to either version 1.2.1 or 2.0. However, the Council urges all organizations to complete their transition to the new standard as quickly as possible, especially where any new controls may enhance the protection of cardholder data.”

Since QSAs will not have the scoring template until sometime in January 2011, it makes planning and executing any assessments difficult until the scoring template is issued.  As a result, the earliest I can see any v2.0 assessments getting started is March 2011.

PCI DSS and PA-DSS v2.0 Scoring Templates

And speaking of those scoring templates, the scoring templates for v2.0 of the PCI DSS and PA-DSS should be published sometime in January 2011.  It would be nice to have these a bit earlier, but better late than never.

Expiration Of PABP v1.4 Extended 90 Days

The PABP v1.4 standard that was expected to expire tomorrow, December 2, 2010, has been extended to March 2, 2011.  To quote the Council:

“This updated deadline recognizes the challenges many merchants and Payment Application end users have in implementing system changes over the busy holiday period, and allows the Payment Application vendor community to consider submitting new versions of their products for assessment against the new PA-DSS 2.0 standard.

The Council is committed to reviewing all submissions for the updated versions of expiring PABP v1.4 applications, and this new March 2nd 2011 deadline will allow the review process to be completed before previous versions of these applications expire.  This extension will also provide more time for PA-QSAs to complete reviews of those Payment Applications that are currently in process.  Finally, this extension will allow Payment Application vendors, should they choose to hold off on assessment of expiring Payment Applications and instead submit (after January 1st, 2011) their Payment Applications for assessment against the new PA-DSS v2.0 standard.”

ASV Sampling And Scanning Do Not Mix

While sampling of devices is allowed under the PCI DSS, it is not allowed for ASV scans.  To quote the Council:

“Within a given quarter, all Internet accessible systems must pass an ASV scan. It is not necessary that they all be scanned at the same time, but they all must be scanned quarterly.”

Apparently, some ASVs were only scanning a sampling of PCI in-scope devices each quarter.  I am sure this will lead to consolidation of a lot of organizations’ external network presence.

2011 PCI SSC Training Schedule

The training schedule for next year should be posted to the PCI SSC’s Web site by mid-December.

Telecom Private Circuit FAQ Issued

See the end of my post on MPLS for the text of the FAQ.
