
02 Jul 15

Policies, Standards And Procedures

Nothing bothers me more than asking for an organization’s firewall policy (or any policy, actually) and getting my own personal version of ‘War and Peace’. The document is a mix of policies, standards and procedures in a vain attempt to cover as many topics as possible in a single document. And then the writers of these tomes wonder why no one actually reads them and why they see little or no compliance.

With that as our backdrop, here is my take on how to properly write policies, standards and procedures.

POLICY

Webster’s dictionary defines a policy as:

“a high-level overall plan embracing the general goals and acceptable procedures”

The key phrase here is “high level”, not the detail I encounter in most policies I read.

At a minimum, a policy needs to cover the following topics.

  • Problem statement. This section defines the problem or issue that the policy addresses. It should be no more than two paragraphs and should be devoid of technical terms or legalese if at all possible. If you are taking more than two paragraphs, then you either need to figure out how to be more concise or you are covering too much ground with just one policy. Do not be surprised if, when you write your problem statement, you figure out ways to consolidate or better define topics. For example, why write separate electronic mail, letter writing, facsimile and texting policies when they are all forms of business communication? Just write a single communication policy.
  • A bullet point or numbered list of the objectives of the policy. Are you protecting the assets of the organization? Are you minimizing the potential for fraud? Are you informing employees of their rights? Document the objectives you are trying to achieve with this policy.
  • Policy statement. Define the organization’s position and solution(s) regarding the problem documented from the problem statement.
  • Define who is responsible for what. Explain the responsibilities of boards, committees, all personnel, corporate officers of the organization and anyone else that is affected by this policy such as contractors or temporary help.
  • Policy management information. This is a block of information such as the date of issue, the last date of review, board approval date and other information that might be required to prove that policies are updated and reviewed.

The Guru’s rule of thumb is that a policy should be no longer than two to three pages. Actually, the shorter the policy, the better. Anything longer than that and it is not a policy.

As an example, here is that Communication Policy.

Problem Statement:

The organization provides employees with a variety of communication capabilities. As with any form of communication, care must be taken to ensure that the organization is represented appropriately and intellectual property is appropriately protected.

Objectives:

  1. Ensure the appropriate use of organization assets.
  2. Ensure the protection of the organization’s intellectual property and sensitive information.
  3. Notify employees of their rights when using organization-provided communication systems.

Policy Statement:

The organization provides a variety of communication systems including, but not limited to, written, pager, telephone, cellular telephone, facsimile, voice mail, electronic mail and instant messenger systems. These communication systems are for employee use in conducting organization business. The personal privacy of these communication systems is not guaranteed by the organization, as the organization may, from time to time, find it necessary to monitor communication systems and traffic to protect its intellectual property, sensitive information and other critical assets.

As with any method of communication, employees are to conduct such communication in a dignified and business-like manner. Failure of an employee to communicate with other persons in a dignified and business-like manner will result in disciplinary action and possibly termination.

While the use of the communication systems is specifically for conducting the business of the organization, management recognizes that personal use of the communication systems will occur. Employees are to minimize personal usage of the organization’s communication systems. Should personal usage become a problem, the organization will take steps to restrict personal communication that include, but are not limited to, disciplinary action and technical measures to limit or restrict personal communication.

Responsibilities:

The Corporate Security Officer (CSO) is responsible for development, implementation, dissemination, and monitoring of appropriate communication policies, standards and procedures. The CSO is also responsible for the development and implementation of an internal training program to educate personnel on the communication policies, standards and procedures.

Employees, contractors and temporary personnel will be required to attend a policy training course as well as sign an acknowledgement statement that refers to the organization’s policies regarding the use of the organization’s communication systems at the time of on-boarding and annually thereafter. Failure to sign this statement will be grounds for termination of employment/contract.

Information systems management is responsible for implementing, maintaining and monitoring appropriate methods for securing and monitoring automated communication systems.

Internal Audit is responsible for periodic audits of the organization’s automated communication systems capabilities to ensure compliance with the communication policies, standards and procedures.

The head of Human Resources is responsible for the management oversight of this policy.

There are a number of other policies, standards and procedures that support and facilitate this simple policy. But hopefully this provides you an example of what a policy should look like.

STANDARD

To continue with definitions, Webster’s defines a standard as:

“something established by authority, custom, or general consent as a model”

The key word in this definition is “model”. Standards define, for example, how device names will be formed, how IP address ranges will be defined and where those addresses will be used, how to code properly in various programming environments, where and how to use firewalls and other security devices in the network architecture and, the obvious one, how devices will be configured.
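
To make “model” concrete, here is a minimal sketch of how a device-naming standard might be expressed as something machine-checkable. The naming pattern (a three-letter site code, a device type and a two-digit sequence number) is purely hypothetical; substitute whatever your standard actually defines.

```python
import re

# Hypothetical naming standard: three-letter site code, a device type
# (FW, RTR, SW or SRV) and a two-digit sequence number, e.g. "MSP-FW-01".
NAME_PATTERN = re.compile(r"^[A-Z]{3}-(FW|RTR|SW|SRV)-\d{2}$")

def conforms_to_standard(device_name: str) -> bool:
    """Return True if the device name follows the naming standard."""
    return NAME_PATTERN.match(device_name) is not None

for name in ("MSP-FW-01", "chicago-firewall-1"):
    print(name, "->", "OK" if conforms_to_standard(name) else "violates the standard")
```

A standard written this precisely can be checked automatically rather than argued about, which is exactly what makes it a model.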

It is allowable to put hyperlinks to Internet documents or other reference documents in a standard so that you do not have to constantly update those documents yourself. Where I encounter this the most is with device configuration standards. An organization that has Cisco equipment, for example, can provide hyperlinks to the relevant Cisco configuration and security standards for the various versions of IOS or Nexus on Cisco’s Web site so that the organization can leverage Cisco’s documentation.

Standards can be long or short, but the key is that a standard needs to address the model you are trying to establish.

PROCEDURE

Webster’s defines a procedure as:

“a series of steps followed in a regular definite order”

A proper procedure is a checklist or a numbered list that tells anyone how to complete some process. Examples of procedures are:

  • Checklists for configuring a device that prove that the standard(s) involved were followed (a minimal sketch of such a checklist follows this list).
  • Daily operations checklists to document that all daily procedures were performed.
  • Monitoring procedures for ensuring that all devices are properly monitored.
  • Incident response procedures documenting how to respond correctly to particular alerts that could be an incident.
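
To make the checklist idea concrete, here is a minimal sketch of a procedure as an executable checklist that records who performed each step and when, producing the evidence an assessor would look for. The steps shown are hypothetical stand-ins for your own.

```python
from datetime import datetime, timezone

# Hypothetical daily operations checklist. Each completed step is
# recorded with the operator's name and a timestamp, producing the
# evidence that the procedure was actually performed.
STEPS = (
    "Verify overnight backups completed successfully",
    "Review security event logs for unresolved alerts",
    "Confirm all monitored devices are reporting",
)

def run_checklist(operator: str) -> list:
    evidence = []
    for step in STEPS:
        input(f"{step} -- press Enter when complete: ")
        evidence.append({
            "step": step,
            "operator": operator,
            "completed_at": datetime.now(timezone.utc).isoformat(),
        })
    return evidence

if __name__ == "__main__":
    for record in run_checklist("night.operator"):
        print(record)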

The key with procedures, though, is that they must be complete so that anyone can perform them. And when I say anyone, I mean anyone. If you have never been through procedure writing, the best example I can give you is to write down instructions for tying a pair of shoes and then give them to a co-worker to follow. You should find that it is not as easy as you might think and it will take many revisions to get a set of working instructions.

When I was a CIO, the way I tested procedures was to perform them myself. What started this practice was an extended power outage one night and our UPS systems required us to shut down the data center. When the power came back on around 4AM, the night operator was not able to get the systems back up and running because he had no procedures to follow. As a result, when I got in at 6AM systems were still powered down and the day operator was waiting for some key people to get everything back up. I gave the IT staff three days to prepare start up and shut down procedures for all systems and then came in on a Sunday and tried them out myself. Needless to say, that Sunday did not go as well as people expected and a lot of updates and changes to those procedures were made. However, once those procedures were complete, I could have gotten the Chairman of the Board to restart the data center.

Hopefully this gives you the guidance you need to properly write policies, standards and procedures. I think you will find as I have that properly written, such documentation can be very useful in clearly communicating with everyone involved in your organization.

01 Jun 15

Supplemental Validation Procedures Coming

In the April 2015 Assessor Newsletter (received just last week) from the PCI SSC was the following announcement.

Coming Soon – Supplemental Validation procedures for Designated Entities

The analysis of PCI DSS compliance trends as well as the recent data breaches involving cardholder data has revealed that many organizations continue to view PCI DSS compliance as a periodic exercise only, and fail to implement processes to ensure that their PCI DSS controls are continuously enforced. This approach has been shown to result in a lapse in security controls between validation assessments. Organizations must remember that security is an ongoing process that must be incorporated into an entity’s overall strategy and PCI DSS security controls must be maintained on a continual basis.

In response to these trends, the PCI SSC is planning to issue additional validation procedures that are designed to help organizations illustrate how they are maintaining PCI DSS security controls on an ongoing basis. These supplemental validation procedures are due to be published in the upcoming weeks, along with guidance for understanding how and to whom these procedures may apply. Stay tuned!

Could it be that business as usual (BAU) is coming before v4 of the PCI DSS is released?

Who are these “designated entities”?

As the newsletter says, “Stay Tuned!”

UPDATE: On Friday, June 5, the Council issued the ‘PCI DSS Designated Entities Supplemental Validation’ standard. It can be downloaded from the Council’s Web site. The document gives the following examples of entities to which these supplemental procedures apply: entities that (1) store, process and/or transmit large volumes of cardholder data, (2) provide aggregation points for cardholder data, or (3) have suffered significant or repeated breaches of cardholder data, as well as (4) anyone the card brands determine should go through this process.

15 May 15

Whole Disk Encryption Explained

There are a lot of security professionals and lay people who seem to believe that encryption is encryption, and that is simply not the case. Worse yet, vendors play into this misconception and obfuscate the issue with lots of words and phrases about seemingly complicated encryption key management procedures and the like that are actually meaningless when it comes to protecting data on running systems. As a result, it is time to clear up this misunderstanding.

First things first, we need to discuss how whole disk encryption works. As its name implies, whole disk encryption encrypts an entire disk drive. When a file on a whole disk encrypted drive is accessed, the encryption solution decrypts only the file being accessed, using the decryption key provided at system startup, and the rest of the drive remains encrypted. That way, if a system failure occurs or the system is shut down deliberately, the drive is always protected.

That is the key concept of whole disk encryption. The drive is technically only encrypted when the system is shut down. If the system is running, the encryption is effectively not in place because the operating system has the decryption key and can access the disk at will. This is why whole disk encryption is great for devices like notebooks that are shut down at some point.
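
To illustrate the mechanics, here is a minimal sketch using AES in XTS mode, the construction most disk encryption products use, via the pyca/cryptography package. The thing to notice is that the key is loaded once and then sits in memory for as long as the system runs, so any sector can be decrypted on demand; the data is only truly protected once the key is gone, i.e., when the system is shut down.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# The key is loaded once at boot (from a passphrase, a USB key, a TPM,
# etc.) and then lives in memory for as long as the system is running.
key = os.urandom(64)  # AES-256 in XTS mode uses a 512-bit key

def crypt_sector(data: bytes, sector_number: int, encrypt: bool) -> bytes:
    """Encrypt or decrypt one disk sector; the tweak ties the
    ciphertext to the sector's position on the disk."""
    tweak = sector_number.to_bytes(16, "little")
    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
    op = cipher.encryptor() if encrypt else cipher.decryptor()
    return op.update(data) + op.finalize()

sector = b"cardholder data!" * 32  # one 512-byte sector
on_disk = crypt_sector(sector, 42, encrypt=True)

# While the system is running, the OS transparently decrypts on read:
assert crypt_sector(on_disk, 42, encrypt=False) == sector
```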

This is also why whole disk encryption is meaningless when applied to a server. When is a server shut down? Never. When using whole disk encryption on a running server, the only control protecting the data is access control, not encryption.

So using this definition, let us examine requirement 3.4.1 in the PCI DSS. That requirement states:

“If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.”

The first statement that throws people is “logical access must be managed separately and independently of native operating system authentication and access control mechanisms”. In short, whole disk encryption cannot rely only on the operating system’s authentication process.

The best example of this is Microsoft BitLocker. BitLocker has a number of modes under which it can operate. It can integrate with Active Directory (AD), it can rely on a trusted platform module (TPM) chip in the computer or it can operate stand-alone.

In stand-alone mode, BitLocker requires the user to provide the BitLocker key, either by manually keying it in or by providing it on a USB device that stores the key, in order to boot the system. If that key is not provided, the system will not even offer the user a logon prompt. This form of BitLocker meets the requirements set forth in requirement 3.4.1.

But then the requirement goes on to say, “Decryption keys must not be associated with user accounts”.

In stand-alone mode, the BitLocker key is not associated with the user’s credentials so it also meets this part of 3.4.1.

However, in the AD or TPM modes, BitLocker operates behind the scenes and the end user never knows that their disk is encrypted and the user still logs onto the system as always using their Windows credentials. These modes do not meet the independence requirement in 3.4.1 because all that is protecting the data is the user’s Windows credentials. And in the case of AD mode, BitLocker also does not meet the user credential disassociation requirement because the BitLocker decryption key is tied to the user’s Windows credentials.

But if people fully read the Guidance column for requirement 3.4.1, they would find the following at the end, where the Council states:

“Full disk encryption helps to protect data in the event of physical loss of a disk and therefore may be appropriate for portable devices that store cardholder data.”

BINGO!

Whole disk encryption helps protect data in the event of physical loss of a disk. Period.

So with a server that never shuts down, if a whole disk encrypted drive in a SAN or NAS fails, that disk will be encrypted when it is pulled from the array. But if things are running just fine, whole disk encryption does nothing to protect the data.

So do not be baffled by the statements from those vendors trying to convince you that whole disk encryption on your server is going to protect your data while the server is running. That is not true.

Now you know.

16 Apr 15

ASV Guidance For SSL/TLS Vulnerabilities

Hidden by all of the news about v3.1 of the PCI DSS being published is a notice that was sent to all PCI approved scanning vendors (ASVs) from the PCI SSC regarding how to handle SSL and “early TLS” vulnerabilities.

In regard to the “early TLS” comment, the Council did define the term by referring everyone to NIST SP800-52 rev1. That NIST document essentially tells the reader that while TLS 1.1 is allowed, TLS 1.2 should be the only version used whenever possible. In fact, NIST is highly recommending that all government entities move to TLS 1.2 by January 1, 2016.

FYI, TLS 1.3 is a draft specification at the IETF as we speak. I would expect that we will see TLS 1.3 released by the time the PCI SSC’s June 30, 2016 deadline arrives.

With that covered, what is an ASV to do with a scanning customer’s SSL and TLS 1.0/1.1 issues?

According to the letter sent to the ASVs:

“Prior to 30 June 2016: Entities that have not completed their migration should provide the ASV with documented confirmation that they have implemented a risk mitigation and migration plan and are working to complete their migration by the required date. Receipt of this confirmation should be documented by the ASV as an exception under “Exceptions, False Positives, or Compensating Controls” in the ASV Scan Report Executive Summary and the ASV may issue a result of “Pass” for that scan component or host, if the host meets all applicable scan requirements.”

The key here is that you must be mitigating the vulnerability and working to migrate to TLS 1.2.

So what would a mitigation plan look like? Most likely you would monitor for usage of SSL or TLS 1.0/1.1 connections to those of your devices that only support those protocols.
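
As a rough sketch of such monitoring, the following probe, built on Python’s standard ssl module, checks whether a host will still negotiate TLS 1.0. It assumes a Python build whose underlying OpenSSL still permits TLS 1.0; many current builds refuse to negotiate it at all, in which case a dedicated scanner is the better tool.

```python
import socket
import ssl

def accepts_tls10(host: str, port: int = 443) -> bool:
    """Return True if the server will still negotiate TLS 1.0."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # probing the protocol, not identity
    ctx.minimum_version = ssl.TLSVersion.TLSv1
    ctx.maximum_version = ssl.TLSVersion.TLSv1
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() == "TLSv1"
    except (ssl.SSLError, OSError):
        return False  # handshake refused -- good news

print(accepts_tls10("example.com"))
```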

For those of you that are not going to be able to migrate to TLS 1.2, the Council gives ASVs guidance there as well.

“After 30 June 2016: Entities that have not completely migrated away from SSL/early TLS will need to follow the Addressing Vulnerabilities with Compensating Controls process to verify the affected system is not susceptible to the particular vulnerabilities. For example, where SSL/early TLS is present but is not being used as a security control (e.g. is not being used to protect confidentiality of the communication).”

The reason the Council has to provide a solution past June 30, 2016 is that, as I understand it, a lot of comments were received about “baked in” SSL that was going to require wholesale replacement of devices to correct the problem. A lot of those devices are IP-based point of interaction (POI) devices. ASVs have been instructed on the process to use to reduce the CVSS score so that the vulnerability is no longer considered “high”.

If you have any further questions regarding this announcement, I would discuss them with your ASV. As with all things PCI, every ASV will interpret this pronouncement with variations based on their own risk aversion.

14 Apr 15

SSL Update

“The report of my death was an exaggeration.” – Mark Twain

Today the PCI SSC announced that SSL and “early TLS” (whatever that means) will not truly die until July 1, 2016. This will allow a transition period for all of you stuck with vendor baked-in SSL, as well as the procrastinators amongst us, to get converted to TLS 1.2. I say TLS 1.2 because if you are going to convert, you really should go to the most current version and not just to whatever will get you by.

The complete summary of PCI DSS v3.1 changes can be found here.

UPDATE: Late on Wednesday, April 15, the PCI SSC released v3.1 of the PCI DSS. The Council uses NIST SP800-52 rev1 as the definition of “early TLS”. You can get a copy of the new version of the PCI DSS here.

03 Apr 15

PCI Issues Penetration Testing Information Supplement

On Thursday, March 26, the PCI SSC issued their latest information supplement. This one is focused on the topic of penetration testing. Not surprisingly, I have a number of comments based on what I have read. There are some good things that have come out of this effort. But in my opinion, the committee missed the boat in a few areas and those will lead to even more confusion between penetration testers, QSAs and clients.

Scoping

Talk about making a mess of a mess: I believe the writers accomplished that amazingly well in their discussion of scoping. This is probably the area that creates the most confusion and will likely create the most problems.

Take the first bullet on page 5. Talk about confusing; it took me a number of readings to even get what I think they are trying to communicate.

“The scope of the internal penetration test is the internal perimeter of the CDE from the perspective of any out-of-scope LAN segment that has access to a unique type of attack on the CDE perimeter. Critical systems or those systems that may impact the security of the CDE should also be included in the scope. Testing must include both application-layer and network-layer assessments.”

Out of scope LAN segments are out of scope because there are no rules that allow them access to the CDE. I agree that you need to test that these LAN segments are truly out of scope, but if there is a unique type of attack available from such a LAN segment into the CDE, that would seem to imply that the LAN segment was not out of scope. Confusing and likely to cause a lot of pen testers to force a lot of unnecessary testing and fishing for bizarre issues on out of scope network segments.

The next sentence paired with the first seems to just add to the confusion because it implies again that the out of scope LANs are not out of scope. These should have been two separate bullet points in my opinion.

In the end, what I think the writers are saying is that even if deemed out of scope, all LANs and VLANs need to be confirmed to be out of scope. However, how they have said that in my opinion leads to a lot of confusion.

If the last one did not cause your head to spin around, this one probably will. It is the third bullet at the top of page 5.

“To be considered out of scope for PCI DSS, a system component must be isolated (segmented) from the CDE, such that even if the out-of-scope system component was compromised it could not impact the security of the CDE. Therefore, the penetration test may include systems not directly related to the processing, transmission or storage of cardholder data to ensure these assets, if compromised, could not impact the security of the CDE.”

Correct me if I’m wrong, but this seems to imply that everything is in scope for penetration testing. I know the writers of this supplement have used the word ‘MAY’, but we also know how ‘MAY’ tends to be interpreted by the Council (i.e., MUST) and how it will be interpreted by a lot of our clients (i.e., do not do it). Until a few penetration testing reports are reviewed by the Council, the verdict will be out on the correct approach. However, coupled with the theme created by these bullets, I am guessing that the committee could not agree on how to express how to scope the penetration test.

But if those bullets were not confusing enough on scoping, the writers add insult to injury. After the bulleted list at the top of page 5, the first paragraph states the following.

“It is not a requirement to test from within the CDE to the servers inside the CDE; and testing exclusively from within the CDE perimeter will not satisfy the requirement. However, when access to the CDE is obtained as a result of the testing, the penetration tester may elect to continue exploring inside the network and further the attack against other systems within the CDE, and may also include testing any data-exfiltration prevention (data-loss prevention) controls that are in place.”

An internal pen test does not require testing from within the cardholder data environment (CDE)? This sounds like the “Three Hop Rule” all over again. But then the supplement twists it further, somehow justifying the first sentence.

All this diatribe does is further obfuscate scoping. What simply should have been said is that everything in scope for PCI compliance is to be penetration tested. Instead, we got a lot of words that, in my very humble opinion, do nothing to clarify scoping.

PA-DSS

On page 6 under 2.3.2 PA-DSS Compliant Applications we get the following guidance.

“If a payment application has been PA-DSS validated, the application’s functionality does not need to be tested as part of the entity’s PCI DSS compliance validation. However, the implementation of the application does need to be tested. This includes both the operating system and any exposed services, but not the payment application’s functionality (e.g., authentication, key management, transaction processing, etc.) since this was validated as part of the PA-DSS application validation.”

Under the PCI DSS, even PA-DSS validated applications still must be tested to prove that their implementations were completed per the vendor’s PA-DSS implementation guide. Yet for penetration testing, PA-DSS validated applications, their operating systems and exposed services are exempt?

I hope the acquiring banks are geared up for the fights this is going to cause between the penetration testers and their clients. For example, in the restaurant and hospitality industries, I can see the situation where penetration testing only occurs on firewalls, routers and switches because everything else is PA-DSS validated.

I do not believe this is what the Council wanted, but that is what this supplement is stating. If we are truly talking about security of our networks and systems, this is not how you achieve it.

Social Engineering

I was pleasantly surprised that this was included in this supplement. However, as I read through this section, I got more concerned that it was even included.

For section 2.5 Social Engineering on page 7, we encounter the following.

“Social engineering is the attempt to gain information, access, or introduce unauthorized software into the environment through the manipulation of end users. PCI DSS v3.0 reconfirms testing by requiring industry accepted penetration testing approaches (many of which include social engineering as part of their approach) and to have an approach to penetration testing that “considers the threats and vulnerabilities experienced by merchants in the last 12 months.” This may include social-engineering attacks as a method used for introducing malware into the environment.”

This makes social engineering attacks legitimate if the client has been breached by such methods in the past year. But does this include physical as well as logical social engineering? Since it is not explicitly called out, I would assume so, but it should have been defined. Regardless, this is a great thing to see in this document.

On page 8, the last paragraph of the discussion on social engineering states:

“Social-engineering testing may not be appropriate or provide a meaningful result for all organizations. Although social-engineering testing is not a requirement of PCI DSS, an organization may consider documenting the reason(s) for foregoing social-engineering testing and include applicable documentation with the internal and external penetration test reports, particularly if social-engineering attacks were encountered in the last 12 months.”

What started with such promise ends on a very sour note. This last paragraph gives organizations an out. So I guess we can look forward to a lot of organizations issuing inventive explanations to avoid social engineering testing.

Some Miscellaneous Issues

On page 2, in the seventh bullet down in the Terminology section, we have the following.

National Vulnerability Database (NVD): The U.S. government repository of standards based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance (e.g., FISMA).”

I know they just grabbed the description from the National Vulnerability Database (NVD) Web site, but that does not mean that it is appropriate for use when discussing penetration testing. The Common Vulnerabilities and Exposures (CVE) database maintained by Mitre Corporation would have been the better reference since the NVD uses the CVE as the catalog of vulnerabilities.

I have no idea how the Federal Information Security Management Act (FISMA) ties into the NVD. If they had truly pulled the content exactly from the NVD Web site, they would have grabbed the fact that it ties to the Security Content Automation Protocol (SCAP), not FISMA.

My last comment on this topic is regarding the claim that the NVD (or the CVE) enables automation of vulnerability management. Had they brought in the reference to SCAP, I might have had an easier time going along with things. The NVD and CVE do not contain the actual exploits; they are merely a catalog of the vulnerabilities. As a result, I question the value of either the NVD or the CVE in automating vulnerability management. Yes, the CVE catalogs all of the vulnerabilities, but that cataloging is its value. In my opinion, crediting either of these with automating the vulnerability management process is a stretch. They are a foundation that allows for automation, but they do not in and of themselves actually enable it.

Under 2.3.3 Web Applications on page 6, we have the following.

“It is common for an environment to host a web application that was not specifically coded for the organization such as commercial, off-the-shelf web-mail interfaces, document-sharing tools, file-transfer services, network-device administrative interfaces, etc. In these instances, the web application does not typically need an application-layer penetration test as the entity is not responsible for the source code of this type of software. Instead, the tester should perform a network-layer test and ensure the software was implemented, configured, and is currently being maintained in a secure manner (disabling or uninstalling unused services, blocking unused ports, applying current updates, etc.).”

I can see it now: organizations arguing over the pen testing of IBM WebSphere, Oracle eCommerce or similar Web application frameworks because they are a package. Each is a framework that you then must write an application for, but penetration testers will get into arguments over that aspect.

A Few Good Things

Yes, I have complaints, but there are some good things that are contained in this supplement.

Probably the best thing about this information supplement is the checklist at the end of the document for evaluating penetration testing reports. The Council should have a similar item for vulnerability scanning reports as well.

The discussion of the penetration testing methodology in section 4 of the document is a good read for both penetration testers and the people that contract for penetration testing services. But keep in mind it is a framework and not the entire rule/procedure set for conducting a proper penetration test. Yes, there are some good details in areas, but it is not the detailed roadmap that some people will portray it as.

Section 3 has a good discussion on how to evaluate the qualifications of a penetration tester. It was nice to see that while they document the common certifications a penetration tester may hold, they caution readers that certifications alone do not make a penetration tester qualified.

The case studies in section 6 probably do more to clarify than the previous five sections. They are good examples of properly conducted penetration tests.

Finally, there is a good discussion on the definition of a “significant change”. However, I think this discussion, as well as a discussion about the words “periodic” and “periodically”, should be put into a discussion about risk management and assessment rather than brought up here. I wrote a post on this topic a while back to provide some guidance to people, and it was nice to see that my thoughts jibe with what they wrote in the supplement.

What Was Missed

The biggest item I think that was missed was how to handle the penetration testing of huge environments. Case study 6.3 discusses the concept of sampling somewhat, but it deals with a retailer with only six locations, not thousands of locations. There are also veiled references to sampling in section 2.3.4 in discussing a separate testing environment. But there is no deliberate and complete discussion on this very important topic.

Retailers such as Wal-Mart, Target and Home Depot have more than a thousand locations with 15+ checkout lanes each, meaning there are, at a minimum, 15K devices out in the field that technically need to be penetration tested. How does an organization penetration test such a large number of devices annually? The answer is that they do not test them all. It is physically impossible. However, this information supplement does not address that situation and provide guidance as to what the Council deems acceptable.

While I use some of the largest retailers in the world as my example, it is not just them as there are mid-sized retailers as well as hospitality organizations that face this same issue. I had hoped that how to approach these situations would have been explicitly addressed in greater detail but it was not.

The writers also missed a huge topic regarding social engineering testing. Unlike vulnerability scanning and penetration testing, in my experience, it is virtually impossible for any organization to “pass” social engineering testing. So what would be considered “passing” in a social engineering test? That is not discussed but should have been. Since this document broaches the subject, it is probably a good topic to propose for a SIG for this year’s SIG recommendations so we get social engineering covered in an information supplement rather than as a sidebar to penetration testing.

It is not that this information supplement is an entirely bad document. But the advice it provides needs to be taken with a bit of caution. As usual, the shortcomings will be leveraged by all of the “check boxers” to do as little as possible so that you get that box checked for 11.3.

20 Mar 15

You Make The Rules

At last year’s Community Meeting, there were a couple of instances where members of the PCI SSC reminded organizations that it is up to them to set the parameters of their PCI assessment. A lot of people in attendance took those comments to mean it is up to merchants and service providers to define the scope of their assessment, but it goes further than that.

For years organizations have complained that they receive varying advice from different QSAs even when the QSAs are from the same firm. Obviously this situation is frustrating for not only merchants and service providers, but for the QSAs as well.

To address this situation, the Council is telling all PCI stakeholders that it is up to the organizations being assessed to define the rules of the assessment. Not just the scope, but also what level of risk that the organization is willing to accept. So what does that mean? I intend to clarify that in this post.

And to be extra clear, this is not some excuse to create a set of rules that allow you to skate by. You must show your work and document your rationale. If your QSA has honest concerns about your work, then expect some push back and the prospect of bringing your acquiring bank into the discussion. If your acquiring bank agrees with your rules, then you need to get that in writing from them and everyone should move on. But if your acquiring bank agrees with your QSA, then expect to make changes to your rules.

Scoping

Scoping is the responsibility of the organization being assessed, not your bank’s or your QSA’s responsibility. This requirement is even explicitly called out in the PCI DSS on page 10, second paragraph, second sentence.

“At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

The first step in defining scope is to define the rules of how to scope. This is the toughest part of scoping.

Complain about it all you want, but the Open PCI Scoping Toolkit is a good framework to start the discussion about how to scope a PCI assessment based on the risk presented by devices. My first recommendation is to stick with the toolkit’s three categories. In my experience, organizations that create more categories just end up creating more confusion and consternation. However, there is no way to go with fewer categories without putting your entire network in scope.

Categories 1 and 3 are not the ones in question. Having two sub-categories in category 1 seems silly to me. Devices and systems that directly process, store or transmit cardholder data (CHD), or that are in use within or define the cardholder data environment (CDE), are category 1 regardless. Category 3 devices/systems are those that never, ever come into contact with the CDE. In my opinion, these two categories are clear cut and pretty straightforward.

Where the arguments occur or will occur most often is over the category 2 devices and systems. You can accept the four sub-categories that are defined in the toolkit or come up with your own. If you are going to define your own sub-categories for category 2, the key point to remember is that category 2 devices/systems have direct or indirect influence over the category 1 devices/systems because they have access in some way to the CDE. The sub-categories are used to define the level or risk or threat these category 2 devices/systems represent to the category 1 devices/systems based on the type of access the category 2 devices/systems have to the CDE.

The difficulty with setting the category 2 sub-categories is that everyone has their own risk tolerance. It is those differences in tolerance that create the problems. Security personnel tend to be more conservative because it is their butt on the line if something bad happens. The further people get away from security, the more risk tolerant people seem to get because they do not have a complete understanding/appreciation of the minutiae.

Experience says that however you define category 2 devices/systems, do it as simply and clearly as possible. I would highly recommend you keep the number of sub-categories to a minimum and that you use examples for each sub-category so that readers understand where devices/systems fall. The key outcome of this effort is a formal document, similar to the Open PCI Scoping Toolkit, that defines your categories and the rationale behind them, provides examples for each category and is approved by executive management.

Once you have defined your scoping categorization rules, then it becomes an exercise in categorizing your inventory of networks, devices and systems based on your criteria. Do not be surprised if during this process networks, devices and systems you thought were out of scope suddenly come into scope. This is not unusual because prior to this point you were just taking a scientific wild ass guess (SWAG) as to what was in-scope.
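
As a minimal sketch of that exercise, assuming the toolkit’s three categories, the following categorizes a hypothetical inventory; the entries and the rule set are stand-ins for your own documented criteria.

```python
# Hypothetical inventory entries; "connectivity" encodes each asset's
# relationship to the CDE per your documented scoping rules.
INVENTORY = [
    {"name": "pos-terminal-01", "connectivity": "processes_chd"},
    {"name": "ad-server-01",    "connectivity": "admin_access_to_cde"},
    {"name": "marketing-ws-07", "connectivity": "no_cde_access"},
]

def categorize(asset: dict) -> int:
    """Apply the scoping rules: 1 = in or defines the CDE, 3 = no access
    to the CDE whatsoever, 2 = everything with some influence over it."""
    if asset["connectivity"] == "processes_chd":
        return 1
    if asset["connectivity"] == "no_cde_access":
        return 3
    return 2

for asset in INVENTORY:
    print(asset["name"], "-> category", categorize(asset))
```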

Risk Assessment

Once you have your scoping categories set, you need to roll that methodology into your risk assessment so that you can properly assess your risk. By doing so most organizations find that their risk assessment shows more devices/systems at higher risk because they are now in-scope for PCI compliance.

Take your scoping categories and convert them to weights for evaluating risks. For example, category 1 devices/systems would carry the highest risk weighting available. Category 3 devices/systems would carry the lowest risk weighting allowed. Category 2 systems would carry risk weights somewhere between the highest and the lowest based on how you have defined your category 2 sub-categories.
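
Continuing the sketch above, the categories convert to weights in the obvious way; the weights themselves are illustrative, not prescribed by anyone.

```python
# Illustrative weights only: category 1 carries the maximum weight your
# risk methodology allows and category 3 the minimum.
CATEGORY_WEIGHTS = {1: 10, 2: 6, 3: 1}

def risk_score(category: int, threat_likelihood: float) -> float:
    """A simple weighted score: scope weight times threat likelihood."""
    return CATEGORY_WEIGHTS[category] * threat_likelihood

print(risk_score(1, 0.8))  # category 1 system, likely threat -> 8.0
print(risk_score(3, 0.8))  # category 3 system, same threat   -> 0.8
```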

Define Your Terminology

This is very straightforward but is usually missed by most organizations. Organizations need to define their terminology. In particular, what the organization means by ‘significant change’, ‘periodic’ and ‘periodically’. I wrote a post on this very topic a while back so I will not bore you here with that discussion. These are very important definitions that must be set.

However, there are likely other terms that should be defined for your QSA and anyone else without intimate knowledge of your technology environment. My personal pet peeve is the lack of definitions for acronyms that are commonly used by your organization but that might be easily misunderstood by anyone else. It never ceases to amaze me when people inadvertently treat an outsider as an “idiot” by speaking in acronyms when the outsider has no clue what was said because they are not an insider.

My favorite example of this was a person who kept referring to the ‘HSM’ during our interview. Given this was a PCI assessment, my assumption was that they were referring to a hardware security module, but the way they used ‘HSM’ seemed to be the wrong context. So I asked, and they confirmed my suspicion: they had been referring to a custom application called ‘high-level system messaging’. Had it been a mainframe environment, HSM could have been a reference to hierarchical storage management. This is why a glossary of terms and acronyms is a good thing to build, not just for PCI but for any newcomer to your environment.

Discuss This With Your QSA

Finally, do not keep your QSA in the dark as you work through this process. As you create your documentation and classify your devices and systems, run this all by your QSA to get their buy in before they start your assessment. Most organizations will not run into too much push back from their QSA unless they are trying to set the bar too low in a vain attempt to make the PCI compliance process too easy, i.e., checking a box. The last thing you should do is spring this on your QSA the day they start your assessment. And that includes if you update or change your rules between assessments.

All of this effort should result in a much more straight forward assessment because you have defined the rules and criteria to which you are to be assessed.



