Archive for the 'information supplement' Category

24 Oct 2021

Remote PCI Assessment Guidance Issued

At the end of September 2021, the PCI Council released a Guidelines and Procedures document on conducting remote assessments for PCI and card brand assessments.  Most of this document is a rehash of previous Council statements and guidance.  However, there is one new element that all QSAs will need to read and comply with: the requirement to document a feasibility analysis justifying the decision to conduct a remote assessment.

Some of the examples the Council gives as valid reasons that an on-site assessment may not be feasible include:

  • Restrictions on the ability to travel or meet in person due to health and safety concerns or government advisories.  We are all familiar with the COVID-19 pandemic and its impact on travel, particularly international travel.  However, I encountered this a while back due to a volcanic eruption in Iceland that cancelled my trip to Europe.  Since we had no way of knowing how long the eruption would cause travel disruptions and we were on a tight timeline, we conducted video conferences rather than travel.
  • Geographic locations that are physically inaccessible or difficult to reach.  I personally ran into this situation several years ago when a data center in Europe that was supposed to be decommissioned before the next assessment remained operational.  The company I worked for had shut down their EU operations and there was no way to justify 16 hours of flight time for a two-hour data center walk-through.  We held meetings with the data center operator via video conference and did a virtual walk-through.
  • Testing required at a location is limited to documentation and interviews and no observations of processes, systems or physical environment apply.
  • The entity operates a virtual environment without physical premises or facilities.  This has become more and more common with entities that operate in The Cloud.  Why rent expensive office space when there is no need for it?  This situation only got more prevalent with the pandemic and will likely only increase in the future.

As the Council states in their guidance,

“For many assessments, a combination of onsite and remote testing may provide a suitable balance, as it allows for increased efficiencies in the assessment process while enabling an appropriate level of assurance to be achieved in the assessment result.  For example, documentation reviews can often be performed remotely without significant loss of assurance, whereas observations of processes and environmental characteristics will generally require an onsite review.”

Regardless of whether the assessment fits into one of the bullets above, the Council wants QSAs to formally document their analysis of why the onsite assessment cannot be performed and the risks that presents to meeting the assessment objectives.  This analysis needs to be completed prior to starting any testing and is supposed to be a joint effort between the assessor and the client.

Topics that the Council recommends be addressed include, but are not limited to:

  • Confidentiality, security, and data protection requirements.
  • Availability and effectiveness of the remote assessment technologies.
  • Effects on entity’s personnel.
  • Effects on operational support.
  • Assessment scope and completeness.
  • Quality and reliability of digital evidence.

The Council further states:

“During the analysis, the entity and assessor should identify any challenges and potential risks associated with the remote testing and determine whether it is feasible for testing to be thoroughly completed to produce a high level of confidence in the assessment results.

“The results of the feasibility analysis—including the risks and challenges associated with use of the remote testing methods, and any mitigating controls for overcoming the risks and challenges—should be documented and agreed upon by both the entity and assessor. A copy of the feasibility analysis results should be included with the applicable ROC/ROV. Entities and assessors may be required to produce the analysis upon request by the PCI SSC or applicable compliance-accepting entity.”

The key points from that statement are: (1) the feasibility analysis needs to be submitted with the ROC/ROV and, (2) if requested by the PCI SSC or compliance accepting entity (i.e., brand or bank), the QSA is required to produce the analysis.  As a result, this is a non-optional exercise.

The feasibility analysis must document one of the following conclusions:

  • The assessment is feasible to be fully completed at this time using onsite methods, remote methods, or a combination of onsite and remote methods.
  • The assessment is only feasible to be partially completed at this time.
  • The assessment is not feasible currently.

According to the guidance, it is only those assessments that are completely feasible that can be conducted.

The Council includes a very important note regarding the analyses.

“The feasibility analysis determines whether the use of remote testing methods is feasible for a particular assessment.  Determining that a remote testing method is feasible does not guarantee that use of the testing method will produce the level of assurance needed for the assessor to reach a finding; this will depend on how the remote testing method is implemented and used, whether the testing can be completed for all applicable components and areas, and whether sufficient evidence is provided for the assessor to make a determination.  Assessors and entities should continue to monitor and evaluate the effectiveness of the remote testing methods throughout the assessment to confirm whether the testing methods are performing as intended and whether additional testing may be needed.”

This concept of “assurance” appears to be entirely in the eye of the beholder.  Meaning, if the Council, brands or banks determine, in their opinion, that the remote methods are not providing appropriate levels of assurance, the ROC/ROV can be rejected.  Not that a lot of banks are going to reject ROCs/ROVs on this basis, but I can see the Council’s AQM reviews and the card brands rejecting ROCs/ROVs over analyses they deem flawed or incomplete.  The AQM process is the most concerning because a QSAC could end up in remediation due to a failure to appropriately document the remote assessment feasibility.

As with most edicts issued by the Council, they should have produced a form for this feasibility analysis so that everyone understands what is required.  It is also unclear whether the feasibility analysis can be documented in section 1.2 of the reporting template or whether a separate document is required.  I would recommend section 1.2 for the obvious remote assessments, such as COVID travel restrictions or an entity operating entirely in The Cloud, and a separate document for feasibility analyses that require longer discussion.

Sadly, I foresee a lot of confusion and heartache in the QSAC community as we move through this new requirement.  That is because I see a lot of assessments that are blocked due to COVID travel restrictions, or where the assessed entity has no physical offices, being rejected for “flawed” feasibility analyses when they should simply be allowed with no further documentation or discussion.

It will take time to see how this shakes out.

UPDATE 11/29/2021 – I received a comment on this post (see below) and the confusion is beginning. A service provider has had one of their customers request the documentation described in Appendix A of the remote assessment guidance document as well as the remote assessment feasibility study. Since these are ROC documents, the Council does not require any organization to turn over their ROC to any third party other than their acquiring bank or the card brands. The AOC is the communication document for third parties. If an organization wishes to turn over Appendix A from the guidance, that is the organization’s decision, but it is NOT mandatory nor is it required by the Council.

30 Apr 2020

The Last (Hopefully) Scoping Discussion

Back in May 2017, the Council finally issued their long-awaited Information Supplement on Scoping and Network Segmentation.  Based on some questions I have received since then, there are apparently a lot of people that still have not read the official information supplement.

So, I am invoking “RTFM”, which means the first order of business is to get everyone to read the information supplement before asking questions.  The second order of business is to forget everything that was discussed in the Open PCI Scoping Toolkit, as the Council will tell you it does not apply and never did apply, even though they never offered any alternative until the publication of the aforementioned information supplement.  So, throw away all your copies of the Open PCI Scoping Toolkit as it is not usable anymore.

With the Council’s information supplement, there was a change in terminology in how we refer to the various network segments and what is in scope.  As you will see, the Council’s approach has simplified the scoping classifications.  Because of the pervasiveness of the Open PCI Scoping Toolkit, I have included some references to the categories used in the Toolkit to clarify the Council’s terminology.

  • Cardholder Data Environment (CDE) Systems – These systems are always in scope for PCI compliance. These are systems that are either: (1) a system that directly processes, stores or transmits cardholder data (CHD) or sensitive authentication data (SAD), OR (2) a system or component that is on the same network segment (i.e., same network subnet or VLAN) as a system component that directly processes, stores or transmits CHD/SAD.  With the Open PCI Scoping Toolkit, these were considered ‘Category 1A/1B’ systems.
  • “Connected To” or “Security-Impacting Systems” – These systems are also always in scope for PCI compliance. These systems are basically those that directly connect to systems in the CDE or could influence the security of the systems or data in the CDE.  In the Open PCI Scoping Toolkit, these were the ‘Category 2A/2B/2C/2D’ systems.  Unlike in the Open PCI Scoping Toolkit, the Council chose to simplify things and have only one category versus the “shades of gray” approach.  That said, there are more detailed criteria defined on page 10 of the information supplement that define these systems.  Examples include, but are not limited to, Active Directory (AD) servers, RADIUS servers, TACACS+ servers, Security Information and Event Management (SIEM) solutions, Network Time Protocol (NTP) servers, Domain Name System (DNS) servers and Dynamic Host Configuration Protocol (DHCP) servers.  These systems and devices can also be considered “Shared Services” because they provide services not only to the CDE but also to out of scope systems.
  • Out of Scope Systems – There are four criteria for these systems: (1) the system must NOT process, store or transmit CHD/SAD, AND (2) the system cannot be on the same network segment or subnet as the CDE, AND (3) the system cannot directly connect to any other system or component in the CDE, AND (4) the system does not meet ANY of the criteria described for “Connected To” systems.  If all of these criteria are met, then the system is out of scope.  In the Open PCI Scoping Toolkit these were the ‘Category 3’ systems.  The sketch after this list pulls these classification rules together.
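To make those categories concrete, here is a minimal sketch of the classification logic in Python.  The attribute names are hypothetical, invented purely for illustration; the decision rules paraphrase the information supplement.

```python
# Hypothetical attribute names; the rules paraphrase the Council's scoping
# categories from the May 2017 information supplement.
def pci_scope_category(handles_chd_sad: bool, on_cde_segment: bool,
                       connects_to_cde: bool, impacts_cde_security: bool) -> str:
    if handles_chd_sad or on_cde_segment:
        return "CDE System"                         # always in scope
    if connects_to_cde or impacts_cde_security:
        return "Connected To / Security-Impacting"  # always in scope
    return "Out of Scope"                           # all four criteria met

# An NTP server that serves the CDE never touches CHD/SAD but is still in
# scope because it can influence the security of CDE systems.
print(pci_scope_category(handles_chd_sad=False, on_cde_segment=False,
                         connects_to_cde=True, impacts_cde_security=True))
```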

As we have found out at the Community Meetings since the publication of the information supplement, the Council will demand you use their scoping terminology.  If you use the Open PCI Scoping Toolkit scoping categories, you will be asked to restate your questions or comments using their terminology.  So please from here on out use the Council’s terminology whenever discussing scoping categories.

Why Is Scoping A Problem?

Scoping is a problem because organizations think it is the QSA’s problem.  However, the PCI DSS states on page 10:

“At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data, and identify all systems that are connected to or, if compromised, could impact the CDE (for example, authentication servers) to ensure they are included in the PCI DSS scope.”

Eight times out of ten, it falls into the QSA’s lap to determine and confirm PCI scope even though it is the assessed entity’s responsibility to define scope and the QSA’s role is to confirm that analysis.  This is why arguments over scope happen.  QSAs get into trouble because they follow the processes defined below and determine that the scope is not correct.  Had the assessed entity done their work, the argument likely would not have happened or at least would not have been as big as it became.

The purpose of this post is to explain what your QSA is doing when they ask for all that documentation, and what your organization should be doing before the QSA even shows up.  For QSAs, this is what you should be doing to ensure that the scope of your engagement is correct.

Follow The Data

The first thing that people seem to get wrong about scope is fixating on the storage of CHD and ignoring the processing and transmitting of CHD/SAD.  This is a big reason why voice over IP (VoIP) gets missed.  VoIP typically never stores CHD/SAD.  But when customers are making payments over the telephone, CHD/SAD is being discussed and that is what makes the telephone system a CDE and therefore in scope for the PCI assessment.

The key to resolving this is to follow the CHD/SAD through your networks.  When he was a Council trainer, Art (“Coop”) Cooper was famous for constantly telling his classes to “follow the data.”  That is why data flow diagrams overlaid on your network diagrams are so important in determining PCI scope.  Done properly, these diagrams allow you to understand where the CHD/SAD flows through your organization (i.e., transmission), where it is processed, as well as where it ends up stored.

From that analysis, you can then document where, if anywhere, the CHD/SAD is encrypted and who manages the encryption keys.  If your organization manages the encryption keys, then you will need to prove and document that those intermediate devices between the encryption endpoints cannot decrypt the CHD/SAD in order to keep them out of scope.  If an outside third party manages the keys, then scope is reduced to where the encryption endpoint is in your environment.  For more about encryption and scope, see my Encryption Series of posts listed on the Post Series References page.

Once you have completed this activity, you have defined your CDE, and likely more than one.  It is not unusual for organizations to have their VoIP network and solution as one CDE and their eCommerce or brick & mortar retail environment as another.  But there could be even more CDEs depending on your environment.

One other caveat on scoping the CDE.  Devices that are in the CDE but do not process, store or transmit CHD/SAD are still in scope for PCI compliance.  These include devices and systems such as jump servers, switches, routers, Active Directory domain controllers, DHCP servers, DNS servers and firewalls.

And that is the rub in this process.  It is not unusual for a client to determine that their CDE is larger than they originally believed.  This is particularly true in environments that are rapidly changing.  The reason is that changes occur that involve the processing or transmission of CHD/SAD, and people forget that those are also in scope because of their fixation on the storage of CHD.  So do not be surprised when this analysis turns up in-scope devices that were not believed to be in scope.

Connected To Systems

With the CDE(s) defined, we now need to define all the systems that connect to the CDE(s), hence the “Connected To” designation by the Council.  The reason Connected To systems are in scope is because they can influence the security of the systems and devices inside the CDE.  The term you will hear some people use is that Connected To systems can be “infectious” to systems in the CDE.

The first place to start is by reviewing the firewall rules or access control lists (ACL) that segment your CDEs from the rest of your network segments.  You will likely find specific IP addresses for devices such as Active Directory domain controllers, security incident and event management (SIEM) solutions, FTP, DNS, DHCP, RADIUS, TACACS+ and similar services.  It is also not unusual to see entire subnets of application and database servers permitted through these rules.
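As a minimal sketch of that review, assuming the rules have been exported to simple source/destination CIDR pairs (the subnets below are made up, and real firewall exports vary by vendor), Python’s ipaddress module can flag every rule that permits traffic into a CDE subnet:

```python
import ipaddress

CDE_SUBNET = ipaddress.ip_network("10.10.50.0/24")  # hypothetical CDE VLAN

rules = [
    ("10.20.1.15/32", "10.10.50.0/24"),  # AD domain controller into the CDE
    ("10.30.0.0/16", "10.40.0.0/16"),    # two unrelated segments
]

for src, dst in rules:
    # Any rule whose destination overlaps the CDE marks a Connected To system
    if ipaddress.ip_network(dst).overlaps(CDE_SUBNET):
        print(f"Connected To candidate: {src} is permitted into the CDE")
```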

The second place to investigate is the organization’s most recent penetration testing results for network segmentation.  It still amazes me how, even with a detailed examination of the firewall rules and ACLs, there are still devices that end up with connectivity into the CDE because of human error in examining the rules and ACLs.  So use the network segmentation testing to double check your review of the firewall rules and ACLs.

Once you have identified all these networks, you then need to make sure that you have an accurate inventory of all the systems and devices on them.  I typically ask for Nmap scans of the network subnets to make sure the inventory is complete.  I take the Nmap results and compare those to the organization’s configuration management database (CMDB) or whatever they use to track their system/device inventory.
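Here is a minimal sketch of that cross-check, assuming the Nmap sweep was saved in grepable format (nmap -sn -oG) and the CMDB export is one IP address per line; both file names are made up for illustration:

```python
import re

def nmap_live_hosts(grepable_output: str) -> set:
    """Pull the addresses of responding hosts from Nmap's -oG output."""
    return set(re.findall(r"Host: (\S+).*Status: Up", grepable_output))

with open("nmap_subnet_scan.gnmap") as f:
    scanned = nmap_live_hosts(f.read())

with open("cmdb_export.txt") as f:  # hypothetical export, one IP per line
    inventoried = {line.strip() for line in f if line.strip()}

# Either difference means the inventory cannot be trusted as-is.
print("On the wire but not in the CMDB:", scanned - inventoried)
print("In the CMDB but not responding:", inventoried - scanned)
```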

I also make sure that all the devices and systems found in this process are included in the internal vulnerability scanning.  Again, it is not unusual to find out that devices and systems are not being scanned quarterly for PCI, which is why this check is important.

Now We Have PCI Scope

With all of this done, we now know the scope of the environment and what must be assessed.  But, remember, while you are done for the current assessment, this all needs to be performed again next year.

25 Feb 2019

Network Segmentation Testing

As part of penetration testing, merchants and service providers are required to test that their network segmentation is properly implemented and functioning.  Sounds like a simple enough task, but you would be amazed at the bizarre and complicated discussions that QSAs encounter when segmentation testing comes up.

As a reminder, requirement 11.3.4 states:

“If segmentation is used to isolate the CDE from other networks, perform penetration tests at least annually and after any changes to segmentation controls/methods to verify that the segmentation methods are operational and effective, and isolate all out-of-scope systems from systems in the CDE.”

For service providers, requirement 11.3.4.1 adds the requirement of testing at least every six months and after any changes to network segmentation, not just “significant changes”.

Regardless of whether you are a merchant or a service provider, how segmentation testing is performed is the same.

So why all of the issues?

First, the PCI DSS does us no favors with the “guidance” for requirement 11.3.4 which states:

“Penetration testing is an important tool to confirm that any segmentation in place to isolate the CDE from other networks is effective. The penetration testing should focus on the segmentation controls, both from outside the entity’s network and from inside the network but outside of the CDE, to confirm that they are not able to get through the segmentation controls to access the CDE. For example, network testing and/or scanning for open ports, to verify no connectivity between in-scope and out-of-scope networks.”

The first point of confusion typically relates to the phrase “penetration testing”, as though segmentation testing somehow requires the use of a penetration testing tool such as Metasploit or similar.  Nothing could be further from the truth.  But the terminology of “penetration testing” clouds the task.

The second point that seems to confuse is the last sentence, which starts out with “For example …”.  People seem to miss the start of the sentence and take it that all they have to do is make sure that out of scope devices cannot get to the CDE and that is it.  While network segmentation testing is simple, it is not quite that simple.

What Is Segmentation Testing?

After going through the debunking of all of the mythology and rumors surrounding network segmentation testing, this is the first question asked.  I always take people back to what the purpose of network segmentation testing is – to prove network segmentation is implemented and is functioning as designed to keep the various networks logically separated.

When I say, “various networks”, I am referring to the network segments defined in the information supplement “Guidance for PCI DSS Scoping and Network Segmentation” issued in May 2017.  In that document, the following terminology is used.

  • CDE Systems – any systems/devices that directly process, store or transmit sensitive authentication data (SAD) or cardholder data (CHD), or are on the same network segment as such systems/devices. These systems/devices are also sometimes referred to as Tier 1 or Category 1.
  • Connected To or Security Impacting Systems – are systems that provide services to the CDE or have connections to systems/devices in the CDE that could adversely affect the security of the systems/devices in the CDE. These systems/devices can also be referred to as “Shared Services”, Tier 2 or Category 2.
  • Out of Scope Systems – are systems that cannot connect to the CDE, also referred to as Tier 3 or Category 3.

For PCI compliance, all CDE Systems (Category 1) and Connected To (Category 2) systems are always in scope.  However, for network segmentation testing, Category 3 systems/devices are also included because the testing must prove that Category 3 cannot get to Category 1 and vice versa.  That is typically where network segmentation testing goes wrong: it only proves that Category 3 cannot get to Category 1 and then stops.  The guidance for requirement 11.3.4 provides some clarity in its second sentence, which states:

“The penetration testing should focus on the segmentation controls, both from outside the entity’s network and from inside the network but outside of the CDE, to confirm that they are not able to get through the segmentation controls to access the CDE.”

The Council has advised that what they want is testing from inside and outside the CDE as well as from other network segments, including the internet if applicable.  The idea is to further support the analysis and findings from a QSA’s review of the firewall rules under requirements 1.3.x of the PCI DSS.  The reason for this is that, with some breaches and the advent of “next generation” firewalls and more sophisticated security technologies, the Council felt that assessed organizations and QSAs were not necessarily proving that network segmentation was truly in place and wanted some additional testing and confirmation.

How Do I Test?

First and foremost, timing of the testing is very important.  For merchants, it should be conducted as close to annually as possible.  For service providers, it is required to be conducted as close to every six months as possible.  But you also need to consider the concept of “significant change”.  If there have been significant changes that affected network segmentation, then the network segmentation testing must be done as soon as possible (the Council typically recommends within 30 days) after the significant change has been implemented.

While the tool used to conduct the test can be as simple as Nmap or the like, the testing itself can be complicated depending on how your network is segmented.  I have clients with hundreds of segments, which results in a very time-consuming amount of testing.  The key here is to be thorough, but not insanely thorough.
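To show how simple the mechanics really are, here is a minimal sketch of one test case: from a host on an out-of-scope segment, confirm that TCP connections into the CDE are blocked.  The target addresses and ports are hypothetical, and a real test would cover far more ports and every segment pairing, which is where a tool like Nmap earns its keep.

```python
import socket

CDE_TARGETS = ["10.10.50.5", "10.10.50.6"]  # hypothetical CDE hosts
PORTS = [22, 443, 1433, 3389]               # a small sample of common ports

# Run this from a host on an out-of-scope segment; every attempt should fail.
for host in CDE_TARGETS:
    for port in PORTS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"FAIL: {host}:{port} reachable - segmentation broken")
        except OSError:  # includes connection refused and timeouts
            print(f"PASS: {host}:{port} blocked")
```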

I have no problem with network segmentation testing that includes a review of firewall and ACL rules and uses that information to test from a representative segment where the same rules apply to a group of segments.  The key is to be able to justify why you picked one segment over another and not to repeatedly test from only one segment for every test.  Provide the rules along with an explanation of your justification for what you did.  This will allow the QSA to understand how you worked and why.

But Nmap is not the only tool that can be used.  There are a number of network management/modelling/monitoring tools, such as FireMon, Tufin and RedSeal, that can also be used to prove out network segmentation.  In fact, these tools can provide ways to perform the network segmentation testing that do not involve scanning the network at all, merely running reports against the databases these tools create.

Regardless of the tool used, be careful.  I have seen too many reports where the tool never actually reached the devices within the network segment, and the results did not prove segmentation was in place and functioning because, when matched up against the server configurations, other forms of communication showed up.

Segmentation Testing Reporting Requirements

Once you have completed your network segmentation testing, you need to create a proper report of those results.  At a minimum, a network segmentation testing report should have the following sections.

  • A one to two page (at most) Executive Summary of the network segmentation test, the date the testing was started, the date when testing was completed, the results (i.e., pass or fail) and a summary of all findings and recommendations.
  • Document who conducted the test including a bit of background as to why they are considered capable of conducting the test by including any information security certifications they hold and other relevant information security experience.
  • Provide the reader a frame of reference for the testing performed. At a minimum, this should include a high-level diagram of the various segments (i.e., CDE, Connected To and Out of Scope) and an overview of the IP addressing within each of those segments.
  • Document and discuss any significant changes that occurred since the last network segmentation test and what was done to prove that significant changes did or did not occur since that test. This is necessary to confirm to the QSA and other readers that you are not just following some predefined schedule (i.e., annually or semi-annually) but are also ensuring that significant changes potentially drive segmentation testing as required by the PCI DSS.
  • Document the methodology that was followed and the tools that were used to prove out network segmentation. What is needed in this section is specificity.  Document step by step, in enough detail that someone else could conduct the testing, what you did to prove network segmentation was in place and functioning as expected.
  • Document any findings and recommendations that result from the network segmentation testing, particularly those findings that prove the network is not segmented as expected, resulting in a failed test. If segmentation is not in place, then you will need to remediate those findings and retest to prove that the remediation was successful.  If retesting is required, you need to keep all reports so that you have a record of everything that has been tested.

19 Dec 2018

The Remote Worker Dilemma

We received the following question during the last PCI Dream Team session back in October.

“We have a call center that sometimes takes credit card numbers from customers.  Our senior management keeps pushing us to come up with a work-from-home option for some of our call center employees in case of DR and business continuity.  We keep telling them that PCI says that all components of such a home setup are subject to PCI standards and thus it is impossible.  Have any of you seen any solution that would allow this?”

Since that session the Council released the new telephony information supplement that has created a stir in the PCI community.  I wrote about the new information supplement a few weeks back so I will not cover that here, but I will rely on it to answer this question.

First and foremost, remote workers are allowed under the PCI DSS as there are no requirements that prohibit it.  However, there are PCI-related considerations when you want to implement such an approach.

You will obviously need to develop PCI compliance policies, standards and procedures that will support remote working.  If your organization already has policies, standards and procedures for clean desks, secured work areas, protection of information and proper handling of sensitive authentication data (SAD) or cardholder data (CHD), then you probably have the bulk of what you need.  Your documentation will also need to allow your organization to conduct annual and spot inspections of remote working environments for compliance with organizational policies, standards and procedures.

If you do not have those policies, standards and procedures, then you will need to get them published and approved, and have all employees and contractors formally acknowledge them.  Most organizations’ policies, standards and procedures work just fine for corporate environments but do not consider the situation when workers are not in a corporate facility.  As a result, it is not unusual to see organizations develop policies, standards and procedures that take into account that the remote worker’s working environment might not be as secure as a corporate controlled office.

The annual inspection can consist of the remote worker taking a picture of their work environment and filling out a form that ensures the remote worker is complying with relevant organizational policies, standards and procedures as related to remote working.  I have clients that have remote workers fill out the relevant PCI SAQ depending on their remote worker environment.  In all cases, the employee signs the form/AOC stating that they are compliant with all relevant policies, standards and procedures.

It is when the organization has questions, issues or concerns with a remote worker that the spot inspection clause becomes useful.  The spot inspection capability allows organization management or an auditor to go to the remote worker’s location and personally examine the work area to ensure that it complies with all policies, standards and procedures.

With the paperwork out of the way, let us now discuss the technical challenges related to remote workers.  The goal here is to minimize the PCI scope of the remote worker’s configuration.

The easiest way to do this is using a point-to-point encryption (P2PE) validated solution or an end-to-end encryption (E2EE) solution for the keying of SAD/CHD.  Of course, this means that you will have to ensure that your application will work properly with a P2PE/E2EE solution, which further means not allowing SAD/CHD to be keyed through anything other than a P2PE/E2EE validated terminal, also referred to as the point of interaction (POI).  This can also mean pairing the P2PE/E2EE solution with tokenization if your application is expecting CHD back at the end of the transaction.

But P2PE/E2EE only addresses the transaction, not the conversation that results in the transaction.  To reduce the costs of remote workers, organizations typically implement a softphone.  Softphones are great.  However, they result in a PCI scoping problem.  As a reminder, when a telephone system is used for conversations involving SAD/CHD, that system and its networks are in the cardholder data environment (CDE), also known as Category 1.  As a result, any other system that connects to the telephone system is now also part of the CDE.  Since a softphone cannot be readily logically or physically segmented from the workstation it runs on, it drags the workstation into PCI scope regardless of whether or not SAD/CHD is discussed.

The solution to the softphone issue is to use a physical VoIP phone with a headset.  But it is not as simple as just swapping in a physical phone for the softphone.  That physical phone needs to be on a logically or physically segmented network that does not include any devices that you desire to be out of PCI scope.  It is that segmentation that drives up the cost of the remote worker configuration because you now need to have a managed network device to allow for separate VLANs or physically separate network connections.  Not impossible, just costlier than delivering a cable/DSL modem with four Ethernet ports to the remote worker’s location and being done.

As a result of all of this, it is not unusual for organizations that allow for remote workers that need to be PCI compliant to supply those remote workers with a US Department of Defense compliant document shredder, computer or workstation, router, network switch, display(s), keyboard(s), secure POI(s), telephone(s) and any other equipment necessary to ensure compliance with the PCI standards.

In addition to this, there may be other requirements due to the European Union’s General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA) or other security or privacy regulations or requirements.

30 Nov 2018

The New Telephony Information Supplement

In case you missed it, the PCI SSC released their new information supplement on telephony this week.  Since I served on the Special Interest Group (SIG) that developed it, I thought I would share my thoughts on the new document.

A Bit Of Background

At the start of the SIG a number of participants brought up the fact that the prior Telephony Information Supplement issued in 2011 had basically been ignored by the qualified security assessor (QSA) community and companies being assessed.  A number of QSAs and Participating Organization (PO) representatives explained to Council representatives that they had personally witnessed situations where QSAs ignored voice over IP (VoIP).

That brought about the following response from one of the Council members on the call:

“All QSAs are trained to understand that VoIP is in scope if CHD/SAD [cardholder data/sensitive authentication data] is discussed on any telephone calls.”

The consensus response was that while that is no doubt the case, many participants attested to the fact that they had encountered QSAs ignoring VoIP as being in scope.  Some had witnessed QSAs telling their clients and prospective clients not to worry about VoIP because it would not be in scope.  These same QSAs did worry about the security of call recordings, but they were leaving the rest of telephony out of scope.

That response seemed to send a chill through the Council representatives.  No one identified any particular qualified security assessor companies (QSAC) but the participants made it clear that VoIP was largely being ignored in PCI assessments.  The point was also made that some QSACs were benefiting handsomely in obtaining engagements because of their willingness to ignore VoIP.

But that exchange also identified a shortcoming with today’s telephony solutions.  QSAs and technology people do not seem to understand telephony solutions or appreciate their risks.  Therefore, they do not know where to even start in securing them, and those that make an attempt often find themselves in one or more “rabbit holes”.  As a result, it is easier to ignore these telephony solutions than to try and deal with the intricacies and vagaries of securing them.

There were also brief discussions about the shortcomings of the original information supplement from 2011.  The biggest complaint of which was that it was call center centric and did not reflect what was being implemented in the real world.  Various people explained that the 2011 document did not address call centers operated within corporations on a shared telephony solution with the rest of the business nor was there any useful guidance provided for PCI compliance.

Such configurations obviously complicate the scope of PCI assessments since any device connected to the shared VoIP system and network was therefore in scope (hence why a lot of QSAs ignore VoIP).  As we were to find out, the new version of the information supplement would do only a little to address this complex issue.

Disappointment

Trust me, it was not the SIG’s intent nor the Council’s intent to disappoint people, but I have a feeling that a lot of people will be disappointed with this information supplement.  Not that there are not good ideas and solutions presented, they are just not always fleshed out as well as they should be and do not always represent what actually goes on with the solution.  The reason for that is that telephony solutions all operate differently when performing various functions such as call forwarding, conference calling and the like.  As a result, providing real guidance depends greatly on how a given solution functions in a particular circumstance.  As we found out a number of times, this issue would come back to bite SIG participants repeatedly.

In my very humble opinion, the latest information supplement is lacking in detailed guidance for a lot of telephony situations, particularly those complicated by how vendors have approached Unified Communications, which is the driving force behind most vendors’ current telephony solutions.  The document points out a lot of scope and security concerns regarding the use of softphones and VoIP, only to leave the reader essentially to their own devices as to how to address those concerns using existing guidance from other information supplements.

That was a point of contention as the information supplement was developed.  There were a number of people that argued that more guidance needed to be provided because the issues are more complicated and nuanced than the supplement leads people to believe.  They wanted more discussion with the card brands about the risks involved so that all parties could come to a consensus over what was acceptable risk and whether there were better ways to address those risks, and therefore provide more guidance.  Unfortunately, we were told that there was not enough time to have such discussions, which in great part drove what resulted in the document you now have access to.

Then there are the threats to VoIP that seemed to be minimized in discussions.  At one point in a meeting someone stated that VoIP is not an attack vector so there is no need to worry about it.  This individual was almost immediately reminded that this is how we got into this situation in the first place.  People ignored the risks to processing, storing and transmitting payment card data and then we all had to do a fire drill to secure that information.

Using CVE Details, I was able to identify close to 400 specific threats to VoIP and/or specific VoIP vendor solutions.  Of those, there were around 250 to 300 that appeared to be able to compromise VoIP and, by association, CHD/SAD.  While most had been patched, there were around 20 that had no fix because they were flaws in the protocols themselves (mostly due to UDP streaming).  The bottom line of this research is that while VoIP might not be an active attack vector at this point in time, it is ripe for being turned into one.  Worse, current information security practices have minimal effect on a lot of the attack vectors thanks to UDP.  And if that was not bad enough, in a lot of cases all it takes is a telephone call to start the attack process.
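For anyone who wants to reproduce that sort of count, here is a rough sketch.  CVE Details does not offer an official API that I would rely on, so this queries NIST’s NVD keyword search instead; the keyword is an assumption, and the totals will not match my numbers above.

```python
import requests

# NVD 2.0 CVE API; unauthenticated requests are heavily rate limited.
resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params={"keywordSearch": "VoIP", "resultsPerPage": 1},
    timeout=30,
)
resp.raise_for_status()
print("CVEs matching 'VoIP':", resp.json()["totalResults"])
```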

With that as a background, while the new information supplement is a quantum leap above the 2011 information supplement, a lot of participants feel it is still somewhat lacking in guidance.

Telephony Guidance Anger

I can already anticipate the anger that will result from this one particular recommendation on page 55, section E.4 Unified Communications, where it states:

“As a result, entities can find that their only option to minimize the PCI scope of their VoIP environment is to implement multiple instances of in scope VoIP and out of scope VoIP.”

Say what?!?!?

That will be a huge burst of a bubble for a lot of organizations, QSAs and ISAs alike.  The rationale for this statement goes to Unified Communications and how most vendors have approached it.  The telephony system vendors have now so tightly integrated services such as voice, voice mail, facsimile, video, telepresence, instant messaging, email and other communication mediums that it is no longer possible to decouple and move, say, instant messaging or email to a different network segment from the call manager.  As a result, there are no easy ways to implement network segmentation around telephony solutions so that some components are in the CDE (Category 1) and others are in Shared Services (Category 2).

Unfortunately, Unified Communications is not the only situation where two telephony solutions will be needed.  Softphones, call centers on common corporate telephony solutions and other telephony features/functions will also create situations where the only way to be PCI compliant will be to implement at least two separate telephony systems.

Speaking of softphones, if you were angry at the first statement, your anger will likely only grow when you read the following on page 24, 5.2.4 Softphones:

“It is important to note that the use of such systems [softphones] to capture payment card account data would bring the workstation and probably the network it is connected to into PCI DSS scope.”

The next paragraph after the quotation points readers to the Network Segmentation Information Supplement for guidance.  Unfortunately, no matter how you try to segment the workstation, the softphone application will put the workstation in scope regardless.  No other guidance is provided regarding softphones.  It is not like this was not discussed within the SIG, it is just that there was no agreement on how to address this subject.  So, what you read in this section is the guidance you get.

One potential solution discussed to minimize scope is to put the softphone in a virtual desktop (VDI) workstation.  That would put the VDI in the CDE and the workstation in Shared Services.  However, the VDI approach can be fraught with compatibility issues and other technical problems that may not reliably provide telephony service to end users via the softphone.  There is also still some risk of eavesdropping through the end user’s workstation, but that risk is now limited to the workstation’s memory rather than the softphone software, and it can sometimes be addressed with other workstation controls.  This of course assumes that the VDI solution is easier to control, secure and monitor than the physical workstations.  The bottom line is that there are a lot of moving parts that would have to be assessed on a case-by-case basis, so the consensus was that there was no general, one size fits all recommendation that could safely be made about the VDI approach.

Another scope reduction approach is to use “inexpensive” physical SIP phones for handling calls that are logically network segmented away from the workstation.  I have a number of clients with agents configured this way to limit telephony scope to just their SIP phone.  But then their router must support two VLAN connections and those VLANs cannot be allowed to access each other.  That is easy to do in a corporate environment but can complicate things with SOHO workers.  Such a solution can drive up networking and equipment costs to an unacceptable level for some organizations.  Particularly organizations that were looking at softphones to reduce costs.

There are plenty of other areas of the information supplement that will generate anger mainly because for the first time, the PCI SSC is calling out more areas that are in scope for PCI compliance that organizations and some QSAs/ISAs treated as, or thought were out of scope.

Miscellaneous Comments

There are a few more points that I felt should be discussed.

On page 43, 7.2.2 SIP Trunking, the following quote will be of interest.

“As the technology matures, technical boundaries between an organization and SIP Trunk provider may become harder to define. Scoping for these services will therefore require an understanding of how connections are made between the different entities.”

I feel this is already an issue because the boundaries are already blurred.  When you realize that VoIP is predominantly a UDP protocol, there is little you can do from an information security standpoint to protect your telephony system.

First the carriers will tell you that their SIP demarcation device will provide some amount of security for your organization.  Exactly what amount of “security” that device actually provides is questionable at best.

But speaking of UDP, page 54, E.1 Protocols, Ports and Network states the obvious.

“… the use of UDP may render the detection of malicious content or payload more difficult.”

More difficult?  In some ways, it can be impossible to detect malicious payloads because the traffic is streaming and you want to ensure continuity of the conversation.  This is the biggest security issue with VoIP: because it relies on UDP streaming, VoIP exploits use that stateless streaming to their advantage by embedding the attack in the voice/video stream.

This inevitably brings up the discussion of firewalling your VoIP because that seems to have been the answer for every other security issue.  While the firewall will provide some amount of control and monitoring of TCP connections, it will do nothing for the UDP streams that VoIP relies upon.

Yet I have actually had some firewall vendor sales people claim that their firewalls are “VoIP aware” and can identify certain “bad” VoIP packets.  I’m not sure exactly how you can identify bad UDP audio/video data streams, but they claim to have some sort of proprietary methods for doing just that.  Of course, when you attempt to drill down on that “proprietary method” you get essentially stonewalled because it is “proprietary”.  I take that as an indication of sales “smoke and mirrors”.

Then there is the solution of encrypting all VoIP traffic.  I have had a number of clients suggest this as a solution to the security of telephony.  While encryption of all VoIP traffic minimizes the ability to eavesdrop on calls via the network, it does not remove the risk of eavesdropping via compromised endpoints which is much greater than the network risk.  Encryption also does not remove the risk of malware injected via the UDP stream which is the bulk of the real threats to VoIP.  After all of the discussion surrounding encryption, I really see only marginal value in the use of encryption of VoIP traffic from a security perspective.

Also, on page 54, E.2 VoIP Attacks and Vulnerabilities you get this statement.

“VoIP equipment and software are susceptible to vulnerabilities that could allow someone with malicious intents to gain access to your network, intercept and gather customer data, or initiate a Denial Of Service attack.”

I cannot tell you how many IT professionals do not realize the risk presented by VoIP and its infrastructure.  They seem to treat it like the PABXs of old that used to be located in basements next to the telephone carrier’s point of presence (POP) at their organization’s facilities.

Granted, we have moved away from the Windows and Linux versions of call managers that were standard fare when VoIP originally came out.  Most of today’s call managers are based on some proprietary derivative of Linux or Unix stripped down and modified for telephony.  But there are open source solutions that run on Windows and Linux server editions.  The bottom line though is that regardless of what you run, these are still servers no different than any other servers and they need to be properly configured and get regular patching just like any other server.

That is my take on the latest telephony guidance from the Council.  Better than what was produced in 2011 but still lacking in some areas.

18 Aug 2017

Why Voice Over IP Matters

“Voice over IP are the most insidious set of communication protocols ever invented by man.” – Jeff Hall

I have been having some interesting conversations of late with prospects and clients regarding Voice over IP (VoIP).  These conversations all seem to revolve around whether or not VoIP is in scope for PCI compliance.  Ultimately the conversation turns to a discussion of why I believe VoIP is in scope for PCI and almost every other QSA seems to never bring the subject up.

The primary reason I believe VoIP is in scope is that the PCI SSC says so.  If you read FAQ #1153, titled ‘Is VoIP in scope for PCI DSS?’, the Council makes it painfully clear that VoIP is definitely in scope if it transmits sensitive authentication data (SAD) or cardholder data (CHD).  If you doubt it, here is the exact quote from the first paragraph of that FAQ.

“While PCI DSS does not explicitly reference the use of VoIP, VoIP traffic that contains cardholder data is in scope for applicable PCI DSS controls, in the same way that other IP network traffic containing cardholder data would be.”

Yet even when it is stated that clearly, I still run into people that claim I am making a mountain out of a mole hill and their VoIP is not a risk because other QSAs have never inquired about it.  What that merely means is that other QSAs are ignoring it when they should not be ignoring it.

The first problem with VoIP seems to be that very few people understand it, which in my opinion is the biggest reason a lot of QSAs avoid the discussion.  But it is not just QSAs.  I speak with network administrators, information security personnel and other technology people all of the time, and if there is one topic that will glaze over their eyes, it is VoIP.  When the discussion turns to VoIP, people seem to hark back to that old PBX system tucked away in the basement or closet.  No one seems to remember that the PBX did get updates (usually two or three a year).  All anyone remembers is that it just worked and that it got replaced once, maybe twice, in a generation.  And the biggest risk was toll fraud from the Caribbean.

But scarier yet is that these people do not seem to completely understand how VoIP and its protocols work let alone the risks.  The biggest problem with VoIP are the protocols used and the reason for my quote at the start of this post.  Regardless of whether you are talking SIP, H.323, H.248, whatever, they all operate the same.  Call set up (start of a call) and call tear down (end of a call) are the only points of a VoIP telephone conversation that are stateful, i.e., conducted via TCP.  The actual call itself is all done via streaming UDP just like any other audio/video stream.  Adding insult to injury, VoIP also requires a large number of the ephemeral UDP ports above 32767 to be open.  UDP, being what it is, provides one of the best transport mechanisms for delivering malware.  There are hundreds of exploits for VoIP from the most benign DDoS attack to turning a VoIP telephone into a spying device by surreptitiously enabling its microphone and video camera (if it has a camera).  But my personal favorites are the attacks that use the VoIP network as an entry point into an organization’s data network.  The bottom line is that the only way to firewall any of the VoIP protocols for actual protection is to keep them away from the rest of your network.
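To see why those UDP streams are so hard to police, consider how the media port gets picked: it is negotiated per call inside the SDP body of the signaling exchange, so a firewall cannot know ahead of time which UDP port the audio will use.  Here is a minimal sketch with a made-up SDP fragment:

```python
import re

# A made-up SDP body such as a SIP INVITE might carry.
sdp = """v=0
o=agent 2890844526 2890842807 IN IP4 10.10.60.21
c=IN IP4 10.10.60.21
m=audio 49172 RTP/AVP 0
"""

# m=<media> <port> <proto> ...: the UDP port this call's RTP audio will use.
match = re.search(r"^m=audio (\d+) RTP/AVP", sdp, re.MULTILINE)
print("This call's RTP stream lands on UDP port", match.group(1))
```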

But it can and does get worse.  Add in VoIP trunks from your telephone carrier and you really begin to have a recipe for disaster.  When you have VoIP trunks from your carrier, your internal VoIP network is really only protected from every other VoIP network by the carrier and your call managers.  It is that sad fact that keeps a lot of information security professionals up at night.  If security is all about your weakest link, how do you protect yourself and minimize your risk when your weakest link is essentially the entire world’s phone systems?

Let us add insult to injury in this tale of woe and bring in the concept of unified communications and its primary tool, the softphone.  A softphone is software that turns a PC into a telephone using VoIP.  All a user needs is an internet connection and a VPN connection to the office network, and they have their office telephone no matter where they are in the world.  However, the softphone opens up that PC to the same risks that exist for every other phone using that call manager.  But if your VoIP system is used to take calls that discuss cardholder data (CHD), you have now turned that PC with a softphone into a Category 1, in-scope device because it is now connected to a Category 1, in-scope system and network.  Suddenly all of that effort to achieve PCI scope reduction flies right out of the window.

But this all gets all the more fascinating as people go back to their VoIP vendors and find out even more troubling issues with their VoIP solutions.  I remember numerous conversations where people thought that once a call was connected to a phone, the call manager was no longer involved and therefore the call managers could be put on a different network segment, only to find out that call managers act as bridges when calls are conferenced, involve telepresence, or are to/from outside lines.  They also find out that, with the advent of unified communications, services such as instant messaging and email integration are no longer separate servers/functions from the call manager and cannot be easily segmented away to take them out of scope.

But then there is the revised draft version of the VoIP information supplement from the PCI SSC.  Great guidance if you have a call center; worthless for any other sort of VoIP implementation.  It treats VoIP as a discrete operation, as though only the call center model exists for VoIP implementations.  Granted, call centers are the largest risk when they are in scope because typically 80%+ of their call volume involves payments.  But all sorts of organizations take payment information over the phone without being a call center.

So, what about the organization that has call centers and also normal business people all on the same system?  Based on the information supplement, every phone is a Category 1 device unless the call center VoIP system is separate from the rest of the organization.

Must the call center be on a separate VoIP system from the other users?  It would appear to be that way to manage scope.  But again, there is no explicit guidance for any other implementation model other than a call center.

And if the other users take overflow calls from the call center or occasional calls dealing with PAN, how would separate systems help with that situation?  Near as I can tell, it does not help.

And what about unified communication solutions?  No idea, as the information supplement does not reference unified communication solutions.  However, given that the whole premise of unified communications is tight integration into most VoIP solutions, other communication methods such as instant messaging and telepresence would likely be in scope as well for PCI compliance.

The bottom line is that the advice I provided over six years ago in this blog is still accurate today.

11 Feb 2017

The Council Gets A Clue

Late this week the PCI Security Standards Council issued a new information supplement titled ‘Multi-Factor Authentication’ after the brouhaha that occurred last fall at the Community Meeting in Las Vegas.  For once, the Council has issued an excellent reference regarding the issues of multi-factor authentication (MFA).  I still have a couple of minor bones to pick with this document, but more on that later.

If you understand the concepts of MFA, you can skip through the document to the end where the Council presents four scenarios on good and bad MFA.  These are well documented and explain the thought process behind why the scenario works or does not work for MFA.  The key takeaway of all of this is the independence of the MFA solution from the logon process.  The Council is getting in front of the curve here and stopping people from creating insecure situations where they believe they are using MFA that minimizes or stops breaches through administrators or users with access to bulk card data.

Now for a few things that I do not necessarily agree with in this document.

The first involves the Council’s continued belief that hardware security modules (HSM) are actually only hardware.  On page four, the following statement is made.

“Hardware cryptographic modules are preferred over software due to their immutability, smaller attack surfaces, and more reliable behavior; as such, they can provide a higher degree of assurance that they can be relied upon to perform their trusted function or functions.”

The Council has made similar statements over the years under the mistaken assumption that HSMs are only hardware.  HSMs are hardware that use software to manage keys.  There are standards that are followed (e.g., FIPS 140) to ensure that the HSM remains secure, but these devices are predominantly software driven.  That is not to say that just any device can serve as an HSM, but a lot of us in the security community are concerned that the Council continues to perpetuate the myth that HSMs are only hardware, which is patently false.

My other issue comes on page six as part of the discussion regarding the use of SMS for MFA.

“PCI DSS relies on industry standards—such as NIST, ISO, and ANSI—that cover all industries, not just the payments industry. While NIST currently permits the use of SMS, they have advised that out-of-band authentication using SMS or voice has been deprecated and may be removed from future releases of their publication.”

While everything in this statement is accurate, it gives the uninitiated the impression that SMS or voice is no longer a valid MFA solution.  I know this to be true because I have fielded questions from clients and prospects on this subject, particularly about SMS.  The key point is that this is not like SSL and early TLS, where NIST called them out as insecure and said they should no longer be used.  This is a “heads up” from NIST to everyone that there is an issue that may make SMS and voice not secure enough for MFA.

But while there is a risk, a lot of us in the security community question the viability of that risk for a merchant versus a bank or a government agency.  While I would not want any bank or government agency to use SMS or voice for MFA, a small business may not have a choice given their solution.  The reason is that an attack on SMS or voice takes enough effort that only a high-value target such as a bank or government agency would be worth it.  In my very humble opinion, while a total ban is the easy solution, this is an instance where the Council should take a more nuanced approach toward the use of SMS and voice for MFA.  The bottom line to me is that a small merchant using any MFA solution, even a flawed one, is better off than one using no MFA solution at all.

I would recommend the following approach to manage this risk.

  • Level 4 merchants can be allowed to use SMS or voice for MFA.
  • Level 1, 2 and 3 merchants would be allowed to transition away from SMS and voice to a more secure MFA solution within one year of NIST stating that they are no longer acceptable.
  • All service providers would not be allowed to use SMS or voice for MFA once NIST states that both are no longer acceptable. This means service providers should start transitioning now if they use either.

Those are my thoughts on the subject.  I look forward to the comments I am sure to receive.

15
Dec
16

The Council Speaks On A Number Of Topics

The Council had a Webinar session for QSAs and ISAs on Thursday, December 15. It was a great session, but at only an hour, there were a lot of questions that went unanswered.  The following were the more notable discussion topics.

Not Tested

The Council got the message and they are working on new wording for the AOCs as well as some guidance for “Not Tested” and how it can be used and not impact PCI compliance.  They expect to have something issued in the first quarter of 2017.

Network Segmentation and Scoping

This was a very hot topic and drew a lot of questions and some useful answers as well as generating a slew of new questions.

We got a definition of “purpose-built controls”.  There really is not any change here from what the Council has told QSAs and ISAs in the past regarding segmentation.  The bottom line is that “purpose-built controls” are those controls that segment one network from another network.  That can be firewall rules, access control lists (ACL) or any other controls that control or limit the communications from one network to another.  I posed a question about whether encryption such as TLS and IPSec is still a valid segmentation control, but it did not get answered.  I am assuming it still is, given the Council’s statement that nothing has changed, but until we have explicit confirmation, that is an assumption, not a fact.
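Whatever the control, its effectiveness can be demonstrated with a simple connectivity test.  Below is a minimal sketch, mine and not anything the Council prescribes, of the sort of check that could be run from a supposedly out-of-scope segment; the addresses and ports are hypothetical.

```python
import socket

# Hypothetical CDE services that an out-of-scope host should NOT reach.
CDE_TARGETS = [("10.10.1.5", 443), ("10.10.1.6", 1433), ("10.10.1.7", 22)]

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from an out-of-scope segment: any success means the purpose-built
# control (firewall rule, ACL, etc.) is not actually enforcing separation.
for host, port in CDE_TARGETS:
    status = "OPEN - segmentation FAILED" if can_connect(host, port) else "blocked"
    print(f"{host}:{port} -> {status}")
```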

The Council answered a number of questions regarding whether in-scope devices and out-of-scope devices can co-exist on the same network segment.  As usual, we got the “it depends” discussion.  The bottom line is that it depends on the threat the out-of-scope devices present to those in scope.  If an organization has lax security controls over all of its networks and devices, then I would be hesitant to allow out-of-scope devices on the same network segment as in-scope devices.

One of the most amazing discussions on this topic was an answer given regarding whether a device that has only an outbound connection from the cardholder data environment (CDE) can be considered out of scope.  Under the Open PCI Scoping Toolkit, this would be categorized as a 2C system.  The Council started out with their stock answer of “it depends” and then clarified.  The answer given was that while the system would be in scope because it is connected to the CDE, the requirements it would need to comply with would depend on the risk the system presents to the CDE.  This seems to give organizations an opportunity to argue for a minimization of requirements.  I am sure this will result in a lot of arguments between QSAs, ISAs and their assessees in the future.

As a funny aside, the Council mentioned the “three hop rule” and then feigned ignorance as to where it came from.  As I pointed out in my post, it was from the 2014 Community Meeting in Orlando.

Not-Listed Encryption Solutions

This guidance is a train wreck and just seems to keep getting worse.  The Council gave a lot of answers to questions, but it just seemed like they were digging an ever deeper hole, not filling it in.

The biggest news is that the Non-Listed Encrypted Solution Assessment (NESA) document should be available for review in the first quarter of 2017.

The next biggest news was the Council reconfirming that this is only guidance/recommendations and not some new mandatory process.  They even made sure to tell everyone attending that QSAs are NOT to hold up an organization’s ROC/SAQ over not having a NESA for their E2EE solution.  So if an E2EE solution does not have a NESA, the fallback, given the lack of guidance from the Council, is to perform whatever procedures the merchant’s acquiring bank recommends.

The stated purpose of this Information Supplement, per the Council, was to provide QSAs, merchants, service providers and banks with the Council’s acceptable way of assessing E2EE solutions.  While on its face this rationale makes sense, it falls apart when you consider that the organizations driving the E2EE solutions are the very banks and processors that have partnered with the E2EE solution providers.  Given that those same banks and processors drive the PCI compliance of the merchants that consume those E2EE solutions, it seems rather odd that they would be questioning what is acceptable for PCI compliance of their own approved E2EE solutions.

At the end of the day, it just seems that this NESA process is a solution looking for a problem, and the only problem the process really solves is pushing more E2EE solutions to finish the NESA and validate as P2PE solutions.

Until the banks and processors get behind the NESA process, I see this effort as dead on arrival.

So it sounds like it will be a busy first quarter for the Council.

The Council stated that the slide deck for this session will be posted to the Portal sometime after the first of the year.

10
Dec
16

The Council Releases Draft Scope And Network Segmentation Information Supplement

Quietly on Friday, December 9, 2016, the PCI SSC released the draft Information Supplement titled ‘Guidance for PCI DSS Scoping and Network Segmentation’.  As with all Information Supplements, the information documented in these does not replace any of the requirements in the PCI standards.  These documents contain only guidance and suggestions as to how organizations can comply with the PCI standards.

Overall this Information Supplement does not break much new ground regarding the clarifications that have been given over the years on these two subjects.  The Council has taken a much simpler approach to defining categories of systems than did the Open PCI DSS Scoping Toolkit (OPDST).  The Council only has three categories:

  • CDE Systems (Category 1A/B in the OPDST)
  • Connected-to and/or Security-Impacting Systems (Category 2A/B/C/X in the OPDST)
  • Out-of-scope Systems (Category 3 in the OPDST)

One thing the Council has done is provide some good examples of how to prove systems are out of scope.  If a system meets ALL of the following criteria, then it is considered out of scope (a short sketch encoding these criteria follows the list).

  • System component does NOT store, process, or transmit CHD/SAD.
  • System component is NOT on the same network segment or in the same subnet or VLAN as systems that store, process, or transmit CHD.
  • System component cannot connect to or access any system in the CDE.
  • System component cannot gain access to the CDE nor impact a security control for CDE via an in-scope system.
  • System component does not meet any criteria described for connected-to or security-impacting systems, per above.
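For what it is worth, these criteria are mechanical enough to encode against a system inventory.  The sketch below is my own illustration, not something from the Information Supplement, and the inventory fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SystemComponent:
    # Hypothetical inventory fields mirroring the Council's five criteria.
    name: str
    stores_processes_transmits_chd: bool
    on_cde_segment: bool                    # same subnet/VLAN as CHD systems
    can_connect_to_cde: bool
    can_impact_cde_security: bool           # directly or via an in-scope system
    connected_to_or_security_impacting: bool

def is_out_of_scope(c: SystemComponent) -> bool:
    """A component is out of scope only if it meets ALL of the criteria."""
    return not any([
        c.stores_processes_transmits_chd,
        c.on_cde_segment,
        c.can_connect_to_cde,
        c.can_impact_cde_security,
        c.connected_to_or_security_impacting,
    ])

# A back-office printer with no CDE connectivity of any kind.
print(is_out_of_scope(SystemComponent("hr-printer", False, False, False, False, False)))  # True
```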

The Council goes on further to say that even though these systems are out of scope for PCI compliance, they still need to be secured and patched regularly to ensure the overall security of the organization.

However, there are two points I noted that will likely require some additional clarification from the Council, as they will potentially cause issues for a lot of organizations.

On page 7, the second paragraph, the document states:

“The existence of separate network segments alone does not automatically create PCI DSS segmentation. Segmentation is achieved via purpose-built controls that specifically create and enforce separation and to prevent compromises originating from the out-of-scope network(s) from reaching CHD.”

The paragraph taken as a whole seems to imply that the Council is taking the conservative position that only firewalls can be considered network segmentation controls.  It is the phrase “purpose-built controls” that needs to be further defined by the Council.  Earlier in the document there is an example using firewalls, to which this paragraph would definitely lend itself.

In the past, the Council has said that access control lists (ACL) and encrypted tunnels also constituted valid network segmentation.  However this paragraph calls into question whether those are now considered “purpose-built controls” or not.  One would assume so, but as we have all learned in the past, one should never assume with the Council.  As a result, it would be great if the Council could provide clarification on what exactly they mean by “purpose-built controls” in the final release of this document.

The next point of concern is on page 11 in the Connected-to and/or Security-Impacting Systems table.  The third bullet down in the list of criteria states:

“System component can impact configuration or security of the CDE, or how CHD/SAD is handled—for example, a web redirection server or name resolution server.”

It would appear from this statement that the Council has brought Web servers that perform a redirect into scope for PCI compliance, as they are considered ‘connected-to’ systems.  That will be a huge blow to merchants using redirects to keep their Web servers from having to be ASV scanned and from meeting all of the other PCI requirements contained in SAQ A-EP.
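For those unfamiliar with the redirect model, the sketch below, mine and with a hypothetical payment page URL, shows just how little such a server does; the concern driving the Council’s position is that a compromised redirect server can silently send shoppers somewhere else.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical hosted payment page; the merchant never touches cardholder data.
PAYMENT_PAGE = "https://payments.example-processor.com/checkout"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The merchant server sees no CHD, yet it decides where the shopper
        # goes, which is why it "impacts how CHD is handled" per the draft.
        self.send_response(302)
        self.send_header("Location", PAYMENT_PAGE)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```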

The only remaining question is whether Web sites using iFrames will now be in scope for SAQ A-EP compliance as well.  Time will tell.

I have no idea when the final version of this document may be released.  But if the Non-Listed Encryption Solutions Information Supplement is any indication, it could be released on this coming Monday to the public.

03
Jul
16

The Council Issues A New Information Supplement

Back in May, the PCI SSC issued a new information supplement titled ‘Effective Daily Log Monitoring’.  It probably slipped under most people’s radar because of the release of v3.2 of the PCI DSS.  And for my friend Anton Chuvakin, this will be on his reading list immediately, if he has not read it already, because his book, ‘Logging and Log Management: The Authoritative Guide to Understanding the Concepts Surrounding Logging and Log Management’, is listed in the information supplement’s bibliography and, from the way this supplement reads, figured prominently in its content.

First some overall impressions.  Unlike a lot of other information supplements, this one actually clarifies and provides a significant amount of knowledge and great advice.  Every organization that needs to meet PCI compliance should read it.  But more importantly, any organization that does not fully understand the collection and analysis of log data should read it to get a foundation in these practices.

One of the best recommendations comes in section 5.2 of the supplement, which instructs organizations to develop a baseline from log data.  This is the biggest problem when an organization gets serious about analyzing log data: they fail to baseline the exceptions.  I cannot tell you the number of times I have discussed logging with the personnel responsible for monitoring alerts and they cannot tell me what is “normal” activity versus “abnormal” activity.  As a result, they either: (1) chase every alert as though it is an attack (it is not), or (2) ignore alerts because they assume they are “normal” activity (they are not).  The result in case #1 is burn out across the organization and, of course, in case #2 the organization is likely being attacked and compromised.

I cannot overstate the importance of this activity, both in getting a log monitoring project going and as something that needs to remain an ongoing activity.  No organization’s IT environment is static, so it is important to continuously baseline the log data and adjust for changes.  This can be done by adding an item to change control tickets to evaluate the impact on log data, or by re-baselining every quarter.  The bottom line is that baselining is not something done once; the baseline must be periodically updated.
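As a toy illustration of what baselining looks like in practice, here is a sketch of mine that tallies event volumes and alerts only when today’s counts blow past the baseline; the log pattern, counts and tolerance are all illustrative.

```python
import re
from collections import Counter

EVENT_RE = re.compile(r"sshd\[\d+\]: (Failed|Accepted) password")

def event_counts(log_lines):
    """Tally authentication events in a syslog extract."""
    counts = Counter()
    for line in log_lines:
        m = EVENT_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Baseline built from a known-normal period; per the point above, rebuild it
# after every change window or at least quarterly.
baseline = {"Failed": 40, "Accepted": 300}

def flag_anomalies(today, baseline, tolerance=3.0):
    """Alert only on event types exceeding N times their baseline volume."""
    return [evt for evt, n in today.items() if n > tolerance * baseline.get(evt, 1)]

sample = [
    "Jan 10 03:11:02 web1 sshd[4211]: Failed password for root from 203.0.113.9",
    "Jan 10 03:11:05 web1 sshd[4211]: Failed password for root from 203.0.113.9",
    "Jan 10 08:02:44 web1 sshd[5120]: Accepted password for deploy from 10.0.0.7",
]
print(flag_anomalies(event_counts(sample), baseline))  # [] - within baseline
```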

The most glaring omission in this document is a reference on page 27 to Appendix B that promises to provide readers with a list of “sample tools” for normalizing log data.  But guess what?  There is no Appendix B.  Hopefully the Council will issue a revision to this document and add Appendix B to that future revision.

If I had to ding this information supplement for anything, it is the downplaying of the need for one or more tools to collect and analyze the log data.  The Council obviously has to walk a fine line on this, but let us be honest: anyone who believes that log monitoring and analysis can be done without one or more commercial tools is kidding themselves.  Yes, a person can put up a centralized log collection server and then parse that collection with tools such as grep, Perl or Python.  But again, no one has the time to write all of the scripts or search patterns necessary to find all of the conditions you need to find.

It used to be that free or open source versions of tools such as Splunk and similar were available for log gathering and analysis.  However, those days disappeared a number of years back and only commercial versions are available now.  So everyone needs to be prepared to spend a certain amount of money on a toolset that provides the necessary information gathering and analysis capabilities.

I also fault this document for focusing too much on log data normalization.  This is exactly why organizations need to invest in a commercial tool.  Normalization was the big complaint with using open source tools: people found that they got basic functionality, but then had to conduct an essentially full-fledged system development effort to get the analysis and alerting done.  That was the big selling point of commercial tools such as LogRhythm and ArcSight, which came with the basics as well as a whole host of analysis templates, both free and for purchase.
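To be clear about what normalization involves, here is a small sketch of mine that maps two hypothetical log formats onto one common record.  Every new device type means writing and maintaining yet another pattern, which is exactly the development effort a commercial tool’s template library spares you.

```python
import re
from typing import Optional

# Two hypothetical source formats normalized into one schema.
PATTERNS = {
    "syslog":  re.compile(r"^(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<msg>.*)$"),
    "webfarm": re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2} [\d:]+) (?P<host>\S+) (?P<msg>.*)$"),
}

def normalize(line: str) -> Optional[dict]:
    """Map a raw log line onto a common {source, host, message} record."""
    for source, pattern in PATTERNS.items():
        m = pattern.match(line)
        if m:
            return {"source": source, "host": m.group("host"),
                    "raw_timestamp": m.group("ts"), "message": m.group("msg")}
    return None  # unknown format: another parser to write and maintain

print(normalize("Jan 10 03:11:02 web1 sshd[4211]: Failed password for root"))
print(normalize("2016-12-10 09:15:01 web2 GET /checkout 200"))
```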

Another ding I have is the minimal number of requirements that the information supplement identifies as being satisfied by log monitoring and alerting.  I can find at least 15 instances where requirements can be met by monitoring and alerting, not just the eight that are identified in the supplement.

Even with these concerns and shortcomings, this is possibly one of the better information supplements issued by the Council and is a must read by everyone.

To get your own copy of the information supplement, go to the PCI SSC Document Library, select ‘Guidance Documents’ and scroll down to the ‘Logging’ banner.



