Archive for January, 2011

27 Jan 11

Intent Of Requirements – 6.1

I am starting to see more and more instances where QSAs and acquiring banks blindly enforce PCI requirements without thinking about the intent of those requirements and how they should be applied to the organization being assessed.  It is not as though the intent has been hidden by the PCI SSC; the intent of every requirement is published in “Navigating PCI DSS: Understanding the Intent of the Requirements,” which is available from their Web site.

The base problem with the PCI DSS is that it was written for a multitude of situations.  While the PCI DSS is a reasonable standard, portions of it are written for small merchants that know little to nothing about technology.  This is where we see hard and fast metrics applied.  Then there are portions that are deliberately vague, written for extremely large organizations so that they have flexibility in solutions.  And a few requirements are written for organizations in between these two extremes.  Unfortunately, the next release of the PCI DSS does not address these shortcomings any better than the current release.  However, the Navigation Guide and the Glossary do give a clue as to the intent of the requirements, and it is that intent that QSAs and others should work to enforce.  Too often people focus only on the contents of the PCI DSS and do not use the other reference materials provided by the PCI SSC.  As a result, they end up with a skewed vision of what the PCI DSS is all about.

The problem is nowhere more pronounced than with those requirements that have hard metrics, such as requirement 6.1.  Enforcing the metric in this requirement can be difficult as well as misleading.  Therefore, a QSA needs to look at the intent of the requirement and enforce that intent rather than the bare metric.  I know why the PCI SSC set hard and fast metrics: so that the organizations being assessed, particularly Level 2, 3 and 4 merchants, understood the importance of following these requirements.  However, while hard and fast metrics work well with manufacturing standards, they do not work as well when a wide variety of hardware, software and human elements are required to obtain a given result.

The intent of requirement 6.1 is to ensure that organizations manage their infrastructure and applications so that they are as secure as they can be.  It is intended to give management visibility into the risks of not maintaining software and to better ensure that software is not allowed to just run without attention and maintenance.  Requirement 6.1 is a response to the fact that the majority of retail organizations do not keep their point of sale (POS) hardware and software current.  It is common for merchants to install a POS solution and then never really touch it for years, if at all.  After all, as one executive pointed out to me, “If it isn’t broke, why bother?  Maintenance costs money that I would rather invest somewhere else.”  Can you really blame anyone for this view?  Security is like insurance: it costs you money until you need it, and then it is invaluable.  Unlike the insurance industry, which has done a very good job of educating management on its value, the security industry has done a very poor job of educating management on the value of security and what really needs to be done to secure the organization.

In practice, patching infrastructure and applications can be daunting for any organization, whether it has only a few IT resources or large numbers of servers and applications.  While there are numerous automated patch management solutions available, even with all of that automation patching is not as simple as it appears.  In addition, depending on the applications involved, the vendor may only issue patches or updates quarterly, semi-annually or annually, and contained in those updates are also the approved updates to the operating system and its services.

In my opinion, what organizations need to prove for compliance with requirement 6.1 is that the organization has a documented and working process for ensuring that infrastructure and applications are patched and kept current.  Whether or not patches or updates are applied in 30 days or less is not as relevant as ensuring their infrastructure and applications are not left to run without attention to their security and maintenance.

I am not saying or even implying that organizations can avoid regular patching.  What I am saying is that QSAs need to understand the environment of the organization and then apply the intent of the requirement to the organization’s situation.  Thirty days is a nice ideal and probably practical for the typical small and possibly mid-sized retailer.  But in practice, this requirement is not always achievable, particularly in large organizations or organizations that rely heavily on packaged software solutions.  These organizations may come close, but with their testing, QA and production implementation processes they do not typically hit the magic 30 day mark.  Therefore, QSAs need to prove that the patching process is reliable and works, not that it works within some arbitrary timeframe.
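To make this concrete, a working patch management process can be evidenced with data rather than a single pass/fail metric.  Below is a minimal sketch of the kind of patch-age reporting that demonstrates the process is functioning even when individual patches exceed 30 days; the system names, dates and the 30 day policy window are all hypothetical illustrations, not anything prescribed by the PCI DSS.

```python
from datetime import date

# Hypothetical policy window; requirement 6.1's 30 days is used for illustration.
POLICY_DAYS = 30

# Hypothetical inventory: (system, patch release date, date applied or None).
inventory = [
    ("pos-store-01", date(2011, 1, 2), date(2011, 1, 20)),   # applied in 18 days
    ("pos-store-02", date(2010, 11, 5), None),               # never applied
    ("db-card-01",   date(2010, 12, 1), date(2011, 1, 15)),  # applied in 45 days
]

def audit(inventory, today):
    """Return systems whose pending or late patches exceed the policy window."""
    findings = []
    for system, released, applied in inventory:
        elapsed = ((applied or today) - released).days
        if elapsed > POLICY_DAYS:
            status = "applied late" if applied else "still unpatched"
            findings.append((system, elapsed, status))
    return findings

for system, days, status in audit(inventory, date(2011, 1, 27)):
    print(f"{system}: {days} days, {status}")
```

A report like this gives a QSA exactly what the intent calls for: evidence that nothing is left to run unattended, with documented reasons for anything that ran past the window.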

21 Jan 11

SPSP Forum Is Unlocked – Maybe

I had it from the owners of the SPSP Forum that it is up and operational again.  However, I have heard from a number of people that it is not functioning for them.  The powers that be have been informed and are working on the issue; however, I have no idea when it might be resolved.

If the embedded link does not work, the URL is http://forum.paymentsecuritypros.com/.

I am glad to have this resource back and look forward to people getting back into the swing of using the Forum.

UPDATE: I wish I could tell you that this resource is going to be functional sometime soon, but I have no idea if it will.  I am very sorry that the Forum no longer functions.  I can only hope that people resolve their issues and the Forum can be the premier resource for PCI knowledge.

21 Jan 11

Why Stuxnet Matters To PCI Compliance

There is an article in the Sunday, January 16, 2011, New York Times that says the American and Israeli governments were behind Stuxnet, confirming a rumor that has been running around ever since Stuxnet was identified.  The consensus of the experts is that Stuxnet is to cyber warfare what the airplane was to conventional warfare, a radical game changer.

So why does Stuxnet matter to merchants, processors, acquiring banks and the card brands?  A whole lot if you think about it.

First, Stuxnet proves beyond a shadow of a doubt that governments are investing in cyber war and that, in cyber war, anything on the Internet is fair game.  While Stuxnet was built to target the centrifuges Iran uses to refine uranium, there were a number of incidents of “collateral damage.”  This “collateral damage” was the result of Stuxnet attacking anything that used the same Siemens SCADA controllers.  Granted, Stuxnet was not as nasty to devices that were not centrifuges, but it still caused problems.  Imagine if an entity wrote an attack for a common device or protocol while actually targeting one particular organization.  Do you think your organization could become “collateral damage” in such an attack?  I would say it is highly likely.

Second, imagine the damage that could be done if a terrorist group or a government decided to go after another country’s financial systems using a Stuxnet type of attack.  Does such an attack sound unrealistic?  It should not, given that the quickest way to bring any country to its knees is through its financial systems.  And what makes things doubly worse is that, thanks to outsourcing, most banks use a very limited number of application solutions, making such an attack all that much easier.  Given the reliance of countries on their economic capabilities, such an attack would likely only be carried out by a rogue nation such as North Korea that has nothing to lose, or by any other country provoked long and hard enough.

But what if the attack was directed against credit card terminals?  While a lot of people would say that would be farfetched, it is not as wild as it might seem.  All you need is someone on the inside at Ingenico, Verifone and the like to doctor the card terminals’ software to do whatever you want it to do.  Even large merchants do not necessarily monitor their card terminals, so such an attack could go on for quite a while before it was noticed, if it ever was.  Criminal gangs have been producing limited numbers of doctored terminals for the last four to five years.  Imagine this done on a large scale and you start to understand how nasty a threat this could be.  If the doctored terminals were introduced into the distribution stream at the manufacturer, there would be no way of knowing that you had been compromised unless you were monitoring your network properly, which most organizations do not do.

Finally, there is the doctoring of integrated point of sale (POS) solutions or similar applications.  Again, not as farfetched as you might think.  There have been a number of instances over the years where software was written to provide backdoors or other openings in systems that allowed information to be leaked.  This is why certain governments have gone into the software business.  This is also why there are now valid concerns about how you confirm that your software is only doing what it is supposed to be doing.

The bottom line in all of this is that these concerns are no longer the ramblings of the paranoid among us.  These once seemingly imaginary scenarios have actually come to pass, and we need to address what to do to mitigate them.  So from a PCI perspective, what should an organization be doing?  While all of the PCI DSS provides methods to protect an organization, the following are what I consider the most important regarding inside attacks.

  • Monitor your internal network – This is the single most important way to protect your network from doctored devices and applications.  A doctored device or application must transfer the information it has collected, either in real time or in a batch process.  Transmission may be outbound or via an inbound process.  Outbound monitoring should be the easiest, because most organizations know what external IP addresses to allow.  Monitoring inbound communications is almost always disputed by organizations, but if you remind people how GoToMyPC and others of its ilk operate, they begin to understand how their networks could be easily compromised from the outside.
  • Analyze your logs – This is probably the next area where a lot of organizations are not doing a good enough job.  Most organizations do a good job collecting log data, but then do a mediocre or poor job analyzing that data to find exceptions.  The reason for this poor performance is a lack of defined criteria for attacks.  While the initial implementation may do a good job creating initial definitions, as time goes on the log analyses are not enhanced or updated to reflect changes in attacks and new attacks.
  • Tighten internal controls – Once inside most organizations’ security perimeters, security gets pretty loose and free, if it even exists at all beyond a logon.  Unfortunately, attackers understand this fact, which is why they focus on getting inside.  Once an attacker is inside, it is pretty much a cake walk to get whatever they want.  This is why locking down ports, reviewing firewall and ACL rules, disabling or removing unused services and disabling or removing unused user accounts become so important.  The fewer attack points you provide on your internal network, the more resilient it will be should an attacker get inside.
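The outbound-monitoring idea in the first bullet can be sketched as a simple egress allow-list check.  This is a minimal illustration, assuming hypothetical approved destination ranges (the example addresses below are from documentation-reserved blocks) and flow records already extracted from a firewall or netflow source.

```python
import ipaddress

# Hypothetical approved egress destinations for the cardholder data environment.
ALLOWED_EGRESS = [
    ipaddress.ip_network("203.0.113.0/28"),   # payment processor (example range)
    ipaddress.ip_network("198.51.100.5/32"),  # acquiring bank gateway (example)
]

def flag_unexpected(flows):
    """Flag outbound flows whose destination is not on the approved list."""
    alerts = []
    for src, dst, port in flows:
        dst_ip = ipaddress.ip_address(dst)
        if not any(dst_ip in net for net in ALLOWED_EGRESS):
            alerts.append((src, dst, port))
    return alerts

flows = [
    ("10.1.1.10", "203.0.113.4", 443),   # expected: processor traffic
    ("10.1.1.10", "192.0.2.77", 8080),   # unexpected destination, worth an alert
]
for src, dst, port in flag_unexpected(flows):
    print(f"ALERT: {src} -> {dst}:{port} not on egress allow-list")
```

A doctored terminal or application has to exfiltrate its data somewhere, and it is precisely this kind of unexpected destination that internal monitoring exists to catch.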

Remember, while the city of Troy was protected by insurmountable walls, Troy fell because it was attacked from the inside, an attack vector that Troy felt was not realistic or possible.  Troy’s short-sightedness was the result of arrogance and a failure to understand that an enemy determined to defeat you will find a way to overcome the insurmountable.  Learn from Troy’s mistake.

18 Jan 11

Update On PCI DSS v2.0 QSA Scorecard

For all of the QSAs out there, we found out from the PCI SSC late last week that the PCI DSS v2.0 scorecard will not be released until the first part of February.  This document was supposed to be released sometime this month but, as usual, it has been delayed.  For those of you that are not QSAs, the scorecard is the document that tells QSAs how they are supposed to conduct their fieldwork.  The scorecard tells the QSA whether they need to interview people, review documentation, observe a system setting or configuration, observe a process, action or state, specify sampling or monitor network traffic.  Reading the PCI DSS tests does not always make clear when these activities are required, particularly those that require interviewing people, so having the scorecard is a necessity in order to properly conduct a PCI DSS assessment.

The scorecard is also the grading scale that the PCI SSC uses to assess QSACs and determine whether they need to go into remediation.  Points are assigned based on the number of items in each of the aforementioned categories.  If the reports assessed by the PCI SSC do not achieve a score of 75% or greater of the possible points, then the QSAC goes into remediation.

UPDATE: I just got an update from the PCI SSC late yesterday, Wednesday, February 16.  The scorecard is further delayed and they are “hoping” to have it published in the next few weeks.  How they expect QSAs to conduct v2.0 assessments without knowing what they are expected to review and observe and whom to interview is insane.  I am guessing that the scorecard will not be delivered until the end of March at the earliest.  The scorecard needs to be published with the standard, not after it.

15 Jan 11

More On “The Cloud” And PCI Compliance

It amazes me how clueless some IT people are about the technologies and methodologies they use in their business.  You talk to them about the latest technologies and all they can do is parrot back sales rhetoric from the vendors.  As a result, you have to wonder about articles you see in the trades about the adoption rates of these new advances.

This is particularly true about “cloud computing” and other latest and greatest technologies.  I have written about “the cloud” before, but I thought I would add to that based on this article published by InformationWeek on 2011 Cloud Computing predictions.  The number one prediction is that “the cloud” will be PCI compliant.  The article says that “the cloud” was believed to be PCI non-compliant and that the PCI DSS does not address “the cloud.”

Let us clear up the comment about the PCI DSS not addressing “the cloud,” as it is the easiest to address.  Whether people admit it or not, the PCI DSS is actually written in such a way as to address changes in technology without directly calling out new technologies.  For example, years ago people complained about virtualization not being explicitly called out.  They pointed to requirement 2.2.1.a, which restricts configurations to “one primary function” per server.  Those of us that had been through the QSA training had been told that this requirement was by server, logical or physical, and did not apply to the hypervisor.  However, there were caveats on how to treat the configuration of the hypervisor and any virtual clusters.  While cloud computing is not explicitly called out, the PCI DSS can be applied to “the cloud” as long as the operating constraints of “the cloud” are taken into account during the assessment.

So the PCI DSS can be applied to “the cloud” in its existing form.  Then where is the problem?  The first problem with “the cloud” is in defining “the cloud.”  If you were to ask every vendor of cloud computing to define “the cloud,” I will guarantee you will get a unique answer from each vendor asked.

To address this key issue, I went to the best source I could find, the National Institute of Standards and Technology (NIST).  NIST has possibly the best definition for “the cloud” I have found, and it has only taken NIST 15 iterations to get to the one that is currently published.  NIST defines the following as the key characteristics of “the cloud.”

  • On demand, self service – That means that technicians are not required to establish “the cloud” environment.
  • Broad network access – While this most likely means the Internet, it does not have to, hence private “clouds.”
  • Resource pooling – This typically implies virtual systems and storage area networks and similar technologies that allow the apportionment of processing power and storage capacity across large resource pools.
  • Rapid elasticity – Again, this typically implies some form of virtualization that allows the allocation of additional resources such as CPU or storage on demand.
  • Measured service – This means that you are charged for what you use.  For organizations that only need one processor and ten GB of storage, they will be charged significantly less than an organization using 10 processors and one TB of storage.

For all of the marketing hype that “the cloud” is something radically new, at its heart it is just time sharing with a new name using different technology.  Where “the cloud” comes up different from time sharing is in the service models used to implement “the cloud.”  NIST defines the following three service models for “the cloud.”

  • Software as a Service (SaaS) – This is what most IT people think of when they think of “the cloud.”  As defined by NIST, SaaS is the capability provided to the consumer to use the provider’s applications running on a cloud infrastructure.  The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).  The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
  • Platform as a Service (PaaS) – NIST defines PaaS as the capability provided to the consumer to deploy onto “the cloud” infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider.  The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
  • Infrastructure as a Service (IaaS) – NIST defines IaaS as the capability provided to the consumer to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.  The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

What the NIST definition does not talk about is how “the cloud” is physically implemented.  From a PCI compliance perspective, how “the cloud” is implemented is very important and determines whether or not it can be PCI compliant.  In my experience there are three ways that “the cloud” can be physically implemented.

  • Virtualized – One or more servers configured as a cluster that runs multiple virtual servers of the same or differing operating systems.  The cluster is attached to a storage area network (SAN) that is configured to provide storage to the individual consumers’ virtual servers based on their storage requirements.  Under this implementation model, each consumer runs on its own virtual server with no other consumers.
  • Multiple instances – Under this implementation model each consumer runs on the same server as a separate instance much like the time sharing model of old where multiple consumers ran on the same mainframe using any of the applications that were installed and made available to the consumers.  The applications execute as a separate instance for each consumer on the same system, but may share database management or other system software.  This implementation approach is going to be most common on UNIX and mainframe implementations of “the cloud.”
  • Hybrid – This is the most complex implementation model as it combines virtualization with multiple instances.  What happens is that the multiple instance implementation model is implemented on virtual servers.

In the virtual implementation model where the consumer’s cloud functions as a virtual server configured like its physical server cousin (i.e., one server one function), PCI compliance is likely possible.  This is because there is logical segregation of virtual servers just as there is physical segregation with real servers.

However, just because we have logical segregation of servers does not mean that it will automatically be PCI compliant.  Hypervisors such as VMware and Hyper-V offer connectivity between virtual servers that does not require a network and it is these connections that create PCI compliance issues.  So a QSA will need to investigate the virtual implementation to determine whether or not such connections have been implemented.

In the multiple instance and hybrid environments, the consumer’s environment is not logically or physically discrete from other consumers.  As a result, in order to be PCI compliant, all other consumers have to be PCI compliant and must submit themselves to a PCI assessment whenever another of the consumers is being assessed because they are all using the same environment.  While technically these environments could be assessed as PCI compliant, let us face it, in practice getting every consumer to agree to be assessed by your QSA is just not going to happen, even if each consumer in this environment needs to be PCI compliant.  As a result, it is these implementation models that will likely never become PCI compliant.  Or if such an environment does become PCI compliant, it will likely be a painful and arduous process.

Something not widely discussed is the susceptibility of “the cloud” to attacks.  When you bring up this subject with IT professionals, a lot of them have this odd idea that “the cloud” is nothing like any of their existing technology.  “It’s the cloud,” one IT executive recently said to me when asked directly about the vulnerability of “the cloud.”  What people seem to forget is that “the cloud” is usually the same as their existing server environment only virtualized and not in their data center.  In the end, Windows and Linux are still Windows and Linux with all of their potential attack vectors whether they are running on a physical server or in “the cloud.”

But in addition to the consumer visible technology, there is the underlying virtualization that is also an attack vector, albeit a rather small one.  In the multiple instance implementation model, if the platform is a traditional mainframe or midrange OS such as z/OS, MCP or HP-UX, the likelihood of attack is probably very small.  However, in hypervisor environments such as VMware, Xen and Hyper-V, the likelihood of attack is still small, but much higher than on the mainframe.

In the end, “the cloud” can be PCI compliant, but it all depends on how “the cloud” has been implemented.  And that means more than just relying on sales and marketing hyperbole.  One has to know the architectural foundation of “the cloud.”  Unfortunately, most vendors treat this information as a trade secret, and it can be very difficult to obtain.  However, if you stick to your guns and explain that you will not sign a contract without this information, most vendors will relent and provide it.  And if you find that your preferred vendor’s cloud is not likely to be compliant, then you need to walk away and find a cloud that can be compliant.

09 Jan 11

Network Segmentation – One Last Discussion

Based on feedback I am getting, my previous posts are still not getting the point across regarding proper network segmentation.  So, this time I am going to use my post regarding the Control Triad, and hopefully everyone will now understand what constitutes appropriate network segmentation.

As a quick refresher, the control triad is composed of preventative controls, detective controls and corrective controls.  All three control types are required to ensure a secure environment.  The more individual controls you have under each of the three control types, the less likely an incident will occur and the more coverage you should be able to afford your organization should a control go temporarily out of compliance.  However, an individual control should really only appear under one of the control types; otherwise that control is diluted, becoming a single point of failure and causing the control triad to not function properly.  With that explanation, let us look at proper network segmentation from the control triad perspective.

Preventative Controls

The following would be considered the minimum preventative controls when talking about network segmentation.

  • Firewall(s) with rules that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • Router(s) with ACLs that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • VLAN(s) with ACLs that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • Private wireless network(s) use a separate VLAN(s) from the cardholder data environment with access controls enforced for any access to the cardholder data environment from wireless.  Private wireless access points are configured with WPA2 using Enterprise authentication and AES 128-bit or greater encryption.
  • Software firewall on server(s) in the cardholder data environment that restricts traffic to a limited number of ports/services to/from the server(s).
  • Restricted administrative access to infrastructure devices in or controlling access to the cardholder data environment.
  • Access controls that restrict administrative and end-user access to applications in the cardholder data environment or that access the cardholder data environment.

Remember, when I say “a limited number of ports to/from,” I mean a very limited number of ports.  Yes, there may be instances where you might have 100 ports open to/from your cardholder data environment, but you had better have a valid business reason for every one of those 100 ports.  And just so we are all clear, a valid business reason documents why the port needs to be open, the risk presented to the cardholder data environment by the port being open, the actions that have been taken to minimize that risk, and management approval of the port being open.  And the business reason needs to be more than just “it needs to be open” or “the application will not function unless it is open.”  You need to document why it has to be open so that, in the event of a breach, you can quickly rule out the ports that might have been the cause based on the type of attack.
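As a minimal illustration of that documentation requirement, the justification elements can be captured as structured data and audited automatically.  The field names, rules and vague-reason list below are hypothetical, not from any standard.

```python
# Hypothetical rule records; the fields mirror what a valid business reason
# must document: the reason, the risk, the mitigation and management approval.
rules = [
    {"port": 443, "reason": "TLS to payment processor", "risk": "low",
     "mitigation": "destination restricted to processor IPs", "approved_by": "CISO"},
    {"port": 8080, "reason": "it needs to be open", "risk": None,
     "mitigation": None, "approved_by": None},
]

REQUIRED = ("reason", "risk", "mitigation", "approved_by")
VAGUE = {"it needs to be open", "the application will not function unless it is open"}

def undocumented(rules):
    """Return ports whose justification is missing a field or is merely vague."""
    bad = []
    for r in rules:
        if any(r.get(f) in (None, "") for f in REQUIRED) or r["reason"].lower() in VAGUE:
            bad.append(r["port"])
    return bad

print(undocumented(rules))
```

Keeping the justifications as data means that, after a breach, ruling ports in or out becomes a query rather than an archaeology project.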

When we talk about restricting access, you need to genuinely restrict access.  In small and mid-sized organizations, restricting access to a subset of personnel might not be feasible; in those cases, forcing personnel to go to management to gain access is the way to properly provide control.  In large organizations, what we are talking about is restricting access to fewer personnel than everyone that has access to normal production.  The idea is that not everyone in support or among business users should have access to the cardholder data environment.  The rule here is the fewer the better, but do not make it so few that you create issues.

If you want to go the extra mile, the following controls can further enhance your security.  However, for some organizations, they come at a cost in operational efficiency that is unacceptable.

  • Disable all unused physical jack connections on all infrastructure devices.  Any activation of a jack requires a service ticket and standard management approvals.
  • Disable dynamic host configuration protocol (DHCP) in all retail locations.
  • Public wireless in retail facilities provided by a separate third party and on a separate circuit that connects to the Internet.
  • Required use of encrypted, two-factor authenticated virtual private network (VPN) connections from any wireless network to gain access to any internal network.
  • Access to the cardholder data environment is not allowed for users connecting through any remote access connection.

Detective Controls

The following would be considered the minimum detective controls when talking about network segmentation.

  • Network and host intrusion detection/prevention systems that monitor the aforementioned firewalls, routers, VLANs and servers protecting the cardholder data environment and generate alerts to appropriate personnel when an intrusion or incident is detected.
  • Daily analysis of infrastructure device configurations to ensure that only approved configuration changes are made to these devices.
  • Daily monitoring of devices to alert on any foreign devices that are added or when devices are removed from the network.
  • Daily analysis of log data from the preventative controls to find potentially anomalous log entries that indicate a variance in the preventative controls or a potential incident.
  • Change management records for all infrastructure devices, servers and applications in-scope for PCI compliance.
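The daily configuration analysis in the list above can be sketched as a baseline comparison.  This is a minimal sketch assuming configurations have already been pulled from the devices; the device names and configuration text are hypothetical.

```python
import hashlib

def fingerprint(config_text):
    """Hash a device configuration so changes are cheap to detect."""
    return hashlib.sha256(config_text.encode()).hexdigest()

# Hypothetical approved baseline, captured after the last approved change.
baseline = {"core-rtr-01": fingerprint("hostname core-rtr-01\nacl 101 permit tcp ...")}

def daily_check(current_configs, baseline):
    """Return devices whose pulled configuration no longer matches its baseline."""
    return [dev for dev, text in current_configs.items()
            if fingerprint(text) != baseline.get(dev)]

# An unapproved ACL edit shows up as a mismatch.
today = {"core-rtr-01": "hostname core-rtr-01\nacl 101 permit ip any any"}
for device in daily_check(today, baseline):
    print(f"ALERT: {device} configuration differs from approved baseline")
```

Any mismatch is then reconciled against change management records; a change with no corresponding ticket is exactly the anomaly this detective control exists to surface.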

The key here is to generate alerts should any anomalous activity be detected.  But that is the rub.  What is anomalous?  Anomalies are not always the easiest things to identify or define.  As a result, your detective controls may take a while to fine tune.  However, the organizations that do the best job of managing their detective controls organize their anomalies by the PCI DSS requirements they are trying to meet.  This allows them to tweak their anomaly detection capabilities by PCI DSS requirement.

Then there is the issue of what to do when you detect an anomaly.  Most of the time, an anomaly is not dealt with for one of two reasons.  The first is that the detection solutions are new and are not functioning properly because no one has taken the time to tune them.  The second is that, because of changes in the environment, the detective controls need to be re-tuned to reflect those changes.  Regardless of why, the detective controls need to be adjusted so that they are not generating excess false positives that result in people chasing phantom issues.
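To make the tuning problem concrete, here is a deliberately simple sketch of one way an anomaly could be defined, using a rolling statistical baseline.  The metric (daily failed-login counts), window and threshold are all hypothetical tuning knobs; adjusting them is exactly where the re-tuning effort described above goes.

```python
from collections import deque
from statistics import mean, stdev

THRESHOLD = 3.0  # alert when beyond this many standard deviations from baseline
WINDOW = 7       # days of history used as the baseline

def is_anomalous(history, today_count):
    """Compare today's count against a rolling baseline of recent days."""
    if len(history) < WINDOW:
        return False          # not enough baseline yet; keep collecting
    baseline = list(history)[-WINDOW:]
    sigma = stdev(baseline) or 1.0  # guard against a zero-variance baseline
    return abs(today_count - mean(baseline)) > THRESHOLD * sigma

# Hypothetical week of daily failed-login counts.
history = deque([12, 9, 14, 11, 10, 13, 12], maxlen=30)
print(is_anomalous(history, 11))   # a normal day
print(is_anomalous(history, 95))   # a spike worth investigating
```

Set the threshold too low and staff chase phantom issues; too high and real incidents slip by, which is why this tuning is never a one-time exercise.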

If you want to go the extra mile, the following controls can further enhance your security.  While these sorts of tools are available as open-source solutions, there are many commercial solutions as well.  Regardless of whether they are commercial or open-source, tools that perform these functions typically take a significant amount of time and effort to tune so that they provide the right amount of information for the right incidents.

  • Real-time analysis of infrastructure device configurations to ensure that only approved configuration changes are made to these devices.
  • Real-time monitoring of devices to alert on any foreign devices that are added or when devices are removed from the network.
  • Real-time analysis of log data from the preventative controls to find potentially anomalous log entries that indicate a variance in the preventative controls or potential incident.

All real-time buys you is instantaneous alerting.  Most small and even mid-sized merchants do not need real-time analysis and alerting.  Not that they cannot use it, but it is likely overkill for their environments given the threat of attack.  However, for governmental agencies and departments, financial institutions, health care organizations and most large merchants, real-time analysis and alerting is mandatory.

And if you think tuning for daily reviews was painful, tuning real-time analysis and alerting systems is at least twice as painful.

Corrective Controls

The following would be considered the minimum corrective controls when talking about network segmentation.

  • Change management procedures.
  • Incident response plan(s) for addressing any issues identified by the detective controls.
  • Root Cause Analysis (RCA) procedures.
  • Action plans that result from the incident response process that require changes to the preventative and/or detective controls.  At a minimum, the action plans must document the correction needed, the person(s) responsible for getting the correction completed and the timeframe for the correction to occur.
  • Internal audit review of the preventative and detective controls.
  • QSA review of the preventative and detective controls.

Here is where a lot of organizations miss the boat.  You have detected an anomaly and you have dealt with the anomaly, but you do not analyze why the anomaly occurred, or you do an analysis but then do nothing to correct any issues that were identified.  As a result, the anomaly continues to be encountered, but actions are not taken to minimize or even eliminate occurrences.  This is why the advanced persistent threat (APT) is successful.  APT relies on the fact that eventually all organizations get sloppy and do not take corrective actions to maintain or even improve their controls.

There may be a number of preventative, detective and corrective controls that I have missed or did not consider since everyone has a unique environment.  At a minimum, if your organization has implemented these controls and they are all operating effectively, you are going to be better off than the majority of organizations out there and much less likely to have a serious incident that could result in a breach.

And that is the problem all organizations face, keeping these controls functioning effectively every day without missing a beat.  That is why we have defense in depth.  If one control is not functioning properly, there are other controls that will cover for it in the interim until that control is functioning properly again.

Finally, as I always like to remind people, just because you implement all of these recommendations does not make you invincible.  All these recommendations do is make the likelihood of an incident, and the potential damage resulting from an incident, lower than if you had little or no controls in place.  How much lower depends on a number of factors, but the risk will be lower.  And after all, it is all about lowering risk.

Hopefully the issue of what constitutes appropriate network segmentation has now been put to rest.

07
Jan
11

RTFM

Bear with me as I tell you a short story.

“A long time ago, in a galaxy far, far away,” (thank you, George Lucas) I worked with a very seasoned IBM systems programmer.  He had the acronym 'R T F M' neatly framed and hanging behind his desk.  I quickly found out what it meant the first time I had a problem with the mainframe that I could not solve.  As I walked into his office carrying the huge case of paper that was my program dump, he pointed to the picture behind his desk.

“Yeah, so what?” I replied rather indignantly.

He said, “R T F M!”

“Yeah.  And what the [expletive] is R T F M?” I replied a bit confused and frustrated.

He snapped back, “Did you Read The [Expletive] Manuals?”

RTFM was one of his few pet peeves.  If you had read the manuals, he would help you as long as it took to solve your issue.  If you had not read the manuals, you were quickly guided back out of his office and not so politely told to read the [expletive] manuals.  If you then went and read the manuals and still had problems, then you could come back and ask for his help.  Heaven help you if you still did not read the manuals and came back.  I only saw it happen once and it was not pretty.

The reason I was brought back to this memory recently is because I am getting tired of people only reading the PCI DSS.  It is painfully obvious from their questions that this is all they have read.  The PCI SSC’s Web site contains all of the documentation you need to interpret the PCI standards, yet it seems the only document that people download and read is the PCI DSS.  All the rest of the documentation just seems to get ignored.  If people were just reading the rest of the documentation that is available, we would all be better off.

As a result, I thought I would take some time to walk people through the documentation that exists outside of the PCI DSS and explain why they should read it.  In my opinion, the following documents are mandatory reading for anyone involved in PCI compliance efforts.

  • PCI DSS Quick Reference Guide – At 30+ pages long, it is not as “quick” as one might like, but it is probably the best Primer you can get.  If you are new to credit card processing, new to the PCI standards or an Executive just trying to figure this PCI thing out, this will get you up to speed in a hurry.  This is the piece that you put in your Executives’ and Board of Directors’ hands to get them up to speed and should be mandatory reading before discussing PCI compliance.
  • Glossary – This document should have been titled “READ ME FIRST” instead of the Glossary, as it is more than just a traditional glossary of terms.  The Glossary explains key industry concepts as well as the terminology.  In some cases, the Glossary explains key security concepts that are referenced in the PCI DSS.  The bottom line is that this document should be read before reading the PCI DSS and then used as a key reference as you read the PCI DSS.  Even those of us who are “veterans” of the banking and technology worlds need to read this document just as a refresher.  I would guess 45% of questions regarding the PCI DSS are answered just by the Glossary.
  • Navigating the PCI DSS – This document answers the other 45% of the questions regarding the PCI DSS.  (I know, that only adds up to 90%.  The other 10% are valid questions.)  The key thing you will get out of this document is the intent of each of the requirements and some of the tests.  This document should be read in conjunction with the PCI DSS as it will answer most of those, “Why in the world would I want to do that?” and “What were they thinking?” sorts of questions.
  • Information Supplements – These are white papers published by the PCI SSC that explain technologies or concepts that can enhance PCI compliance and/or improve your security.  As of January 2011 there have been seven of these published on topics such as wireless, penetration testing, code reviews and other key topics.  This is where you can get all of that detailed PCI compliance guidance that QSAs have running around in their heads.  The PCI SSC promises us even more of these in the coming years, so you need to check this section of the Documents Library regularly to make sure you have them all.

These documents are optional reading for anyone involved in PCI compliance efforts.  However, the Prioritized Approach is a great tool to get you moving quickly on PCI compliance.

  • Prioritized Approach for PCI DSS v1.2 – Okay, this is out of date and I am sure a new one will be produced.  However, for those of you that want to focus on getting PCI compliant, this is for you.  It takes you through the PCI DSS in an order that runs from the most important requirements to the least important, so that you focus on the big ticket, big bang requirements first and then work your way through the rest of the PCI DSS.  For the most part, it still works with v2.0.
  • PCI DSS Summary of Changes Version 1.2.1 to 2.0 – For those of you who are familiar with the PCI DSS and want to know where the changes are between v1.2.1 and v2.0, this is the document for you.

The PCI SSC’s Web site has a wealth of information on its Documents Library page.  Not only is the PCI DSS covered, but they also have all of the Self-Assessment Questionnaires and related documents, Payment Application Data Security Standard (PA-DSS), PCI Pin Transaction Security (PTS) as well as information on Approved Scanning Vendor (ASV) standards and other resources.

In addition to the Documents Library, there is also the ‘FAQs’ system.  This is an interactive system that allows you to research questions that have been posed to the PCI SSC.  So, before you ask your QSA that question, go to the ‘FAQs’ and look for it there.  I would have posted a link, but it is a dynamic Web page and you must go to the page by clicking on the word ‘FAQs’ at the top of the Web page.

RTFM people!  And for those of you that are curious; yes, I had read all of the relevant manuals.

02
Jan
11

What Are The PCI Lessons We Can Learn From The WikiLeaks Debacle?

I know, I know, there has been an overabundance of articles published on what we can learn from the WikiLeaks incident.  However, after reading an interesting article in the Washington Post regarding how the WikiLeaks debacle came about, I thought there were a number of lessons from which merchants and service providers could benefit.  The WikiLeaks documents have been traced back to the Net-Centric Diplomacy database developed by the State Department as a result of the 9/11 terrorist attacks.

Everyone Had Access

Access to the Net-Centric Diplomacy database had become unmanageable.  According to the article, the database was accessible not only to State Department employees, but also to a number of other government departments, including Defense and Homeland Security.  The project was undertaken after the 9/11 attacks to make the information the State Department was collecting available to a wider audience of analysts.  While the database was only four years old in 2010, State Department officials acknowledged that over half a million people across the government, including contractors, had access to it.

To add insult to injury, State Department personnel admitted that user management was out of control, particularly outside of the State Department.  You see, the State Department took a distributed security approach for the Net-Centric Diplomacy database and designated persons at other entities to manage their users.  Unfortunately, it appears there was no oversight of these people, nor was there a requirement for them to justify why all of their users required access.  This distributed data security approach is very common in the business world.  However, without oversight and periodic review, the distributed approach turns into a free-for-all, with almost anyone asking for access being granted access.

Requirement 7 of the PCI DSS is all about access to cardholder data and verifying that those users continue to require access.  The user management situation with the Net-Centric Diplomacy database is why requirement 7 was put into the PCI DSS.  What this situation points out is that if you do not have defined criteria that you strictly enforce for access to sensitive data, then you cannot expect to control the data, and you can probably expect a breach of that data somewhere down the line.

Users Responsible For Use

This is usually a good thing, but in this case it went horribly wrong.  From an IT perspective, this is exactly what an IT organization wants – user ownership of the application.  However, this is a prime example of how user ownership goes wrong.  In addition to the mismanagement of user access, users were also in control of how the database got used as well as what data went into it.  Based on my reading of the article, the issues documented are symptoms of a larger problem: it is highly likely that little to no training was provided regarding the Net-Centric Diplomacy database and how it was expected to be used.

This is a problem that is endemic in business as well as government.  Vendors and IT departments leave training up to their end users in the mistaken belief that applications these days are intuitively obvious and that all that needs to be provided is a good Help system that explains “everything” a user needs to know to use the software.  Yet how many of us have complained that the help topic we are trying to find is not covered?  The problem with this approach is that it is up to the users to familiarize themselves with the software, which no one ever does because the application is supposedly intuitively obvious.  If Help systems are so good, why are thousands of books published each year to explain how to use everyday applications like Microsoft Office, Oracle and Lotus Notes?

The first result of this lack of education was that information that did not belong in the database ended up in the database.  The input process worked by coding a mnemonic into a diplomatic message that would trigger the routing of the information into the database.  However, no one apparently explained clearly enough what belonged and did not belong in the database.  As a result, everything was coded to go into the database whether it really belonged there or not.  From a PCI perspective, I cannot tell you how many times we run into applications that are being used for purposes their vendors never anticipated.  As a result, cardholder data ends up unprotected in fields just because someone saw a need to retain it in an application never engineered to accept it.  This is also why scoping needs to be done by the organization, as cardholder data can end up all over.

The second result of this likely lack of education is that users were unaware of their responsibilities regarding the data to which they were now allowed access.  Obviously, since the information in the database was leaked, users were either not aware of their responsibilities or just did not care.  Worse yet, since there was likely no feedback to users who might be misusing the data, they were likely unaware that what they were doing was not allowed.  In the PCI realm, this is why policies, standards and procedures are so important, as is making sure that all users are aware of them.  While policies, standards and procedures do not in and of themselves stop a leak, most people do not want to break the rules if they are constantly made aware of them.  It is likely that users of the Net-Centric Diplomacy database were not regularly made aware of their responsibilities as PCI DSS requirement 12.6 requires.

You Need To Go Above And Beyond

Another concern that was identified was that data could be downloaded at will by any user.  While the State Department could limit downloads to thumb drives, it could not control downloads from other agencies.  Based on the article, it appears there was also no limit to the amount of information that could be downloaded.  As a result, whoever downloaded the information from the Net-Centric Diplomacy database could do so without worrying about being quickly discovered.

This is one of the biggest problems with information management today: ensuring that the information within the data store is properly used and remains in the data store.  Thanks to Microsoft, Oracle, IBM and other database vendors, access to databases can be obtained in a multitude of ways, such as ODBC, direct SQL query, and directly from tools such as Microsoft Office.  The bad news is that not all of these methods require authentication, so anonymous access can be obtained.  This is why PCI DSS requirement 7 exists: to make sure that authentication is always required in order to gain access to cardholder data.  However, we constantly run across people in organizations that are doing valuable data analysis but are using access methods to databases containing cardholder data that do not require authentication.  In a few instances, we have run across organizations that have written their own access control systems for ODBC to secure their data.

The PCI DSS has a requirement to monitor the access to cardholder data in requirement 10.2.1, but there is no requirement in the PCI DSS that calls out limiting the downloading of data.  This is an area where organizations need to go above and beyond the PCI DSS.  Most database management systems will allow you to limit the amount of data returned by any query.  While this is usually used to control runaway queries, it is also a good security practice as you can then make sure that no users can get a hold of the entire database without having to get special permission.
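As a hypothetical illustration of that above-and-beyond control, the sketch below enforces a per-query row cap in the application layer and flags any query that hits it.  The table name, cap value and alerting behavior are all assumptions; SQLite stands in here for whatever database management system you actually run.

```python
import sqlite3

MAX_ROWS = 1000  # hypothetical cap; tune to legitimate business need

def capped_query(conn, sql, params=()):
    """Run a query but never return more than MAX_ROWS rows, and flag
    any query that hits the cap so it can be reviewed."""
    cur = conn.execute(sql, params)
    rows = cur.fetchmany(MAX_ROWS + 1)  # fetch one extra to detect the cap
    if len(rows) > MAX_ROWS:
        # In production this would raise an alert per the incident
        # response plan; here we just truncate and report it.
        print("ALERT: query hit the row cap:", sql)
        rows = rows[:MAX_ROWS]
    return rows

# Demonstration against an in-memory table of 1,500 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (id INTEGER)")
conn.executemany("INSERT INTO cards VALUES (?)", [(i,) for i in range(1500)])
rows = capped_query(conn, "SELECT id FROM cards")
print(len(rows))  # 1000
```

The cap does double duty: it controls runaway queries, and it guarantees that nobody walks off with the entire database in one request without the attempt being noticed.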

I am sure as time goes on, more and more of the details of how the WikiLeaks breach occurred will be revealed.  However, just what has been revealed to date can provide a lot of lessons that we should all take to heart.



