Archive for the 'Requirement 11 – Regularly test security systems and processes' Category



24 Sep 13

Coming Attractions

On September 12, 2013, the PCI SSC released the drafts of version 3 of the PCI DSS and PA-DSS.  In reviewing the PCI DSS, I found six new requirements that will be considered ‘best practices’ until July 1, 2015, when they become full requirements.

  • 6.5.6 – Insecure handling of PAN and SAD in memory.
  • 6.5.11 – Broken Authentication and Session Management
  • 8.5.1 – Service providers with access to customer environments must use a unique authentication credential (such as a password/phrase) for each customer environment.
  • 9.9 – Protect point-of-sale (POS) devices that capture payment card data via direct physical interaction with the card from tampering and substitution.
  • 11.3 – Develop and implement a methodology for penetration testing that: is based on industry-accepted penetration testing approaches (for example, NIST SP800-115), includes coverage for the entire CDE perimeter and critical systems, includes testing from both inside the network, and from outside of the network attempting to get in, includes testing to validate any segmentation and scope-reduction controls, defines application-layer penetration tests to include, at a minimum, the vulnerabilities listed in Requirement 6.5, defines network-layer penetration tests to include components that support network functions as well as operating systems, includes review and consideration of threats and vulnerabilities experienced in the last 12 months, and specifies retention of penetration testing results and remediation activities results.
  • 12.9 – Additional requirement for service providers: Service providers acknowledge in writing to customers that they will maintain all applicable PCI DSS requirements to the extent the service provider handles, has access to, or otherwise stores, processes, or transmits the customer’s cardholder data or sensitive authentication data, or manages the customer’s cardholder data environment on behalf of a customer.

I will discuss requirements 6.5.6 and 11.3 in separate posts.  I am not going to discuss 6.5.6 until I have a better understanding of how the PCI SSC expects QSAs to test that memory is being managed properly.  I am avoiding 11.3 because it contains enough for a post of its own.  But the others can be addressed in this post.

First, I have to say that I was amazed these actually had to be codified, as they are already addressed through a number of other requirements.  But having encountered these situations numerous times in the field, I understand why the PCI SSC felt the need to explicitly codify them.

For requirement 6.5.11, the guidance provided states:

“Secure authentication and session management prevents unauthorized individuals from compromising legitimate account credentials, keys, or session tokens that would otherwise enable the intruder to assume the identity of an authorized user. “

This requirement is targeting botnet and Trojan attacks such as Citadel and Zeus.  The problem here is that these are attacks on the end user, not the merchant.  As a result, this new requirement will likely look for the merchant to secure authentication and communications in ways that minimize or even eliminate man-in-the-middle, man-in-the-browser and similar attacks.  It will be interesting to see how the PCI SSC expects this to be accomplished.
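To make the session management half of 6.5.11 concrete, here is a minimal sketch of the sort of practice the requirement points at: issuing session tokens from a cryptographically secure random source and expiring idle sessions, rather than deriving tokens from predictable values.  The function names, in-memory store and timeout value are my own illustration and are not specified by the PCI DSS.

```python
import secrets
import time

SESSION_TIMEOUT_SECONDS = 15 * 60  # illustrative idle timeout, not a PCI DSS value

# In-memory store for demonstration only; a real application would use a
# server-side session store and send the token in a Secure, HttpOnly cookie.
_sessions = {}

def create_session(user_id):
    """Issue an unpredictable session token for an authenticated user."""
    token = secrets.token_urlsafe(32)  # 256 bits from a CSPRNG
    _sessions[token] = {"user": user_id, "last_seen": time.time()}
    return token

def validate_session(token):
    """Return the user for a token, expiring idle sessions."""
    session = _sessions.get(token)
    if session is None:
        return None
    if time.time() - session["last_seen"] > SESSION_TIMEOUT_SECONDS:
        del _sessions[token]  # force re-authentication
        return None
    session["last_seen"] = time.time()
    return session["user"]
```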

It has been a long time coming for 8.5.1.  Most QSAs have encountered this situation and we never liked it.  The situation I speak of is managed service providers and software vendors using the same user identifier and password for every customer they support.  While one can appreciate why this occurs, it creates a problem should those common credentials become known outside the organization, which has been the case in a number of breaches.  As a result, the PCI DSS now includes this new requirement that managed service providers and software vendors use unique authentication credentials for each customer.

Requirement 9.9 explicitly addresses a best practice already used by a lot of merchants.  A number of merchants have experienced tampering of card terminals over the years, typically in the form of soldering a USB thumb drive or SD card into a terminal to collect track data and then swapping the doctored terminal for a good terminal at the merchant.  This threat is typically mitigated by video monitoring of terminals as well as the use of serialized security tape or tamper-evident seals over a terminal’s case seams that are checked at least daily to ensure that terminals have not been changed out or tampered with.

And finally, requirement 12.9 requires service providers to explicitly acknowledge in a document that they will maintain compliance with the PCI DSS for all relevant services.  Apparently the existing requirements in 12.8 were not providing enough assurance that service providers were complying with the PCI DSS.  So now all service providers will have to acknowledge, in writing, that they will maintain compliance with all relevant PCI DSS requirements for all services provided to their customers.

04 Jul 13

Is It The Standard Or Process?

I question whether or not there really is a need to change the PCI DSS this fall.  If you look at the requirements, and if they were truly followed as designed, the vast majority of breaches would either not occur or would be fairly limited and contained.  But the breach reports from Verizon, Trustwave and the like tell a different story.  Most breaches were the result of one or more PCI requirements that simply were not working properly.

I think the changes need to be made in the assessment process, as that is where the most impact could be realized.  The current assessment process falls well short of confirming that an organization is complying with the PCI DSS all of the time, not just when the QSA is around.  This results in the very valid complaint by PCI critics that the process is a “check the box” approach.

If you look at the current PCI assessment process, it tests a very limited number of requirements over the assessment period.  QSAs only ensure that an organization has had four quarters of passing external and internal vulnerability scans (11.2), quarterly facility wireless scanning (11.1), and that any changes to the cardholder data environment have been appropriately documented and approved (6.4).  All of the other requirements are assessed only at the point in time when the QSA happens to look at them.  The rest of the time the organization could be non-compliant and the QSA would be none the wiser.

If we are truly going to address the findings in these breach reports, we need to be ensuring that organizations comply with the PCI DSS all of the time.  That means that QSAs need to be testing more requirements over the assessment time period to ensure that an organization is actually complying with requirements.

Based on that premise, here are my recommendations.

  • Change controls.  The PCI DSS requires QSAs to request all changes to the CDE so that they can determine whether more than quarterly vulnerability scanning and annual penetration testing was required during the assessment period.  The breach reports indicate that attackers are consistently finding simple misconfigurations as a way into networks, which means that change control is likely not being followed consistently.  QSAs should be testing the entire change control process to ensure that changes to infrastructure are being appropriately tracked, reviewed and approved throughout the assessment period.  The reason is that we regularly encounter information security personnel who are only involved in evaluating and reviewing changes that affect PCI compliance and nothing else.  We also encounter instances where only changes that affect PCI compliance are tracked.  You have to wonder how changes are determined to affect PCI compliance.  Obvious changes to the CDE are easy to identify.  But other changes could implicitly affect the CDE and not be identified as such because the people reviewing them do not see the connection.  As a result, organizations have no idea whether changes outside of the CDE could impact their PCI compliance because there is either no record of those changes or information security has not been consulted.  These organizations are typically relying only on luck to protect them.
  • Mitigation of vulnerabilities.  Most QSAs assess patching by reviewing the quarterly vulnerability scans and making sure that vulnerabilities do not appear on the next quarterly scan.  If any vulnerabilities appear on subsequent scanning reports, then QSAs are supposed to assess what mitigating controls were put in place while the vulnerability was unpatched.  QSAs typically do a pretty good job proving that organizations’ patching processes work reliably.  But when it comes to mitigation, QSAs do not necessarily do a great job determining that open vulnerabilities are truly mitigated.  This is not always the QSA’s fault as the organization may not be keeping the necessary documentation to support that open vulnerability risks are being mitigated.  The bottom line here is that the assessment process needs to assess all vulnerabilities that were left unpatched during the assessment period to ensure that they were mitigated while they remained unpatched.
  • Access controls.  As with change controls, the current PCI assessment process only requires the QSA to test the accounts that have access to the cardholder data environment (CDE).  To add insult to injury, most organizations have multiple access control systems in use for securing servers, infrastructure, third party monitoring, etc.  All of these systems are typically in scope for assessment, but a lot of QSAs focus only on those managed in-house.  Access control is an all or nothing proposition; you are either doing it consistently across the board or you are not.  Based on the breach reports, attackers are using whatever accounts they can to get a foothold and then working their way to accounts that provide more privileges.  If that is the case, then how will testing only accounts that have access to cardholder data (CHD) help this situation?  It does not.  Therefore, the assessment of access controls needs to look at the whole picture, not just a privileged few.  The bulk of the testing may be focused on those with access to CHD, but the entire process of granting and revoking access needs to be assessed to ensure that controls are being followed for everyone.  Testing of privileged accounts needs to be enhanced to address what those accounts have access to.  Do the DBAs have administrative access to the OS?  Do network administrators have administrative access to servers?  Do system administrators have administrative access to network devices?  These are just examples of questions that a lot of QSAs do not answer during their assessments.  A lot of QSAs are only interested in determining that privileged access is controlled, not who has access, why they have access and whether that access is justified.
  • Monitoring and Alerts.  QSAs are to ensure that logging and alerting are enabled as part of their testing.  QSAs are to ensure that an organization has log data online for at least three months and offline for a year.  QSAs then move to observing the various monitoring consoles to ensure that alerts are generated.  However, no testing is done across the reporting period to ensure that alerting functioned as configured, for example by sampling the alerts generated and verifying that those alerts were properly investigated and either dismissed or escalated.  Based on the breach reports, anomalies are not being researched, and the result is breaches that take months to identify or, worse, go unnoticed.
  • Sampling.  Finally, testing needs to be more than just testing three or four items in each of these categories.  There needs to be sampling over the entire assessment period, not just the day the QSA is investigating the requirement or one item per quarter.  A random statistical sample would be best, but given the size of some populations that might not be feasible in the time frame required to produce a report.  However, it is not inconceivable that this could result in at least 50 to 100 items being tested in some of these categories.  A rough sketch of how such a sample might be drawn follows this list.
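As promised, here is a rough sketch of drawing a random sample across a full year of change tickets rather than a handful from the week the QSA is on-site.  The file name, field names and sample size are hypothetical; nothing here is prescribed by the PCI DSS or the PCI SSC.

```python
import csv
import random

def sample_changes(csv_path, sample_size=75, seed=None):
    """Draw a random sample of change records covering the full assessment period."""
    with open(csv_path, newline="") as f:
        changes = list(csv.DictReader(f))  # expects e.g. ticket_id, date, approver columns

    rng = random.Random(seed)
    size = min(sample_size, len(changes))
    selected = rng.sample(changes, size)

    # Group the sample by month so reviewers can confirm the whole period is represented.
    by_month = {}
    for change in selected:
        month = change["date"][:7]  # assumes ISO dates such as 2013-04-17
        by_month.setdefault(month, []).append(change["ticket_id"])
    return selected, by_month

# Example usage (hypothetical file):
# sample, coverage = sample_changes("change_tickets_2013.csv", sample_size=100)
```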

These are the key tests that would go a long way toward improving the assessment process and addressing the findings from the breach reports.  The trouble is that this sort of enhanced testing is obviously going to drive up the cost of a PCI assessment, whether the QSA does the testing or an organization’s internal audit function does it.  If the PCI SSC and the card brands are truly serious about making the standards meaningful, then this is where I would focus, not on making changes to the standard.

Remember my mantra, “Security is not perfect.”  So while these changes are not going to absolutely, positively prevent all breaches, they will go a long way in ensuring that organizations are as secure as they can be all of the time rather than just when the QSA is around.

28 May 13

BlackPOS

I got a Tweet from a friend today regarding a new piece of malware found in the wild and dubbed ‘BlackPOS’.  BlackPOS is very similar in nature to vSkimmer.  Now before everyone goes off and panics, if you are religiously following the PCI DSS, BlackPOS should not be an issue and here is why.

  • Requirement 11.5 – Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.  BlackPOS does a lot of manipulation around known file names, but the hash values of those files will change from the known good values, so any file monitoring system should alert on that fact.  It also uses file names that would never exist on a production system, so those should also generate an alert.  In addition, BlackPOS creates a TXT file that should likewise generate an alert when created.  However, if you are not alerting in real time, you should be, so that you pick up these issues as soon as possible.  This is where the bad guys are headed with their attacks, so you may as well alert as soon as an incident occurs so that you can address it before it gets out of control.  (A bare-bones sketch of the hash comparison a FIM tool performs follows this list.)
  • Requirement 1.1.5 – Documentation and business justification for use of all services, protocols, and ports allowed, including documentation of security features implemented for those protocols considered to be insecure.  BlackPOS uses FTP to move the TXT file from the POS system to the attacker’s server.  If you are allowing FTP to flow freely from your POS or cardholder data environment (CDE) to anywhere on the Internet, you are not PCI compliant in my opinion, even if you have some bizarre business justification.
  • Requirement 5.1 – Deploy anti-virus software on all systems commonly affected by malicious software (particularly personal computers and servers).  While BlackPOS was only identified today, the anti-virus vendors will most likely have signatures out by the time you read this, so they will be looking for BlackPOS by the time you get your updated signatures.
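To illustrate the 11.5 point above, here is a bare-bones sketch of the comparison a file-integrity monitoring tool performs: hash the files in a monitored directory and flag anything new, missing or changed relative to a known-good baseline.  The paths are hypothetical, and a production FIM product does far more (protected baselines, real-time hooks, alert routing).

```python
import hashlib
import json
import os

def hash_tree(root):
    """Return {relative_path: sha256} for every file under root."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            hashes[os.path.relpath(path, root)] = digest
    return hashes

def compare_to_baseline(root, baseline_file):
    """Report files that are new, missing, or whose hashes changed."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    current = hash_tree(root)

    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in current if p in baseline and current[p] != baseline[p])
    return added, removed, changed

# Example usage (hypothetical POS application directory):
# added, removed, changed = compare_to_baseline(r"C:\pos_app", "baseline_hashes.json")
```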

Just these three requirements can stop this sort of an attack.  Yet, time and again we see these attacks succeed because people are not properly implementing file-integrity monitoring and are not restricting network traffic flowing out of their internal networks.

PCI compliance does work when you use it the way it was intended.

24 Feb 13

2013 Threats To Databases

Akamai just released its third quarter 2012 Internet statistics and points to China as the source of at least a third of all attacks.  Not only that, the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I got on a Webinar with Application Security Inc. where they discussed what they see as the top risks for databases in 2013 and, lo and behold, the risks align.  It is no wonder that Chinese attackers are focused on SQL Server; not only are targets plentiful, but most of the time the software was improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement,” only to never get back to it, or version 2 turns out to be a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection, it was because the protocol being used required accepting a certain amount of risk in order for the application to work.  While a few years ago that may have been true, it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our “I probably won’t be here in six months anyway” view of employment, it will be someone else’s problem, so why bother?  The next guy can fix it.
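For what it is worth, doing it securely is rarely more code than doing it stupidly.  Here is a minimal sketch using a parameterized query; the table and column names are made up for illustration, and sqlite3 simply stands in for whatever database driver an application actually uses.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Alice')")

user_supplied = "1 OR 1=1"  # the kind of input an attacker sends

# Vulnerable pattern: user input concatenated straight into the SQL statement.
# query = "SELECT name FROM customers WHERE id = " + user_supplied

# Safer pattern: the driver binds the value, so it is treated as data, not SQL.
rows = conn.execute(
    "SELECT name FROM customers WHERE id = ?", (user_supplied,)
).fetchall()
print(rows)  # [] -- the injected text does not match any id
```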

Requirement 6 and, in particular, requirement 6.6 require that applications be tested and remediated before they are placed into production.  The operative word here is ‘before’, which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it never gets patched, resulting in a prime target for an attacker.  Better yet for the attacker, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 requires all of the build and security standards.  The idea is that these standards tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start, as opposed to finding out after the fact that one of those “just in case” services or features is insecure.
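As an illustration of the kind of check a build standard can drive, the sketch below compares a SQL Server instance’s settings against a short list of features the standard says should stay disabled.  The connection string and the feature list are examples only, not anything mandated by the PCI DSS; sys.configurations is a standard SQL Server catalog view, and an organization’s own standard defines what actually belongs on the list.

```python
import pyodbc

# Hypothetical connection details for a Windows-authenticated connection.
CONN_STR = "DRIVER={SQL Server};SERVER=dbserver01;Trusted_Connection=yes"

# Example entries from a build standard: features that should remain disabled (value 0).
SHOULD_BE_DISABLED = ["xp_cmdshell", "Ad Hoc Distributed Queries", "remote access"]

def check_build_standard(conn_str=CONN_STR):
    """Flag configuration options that have drifted from the documented standard."""
    cursor = pyodbc.connect(conn_str).cursor()
    placeholders = ",".join("?" for _ in SHOULD_BE_DISABLED)
    cursor.execute(
        "SELECT name, CAST(value_in_use AS int) FROM sys.configurations "
        f"WHERE name IN ({placeholders})",
        SHOULD_BE_DISABLED,
    )
    drifted = [(name, value) for name, value in cursor.fetchall() if value != 0]
    for name, value in drifted:
        print(f"NON-COMPLIANT: {name} is enabled (value_in_use={value})")
    return drifted
```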

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication, which allows both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is that people seem to forget about the SQL Server-managed accounts.  As a result, those accounts typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road.  These accounts also get forgotten for monitoring, so if their credentials are compromised, it might not be recognized for a while, if at all.

Even if SQL Server-managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not automatically enforced for those accounts the way they are for Active Directory accounts.  As a result, people outside of the DBA arena assume that SQL Server-managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
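If SQL Server-managed logins are in use, they at least need to show up in the periodic account reviews.  Here is a sketch of an inventory query run from Python; the connection string is hypothetical, while sys.sql_logins and its is_policy_checked/is_expiration_checked flags are standard SQL Server catalog columns.

```python
import pyodbc

CONN_STR = "DRIVER={SQL Server};SERVER=dbserver01;Trusted_Connection=yes"  # hypothetical

def inventory_sql_logins(conn_str=CONN_STR):
    """List SQL Server-managed logins and whether password policy/expiration is enforced."""
    cursor = pyodbc.connect(conn_str).cursor()
    cursor.execute(
        "SELECT name, is_disabled, is_policy_checked, is_expiration_checked "
        "FROM sys.sql_logins ORDER BY name"
    )
    findings = []
    for name, disabled, policy, expiration in cursor.fetchall():
        # Logins without policy or expiration checks do not inherit the Windows password rules.
        if not disabled and (not policy or not expiration):
            findings.append(name)
        print(name,
              "disabled" if disabled else "enabled",
              "policy" if policy else "no-policy",
              "expiration" if expiration else "no-expiration")
    return findings
```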

An access control issue that occurs with all databases is the use of “service accounts” for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control.  Then to access the database, the application uses a single service account.  Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  When the credentials are hardcoded, changing the service account’s credentials requires an application change.  But the larger question is who has access to the credentials and how you are ensuring that everyone understands their responsibilities for keeping those credentials secure.
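A small sketch of the alternative to hardcoding: pull the service account credential from the environment (or better, a secrets manager) at runtime, so rotating it does not require an application change.  The variable names are illustrative only.

```python
import os

def get_db_credentials():
    """Fetch the service account credential at runtime instead of baking it into code."""
    user = os.environ.get("APP_DB_USER")          # illustrative variable names
    password = os.environ.get("APP_DB_PASSWORD")  # ideally sourced from a secrets manager
    if not user or not password:
        raise RuntimeError("Database credentials are not configured for this environment")
    return user, password

# The credential can now be rotated by operations staff without a code change,
# and access to it can be restricted and logged outside the application.
```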

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, then it will be virtually impossible to protect the information in the databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users’ credentials as well as the credentials for administrators of the e-Commerce environment.  As a result of the other conditions, compromise the database and you have access to the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are also likely stored in the same database or in a related database that shares administrator credentials with the compromised database.

Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because that would raise their cost to a point where they are not priced for small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will still be at risk.

One would think using Active Directory or another directory service would be an easy solution.  Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly.  On the face of it, it would appear that way, and it does work for organizations that host their own Web presences.  But for service providers it is not that easy once you realize that each customer’s Web presence would have to have its own branch in the directory’s forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which would drive up the cost of a site.  The manual process also delays establishing a site until the directory maintenance is completed, which is totally unacceptable in our “have to have/do it now” world.

For the time being we are stuck with using the database to store credentials.  That being the case, the credential database should not be mixed with the other databases; it should stand on its own and not be accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found to be too cumbersome to implement.
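Part of that proper engineering is not storing passwords in a form an attacker can simply decrypt with keys sitting in the same database.  A minimal sketch of the usual alternative, a salted and iterated hash, is below; the iteration count is illustrative and should be tuned to current guidance.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to current guidance and available hardware

def hash_password(password: str):
    """Return a random salt and a PBKDF2 hash; only these are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)

# salt, digest = hash_password("correct horse battery staple")
# assert verify_password("correct horse battery staple", salt, digest)
```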

06 Aug 12

Third Party Service Providers And PCI Compliance

There seems to be a lot of confusion regarding third parties that provide networking or hosting services and their obligations regarding PCI compliance.  This confusion is not uncommon as merchants and their service providers have not necessarily been provided enough guidance to understand their obligations.  I hope this post will clarify those obligations for all involved.

If you learn nothing else from this post, learn this: if a third party is providing your organization a service that has access to your cardholder data environment (CDE), or the third party could come into contact with your cardholder data (CHD), then that third party must ensure that the service complies with all relevant PCI requirements.  As a result, the third party needs to either allow you or your QSA to assess the services that they are providing, or provide you with an Attestation Of Compliance (AOC) documenting that those services have been assessed and found PCI compliant.

In the past, I have stated that third parties could also submit a letter signed by an officer of the third party stating that all of the services provided to their customer are PCI compliant.  Now that v2.0 of the PCI DSS has a separate AOC and the PCI SAQs have the AOC built into the SAQ, there should be no reason to need such a letter or to ask for one.  If a letter is what your third party is offering, it is better than nothing, but you should be pushing them hard for an AOC.  If they are reluctant to get you an AOC, as part of your vendor management process, you should take that into account and probably begin looking for a new vendor that will provide an AOC for their services.

The most common issue we run into with third parties is that their AOC or other representations of PCI compliance do not cover all of the services provided to the customer.  In case after case, we see the AOC covering requirements 9 and 12 and nothing else even though the services provided may require compliance with some or all of PCI requirements 1, 2, 3, 4, 5, 6, 7, 8, 10 and 11.

In a lot of cases, it is not that the third party does not want to comply with PCI; it is that they are taking a lowest common denominator approach and only had those services assessed that all customers requiring PCI compliance are asking about.  That way they reduce the cost of having a QSA assess their environment.  These third parties are accepting the fact that any customer that needs more services assessed will have to do it themselves.

Related to this issue is the third party that offers their SSAE 16 Service Organization Control (SOC) 1 report as proof of PCI compliance.  While a SOC 1 report can cover a few PCI requirements, people must remember that the SOC 1 report is structured specifically for financial auditors to ensure that the controls at a third party are properly constructed to support financial reporting at the customers.  As a result, a SOC 1 report is not going to be a substitute for an AOC that covers all services.  There is an alternative, which is to have the third party go through a SOC 2 report that focuses on the security controls of the PCI in-scope services provided.  We are hearing from third parties inquiring into the SOC 2 report, but cost and a lack of customers requesting such a report are the reasons we do not see more SOC 2 reports available.

Another common issue we encounter is the refusal of the third party to cooperate in assessing the services provided to ensure they are PCI compliant.  There are still third parties that argue their services are not in-scope for PCI compliance even when it is painfully obvious that the third party’s personnel have access to their customer’s CDE and/or CHD.

The most common third party relationship we encounter is the management of routers or other layer 3 devices.  Where we encounter the most confusion in this relationship is in regard to the use of encryption to keep the network services organization out of scope for PCI compliance.  The key here is that if the network services organization manages the encryption of the network, then they are in scope for PCI compliance.  The reason is that the employees of the network services organization have access to the encryption keys and therefore could decrypt the communications and gain access to CHD transmitted over the network.  As a result, at a minimum, the network services organization is responsible for complying with some or all of requirements 1, 2, 4, 6, 7, 8, 9, 10 and 12.  If you receive such services and are not getting an AOC that covers these requirements, then you should be doing more work on your own as well as asking the third party why they are not covering more of the necessary PCI requirements.

The next most common service we encounter is the network services firm that is managing or monitoring an organization’s firewalls, remote access or intrusion detection/prevention.  Such services always put the third party in scope for PCI compliance.  Some or all of requirements 1, 2, 6, 7, 8, 9 and 12 will need to be assessed for compliance with the PCI DSS.  The log capture and analysis requirements in requirement 10 may also apply if your organization is not itself capturing and analyzing the log data from these devices.

Another group of third parties we encounter a lot are records retention vendors.  Organizations like Iron Mountain have conducted their own PCI compliance project and readily hand out their AOC to customers.  However, where we see issues is with such vendors that provide their own tape library for their customers to use for backup.  We have encountered a number of third parties doing the encryption at their library, which puts them in scope for PCI compliance, at a minimum, for requirements 3, 4, 6, 7, 8, 9, 10, 11 and 12.

We encounter outsourcing the data center a lot with large organizations, but small and mid-sized organizations are also hopping on the data center outsourcing bandwagon.  Where this puts the third party in-scope for PCI compliance is when the third party is responsible for maintaining the environment such as applying patches, managing servers or any other activities that would allow the third party’s personnel to potentially have access to CHD.  In such situations, at a minimum, the third party is responsible for complying with some or all of requirements 2, 5, 6, 7, 8, 9, 10 and 12.  Compliance with some or all of requirement 1 may be applicable if the third party is managing your firewalls or routers.  Compliance with some or all of requirements 3 and 4 may also be applicable if the third party is responsible for managing encryption keys for encrypting CHD or encrypting communications.

Where the most confusion regarding third party responsibilities occurs is in regard to “The Cloud.”  The most common reason for this is that every vendor seems to have a different definition of what “The Cloud” is, based on their particular services.  Using the definitions provided by the National Institute of Standards and Technology (NIST) in their publication SP800-145, ‘The NIST Definition Of Cloud Computing’, I can provide the following guidance.

If your organization is purchasing Infrastructure as a Service (IaaS), then the third party providing these services will typically be out of scope for PCI compliance except for requirements 9 and 12.  There are some instances where IaaS implementations may require compliance with the PCI DSS if the third party is managing network infrastructure that comes into contact with CHD as is usually the case with private cloud environments.

For Platform as a Service (PaaS) and Software as a Service (SaaS), the third party will have to provide PCI compliance for the services they are providing to your organization.  That is because with either of these service offerings, the third party must have access to the CDE and will have the potential of coming into contact with CHD.

The problem with the majority of PaaS and SaaS vendors is that they only deal with your organization through a Web-based interface, i.e., everything is automated – contracts, support, etc.  As a result, the contract is a “take it or leave it” situation that does not usually cover everything needed for PCI compliance, and there is no way to independently verify the representations made by the third party.  On top of that, the AOC provided by the third party typically covers only the physical security requirements in requirement 9 and possibly some of requirements 11 and 12, and nothing related to the other requirements, even though the third party may have responsibilities for PCI compliance outside of what is represented in their AOC.

If this is the case, there is little you or any QSA can do to properly assess the environment to ensure it is truly PCI compliant.  As a result, we have a lot of organizations that try to develop compensating controls for these cloud implementations.  These organizations very quickly and frustratingly find out that there are very few, if any, controls on their side of the equation that can get them to “above and beyond” the original requirement.

I know there are a lot of other examples of services being provided to merchants.  But, hopefully these examples can assist you in clarifying what you need or do not need from your third parties when it comes to PCI compliance.

02 Oct 11

Defense In Depth

I have a slide in my security presentation deck that discusses the concept of defense in depth and how, when you start opening ports or using encrypted data streams, you are punching holes in one or more of your security layers.  It amazes me how many people still do not understand how defense in depth works and how much security standards such as the PCI DSS rely on this concept.

So let us take a look at the various elements of security and the requirements of the PCI DSS and see how they bring defense in depth to bear.  Keep in mind this is an example and does not encompass everything an organization could do to increase defense in depth.

For most organizations, the first level of defense is at their firewall.  Requirements 1 and 2 talk to how you should use a firewall and secure it.  The biggest mistake that organizations make is not configuring their firewall properly.  And by configuration, I am not just talking about the configuration of the firewall’s software; I am also talking about where and how the firewall is used in the network.

The next level of defense for most networks is usually some form of intrusion detection/prevention system.  Some of the requirements in 10 and 11 talk to intrusion detection/prevention.  IDS/IPS capability may be provided in a separate appliance or may be part of an organization’s firewall.  The key to using an IDS/IPS is ensuring that it is kept current with its attack signatures, monitoring its log data and/or console and ensuring that it is not being overwhelmed by network traffic.

One thing that continues to amaze me is how many implementations of IDS/IPS I encounter where the IDS/IPS sits in the middle of encrypted data streams.  IDS/IPS systems cannot examine encrypted data streams unless they have the decryption keys, which they typically do not have.  As a result, encrypted data streams are not examined and therefore sensitive data and/or attacks could be going right past the IDS/IPS.

How users authenticate to your network and devices is also a level of defense.  Requirements 7 and 8 of the PCI DSS talk to this point.  And it is not just authentication to applications that process, store or transmit cardholder data, it is also authentication to infrastructure devices and to databases.  

It has been more than five years since the “sa” default password debacle and yet you still encounter applications that use service accounts to access their database and those service accounts have no password.  The rationale?  “We did not want to code the password into the application,” is the common reply.

The other big area of authentication issues that you encounter is with firewalls, routers, switches and other network infrastructure.  The problem is that the network administrators all use the same account and password.  You can understand their rationale, particularly on those networks where you are administering thousands and thousands of devices.

There are a number of ways to address this situation, but these are my favorite two.  The first is to implement 802.1X authentication using a RADIUS server.  Under this scenario, every network administrator has their own unique account and password to access the network devices.  Those unique accounts should be different from the network administrator’s account they use to get email and network access like every other user.  A lot of organizations already have the RADIUS server implemented for remote access, so adding in network administration access control is relatively easy.

The second way to address network administration access is to use a “jump box.”  In a “jump box” implementation, two or more “jump boxes” are placed at strategic points on the network and all network administration access is conducted through a “jump box.”  The “jump box” is fully instrumented in that all keystrokes, applications, etc. are logged and those logs are reviewed at least daily to ensure that network administrators are not changing things they should not be changing.  That means comparing service tickets for the network against the logs from the “jump boxes” and ensuring that only what was required to be changed was changed.  “Jump boxes” can also be used to control access for server administration.
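As a rough illustration of the daily review described above, the sketch below flags jump box log entries whose timestamps do not fall inside any approved change window.  The data structures are invented for the example; a real review would pull from the ticketing system and the jump box’s own log format.

```python
from datetime import datetime

# Hypothetical approved change windows pulled from the ticketing system.
tickets = [
    {"id": "CHG-1001", "start": datetime(2011, 9, 28, 22, 0), "end": datetime(2011, 9, 29, 2, 0)},
]

# Hypothetical jump box log entries: (timestamp, administrator, command).
log_entries = [
    (datetime(2011, 9, 28, 23, 15), "jsmith", "conf t"),
    (datetime(2011, 9, 30, 14, 5), "apatel", "no access-list 101"),
]

def unapproved_activity(entries, windows):
    """Return log entries that occurred outside every approved change window."""
    flagged = []
    for when, admin, command in entries:
        if not any(w["start"] <= when <= w["end"] for w in windows):
            flagged.append((when, admin, command))
    return flagged

for when, admin, command in unapproved_activity(log_entries, tickets):
    print(f"REVIEW: {admin} ran '{command}' at {when} with no matching change ticket")
```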

A level of defense that usually gets little recognition is operating system (OS) hardening.  What some people seem to forget is that any computerized device has an OS whether it is a firewall, router, switch or server.  Requirement 2 talks not only to the hardening of wireless, but also firewalls, switches, routers and servers.  Every vendor publishes a guide that explains how to securely implement their OS.  Where things can get sticky is with third parties that argue that their product or software will not function if you follow the vendor’s OS hardening recommendations.  In my experience, testing a vendor’s product or software in a hardened environment typically does not have an adverse result.  However, the key is to conduct a test.

Another level of defense is anti-virus and anti-malware software.  This solution also usually includes a personal firewall on mobile devices such as notebooks, netbooks and smartphones.  Requirement 5 of the PCI DSS talks to anti-virus and anti-malware while a requirement in 1 talks to personal firewalls.  Nothing gets some people wound up more than anti-virus software.  The requirements in 5 can have compensating controls, but implementing those compensating controls consistently on mobile devices is usually just about impossible.  So while you may not have anti-virus/malware on your e-Commerce servers, you should have it on all of your desktops, notebooks, netbooks and other systems.

A level of defense that most organizations poorly manage is their collection and analysis of log data from their network devices and servers.  Requirement 10 speaks to the importance of log data.  As I have written before, log data is IT’s version of a commercial aircraft’s flight data recorder.  If you want to know why a problem occurred, log data from your devices can usually point you to the reason.  The problem most IT professionals have with log data is that they do not want to log everything because, in their opinion, that generates too much data.  However, until you have an incident, you do not know what log data will be important in identifying why the incident occurred; therefore you need all of it.  The last thing you want to have happen is to tell management that you could not determine the cause of an incident because you did not record the critical information required to identify it.

The final defense most people think of is application development, which is covered by requirement 6.  If you are going to get push back on the PCI DSS or any other security program, this is the most likely and consistent place you will get it.  Application developers are very protective of their environments, so when you start infringing on their area, they can get rather upset.  As a result, you hear the typical lament from developers that security “restricts their creativity.”

In today’s rush to get things done, application developers usually do not have security at the front of their minds.  As a result, by the time anyone knows that there is a security issue, it is too late for it to be fixed and the application goes into production with the fix to be part of version 2.  That is the whole point the PCI DSS addresses in requirements 6.4, 6.5 and 6.6: avoid putting susceptible software into production by building a certain amount of security into the development process to minimize the number of security issues that end up in production.

The real final defense is an organization’s policies, standards and procedures.  Yes, that paperwork that everyone thinks is “make work” really does have a purpose.  An organization’s policies, standards and procedures are the rules that everyone is to follow to ensure security.  Those rules also provide a way to measure people’s compliance so that, in the event of an incident, those people that did not follow the policies, standards or procedures can be shown their mistakes and correct their actions in the future.  These rules also provide the organization’s framework for explaining to personnel what its information assets are and how the organization protects them.

There are a lot more options for defense in depth, but I think you get the idea.  Now that you understand how defense in depth works, you should now also understand what happens when security personnel are asked to open ports for an application or change configurations that reduce the number of levels in an organization’s defenses.  The fewer levels involved the higher the likelihood that a lapse in control can result in a breach, particularly when a number of lapses in controls are occurring simultaneously.  This is how supposedly PCI compliant organizations end up breached.

02 May 11

Draft PCI DSS v2.0 “Scorecard” Released

It has been a long wait, but the PCI SSC has finally given us a look at the new “scorecard” for v2.0 of the PCI DSS.  For those of you that never knew about the “scorecard,” it was given to QSAs to assess Reports On Compliance (ROCs) to ensure that QSAs have properly conducted PCI assessments.  I have not had a chance to get through all 112 pages of this document, but I have gotten through the first part of it and I wanted to share my thoughts.

The first change to the “scorecard” is its name.  It is no longer the “scorecard,” it is titled ‘ROC Reporting Instructions for PCI DSS v2.0’.  The naming seems to indicate that once the QSA review period is over it will be posted to the PCI SSC’s Web site in the Documents Library.

Overall, the document is similar to the scorecard for v1.2.1, but it no longer documents the scores that the PCI SSC QA team will use to assess QSAs.  However, from the way it is written, I would assume that if a requirement in the ROC does not contain everything documented in the Reporting Instructions, it is considered to have not met the QA requirement.

Another general comment I have is that it is woefully lacking in examples.  While there seems to be a significant amount of guidance provided for what to write in the ROC, there are also ambiguous or unclear references that could be explained if the PCI SSC provided relevant examples of what it desires the QSAs to write.

The biggest change I have found thus far is the removal of the requirement to observe network traffic, as the Network Monitoring column is gone from the Reporting Instructions.  Prior to this point, QSAs were required to obtain network traffic via Wireshark or a similar tool to prove that network traffic is encrypted.  I reviewed requirements 1.2.1.a, 1.2.1.b, 3.2.1, 3.2.2, 3.2.3 and 11.4.a that had the Network Monitoring requirement in the v1.2.1 scorecard.  Based on the training for the 2011 QSA recertification, network monitoring testing is still something needed for confirming compliance with requirement 1, so even though it has been removed as a column, it appears to still be required.  However, in the Reporting Instructions the network monitoring is not explicit, so this is one of those areas where the PCI SSC will definitely need to clarify things.
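Whatever the Reporting Instructions end up requiring, demonstrating that a service actually negotiates encryption is straightforward.  Here is a small sketch that records the protocol and cipher a server offers; it illustrates the kind of evidence a packet capture also provides, is not an ASV-approved test, and the host name is hypothetical.

```python
import socket
import ssl

def check_tls(host, port=443):
    """Connect to a service and report the negotiated TLS protocol and cipher suite."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

# Example usage (hypothetical host):
# protocol, cipher = check_tls("payments.example.com")
# print(protocol, cipher)
```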

The section in the Executive Summary at the front of the ROC that discusses how a network is segmented to minimize scope will now require a fairly detailed discussion regarding that segmentation.  All network segments need to be described along with their purpose as well as a discussion of how the segments are architected and whether the segments carry cardholder data (CHD).  If access is provided to the cardholder data environment (CDE), that access needs to be described and that description needs to document how access is controlled.  It is very clear from the write up surrounding this section that QSAs and their clients will have to put much more work into this section to satisfy the PCI SSC.

Another clarification area is with the review of system configurations done as part of requirements 1 and 2.  The guidance now given by the PCI SSC is that they no longer want the documentation to be a list of configuration files that were reviewed by the QSA.  However, in the next breath, the Reporting Instructions tell the reader that a QSA must provide enough detail to prove that configuration files were reviewed.  So what is an acceptable level of detail?  Can we say that we reviewed 5 or 25 firewall configuration files?  In the past, we were told that this sort of approach was unacceptable.  The PCI SSC will need to provide one or more examples of language that they will accept.

Of all of the things I have read thus far, the one that just gets me seething is from the “Dos and Don’ts” page.  One of the “Don’ts” is “Don’t copy responses from one Testing Procedure to another.”  Further down on the list is “Don’t cross reference between responses.”  After going through our QA assessment and remediation, we were told by the QA person that we needed to do a better job of putting all of the relevant information from earlier requirements into every requirement, as each requirement needs to be able to stand on its own.  But now, according to the Reporting Instructions, you cannot bring all of that documentation into the new requirement by using cut and paste.  What a bunch of make-work.

But this make-work exists because the PCI SSC is basically implying that it cannot trust its QSACs to do the work that is required to ensure an organization is complying with the PCI DSS.  However, just because a QSA writes something in a ROC does not mean they actually did the work.  It just means that the QSA knows how to write what the PCI SSC wants to read.  And to make matters worse, the PCI SSC provides the Reporting Instructions as guidance on just what to write, while also telling QSACs to develop ROC templates to speed the writing process.

A prime example of this is a new requirement in the section of the ROC where the QSA documents the list of people interviewed and/or observed.  The PCI SSC now requires the QSA to document what these people were interviewed about or were observed doing.  The purpose of this new requirement is to provide even more “proof” that a QSA did their job.  Another, more minor example of the PCI SSC trying to get “proof” of a QSA’s work effort is the increased level of detail being asked for regarding the dates that QSAs were on-site for fieldwork.  In the past, QSAs were only required to document the period covered by the assessment.  However, QSAs are now being required to also document all of the dates of their fieldwork as well as the duration of their fieldwork and review period.

This is one of my biggest issues with the ROC process.  The PCI SSC refuses to adopt a more intelligent and cost effective reporting process of documenting requirement exceptions.  Instead, the PCI SSC requires QSAs to document their fieldwork process in the report.  As a result, an inordinate amount of time, paper and hence money is spent on what is really, in my humble opinion, a totally worthless effort.

I understand why this was required.  When the PCI SSC did not have the right to review a QSA’s work papers and other documentation, having such documentation in the ROC was the only way the PCI SSC, card brands and acquiring banks could assess whether or not a QSA had done their job.  Now that more than a year has gone by since the PCI SSC required all QSACs to include verbiage that allows the PCI SSC to review a QSAC’s work papers, putting all of this effort into a response writing requirement should no longer be required.  QSAs should be able to mark a requirement either ‘In Place’ or ‘Not In Place’.  If a requirement is ‘Not In Place’, then the QSA should document why the requirement is not in place and what the organization is doing to remediate the problem and when the remediation will be complete.  Such an approach would make the creation of the ROC much faster and would make the ROC much quicker to read and easier to understand.  This is the approach used in the accounting industry for their SSAE 16 reports and there is no reason why the PCI SSC could not adopt the same approach.

The PCI SSC continues to cling to this inane reporting requirement because it apparently is relying on the readers of the ROCs to “rat out” those QSACs that are producing inadequate reports.  I hate to be the bearer of bad news, but based on my review of ROCs from other QSACs that I have encountered over the last year, the “bad eggs” are not being weeded out.  Based on my interaction with acquiring banks and various card brands, there are a lot of ROCs that are not being read in detail.  And even those ROCs that are being read, most comments surround anything that is determined to have been ‘Not In Place’.  Occasionally we get a question about an ‘In Place’ item.  Obviously the current approach is not working and as long as the PCI SSC continues this approach, we are not going to build trust between the PCI SSC and QSACs.

I know that this is a dilemma for the PCI SSC, but it is something that needs to be addressed and soon.  Organizations that have to go through the ROC process are pressuring QSAs to reduce costs as much as possible not only due to our current economic conditions but also because of the thin margins retailers live on.  In order to keep the PCI compliance process relevant, the PCI SSC needs to get out in front of this issue.  The PCI DSS assessment process is very labor intensive, so the only cost savings to be obtained will be in making the process less labor intensive.

UPDATE: On the morning of September 20, 2011, the PCI SSC released the final version of the Reporting Instructions along with an FAQ.  These documents can be obtained from the PCI SSC Document Library under the Additional Documents – QSA heading.

09 Feb 11

The “Magic” Vulnerability – Revised

What started this post is that I have recently received a number of calls and messages from clients and colleagues.  The conversations have all gone basically the same way.  They were calling to tell me that their ASV had failed their vulnerability scan because the OS detected was unsupported, and they were wondering whether or not I had encountered this before.

My first question usually was along the lines of, “So, what vulnerabilities did they detect?”

“None,” was the confused answer at the other end of the line.

“What?  They must have detected at least one high, severe or critical vulnerability?  That is the only way you can fail,” I would ask, now also confused.

“Nope.  Nothing.  Just the fact that the OS is unsupported,” I was told.

Do not get me wrong.  I am not advocating the use of unsupported operating systems, particularly unsupported versions of Windows.  The risk of course is that one or more vulnerabilities show up that the vendor will not fix because the OS is no longer supported.  So there is good reason to avoid this situation.  However, there are also situations when you just get no other choice either due to your own organization’s issues and politics or software vendor issues.

This situation got me thinking and doing some research, since I did not remember ever seeing or being told that an unsupported OS was an automatic vulnerability scan failure.  I no longer do external vulnerability scanning, so my recollections of training and working on the ASV side of our business are a bit fuzzy and very rusty.  However, I had never failed a client for an unsupported OS.  So when this issue came up, my only action was to determine what had changed.

The first thing I did was review the latest version of the PCI ASV Scanning Procedures, v1.1.  I searched for terms such as ‘old’, ‘unsupported’, ‘out of date’, ‘OS’ and ‘operating system’.  No matches.  So there is nothing in the ASV scanning procedures that fails an organization for running an unsupported OS.  Even the PCI DSS does not call out unsupported software, so, procedurally, I am thinking there is nothing explicit regarding unsupported OSes causing a failed vulnerability scan.

So when I made the original posting, I got a comment from one of my readers pointing me to the ASV Program Guide.  Lo and behold, at the top of page 16 is the following:

“The ASV scan solution must be able to verify that the operating system is patched for these known exploits. The ASV scanning solution must also be able to determine the version of the operating system and whether it is an older version no longer supported by the vendor, in which case it must be marked as an automatic failure by the ASV. “

So there is no “magic” vulnerability I was missing as the PCI SSC does specify that a scan automatically fails if the OS is unsupported.

But that is not the entire story.  The key to this whole process is that the vulnerability scanner used must be able to verify the operating system.  While all vulnerability scanners attempt to identify the operating system, the reliability of this identification process is suspect at best.  I am not aware of any vendor of security testing tools that claims its product will identify an operating system 100% of the time.  That is because there are many, many things that can influence the OS signature that the tools cannot control, and those factors can greatly affect the ability of the tool to identify the OS, particularly when talking about external scanning.  And if an organization follows OS security hardening guidelines, a lot of unsupported OSes will not be properly or reliably identified by vulnerability scanners.  As a result, I find it hard to believe that the PCI SSC intended to have ASVs rely solely on the results of a vulnerability scanner, but that seems to be the case.

So with this clarification, I contacted our ASV personnel and they told me that they too have been failing vulnerability scans when they run across unsupported operating systems.  I asked: if the OS signature is inconclusive, is there no failure?  Yes, if the scan comes back and does not identify the OS, then they have nothing to go on to fail the scan and the scan passes.  Given the difficulties vulnerability scanners can have identifying target operating systems, such as when scanning through network firewalls, Web application firewalls, load balancers and the like, I then asked if they feel these identifications are reliable enough to fail a scan.  I was told this is why they confirm the information with the client before issuing the report, so that the report is accurate.  So if a client is not honest, they could influence the results of their scan?  I was reluctantly told that is probably true.

Then there is the issue that not all operating systems are created equal.  Operating systems such as MVS, VMS and MCP are nowhere near as risky as Windows and Linux, if they are even risky to begin with.  A lot of ASVs would argue that they never come across these operating systems running Web services.  However, all of them are capable of running Web services, and I personally know of a number of organizations that run their Web services from such environments.  Organizations are running these older operating systems mostly because of the financial considerations of migrating to something else.  However, I can guarantee that none of the dozens of vulnerability scanners that I have used in the last 10 years would accurately identify any of these operating systems, let alone tell you the version, unless some service message header information was retrieved by the tools.  And even then, most tools do not parse the header to determine the OS, so it would take human intervention to make that determination.
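To illustrate just how little an external scanner has to go on, here is a minimal sketch of banner grabbing and OS guessing.  This is purely my own illustration, not how any particular ASV tool works; the banner fragments and the mapping table are hypothetical examples.

```python
import socket

# Hypothetical mapping of banner fragments to OS hints, for illustration only.
# Real scanners use far larger signature databases and still get this wrong.
BANNER_HINTS = {
    "Microsoft-IIS/6.0": "Likely Windows Server 2003 (unsupported)",
    "Apache/2.2": "Could be almost any Linux/Unix distribution",
}

def grab_http_banner(host, port=80, timeout=5):
    """Send a minimal HEAD request and return the Server header, if any."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        response = sock.recv(4096).decode(errors="replace")
    for line in response.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None

def guess_os(banner):
    """Best-effort guess; returns None when the banner tells us nothing."""
    if not banner:
        return None
    for fragment, hint in BANNER_HINTS.items():
        if fragment in banner:
            return hint
    return None

if __name__ == "__main__":
    banner = grab_http_banner("www.example.com")
    print("Server banner:", banner)
    print("OS guess:", guess_os(banner) or "inconclusive - human review required")
```

Notice that a hardened or firewalled server can suppress or rewrite that banner entirely, which is exactly why I question how reliable these identifications can be.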

Regardless of the failure, most ASVs do have a review or appeal process that allows organizations to dispute findings and to submit compensating controls to address any failures.  So organizations that cannot get rid of unsupported OSes can use a compensating control.  As with compensating controls for the PCI DSS, the organization is responsible for writing the compensating control, and the ASV needs to assess it to ensure that it addresses the issues identified by the vulnerability scan.

So, if you can fail an organization over an unsupported OS, why do you not automatically fail on unsupported application software?  I went through the Program Guide and there are all sorts of other criteria for applications, but nothing about what to do if they too are unsupported.  Applications such as IBM Websphere and Oracle Commerce can become unsupported just as easily as their OS brethren.  And in my experience, use of unsupported application software is even more prevalent than unsupported OSes, under the idea that if it is not broken and does not have vulnerabilities, why upgrade?  When I asked our ASV group if they fail organizations on unsupported applications, I got silence and then the response that they will fail an application if the vulnerability scanner reports a high, severe or critical vulnerability.  To tell you the truth, while vulnerability scanners regularly return text header information for a lot of applications, I would be hard pressed, without doing a lot of research, to find out if the version being reported was unsupported.  However, scanners could provide this feedback if they were programmed to do so.
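If scanners were programmed to flag unsupported applications, the logic would not need to be complicated.  Here is a rough sketch of the kind of check I have in mind; the product names and end-of-support dates in the table are placeholders I made up for illustration, not an authoritative list.

```python
from datetime import date

# Placeholder end-of-support dates for illustration only.
# Any real table would be built and maintained from vendor support notices.
END_OF_SUPPORT = {
    ("ExampleAppServer", "6.1"): date(2010, 9, 30),
    ("ExampleCommerce", "9.0"): date(2012, 12, 31),
}

def is_unsupported(product, version, as_of=None):
    """Return True if the product/version pair is past its end-of-support date."""
    as_of = as_of or date.today()
    eos = END_OF_SUPPORT.get((product, version))
    if eos is None:
        return None  # unknown - needs human research, which is the real problem today
    return as_of > eos

if __name__ == "__main__":
    # A scanner would pull product and version from a banner; here they are hard coded.
    print(is_unsupported("ExampleAppServer", "6.1"))   # True - past end of support
    print(is_unsupported("ExampleCommerce", "11.0"))   # None - not in the table
```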

Then there are all of the conspiracy theories out there that say the PCI SSC and technology companies are working together to drive software sales by forcing organizations to upgrade, and there is a lot of anecdotal evidence that would seem to support this argument.  In reality, it is not that software companies are colluding with regulators such as the PCI SSC; software companies operate this way in order to focus development and support resources on fewer, more current versions.  As a result, it is just happenstance that regulations cause organizations to have to update their software.

The bottom line in all of this is that you have options to avoid a failing vulnerability scan because of an unsupported OS.  The best method, and the one I most recommend, is do not use unsupported operating systems in the first place.  However, as a former CIO, I do understand the real world and the issues IT departments face.  As a result, I recommend all of the following, which may or may not require you to develop a compensating control.

  • Implement not only a network firewall, but also a Web application firewall (WAF) and make sure that the rules are extremely restrictive for servers running unsupported operating systems.
  • Configure your firewalls to block the broadcasting of any OS signature information.  Masking the OS signature provides the benefit of not advertising to the world that the OS running a given application is unsupported.  This is not a perfect solution as, 9 times out of 10, the application itself will likely advertise the fact that the underlying OS is unsupported.  It is very important to note that this is only a stop-gap measure and you should still be actively in the process of migrating to a supported OS.
  • Implement real-time monitoring of firewalls, servers and applications.  Define very specific alerting criteria to ensure that any suspicious activity is immediately reported and operations personnel immediately follow up on any alerts to determine whether they are a false positive.
  • Implement a host-based intrusion detection/prevention solution on any servers that run the unsupported OS.  If using a HIPS solution, you may also want to consider using its preventative capabilities for certain critical incidents.
  • Implement real-time log analysis for firewalls, servers and applications.  Define very specific alerting criteria to ensure that any suspicious activity is immediately reported and operations personnel immediately follow up on any alerts to determine whether they are a false positive.  (A minimal sketch of this kind of alerting logic follows this list.)
  • Actively use your incident response procedures to address any incidents that are identified with any unsupported OS.
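To give a feel for what “very specific alerting criteria” means in practice, here is a minimal sketch of the kind of log watching I have in mind.  The log path and the patterns are hypothetical; in a real environment this logic belongs in your log management or SIEM tool, not a standalone script.

```python
import re
import time

# Hypothetical alerting criteria - tune these to the unsupported system in question.
ALERT_PATTERNS = [
    re.compile(r"authentication failure", re.IGNORECASE),
    re.compile(r"service .* (stopped|crashed)", re.IGNORECASE),
    re.compile(r"configuration change", re.IGNORECASE),
]

def follow(path):
    """Yield new lines appended to a log file, similar to 'tail -f'."""
    with open(path, "r") as handle:
        handle.seek(0, 2)  # start at the end of the file
        while True:
            line = handle.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

def watch(path):
    """Print an alert for every new log line matching one of the criteria."""
    for line in follow(path):
        for pattern in ALERT_PATTERNS:
            if pattern.search(line):
                # In practice, page operations staff rather than print to a console.
                print("ALERT:", line.strip())
                break

if __name__ == "__main__":
    watch("/var/log/unsupported-server.log")  # hypothetical log path
```
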
03
Feb
11

Intent Of Requirements – 11.2

As I discussed in my earlier post on the intent of requirement 6.1, requirement 11.2 is another requirement where there is a hard and fast metric of four “clean” quarterly vulnerability scans.  Nice idea, but tough to execute.

To remind everyone, the intent of vulnerability scanning is to: (a) ensure that if a patch has been applied, it is in fact in place and the vulnerability has been addressed and, (b) identify any new vulnerabilities.  11.2 is in response to at least two issues.  The first is that organizations that do not vulnerability scan have no way to assess the larger risk their technology infrastructure poses to the potential loss of data.  The second is to ensure that updates and patches are in fact truly correcting vulnerabilities.  I have personally seen situations where patches have removed other patches, thus leaving a system at risk of compromise.  Granted, these situations are rare, but without vulnerability scanning, the removal of those patches would never have been known.

Requirement 11.2 requires that vulnerability scanning be performed at least quarterly.  However, given the 30-day patching rule, the fact that most vendors have a monthly release of patches and the fact that scanning must be performed after all “significant” changes, an organization really needs to conduct monthly vulnerability scanning at a minimum just to stay compliant.  As a result, if you have an organization doing anything less than monthly scanning, you really need to question whether they understand the vulnerability scanning requirement.  However, from a ROC reporting perspective, you really only need to report the results of four quarterly scans even though 12 or more scans are available.

Then there is the problem of the definition of a “clean” scan.  According to the PCI SSC, a “clean” scan is one that contains no severe, high or critical vulnerabilities.  You hear complaints about this time and again from organizations: “How do you ever get a ‘clean’ scan when new vulnerabilities are introduced almost every day?”  This is a very fair question when most security professionals know that the odds of obtaining a “clean” scan are against them because 94% of all vulnerabilities are rated severe, high or critical.  The fact that tests for vulnerabilities can show up in scanning engines long before the vendor has a patch only exacerbates this situation.  Is it any wonder security professionals are frustrated with this requirement?

This is an area that the PCI SSC has really failed to address and bring some reality to the requirement.  For a lot of large organizations, vulnerability scans that have no severe, high or critical vulnerabilities documented cannot physically be achieved without some rare alignment of the planets, stars and moons.  They could be scanning 24×7 and still could not have a complete, truly “clean” scan of their environment.  As a result, a “clean” scan is usually all in the eye of the beholder.

But that is not the entire problem.  A lot of organizations run packages such as Oracle Commerce, IBM Websphere or MICROS Simphony.  Oracle, IBM and MICROS typically have issues with some OS or system software vendor patches, such as those for Windows, Linux or Apache.  They usually issue advisories to their customers indicating that these patches cannot be installed.  In these cases, severe, high or critical vulnerabilities may not get patched for months while these vendors develop an update to their solution as well as package the other vendors’ patches.  As a result, organizations with these solutions can have severe, critical or high vulnerabilities in their scans for quite a while.  In these cases, a QSA needs to determine if the organization has followed the vendors’ recommendations or used any other recommendations for mitigating the risks of having these vulnerabilities.  QSAs also need to see proof that once Oracle, IBM or MICROS issue their fixes for these situations, these organizations implement them in a timely manner.

As a result of these constraints, I have developed my own definition of a “clean” scan.  Again, I am not advocating that vulnerability scanning does not have to result in “clean” results.  I am advocating that QSAs bring reality to the process.  In my world, a “clean” scan is one where any severe, critical or high vulnerabilities identified were not documented in any of the previous scans.  If there are repeated severe, critical or high vulnerabilities, those vulnerabilities should only be the result of vendor patching issues, and the organization must have documentation that proves that management understands they exist, what measures the organization has taken to mitigate their existence and when the vendor anticipates having the vulnerabilities addressed.
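Put in concrete terms, my definition boils down to comparing the current scan’s severe, critical and high findings against those of prior scans.  Below is a rough sketch of that comparison; the finding identifiers are placeholders, and a real implementation would key off CVE numbers or the scanner’s own finding IDs.

```python
SEVERITIES_OF_CONCERN = {"severe", "high", "critical"}

def clean_under_my_definition(current_findings, previous_findings, vendor_acknowledged=None):
    """
    current_findings / previous_findings: dicts of finding id -> severity rating.
    vendor_acknowledged: finding ids whose fixes are held up by a vendor advisory
    and are documented and mitigated by management.
    Returns (is_clean, unexplained_repeats).
    """
    vendor_acknowledged = vendor_acknowledged or set()
    current = {fid for fid, sev in current_findings.items() if sev in SEVERITIES_OF_CONCERN}
    previous = {fid for fid, sev in previous_findings.items() if sev in SEVERITIES_OF_CONCERN}

    repeats = current & previous
    unexplained_repeats = repeats - vendor_acknowledged
    # New findings are tolerated (they still must be remediated);
    # unexplained repeats are what make the scan "not clean" under my definition.
    return len(unexplained_repeats) == 0, unexplained_repeats

if __name__ == "__main__":
    prev = {"CVE-2010-0001": "high", "CVE-2010-0002": "medium"}
    curr = {"CVE-2010-0001": "high", "CVE-2011-0003": "critical"}
    print(clean_under_my_definition(curr, prev))                      # (False, {'CVE-2010-0001'})
    print(clean_under_my_definition(curr, prev, {"CVE-2010-0001"}))   # (True, set())
```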

Here is my rationale for this definition.  There are just too many factors that go against getting what the PCI SSC defines as a “clean” vulnerability scan.  Real world experience says that getting a “clean” scan is next to impossible, particularly in large environments.  I can count on one hand the number of organizations I have worked with that truly have four quarters of “clean” scans, and none of them are very large.  Therefore, as long as I can prove that severe, high or critical vulnerabilities are being addressed reliably and in as timely a manner as possible, then the scans are clean.

All of you anal-retentive types, I am sure, are going to flame me for not following the standard.  However, what is a standard worth if it cannot be applied equitably to all?

21
Jan
11

Why Stuxnet Matters To PCI Compliance

There is an article in the Sunday, January 16, 2011, New York Times that says the American and Israeli governments were behind Stuxnet, confirming a rumor that has been running around ever since Stuxnet was identified.  The consensus of the experts is that Stuxnet is to cyber warfare what the airplane was to conventional warfare, a radical game changer.

So why does Stuxnet matter to merchants, processors, acquiring banks and the card brands?  A whole lot if you think about it.

First, Stuxnet proves beyond a shadow of a doubt that governments are investing in cyber war and that, in cyber war, anything on the Internet is fair game.  While Stuxnet was built to target Iran’s centrifuges that are used in refining uranium, there were a number of incidents of “collateral damage.”  This “collateral damage” was the result of Stuxnet attacking anything that used the same Siemens SCADA controllers.  Granted, Stuxnet was not as nasty to devices that were not centrifuges, but it still caused problems.  Imagine if an entity wrote an attack for a common device or protocol in the hope of actually targeting one particular entity.  Do you think your organization could become “collateral damage” in such an attack?  I would say it is highly likely.

Second, imagine the damage that could be done if a terrorist group or a government decided to go after another country’s financial systems using a Stuxnet type of attack.  Does such an attack sound unrealistic?  It should not, given that the quickest way to bring any country to its knees is through its financial systems.  And what makes things doubly bad is that, thanks to outsourcing, most banks use a very limited number of application solutions, thus making such an attack all that much easier.  Given the reliance of countries on their economic capabilities, such an attack would likely only be carried out by a rogue nation such as North Korea that has nothing to lose, or even by another country if it is provoked long and hard enough.

But what if the attack was directed against credit card terminals?  While a lot of people would say that is farfetched, it is not as wild as it might seem.  All you need is someone on the inside at Ingenico, Verifone and the like to doctor the card terminals’ software to do whatever you want it to do.  Even large merchants do not necessarily monitor their card terminals, so such an attack could go on for quite a while before it was noticed, if it ever was noticed.  Criminal gangs have been producing limited numbers of doctored terminals for the last four to five years.  Imagine this done on a large scale and you start to understand how nasty a threat this could be.  If doctored terminals were introduced at the manufacturer into the distribution stream, there would be no way of knowing that you had been compromised unless you were monitoring your network properly, which most organizations do not do.

Finally, there is the doctoring of integrated point of sale (POS) solutions or similar applications.  Again, not as farfetched as you might think.  There have been a number of instances over the years where software was written to provide backdoors or other openings in systems that allowed information to be leaked.  This is why certain governments have gone into the software business.  This is also why there are now valid concerns about how you confirm that your software is only doing what it is supposed to be doing.

The bottom line in all of this is that these concerns are no longer the ramblings of the paranoid among us.  These once imaginary-seeming scenarios have actually come to pass, and we need to address what to do to mitigate them.  So from a PCI perspective, what should an organization be doing?  While all of the PCI DSS provides methods to protect an organization, the following are what I consider the most important regarding inside attacks.

  • Monitor your internal network – This is the single most important way to protect your network from doctored devices and applications.  A doctored device or application must transfer the information it has collected either in real time or in a batch process.  Transmission may be outbound or via an inbound process.  Outbound monitoring should be the easiest because most organizations know what external IP addresses to allow.  Inbound communications are always disputed by most organizations.  But if you remind people of how GoToMyPC and others of its ilk operate, they begin to understand how their networks could be easily compromised from the outside.  (A minimal sketch of outbound monitoring follows this list.)
  • Analyze your logs – This is probably the next area where a lot of organizations are not doing a good enough job.  Most organizations do a good job collecting log data, but then do a mediocre or poor job analyzing that data to find exceptions.  The reason for this poor performance is a lack of well-defined criteria for attacks.  While the initial implementation may have defined good criteria, as time goes on the log analyses are not enhanced or updated to reflect changes in attacks and new attacks.
  • Tighten internal controls – Once inside most organizations’ security perimeters, security gets pretty loose and free, if it even exists at all beyond a logon.  Unfortunately, attackers understand this fact, which is why they focus on getting inside.  Because once an attacker is inside, it is pretty much a cakewalk to get whatever they want.  This is why locking down ports, reviewing firewall and ACL rules, disabling or removing unused services and disabling or removing unused user accounts become so important.  The fewer attack points you provide on your internal network, the more resilient it will be should an attacker get inside.
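As a concrete example of the outbound monitoring described in the first bullet above, here is a minimal sketch that checks firewall log entries against a list of approved external destinations.  The CSV log format, the export file name and the allow list are all hypothetical; in practice this logic lives in your firewall management or SIEM tooling rather than a script.

```python
import csv

# Hypothetical allow list of approved external destinations
# (e.g., payment processor, patching service, DNS forwarders).
APPROVED_DESTINATIONS = {"203.0.113.10", "203.0.113.11", "198.51.100.5"}

def unexpected_outbound(log_path):
    """
    Read a hypothetical CSV firewall export with the columns
    timestamp, source_ip, destination_ip, destination_port, action
    and return allowed outbound connections to destinations not on the allow list.
    """
    suspicious = []
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["action"] == "allow" and row["destination_ip"] not in APPROVED_DESTINATIONS:
                suspicious.append(row)
    return suspicious

if __name__ == "__main__":
    for entry in unexpected_outbound("firewall-export.csv"):  # hypothetical export file
        print("Investigate:", entry["source_ip"], "->",
              entry["destination_ip"], "port", entry["destination_port"])
```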

Remember, while the City of Troy was protected by insurmountable walls, Troy fell because it was attacked from the inside, an attack vector that Troy felt was not realistic or possible.  Troy’s shortsightedness was the result of its arrogance and a failure to understand that an enemy determined to defeat you will find a way to overcome the insurmountable.  Learn from Troy’s mistake.




