Archive for the 'Requirement 11 – Regularly test security systems and processes' Category

26 Apr 14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’, published in January 2013.  Specifically, SAQ A-EP addresses ecommerce sites that redirect customers to a processor’s site, which performs the actual payment processing.

Based on the comments I have seen online and heard in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, PCI-uninformed sales people pushing redirected ecommerce solutions with the claim that a redirect put the merchant entirely out of scope.  That was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server so that it points to the attacker’s own payment page, collects the customer’s cardholder data (CHD) on that page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP, which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements in SAQ A-EP are there.

The requirement 1 items are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101 and I would doubt that any merchant does not have a firewall protecting all of their Internet facing servers.  Routers have always been optional, and if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.

Requirement 4 is there to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce-only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.
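Since part of satisfying 4.1 is documenting the processor’s certificate, here is a minimal sketch of pulling those details with Python’s standard library.  The hostname is a placeholder for your processor’s payment endpoint, not a real one.

```python
# Minimal sketch: record the TLS certificate details of a processor's payment
# endpoint as documentation evidence for requirement 4.1.
import socket
import ssl

def get_certificate_details(hostname: str, port: int = 443) -> dict:
    """Connect to the host, validate the certificate, and return its details."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    # "payments.example-processor.com" is a hypothetical hostname.
    cert = get_certificate_details("payments.example-processor.com")
    print("Issuer: ", cert.get("issuer"))
    print("Subject:", cert.get("subject"))
    print("Expires:", cert.get("notAfter"))
```

Saving that output alongside the date it was gathered is an easy way to show the QSA that the connection is negotiated against the expected, unexpired certificate.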

Requirement 10 is important for forensic research.  Should the redirect be manipulated, logging is what allows you to determine when that event occurred and, therefore, the scope of any compromise.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue about them.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
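To make the point concrete, here is a minimal sketch, under an assumed page location and an assumed processor URL, of checking that the form action on a checkout page still points at the documented processor.  A real critical file monitoring tool does far more, but the idea is the same: any change to the redirect target should raise an alert immediately.

```python
# Minimal sketch: verify the checkout page's form action still points at the
# documented processor URL. Paths, URL and regex are illustrative assumptions.
import re
import sys

EXPECTED_URL = "https://pay.example-processor.com/checkout"  # hypothetical processor URL
CHECKOUT_PAGE = "/var/www/html/checkout.html"                # hypothetical page location

def form_actions(path):
    """Extract every form action URL from the checkout page."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        html = f.read()
    return set(re.findall(r'action\s*=\s*["\']([^"\']+)', html, re.IGNORECASE))

if __name__ == "__main__":
    unexpected = {u for u in form_actions(CHECKOUT_PAGE) if not u.startswith(EXPECTED_URL)}
    if unexpected:
        print("ALERT: unexpected redirect target(s):", ", ".join(sorted(unexpected)))
        sys.exit(1)
    print("Redirect target matches the documented processor URL.")
```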

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

07 Dec 13

POS Botnets

Just in time for the holidays.

An article came out this past week regarding botnets that are specifically targeting point of sale (POS) systems.  The reason I bring it up is this quote.

“StarDust developers have intimate knowledge of the inner workings of PoS applications such as Clearview PoS.  As a result, the malware can ferret out where in computer memory sensitive data, in some cases in cleartext form, is stored.  StarDust can also sniff network traffic and is able to extract Track1 and Track2 card data.  To remain covert, the software transfers card details only when the terminal is inactive and the screensaver is on. It also uses the RC4 cipher to encrypt data before sending it to the control server.”

Obviously, if your organization uses Clearview POS software you should probably be examining your systems and networks to ensure that they have not been compromised by StarDust.

However, the larger issue is that most merchants do not see themselves as targets of such attacks, let alone have constructed a secure environment for their POS systems.  Some of this is not entirely the merchant’s fault.  A lot of merchants outsource the maintenance and management of their POS systems to a value added reseller (VAR) and that VAR is the one responsible for the POS network configuration.  Regardless of responsibility, a merchant needs to be aware of these threats and take appropriate action, either internally or with their VAR, to address these threats and minimize risk.

Regardless of whether it is StarDust or a similar threat, here are some steps you can take to minimize and detect such threats.

  • Segment your POS network from the rest of your internal network and limit POS segment traffic to communication with your processor and your internal system support and operations systems.  This requires network access rules so that traffic can only reach your processor or those internal support systems, which limits the number of systems that could compromise your POS environment.
  • Monitor your POS network segment for any traffic that is directed to an external network other than your processor or your system support and operations systems.  Your firewall rules should only allow secure connections between your POS network and your processor or your system support and operations systems.  Network traffic going anywhere else should be stopped and reported for further investigation (a minimal allowlist check is sketched after this list).
  • Monitor your POS systems for any file or configuration changes.  Most anti-virus solutions can provide this capability, but there are also solutions that are specifically engineered for this task.  Regardless of which you choose, configure the tool to alert you as soon as it identifies a potential change to files or configuration of the POS system.  If approved changes were not made to the POS systems and you received an alert, you likely have been compromised.
  • Develop an incident response plan should you receive an alert indicating that your POS systems have been compromised.  An incident response plan provides the organization with a “battle plan” should a compromise occur.  This type of plan is key to minimize the potential reputational impact to the organization should such an attack be confirmed.  A good incident response plan can keep you from making mistakes as you navigate the mine field that is the media circus that comes with a breach.
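As a simple illustration of the second bullet, the following sketch flags destinations that fall outside an allowlist.  The network ranges are placeholders, and the destination list would in practice come from firewall or NetFlow log exports rather than a hardcoded list.

```python
# Minimal sketch: flag outbound destinations from the POS segment that are not
# the processor or an internal support host. All addresses are placeholders.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),  # hypothetical processor range
    ipaddress.ip_network("10.10.50.0/24"),   # hypothetical internal support segment
]

def unexpected_destinations(destinations):
    """Return destinations that fall outside every allowed network."""
    flagged = []
    for dest in destinations:
        addr = ipaddress.ip_address(dest)
        if not any(addr in net for net in ALLOWED_NETWORKS):
            flagged.append(dest)
    return flagged

if __name__ == "__main__":
    observed = ["203.0.113.25", "10.10.50.7", "198.51.100.200"]  # sample log entries
    for dest in unexpected_destinations(observed):
        print(f"ALERT: POS segment traffic to unexpected destination {dest}")
```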

Three straightforward and simple steps that can minimize the threat of StarDust, plus a documented incident response process should you unfortunately be breached.

Security does not have to be rocket science.

09 Nov 13

Removing The Drama Of A PCI DSS Assessment

I had to prepare a presentation for a client a while back giving them some tips on how to prepare for and get through a PCI assessment as easily as possible.  I thought it might be good to share those thoughts.

Trust But Verify

This famous quote from US President Ronald Reagan is the mantra of a PCI assessment.

The PCI DSS is based on the “trust” that organizations are complying with the PCI DSS.  However, self-assessment processes and QSAs are used to “verify” that the organization is, in fact, complying with the PCI DSS.  As a result, the organization being assessed not only has to produce documentation to that effect, but the QSA must also observe that the PCI DSS requirements are being followed.

The net is that, just because you say something is fact, your QSA must substantiate your statements so that they, too, will treat them as fact.  If you remember nothing else but this simple truth, you will understand why a QSA must do what they do.

Scope

If PCI assessments go wrong for any reason, this is probably the primary reason.  It fascinates me that people often profess ignorance of the PCI DSS, yet somehow become experts on the subject when it comes to scoping.

Remember point number one, trust but verify.  Under that premise, the PCI SSC makes it a QSA’s primary responsibility to confirm the scope of the PCI assessment as they verify the facts.  As a result, in order to confirm that scope, the QSA must look at everything and then, through investigation and evaluation, determine that the areas you deem out of scope are, in fact, truly out of scope.

Let your QSA ask their questions and conduct their observations without arguing with them about scope.  They are only doing this because they are required to confirm the facts, and your fighting with them about scope is only going to make them wonder what you are trying to hide.  The bottom line is that arguing with your QSA about scope only makes your assessment all the more painful and time consuming.

If you truly want to avoid arguing over scoping, get a copy of the Open Source PCI Scoping Toolkit.  Go through your environment and determine the categories of all of your systems and networks.  This is a good annual exercise because you need to prove your scope every year.

Applicability

According to the PCI SSC, there are five PCI DSS requirements that can never, ever be marked as ‘Not Applicable’: 1.2.3, 3.2.1, 3.2.2, 3.2.3 and 11.1.  I have discussed these all before but they deserve another quick discussion here.

Clients will argue ad nauseam that wireless is not implemented or is out of scope and therefore refuse to discuss wireless.  For requirement 1.2.3, a QSA is required to document the procedures they followed to rule wireless in or out of scope.  That of course means the QSA must investigate any wireless networks and evaluate if the controls are rigorous enough to keep wireless out of scope.  For requirement 11.1, the QSA must investigate and evaluate if the organization’s controls surrounding the detection of rogue wireless are appropriate regardless of whether or not the organization has implemented wireless networking.

3.2.1, 3.2.2 and 3.2.3 are all related to the securing of cardholder data when it is stored.  Even if an organization is not storing cardholder data on their systems, a QSA must document the procedures they used to confirm that cardholder data is not stored on the organization’s systems.  This usually involves a review of flat files and database schemas and the running of utilities and queries against those systems and databases looking for cardholder data.
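As an illustration of that kind of check, here is a minimal sketch of scanning flat files for candidate PANs using a regular expression plus a Luhn checksum.  The pattern and approach are simplified; commercial data discovery tools also handle databases, archives and different encodings.

```python
# Minimal sketch: scan flat files for strings that look like primary account
# numbers (PANs) and pass a Luhn check. Paths and regex are illustrative.
import re
import sys

PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_file(path: str):
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            for match in PAN_PATTERN.finditer(line):
                digits = re.sub(r"[ -]", "", match.group())
                if 13 <= len(digits) <= 19 and luhn_valid(digits):
                    print(f"{path}:{lineno}: possible PAN ending {digits[-4:]}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan_file(path)
```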

The bottom line is do not argue about something being ‘Not Applicable’ and then hinder the QSA’s investigation to prove it is ‘Not Applicable’.  Do not get me wrong, you need to keep your QSA on point, but remember that QSAs are required to evaluate the situation and then document the process used to determine that a particular requirement is ‘Not Applicable’.  All you do by complicating that investigation is add more time to your assessment and, potentially, cause a requirement to be marked as ‘Not In Place’ instead of ‘Not Applicable’.

Yes, I Did Kind Of Ask That Earlier

Like security, the PCI DSS also works from a ‘defense in depth’ approach.  A lot of the questions QSAs ask are very similar, just asked from a different perspective.  The people that develop assessment and audit programs will tell you that this is the most effective way to uncover the level of compliance with a given program.  The reason is that organizations that have not integrated a compliance program into their day-to-day operations will typically provide inconsistent or confusing answers to similar questions.  Not that this is a perfect technique mind you, but it does work the majority of the time.

Please be patient with your QSA.  They did not write these procedures, but they are required to execute them.

Answer The Question

Most people, yours truly included, suck at being questioned, particularly in a legal proceeding.  Lawyers always instruct anyone that will be called to testify in a legal proceeding to take their time, focus on the question being asked and only answer the question being asked.  Never, ever, ever provide any information outside of the question, i.e., do not elaborate.  The trouble is that lawyers know that silence is a vacuum and it is human nature to fill that vacuum with extraneous information.  Hence the typically long pauses between questions.

QSAs and auditors tend to operate under the same principle as a lawyer.  People get into trouble when they start talking about things that are outside of the question, out of scope or not relevant to the assessment.  Such responses will confuse the QSA for a moment as they try to reconcile your remarks.  But then the QSA may question whether they truly understand the environment and, possibly, the scope of the assessment.  It is then that they may start quizzing you and your staff as they go back and reconfirm their understanding of the environment.  All of this takes time, time taken away from the assessment process as you cover old ground while the QSA re-verifies the facts.

The lesson to be learned here is that there is nothing wrong with saying, “I do not know.”  Or “I will have to look into that question and get back to you.”  The worst thing you can do is try and “tap dance” around the question or never really answer the question.  If you do not have the answer, then find out who does have the answer and point the QSA to that person.

Prepare

And finally, the best thing you can do to avoid all of these issues is to walk through the PCI assessment process and requirements with those of your staff that will be interviewed/observed and make sure they understand the questions to be asked and how they should be answered.

If you really want to know what the QSA will ask, why they will ask and the evidence they will require, get a copy of the PCI DSS ROC Reporting Instructions from the PCI SSC Document Library.  The Reporting Instructions document is the “Bible” for QSAs as it documents how they will be assessed in a PCI SSC Quality Assurance review.  Reviewing and understanding this document will go a long way to minimizing the “What do you need that for?” questions that all QSAs encounter.

For each requirement’s tests, the Reporting Instructions will tell you:

  • What observations, if any, need to be performed and documented.
  • What documents, if any, need to be collected and reviewed and what information needs to be identified in those documents.
  • What people, if any, need to be interviewed and about what topic(s).
  • What processes, actions taken or states of equipment, if any, need to be observed and documented.
  • Whether or not sampling can be used.

Using the Reporting Instructions, you can also gather a lot of the observations ahead of time.  Your QSA will still have to conduct some observations, such as confirming that default passwords are not used, that timeouts occur, that change management operates and the like.  But gathering screen shots and documenting the conditions you used for testing will go a long way toward making your assessment go much more smoothly and quickly.

Hopefully this discussion will help you get through your next PCI assessment without all of the associated drama that can come from such an exercise.

24 Sep 13

Coming Attractions

On September 12, 2013 the PCI SSC released the drafts of version 3 of the PCI DSS and PA-DSS.  In reviewing the PCI DSS, there are six new requirements that will be considered ‘best practices’ until July 1, 2015 when they will become requirements.

  • 6.5.6 – Insecure handling of PAN and SAD in memory.
  • 6.5.11 – Broken Authentication and Session Management
  • 8.5.1 – Service providers with access to customer environments must use a unique authentication credential (such as a password/phrase) for each customer environment.
  • 9.9 – Protect point-of-sale (POS) devices that capture payment card data via direct physical interaction with the card from tampering and substitution.
  • 11.3 – Develop and implement a methodology for penetration testing that: is based on industry-accepted penetration testing approaches (for example, NIST SP800-115), includes coverage for the entire CDE perimeter and critical systems, includes testing from both inside the network, and from outside of the network attempting to get in, includes testing to validate any segmentation and scope-reduction controls, defines application-layer penetration tests to include, at a minimum, the vulnerabilities listed in Requirement 6.5, defines network-layer penetration tests to include components that support network functions as well as operating systems, includes review and consideration of threats and vulnerabilities experienced in the last 12 months, and specifies retention of penetration testing results and remediation activities results.
  • 12.9 – Additional requirement for service providers: Service providers acknowledge in writing to customers that they will maintain all applicable PCI DSS requirements to the extent the service provider handles, has access to, or otherwise stores, processes, or transmits the customer’s cardholder data or sensitive authentication data, or manages the customer’s cardholder data environment on behalf of a customer.

I will discuss requirements 6.5.6 and 11.3 in separate posts.  I am not going to discuss 6.5.6 until I have a better understanding of how the PCI SSC expects QSAs to test that memory is being managed properly.  I am avoiding 11.3 because it contains enough for a post of its own.  But the others can be addressed in this post.

First, I have to say that I was amazed that these actually had to be codified as they are addressed through a number of other requirements.  But having run into numerous instances where I have encountered these situations, I understand why the PCI SSC felt the need to explicitly codify them.

For requirement 6.5.11, the guidance provided states:

“Secure authentication and session management prevents unauthorized individuals from compromising legitimate account credentials, keys, or session tokens that would otherwise enable the intruder to assume the identity of an authorized user. “

This requirement is targeting the botnets and Trojan attacks such as with Citadel and Zeus.  The problem here is that these are attacks on the end user, not the merchant.  As a result, what this new requirement is going to likely be looking for is for the merchant to be using methods to secure authentication and communications such that man-in-the-middle, man-in-the-browser and similar attacks are minimized or even eliminated.  It will be interesting to see how the PCI SSC expects this to be accomplished.

It has been a long time coming for 8.5.1.  Most QSAs have encountered this situation and we never liked it.  The situation that I speak of is managed service providers and software vendors using the same user identifier and password for all of the customers they support.  While one can appreciate why this occurs, it does create a problem should those common credentials become known outside of the organization, which has been the case in a number of breaches.  As a result, the PCI DSS has been changed to include this new requirement that managed service providers and software vendors use unique authentication credentials with each customer.

Requirement 9.9 explicitly addresses a best practice already used by a lot of merchants.  A number of merchants have experienced tampering with card terminals over the years.  This typically took the form of soldering a USB thumb drive or SD card into a terminal to collect track data and then swapping a good terminal at the merchant for the doctored one.  This threat is typically mitigated by video monitoring of terminals as well as the use of serialized security tape or tamper evident seals over a terminal’s case seams that are checked at least daily to ensure that terminals have not been changed out or tampered with.

And finally, requirement 12.9 calls out that service providers explicitly acknowledge in a document that they will maintain compliance with the PCI DSS for all relevant services.  Apparently the existing requirements in 12.8 were not providing enough assurance that service providers were complying with the PCI DSS.  So now we are going to require that all service providers acknowledge, in writing, that they will maintain compliance with all relevant PCI DSS requirements for all services provided to their customers.

04 Jul 13

Is It The Standard Or Process?

I question whether or not there really is a need to change the PCI DSS this Fall.  If you look at the requirements, and if they were truly followed as designed, the vast majority of breaches would either not occur or would be fairly limited and contained.  But when you look at the breach reports from Verizon, Trustwave and the like, they tell a different story.  Most breaches were the result of one or more PCI requirements that just were not working properly.

I think where the changes need to be made is in the assessment process as I believe this is where the most impact could be realized.  The current assessment process falls way short in confirming that an organization is complying with the PCI DSS all of the time, not just when the QSA is around.  This results in the very valid complaint by PCI critics that the process is a “check the box” approach.

If you look at the current PCI assessment process, it tests a very limited number of requirements over the period of time covered by the assessment.  QSAs only ensure that an organization has had: four quarters of passing external and internal vulnerability scans (11.2), quarterly facility wireless scanning (11.1), and that any changes to the cardholder data environment have been appropriately documented and approved (6.4).  All of the other requirements are assessed only at the point in time the QSA examines them.  The rest of the time the organization could be non-compliant and the QSA would be none the wiser.

If we are truly going to address the findings in these breach reports, we need to be ensuring that organizations comply with the PCI DSS all of the time.  That means that QSAs need to be testing more requirements over the assessment time period to ensure that an organization is actually complying with requirements.

Based on that premise, here are my recommendations.

  • Change controls.  The PCI DSS requires QSAs to request all changes to the CDE so that they can determine if more than quarterly vulnerability scanning and annual penetration testing was required during the assessment period.  The breach reports indicate that attackers are consistently finding simple misconfigurations as a way into networks which means that change control is likely not being followed consistently.  QSAs should be testing the entire change control process to ensure that changes to infrastructure are being appropriately tracked, reviewed and approved throughout the assessment period.  The reason is that we regularly encounter information security personnel that are only involved in evaluating and reviewing changes that affect PCI compliance and nothing else.  We also encounter instances where only changes that affect PCI compliance are tracked.  You have to wonder how changes are determined to affect PCI compliance.  Obvious changes to the CDE are easy to identify.  But other changes could implicitly affect the CDE but not necessarily be identified as such because the people reviewing them do not see the connection.  As a result, organizations have no idea if changes outside of the CDE could impact their PCI compliance because there is either no record of those changes or information security has not been consulted.  These organizations are typically relying only on luck to protect them.
  • Mitigation of vulnerabilities.  Most QSAs assess patching by reviewing the quarterly vulnerability scans and making sure that vulnerabilities do not appear on the next quarterly scan.  If any vulnerabilities appear on subsequent scanning reports, then QSAs are supposed to assess what mitigating controls were put in place while the vulnerability was unpatched.  QSAs typically do a pretty good job proving that organizations’ patching processes work reliably.  But when it comes to mitigation, QSAs do not necessarily do a great job determining that open vulnerabilities are truly mitigated.  This is not always the QSA’s fault as the organization may not be keeping the necessary documentation to support that open vulnerability risks are being mitigated.  The bottom line here is that the assessment process needs to assess all vulnerabilities that were left unpatched during the assessment period to ensure that they were mitigated while they remained unpatched.
  • Access controls.  As with change controls, the current PCI assessment process only requires the QSA to test those accounts that have access to the cardholder data environment (CDE).  To add insult to injury, most organizations have multiple access control systems in use for securing servers, infrastructure, third party monitoring, etc.  All of these systems are typically in-scope for assessment, but a lot of QSAs focus only on those in-house.  Access control is an all or nothing proposition; you are either doing it consistently across the board or you are not.  Based on the breach reports, attackers are using whatever accounts they can to get a foothold and then work their way to accounts that provide more privileges.  If that is the case, then how will testing only accounts that have access to cardholder data (CHD) help this situation?  It does not.  Therefore, the assessment of access controls needs to look at the whole picture, not just a privileged few.  The bulk of testing of the process may be relegated to those with access to CHD, but the entire process of granting and revoking access needs to be assessed to ensure that controls are being followed for everyone.  Testing of privileged accounts needs to be enhanced to address what those accounts have access to.  Do the DBAs have administrative access to the OS?  Do network administrators have administrative access to servers?  Do system administrators have administrative access to network devices?  These are just examples of questions that a lot of QSAs do not answer during their assessments.  A lot of QSAs are only interested in determining that privileged access is controlled, not who has access, why they have access and whether that access is justified.
  • Monitoring and Alerts.  QSAs are to ensure that logging and alerting are enabled as part of their testing.  QSAs are to ensure that an organization has log data online for at least three months and offline for a year.  QSAs then move to observing the various monitoring consoles to ensure that alerts are generated.  However, no testing is done across the reporting period to ensure that alerting functioned as configured, for example by sampling the alerts generated and confirming that those alerts were properly investigated to determine whether they were serious or required further investigation.  Based on the breach reports, anomalies are not being researched and this is resulting in breaches taking months to identify or, worse, going unnoticed.
  • Sampling.  Finally, testing needs to be more than just testing three or four items in each of these categories.  There needs to be sampling over the entire assessment period, not just the day the QSA is investigating the requirement or one item per quarter.  A random statistical sample would be best, but given the size of some populations that might not be feasible in the time frame required to produce a report.  However, it is not inconceivable that this could result in at least 50 to 100 items being tested in some of these categories (a minimal sampling sketch follows this list).
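To make the sampling idea concrete, here is a minimal sketch of drawing a reproducible random sample from an exported population of change tickets.  The file name, column names and sample size are assumptions for illustration.

```python
# Minimal sketch: reproducible random sample of change tickets covering the
# full assessment period. File name and columns are hypothetical.
import csv
import random

SAMPLE_SIZE = 75   # e.g., somewhere in the 50-100 range discussed above
SEED = 2014        # fixed seed so the selection can be reproduced and documented

def sample_changes(path: str):
    with open(path, newline="") as f:
        population = list(csv.DictReader(f))
    random.seed(SEED)
    return random.sample(population, min(SAMPLE_SIZE, len(population)))

if __name__ == "__main__":
    for ticket in sample_changes("change_tickets_2013.csv"):
        print(ticket["ticket_id"], ticket["date"], ticket["summary"])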

These are the key tests that would go a long way in improving the assessment process and address the findings from the breach reports.  The trouble is that this sort of enhanced testing is obviously going to drive up the cost of a PCI assessment whether the QSA does the testing or an organization’s internal audit function does the testing.  If the PCI SSC and the card brands are truly serious about making the standards meaningful, then this is where I would focus, not on making changes to the standard.

Remember my mantra, “Security is not perfect.”  So while these changes are not going to absolutely, positively prevent all breaches, they will go a long way in ensuring that organizations are as secure as they can be all of the time rather than just when the QSA is around.

28 May 13

BlackPOS

I got a Tweet from a friend today regarding this new piece of malware found out in the wild and dubbed ‘BlackPOS’.  BlackPOS is very similar in nature to vSkimmer.  Now before everyone goes off and panics, if you are religiously following the PCI DSS, BlackPOS should not be an issue and here is why.

  • Requirement 11.5 – Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.  BlackPOS does a lot of manipulation around known file names, but the hash values of those files should change from the known good values, so any file monitoring system should alert on that fact.  It also uses file names that would never exist on a production system, so those should also generate an alert.  In addition, BlackPOS creates a TXT file that should also generate an alert when created.  However, if you are not alerting in real-time, you should be, so that you pick up these issues as soon as possible.  This is where the bad guys are headed with their attacks, so you may as well alert as soon as an incident occurs so that you can address it before it gets out of control.  (A minimal hash-comparison sketch follows this list.)
  • Requirement 1.1.5 – Documentation and business justification for use of all services, protocols, and ports allowed, including documentation of security features implemented for those protocols considered to be insecure.  BlackPOS uses FTP to move the TXT file from the POS system to their server.  If you are allowing FTP to flow freely from your POS or cardholder data environment (CDE) to anywhere on the Internet, you were not PCI compliant in my opinion, even if you had some bizarre business justification.
  • Requirement 5.1 – Deploy anti-virus software on all systems commonly affected by malicious software (particularly personal computers and servers).  While BlackPOS was only identified today, the anti-virus vendors will most likely have signatures out by the time you read this, so they will be looking for BlackPOS by the time you get your updated signatures.
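As promised above, here is a minimal hash-comparison sketch in the spirit of requirement 11.5.  The monitored paths and baseline location are assumptions; a real file integrity monitoring product also watches for new and deleted files, protects its own baseline and alerts in real time.

```python
# Minimal sketch of a critical-file comparison: hash the monitored files and
# compare against a stored known-good baseline. Paths are hypothetical.
import hashlib
import json
import pathlib

MONITORED = ["/opt/pos/bin/pos.exe", "/opt/pos/config/settings.ini"]  # hypothetical paths
BASELINE = pathlib.Path("/var/lib/fim/baseline.json")                 # hypothetical baseline

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check():
    baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    current = {p: sha256(p) for p in MONITORED}
    for path, digest in current.items():
        if path in baseline and baseline[path] != digest:
            print(f"ALERT: {path} has changed since the last baseline")
    BASELINE.write_text(json.dumps(current, indent=2))

if __name__ == "__main__":
    check()
```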

Just these three requirements can stop this sort of an attack.  Yet, time and again, we see these attacks succeed because people are not properly implementing file integrity monitoring and not restricting network traffic flowing out of their internal networks.

PCI compliance does work when you use it the way it was intended.

24 Feb 13

2013 Threats To Databases

Akamai just released their third quarter 2012 Internet statistics and are pointing to China as the generator of at least a third of all attacks.  Not only that, the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I get on a Webinar with Application Security Inc. and they discuss what they see as the top risks for databases in 2013 and, lo and behold, the risks align.  It is no wonder that Chinese attackers are focused on SQL Server: not only are targets plentiful, but most of the time the software was improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement” only to never get back to it or version 2 is a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection it was because the protocol being used required accepting a certain amount of risk in order for the application to work.  A few years ago that may have been true, but it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our “I probably won’t be here in six months anyway” view of employment, it will be someone else’s problem, not theirs, so why bother?  The next guy can fix it.
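The fix is rarely harder than binding parameters instead of concatenating strings.  Here is a minimal sketch using Python’s bundled sqlite3 driver purely for illustration; the same pattern applies to any database driver.

```python
# Minimal sketch: an injectable concatenated query versus a parameterized one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'jane@example.com')")

user_input = "jane@example.com' OR '1'='1"   # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the WHERE clause.
rows = conn.execute(
    "SELECT id, email FROM customers WHERE email = '" + user_input + "'"
).fetchall()
print("Concatenated query returned:", rows)   # returns every row

# Safe: the driver binds the value, so it is treated as data, not SQL.
rows = conn.execute(
    "SELECT id, email FROM customers WHERE email = ?", (user_input,)
).fetchall()
print("Parameterized query returned:", rows)  # returns nothing
```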

Requirement 6 and, in particular, requirement 6.6 require that applications be tested and remediated before they are placed into production.  The operative word here is ‘before’, which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker.  Better yet, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 has all of the build and security standards required.  The idea is that these standards will tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start, as opposed to finding out after the fact that one of those ‘just in case’ services or features is insecure.

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication which allows for both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is people seem to forget the SQL Server managed accounts.  As a result, they typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road.  These accounts also can get forgotten for monitoring, so if their credentials are compromised, it might not be recognized for a while, if at all.

Even if SQL Server managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not managed by SQL Server.  As a result, people outside of the DBA arena assume that SQL Server managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
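If you are stuck with mixed mode, at least inventory the SQL-managed logins and check whether policy and expiration enforcement are turned on for them.  Here is a minimal sketch that assumes the pyodbc package and a suitable connection string; sys.sql_logins is the SQL Server catalog view that lists SQL-authenticated logins.

```python
# Minimal sketch: list SQL-authenticated logins and whether password policy and
# expiration are enforced. Assumes pyodbc and a hypothetical connection string.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver01;DATABASE=master;Trusted_Connection=yes;"
)  # hypothetical connection details

QUERY = """
SELECT name, is_policy_checked, is_expiration_checked, is_disabled
FROM sys.sql_logins
ORDER BY name;
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, policy, expiration, disabled in conn.execute(QUERY):
        if not disabled and (not policy or not expiration):
            print(f"Review login '{name}': policy={bool(policy)}, expiration={bool(expiration)}")
```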

An access control issue that occurs with all databases is the use of “service accounts” for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control.  Then to access the database, the application uses a single service account.  Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  In the case of when it is stored in the code, the ability to change the service account’s credentials requires an application change.  But the larger question is who has access to the credentials and how are you ensuring that everyone understands their responsibilities to ensure the credentials’ security?
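One small improvement, sketched below with assumed variable names, is to pull the service account credential from the environment (or a secrets store) at runtime rather than hardcoding it, so rotating the credential does not require an application change.

```python
# Minimal sketch: read service account credentials at runtime instead of
# hardcoding them. The environment variable names are hypothetical.
import os
import sys

def get_service_credentials():
    """Read the service account credentials from the environment at runtime."""
    user = os.environ.get("POS_DB_USER")
    password = os.environ.get("POS_DB_PASSWORD")
    if not user or not password:
        sys.exit("Database service credentials are not configured.")
    return user, password

if __name__ == "__main__":
    user, _ = get_service_credentials()
    print(f"Connecting to the database as service account '{user}'")
```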

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, then it will be virtually impossible to protect the information in the databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users’ credentials as well as the credentials for administrators of the e-Commerce environment.  As a result of the other conditions, compromise the database and you have access to the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are also likely stored in the same database or in a related database that shares administrator credentials with the compromised database.
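If credentials must live in a database table, they should at least be stored as salted, slow, one-way hashes rather than reversibly encrypted values, so a stolen credential table does not yield usable passwords.  Here is a minimal sketch using Python’s standard library; the iteration count is an illustrative value, not a recommendation.

```python
# Minimal sketch: store a salted PBKDF2 hash instead of an encrypted password.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

if __name__ == "__main__":
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess", salt, stored))                          # False
```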

Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because that would raise their cost to a point where they are not priced for small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will still be at risk.

One would think using Active Directory or another directory service would be an easy solution.  Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly.  On the face of it, it would appear that way, and it does work for organizations that host their own Web presences.  But for service providers it is not that easy, as each customer’s Web presence would have to have its own branch in the directory’s forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which would drive up the cost of a site.  The manual process also delays establishing a site until the directory maintenance is completed, which is totally unacceptable in our “have to have/do it now” world.

For the time being we are stuck with using the database to store credentials.  That being the case, the credential database should not be mixed with the other databases; it should be on its own and not accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found to be too cumbersome to implement.




