Archive for the 'Requirement 2 – Do not use vendor-supplied defaults' Category

09 Jun 17

We Need A Change To 2.3.b

I just wanted to give everyone a “heads up” about some guidance we recently received from the PCI SSC regarding jump boxes or out-of-band (OOB) system management solutions and the use of insecure protocols such as SNMPv1/2 and Telnet.

But did everyone know that this solution also requires a compensating control worksheet (CCW)?

For years (at least since the Phoenix Community Meeting years ago), the Council has been recommending the use of firewalls and jump boxes as a way to secure instances where organizations need to use insecure protocols.  These enclaves are firewalled, VLAN'd and configured so that only the jump box can be used to remotely connect to the devices over Telnet, keeping the insecure protocols away from other networks.  However, I do not recall any of those discussions ever explicitly calling out the need for a CCW.  I suppose the Council just figured we would all be bright enough to write one up.
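To make the architecture concrete, here is a minimal sketch in Python of the rule the enclave's firewall enforces.  The addresses and VLAN are hypothetical, and a real deployment would express this in the firewall's own rule language:

    # Illustrative model of the enclave ACLs described above (hypothetical
    # addresses, not any particular firewall's syntax).
    from ipaddress import ip_address, ip_network

    JUMP_BOX = ip_address("10.10.5.10")        # assumed jump box address
    ADMIN_VLAN = ip_network("10.10.20.0/24")   # assumed administrative VLAN
    TELNET, SSH = 23, 22

    def permitted(src, dst, dport):
        """Return True if a packet may enter the administrative VLAN."""
        if ip_address(dst) not in ADMIN_VLAN:
            return True                        # not bound for the enclave
        if ip_address(src) == JUMP_BOX and dport in (TELNET, SSH):
            return True                        # only the jump box may manage devices
        return False                           # everyone and everything else is denied

    assert permitted("10.10.5.10", "10.10.20.7", TELNET)        # jump box: allowed
    assert not permitted("192.168.1.50", "10.10.20.7", TELNET)  # anyone else: denied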

What led me to this revelation you ask?

When I was going through my QSA Requalification this spring, they had a scenario with a jump box solution.  One of the questions related to the scenario involved how you would create a CCW for the insecure protocols used in the administrative VLAN to which the jump box provided access.  While I answered the questions correctly, it triggered a new question regarding why a CCW was needed in the first place.

Then when the question was posed back to the Council, we got a reply indicating that a CCW would be required because of requirement 2.3.b which states:

“Review services and parameter files on systems to determine that Telnet and other insecure remote-login commands are not available for non-console access.”

The problem with the requirement is that it treats all Telnet with equal disdain regardless of risk.  Yes, Telnet is always a clear text protocol, but when it is buried two or three layers away from any general network or the internet and requires administrator credentials and MFA, it is hardly as "at risk" as it was when PCI started over 15 years ago and networks were as flat as a piece of paper.

As a result, I would like to recommend that the Council work to change 2.3.b to take into account the use of network segmentation, firewalls, VLANs, ACLs, MFA and jump boxes to allow the use of Telnet and other insecure protocols in a properly isolated and secure environment.  It seems silly to me that someone goes through all of the right steps to secure their environment only to be told that they still need a compensating control to meet a requirement that does not reflect the real risk.

The other reason I feel this needs to be addressed is that a lot of banks and processors seem to see CCWs as a huge red flag.  Something to be avoided at all costs because it implies to them non-compliance.  And non-compliance is a “bad” thing.  I cannot tell you the collective hand wringing some banks go through for really simple CCWs all because they do not want to have any PCI assessments with CCWs.

Ultimately I think this all comes down to the fact that those banks and processors have no clue as to the amount of risk any CCW presents.  This is because most banks and processors staff their PCI compliance areas with auditors and compliance professionals, not technicians.  Given that the PCI DSS is predominantly about security technology and its implementation, these auditors and compliance people are not equipped to make the decisions that typically need to be made regarding CCWs.  As a result, all CCWs are high risk in their eyes and treated accordingly.

Hopefully the Council can address this situation and we can avoid needless documentation for a preferred “best practice”.

11 May 16

Heads Up – Changes To SAQ A

I had a question this week regarding v3.2 of SAQ A that pointed out there have been some changes to that SAQ that people may not have noticed given the larger issues with the changes to the PCI DSS.  As a reminder, SAQ A contains the fewest PCI requirements with which any in-scope organization can comply.

Those added requirements to SAQ A are:

  • 2.1(a) – Are vendor-supplied defaults always changed before installing a system on the network?
  • 2.1(b) – Are unnecessary default accounts removed or disabled before installing a system on the network?
  • 8.1.1 – Are all users assigned a unique ID before allowing them to access system components or cardholder data?
  • 8.1.3 – Is access for any terminated users immediately deactivated or removed?
  • 8.2 – In addition to assigning a unique ID, is one or more of the following methods employed to authenticate all users?
  • 8.2.3(a) – Are user password parameters configured to require passwords/passphrases meet the following?
  • 8.5 – Are group, shared, or generic accounts, passwords, or other authentication methods prohibited?
  • 12.10.1(a) – Has an incident response plan been created to be implemented in the event of system breach?

Even when a merchant outsources all of their card processing, these are controls that can still apply because in a lot of cases, a merchant is responsible for setup, management and maintenance of their outsourced payment processes and applications.

In addition, merchant employees will also interact with an outsourced payment system to handle chargebacks and disputes.  Those user accounts used by the outsourced environment will normally be managed by someone at the merchant, not necessarily the service provider.

In regard to incident response, the merchant will be involved with incident response even when they have totally outsourced their payment environment.  The merchant will work with their outsourcer through any incident, and the merchant's responsibilities need to be documented.

As a result, the addition of these controls should not be a surprise to anyone.

25 Jul 15

Compensating Control Refresher

From time to time, organizations find themselves in the predicament of not being able to meet a PCI DSS requirement due to business or technical constraints. To address that situation, the PCI SSC has provided the compensating control worksheet (CCW) as a way to work around those requirements that cannot be met directly as stated in the PCI DSS. When the CCW was updated back in 2010 for v1.2, I wrote about those changes and how to write a CCW. However, here we are at v3.1, five years down the road and I still see a lot of poorly and improperly written CCWs. As a result, I think it is time to take people through a refresher on the CCW.

First and foremost, the writing of any CCW is your organization’s responsibility. Your QSA can provide input and guidance, but the origination of the CCW is up to the organization. Once developed, your QSA can review it and make suggestions to enhance and improve the CCW. Once that has been completed, you will then want your acquiring bank to review it to ensure that they will accept it as part of your self-assessment questionnaire (SAQ) or Report On Compliance (ROC) filing.

Secondly, the format of the CCW is dictated by the Council and that format is provided in Appendix B of the SAQ D or in Appendix C of the ROC. Failure to use the proper format will create issues with your QSA, your bank and with the Council, particularly if you are doing a ROC. So please use the Council-supplied format and do not develop something on your own.

Finally, the PCI SSC has stated that any requirement can have a CCW. In the past, the Council instructed QSAs and ISAs that requirement 3.2 [Do not store sensitive authentication data after authorization (even if encrypted). If sensitive authentication data is received, render all data unrecoverable upon completion of the authorization process] was not allowed to have a CCW. At the 2014 Community Meeting, the Council backed away from that restriction and said that any requirement can have a CCW with no restrictions. However, as a QSA I would have a serious problem accepting a CCW for requirement 3.2 because stopping the storage of sensitive authentication data (SAD) is the whole reason the PCI DSS was created.

To remind everyone, the CCW is broken into seven sections.

  • Identification of the PCI DSS requirement(s) being compensated.
  • The constraint or business justification for needing the CCW.
  • The original objective of the requirement(s) being compensated.
  • Identification of any additional risks because of the CCW.
  • The compensating controls.
  • The procedures your QSA/ISA followed to confirm that the compensating controls are in place and functioning.
  • The procedures followed by your organization to maintain the compensating controls.

While the Council tells everyone to have an individual compensating control for each requirement, there are some places where a compensating control is the same for a number of requirements. This most often occurs for requirements in section 8 around the various user management requirements, or for 6.1, 2.2 and 11.2 and the processes of vulnerability management. I would highly recommend using one CCW per requirement, but I can understand why you might combine some. Just be judicious in combining them. Also, list not only the number of the requirement(s), but also the text of the requirement from the Requirements column in the PCI DSS. While your QSA might have memorized the PCI DSS requirements, bankers and others that will read the CCW typically have not committed them to memory, and including the text will help them with the context of the CCW.

The business justification needs to be more than just “we don’t want to” or “it was too hard”. Believe it or not, I have had a lot of organizations provide just such simplistic and silly reasons for justifying a CCW. Proper justifications can involve budgetary constraints, timing (e.g., not enough time to complete remediation by the end of the assessment period), application requirements (e.g., the application requires XP to run) and/or vendor requirements (e.g., the vendor requires a hardware upgrade to correct the issue). If you do have a target date for addressing the CCW, this is where you want to provide that information so that readers know that the CCW has some time limit.

The original objective is the easiest part of the CCW to develop. The Council has provided the “Guidance” column in the PCI DSS for each requirement and it is the verbiage in that Guidance column that you should use to explain the original objective of the requirement. If you are using the CCW for multiple requirements, this section can get rather lengthy and I would recommend identifying the Guidance information with its requirement to help understanding of the information.

The next section can sometimes be the toughest to develop and that is the identification of any additional risks because you are using a CCW. In some cases, there may actually be no additional risk perceived by using a CCW. One such example is when organizations have a separate system management VLAN where network and system administrators can use telnet, SNMPv2 and other insecure protocols in addition to SSH, RDP and other secure protocols to manage devices/systems. These system management VLANs typically require an out of band (OOB) connection to gain access, administrator credentials different from the administrator's user credentials and two factor authentication, to name just a few of the controls you see in this example. These management/administrative VLANs are no riskier than those using only secure protocols.

However, if you are compensating for having to keep Windows XP running, that will likely be a very different story and, depending on the compensating controls put in place, the risk could be moderately higher than not having XP around. The key here is that the risk should be assessed and then honestly discussed in the CCW. If you think you are going to say that having XP does not increase risk to your cardholder data environment (CDE), I would seriously think again regardless of your compensating controls, because any outdated Windows implementation is a security problem waiting to happen no matter how you think you have mitigated the risk.

The compensating controls section is where the rubber finally meets the road. It is here that you document each individual control that compensates for your organization’s inability to meet the requirement(s) in question. I recommend that people either bullet point or number list each individual control. The reason is that in the next two sections, you need to tie the validation and maintenance items to the controls in this section and doing some sort of list makes it easy for people to ensure they have covered all controls in each section.

The most common mistake made in this section is organizations stating that they have a project to remediate the issue(s). Sorry, but this is NOT a control. It is nice information, but it is not a control that can be relied upon. QSAs never want to see statements about future projects in this section. This section is all about what you are doing from a controls perspective to manage the fact that you cannot meet the requirement(s).

Valid controls in this section must also go “above and beyond” what is required by the PCI DSS. Examples of “above and beyond” include:

  • Reviewing log data in real time for a particular condition that would indicate an out of compliance condition on a control (see the sketch after this list). This is above and beyond because log data only needs to be reviewed daily for such conditions.
  • Using whitelisting to identify applications that do not belong on a PC and generating an alert in real time if such applications are found. Such technology is above and beyond because it is not currently required by the PCI DSS.
  • Using critical file monitoring to identify rogue applications that do not belong on a PC and generating alerts in real time if found. Critical file monitoring is a PCI requirement, but this goes above and beyond because monitoring is only required on a weekly basis.
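As a sketch of that first bullet, real time log review boils down to tailing the log and alerting the moment the condition appears. A minimal Python illustration, assuming a hypothetical log path and failure pattern (a real deployment would do this in a SIEM):

    import re
    import time

    LOG_FILE = "/var/log/secure.log"   # hypothetical log path
    # Assumed example of an out of compliance condition to watch for.
    FAILURE = re.compile(r"audit service stopped|control failure")

    def follow(path):
        """Yield new lines as they are appended to the log (tail -f style)."""
        with open(path) as fh:
            fh.seek(0, 2)              # start at the end of the file
            while True:
                line = fh.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                yield line

    for line in follow(LOG_FILE):
        if FAILURE.search(line):
            # In practice this would open a ticket or page the security analyst.
            print("ALERT:", line.strip())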

The list here can go on and on, but hopefully I have given you some ideas of how to create compensating controls that can actually compensate for your inability to comply with the requirement(s).

One key point though is that you cannot use a requirement in the same requirement group to compensate for a different requirement in that group. For example, requirement 6.4 has bunches of sub-requirements under it. You cannot write a compensating control for one sub-requirement in 6.4 and then use a different sub-requirement under 6.4 as one of your compensating controls, regardless of whether it is above and beyond.

The next section will list how the controls were assessed by your QSA/ISA to prove they have been implemented. So using our previous bullet list, here is what the control validation bullets would look like.

  • Observed the security information and event management (SIEM) solution and verified that alerts are generated in near real time for [control failure condition] and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.
  • Observed the [whitelisting solution name] and verified that if rogue applications are loaded on a workstation a near real time alert is generated back to the [whitelisting solution name] master console and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.
  • Observed the [critical file monitoring solution name] and verified that if rogue applications are loaded on a workstation a near real time alert is generated back to the [critical file monitoring solution name] master console and that the alert is followed up by the security analyst to determine if the alert is valid. If valid, the security analyst opens a service ticket and assigns that ticket to the appropriate area for further investigation.

Finally, you need to document what your organization will do to ensure that the controls remain implemented and effective. This is where most compensating controls fall apart. The organization gets through their assessment and then neglects to keep the compensating controls working. Using our list from the compensating controls section, the maintenance controls would look something like this.

  • [Organization name] reviews the SIEM on a [weekly/monthly/quarterly] basis and tests that the alerts for the [control failure condition] are still functioning as designed.
  • [Organization name] reviews the [whitelisting solution name] on a [weekly/monthly/quarterly] basis and tests that the alerts for rogue applications are still functioning as designed.
  • [Organization name] reviews the [critical file monitoring solution name] on a [weekly/monthly/quarterly] basis and tests that the alerts for rogue applications are still functioning as designed.

A good idea in the maintenance section is to set timeframes for remediating any control testing failures.

One other important item of note about the controls, validation and maintenance lists: notice that there are no "forward looking" statements made such as someone "will" perform or "will" review. CCWs must be shown to be in place and operating. A promise of implementing a control is NOT a control either. The control must be shown to be operating and maintained. That is an important point a lot of organizations miss. It means that a CCW cannot be created at the last minute and only become operational after the filing of your SAQ or ROC. If you are going to have to use a CCW, you will need to identify the situation early and then get the compensating controls implemented, validated and through at least one maintenance cycle before the CCW can be accepted.

CCWs can buy organizations time while they address issues that will take longer to address than their PCI assessment period. Unfortunately, there are organizations that see the CCW as a way to be judged PCI compliant without addressing their serious security shortcomings. It is not unusual for large organizations to have a number of CCWs particularly if they have legacy applications and hardware. However, I would highly recommend that all organizations only rely on CCWs if there are no other options to achieving PCI compliance.

01 Jan 15

The Three Hop Rule

At the 2014 Community Meeting, the PCI SSC responded to a question about network segmentation with what has come to be termed the “Three Hop Rule”.  The statement was made that if a device/system was “three hops or more” away from the cardholder data environment (CDE), then it was out of scope.  A lot of us in the room were taken aback by this statement.  And based on some questions of late regarding this subject, there is a lot of confusion out there regarding what the Council was trying to say.

First, the term “hop” is not a network security term nor does it even have any security implications.  The term “hop” is defined as:

“Data packets pass through routers and gateways on the way.  Each time packets are passed to the next device, a hop occurs.”

The count of three therefore is the number of hops or “hop count” between devices.  Hop count is defined as:

“Each router along the data path constitutes a hop, as the data is moved from one Layer 3 network to another.  Hop count is therefore a basic measurement of distance in a network.”

Nowhere in these definitions is there any statement about hops, the number of hops between devices and any correlation of hops and hop count as some form of security.  Hence why a lot of us were really concerned about this statement and likely why there is so much confusion and discussion resulting from the comment.

What we believe the Council was getting at was the number of network segments there are between a device/system and the CDE.  However, having three network layers between the CDE and devices/systems is also no guarantee of security.

What provides security at Layer 3 are the access control lists (ACL) or rules that allow or deny packets to traverse particular paths of the network.  ACLs can be implemented to control what devices and/or ports and services can communicate between various networks.  But ACLs implemented at each hop are still no guarantee that the number of hops between devices secures those devices.

This is why requirement 1 of the PCI DSS requires that the QSA review all relevant ACLs to ensure that the network is truly segmented.  It is also why, in v3, requirement 11.3 requires that penetration testing prove that the network is truly segmented.  As a result, the number of hops between the CDE and a device should never be considered a guarantee that the device is out of scope.
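What that segmentation testing actually proves can be illustrated with a crude Python sketch (the CDE addresses and ports are hypothetical): run from a supposedly out-of-scope segment, no probe into the CDE should connect, no matter how many hops away it sits.

    import socket

    CDE_HOSTS = ["10.20.30.5", "10.20.30.6"]   # hypothetical CDE addresses
    PORTS = [22, 23, 443, 1433, 3389]          # common management/database ports

    def reachable(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    # From an out-of-scope segment, nothing in the CDE should answer.
    leaks = [(h, p) for h in CDE_HOSTS for p in PORTS if reachable(h, p)]
    if leaks:
        print("Segmentation failure - CDE reachable at:", leaks)
    else:
        print("No CDE ports reachable from this segment.")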

The bottom line is that, in order to be truly out of scope, there needs to be ZERO hops between a device and the CDE.

26 Apr 14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI-informed sales people pushing redirected ecommerce solutions by claiming that the approach put the merchant entirely out of scope.  That was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The requirement 1 requirements are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101 and I would doubt that any merchant would not have a firewall protecting all of their Internet facing servers.  Routers have always been optional and if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.

Requirement 4 is provided to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.
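For documenting the certificate under 4.1, a short Python sketch like the following (standard library only; the processor host name is a placeholder) captures the details worth recording:

    import socket
    import ssl

    def describe_certificate(host, port=443):
        """Record the TLS details of the processor's payment endpoint."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                return {
                    "subject": dict(item[0] for item in cert["subject"]),
                    "issuer": dict(item[0] for item in cert["issuer"]),
                    "expires": cert["notAfter"],
                    "protocol": tls.version(),   # e.g. "TLSv1.2"
                }

    # Placeholder host name for your processor's payment page.
    print(describe_certificate("payments.example-processor.com"))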

Requirement 10 is important for any forensic research: should the redirect be manipulated, the logs make it possible to determine when that event occurred and, therefore, the scope of any compromise.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue that fact.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
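At its core, what the critical file monitoring has to accomplish is small: know what the file issuing the redirect should look like and raise an alarm the moment it changes. A bare-bones Python sketch with hypothetical paths (real deployments use a proper FIM product):

    import hashlib
    from pathlib import Path

    REDIRECT_PAGE = Path("/var/www/checkout/redirect.php")   # hypothetical path
    BASELINE_FILE = Path("/var/fim/redirect.sha256")         # hash saved at deployment

    def sha256(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def check_redirect():
        """Alert if the page issuing the redirect no longer matches its baseline."""
        current = sha256(REDIRECT_PAGE)
        baseline = BASELINE_FILE.read_text().strip()
        if current != baseline:
            # In practice: page the security team and pull the page offline.
            print("ALERT: redirect page has been modified!")
        return current == baseline

    check_redirect()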

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

27 Oct 13

Atom AMPD And PCI Compliance

Here is a relatively new player in the network security space for small and mid-sized businesses (SMBs).  A friend of mine that does a lot of work with SMBs is encountering this solution more and more.  And is it any wonder, when it is portrayed as a godsend for SMBs?  On the Atom AMPD Web site, they explain their Kwick Key solution.

“Kwick Key™, a USB flash drive, is the delivery device behind the AtomOS.  The Kwick Key is bootable and plugs into nearly all servers.  Kwick Key users experience a significant savings achieved with a high quality “one key” solution for their networking needs.

Simply install the Kwick Key into an internet-connected server, display the web interface, configure the features and you’re done.  The server is transformed into a multi-functional networking and communication device.

The underlying operating system behind the Kwick Key AtomOS is Linux. The content stored on the server is also backed up on the Kwick Key.  Once configured, the Kwick Key can be transferred to new equipment while maintaining its configuration, providing portability in the event of equipment failure.  A redundant option is also available.”

What is wrong with this picture?

If you said, "Too good to be true," you would be correct.  There are no silver bullet solutions to security.  However, these sorts of "all in one" security solutions are marketed to SMBs all of the time as a cost saving way to be secure.  And since SMBs do not typically have any significant IT personnel, they are always looking for ways to reduce IT workload and save money.  However, if you need to be PCI compliant, this is not a solution for your organization.  Why?

If you read the Savings page on their Web site, they state:

“Your current IT infrastructure is likely requiring multiple boxes to serve your network and communication needs.  This likely includes multiple boxes supporting firewalls, content filters, routing and VoIP applications; each requiring individual training, maintenance, and ongoing licensing fees.  The AtomOS provides just one platform, one interface, one operating system. It brings to bear the BEST practices via a convergent technology.  All modules are tied together by our proprietary user interface.”

That “all in one” solution approach violates PCI DSS requirement 2.2.1 which states:

“Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)”

The reason for requirement 2.2.1 is to leverage the concept of "defense in depth".  Defense in depth relies on multiple layers of defense such that if one layer develops a vulnerability, the other layers can still provide some security and mitigate the vulnerability until it is fixed.  Under the Atom solution, a vulnerability anywhere potentially puts the whole solution at risk because of one part's failure.

As a result, being PCI compliant will require you to purchase multiple Kwick Keys.  I would assume that multiple keys will result in costs that negate Atom's cost advantage over other PCI compliant solutions.

Then go to the solution's product page for Kwick Key.  Take a look at all of the firewall features that are available.  It looks pretty good until you realize there is one notable feature missing – stateful packet inspection (SPI).  Basically, Atom has implemented port filtering, which comes standard on Linux distributions.  Port filtering is not necessarily insecure, but it does not comply with requirement 1.3.6, which explicitly requires that SPI be implemented.
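The difference is easiest to see in a toy model (Python, purely conceptual): a port filter judges every packet in isolation, while stateful inspection only admits packets that belong to a connection it has already seen opened from the inside.

    # Stateless port filtering: any source may hit an open port.
    def port_filter(dport, open_ports={80, 443}):
        return dport in open_ports

    # Stateful inspection: inbound packets must match a flow opened from inside.
    class StatefulFirewall:
        def __init__(self):
            self.flows = set()

        def outbound(self, src, sport, dst, dport):
            self.flows.add((src, sport, dst, dport))   # remember who started it

        def inbound_allowed(self, src, sport, dst, dport):
            return (dst, dport, src, sport) in self.flows

    fw = StatefulFirewall()
    fw.outbound("10.0.0.5", 50000, "203.0.113.9", 443)
    assert fw.inbound_allowed("203.0.113.9", 443, "10.0.0.5", 50000)       # reply: allowed
    assert not fw.inbound_allowed("198.51.100.7", 443, "10.0.0.5", 50000)  # unsolicited: dropped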

There are ways to add SPI to this solution.  However, that will mean you will have to support it yourself and the whole point of the Atom solution is to get out from under supporting such a solution for your organization.

My assumption is that with an appropriate wireless adapter in the system running Kwick Key that the solution will serve as a wireless access point.  Under requirement 1.2.3, wireless is required to be segregated from an organization’s cardholder data environment (CDE) by a firewall.  Given that the wireless is operating on the same device, it is questionable if compliance with this requirement could be truly accomplished.

The same concerns with wireless would exist with the virtual private network (VPN) solution.  Having the remote access to the internal network also running on the same system is not a best practice.  And how secure such a situation would be on this device is questionable.

You need to remember, this is not a purpose built networking device, this is a repurposed computer running Linux.  It is potentially susceptible to any number of Linux-based attacks and vulnerabilities depending on the services running.  And the more services you pile onto this device, the more potential for vulnerabilities.

Then there is the ability to add a voice over IP (VoIP) call manager solution.  Seriously?  What a silly and very dangerous idea.  Why?  VoIP protocols are primarily stateless (i.e., UDP), which means that they cannot be protected by today's firewall technology, which only works with stateful protocols (i.e., TCP).  I have actually had vendors correct me on this because VoIP call set up (pick up the handset) and tear down (hang up the handset) are conducted using TCP.  What these folks always miss is that the actual conversation, the bulk of the activity in a telephone call, is conducted over UDP so that the conversation can be streamed between the phones in use.  And it is not just one or a few UDP ports that can be open; it is typically a range of thousands of UDP ports that are open to support telephony.  Talk about a target rich environment.

Adding a VoIP call manager on top of your firewall is probably the most dangerous thing an organization could do because VoIP is so easy to attack due to the stateless nature of its protocols.  By implementing VoIP on a firewall you are essentially negating the firewall.  Running VoIP on anything but its own dedicated server on its own dedicated network is the only way VoIP should be configured for security, regardless of a need to be PCI compliant.

Finally, there is no pricing provided for the USB “key”.  I always get concerned about “wonder” solutions that do not provide pricing without contacting the vendor’s sales operation.  Nine times out of ten, all this does is force potential customers to then be contacted relentlessly by sales people until they purchase the solution which is likely overpriced.

This post is not to say that this solution is not appropriate for other organizations.  However, if you need to be PCI compliant, this solution is not for your organization if it is implemented as the vendor describes.

24 Feb 13

2013 Threats To Databases

Akamai just released its third quarter 2012 Internet statistics and points to China as the generator of at least a third of all attacks.  Not only that, the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I got on a Webinar with Application Security Inc. where they discussed what they see as the top risks for databases in 2013 and, lo and behold, the risks align.  It is no wonder that Chinese attackers are focused on SQL Server: not only are targets plentiful, but most of the time the software was improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be "first" and to "get applications out the door" we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, "we'll fix it later" or "that is a version 2 enhancement", only to never get back to it, or version 2 turns out to be a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection, it was because the protocol being used required accepting a certain amount of risk in order for the application to work.  While a few years ago that was true, it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our "I probably won't be here in six months anyway" view of employment, it will be someone else's problem, so why bother?  The next guy can fix it.
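The irony is that the quick way and the secure way differ by almost nothing. A minimal illustration in Python using sqlite3 (any parameterized database API works the same way):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, token TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'tok_1234')")

    user_input = "x' OR '1'='1"   # a classic injection payload

    # Vulnerable: the quicker/easier/faster way splices input into the SQL text.
    unsafe = "SELECT token FROM users WHERE name = '%s'" % user_input
    print(conn.execute(unsafe).fetchall())   # returns every row: [('tok_1234',)]

    # Secure: a parameterized query treats the input as data, never as SQL.
    safe = "SELECT token FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing: []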

Requirement 6 and, in particular, requirement 6.6 require that applications developed get tested and remediated before they are placed into production.  The operative word here is ‘before’ which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker.  Better yet, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 has all of the build and security standards required.  The idea is that these standards will tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start, as opposed to finding out after the fact that one of those just-in-case services or features is insecure.

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication, which allows both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is that people seem to forget the SQL Server managed accounts.  As a result, those accounts typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road.  These accounts can also be forgotten in monitoring, so if their credentials are compromised, it might not be recognized for a while, if at all.

Even if SQL Server managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not managed by SQL Server.  As a result, people outside of the DBA arena assume that SQL Server managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
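Those forgotten SQL Server managed accounts are at least easy to inventory. A hedged sketch using pyodbc (the connection string is a placeholder for your environment); sys.sql_logins lists every SQL-authenticated login and whether Windows password policy is being applied to it:

    import pyodbc

    # Placeholder connection string - adjust the driver, server and authentication.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver;Trusted_Connection=yes"
    )

    rows = conn.execute(
        """SELECT name, is_disabled, is_policy_checked, is_expiration_checked
           FROM sys.sql_logins
           ORDER BY name"""
    ).fetchall()

    for name, disabled, policy_checked, expiration_checked in rows:
        issues = []
        if not policy_checked:
            issues.append("no complexity policy")
        if not expiration_checked:
            issues.append("password never expires")
        if issues and not disabled:
            print(name + ": " + ", ".join(issues))   # candidates for review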

An access control issue that occurs with all databases is the use of “service accounts” for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control.  Then to access the database, the application uses a single service account.  Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  When the credentials are stored in the code, changing the service account's credentials requires an application change.  But the larger question is who has access to the credentials and how are you ensuring that everyone understands their responsibilities to ensure the credentials' security?
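At a minimum, the service account credentials belong outside the code and the parameter file. A simple sketch, assuming hypothetical environment variable names (a proper secrets manager is better still):

    import os

    def get_db_credentials():
        """Pull the service account credentials from the environment at runtime
        rather than hardcoding them or leaving them in an INI/CFG file."""
        user = os.environ["APP_DB_USER"]          # hypothetical variable names
        password = os.environ["APP_DB_PASSWORD"]
        return user, password

    # Rotating the credential now means updating the environment (or the secrets
    # manager feeding it), not changing and redeploying the application.
    user, password = get_db_credentials()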

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, then it will be virtually impossible to protect the information in the databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users’ credentials as well as the credentials for administrators of the e-Commerce environment.  As a result of the other conditions, compromise the database and you have access to the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are also likely stored in the same database or in a related database that shares administrator credentials with the compromised database.
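Storing credentials as salted, slow hashes removes the "compromise the table, get the passwords" problem, and it requires no encryption key sitting next to the data. A minimal sketch using only the Python standard library:

    import hashlib
    import hmac
    import secrets

    ITERATIONS = 600_000   # deliberately slow; tune to your hardware

    def hash_password(password):
        """Return (salt, digest) suitable for storing in the credential table."""
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("guess", salt, digest)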

Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because that would raise their cost to a point where they are not priced for small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will still be at risk.

One would think using Active Directory or another directory service would be an easy solution.  Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly.  On the face of it, that works, and it does work for organizations that host their own Web presences.  But for service providers it is not that easy, because each customer's Web presence would have to have its own branch in the directory's forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which would drive up the cost of a site.  The manual process also delays establishing a site until the directory maintenance is completed, which is totally unacceptable in our "have to have/do it now" world.

For the time being we are stuck with using the database to store credentials.  That being the case, the credential database should not be mixed with the other databases; it should be on its own and not accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found too cumbersome to implement.



