Archive for the 'Requirement 8 – Assign a unique ID to each person' Category


An Update On Multi-Factor Authentication

In the November 2016 Assessor Newsletter there is an update to the Council’s statements at the 2016 Community Meeting’s QSA Forum discussion regarding multi-factor authentication (MFA).

“We had a moment of excitement at the North America Community Meeting in September when we responded to a question in the Assessor Session about MFA. As several of us from the Council pointed out, some techniques historically in use are falling out of favor as acceptable approaches to MFA because, as they are becoming used, they fail to meet the basic requirements of MFA. A recent NIST announcement associated with a proposed revision to NIST Special Publication 800-63 series raised the potential of a sunset date for use of SMS as an out-of-band mechanism for a factor in MFA. Based on the questions asked, we felt a refresher on MFA would be of value.

Assessors should understand that multifactor authentication requires two or more independent factors used in such a way that the presenter of the factors gains no knowledge of whether any factor is valid until all factors have been presented. For example, if the first factor is submitted and results in an indication to the user that it is valid before the second factor is requested, then what you actually have is two, single-factor authentications. The critical issue is not when the validation is actually done; rather it is when feedback to the user is provided. If the user can’t tell which factor failed to grant access, then you have MFA. This common practice is illustrated in Figure 1. Figure 2 illustrates the better practice.


Figure 1 is sometimes referred to as a multistep authentication. Figure 2 unifies authentication into a single step. By doing the validation of both factors before providing an indication of authorization success or failure, no information is leaked about either factor.

MFA also requires that the factors be different in type. That is, at least two of the usual three types given below are required:

  • Something you know (e.g., password, PIN, security question challenge)
  • Something you possess (e.g., ICC card, physical token, cryptographic token or private key)
  • Something you are (e.g., physical biometric or behavioral biometric)

The factors must also be independent. Access to one should not grant access to the other. For example, if I use my mobile phone as my device for logging into a system and the system can validate my device with a high-degree of assurance, then it might be the something I possess. However, if it is also where I store my password (or the device to which a one-time-password (OTP) or password reset would be sent), then possession of the device may grant access to both factors. NIST acknowledges this as a risk in its DRAFT NIST Special Publication 800-63B Digital Authentication Guideline: Authentication and Lifecycle Management (5.1.3. Out-of-Band Devices).

Other circumstances may also result in loss of independence, for example, relying on a digital certificate as one factor if it is on the same device into which you are entering your password. If compromise of the device equates to having both the digital certificate and your password, then independence is lost. A similar issue exists when one factor gives access to more than one of the factors used in MFA. This is common with mobile devices that use a single factor to unlock (whether it be a passcode or a biometric) that then gains you access to other authenticators, e.g., stored passwords, the device’s identity, private keys, or software tokens. The assessor should carefully examine any method alleged to be multifactor to verify that it meets all of the requirements. For more information on this subject, consider the following publications:

  • DRAFT NIST Special Publication 800-63-3 Digital Authentication Guideline
  • DRAFT NIST Special Publication 800-63B Digital Authentication Guideline: Authentication and Lifecycle Management
  • DRAFT NIST Special Publication 800-63C Digital Authentication Guideline: Federation and Assertions
  • ISO 19092:2008 Financial Services Biometrics Security Framework
  • ISO/IEC 27040:2015 Information technology — Security techniques — Storage security

[1] Per our current PCI DSS FAQ, multistep authentication may also qualify as multifactor, as long as at least two types of factors are used and the first step is not sufficient to gain knowledge of (or constructive use of) the second authentication factor. Note that an updated version of this FAQ will be published shortly.”
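The unified, single-step validation the newsletter describes can be sketched in a few lines.  This is a minimal illustration, not any product's actual API; the stored values and helper names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical stored verifier data for a single user (illustrative only).
STORED = {
    "password_hash": hashlib.sha256(b"correct horse").hexdigest(),
    "otp": "492817",  # one-time code previously delivered to the user's token
}

def _valid_password(submitted: str) -> bool:
    digest = hashlib.sha256(submitted.encode()).hexdigest()
    return hmac.compare_digest(digest, STORED["password_hash"])

def _valid_otp(submitted: str) -> bool:
    return hmac.compare_digest(submitted, STORED["otp"])

def authenticate(password: str, otp: str) -> bool:
    """Single-step MFA: evaluate BOTH factors, then return one combined
    pass/fail result.  The caller never learns which factor failed, so no
    information leaks about either factor."""
    password_ok = _valid_password(password)  # no early return on failure
    otp_ok = _valid_otp(otp)
    return password_ok and otp_ok
```

The key design point is that `authenticate` never short-circuits with feedback after the first factor; a failed password and a failed OTP look identical to the presenter.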

So let us start with what we probably agree with the Council on in their statements above, because that is the easier discussion.

I think most security professionals would agree that the factors must be independent of the device being used to log onto the systems.  As a result, if you have the RSA SecurID Software Token or Symantec VIP app on a cell phone or tablet, that device should not also be able to log onto the systems you are trying to protect.  The same holds true for the practice of putting a certificate on a device for MFA.  The rationale is that if an attacker has the device and the device owner’s credentials, MFA accomplishes nothing because the second factor will either already be on the device or will be displayed there.

However, the “moment of excitement” occurred because that was not the discussion that took place at the QSA session.  What was stated at that session was that ALL out-of-band MFA to anything other than a traditional fob was no longer allowed.  That is what I heard, and I was not the only one who interpreted the statements that way; a lot of people in that ballroom heard the exact same thing, which is why there was a “moment of excitement”.  And rightly so, as that would have rendered about 90% of MFA solutions totally non-compliant.

There has been a lot of back channel discussion between QSAs regarding the Community Meeting MFA discussion.  One of the first topics was the risk involved.  While we mostly agree with the Council’s position on the independence issue, we have concerns about full adoption of all of NIST’s recommendations regarding MFA.  The Council has acted as though SMS and voice MFA were killed by NIST, but that is not the case.  What NIST is saying is:

“Note: Out-of-band authentication using the PSTN (SMS or voice) is deprecated, and is being considered for removal in future editions of this guideline.”

Deprecated means that it is not recommended, but is still allowed.  Why?

Because while there is a risk of SMS being intercepted, doing so is not nearly as easy as, say, a man-in-the-middle attack over Wi-Fi.  During the back channel discussions, it was questioned whether the Council truly appreciates the real world risk of intercepting SMS and how that risk differs between a government entity or a bank and your run of the mill organization.  It is not a risk with a “one size fits all” rating, because of the complexity of the task.  What has the security community up in arms is that NIST’s recommendation is probably a good thing for a government or a bank to follow, but SMS MFA might still be acceptable for a small business versus no MFA at all or, even worse, lying to their bank that they have MFA.

Keep in mind that this is interception, so the target will not receive the message, only the attacker will receive it.  If you want to pass something else along, that further adds to the complexity.  In order to intercept SMS, one has to accomplish one of the following.

  • Infect the target’s smartphone with a virus.
  • Reissue the target’s SIM.
  • Hack the PSTN.
  • Intercept the target’s cell service via a Stingray type of device.

It is relatively easy to infect smartphones on a large scale.  However, it is very hard to infect a particular smartphone or group of smartphones without the attacker physically getting their hands on the phone(s).  Given the prevalence of using fingerprints and patterns to unlock phones, even physically having the phone makes infecting it not a quick task, and it requires equipment to break into and infect the device.  The likelihood of doing that without making the target(s) suspicious is probably very low.

Reissuing a target’s SIM is relatively easy but creates a huge timing issue.  Because it works only once, the attacker must reissue the SIM right at the time the target is receiving the SMS MFA message or they will miss the code.  The likelihood of that timing working out is very, very low, even for employees of government entities.

So this leaves us with hacking the PSTN and using a Stingray device.  Hacking the PSTN is also supposedly relatively easy.  Here are the steps required to intercept SMS.

  • The attacker must create their own fake call processing capability (MSC).
  • The attacker must then get the real MSC to release the target’s phone to the fake MSC.
  • The attacker must then point his fake MSC to their own device for the SMS MFA message.
  • The attacker must then wait for the target to logon to generate an SMS MFA request.
  • The attacker must then use the intercepted SMS MFA code before the target, who never received the original message, requests a new one.

The first problem is creating a fake MSC.  This is not as easy as you might think for your run of the mill attacker.  Governments have them, criminal organizations have them, but your average hacker going after credit cards is not going to have such capability unless they are extremely serious about their craft as there are much easier ways to go after cardholder data (CHD).

But assuming we have someone that is truly determined and has such a capability, they must then intercept the SMS MFA message and use it before the target gets wise that their SMS is being intercepted.  This means the attacker has to hope that their target is not a heavy user of SMS.  Portio Research estimates that there are around 16 million SMS messages sent every minute in the world.  Given there are approximately 6.8 billion phones in the world, that means your target will, on average, receive just over three SMS messages a day.  One of those is likely to be the MFA message you are trying to intercept, probably the first message of the day, so predictability is on the side of the attacker.
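The back-of-the-envelope arithmetic works out as follows, using the figures cited above:

```python
# Figures cited in the post: Portio Research SMS volume estimate and an
# approximate worldwide phone count.
sms_per_minute = 16_000_000
phones = 6_800_000_000

sms_per_day = sms_per_minute * 60 * 24   # about 23 billion messages per day
per_phone_per_day = sms_per_day / phones

print(round(per_phone_per_day, 1))       # just over three messages a day
```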

That said, most users of SMS MFA will likely try only twice to get their SMS MFA message before they call the help desk to find out what the problem is with the MFA solution.  At that point the attacker will likely be found out, because the help desk will discover that the complaining user is already logged onto the systems.  So just because the attacker has access does not necessarily mean they are home free and can do as they please.

As a result, hacking SMS through the PSTN, while possible, is probably a risk that only a very high value target will have to face.

So in this discussion of SMS MFA risk, what we have left is using a Stingray device to intercept the target’s mobile service.  This will be like drinking water from a firehose, because you will not only grab your target’s service but that of everyone else near your Stingray device.  That brings up the next issue: your Stingray device will have to stay in close proximity to your target in order to grab the information you desire.  If your target is truly mobile, that could be very problematic unless you have the resources of the FBI or CIA to install Stingray devices on every cell tower in town.  Again, I would say the likelihood of such an attack is relatively low for all but the most determined attackers, who will stop at nothing to get into an organization.

At the end of this mental exercise, we again question the Council adopting NIST’s recommendation regarding SMS MFA without considering the actual real world risk.  Just because a threat exists does not mean the risk is automatically high simply because NIST is getting ready to deprecate the technology.  Again, NIST is securing the government and is sharing the results of its research with the rest of us because we, as taxpayers, have paid for it and deserve the results.  That said, that does not mean that everything NIST produces is always relevant to every organization outside of the government.  Most of it is, but not everything.  This SMS MFA deprecation will probably become relevant at some point, but for the current timeframe, SMS MFA is better than no MFA.

But that brings us to the fact that NIST did not say that SMS MFA cannot be used, as they did with SSL and early TLS.  All NIST said was that they do not recommend it and that at some point in the future they may no longer allow it.  As a result, if an organization is using SMS MFA, it is still allowed.  NIST has only put organizations on notice that at some point, SMS MFA will no longer be allowed.

But by their statements, the Council has taken NIST’s future deprecation comment to mean that SMS MFA is dead now and that is false.  Yes, organizations should probably look at any SMS MFA solution skeptically from here on out, but SMS MFA is still allowed by NIST just not recommended because of the risk.  That said and as has been discussed, we question if the risk presented is realistic for all organizations given the effort required.

So let us bring this back to the real world.  The vast majority of large retailers have implemented or are in the process of implementing P2PE/E2EE solutions with tokenization.  Those implementations that are in process will likely be done by the end of 2017.  The remaining 98% of retailers will likely never encounter such an attack because the effort required to tap SMS simply does not justify the reward.

There is a tremendous installed base of MFA infrastructure, and the Council’s statements, which did not match what NIST was saying, threatened the vast majority of that install base.  That is what we are arguing over and what drew the “moment of excitement” at the Community Meeting.

In the end, while it is good to know that NIST believes SMS MFA to be a bad solution going forward, exactly what is the Council protecting with their statements?  With CHD no longer stored by large retailers, the risk sits with the small retailers, transaction gateways, transaction processors and banks.  So the Council’s and NIST’s recommendations should be focused on those entities that actually face the risk and not painted with a broad brush across all organizations.

The Council has chastised us all over the years for not focusing on the risk presented in our assessments.  It is time for the Council to take some of that same medicine and recognize that not every NIST pronouncement needs to be tossed out to the PCI community as though it is gold.  The Council also needs to recognize the risk presented and act accordingly.  It is no longer 2008, and organizations are no longer storing SAD/CHD.

A lot has changed in the decade since the Council was founded.


The Great Multi-Factor Authentication Debate

The Council brought back the Assessor Session at this year’s Community Meeting, and it took only one question to get passions flowing.  The question sought clarification of a comment made by Ralph Poore, Director, Emerging Standards at the Council, about multi-factor authentication (MFA).

First, a little background to get everyone up to speed, remembering that the US National Institute of Standards and Technology (NIST) SP800-63B standard in question is still a draft and has not been finalized.  However, everyone expects this standard to be adopted largely unchanged, with only minor wording revisions that would not affect its overall recommendations.

What NIST stated about SMS is in the Out-of-Band Verifiers section of SP800-63B, which states:

“Due to the risk that SMS messages or voice calls may be intercepted or redirected, implementers of new systems SHOULD carefully consider alternative authenticators. If the out-of-band verification is to be made using the public switched telephone network (PSTN), the verifier SHALL verify that the pre-registered telephone number being used is not associated with a VoIP (or other software-based) service. It then sends the SMS or voice message to the pre-registered telephone number. Changing the pre-registered telephone number SHALL NOT be possible without two-factor authentication at the time of the change. OOB using the PSTN (SMS or voice) is deprecated, and may no longer be allowed in future releases of this guidance.”

NIST is only calling out that implementers of new SMS or voice MFA systems should consider the security implications of using SMS or voice.  NIST has not totally invalidated existing SMS and voice MFA solutions; it just does not want any new implementations unless there is no choice because the process is already underway.  So while SMS or voice MFA can still be used in existing implementations, NIST is saying that future implementations of SMS and voice MFA are out of the question, which basically kills those solutions going forward.
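The PSTN rules in the quoted section can be sketched as simple verifier logic.  Note that `lookup_carrier_type` and `send_sms` below are hypothetical stand-ins for a carrier-lookup service and an SMS gateway, not real APIs:

```python
# Sketch of the out-of-band verifier rules quoted above from draft
# SP 800-63B.  Both helper functions are illustrative placeholders.

def lookup_carrier_type(number: str) -> str:
    # Placeholder: a real implementation would query a carrier-lookup or
    # number-portability database.  Here, a made-up prefix marks VoIP.
    return "voip" if number.startswith("+1500") else "mobile"

def send_sms(number: str, message: str) -> None:
    print(f"SMS to {number}: {message}")

def send_oob_code(preregistered_number: str, code: str) -> bool:
    """Refuse VoIP (software-based) numbers, per the SHALL in 800-63B,
    and only then send the code to the pre-registered number."""
    if lookup_carrier_type(preregistered_number) == "voip":
        return False  # verifier SHALL NOT use a VoIP number
    send_sms(preregistered_number, f"Your verification code is {code}")
    return True
```

The other SHALL in the quote, requiring MFA before the pre-registered number can be changed, would live in the account-management path rather than here.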

With that as our background, in a Community Meeting session Ralph Poore stated that MFA to devices such as smartphones, or back to the same device or browser (i.e., “soft” solutions), was not considered secure because of statements in the NIST draft of SP800-63B.  I was attending a different session when Ralph made his statements, but I can tell you that my cell phone started buzzing with text messages from various people asking if we had all heard what we had heard.  But since there was no Q&A at that session, there was no way to clarify Ralph’s statements.

As a result, this issue was brought up in the Assessor Session to clarify those MFA comments.  Ralph stood and reiterated his remarks and that sent the room into an absolute tizzy.  It was pointed out that NIST had only invalidated SMS and voice for future two-factor authentication, not all soft token solutions such as RSA’s or Symantec’s application solutions.  However, Ralph continued to repeat his remarks saying that they had invalidated all soft solutions.  That brought the house down and people were loudly explaining that his comments were invalidating decades of recommendations for OOB MFA solutions.  Eventually the room calmed down and the Council agreed to review their position on such “soft” MFA solutions.

So that is where we are with this subject.  Time will tell if the Council revises its statements on MFA and comes into line with what NIST is saying on the subject.


Heads Up – Changes To SAQ A

I had a question this week regarding v3.2 of SAQ A that pointed out there have been some changes to that SAQ that people may not have noticed given the larger issues with the changes to the PCI DSS.  As a reminder, SAQ A contains the smallest set of PCI requirements with which any in-scope organization can comply.

Those added requirements to SAQ A are:

  • 2.1(a) – Are vendor-supplied defaults always changed before installing a system on the network?
  • 2.1(b) – Are unnecessary default accounts removed or disabled before installing a system on the network?
  • 8.1.1 – Are all users assigned a unique ID before allowing them to access system components or cardholder data?
  • 8.1.3 – Is access for any terminated users immediately deactivated or removed?
  • 8.2 – In addition to assigning a unique ID, is one or more of the following methods employed to authenticate all users?
  • 8.2.3(a) – Are user password parameters configured to require passwords/passphrases meet the following?
  • 8.5 – Are group, shared, or generic accounts, passwords, or other authentication methods prohibited?
  • 12.10.1(a) – Has an incident response plan been created to be implemented in the event of system breach?

Even when a merchant outsources all of their card processing, these are controls that can still apply because in a lot of cases, a merchant is responsible for setup, management and maintenance of their outsourced payment processes and applications.

In addition, merchant employees will also interact with an outsourced payment system to handle chargebacks and disputes.  Those user accounts used by the outsourced environment will normally be managed by someone at the merchant, not necessarily the service provider.

In regards to incident response, the merchant will be involved even when they have totally outsourced their payment environment.  The merchant will work with their outsourcer through an incident, and the merchant’s responsibilities need to be documented.

As a result, the addition of these controls should not be a surprise to anyone.


PCI DSS v3.2 Draft Released

On Friday, April 15, 2016, while a lot of you were probably getting your US income taxes done, the PCI SSC decided to release the draft of v3.2 of the PCI DSS.  I know the announcement message from the Council ended up in my company’s spam filter, so you may want to check there if you did not receive it.  I was lucky enough to have a colleague forward his copy along to me.  However, to get the draft itself you need access to the PCI Portal, where you can obtain PCI DSS v3.2 and the requisite change log.

These are some of the more notable changes in the new PCI DSS version.

  • The draft provides an official sunset date for v3.1 of the PCI DSS. Regardless of the date in April that v3.2 is released, v3.1 will be withdrawn on October 31, 2016.  So any assessments done after that date will need to comply with and use v3.2.
  • Two new sections have been added to Appendix A. In addition to the appendix for shared hosting providers (now marked A.1), we get Appendices A.2 and A.3.  A.2 covers SSL and early TLS for those of you that will miss the June 30, 2016 date.  For those of you that thought 2018 was the deadline and missed the Webinar discussions about SSL/early TLS: while the deadline was extended to June 30, 2018, any organization missing the June 30, 2016 date must fill out Appendix A.2.  A.3 is where the Council added the designated entities supplemental validation (DESV) requirements.
  • There are a number of new requirements for service providers that are best practices until February 1, 2018. Those new requirements include: (1) maintain a documented description of the cryptographic architecture, (2) detect and report on failures of critical security control systems, (3) perform penetration testing on segmentation controls at least every six months, (4) executive management to establish responsibilities for the protection of cardholder data and a PCI DSS compliance program, and (5) perform reviews at least quarterly, to confirm personnel are following security policies and operational procedures.  I would bet that numbers three and five will likely create a lot of contention with service providers.  But you have until February 1, 2018 to get those in place.  However, if experience teaches us anything, service providers had better start now getting these new requirements in place and operating.
  • All organizations picked up the following new requirements that are best practices until February 1, 2018: (1) change control processes to include verification of PCI DSS requirements impacted by a change, and (2) multi-factor authentication for all personnel with non-console administrative access to the CDE. As with the aforementioned new requirements for service providers, these will also require a lot of organizations to get started now to ensure these new requirements are in place and operating.
  • The Council clarified requirement 8.1.5 to show that it is intended for all third parties with remote access, rather than only vendors. While most organizations understood the intent of this requirement, there were a few that played “legal eagle” and refused to require compliance for non-vendors.
  • Requirement 6.5 has been clarified that developers must go through secure coding training at least annually. This change will likely create some consternation for some organizations that are developing their own software that is in-scope for PCI compliance.
  • Clarified 11.5.a by removing “within the cardholder data environment” from the testing procedure for consistency with requirement, as the requirement may apply to critical systems located outside the designated CDE. This will likely expand the number of systems that require critical file monitoring.
  • Clarified 12.8.1 by saying that the list of service providers now must include a description of the service(s) provided.
  • Clarified 12.8.2 by adding guidance that service provider responsibility will depend on the particular service(s) being provided and the agreement between the two parties.
  • One of my pet peeves has finally been addressed. I have always had an issue with requirement 1.1.6 and the use of the terminology “insecure protocols”.  The reason is that in one way or another, all protocols have their insecurities whether they are known or not.  In v3.2, the Council has finally removed the “insecure” designation as, in their words, “these may change in accordance with industry standards.”  It is those small battles at times that make your day.

There are other clarifications and edits that have been made to the new version.

For all of us QSAs, we await the Reporting Template, which will detail the actual testing to be performed and allow us to assess the real impact on the effort required to conduct an assessment.  As a result, there could still be some surprises with this new version of the PCI DSS.  So stay tuned.


Multi-Factor Authentication

During the recent PCI SSC’s presentation on the coming changes to v3.2 of the PCI DSS, one of those changes is the adoption of the term “multi-factor authentication” instead of “two-factor authentication”.  This change resulted in some heated discussion in the Q&A session that followed their presentation.

Even though the Council was very, very clear about what they meant by multi-factor, a lot of us have concerns about that terminology.  The reason for all of this concern?  Most of us in the information security profession dislike the term “multi-factor authentication” because it is typically stretched to cover stupid practices such as using two sets of credentials.  I have discussed this in a previous post, but I thought the time was right to discuss this topic again before QSAs start running into organizations trying to pawn off two passwords as valid multi-factor authentication.

Multiple factors of authentication are defined as:

  • Knowledge also referred to as ‘something you know’ such as a password or passphrase,
  • Possession also referred to as ‘something you have’ such as an RSA SecurID or Symantec VIP token, and
  • Inherence also referred to as ‘something you are’ such as a fingerprint, hand geometry or voice print.

In order for multi-factor authentication to be secure, each factor must come from a different category, hence the terms one, two and three factor authentication.  Using only a password is single or one factor authentication.  Using a PIN with a fingerprint is two factor authentication.  Using a password with a token number and a fingerprint is three factor authentication.

Therefore using two passwords is not using factors from two of the three categories.  It is using the same category twice, which is not considered secure.  The same holds true for using a fingerprint and an iris scan, as those are also two items from the same category, although those biometric factors are arguably much stronger than just two passwords.

Why are two passwords not considered secure?  An attacker only has to compromise your authentication system and they would likely have access to both sets of credentials.  But if you also require one of the other two factors, the attacker may have the credentials but not the other factor needed to use them.  Therefore, if you are using true two or three factor authentication, your security is still effective.
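The distinct-category rule is simple enough to sketch; the enum and function names below are illustrative only:

```python
from enum import Enum

class Factor(Enum):
    KNOWLEDGE = "something you know"    # password, PIN, security question
    POSSESSION = "something you have"   # hardware or software token, key
    INHERENCE = "something you are"     # fingerprint, iris, voice

def is_multi_factor(presented: list[Factor]) -> bool:
    """True only if the presented factors span at least two DISTINCT
    categories.  Two items from one category (two passwords, or a
    fingerprint plus an iris scan) do not count as multi-factor."""
    return len(set(presented)) >= 2
```

So `is_multi_factor([Factor.KNOWLEDGE, Factor.KNOWLEDGE])` is false for two passwords, while a PIN plus a fingerprint passes the test.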

So all of you out there thinking the Council has blessed using two passwords as multi-factor authentication need to think again, although I know there will be some that do not get this message and will try to use it in that context anyway.


Significant Change And Periodic

UPDATED: Changed comments on requirement 10.6.2 to reflect the correct interpretation of that requirement.

No words or phrases in the PCI standards elicit more comments and questions than “significant change”, “periodic” and “periodically”.

So what do these mean?  Whatever you define them to mean, as it is up to each organization to come up with formal definitions.  Those definitions should be based on your organization’s risk assessment.

Here are some suggestions as to appropriate definitions.

Significant Change

Significant changes are those changes that could impact or affect the security of your cardholder data environment (CDE).  Examples of significant changes are:

  • Changing devices such as firewalls, routers, switches and servers. Going from Cisco to Checkpoint firewalls for example is typically understood as a significant change.  However, people always question this concept particularly when going from a Cisco ASA 5505 firewall to an ASA 5520 or moving a virtual machine from one cluster to another.  The problem is that these moves can potentially introduce new vulnerabilities, network paths or even errors that would go unknown until the next vulnerability scan and penetration test.  And your luck would be that those tests are months away, not just a few days.
  • Changes to payment applications. This should be obvious, but I cannot tell you how many people argue the point on changes to applications.  Yet, application changes are possibly the biggest changes that can affect security.  Not only should applications be vulnerability scanned and penetration tested before being put into production, but code review and/or automated code scanning should be performed as well.  If any vulnerabilities are found, they must be corrected or mitigated before the application goes into production.
  • Upgrades or changes in operating systems. Upgrades and changes in operating systems should also be obvious as significant changes.  However, I have run into network and system administrators that want to split hairs over the impact of OS changes.  In my opinion, going from one version of an OS to another is just as significant as changing OSes.
  • Patching of operating systems or applications. While I do not think that patching necessarily results in a significant change, there are some patches such as updates to critical services such as .NET or the IP stack that should be considered significant.  If you are properly working through requirement 6.1 (6.2 in PCI DSS v2) for patch management, you should take this into consideration and indicate if vulnerability scanning and penetration testing are required after any particular patch cycle because of the nature of any of the patches being applied.
  • Network changes. Any time you change the network you should consider that a significant change regardless of how “minor” the change might appear.  Networks can be like puzzles and the movement of devices or wires can result in unintended paths being opened as a result.

I have a lot of clients that have an indicator in their change management system or enter “Significant Change” in the change comments for flagging significant changes.  That way they can try and coordinate significant changes with their scheduled vulnerability scanning and penetration testing.  It does not always work out, but they are trying to make an attempt at minimizing the number of special scans and tests that are performed.  But such an approach also has a side benefit when it comes time to do their PCI assessment as they can call up all significant changes and those can be tied to the vulnerability scans and penetration tests.

I would see this list as the bare minimum of significant changes.  As I stated earlier, it is up to your organization to develop your own definition of what constitutes a significant change.

Periodic and Periodically

Branden Williams was on a podcast shortly after PCI DSS v3 was released and commented that he felt the words “periodic” and “periodically” occurred more often in the new version of the PCI DSS than in the previous version.  That got me thinking, so I went and checked it out.  Based on my analysis, these words occur a total of 20 times in the PCI DSS v3, with 17 of those occurrences in the requirements/tests.  That is a 150% total increase over v2 and an increase of 113% in the requirements/tests.
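For the curious, the check is trivial to reproduce.  A sketch of the word count, run here against a short hypothetical excerpt rather than the full standard text:

```python
import re

# Hypothetical excerpt standing in for the PCI DSS text; the real analysis
# would run over the full standard document.
text = """Perform periodic evaluations. Review logs periodically.
Periodic media inventories are required."""

# Count "periodic" and "periodically" as whole words, case-insensitively.
hits = re.findall(r"\bperiodic(?:ally)?\b", text, flags=re.IGNORECASE)
print(len(hits))  # 3 occurrences in this excerpt
```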

First off, just to shatter some people’s perception of the word, “periodic” does not equate to “annual”.  Yes, there may be instances where an activity can occur annually and still meet PCI DSS compliance.  But that is likely a rare occurrence for all but the smallest organizations and is definitely not how the Council has defined it.

The Council uses the words “periodic” and “periodically” to reflect that an organization should be following the results of their risk assessment to determine how often or “periodically” they should perform a certain activity.  For some organizations, that might happen to work out to be annually.  But for most organizations it will work out to be something more often than annually.

So which requirements specify a periodic time period?  Here are some of the more notable occurrences.

  • 5.1.2 For systems considered to be not commonly affected by malicious software, perform periodic evaluations to identify and evaluate evolving malware threats in order to confirm whether such systems continue to not require anti-virus software. Typically this would be done annually, but forensic analysis of breaches has indicated that it needs to be done more often, particularly with Linux and other Unix derivatives.  Based on the threats, semi-annual or even quarterly reviews may be needed for systems you believe do not warrant an anti-virus solution.
  • 5.2 Ensure that all anti-virus mechanisms are maintained as follows: Are kept current, Perform periodic scans, Generate audit logs which are retained per PCI DSS Requirement 10.7. Periodic scanning is always an issue with servers but, surprisingly, even more so with workstations.  In my opinion, at a minimum, scans for viruses and malware should be done at least weekly.  This might need to be done daily if the systems are particularly at risk, such as in call centers where the workstations may go to the Internet to access competitor sales offerings.
  • 8.2.4.b Additional testing procedure for service providers: Review internal processes and customer/user documentation to verify that: Non-consumer user passwords are required to change periodically; and Non-consumer users are given guidance as to when, and under what circumstances, passwords must change. This requirement pairs with 8.6.2, which requires service providers with remote access to customers’ systems to not use the same credentials for each customer.  A number of recent breaches have pointed out the issue such a practice can lead to.  Not only are different credentials needed, but the password for those credentials needs to change periodically, typically every 90 days.  This will likely spur the sales of enterprise credential vaults and similar solutions in the service provider ranks.  But it is not just service providers’ credentials; it is also their customers’ credentials.  Service providers need to advise their customers to change their passwords periodically as well, and that should also be at 90-day intervals at a minimum.
  • 9.7 Obtain and examine the policy for controlling storage and maintenance of all media and verify that the policy requires periodic media inventories. For this requirement, the PCI DSS already provides a required timeframe of at least annually.
  • 9.8 Examine the periodic media destruction policy and verify that it covers all media and defines requirements for the following: Periodic here typically means quarterly or even monthly if you have the volume of media to be destroyed.  The key though is to secure the media until it is destroyed.
  • 9.9 Examine documented policies and procedures to verify they include: Maintaining a list of devices, Periodically inspecting devices to look for tampering or substitution, Training personnel to be aware of suspicious behavior and to report tampering or substitution of devices. Here periodic means at least daily, if not more often.  I have clients that examine their points of interaction (POI) at every management shift change, which works out to three or four times a day.  Given that the POI is becoming the primary target of attacks, this will only become more important as time goes on.
  • 9.9.2 Periodically inspect device surfaces to detect tampering (for example, addition of card skimmers to devices), or substitution (for example, by checking the serial number or other device characteristics to verify it has not been swapped with a fraudulent device). Again, as with 9.9, periodic means at least daily, if not more often, and for the same reasons.
  • 10.6.2 Review logs of all other system components periodically based on the organization’s policies and risk management strategy, as determined by the organization’s annual risk assessment. This requirement allows systems to be ranked using an organization’s risk assessment to drive how often log data from systems have to be reviewed.  While systems that directly process, store or transmit cardholder data (CHD) must have their log data reviewed at least daily, other systems that are in scope can have their log data reviewed less often based on the risk they present to the CDE systems.  Based on assessing the risk to these “connected to” systems, you might be able to justify weekly or even monthly review of log data.  I doubt this will have a significant impact because most organizations have implemented internal or outsourced security information and event management (SIEM) solutions and are monitoring all in-scope systems in near real time.  But for those few organizations that are struggling with log reviews without a SIEM, this will afford them a bit of breathing space.
  • 12.10.4 Verify through observation, review of policies, and interviews of responsible personnel that staff with responsibilities for security breach response are periodically trained. It amazes me the number of organizations that claim to not have had an incident in the last year, even a virus or malware outbreak.  Either the incidents were totally dealt with by their anti-virus solution (hard to believe) or I am not talking to the people that deal with these issues (probably more likely).  As a result, testing (which can satisfy this training requirement) is only being done annually, just like business continuity plan testing.  Given the ever-increasing number of threats, this sort of training needs to be done more often than just annually.  Organizations should be testing their incident response plan at least quarterly so that people keep their skills up, as well as exercising the plan and finding any gaps or processes that need adjustment.
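To make the risk-driven cadences above concrete, here is a sketch mapping some of these requirements to illustrative frequencies.  The numbers are this post’s suggestions, not Council mandates; your risk assessment should drive the real values:

```python
from datetime import date, timedelta

# Illustrative cadences drawn from the discussion above; each organization's
# risk assessment should determine the actual frequencies.
CADENCE_DAYS = {
    "5.1.2 malware threat evaluation": 90,    # quarterly for higher-risk systems
    "5.2 anti-virus scans": 7,                # at least weekly
    "9.7 media inventory": 365,               # at least annually per the DSS
    "9.9.2 POI inspection": 1,                # at least daily
    "12.10.4 incident response testing": 90,  # quarterly keeps skills current
}

def next_due(requirement, last_done):
    """Return the date the activity should next be performed."""
    return last_done + timedelta(days=CADENCE_DAYS[requirement])

print(next_due("5.2 anti-virus scans", date(2014, 1, 1)))  # 2014-01-08
```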

Hopefully we are now all on the same page with these terms.


Lawyer Or Security Professional?

“It depends upon what the meaning of the word ‘is’ is. If ‘is’ means ‘is and never has been’ that’s one thing – if it means ‘there is none’, that was a completely true statement.” –President of The United States of America, William Clinton

It has been an interesting time as the December 31, 2014 deadline approaches and version 2 of the PCI DSS comes to its end of life.  I have started to notice that there are a lot of security professionals and others that are closet lawyers based on the discussions I have had with some of you regarding compliance with the PCI DSS.

The first thing I want to remind people of is that if you do not want to comply with one or more of the PCI DSS requirements, all you have to do is write a position paper defining for each requirement you find onerous, why it is not relevant or not applicable for your environment and get your management and acquiring bank to sign off on that paper.  But stop wasting your QSA’s or ISA’s time with your arguments.  It is not that we do not care, but without such approval from your management and acquiring bank, QSAs and ISAs cannot let you off the hook for any requirement.

With that said, the first lawyerly argument we are dealing with these days revolves around the December deadline.  We continue to get into arguments over what the deadline actually means.

It appears that the PCI SSC and card brands’ repeatedly saying that version 2 is done as of December 31, 2014 was not clear enough for some of you.  And further clarifications from them that any reports submitted after that date must be under version 3 are also apparently too much for some of you to handle.  I do not know how there could be any misinterpretation of ‘DEADLINE’, ‘DONE’ or ‘AFTER THAT DATE’ but apparently, there are a lot of people out in the world that do not understand such words and phrases.  Then there are the amazing contortions that some people will go through in a twisted dance to the death to get around this deadline.

Where have you been?  How could you have missed this deadline?  It has been known since the PCI SSC announced the change to when standard updates would be issued, back with the release of the PCI DSS v2 more than three years ago.  But even assuming you were not involved back then, the PCI SSC announced the deadline over a year ago with the release of PCI DSS v3.  Either way, it certainly should not have been a surprise as there has been plenty of warning.

But then do not take this out on your QSA.  QSAs are just the messenger in this process and had nothing to do with setting the deadline.  The PCI SSC and the card brands set that deadline.  You have a problem with the deadline, complain to them.  But if you are willing to listen, I can save you that discussion.  They will politely tell you the deadline is the deadline.  You are out of luck.  If you do not like that answer, then stop taking credit/debit cards for payment for your organization’s goods and services.

The next lawyerly argument is around the June 30, 2015 deadlines for requirements 6.5.10, 8.5.1, 9.9, 11.3 and 12.9.  Again, it is as though these dates were kept from you, which they were not.  I even wrote a post about these requirements titled ‘Coming Attractions’ back in September 2013.

For those that are calendar challenged, June 30, 2015 is practically just around the corner in business terms.  If you had years to get ready for the PCI DSS v3, what makes you think that you can just turn something on in a year and a half?  Yet we continually see people arguing that until that date, they are not going to address any of these requirements.  All as though, like a light switch, something magical will occur on July 1, 2015 that will meet those requirements.

For merchants, requirements 9.9 and 11.3 are going to be huge issues particularly for those of you with large networks and lots of retail outlets.  If you have not gotten started on these requirements now, there is no way you will be compliant with these requirements by July 1.  Both of these require thought, planning and training.  They cannot just be started overnight resulting in compliance.

For requirement 11.3, the new approach required for penetration testing is resulting in vulnerabilities being uncovered.  Organizations that did not want to get caught flat footed are finding that their network segmentation is not as segmented as they once believed.  They are also finding new “old” vulnerabilities because of these network segmentation issues.  The bottom line is that these early adopters are scrambling to address their penetration testing issues.  In some cases ACLs need to be adjusted, but I have a few that have found they need to re-architect their networks in order to get back to compliance.  Obviously the latter is not an overnight kind of fix.

Requirement 9.9 is all about ensuring the security of points of interaction (POI), as card terminals are now referred to.  Because of all of the POI tampering and hacks that have occurred, the Council added the requirements in 9.9 to minimize that threat.  The biggest problem early adopters are running into is getting their retail management and cashiers trained so that they understand the threats and know how to deal with them.  This requires creating new procedures for daily (or more frequent) inventorying of the POIs and visually inspecting them to ensure they have not been tampered with.  Companies are rolling out serialized security tape that must be applied to the seams of POIs so that any opening of the case can be visually detected.  Locking cradles are being installed for every POI to secure them to the counter.  Then there are the new procedures themselves: performing at least daily inspections, knowing what to do if you suspect tampering, and informing corporate of potential issues.  Again, not something that just happens and works day one.

For service providers, besides 11.3, requirement 8.5.1 is going to be their biggest issue.  This requires the service provider to use different remote access credentials for every customer.  This is in response to the breaches that occurred at a number of restaurants in Louisiana a few years ago as well as more recent breaches.

The problem that early adopters of 8.5.1 are finding is that implementing enterprise-wide credential vaults is not as simple as it appears.  The biggest impact with these implementations is that service providers start missing their service level agreements (SLA).  Missing SLAs typically costs money.  So these service providers are not only incurring the costs related to implementing the credential vault solution, but they are suffering SLA issues that just pile on the injuries.
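Setting aside the vault product and the SLA headaches, the core of 8.5.1 is simple: a distinct random credential per customer.  A minimal sketch, with hypothetical customer IDs:

```python
import secrets

def provision_customer_credentials(customer_ids):
    """Generate a distinct, random credential for each customer per 8.5.1,
    rather than reusing one remote access password everywhere.  In practice
    these belong in an enterprise credential vault, never a flat file."""
    return {cid: secrets.token_urlsafe(24) for cid in customer_ids}

creds = provision_customer_credentials(["cust-001", "cust-002", "cust-003"])
print(len(set(creds.values())))  # 3 distinct credentials for 3 customers
```

Generating the credentials is the easy part; the operational cost is in vaulting, rotating and auditing them, which is where the SLA pain described above comes from.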

But the final straw is all of the people that closely parse the PCI DSS and only the DSS.  You saw this with some of the questions asked at the latest Community Meeting.  You also see it in the questions I get on this blog and from the prospects and clients I deal with daily.  These people are hunting for a way to get around complying with a particular requirement.

This occurs because people only read the DSS and not the Glossary, information supplements and other documents provided by the Council.  At least with v3 of the DSS the Council included the Guidance for each of the requirements.  Not that adding Guidance makes a whole lot of difference based on the arguments laid out by some people.  The Council could do us all a favor if they publicly published the Reporting Template with all of the other documents.  Not so much that people would necessarily read it, but it would give QSAs and ISAs more ammunition to use when these discussions come up.

Successful security professionals understand the purpose of security frameworks.  These frameworks are meant to share the collective knowledge and lessons learned regarding security with everyone so that everyone can have a leg up and know ways of detecting and mitigating threats.  Successful security professionals use these frameworks to get things done, not waste their time developing scholarly legal arguments or twisting the English language as to why they do not need to meet some security requirement.  They put their heads down, review the frameworks, develop plans to implement the changes necessary to improve security, work the plan and deliver results.  Do those plans always meet requirement deadline dates?  Not always, but they are close or as close as they can get given other business issues.

The bottom line is that security professionals are not lawyers and good security professionals certainly do not sound like lawyers.  But if you constantly find yourself sounding like a lawyer digging so deep to split legal hairs, in my very humble opinion, you really need to re-examine your career or lack thereof.  I say lack thereof because, in my experience, security professionals that operate like lawyers do not have long careers.  They move around a lot because once people realize that they cannot deliver, they are forced to move on.  Eventually a reputation is developed and after that point these people end up forced to find a new career because the security community knows their modus operandi.


The Flaw In Requirement 8.5.1

Today it was unceremoniously announced that a number of major restaurant chains’ franchisees had been potentially hacked between February 28, 2014 and April 18, 2014 because their point of sale (POS) vendor’s remote access account had been compromised.  I say franchisees because I know a couple of these restaurant chains’ corporate operations and they were not using a third party to manage POS.

In a nutshell, the attackers gained access to the POS vendor’s LogMeIn account.  LogMeIn, like a lot of similar remote access facilities, has an address book where you can store remote access credentials.  So with access to LogMeIn, by default, the attackers had access to the address book that contained credentials for any customer environments in the address book (likely all customers, but possibly not).

To remind everyone, requirement 8.5.1 of the PCI DSS v3 states:

 “Additional requirement for service providers: Service providers with remote access to customer premises (for example, for support of POS systems or servers) must use a unique authentication credential (such as a password/phrase) for each customer.

Note: This requirement is not intended to apply to shared hosting providers accessing their own hosting environment, where multiple customer environments are hosted.

Note: Requirement 8.5.1 is a best practice until June 30, 2015, after which it becomes a requirement.”

The PCI SSC guidance for requirement 8.5.1 states:

 “To prevent the compromise of multiple customers through the use of a single set of credentials, vendors with remote access accounts to customer environments should use a different authentication credential for each customer. Technologies, such as two-factor mechanisms, that provide a unique credential for each connection (for example, via a single-use password) could also meet the intent of this requirement.”

It is likely that the vendor was trying to get a jump on complying with requirement 8.5.1 in the PCI DSS v3.  However, this vendor may have been using such an approach all along to manage customer remote access which is also not uncommon with technology companies.

The first thing to note is that requirement 8.5.1 is a best practice until June 30, 2015 after which it becomes a full requirement.  However, as I pointed out in an earlier post, a lot of vendors will likely have to start rolling out a remote access solution as soon as possible to minimize service level agreement (SLA) issues.

One of the most likely ways vendors are addressing compliance with 8.5.1 is through services such as LogMeIn, GoToMyPC and similar services.  These are inexpensive services available to any organization or anyone.  There are also enterprise solutions such as those from Bomgar and the like that purport to have better security.  However, all of these solutions share the concept of an address book to make gaining remote access easier for the vendors’ users that rely upon them.  And that is their Achilles’ heel.  If an attacker gains access to the remote access service, they gain access to the address book and, therefore, to the customers’ credentials stored in that address book.  Game over.

It is important to note, though, that what this vendor was doing complies with requirement 8.5.1.  Yet even though this service provider was meeting the letter of 8.5.1, the implementation was flawed.  This is just another example of how PCI compliance does not mean that security issues cannot still occur.

How easy is this to happen?  Think of a spear phishing attack against any vendor that does remote support and maintenance.  Regardless of the customer credential management solution (in-house or cloud based), once access to the credential management solution is compromised, any concept of customer security is over.

So what should vendors be doing to mitigate this situation?  Exactly what our breached vendor did: implement two-factor authentication on the credential management system.  Spear phishing attacks will not be successful because, even with the credentials to LogMeIn or similar, the attacker will need the second factor.  Yes, the attacker can still compromise the support person’s desktop, but they will not have access to customer credentials.

Trouble is, some vendors will want a cheap two-factor solution meaning something that sends out codes via SMS, email or telephone, versus RSA SecurID or SafeNet to name a few.  Solutions over SMS, telephone and email have a variety of known vulnerabilities and can easily be intercepted or even redirected.  In the case of LogMeIn, they indicate that they only support SecurID.
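For context on why token-based solutions hold up better than SMS, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, the kind of mechanism hardware and software tokens implement.  This is for illustration only; use a vetted commercial product, not hand-rolled code:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int = None, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a short decimal code (RFC 4226)."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)       # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at T=59 seconds the 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

The code changes every 30 seconds and requires possession of the shared secret, which is why intercepting or redirecting a single SMS-style message does not defeat it.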

Regardless, all of you service providers out there that have remote access to your customers managed by some enterprise credential management solution, please implement a strong two-factor authentication solution on your customer credential management solution before you too become a newspaper headline.

I would like to believe that this vendor thought they were doing the right thing and got burned because of how they implemented their solution.  At least they stepped up and fixed it the right way.  Unfortunately, this is how we sometimes learn, from our mistakes.


Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI informed, sales people pushing redirected ecommerce solutions by claiming that it put the merchant entirely out of scope.  This was not the case and never was the case, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.
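One illustrative mitigation is to validate the redirect target against an allowlist before issuing it; this does not stop an attacker who can rewrite the page’s code outright, which is why the critical file monitoring in requirement 11.5 still matters.  The processor hostname here is hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical processor endpoint the merchant legitimately redirects to.
ALLOWED_PAYMENT_HOSTS = {"payments.example-processor.com"}

def safe_redirect_target(url: str) -> bool:
    """Refuse to redirect anywhere but the known processor host over HTTPS,
    so a tampered redirect URL fails closed instead of sending customers
    to an attacker's payment page."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_PAYMENT_HOSTS

print(safe_redirect_target("https://payments.example-processor.com/checkout"))  # True
print(safe_redirect_target("https://evil.example.net/checkout"))                # False
```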

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements that have to be complied with in SAQ A-EP are there.

The requirements called out from requirement 1 are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101 and I would doubt that any merchant would not have a firewall protecting all of their Internet-facing servers.  Routers have always been optional and, if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.
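As an example of how such a search could be automated, here is a sketch that scans file contents for 13- to 16-digit strings passing a Luhn check; real discovery tooling would also handle formatted, truncated and encoded numbers:

```python
import re

def luhn_ok(number: str) -> bool:
    """Luhn check digit validation, used to weed out random digit strings."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_pans(text: str):
    """Scan log/trace/history file contents for strings that look like PANs."""
    candidates = re.findall(r"\b\d{13,16}\b", text)
    return [c for c in candidates if luhn_ok(c)]

# The 4111... value is a well-known test PAN, not real cardholder data.
log = "user=42 order=1234567890 card=4111111111111111 ok"
print(find_pans(log))  # ['4111111111111111']
```

Saving the scan output (even when empty) provides exactly the kind of proof the SAQ answers need.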

Requirement 4 is provided to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce only merchants have call centers or take telephone calls and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.

Requirement 10 is important for forensic research should the redirect be manipulated: logging makes it possible to determine when that event occurred and, therefore, the scope of any compromise.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue that fact.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
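A minimal sketch of the idea behind critical file monitoring: hash the files at baseline and alert on any mismatch.  The file name and content here are hypothetical; production FIM tools add scheduling, tamper-resistant baselines and alerting:

```python
import hashlib
import os

def hash_file(path: str) -> str:
    """SHA-256 of a file's contents, the basis of a file monitoring baseline."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_changes(baseline: dict) -> list:
    """Compare current hashes to the stored baseline; any mismatch is an
    alert-worthy change, e.g. a tampered redirect page."""
    return [p for p, digest in baseline.items() if hash_file(p) != digest]

# Build a baseline, then tamper with the file to show detection.
path = "redirect_page.html"
with open(path, "w") as f:
    f.write('<form action="https://payments.example-processor.com/pay">')
baseline = {path: hash_file(path)}

with open(path, "a") as f:
    f.write("<!-- attacker edit -->")
changed = detect_changes(baseline)
print(changed)  # ['redirect_page.html']
os.remove(path)
```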

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.


2013 Threats To Databases

Akamai just released their third quarter 2012 Internet statistics and are pointing to China as the generator of at least a third of all attacks.  Not only that, the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I got on a Webinar with Application Security Inc. where they discussed what they see as the top risks for databases in 2013 and, lo and behold, the risks align.  It is no wonder that Chinese attackers are focused on SQL Server: not only are targets plentiful, but most of the time the software was improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement” only to never get back to it or version 2 is a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection it was because the protocol being used required a certain amount of risk in order for the application to work.  While a few years ago that was true, it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our “I probably won’t be here in six months anyway” view of employment, it will be someone else’s problem, not theirs, so why bother?  The next guy can fix it.
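A quick illustration of that quicker/easier/faster trap, using SQLite for brevity: string-built SQL lets attacker input become SQL, while the parameterized version treats the same input purely as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"

# Vulnerable: the string-built query. The injected quote turns the
# attacker's input into SQL, so the WHERE clause matches every row.
unsafe = conn.execute(
    "SELECT count(*) FROM users WHERE name = '%s'" % user_input
).fetchone()[0]

# Safe: a parameterized query binds the input as a value, never as SQL.
safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]

print(unsafe, safe)  # 1 0
```

The parameterized form is no harder to write, which is exactly why “we’ll fix it later” is such a poor excuse.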

Requirement 6 and, in particular, requirement 6.6 require that applications developed get tested and remediated before they are placed into production.  The operative word here is ‘before’ which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker.  Better yet, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that remove services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 has all of the build and security standards required.  The idea is that these standards will tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start, as opposed to finding out after the fact that one of those “just in case” services or features is insecure.

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication, which allows both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is that people seem to forget the SQL Server-managed accounts.  As a result, those accounts typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road.  These accounts also tend to be overlooked in monitoring, so if their credentials are compromised, the compromise might not be recognized for a while, if at all.

Even if SQL Server-managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not automatically enforced for those accounts.  As a result, people outside of the DBA arena assume that SQL Server-managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
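The first step in not forgetting SQL Server-managed accounts is inventorying them.  A minimal sketch of that inventory is below; it queries the SQL Server `sys.sql_logins` catalog view (the list of logins that bypass Active Directory) and assumes a DB-API connection to the instance, for example via pyodbc.  The helper name is illustrative.

```python
# Sketch: list enabled SQL Server-authenticated logins so they can be
# included in account reviews and monitoring. sys.sql_logins is the
# SQL Server catalog view of logins not managed by Active Directory.
# Assumes a DB-API connection (e.g., pyodbc); helper name is illustrative.

AUDIT_QUERY = "SELECT name FROM sys.sql_logins WHERE is_disabled = 0"

def list_sql_logins(conn):
    """Return the names of enabled SQL Server-managed logins."""
    cursor = conn.cursor()
    cursor.execute(AUDIT_QUERY)
    return [row[0] for row in cursor.fetchall()]
```

Running something like this on a schedule, and feeding the output into the same review process used for Active Directory accounts, closes the gap the paragraph above describes.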

An access control issue that occurs with all databases is the use of “service accounts” for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control.  Then to access the database, the application uses a single service account.  Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  When credentials are hardcoded, changing the service account’s credentials requires an application change.  But the larger question is: who has access to the credentials, and how are you ensuring that everyone understands their responsibilities for keeping those credentials secure?
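At minimum, credentials can be pulled out of the code and parameter files and supplied at runtime.  A minimal sketch of that pattern is below, reading from environment variables; the variable names are mine for illustration, and in practice a secrets manager would be the better source.

```python
# Sketch: read service account credentials from the environment instead
# of hardcoding them or storing them in an INI/CFG file in the repo.
# Variable names are illustrative, not a standard.
import os

def get_service_account_credentials():
    """Return (user, password) for the database service account."""
    user = os.environ.get("DB_SERVICE_USER")
    password = os.environ.get("DB_SERVICE_PASSWORD")
    if not user or not password:
        raise RuntimeError("service account credentials are not configured")
    return user, password
```

This way, rotating the service account’s credentials is an operational change rather than an application change, and access to the credentials can be restricted to the people who administer the runtime environment.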

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, it is virtually impossible to protect the information in the databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users’ credentials as well as the credentials for administrators of the e-Commerce environment.  Combine that with the conditions described above, and compromising the database yields the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are also likely stored in the same database or in a related database that shares administrator credentials with the compromised database.
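The key-next-to-the-data problem goes away if passwords are not stored encrypted at all, but as salted one-way hashes, so a stolen credential table does not directly yield passwords.  A minimal sketch using only the Python standard library is below; the iteration count is illustrative and should be tuned to your hardware.

```python
# Sketch: store passwords as salted one-way hashes (PBKDF2-HMAC-SHA256)
# so a stolen credential table cannot be directly reversed to passwords.
# The iteration count is an illustrative work factor, not a standard.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to your hardware

def hash_password(password, salt=None):
    """Return (salt, digest) for a password; a fresh random salt per user."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Constant-time comparison of a candidate password against a stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

With this approach there is no password encryption key to protect, because there is no decryption; an attacker who steals the table is left brute-forcing each salted hash individually.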

Given the ease with which SQL injection and other database attacks can be conducted, the fact that most Internet-facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because that would raise their cost to a point where they are no longer priced for the small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will remain at risk.

One would think using Active Directory or another directory service would be an easy solution.  Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly.  On the face of it, it would appear that way, and it does work for organizations that host their own Web presences.  But for service providers it is not that easy: each customer’s Web presence would need its own branch in the directory’s forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest must be done manually, which would drive up the cost of a site.  The manual process also delays establishing a site until the directory maintenance is completed, which is totally unacceptable in our “have to have/do it now” world.

For the time being we are stuck with using the database to store credentials.  That being the case, the credential database should not be mixed with the other databases; it should stand on its own and not be accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found to be too cumbersome to implement.

December 2021