25 May 2015

SSL and TLS Update

At the beginning of March, a new SSL/TLS vulnerability called FREAK was announced. It compounded last fall’s announcement of POODLE, which caused the PCI SSC to abruptly declare SSL and “early” TLS (i.e., TLS versions 1.0 and 1.1) no longer acceptable as secure communication encryption.

In April, the PCI SSC issued v3.1 of the PCI DSS and gave us their take on how to address POODLE. Their plan is for organizations to remediate SSL and “early” TLS as soon as possible, but definitely by June 30, 2016. While remediating, organizations are required to have mitigation programs in place for these protocols. There are some exceptions to the June 30, 2016 deadline for devices such as points of interaction (POI), but those exceptions are few and far between and still require some form of mitigation.

Reading the explanations of the POODLE and FREAK vulnerabilities, while they are technically possible to exploit over the Internet, they are much more realistic to execute successfully from inside a network. As such, these vulnerabilities are more likely to be used as part of an attacker’s toolkit when compromising a network from the inside. This is not good news, as an organization’s internal network is much more vulnerable: a lot of appliances and software have SSL and TLS baked into their operation and will not be quickly remediated by vendors, if they are remediated at all (i.e., you will need to buy a new, upgraded appliance). As a result, organizations need to focus on their internal usage of SSL and “early” TLS as well as external usage.

Remediating these vulnerabilities on the Internet-facing side of your network should be quick: stop supporting SSL and TLS versions 1.0 and 1.1 for secure communications. While I do know of a few rare situations where such action cannot be taken, most organizations can simply turn off SSL and TLS v1.0/1.1 and be done with the remediation.
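For homegrown services, turning off the old protocols can be a one-line configuration change. Here is a minimal sketch using Python’s standard `ssl` module (the certificate file names in the comment are placeholders); appliances and packaged software need the equivalent setting in their own configuration:

```python
import ssl

def make_strict_context() -> ssl.SSLContext:
    """Build a server-side TLS context that refuses SSL and "early" TLS."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Nothing older than TLS 1.2 will be negotiated.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # In a real service you would also load the server credentials here,
    # e.g. ctx.load_cert_chain("server.crt", "server.key") -- placeholder names.
    return ctx
```

Any client that can only offer SSLv3 or TLS 1.0/1.1 simply fails the handshake against a context like this, which is exactly the behavior the Council wants on the Internet-facing side.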

As I pointed out earlier, it is the internal remediation that is the problem, because of all of the appliances and software solutions that use SSL/TLS whose vendors are not necessarily addressing those issues as quickly. As a result, the only approach is to mitigate the issues with appliances that are at risk. Mitigation can be as simple as monitoring the at-risk appliances for any SSL or TLS v1.0/1.1 connections through log data, or proxying those connections.
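The log-monitoring mitigation can be sketched as a simple filter over connection records. The log format below (a `protocol=` field per line) is invented for illustration; adapt the pattern to whatever your appliances actually emit.

```python
import re

# Protocol labels considered SSL or "early" TLS. The exact spellings
# vary by vendor; extend this set to match your own log output.
WEAK_PROTOCOLS = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.0", "TLSv1.1"}

PROTO_RE = re.compile(r"protocol=(\S+)")

def weak_connections(log_lines):
    """Return the log lines that negotiated a deprecated protocol."""
    flagged = []
    for line in log_lines:
        m = PROTO_RE.search(line)
        if m and m.group(1) in WEAK_PROTOCOLS:
            flagged.append(line)
    return flagged
```

Run something like this daily against the appliance logs and alert on any hits; the list of flagged sources doubles as your remediation worklist.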

The answer to SSL and TLS vulnerabilities is to remediate as soon as possible. If you are unable to remediate, then you need to mitigate the risk until you can.


17 Responses to “SSL and TLS Update”


  1. Neon
    October 29, 2015 at 5:20 AM

    Our applications facing the internet for users are on TLS 1.2. However, a few of the applications have connections to our partner sites running on TLS 1.0, because our partners were certified between December 2014 and March 2015. Does this TLS 1.0 connection affect our PCI compliance?

    • October 30, 2015 at 6:11 AM

      The PCI SSC has always stated that one entity’s non-compliance with the PCI DSS does not impact another entity’s PCI compliance.

      That said, you do need to work with business partners that have potential PCI compliance issues to either mitigate the risk those issues present or find business partners that are PCI compliant. In this instance, your business partner using TLS 1.0 (aka early TLS) is an issue that needs mitigation but is not a true PCI compliance issue until July 1, 2016.

  2. Brian
    May 26, 2015 at 9:35 AM

    Whoa Whoa Whoa…Let’s slow down and take a step back here…

    In your interpretation, ‘early’ TLS includes TLS 1.0 and 1.1. Let’s look at this holistically:

    PCI DSS v3.1 says to “Refer to industry standards and best practices for information on strong cryptography and secure protocols (e.g., NIST SP 800-52 and SP 800-57, OWASP, etc.).”

    NIST SP 800-52r1 (http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-52r1.pdf)
    Servers that support citizen or business-facing applications shall be configured to support version 1.1 and should be configured to support version 1.2. These servers may also be configured to support TLS version 1.0 in order to enable interaction with citizens and businesses. These servers shall not support SSL version 3.0 or earlier. If TLS 1.0 is supported, the use of TLS 1.1 and 1.2 shall be preferred over TLS 1.0. Some server implementations are known to implement version negotiation incorrectly. For example, there are TLS 1.0 servers that terminate the connection when the client offers a version newer than TLS 1.0. Servers that incorrectly implement TLS version negotiation shall not be used.

    OWASP (https://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols)
    SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including SSLv2 and SSLv3, hence SSL versions 1, 2 and 3 should no longer be used. The best practice for transport layer protection is to only provide support for the TLS protocols – TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.

    OWASP is advising that any TLS version is acceptable, but I’m not buying that. There are no versions of TLS earlier than 1.0, so let’s assume that when the PCI DSS mentions “early” TLS, it is referring to TLS 1.0.

    I think you are jumping the gun here with banning TLS 1.1 outright. The reason the SSC is using the term “early TLS” is to allow interpretation of the new requirement. I’m not saying you’re wrong, but to influence others to outright ban TLS 1.1 as an option is a bit overkill. I am security minded too, and TLS 1.2 should always be the preferred option.

    Just my two cents backed up by NIST, hope to see you in Vancouver!
    Brian

    • May 26, 2015 at 10:01 AM

      I think everyone agrees that TLS v1.0 is just SSL v3 dressed up a bit differently and just as vulnerable to attack.

      It is TLS v1.1 that is in doubt, particularly now with FREAK. I think there are ways to mitigate the risk with TLS v1.1, but for a lot of people that may not be possible (i.e., appliances and packaged solutions versus homegrown applications). As a result, I think the Council believes the best course of action is for organizations to follow the NIST guidance if they can. Otherwise organizations need to get to TLS v1.2 as soon as they can and avoid the other versions.

  3. JJ
    May 25, 2015 at 3:44 PM

    This change is going to cause significant issues with customer-facing systems, and that will push fixing the problem right to mid-2016. While all major current browsers support TLS 1.2, there certainly is no guarantee that people will be using them, and that will translate to lost revenue if Internet-facing systems are fixed immediately. Not. Going. To. Happen.

    IE 9 and IE 10 are TLS 1.2-capable, but it’s turned off by default. Since you cannot even use a TLS 1.0 listener to divert people to a page telling them why they cannot connect, their payment attempts will just fail.

    Android prior to 4.4 (Jelly Bean and earlier) is not TLS 1.2 capable, though you could install Chrome or an alternate browser. But, again, you have no way to warn people once you enable only TLS 1.2. Fortunately, iOS since v5 seems to be OK.

    All of this means that companies will set their mitigation plan schedule to the last possible date to avoid an impact on revenue for Internet-facing systems.

    Internal is hopeless. Our non-IBM mainframe does not support anything beyond TLS 1.0.

    • May 26, 2015 at 4:44 AM

      Bankers made a HUGE noise over the killing of IE6 by their regulators years ago. According to the bankers, the sky was falling and not supporting IE6 would totally kill online banking. Turned out that ditching IE6 was, for the most part, a non-event.

      I’m not saying that there will not be impacts, I am just saying that you might be overestimating the impact.

      • Bad Dos
        May 26, 2015 at 5:48 PM

        There is a huge difference from someone checking their account balance to a voluntary purchase of goods from someone’s website where there are other vendors willing to do it over the TLS 1.0 protocol.

      • JJ
        May 26, 2015 at 7:41 PM

        Actually, I work for a large bank so I have access to the logs that show browser versions connecting. 🙂

        Android will be the big problem. Half are still running pre-4.4: https://developer.android.com/about/dashboards/index.html?utm_source=suzunone

        I think TLS 1.1 is a non-starter. It had very low server penetration and if some versions are a fail, we’ll be working to convince our QSA that we’re not running a bad variant.

      • May 28, 2015 at 3:59 AM

        Yes, mobile devices are now a huge problem. Vendors produce them and then move on without ever thinking about supporting the older versions. While smartphones at least seem to get periodically replaced and upgraded, tablets are becoming the larger problem because people replace them about as infrequently as notebook PCs. Worse, people treat their tablets like notebook PCs, so they are much more at risk than with their smartphones.

      • JJ
        May 28, 2015 at 4:29 AM

        Precisely, Bad Dos. That’s why this problem will have almost zero resolution for Internet-facing systems for another year.

      • May 28, 2015 at 4:53 AM

        If it ever is truly resolved. I say that because I recently ran across an organization running a significant farm of Win2K Servers. Couldn’t believe it.

      • JJ
        May 29, 2015 at 4:31 AM

        As long as they are not Internet-facing and fully patched for being Windows 2000, isn’t that acceptable under PCI-DSS? I didn’t think there was a “fail” for out-of-date and unsupported software although IMHO there certainly should be.

        BTW, if there was a reverse proxy in front of them, so they technically are not Internet-facing, would it still be a fail as long as the ASV scan couldn’t see the real banner or something that said what the back end servers were?

        Our processor/issuer, one whose name you would instantly recognize, has software we run to access their mainframe over the Internet for card queries and other card work. Yeah, you guessed it: it only ran SSLv3 because it’s built on Java 1.6. After we escalated the issue to their CISO at a conference last year (face to face really is better for getting their attention), they proudly released a fixed version. Eight months later. It now supports either SSLv3 or TLS 1.0 only.

        The current APC PowerChute software v9, used for remote location UPS monitoring, is built on Java 1.6, so it does not support TLS. Even better, it runs on export ciphers. Their workaround is to install (and constantly update) Java 1.8 and use a tool they have so PowerChute uses JRE 1.8 instead of 1.6. So our choices are to (a) install the second-most vulnerable software in the world on our servers, (b) discontinue remote monitoring of their UPS units, or (c) wait a year to see if APC gets their act together. We’re opting for “C”.

      • May 29, 2015 at 5:26 AM

        Unsupported operating systems are an automatic fail under the ASV scanning rules (page 19 of the ASV Program Guide). Because of that, most QSAs have carried this requirement into their PCI assessment programs. Organizations are allowed to create compensating controls for those unsupported environments. However, Windows 2000, as an example, creates all sorts of other possible security issues that make it virtually impossible to construct a compensating control that would protect it and the rest of the network.

        ASVs are required to NOT be blocked when running their scans, so that does not protect these old solutions either. This is discussed on page 14 of the ASV Program Guide in the section ‘ASV Scan Interference’.

        Java is a real pain in the behind to keep current. Every time a new version comes out with patches, it seems to break applications written for the older versions. To patch, you must at times rewrite your applications to work with the new version. As a result, you get the situation you are concerned about.

        A number of years ago, a major outsourcer for bank technology had a similar situation. A key part of their check processing solution ran on Windows NT v4.0. It took them almost eight years to get it to (wait for it) Windows 2000, right as Windows 2000 was going out of support. We could not believe it. It took them another three years to get it to XP, where I believe it still runs.



