Archive Page 2

25 May 15

SSL and TLS Update

At the beginning of March, a new vulnerability in SSL and TLS, dubbed FREAK, was announced. It compounded last fall's announcement of POODLE, which caused the PCI SSC to abruptly declare SSL and “early” TLS (i.e., TLS versions 1.0 and 1.1) no longer acceptable for encrypting secure communications.

In April, the PCI SSC issued v3.1 of the PCI DSS and gave us their take on how to address POODLE. Their plan is to have organizations remediate SSL and “early” TLS as soon as possible, but definitely by June 30, 2016. Until that remediation is complete, organizations are required to have mitigation programs in place for these protocols. There are some exceptions to the June 30, 2016 deadline for devices such as points of interaction (POI), but those exceptions are few and far between and still require some form of mitigation.

Reading the explanations of the POODLE and FREAK vulnerabilities, it becomes clear that, while these attacks are technically possible over the Internet, they are far more likely to be performed successfully internally. As such, these vulnerabilities are more likely to be used as part of an attacker's toolkit when compromising a network from the inside. This is not good news because an organization's internal network is much more vulnerable: a lot of appliances and software have SSL and TLS baked into their operation and will not be quickly remediated by vendors, if they are remediated at all (i.e., you will need to buy a new, upgraded appliance). As a result, organizations need to focus on their internal usage of SSL and “early” TLS as well as their external usage.

The remediation of these vulnerabilities on the Internet-facing side of your network should be quick. Stop supporting SSL and TLS versions 1.0 and 1.1 for secure communications. While I do know of a few rare situations where that is not possible, most organizations can simply turn off SSL and TLS v1.0/1.1 and be done with the remediation.
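For services you control directly, turning off the old protocols can be a small configuration change. Here is a minimal sketch assuming a Python service (in practice this change usually happens in the web server or load balancer configuration, and the certificate file names are hypothetical):

```python
# A minimal sketch, assuming a Python service you control; in practice this
# change usually happens in the web server or load balancer configuration.
import socket
import ssl

# Build a server-side context that refuses SSLv3 and TLS 1.0/1.1 outright.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2      # nothing older is negotiated
context.load_cert_chain("server.crt", "server.key")   # hypothetical file names

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # TLS 1.0/1.1 clients fail the handshake here
        print("Negotiated", conn.version(), "with", addr)
        conn.close()
```

A client offering only SSL or TLS 1.0/1.1 simply fails the handshake against this listener, which is exactly the behavior the Council is after.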

As I pointed out earlier, it is the internal remediation that is the problem. That is because of all of the appliances and software solutions that use SSL/TLS, whose vendors are not necessarily addressing those issues as quickly. As a result, the only approach is to mitigate the issues with the appliances that are at risk. Mitigation can be as simple as monitoring the appliances for any SSL or TLS v1.0/1.1 connections through log data, or fronting those connections with a proxy that supports TLS 1.2.
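What that monitoring might look like in practice: a minimal sketch that tallies legacy-protocol connections out of log data, assuming the appliance or proxy records the negotiated protocol (the log file name and field layout here are hypothetical):

```python
# A minimal sketch, assuming appliance or proxy logs that record the negotiated
# protocol; the field names and log path are hypothetical.
import collections
import re

LEGACY = ("SSLv2", "SSLv3", "TLSv1", "TLSv1.0", "TLSv1.1")
pattern = re.compile(r"protocol=(\S+)\s+client=(\S+)")

counts = collections.Counter()
with open("tls_connections.log") as handle:          # hypothetical log file
    for line in handle:
        match = pattern.search(line)
        if match and match.group(1) in LEGACY:
            counts[(match.group(1), match.group(2))] += 1

# Report which clients are still negotiating SSL or "early" TLS so the
# connections can be tracked, proxied or chased down with the vendor.
for (protocol, client), hits in counts.most_common():
    print(f"{client} negotiated {protocol} {hits} time(s)")
```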

The answer to SSL and TLS vulnerabilities is to remediate as soon as possible. If you are unable to remediate, then you need to mitigate the risk until you can remediate.

15 May 15

Whole Disk Encryption Explained

There are a lot of security professionals and lay people who seem to believe that encryption is encryption, and that is simply not the case. Worse yet, vendors play into this misconception and obfuscate the issue with lots of words and phrases about seemingly complicated encryption key management procedures and the like that are actually meaningless when it comes to protecting data on running systems. As a result, it is time to clear up this misunderstanding.

First things first, we need to discuss how whole disk encryption works. As its name implies, whole disk encryption encrypts an entire disk drive. When a file on the encrypted drive is accessed, the encryption solution decrypts just that file using the decryption key provided at system startup, and the rest of the drive remains encrypted. That way, if a system failure occurs or the system is shut down deliberately, the drive is always protected.

That is the key concept of whole disk encryption. The drive is only truly encrypted when the system is shut down. If the system is running, the encryption is effectively not in place because the operating system has the decryption key and can access the disk at will. This is why whole disk encryption is great for notebooks and other devices that are shut down at some point.
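To make the point concrete, here is a toy sketch of the concept only, not how BitLocker or any commercial product actually works (it uses the third-party cryptography package, and the sample data is made up): once the key is loaded at startup, every read is transparently decrypted, so on a running system the data is only as safe as the access controls in front of it.

```python
# A toy sketch of the concept only -- NOT how BitLocker or any commercial
# product works.  It shows that once the key is loaded, reads are transparently
# decrypted, so a running system exposes plaintext to anyone the operating
# system lets in.  Uses the third-party 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # in real products, supplied at boot
data_at_rest = Fernet(key).encrypt(b"cardholder data 4111111111111111")  # the "encrypted disk"

def read_block(ciphertext: bytes, loaded_key: bytes) -> bytes:
    # While the system is up, the key sits in memory and this call succeeds
    # for any process with logical access -- access control, not encryption,
    # is what stands between an attacker and the data.
    return Fernet(loaded_key).decrypt(ciphertext)

print(read_block(data_at_rest, key))        # running system: plaintext comes right back
# Power the system off and 'key' is gone; only the ciphertext remains on the drive.
```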

This is also why whole disk encryption is meaningless when applied to a server. When is a server shut down? Never. When using whole disk encryption on a running server, the only controls protecting the data are access controls, not encryption.

So using this definition, let us examine requirement 3.4.1 in the PCI DSS. That requirement states:

“If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.”

The first statement that throws people is “logical access must be managed separately and independently of native operating system authentication and access control mechanisms”. In short, whole disk encryption cannot rely only on the operating system’s authentication process.

The best example of this is Microsoft BitLocker. BitLocker has a number of modes under which it can operate. It can integrate with Active Directory (AD), it can rely on a trusted platform module (TPM) chip in the computer or it can operate stand-alone.

In stand-alone mode, BitLocker requires the user to provide the BitLocker key in order to boot the system, either by keying it in manually or by supplying it on a USB device that stores the key. If that key is not provided, the system will not even present a logon prompt. This form of BitLocker meets the requirements set forth in requirement 3.4.1.

But then the requirement goes on to say, “Decryption keys must not be associated with user accounts”.

In stand-alone mode, the BitLocker key is not associated with the user’s credentials so it also meets this part of 3.4.1.

However, in the AD or TPM modes, BitLocker operates behind the scenes and the end user never knows that their disk is encrypted and the user still logs onto the system as always using their Windows credentials. These modes do not meet the independence requirement in 3.4.1 because all that is protecting the data is the user’s Windows credentials. And in the case of AD mode, BitLocker also does not meet the user credential disassociation requirement because the BitLocker decryption key is tied to the user’s Windows credentials.

But if people fully read the Guidance column for requirement 3.4.1, they would find the following statement from the Council at the end:

“Full disk encryption helps to protect data in the event of physical loss of a disk and therefore may be appropriate for portable devices that store cardholder data.”

BINGO!

Whole disk encryption helps protect data in the event of physical loss of a disk. Period.

So with a server that never shuts down, if a drive in a SAN or NAS fails, the failed disk with whole disk encryption will be encrypted when it is pulled from the array. But if things are running just fine, whole disk encryption does nothing to protect the data.

So do not be baffled by the statements from those vendors trying to convince you that whole disk encryption on your server is going to protect your data while the server is running. That is not true.

Now you know.

03 May 15

By All Means, Do As Little As Possible

I write this because I have had enough of arguing over the lowest common denominator when it comes to securing networks, servers and applications. Reading the articles in the various media and trade journals, one would get the distinct impression that putting forth any sort of effort is beyond a lot of people's capacity.

Do you people complaining about the difficulty of achieving compliance with a security framework ever listen to yourselves? I would say the answer is “No” because if you did, you would understand where I am going.

Do you realize that you are arguing over doing the bare minimum? I would guess that would be a resounding “No” because, again, you would understand where I am going.

If none of this rings a bell, then maybe this does. When was the last time anyone told you that only doing the minimum was acceptable? If they did, then they are people I would not want to associate with because they are likely on their way out the door as you will be shortly once that breach occurs.

All security frameworks are a bare minimum. They do not guarantee security of anything. What they do is define the “best practices” or “common knowledge” of what it takes to have a reasonable chance of being secure. But it gets worse. Security frameworks require perfect execution, i.e., being compliant 24x7x365, in order to succeed. And as those of you complaining are rudely finding out, that just does not happen when people are involved.

In order to address the shortcomings of people, security frameworks are layered. You must have heard the phrase “layered approach” time and again during security discussions. The layers are there so that when people fail, their failure does not result in a total failure of an organization’s security posture. Where things go wrong is when there are multiple failures. It does not matter that things are layered when the vast majority of those layers are circumvented by multiple failures.

Oh, you do not think that is how a breach happens? Read the Verizon DBIR or the PCI reports on breaches and they list out the multiple processes that failed and led to the breach, not just a spear phishing email or the breach of a firewall. Those were the start of it all, but it was a lot of other things that ultimately led to the success of the breach.

Another rude awakening for management and security professionals alike is how quickly all of that security technology they have invested in is rendered useless once a phishing email compromises an insider's account. That is because a lot of organizations' security posture is like an M&M candy – hard on the outside with that soft chocolate center on the inside. If you go back to the Verizon reports and read the details, you will see how many attacks came to fruition through compromised insider accounts. They may not necessarily be categorized as insider attacks, but an insider was compromised as part of the successful attack.

Which brings me to security awareness training and the fact that people consistently complain that it is worthless. Did you really believe that one session, once a year, was going to change people's bad habits? If you did, I have some property I would like to sell you. You must harp on this topic constantly and consistently. I know that is not what you want to hear, but people only learn by being told repeatedly to stop their bad habits. Even though a lot of people approach this subject by making it annoying and painful, it does not have to be that way. But it is the only way to have an effect, and it will not happen overnight, and not everyone will learn the lessons. Security awareness takes years and lots of patience, but it does eventually pay off.

The bottom line is that security is a war between you and the people who want your organization's intellectual property, card data, medical records, financial information, whatever information you are trying to protect. Wars are won or lost on the strategy used and the battle intensity of the soldiers involved. Wars and battles are not won with mediocrity, yet mediocrity is exactly the approach you are arguing for. Mediocrity in war is how people die, not how they survive.

Let me know how that mediocre approach works out. That is, if you are even around to let me know.

25 Apr 15

PCI Survey

A gentleman working on his Master's thesis at the University of East London approached me a month ago for my take on PCI, and his questions resulted in the post PCI SWOT Analysis.

Now this gentleman is looking for further feedback via a Survey he has constructed. I would like to encourage my readers to take his survey to further his research as well as provide all of us with feedback on PCI.

The survey can be taken here.  (https://docs.google.com/forms/d/1ua_KAPvBu0Ziu1GQ18vZj6qq1selSMMnLtVD-D1NBJo/viewform)

Thank you.

20 Apr 15

Why Requirement 5 Must Change

This issue came to a head recently when a colleague of mine attended an ISSA chapter meeting where there was a session given on anti-virus by someone from a US government intelligence operation. I had entirely forgotten about this until they brought it back up. The issue is the ineffectiveness of anti-virus solutions and why they are ineffective.

Most of us have seen the anti-virus testing results that are periodically pumped out by the various trade journals. They all point out that anti-virus is only around 30% to 40% effective in detecting malware. But what never seems to get brought up and clearly discussed is why anti-virus solutions are so bad at their job.

The reason is that anti-virus solution providers have taken a page out of the United States Centers for Disease Control (CDC) influenza playbook. The statistics the speaker shared explain why:

  • For every current piece of original malware, there are around 400,000 variants of that malware making the rounds on the Internet. Variants are easy to make which is why there end up being so many so quickly.
  • To scan a computer for every piece of malware developed since day one, including variants, would take around 40,000 hours (roughly four and a half years) to complete. And that assumes you dedicate a processor core to that scan as well as a core to scan everything coming at you.
  • The signature files required to track all malware and their variants from day one would take up a significant portion of your hard drive.

Just as the CDC makes a scientific wild-ass guess (SWAG) every spring about which influenza vaccine to produce, anti-virus vendors do the same thing with their signature files every day. Anti-virus vendors select the malware and variants your computer is most likely to encounter, and that is what your anti-virus signature file contains. The idea is that their heuristic engines and firewalls will hopefully detect the malware not included in the signature file.
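A toy illustration of that trade-off (the hash, heuristic and threshold below are all hypothetical, not any vendor's engine): the daily signature file only covers the variants judged most likely to be encountered, and anything it misses falls to a crude heuristic.

```python
# A toy illustration only -- hypothetical hashes and heuristic, not any
# vendor's engine.  The daily signature file covers a subset of known
# malware; everything else falls to a crude (and easily evaded) heuristic.
import hashlib

DAILY_SIGNATURES = {
    # SHA-256 of the byte string b"test", standing in for one known variant
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan(payload: bytes) -> str:
    digest = hashlib.sha256(payload).hexdigest()
    if digest in DAILY_SIGNATURES:
        return "blocked by signature"
    # Heuristic fallback: crude pattern checks that sophisticated variants
    # sidestep, which is one reason detection rates sit where the tests say.
    if b"CreateRemoteThread" in payload or payload.count(b"\x90") > 50:
        return "flagged by heuristic"
    return "passed - possibly one of the uncovered variants"

print(scan(b"test"))                                         # caught by signature
print(scan(b"a brand new variant the vendor did not pick"))  # sails right through
```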

Getting back to the PCI DSS, requirement 5.1.1 states that anti-virus solutions:

“Detect all known types of malicious software, remove all known types of malicious software, and protect against all known types of malicious software.”

Guess what?

Given the aforementioned revelations that signature files are incomplete, there is no anti-virus solution available today that meets those requirements of detecting and protecting against “all known types of malicious software”. All of us have, unknowingly or not, been “checking the box” on this requirement.

I, along with a number of other security professionals, have stated for years that anti-virus alone has never been adequate for protecting systems as portrayed in the PCI DSS, by the PCI SSC and by the card brands. If you truly want to protect systems from “all” malware as specified in the requirement, you need to use anti-virus in conjunction with a whitelisting/blacklisting and/or file change detection solution. Anti-virus alone is just not enough, as the repeated tests of these solutions have pointed out over the years.
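For those wondering what the file change detection piece boils down to, here is a minimal hash-and-compare sketch (the monitored paths and baseline file name are hypothetical; commercial file integrity monitoring products add tamper protection, scheduling and alerting on top of the same basic idea):

```python
# A minimal hash-and-compare sketch of file change detection.  The monitored
# paths and baseline file name are hypothetical; commercial FIM products add
# tamper protection, scheduling and alerting on top of the same basic idea.
import hashlib
import json
import pathlib

MONITORED = ["/etc/passwd", "/usr/local/app/bin"]   # hypothetical paths to watch
BASELINE = pathlib.Path("fim_baseline.json")

def snapshot() -> dict:
    """Hash every monitored file so changes, additions and deletions show up."""
    hashes = {}
    for root in map(pathlib.Path, MONITORED):
        files = root.rglob("*") if root.is_dir() else [root]
        for path in files:
            if path.is_file():
                hashes[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes

current = snapshot()
if BASELINE.exists():
    baseline = json.loads(BASELINE.read_text())
    for path in sorted(set(baseline) | set(current)):
        if baseline.get(path) != current.get(path):
            print("Changed/added/removed:", path)   # candidate for AV review or quarantine
else:
    BASELINE.write_text(json.dumps(current, indent=2))
    print("Baseline created - run again to detect changes")
```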

The reason you still need to keep anti-virus is that these solutions do what the others do not – quarantine or remove the malware. Quarantining or removing malware is truly an art form and has become even more so as operating systems have grown more sophisticated in how they manage applications. The reason for this is that, while it is easy to install software, it has become very tricky to uninstall it, if you can even uninstall it at all.

Anti-virus vendors spend the bulk of their research and development time and money on determining the best ways to quarantine and/or remove malware. While a lot of whitelisting/blacklisting vendors have promised to add the ability to quarantine and remove malware, most have come to the realization that providing such features is beyond their current capabilities and not as simple as they have portrayed it in their sales meetings. As a result, I would expect it will take these whitelisting/blacklisting vendors years to have this capability, if they even bother to develop it.

So what should the PCI SSC do?

The Council needs to require additional malware detection measures in requirement 5 so that organizations are truly protecting their systems against malware. In the immortal words of Bruce Schneier, what we have now is “security theater” – the appearance of security without security. Anti-virus alone is not cutting it, so it is time to enhance that capability by requiring more than just anti-virus.

The Council should also work with, and demand that, the anti-virus, whitelisting/blacklisting and file monitoring vendors provide some sort of integration between their respective products. That way, when the whitelisting/blacklisting or file monitoring solution detects an issue, the anti-virus solution can quarantine or remove the suspected malware, which is what it is typically very good at.

Is this going to detect every piece of malware?

Sorry, but some will still get through (remember, security is not perfect). But the amount that gets through should be significantly less than with just anti-virus alone.

How much gets through will be up to how the tools are configured. As a lot of you have found out, just installing file monitoring software does not detect all file changes. That is because the installation does not get tweaked to protect everything it should. That takes time and effort that a lot of people do not provide because they have other things to get done. The better you implement the other tools, the fewer pieces of malware that will get through.

Reach out to the Council and let them know that you also think that requirement 5 needs improvement.

16 Apr 15

ASV Guidance For SSL/TLS Vulnerabilities

Hidden behind all of the news about v3.1 of the PCI DSS being published is a notice that was sent to all PCI approved scanning vendors (ASVs) from the PCI SSC regarding how to handle SSL and “early TLS” vulnerabilities.

In regard to the “early TLS” comment, the Council did define the term by referring everyone to NIST SP800-52 rev1. That NIST document essentially tells the reader that, while TLS 1.1 is allowed, TLS 1.2 should be the only version used whenever possible. In fact, NIST is strongly recommending that all government entities move to TLS 1.2 by January 1, 2016.

FYI, TLS 1.3 is in a draft specification at the IETF as we speak. I would expect that we will see TLS 1.3 released by the time the PCI SSC's June 30, 2016 deadline arrives.

With that covered, what is an ASV to do with a scanning customer’s SSL and TLS 1.0/1.1 issues?

According to the letter sent to the ASVs:

“Prior to 30 June 2016: Entities that have not completed their migration should provide the ASV with documented confirmation that they have implemented a risk mitigation and migration plan and are working to complete their migration by the required date. Receipt of this confirmation should be documented by the ASV as an exception under “Exceptions, False Positives, or Compensating Controls” in the ASV Scan Report Executive Summary and the ASV may issue a result of “Pass” for that scan component or host, if the host meets all applicable scan requirements.”

The key here is that you must be mitigating the vulnerability and working to migrate to TLS 1.2.

So what would a mitigation plan look like? Most likely you would monitor for SSL or TLS 1.0/1.1 connections to the devices that only support those protocols.
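One way to feed that monitoring is to periodically probe your own inventory to see which hosts still complete an SSL or early TLS handshake. A minimal sketch (the host list is hypothetical, and newer OpenSSL builds may refuse to offer TLS 1.0/1.1 at all, in which case log data from the devices themselves is the better source):

```python
# A minimal probing sketch: check which of your own hosts still complete an
# SSL/early TLS handshake.  The host list is hypothetical, and newer OpenSSL
# builds may refuse to offer TLS 1.0/1.1 at all.
import socket
import ssl

HOSTS = [("payments.example.internal", 443)]        # hypothetical inventory

def accepts(host: str, port: int, version: ssl.TLSVersion):
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False                   # probing, not validating
    context.verify_mode = ssl.CERT_NONE
    context.minimum_version = version                # offer ONLY the legacy version
    context.maximum_version = version
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()                 # handshake succeeded
    except (ssl.SSLError, OSError):
        return None                                  # legacy version refused

for host, port in HOSTS:
    for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1):
        if accepts(host, port, version):
            print(f"{host}:{port} still accepts {version.name} - keep mitigating")
```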

For those of you that are not going to be able to migrate to TLS 1.2, the Council gives ASVs guidance there as well.

“After 30 June 2016: Entities that have not completely migrated away from SSL/early TLS will need to follow the Addressing Vulnerabilities with Compensating Controls process to verify the affected system is not susceptible to the particular vulnerabilities. For example, where SSL/early TLS is present but is not being used as a security control (e.g. is not being used to protect confidentiality of the communication).”

The reason the Council had to provide a path past June 30, 2016 is that, as I understand it, a lot of comments were received about “baked in” SSL that would require wholesale replacement of devices to correct the problem. A lot of those devices are IP-based point of interaction (POI) devices. ASVs have been instructed on the process to use to reduce the CVSS score so that the vulnerability is no longer considered “high”.

If you have any further questions regarding this announcement, I would discuss them with your ASV. As with all things PCI, every ASV will have variations, based on their own risk aversion, as to what this pronouncement means.

14 Apr 15

SSL Update

“The report of my death was an exaggeration.” – Mark Twain

Today the PCI SSC announced that SSL and “early TLS” (whatever that means) will not truly die until July 1, 2016. This will allow a transition period for all of you stuck with vendor baked-in SSL, as well as the procrastinators amongst us, to get converted to TLS 1.2. I say TLS 1.2 because, if you are going to convert, you really should go to the most current version and not just whatever will get you by.

The complete summary of PCI DSS v3.1 changes can be found here.

UPDATE: Late on Wednesday, April 15, the PCI SSC released v3.1 of the PCI DSS. The Council uses NIST SP800-52 rev1 as the definition of “early TLS”. You can get a copy of the new version of the PCI DSS here.





