Posts Tagged ‘detective controls’

21 Dec 14

Forensic Examinations And Facts

I am watching the news reports on the Sony breach and laughing at all of the “facts” that are being bandied about.  I want to use the Sony breach as a teachable moment and explain that the “facts” may not be as factual as represented by the media, forensic examiners or even the FBI.  I have done a number of forensic investigations and from my own experience there is a lot of effort required to prove conclusively that a particular device or actor is the actual attacker.

So let us take a look at the “evidence” we have at this point and see if the conclusions drawn should be treated as facts.

My first issue is how quickly the FBI and Mandiant have come out with the “fact” that North Korea is behind the attack.  According to the timelines I have seen, it was on November 21, 2014 that Sony was told by the attackers, GOP (Guardians of Peace), that it had been hacked.  So in around three weeks the FBI and Mandiant have figured out, definitively, that it was North Korea behind the attack.  Granted, Mandiant and the Bureau could have been investigating this long before, but given the way the news reports were written, I have to believe that Sony had no idea anything was wrong until November 21.

Why do I find this timeline spurious?  It took Mandiant over three years to trace things back to the Chinese for their APT1 report last year, and yet we are to believe that the FBI has the skill and manpower to trace a “sophisticated attack” (Kevin Mandia’s words to Sony) back to North Korea in a few weeks?  I find that hard to believe.  Not because the Bureau and Mandiant are not skilled, but because it is just impossible to cram a year’s worth of investigation into a few weeks, regardless of the manpower tossed at the investigation.

In my own experience, I typically had ideas as to what happened and how within a few weeks, but that is when the difficult work of determining exactly how things went down begins.  It can take months or even years to figure out an attack, if it is ever figured out at all.  It is why NTSB investigations of airplane crashes take at least a year before a report is issued.  An attack may not be as simple as you initially think.

“Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed. For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.”

We do know for a fact that hackers reuse other attackers’ code.  Why reinvent the wheel if you do not need to?  Hence all of the variants of attack code, created not only to evade anti-virus but also to enhance or improve techniques and methods.  Just because there are similarities in some lines of code, algorithms, methods and the like does not mean that the North Koreans were the actual actors.  It just means that the attackers used code attributed to North Korea.  Key word, “attributed”.  To me, a far better piece of evidence would have been if the code had been written in Korean or a North Korean dialect.

“The FBI also observed significant overlap between the infrastructure used in this attack and other malicious cyber activity the U.S. government has previously linked directly to North Korea. For example, the FBI discovered that several Internet protocol (IP) addresses associated with known North Korean infrastructure communicated with IP addresses that were hardcoded into the data deletion malware used in this attack.”

Hard coded IP addresses are evidence?  So does that mean that everyone is guilty if I write their telephone number on a napkin and that napkin turns up as evidence?  No.  A better piece of evidence would have been log data that actually ties those IP addresses to the data that was exfiltrated out of Sony.  Just because IP addresses are hardcoded in an application does not necessarily mean that the IP endpoint was in fact the actual endpoint.  Hackers regularly own other organizations’ and governments’ servers to obfuscate their actual location.  A hardcoded IP address just means that a device could be involved.

“Separately, the tools used in the SPE attack have similarities to a cyber attack in March of last year against South Korean banks and media outlets, which was carried out by North Korea.”

The attack on certain South Korean banks and TV stations in 2013 was never definitively pinned on North Korea; it was only suspected.  The prime piece of evidence was a Chinese IP address that was assumed to implicate North Korea.  So citing the South Korean attack as though it was definitively proved to have been done by North Korea is not a fact.

While I had some issues with the Mandiant report on China and their investigation methods, the information being offered as “facts” that North Korea is behind the Sony breach is positively appalling.  People want an answer immediately and so one is given, regardless of accuracy or even believability.  However, this is a technology issue, so it is easy to feed the public supposed “facts” since only the true technology people in the world will know the difference.

Unfortunately, investigating a breach such as the one at Sony will take time, probably a lot of time.  I would not be surprised if we end up with a lot of “suspicions” and “assumptions” when a final analysis is done and released, if we ever get a definitive answer.  The reason I believe that is that, given the amount of information supposedly gathered by the attackers, I do not think Sony had the kind of security implemented and working that it needed.  The other clue is that Sony was notified of the breach on November 21 by the attackers themselves.

The key takeaway here is that forensic examinations very rarely prove WHO the bad actor was that caused the breach.  This is particularly true when the attacker is outside the organization.  There are just too many ways for an attacker to obfuscate their actual identity and location.

What forensic examinations do provide is a road map of improvements and enhancements in an organization’s security measures and procedures to minimize future attacks.  Note that I did not say “prevent” future attacks.  I use minimize because security is never an absolute.  Anyone with an extreme desire to attack an organization will do so regardless of how well your security program is constructed and executed.

Bruce Schneier points out this very fact about determined attackers in his post on the Sony breach.  I have always referred to this as the ‘98-2 Rule’.  Properly implemented and managed information security keeps 98% of attackers out.  However, it is the remaining 2% that are determined enough to figure out how to work around even the best security.  All any organization can do about that remaining 2% is put controls in place so that when they do get through, they are detected as soon as possible and their impact is minimized.  This is why security frameworks are so important: they provide organizations with guidance on what it takes to be left with only that 2% to worry about.

Given the limited evidence provided thus far, could it be that this is all a sophisticated marketing ruse that went sideways?  Would it not be apropos if Seth Rogen and his production company staged the attack as a promotional stunt, and the attackers they hired found out that Sony was ripe for such an attack and then went further than they were supposed to?

Something to think about.

08 Aug 14

Requirement 10.6.2 Clarification

As a refresher, requirement 10.6.2 states:

“Review logs of all other system components periodically based on the organization’s policies and risk management strategy, as determined by the organization’s annual risk assessment.”

The argument in PCI circles is over the definition of “all other system components”.  Some of us believed that it meant systems other than those in scope.  Others believed that it had to refer only to in-scope systems, such as a user workstation.  As a result, I asked the PCI SSC to clarify this requirement and this is the response I got back.

“In PCI DSS v2.0, logs for all in-scope systems were required to be reviewed daily. However it was recognized that for larger or more complex environments, there could be lower risk systems that were in scope for PCI DSS that could warrant less frequent log reviews. As such, PCI DSS v3.0 defines a number of events and system types that require daily log reviews, and allows the organization to determine the log review frequency for all other in-scope events and systems that do not fall into those categories.

For some environments, such as those designed specifically for the purposes of PCI DSS, then it is possible that all in-scope systems fall under the system categories defined in Requirement 10.6.1, meaning that daily log reviews are required for all in-scope systems. In other environments, there may be many different types of system that are considered in-scope, but which are not critical systems and neither store, process or transmit CHD nor provide security services to the CDE. Some possible examples could be stock-control or inventory-control systems, print servers (assuming there is no printing of CHD) or certain types of workstations. For these events or systems, the entity, as part of its annual risk assessment process, is expected to define the frequency for reviews based on the risk to its specific environment.

The intent of this update is not to apply PCI DSS Requirements to out-of-scope systems. We realize that the current wording is causing confusion and will address this in the next revision.”

So there we have it.  Not the first time my interpretation was wrong.  The requirement means that in-scope systems which, based on a risk assessment, are at lower risk of compromise can have their log review frequency reduced.

But that means you need an accurate risk assessment to support your argument.  So those of you who have not explicitly assessed the risk of your category 2 systems will have to break them out in order to support a reduced log review frequency.
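
As a rough illustration (not anything prescribed by the PCI SSC), here is a minimal sketch of how a risk-based review schedule might be documented and enforced; the system names, categories, risk ratings and frequencies are hypothetical assumptions:

# Hypothetical mapping of in-scope systems to log review frequency.
# Requirement 10.6.1 systems always get daily reviews; everything else
# falls under 10.6.2 with a frequency driven by the annual risk assessment.
REVIEW_FREQUENCY_BY_RISK = {"high": "daily", "medium": "weekly", "low": "monthly"}

# Example inventory entries: (system, requirement category, assessed risk)
systems = [
    ("pos-server-01", "10.6.1", "high"),     # stores/processes CHD
    ("print-server-02", "10.6.2", "low"),    # in scope, no CHD, no security services
    ("inventory-db-01", "10.6.2", "medium"),
]

def review_frequency(category, risk):
    """Return the documented log review frequency for a system."""
    if category == "10.6.1":
        return "daily"  # mandated daily review
    return REVIEW_FREQUENCY_BY_RISK[risk]

for name, category, risk in systems:
    print(f"{name}: review logs {review_frequency(category, risk)}")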

18 May 14

Adventures In Finding Cardholder Data

On page 10 of the PCI DSS v3 under the heading of ‘Scope of PCI DSS Requirements’, second paragraph, is the following sentence.

 “At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope.”

Under the first bullet after that paragraph is the following.

 “The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined CDE.”

In the past, organizations would rely on their database and file schemas along with their data flow diagrams and the project was done.  However, the Council has come back and clarified what it expects from the search for cardholder data (CHD), primarily the primary account number (PAN).  The Council has stated that this search needs to be more extensive to prove that PANs have not ended up on systems where they are not expected.

Data Loss Prevention

To deal with requirement 4.2, a lot of organizations invested in data loss prevention (DLP) solutions.  As a result, organizations with DLP have turned those DLP solutions loose on their servers to find PANs and to confirm that PANs do not exist outside of their cardholder data environment (CDE).

Organizations that do this quickly find out three things: (1) the scope of their search is too small, (2) their DLP solution is not capable of looking into databases, and (3) their DLP tools are not as good at finding PANs at rest as they are at finding PANs in motion, such as in an email message.

On the scope side of the equation, it is not just servers that are in scope for this PAN search; it is every system on the network, including infrastructure.  However, for most infrastructure systems such as firewalls, routers and switches, it is a simple task to rule them out for storing PANs.  Where things can go awry is with load balancers, proxies and Web application firewalls (WAF), which can end up with PANs inadvertently stored in memory and/or on disk due to how they operate.

Then there is the scanning of every server and PC on the network.  For large organizations, the thought of scanning every server and PC for PANs can seem daunting.  However, the Council does not specify that the identification of CHD needs to be done all at once, so such scanning can be spread out.  The only time constraint is that this scanning must be completed before the organization’s PCI assessment starts.

The second issue that organizations encounter with DLP is that their DLP has no ability to look into their databases.  Most DLP solutions are fine when it comes to flat files such as text, Word, PDF and Excel files, but the majority of DLP solutions have no ability to look into databases and their associated tables.

Some DLP solutions have add-on modules for database scanning, but that typically requires a license for each database instance to be scanned and thus can quickly become cost prohibitive for some organizations.  DLP solutions that scan databases typically handle the more common databases such as Oracle, SQL Server and MySQL.  But legacy enterprise databases such as DB2, Informix, Sybase and even Oracle in a mainframe environment are only supported by a limited number of DLP solutions.

Another area where DLP solutions can have issues is with images.  Most DLP solutions have no optical character recognition (OCR) capability to seek out PANs in images, such as scanned documents or faxes.  For those DLP solutions that can perform OCR, the OCR process slows the scanning down considerably, and the false positive rate can be huge, particularly with facsimile documents or images of poor quality.

Finally, there is the overall issue of identifying PANs at rest.  It has been my experience that using DLP solutions to identify PANs at rest is haphazard at best.  I believe the reason is that most DLP solutions rely on regular expression (RegEx) pattern matching to find the PANs.  As a result, they all suffer from the same shortcomings of RegEx, and their false positive rates end up being very similar.

The biggest reason for the false positive rate is the fact that most of these solutions using RegEx do not conduct a Luhn check to confirm that the number found is likely to be a PAN.  That said, I have added a Luhn check to some of the open source solutions and it has amazed me how many 15 and 16 digit combinations can pass the Luhn check and yet not be a PAN based on further investigation.  As a result, having a Luhn check to confirm a number as a potential PAN reduces false positives, but not as significantly as one might expect.
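
For reference, here is a minimal sketch of such a Luhn check in Python; candidate 15 and 16 digit strings that fail it can be discarded before anyone spends time investigating them (illustrative code, not any particular vendor’s implementation):

def luhn_check(candidate: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in candidate if c.isdigit()]
    # Restrict to the 15 and 16 digit lengths discussed above.
    if len(digits) not in (15, 16):
        return False
    checksum = 0
    # Double every second digit from the right, subtracting 9 when the
    # doubled value exceeds 9, then sum everything.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_check("4111111111111111"))  # True - a well-known test number
print(luhn_check("4111111111111112"))  # False - fails the checksum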

The next biggest reason RegEx produces a high false positive rate is that the scanning looks at data at both a binary level and a character level.  As a result, I have seen PDFs flagged as containing PANs when they did not.  I have also seen images that supposedly contained PANs when I knew that the tool being used had no OCR capability.

I have tried numerous approaches to reduce the level of false positive results, but have not seen significant reductions from varying the expressions.  That said, I have found that the best results are obtained using separate expressions for each card brand’s account range rather than a single, all-encompassing expression.
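
To illustrate the per-brand approach, here is a minimal sketch using simplified, assumed account-range patterns; real brand ranges are broader and change over time, so treat these expressions as examples rather than production rules:

import re

# Simplified, assumed patterns per card brand; each hit would still be
# Luhn-checked and manually verified as described above.
BRAND_PATTERNS = {
    "Visa": re.compile(r"\b4\d{15}\b"),                  # 16 digits starting with 4
    "MasterCard": re.compile(r"\b5[1-5]\d{14}\b"),       # 51-55 prefix, 16 digits
    "American Express": re.compile(r"\b3[47]\d{13}\b"),  # 34/37 prefix, 15 digits
}

def find_candidate_pans(text):
    """Yield (brand, candidate PAN) pairs found in a block of text."""
    for brand, pattern in BRAND_PATTERNS.items():
        for match in pattern.finditer(text):
            yield brand, match.group()

sample = "order notes: 4111111111111111 ref 5500005555555559"
for brand, hit in find_candidate_pans(sample):
    print(brand, hit)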

Simple Solutions

I wrote a post a while back regarding this scoping issue when it was introduced in v2.  It documents all of the open source solutions available such as ccsrch, Find SSNs, SENF and Spider.  All of these solutions run best when run locally on the system in question.  For small environments, this is not an issue.  However, for large organizations, having to have each user run the solution and report the results is not an option.

In addition, the false positive rates from these solutions can also be high.  Then there is the issue of finding PANs in local databases such as SQLite, Access or MySQL.  None of these simple solutions is equipped to find PANs in a database.  As a result, PANs could be on these systems and you will not know it using these tools.

The bottom line is that while these techniques are better than doing nothing, they are not that much better.  PANs could be on systems and may not be identified depending on the tool or tools used.  And that is the reason for this post, so that everyone understands the limitations of these tools and the fact that they are not going to give definitive results.

Specialized Tools

There are a number of vendors that have developed tools specifically to find PANs.  While these tools are typically cheaper than a full DLP solution and some of them provide for the scanning of databases, it has been my experience that these tools are no better or worse than OpenDLP, the open source DLP solution.

Then there are the very specialized tools that were developed to convert data from flat files and older databases to new databases or other formats.  Many of these vendors have added modules to these tools in the form of proprietary methods to identify all sorts of sensitive data such as PANs.  While this proprietary approach significantly reduces false positives, it unfortunately makes these tools very expensive, starting at $500K and going ever higher based on the size of the environment they will run in.  As a result, organizations looking at these tools will need more than just their need for a PAN search capability to justify the cost.

The bottom line is that searching for PANs is not as easy as the solution vendors portray.  And even with extensive tuning of such solutions, the false positive rate is likely going to make the investigation into your search results very time consuming.  If you want to significantly reduce your false positive rate, then you should expect to spend a significant amount of money to achieve that goal.

Happy hunting.

13 Apr 14

An Open Letter To Executives

I apologize for not posting anything recently, but I have been busy dealing with my taxes, QSA re-certification and clients.  Over the years that has involved dealing with people that I would like to think know better.  But based on my interactions with them, it is painfully obvious that they do not.  As a result, I have decided to write this letter to all of you in hopes that you get a clue as to how your short-sightedness is ultimately going to sell your organization “down the river”.  I should have published this letter a long time ago as this is not a new issue.

 Dear Executive:

As I sat in the meeting, I watched your body language as I delivered our report on how well your organization is secured.  Based on my observations, it is painfully obvious that you do not have a clue as to the importance of security and that you really do not care.  Since I want my bill paid, I was polite and did not take you to task as you should be taken.

So, let me put this into blunt language that you might better understand.

First and foremost, as an executive of the organization, you have a fiduciary responsibility to protect the assets of the organization.  Based on our findings, you are not protecting those assets; you are not even close.  I realize that all of this technology baffles you, but it is in that technology where your organization’s lifeblood of intellectual property resides, in orders, formulas, blueprints, specifications, customer lists and other key or sensitive information.  Without that intellectual property, your organization does not exist.  Yet as we went through all of our findings, you argued time and again about what it will take in time, money and/or manpower to appropriately secure your organization.  While I appreciate your concerns, this is what it takes to secure an organization that relies heavily on technology.

Second, security is not perfect.  I am not exactly sure where you got the impression that security is perfect, but that is wrong and you need to adjust your thinking.  Security is all about managing and minimizing risks.  As an executive, that is one of your primary job functions.  Yet your three/five/seven/ten year old risk assessment seems to point to the fact that risks, and managing those risks, are not a priority.  As if that was not enough, we pointed out a number of areas where risk exists but there is no evidence that those risks are being managed.  The recommendations we provided offered a number of viable solutions; however, they will all require changes to the organization, which seemed to be your biggest reason why our recommendations could not be implemented.

Third, doing the bare minimum is not going to secure your organization.  While we were talking about the PCI DSS, any security framework is merely the ante into the security game.  If you truly want to be secure, it will take significant time and a certain amount of money to make that happen.  Buying security appliances and other “widgets” can only do so much.  One of the biggest findings in our report is that your existing tools are not being used properly and that warnings and alerts are being written off as “false positives” without any investigation.  With the level of sophistication of attacks rising exponentially, based on our assessment, those tools are doing very little to protect your organization.  Another area of great concern is that your employees are, for the most part, unable to recognize current scams and threats.  As you correctly pointed out, security awareness training is not going to stop every attack, but what you missed is that such training should significantly reduce the effectiveness of those attacks.

Fourth, you need to read the definition of “compliance”.  As defined in Merriam-Webster’s dictionary, compliance means “conformity in fulfilling official requirements”.  As our findings pointed out, you are not in compliance with a number of key “official requirements” defined by the PCI DSS.  Without adequate “official requirements” such as policies, standards and procedures, how do your employees know their responsibilities and what you are holding them accountable for?  Based on our discussion of the findings, you apparently are of the opinion that your employees should just intuitively know their responsibilities and accountabilities.  “Intuitively obvious” may apply to the operation of an Apple iPod, as stated by Steve Jobs at its introduction, but that phrase does not apply to the running of an organization.

Finally, a compliance program is not all about checking a box.  I know most auditors and assessors seem to operate that way and most executives want it to work that way, but a proper compliance program should never, ever work that way.  Compliance means looking at all of the organization’s protective, detective and corrective controls (the control triad) and determining whether they are: (1) functioning properly, (2) designed properly, (3) minimizing the risks and (4) in need of any new controls or changes/enhancements to existing controls to make them function more accurately or efficiently.  While you agreed with our findings regarding the control issues we identified, your argumentative behavior about them seems to indicate otherwise.

I wish you and your organization the best of luck because it seems that your idea of risk management is to rely on luck.  I would like to tell you that you will succeed with that approach, however the statistics say otherwise.

Sincerely,

Your Frustrated Assessor

28 May 13

BlackPOS

I got a Tweet from a friend today regarding this new piece of malware found out in the wild and dubbed ‘BlackPOS’.  BlackPOS is very similar in nature to vSkimmer.  Now before everyone goes off and panics, if you are religiously following the PCI DSS, BlackPOS should not be an issue and here is why.

  • Requirement 11.5 – Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.  BlackPOS does a lot of manipulation of known file names, but the hash values of those files should change from the known good values, so any file integrity monitoring system should alert on that fact.  It also uses file names that would never exist on a production system, so those should also generate an alert.  In addition, BlackPOS creates a TXT file that should likewise generate an alert when it is created.  However, if you are not alerting in real time, you should be, so that you pick up these issues as soon as possible.  This is where the bad guys are headed with their attacks, so you may as well alert as soon as an incident occurs so that you can address it before it gets out of control.  (A minimal sketch of this kind of baseline comparison follows this list.)
  • Requirement 1.1.5 – Documentation and business justification for use of all services, protocols, and ports allowed, including documentation of security features implemented for those protocols considered to be insecure.  BlackPOS uses FTP to move the TXT file from the POS system to their server.  If you are allowing FTP to flow freely from your POS or cardholder data environment (CDE) to anywhere on the Internet, you were not PCI compliant in my opinion, even if you had some bizarre business justification.
  • Requirement 5.1 – Deploy anti-virus software on all systems commonly affected by malicious software (particularly personal computers and servers).  While BlackPOS was only identified today, the anti-virus vendors will most likely have signatures out by the time you read this, so they will be looking for BlackPOS by the time you get your updated signatures.
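
To illustrate the file-integrity idea in the first bullet, here is a minimal sketch of a baseline-and-compare check; the monitored directory and baseline file location are assumptions, and a real FIM product does far more, including real-time alerting:

import hashlib
import json
import os
import sys

BASELINE_FILE = "fim_baseline.json"  # assumed location for the known-good hashes
MONITORED_DIRS = ["/opt/pos/bin"]    # assumed critical directories on a POS system

def sha256_of(path):
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def current_hashes():
    """Walk the monitored directories and hash every file found."""
    hashes = {}
    for directory in MONITORED_DIRS:
        for root, _, files in os.walk(directory):
            for name in files:
                path = os.path.join(root, name)
                hashes[path] = sha256_of(path)
    return hashes

if __name__ == "__main__":
    if not os.path.exists(BASELINE_FILE):
        # First run: record the known-good state.
        with open(BASELINE_FILE, "w") as f:
            json.dump(current_hashes(), f, indent=2)
        sys.exit(0)
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    now = current_hashes()
    # New files (such as a dropped TXT file) and changed hashes should both alert.
    for path in sorted(set(now) - set(baseline)):
        print(f"ALERT: new file {path}")
    for path in sorted(set(baseline) & set(now)):
        if baseline[path] != now[path]:
            print(f"ALERT: modified file {path}")
    for path in sorted(set(baseline) - set(now)):
        print(f"ALERT: deleted file {path}")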

Just these three requirements can stop this sort of an attack.  Yet, time and again, we see these attacks succeed because people are not properly implementing their file integrity monitoring and are not restricting network traffic flowing out of their internal networks.

PCI compliance does work when you use it the way it was intended.

25 Feb 13

Have You Been Paying Attention?

Mandiant recently released an interesting report on APT1, the supposed Chinese Army hackers that are behind most of the serious digital attacks being conducted these days.  The reason I bring this up is that, even before the Mandiant report, I had been getting questions regarding how the PCI DSS, ISO 27K, HIPAA and other standards could possibly block or even alert an organization that they are under such an attack or have been compromised by advanced persistent threat (APT).  I will use the APT acronym in this post to indicate any sophisticated attacker, not just those that follow the methods used by APT to compromise a network.

It fascinates me how many organizations jump through hoops to tightly control and monitor their inbound network traffic from the Internet, business partners and any other untrusted network.  Yet the same organizations do nothing about the outbound side of the equation.  It is the age-old belief that all internal traffic can be trusted because it comes from inside the network.

It is this trust factor that APT relies upon in their ability to compromise your network.  Because you have tightly controlled your inbound traffic, APT relies on social engineering to gain access to the internal network.  Once inside, since most organizations do nothing to control or monitor their outbound traffic, it is a relatively simple task to connect from the inside of a network to any external IP address and set up a communications link.

Even when organizations are controlling and managing their outbound traffic, it is still easy to connect using ports 80 (HTTP) or 443 (HTTPS).  In this day and age, it is virtually impossible to be in business without allowing ports 80 and 443 outbound.  Attackers know this and use this fact against a target in order to gain a way out.

So what can you do to minimize this risk?

  • Tighten your inbound traffic from the Internet and business partners.  I consistently find that organizations allow traffic from countries where they have no business relationships.  As a result, these IP addresses can be used to launch denial of service and hacking attacks.  If you do not limit where inbound traffic can come from, you are only asking for a denial of service attack or worse.
  • Control what external IP addresses and URLs your internal network can connect to.  While a lot of organizations today use overseas business partners, is that any reason to let your internal network connect to any IP address?  No.  Unrestricted outbound access is typically set up by a network engineer who does not want to be bothered by a lot of service requests to allow IP addresses or URLs.  However, today’s sophisticated network attacks are only stopped if you control which IP addresses or URLs your users can connect to.  Yes, this potentially creates a pain in the rear when someone needs to access something you have blocked.  But would you rather stop a breach or let it go unchecked until you accidentally trip over it?
  • Control your outbound ports with the same exuberance that you use on your inbound traffic.  If all you need from your internal network outbound are ports 80 and 443, then those should be the only ports allowed out on your firewall.
  • Monitor your outbound network traffic for “anomalies.”  The biggest anomaly you should be looking for is encrypted traffic over any port other than 443 (HTTPS).  The reason is that any application using anything other than 443 for encrypted communications is likely an attacker trying to obfuscate the fact that they have compromised your network.  After all, your firewall and IDS/IPS cannot read the packets encapsulated in the encrypted stream, so the attacker effectively blinds them to what is actually going on.  Encrypted traffic that does use 443 hopefully will get blocked by your white or black list of IP addresses and URLs.
  • Analyze the log data generated by your firewalls and IDS/IPS, looking for potential APT attacks.  Are there consistent indications that devices are trying to connect to blocked IP addresses or URLs?  Do those incidents occur after hours or when the user is not in the office?  These are examples of conditions that should be flagged and investigated to ensure that something bad is not occurring.  (A minimal sketch of this kind of review follows this list.)
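
As an illustration of the last bullet, here is a minimal sketch that scans an exported firewall log for denied outbound connections occurring outside business hours; the CSV layout, field names and hours are assumptions, so adapt it to whatever your firewall actually produces:

import csv
from datetime import datetime

BUSINESS_HOURS = range(7, 19)  # assumed 07:00-18:59 local time

def after_hours_denies(log_path):
    """Yield denied outbound connections logged outside business hours.

    Assumes a CSV export with columns: timestamp, action, direction,
    src_ip, dst_ip, dst_port.
    """
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["action"] != "deny" or row["direction"] != "outbound":
                continue
            ts = datetime.fromisoformat(row["timestamp"])
            if ts.hour not in BUSINESS_HOURS:
                yield row

if __name__ == "__main__":
    for hit in after_hours_denies("firewall_export.csv"):
        print(f'{hit["timestamp"]} {hit["src_ip"]} -> '
              f'{hit["dst_ip"]}:{hit["dst_port"]} denied after hours')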

Remember, security is not perfect.  What these controls do is make it more difficult for APT to successfully compromise your network.  If APT puts their mind to it, there is likely still a way in, but it will take a significant amount of work.  In the end, I would like to assume that, since there are many easier targets out there, APT will go after those and leave your organization alone, at least for the time being.

07 Feb 13

Compliance, Compliance Testing and Security

I was recently on a Webinar presented by a major security vendor and one of their points was that executive management is finally starting to realize that compliance does not equal security.  If you read this blog regularly, you know I really do not like the phrase “compliance does not equal security” and I view it as a convenient dodge by those who use it as a way to weasel out of their responsibilities.

But during this Webinar I had an epiphany regarding this topic.  The problem is the confusion between three things: security itself; compliance testing and reporting; and the act of compliance by your technology, employees and business partners with your organization’s security policies, standards and procedures.

I know I am just asking for flame mail with this post, but I am so tired of people looking to blame everyone but themselves for their inadequacies surrounding information security.  As I have done before, to paraphrase Tom Hanks’s character in ‘A League of Their Own’, “There’s a reason security is hard.  If it wasn’t hard, everyone would do it.”

Security is not always easy, particularly when upper management does not have buy-in.  But even when upper management supports security efforts, I have seen security personnel fail to take advantage of that fact and get the job done.  Security does not have to be hard, but it does take more than just slamming down some firewalls and intrusion prevention gear, tossing a SIEM into the mix and thinking you are done.  Security is a never ending journey because someone is always coming up with new ways to attack you.

Anyway, to start off, let us take a look at some definitions first so we are all on the same page.

Compliance is defined as:

“Conformity in fulfilling official requirements.”

“Official requirements?”  Could that possibly mean your organization’s security policies, standards and procedures?  You bet.  In this instance, we are talking about those that correspond to the PCI DSS, but this also applies to ISO 27K, FISMA, HIPAA, GLBA or any of a multitude of frameworks and regulatory requirements.

Conformity is defined as:

“Compliance with standards, rules, or laws.”

Based on these definitions, security is all predicated on complying with what are deemed an adequate set of security policies, standards and procedures.  Conversely, if you are not complying with an adequate set of security policies, standards and procedures, then your organization cannot be as secure as it could be.  As a result, compliance has to equal security as long as the security policies, standards and procedures are considered adequate.  Therefore security professionals that quote the mantra, “compliance does not equal security” either have a problem with the compliance side of the equation (most likely) or with the standards/frameworks (the dodge).

Over the years there have been a lot of discussions about the PCI DSS, ISO 27K, FISMA and other security frameworks and whether or not they are adequate.  The important thing to remember is that all of these standards and frameworks are merely the ante into the information security game.  They are the bare minimum, a baseline to get to a basic level of security.  Should you be doing more?  Definitely, but what those efforts beyond the standard/framework are depends on what you are trying to secure, your network and application architectures and a multitude of other factors related to your computing environment and how it is used.  Those are factors that cannot be taken into account by any standard/framework, because it would then become impossible for others to follow and implement.  The bottom line here is that if you want someone to tell you exactly what to do to secure your networks and applications, go hire a consultant you trust and they will tell you everything you want to know.

The rub in all of this is that, based on the breach reports from Verizon Business Services, Trustwave, et al. as well as compliance testing reports I have reviewed, none of you out there are 100% compliant to begin with, or even close.  Every organization I am aware of has problems complying with the basics, let alone with any advanced security requirements in the published standards/frameworks.  So if you cannot comply with what you already have, explain to me how a different framework is going to change that fact unless it is less stringent than the framework you are already trying to use.  And if that other framework is less stringent, while that may solve the compliance issue (which I seriously doubt), exactly how is a less stringent framework going to make you secure?  The answer is that it will not.

What security professionals struggle with is that compliance is a never ending, 24x7x365 effort.  Drop your guard for an instant and it can be game over.  But provided your security policies, standards and procedures are appropriate and detailed (the reason why you want to use an appropriate standard/framework), your organization is not as secure as it can be unless your personnel and devices comply 100% of the time with every defined security policy, standard and procedure.  If you want confirmation of these facts, again, just look at the breach analysis reports year after year.  The reason there are breaches is because of non-compliance with one, but usually more, of an organization’s security policies, standards and/or procedures.

This brings me to the rumblings of late regarding a rethinking of defense in depth.  Defense in depth is predicated on using layers of security devices and controls to minimize the risk that a security incident occurs, not to completely prevent an incident, although you might get lucky.  For example, firewalls are the sledge hammer of security tools.  However, because we need to have ports open for outsiders to access applications, we follow our firewalls with intrusion detection/prevention devices to ensure that no one abuses the protocols used on those ports.  We follow that up with monitoring of log data from the firewalls, IDS/IPS, routers, switches and servers to identify any “sneaky” attacks using the protocols we allow.  The layers are there to cover the various holes we need to have in order to make our networks and applications function.  The tighter and smaller we can make those holes, the more secure we will be, but there will still be some amount of risk.  So we bring in more layers to cover those risks until it is more expensive to address the remaining risk than to accept it.  That remaining risk is the residual risk that we then manage and control through detection and correction.

The other thing defense in depth relies on is the control triad.  The idea is that, because you cannot entirely prevent every security incident, you need a way to detect the incident so that you can take action to stop or minimize its impact.  You follow that up with periodic assessments of your control environment to identify and correct any deficiencies or improve your program based on new information regarding security.  The follow-up assessments can be activities such as a root cause analysis (RCA) of an incident, an internal audit of user accounts and user rights or bringing in a network security team to assess your security architecture and controls.  All of these activities will result in findings and recommendations to make your security systems and controls better.

And that brings us full circle to the PCI assessment.  It is merely a tool used by the acquiring banks, card brands, processors and others to obtain reasonable assurance that your organization is doing what it can to minimize the possibility of a breach of cardholder data.  It is not meant to be, nor could it ever be, an absolute complete assessment of an organization’s security posture and therefore provide absolute assurance that a breach will not occur (even though the PCI SSC and card brands tend to imply that fact).  Compliance assessments are only a snapshot of personnel and device compliance at the time the reports were written.  This is no different than going to the doctor for your annual physical which results in a snapshot of your health at that point in time.  It is not that those compliance reports are worthless; they just need to be referenced and used properly based on the fact that they are a snapshot.  Just as your doctor will tell you to lose weight or stop smoking, compliance reports provide recommendations on where you can make improvements or adjustments in your policies, standards and procedures based on what compliance evidence was found, or not found, during the assessment.

So, what are the lessons to be learned?

  • Security is not and never will be perfect; there will always be residual risk that must be managed and controlled.
  • Compliance does equal security, at least insofar as your preferred standard or framework defines it, plus whatever enhancements you have made.
  • Compliance assessments and reports point out where your organization was not compliant and needs to do better; they do not prove your organization is secure.

Use the tools at your disposal correctly, stay current on threats and monitor your security posture and you will likely live a long, prosperous and secure life.

Keep hiding behind “compliance does not equal security” and you will forever be living off of your “luck” until it runs out (usually sooner rather than later).

23 Jan 13

Compliance Is Security – Final Answer

I am really tired of this dodge that compliance does not equal security.

I was recently on a Webinar by a major security vendor and one of their points was that executive management is finally starting to realize that compliance does not equal security.  And I realized what the problem is: people are confusing PCI ROCs, internal audit reports and the rest of the compliance assessments with an accurate assessment of an organization’s security.

Organizations develop and implement security policies, standards and procedures to protect the networks and systems used by the organization.  Those policies, standards and procedures were not created in a vacuum; they were developed based on what the organization has learned over time to protect itself, as well as items taken from ISO, PCI, HIPAA, FISMA and the like, which are themselves derived from the lessons learned by lots of people in the information security arena.  In order to be as secure as possible, everyone in the organization must follow (i.e., comply with) all of the organization’s policies, standards and procedures.  The distinction is that ISO 27K and FISMA are complete security frameworks, whereas PCI, HIPAA and the like are security frameworks focused on specific types of information.

Where the wheels keep coming off in this discussion is the confusion between compliance testing and reporting and the act of compliance by your employees and business partners.  Compliance is an ongoing, 24x7x365 effort.  Compliance assessments are a snapshot of compliance at the time the reports were written.  This is no different than going to the doctor for your annual physical which results in a snapshot of your health at that point in time.  It is not that those compliance reports are worthless; they just need to be referenced and used properly based on the fact that they are a snapshot.

A prime example of this is the PCI ROC.  Other than vulnerability scanning, penetration testing and change management testing, there is nothing in the PCI ROC that covers a period of time, and even those topics are limited as to the time frames used for testing.  The ultimate result is that you get a snapshot of compliance as of the date of the report.  Unless something was out of compliance at that point in time, you will never know about it.  This is not anyone’s fault; it is just how the system works, because no organization is willing to pay for their QSA/ISA to assess them 24x7x365.

What all of these frameworks assume and expect is that the organization has put into place monitoring that does test compliance with their security policies, standards and procedures 24x7x365.  For small organizations this can be done manually by monitoring event logs and syslogs on the small number of systems involved.  

For everyone else, automated toolsets such as security information and event management (SIEM), configuration management, intrusion detection/prevention, wireless monitoring, network performance and traffic monitoring, help desk and change management tools are going to be required to do the job properly.  This is because the volume of information that requires analysis and correlation cannot be handled without some form of automated tools.

But just having these tools does not ensure compliance either.  The tools need to be properly configured, maintained and monitored so that appropriate personnel are notified of out of compliance conditions and that those issues are addressed in a timely manner.  If that analysis and notification process were truly going on, most of the breaches that have occurred to this point would likely have not occurred or would have been stopped early on.

But the real lesson here is that the technology side of security is the easy part.  Firewalls, intrusion detection/prevention, routers, switches, servers, security tools, etc. can all be configured to ensure 100% 24x7x365 compliance with all relevant security policies, standards and procedures.  

The hard part of security is people, for a variety of reasons.  The most obvious reason the people side of security is hard is the fact that most information security people are loath to deal with people.  That is an important issue that needs to be addressed in another post.

The larger people problem is that, at the end of the day, people are not 100% 24x7x365 compliant because they are fallible.  And no matter how much training you provide people, you are not going to change the fact that they are fallible.

But that is what all of that technology is for: to let you know when people are fallible and to stop or minimize the risks generated by their fallibility.

06 Jan 13

Security And Compliance

I have written a lot about this topic over the years and was recently reviewing my Compliance Is Not Security – Busted! post and the comments that came in regarding it.

A theme of a number of the comments was that compliance does not equal security.  DUH!

I have never once said or even implied that compliance equaled security as – yes, here it comes – security is not perfect!  However, if you are complying with any security program/framework such as the PCI DSS, ISO 27K, etc., then you are likely more secure than those who are not.

Security technology such as firewalls, routers, servers, applications, etc. can all be set up with rules that are complied with 100% of the time, day in and day out, no exceptions.  The problem comes down to people who are fallible.  Their compliance is never 100% and you are probably lucky to have anyone above 90%, no matter how much security awareness training you do.  As a result, in organizations that are truly complying with the PCI standards, this is where the security breach starts, with people for one reason or another.

No, I am not necessarily talking about social engineering, although social engineering is growing because of the fact that organizations have invested a lot in security technologies yet people are fallible.  People can be the root cause because of any or all of the following.

  • How dare you do that to me!  This is the most obvious of the people issues that comes to mind.  Face it, when backed into a corner, people lash out just like a trapped animal.  The supposedly wronged party wants their proverbial “pound of flesh.”  They get that pound of flesh by hurting the organization that has just hurt them.  This can range from taking office supplies to downloading databases to a USB drive as they empty their desk.  Obviously, a database, network or system administrator’s access is much different from a clerk’s.  However, if your security is minimal on the inside, as it is in most organizations, the clerk may actually have better access than the administrators when it comes to sensitive information.  Such a situation may not be the fault of the administrators; that old version of POS or ERP may not have the ability to be more granular regarding access to information.
  • Over inundated with alerts and cannot identify real alerts from false positives.  This typically occurs when an automated tool is implemented but never tuned to the organization’s environment.  In this sort of an environment, finding real alerts can be like finding a needle in a haystack when there are thousands of alerts an hour scrolling by on the screen.  This usually makes management wonder why the tool was needed in the first place.
  • Saw an alert and ignored it.  We see this most often coupled with the aforementioned inundation issue.  The other most common version of this issue is with internally used SSL certificates that were generated incorrectly or use a default certificate supplied by the application.  Users then see the “There is a problem with this Website’s security certificate” or similar error message in their browser whenever these flawed certificates are encountered and become conditioned to ignore the error message.  Over time, they become conditioned to ignore all of these sorts of messages, including those for malware infected Web sites and, surprise, you have been compromised.  I have lost count how many people have said to me, “We just ignore those alerts because we know they are false positives.”
  • Saw the alert but got side tracked and never came back to it.  This is a problem we see all of the time.  For example, the person that monitors the network is also the person that manages the network and configures the network.  An alert comes in and the person begins a root cause analysis (RCA) only to get pulled away because a remote facility is offline.  The offline issue gets resolved, but other issues come up as well as meetings and telephone calls and the person never gets back to the RCA for the alert because there is no “tickler” to remind them to go back and complete the RCA.  In the meantime, the attacker has gained their beachhead and is probing the network for whatever value it may contain.
  • Just did not put together all of the pieces to know they were compromised.  Much like the intelligence failures behind 9/11, most organizations do not correlate all of the potential incidents occurring in their networks and therefore do not understand that there is an active effort to compromise their network, or that they have already been compromised, until well after the incident has caused damage.  The reason this is important is that once an attacker is inside your organization’s security perimeter, it is typically game over, because there are few controls to prevent access and identify that data is being taken.

If you have read the Verizon Business Services Data Breach Investigations Reports (DBIR) over the years, you know how the bulk of attacks get inside: they are the result of people.  For the last two years, the DBIR has used the VERIS Event Threat Grid to show how breaches occur.  Across the top of the grid are the categories: Malware, Hacking, Social, Misuse, Physical, Error and Environmental.  The Social, Misuse and Error categories imply mistakes or deliberate acts of people.  If you read the definitions on the VERIS Web site, Malware is also very people centric, as is Hacking.  Surprising to some will be that the Physical and Environmental categories also involve a good number of people errors.  Based on just a quick read, it looks like about 60% to 70% of all of the incidents categorized by VERIS have some form of people error component.

Since we are not going to get rid of people in our organizations any time soon, what are you to do?

  • Admit that people are the problem and focus your security measures accordingly.  Every 12 step program says the first step is to admit the problem which, in this case, is that people are fallible.  As a result, we need to construct our security measures such that this fallibility is minimized as much as possible.  One of the best solutions is to integrate alerts into your help desk or change management system so that a ticket is generated.  Those tickets need to have an escalation process behind them so that if they are not investigated within a period of time, they are bumped up to the next higher rung of management and that escalation continues until the tickets are finally addressed.  This way there is visibility for the alerts should they slip through the cracks.  As a side benefit of this approach, you gain statistics to reinforce why you need more staff and/or more/better tools.
  • Strengthen your internal security measures.  As things stand, once inside most organizations’ security perimeters, there is very little that stands in the way of an experienced attacker getting the data they desire.  Regardless of whether it is an insider attack or an attacker has managed to get inside, there is already justification for organizations to beef up their internal security measures.  To address this problem, I would recommend the security architectures documented in my Fort Knox approach, Forrester’s Zero Trust Model or McGladrey’s Ultra Secure Network.  But most organizations do not have the infrastructure architecture, the application architecture or even the will to take such approaches.  That does not excuse an organization from doing nothing, however.  If anything, most organizations could vastly improve the monitoring they do on their internal networks.  Monitoring needs to be coupled with reducing the total number of ports that are open between network segments.  Most internal networks do a terrible job of this for a variety of reasons, from application people who cannot tell what ports need to be open, to administrators avoiding operational issues by just leaving things open.  Another area of improvement is reviewing user access rights on all systems and applications, not just those in scope for PCI compliance.
  • Constantly tune your alerting system(s).  Just as attack methods are not static, neither are networks, systems and applications.  Changes are occurring all of the time in an organization’s IT environment, yet if you ask the people running the SIEM about changes, nine times out of ten, nothing seems to be changing other than requests to look for a new signature or anomaly.  There is a belief in the SIEM user community that a SIEM’s update process is making the necessary changes in the policies that ship with the SIEM.  To a certain extent SIEM solutions are similar to anti-virus and malware solutions.  However, because a SIEM monitors log data and the log data provided varies greatly from organization to organization, each organization needs to periodically review and adjust their alerting criteria to make sure that it reflects the organization’s operating environment and not just some template from the SIEM vendor.  If an organization is not reviewing its SIEM alerting rules based on the changes made, at least quarterly, then it is highly likely that the SIEM is not alerting properly.
  • Establish separate consoles from your SIEM for network, system, security and application administrators.  What a network administrator is looking for is vastly different from what an application administrator is looking for, and from what any particular group might be looking for to generate an alert.  As a result, having only one console is really silly and non-productive.  Yet time and again we see SIEM implementations with just that: one console, and everyone being driven by email or SMS alerts.  The people alerted then have to get to the SIEM to find out what exactly triggered the alert and then determine what to do about it.  Having your own console view simplifies things by listing only that viewer’s alerts and no one else’s.  This allows people to focus on their own problems and not the whole organization’s problems.  The idea behind the single console is that if everyone knows what is going on overall, then correlation will occur because everyone sees everything.  While you would think that would be the case, in reality people just want to fix their own problem and move on, not fix the entire organization.  Which leads to my last point.
  • Watch the overall alerting picture so that correlations can be made.  According to most sources, today’s attacks are becoming more sophisticated and multi-pronged in their approach.  For example, while most DDoS attacks are just meant to be a pain in the posterior to the target and disrupt access to the target’s Web site, there are DDoS attacks that are used as cover so that people inside are blinded to the real attack(s).  Whether or not the DDoS was a decoy depends on what other events or incidents occurred during the DDoS attack, if your alerting system did its work.  Higher end SIEM solutions can provide basic correlation rules, but most SIEM solutions require the end user to develop those correlation rules.  It is these correlation rules that help organizations identify these more sophisticated attacks.  That said, these correlation rules do not have to be very sophisticated.  For example, during a DDoS attack, you really only need to look for malware attacks, failed authentication attempts and other anomalies that would be likely indicators of the DDoS attack being used to mask the real attack.  (A minimal sketch of such a rule follows this list.)
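
To make the last bullet concrete, here is a minimal sketch of such a correlation rule: given a DDoS start event, it flags failed authentication and malware events that occur during an assumed two-hour window.  The event structure and window are hypothetical, not any particular SIEM’s rule syntax:

from datetime import datetime, timedelta

# Hypothetical, already-parsed events pulled from a SIEM or log store.
events = [
    {"time": datetime(2013, 2, 7, 2, 15), "type": "ddos_start"},
    {"time": datetime(2013, 2, 7, 2, 22), "type": "failed_auth", "detail": "admin"},
    {"time": datetime(2013, 2, 7, 2, 31), "type": "malware_detected", "detail": "pos-07"},
]

SUSPICIOUS_TYPES = {"failed_auth", "malware_detected"}

def correlate_ddos_cover(events, window=timedelta(hours=2)):
    """Flag suspicious events occurring while a DDoS appears to be in progress."""
    ddos_starts = [e["time"] for e in events if e["type"] == "ddos_start"]
    for event in events:
        if event["type"] not in SUSPICIOUS_TYPES:
            continue
        if any(start <= event["time"] <= start + window for start in ddos_starts):
            yield event

for alert in correlate_ddos_cover(events):
    print("Possible attack under DDoS cover:", alert)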

Is all of this going to address your security issues?  Sorry, not a chance.  None of the above stops all breaches; it merely minimizes the possibility that a breach goes on for months or years.  Hopefully it brings a breach down to weeks, days, maybe even hours in some cases, but it will never totally eliminate them.  Security is not perfect.

There is a side benefit to all of this and that is it will assist you in doing root cause analysis (RCA).  RCA is very effective in getting rid of those nagging operational issues that occur from time to time and mess up the delivery of your organization’s goods and services.  All of the information you collect for security purposes can also be used to find the needle in the haystack that is causing a database to corrupt, a network connection to drop or a server to fail, because now you have information about what was going on leading up to the problem.
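As a simple illustration of that side benefit, the sketch below pulls every collected event in the window leading up to a failure and groups it by source.  The event fields (“time”, “source”, “message”) and the 30-minute lookback are assumptions made for the example, not a real log schema.

    from datetime import timedelta

    # Illustrative RCA helper: gather everything that happened in the
    # minutes leading up to a failure, grouped by the system that logged it.
    def events_before_failure(events, failure_time, lookback_minutes=30):
        start = failure_time - timedelta(minutes=lookback_minutes)
        window = [e for e in events if start <= e["time"] <= failure_time]
        by_source = {}
        for event in sorted(window, key=lambda e: e["time"]):
            by_source.setdefault(event["source"], []).append(event["message"])
        return by_source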

The reason an organization is not secure is that there are so many areas needing improvement that the full control triad is no longer functioning, and holes exist that allow an attacker to operate without the organization’s knowledge.  Until the controls are implemented and operating properly, it will be impossible to determine whether the organization is secure or not.  The recommendations I have made will hopefully give you a better picture of what you face and help you react to issues that need attention before your organization is the next one to be breached.

25
May
12

What To Do About Insiders

The first posting I did on this subject was to provide an understanding that, despite the news stories, the insider threat is a very real threat and needs to be addressed.  However, what is an organization to do?  Employees and others need to have access to certain information in order to get their jobs done.  What steps should an organization take to minimize the insider threat?

First, I need to be very clear about this.  Even when you do all of what I recommend, you are only minimizing the insider threat.  The insider threat can never be totally mitigated.  Insiders must have access to information that the general public, or even your business partners, do not have access to.  As a result, should an employee get sloppy with controls or go “rogue,” you can expect to lose whatever information that person had access to.  Remember my mantra – security is not perfect.

I posted some ideas a while back on controls for automation.  Here are my minimum recommendations for manual controls to put into place to minimize the insider threat.

  • Management needs to recognize the importance of management controls.  The “tone at the top” really does mean something when it comes to controls.  However, management needs to understand that these sorts of controls are no absolute guarantee against issues.  Properly implemented, monitored and adjusted as necessary, such a control environment confirms to the rest of the organization that management believes controls are important.  If management does not know what to do regarding management controls, it should consult a public accounting firm, as those firms are very familiar with control environments and can assist in designing one.
  • Preventive controls.  Preventive controls, as the name implies, put something in place to prevent a problem.  A prime example of a manual preventive control is requiring a minimum of two signatures on checks; the larger the amount on the check, the more people who have to sign off on it (see the sketch after this list).  Under such an approach, multiple people have to collude to defraud the system.  The same approach can be taken for reviews of inventory, cash on hand and any other metrics that are important to the survival of the organization.  The idea is to ensure that at least two people are involved in these reviews, that they physically sign off on their review, and that they document and investigate any irregularities.
  • Detective controls.  As the name implies, detective controls are controls used to detect problems.  Following the example under preventive controls, the other people signing off on a check or reviewing a critical metric report are a detective control.  If a reviewer feels that something is not right with what they are reviewing, they are obligated to notify their immediate supervisor of the issue and ask the submitter to physically document the situation.  Once documented, the reviewer can either sign off and accept the explanation, or refuse and investigate further.
  • Corrective controls.  Corrective controls are the controls used to ensure that the preventive and detective controls are focused on the right problems and can be relied upon going forward.  Keeping to the theme, when an irregularity is identified, management should institute a root cause analysis to determine what caused the situation, then make the necessary changes to the preventive and detective controls so that people do not try to circumvent the control environment.
  • Hold employees responsible for the control environment.  Management may be responsible for establishing controls, but it is the employees who make the control environment actually work.  Employees should have their key controls evaluated at least annually to reinforce the importance of controls.  In our check example, the people signing checks should be evaluated on how many problem checks they signed off on.
  • Solicit control improvement ideas from employees.  The problem most organizations have with management controls is keeping them relevant.  A common example we see is a problem that occurred ten years ago and has since been addressed by automated controls in a new computer system, yet management continues to require that the manual control be followed.  Most of the time, employees know exactly what needs to be done, but management does not want to recognize that fact.
  • Have a third party periodically assess your controls.  In addition to employees providing ideas, organizations should periodically invite a third party, such as their accounting firm, to assess the control environment and recommend changes.  A number of years ago I worked with a large organization where we discovered that, because of the way one of their computer systems had recently been modified, checks could be generated that bypassed approvals and oversight.
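To show how simple the two-signature idea from the preventive controls bullet can be, here is a minimal sketch in Python.  The dollar thresholds and role names are made up for illustration; every organization would set its own tiers.

    # Illustrative preventive control: the larger the check amount, the more
    # distinct approvers required before the check can be issued.
    APPROVAL_TIERS = [
        (10_000, 2),   # checks up to $10,000 need two signatures
        (100_000, 3),  # checks up to $100,000 need three
    ]
    MAX_APPROVERS = 4  # anything larger

    def required_signatures(amount):
        for limit, signers in APPROVAL_TIERS:
            if amount <= limit:
                return signers
        return MAX_APPROVERS

    def can_issue_check(amount, approvers):
        """A check is issued only when enough distinct people have signed off."""
        return len(set(approvers)) >= required_signatures(amount)

    # Example: a $50,000 check signed by only two people is held for a third approver.
    assert can_issue_check(50_000, ["cfo", "controller"]) is False
    assert can_issue_check(50_000, ["cfo", "controller", "treasurer"]) is True

The detective control is then the second or third signer actually reviewing the check before adding their name, and the corrective control is adjusting the tiers when an irregularity slips through.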

For those of you that are going to recommend these minimum controls, my heart goes out to you.  The road ahead is likely to be very bumpy and contentious if your organization has a mediocre control environment.

Something to share with management as you push this sort of project is that there are very measurable benefits to implementing controls.  Every organization I have worked with over the years has found that a byproduct of their controls projects has been fewer customer complaints and fewer employee screw ups.  Avoiding problems, or making them smaller and less impactful on customers, can add up to serious savings in time and money.

If you have a mature control environment, take a look at how you can make it better, more effective and more relevant.  If you do not have a mature control environment, then take baby steps.  Look to your accounting area as they will likely have the most robust control environment.  Grab one of those accountants and use them to help you look at other areas that may have problems that controls can address.

Best of luck to all of you on your journey.



