Posts Tagged ‘control triad’

13 Apr 14

An Open Letter To Executives

I apologize for not posting anything recently, but I have been busy dealing with my taxes, QSA re-certification and clients.  Over the years that has involved dealing with people whom I would like to think know better.  But based on my interactions with them, it is painfully obvious that they do not.  As a result, I have decided to write this letter to all of you in the hope that you get a clue as to how your short-sightedness is ultimately going to sell your organization “down the river”.  I should have published this letter long ago, as this is not a new issue.

Dear Executive:

As I sat in the meeting, I watched your body language as I delivered our report on how well your organization is secured.  Based on my observations, it is painfully obvious that you do not have a clue as to the importance of security, and that you really do not care.  Since I want my bill paid, I was polite and did not take you to task as you should be taken.

So, let me put this into blunt language that you might better understand.

First and foremost, as an executive of the organization, you have a fiduciary responsibility to protect the assets of the organization.  Based on our findings, you are not protecting those assets; you are not even close.  I realize that all of this technology baffles you, but it is in that technology that your organization’s lifeblood of intellectual property resides: orders, formulas, blueprints, specifications, customer lists and other key or sensitive information.  Without that intellectual property, your organization does not exist.  Yet as we went through all of our findings, you argued time and again about what it would take in time, money and/or manpower to appropriately secure your organization.  While I appreciate your concerns, this is what it takes to secure an organization that relies heavily on technology.

Second, security is not perfect.  I am not exactly sure where you got the impression that security is perfect, but it is wrong and you need to adjust your thinking.  Security is all about managing and minimizing risks.  As an executive, that is one of your primary job functions.  Yet your three-, five-, seven- or ten-year-old risk assessment seems to indicate that risks, and managing those risks, are not a priority.  As if that were not enough, we pointed out a number of areas where risk exists but there is no evidence that those risks are being managed.  The recommendations we provided offered a number of viable solutions; however, they all require changes to the organization, which seemed to be your biggest reason why our recommendations could not be implemented.

Third, doing the bare minimum is not going to secure your organization.  While we were talking about the PCI DSS, any security framework is merely the ante into the security game.  If you truly want to be secure, it will take significant time and a certain amount of money to make that happen.  Buying security appliances and other “widgets” can only do so much.  One of the biggest findings in our report is that your existing tools are not being used properly, and that warnings and alerts are being written off as “false positives” without any investigation.  With the sophistication of attacks rising exponentially, based on our assessment, those tools are doing very little to protect your organization.  Another area of great concern is that your employees are, for the most part, unable to recognize current scams and threats.  As you correctly pointed out, security awareness training is not going to stop every attack, but what you missed is that such training should significantly reduce those attacks’ effectiveness.

Fourth, you need to read the definition of “compliance”.  As defined in Merriam-Webster’s dictionary, compliance means “conformity in fulfilling official requirements”.  As our findings pointed out, you are not in compliance with a number of key “official requirements” defined by the PCI DSS.  Without adequate “official requirements” such as policies, standards and procedures, how do your employees know their responsibilities and what you are holding them accountable for?  Based on our discussion of the findings, you apparently are of the opinion that your employees should just intuitively know their responsibilities and accountabilities.  “Intuitively obvious” may apply to the operation of an Apple iPod, as Steve Jobs said at its introduction, but the phrase does not apply to the running of an organization.

Finally, a compliance program is not all about checking a box.  I know most auditors/assessors seem to operate that way and most executives want it to work that way, but a proper compliance program should never, ever work that way.  Compliance means looking at all of the organization’s protective, detective and corrective controls (the control triad) and determining whether they are: (1) functioning properly, (2) designed properly, (3) minimizing the risks and (4) in need of any new controls, or changes/enhancements to existing controls, to make them function more accurately or efficiently.  While you agreed with our findings regarding the control issues we identified, your argumentative behavior about them seems to indicate otherwise.

I wish you and your organization the best of luck because it seems that your idea of risk management is to rely on luck.  I would like to tell you that you will succeed with that approach, however the statistics say otherwise.

Sincerely,

Your Frustrated Assessor

07 Feb 13

Compliance, Compliance Testing and Security

I was recently on a Webinar presented by a major security vendor and one of their points was that executive management is finally starting to realize that compliance does not equal security.  If you read this blog regularly, you know I really do not like the phrase “compliance does not equal security” and I view it as a convenient dodge by those who use it as a way to weasel out of their responsibilities.

But during this Webinar I had an epiphany regarding this topic.  The problem is the confusion between three things: security itself; compliance testing and reporting; and the act of compliance, that is, whether your technology, employees and business partners actually comply with your organization’s security policies, standards and procedures.

I know I am just asking for flame mail with this post, but I am so tired of people looking to blame everyone but themselves for their inadequacies surrounding information security.  As I have done before, to paraphrase Tom Hanks’s character in ‘A League of Their Own’: “There’s a reason security is hard.  If it wasn’t hard, everyone would do it.”

Security is not always easy, particularly when upper management does not have buy-in.  But even when upper management supports security efforts, I have seen security personnel fail to take advantage of that fact and get the job done.  Security does not have to be hard, but it does take more than slamming down some firewalls and intrusion prevention gear, tossing a SIEM into the mix and thinking you are done.  Security is a never-ending journey, because someone is always coming up with new ways to attack you.

Anyway, to start off, let us look at some definitions so we are all on the same page.

Compliance is defined as:

“Conformity in fulfilling official requirements.”

“Official requirements?”  Could that possibly mean your organization’s security policies, standards and procedures?  You bet.  In this instance, we are talking about those that correspond to the PCI DSS, but this also applies to ISO 27K, FISMA, HIPAA, GLBA or any multitude of frameworks and regulatory requirements.

Conformity is defined as:

“Compliance with standards, rules, or laws.”

Based on these definitions, security is predicated on complying with what is deemed an adequate set of security policies, standards and procedures.  Conversely, if you are not complying with an adequate set of security policies, standards and procedures, then your organization cannot be as secure as it could be.  As a result, compliance has to equal security, as long as the security policies, standards and procedures are considered adequate.  Therefore, security professionals who quote the mantra “compliance does not equal security” either have a problem with the compliance side of the equation (most likely) or with the standards/frameworks (the dodge).

Over the years there have been a lot of discussions about the PCI DSS, ISO 27K, FISMA and other security frameworks and whether or not they are adequate.  The important thing to remember is that all of these standards and frameworks are merely the ante into the information security game.  They are the bare minimum, a baseline to get to a basic level of security.  Should you be doing more?  Definitely, but what those efforts beyond the standard/framework should be depends on what you are trying to secure, your network and application architectures and a multitude of other factors related to your computing environment and how it is used.  Those are factors that no standard/framework can take into account, because it would become impossible for others to follow and implement.  The bottom line here is that if you want someone to tell you exactly what to do to secure your networks and applications, go hire a consultant you trust and they will tell you everything you want to know.

The rub in all of this is that, based on the breach reports from Verizon Business Services, Trustwave, et al. as well as compliance testing reports I have reviewed, none of you out there are 100% compliant to begin with, or even close.  Every organization I am aware of has problems complying with the basics, let alone with any advanced security requirements in the published standards/frameworks.  So if you cannot comply with what you already have, explain to me how a different framework is going to change that fact, unless it is less stringent than the framework you are already trying to use?  And if that other framework is less stringent, while that may solve the compliance issue (which I seriously doubt), exactly how is a less stringent framework going to make you secure?  The answer is that it will not.

What security professionals struggle with is that compliance is a never-ending, 24x7x365 effort.  Drop your guard for an instant and it can be game over.  Provided your security policies, standards and procedures are appropriate and detailed (the reason you want to use an appropriate standard/framework), your organization is not as secure as it can be unless your personnel and devices comply 100% of the time with every defined security policy, standard and procedure.  If you want confirmation of these facts, again, just look at the breach analysis reports year after year.  The reason there are breaches is non-compliance with one, but usually more, of an organization’s security policies, standards and/or procedures.

This brings me to the rumblings of late regarding a rethinking of defense in depth.  Defense in depth is predicated on using layers of security devices and controls to minimize the risk that a security incident occurs, not to completely prevent an incident (although you might get lucky).  For example, firewalls are the sledgehammer of security tools.  However, because we need to have ports open for outsiders to access applications, we follow our firewalls with intrusion detection/prevention devices to ensure that no one abuses the protocols used by those ports.  We follow that up with monitoring of log data from the firewalls, IDS/IPS, routers, switches and servers to identify any “sneaky” attacks using the protocols we allow.  The layers are there to cover the various holes we need in order to make our networks and applications function.  The tighter and smaller we can make those holes, the more secure we will be, but there will still be some amount of risk.  So we bring in more layers to cover those risks until it is more expensive to address the risk than to accept it.  That remaining risk is the residual risk that we then manage and control through detection and correction.
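That layering can be sketched in miniature.  Everything below (the allowed ports, the attack signature, the traffic, the anomaly threshold) is invented for illustration and stands in for real firewall, IDS and log-monitoring products:

```python
from collections import Counter

# Illustrative defense-in-depth pipeline; all values are made up.
ALLOWED_PORTS = {80, 443}           # firewall layer: holes we must leave open
BAD_PAYLOADS = {"' OR 1=1 --"}      # IDS/IPS layer: known attack signatures

def firewall(packet):
    """Layer 1: drop anything not arriving on an allowed port."""
    return packet["port"] in ALLOWED_PORTS

def ids(packet):
    """Layer 2: drop traffic matching a known attack signature."""
    return packet["payload"] not in BAD_PAYLOADS

def log_monitor(packets, threshold=100):
    """Layer 3: flag sources that slip through but behave anomalously."""
    counts = Counter(p["src"] for p in packets)
    return {src for src, n in counts.items() if n > threshold}

traffic = [
    {"src": "10.0.0.5", "port": 443, "payload": "GET /index.html"},
    {"src": "10.0.0.9", "port": 23,  "payload": "root"},         # stopped by firewall
    {"src": "10.0.0.7", "port": 80,  "payload": "' OR 1=1 --"},  # stopped by IDS
]

survivors = [p for p in traffic if firewall(p) and ids(p)]
print(len(survivors))  # prints 1: only the legitimate request passes both layers
```

Each layer only narrows the holes left by the previous one; the residual risk that remains after all layers is what detection and correction have to manage.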

The other thing defense in depth relies on is the control triad.  The idea is that, because you cannot entirely prevent every security incident, you need a way to detect an incident so that you can take action to stop or minimize its impact.  You follow that up with periodic assessments of your control environment to identify and correct any deficiencies, or to improve your program based on new information regarding security.  The follow-up assessments can be activities such as a root cause analysis (RCA) of an incident, an internal audit of user accounts and user rights, or bringing in a network security team to assess your security architecture and controls.  All of these activities will result in findings and recommendations to make your security systems and controls better.
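A minimal sketch of that kind of periodic control assessment follows; the control names and their statuses are hypothetical, and a real assessment would of course derive them from evidence rather than a hard-coded list:

```python
# Hypothetical control inventory spanning the control triad.
controls = [
    {"name": "Firewall ruleset",  "type": "protective", "functioning": True,
     "designed_properly": True},
    {"name": "SIEM alerting",     "type": "detective",  "functioning": False,
     "designed_properly": True},
    {"name": "Incident response", "type": "corrective", "functioning": True,
     "designed_properly": False},
]

def deficiencies(inventory):
    """Return controls that fail any assessment criterion and so need
    correction or enhancement."""
    return [c["name"] for c in inventory
            if not (c["functioning"] and c["designed_properly"])]

print(deficiencies(controls))  # prints ['SIEM alerting', 'Incident response']
```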

And that brings us full circle to the PCI assessment.  It is merely a tool used by the acquiring banks, card brands, processors and others to obtain reasonable assurance that your organization is doing what it can to minimize the possibility of a breach of cardholder data.  It is not meant to be, nor could it ever be, a complete assessment of an organization’s security posture, and therefore it cannot provide absolute assurance that a breach will not occur (even though the PCI SSC and card brands tend to imply that it does).  Compliance assessments are only a snapshot of personnel and device compliance at the time the reports were written.  This is no different than going to the doctor for your annual physical, which results in a snapshot of your health at that point in time.  It is not that those compliance reports are worthless; they just need to be referenced and used properly, in light of the fact that they are a snapshot.  Just as your doctor will tell you to lose weight or stop smoking, compliance reports provide recommendations on where you can make improvements or adjustments in your policies, standards and procedures based on what compliance evidence was found, or not found, during the assessment.

So, what are the lessons to be learned?

  • Security is not and never will be perfect; there will always be residual risk that must be managed and controlled.
  • Compliance does equal security, at least to the extent that your preferred standard or framework defines security, plus whatever enhancements you have made.
  • Compliance assessments and reports point out where your organization was not compliant and needs to do better; they do not prove your organization is secure.

Use the tools at your disposal correctly, stay current on threats and monitor your security posture and you will likely live a long, prosperous and secure life.

Keep hiding behind “compliance does not equal security” and you will forever be living off of your “luck” until it runs out (usually sooner rather than later).

06 Jan 13

Security And Compliance

I have written a lot about this topic over the years and was recently reviewing my Compliance Is Not Security – Busted! post and the comments that came in regarding it.

A theme of a number of the comments was that compliance does not equal security.  DUH!

I have never once said or even implied that compliance equaled security as – yes, here it comes – security is not perfect!  However, if you are complying with any security program/framework such as the PCI DSS, ISO 27K, etc., then you are likely more secure than those who are not.

Security technology such as firewalls, routers, servers, applications, etc. can all be set up with rules that are complied with 100% of the time, day in and day out, no exceptions.  The problem comes down to people, who are fallible.  Their compliance is never 100%, and you are probably lucky to have anyone above 90%, no matter how much security awareness training you do.  As a result, in organizations that are truly complying with the PCI standards, this is where a security breach starts: with people, for one reason or another.

No, I am not necessarily talking about social engineering, although social engineering is growing precisely because organizations have invested a lot in security technologies while people remain fallible.  People can be the root cause for any or all of the following reasons.

  • How dare you do that to me!  This is the most obvious of the people issues that comes to mind.  Face it, when backed into a corner, people lash out just like a trapped animal.  The supposedly wronged party wants their proverbial “pound of flesh.”  They get that pound of flesh by hurting the organization that has just hurt them.  This can range from taking office supplies to downloading databases to a USB drive as they empty their desk.  Obviously, a database, network or system administrator’s access is much different than a clerk’s.  However, if your security is minimal on the inside, as it is in most organizations, the clerk may actually have better access than the administrators when it comes to sensitive information.  Such a situation may not be the fault of the administrators; that old version of the POS or ERP system may not support more granular control over access to information.
  • Inundated with alerts and unable to tell real alerts from false positives.  This typically occurs when an automated tool is implemented but never tuned to the organization’s environment.  In this sort of environment, finding real alerts can be like finding a needle in a haystack when there are thousands of alerts an hour scrolling by on the screen.  This usually makes management wonder why the tool was needed in the first place.
  • Saw an alert and ignored it.  We see this most often coupled with the aforementioned inundation issue.  The other most common version of this issue is with internally used SSL certificates that were generated incorrectly or use a default certificate supplied by the application.  Users then see the “There is a problem with this Website’s security certificate” or similar error message in their browser whenever these flawed certificates are encountered.  Over time, they become conditioned to ignore all of these sorts of messages, including those for malware-infected Web sites and, surprise, you have been compromised.  I have lost count of how many people have said to me, “We just ignore those alerts because we know they are false positives.”
  • Saw the alert but got sidetracked and never came back to it.  This is a problem we see all of the time.  For example, the person who monitors the network is also the person who manages and configures the network.  An alert comes in and the person begins a root cause analysis (RCA), only to get pulled away because a remote facility is offline.  The offline issue gets resolved, but other issues come up, as well as meetings and telephone calls, and the person never gets back to the RCA for the alert because there is no “tickler” to remind them to go back and complete it.  In the meantime, the attacker has gained their beachhead and is probing the network for whatever value it may contain.
  • Just did not put together all of the pieces to know they were compromised.  Like the reasons 9/11 occurred, most organizations do not correlate all of the potential incidents occurring in their networks and therefore do not understand that there is an active effort to compromise their network, or that they have already been compromised, until well after the incident has caused damage.  The reason this is important is that once an attacker is inside your organization’s security perimeter, it is typically game over, because there are few controls to prevent access and identify that data is being taken.
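Basic correlation of that kind is not exotic; it can be as simple as grouping distinct indicators by source within a time window.  A minimal sketch follows, where the event types, addresses, window size and three-indicator threshold are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical events from different tools, each harmless in isolation
events = [
    {"src": "203.0.113.7",  "minute": 1, "type": "failed_login"},
    {"src": "203.0.113.7",  "minute": 3, "type": "port_scan"},
    {"src": "203.0.113.7",  "minute": 8, "type": "malware_alert"},
    {"src": "198.51.100.2", "minute": 5, "type": "failed_login"},
]

def correlate(evts, window=10, threshold=3):
    """Flag sources producing several distinct indicator types
    within the same time window."""
    seen = defaultdict(set)
    for e in evts:
        bucket = e["minute"] // window        # crude fixed-size time bucket
        seen[(e["src"], bucket)].add(e["type"])
    return sorted({src for (src, _), types in seen.items()
                   if len(types) >= threshold})

print(correlate(events))  # prints ['203.0.113.7']
```

Three different indicators from one source inside ten minutes tell a story that no single alert does; that is the whole argument for correlation.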

If you have read the Verizon Business Services Data Breach Investigations Reports (DBIR) over the years, you know how the bulk of attacks get inside: they are the result of people.  For the last two years, the DBIR has used the VERIS Event Threat Grid to show how breaches occur.  Across the top of the grid are the categories: Malware, Hacking, Social, Misuse, Physical, Error and Environmental.  The Social, Misuse and Error categories imply mistakes or deliberate acts of people.  If you read the definitions on the VERIS Web site, Malware is also very people-centric, as is Hacking.  It will surprise some that the Physical and Environmental categories also include a good number of people errors.  Based on just a quick read, it looks as though 60% to even 70% of all the incidents categorized by VERIS have some form of people-error component.

Since we are not going to get rid of people in our organizations any time soon, what are you to do?

  • Admit that people are the problem and focus your security measures accordingly.  Every 12-step program says the first step is to admit the problem, which, in this case, is that people are fallible.  As a result, we need to construct our security measures so that this fallibility is minimized as much as possible.  One of the best solutions is to integrate alerts into your help desk or change management system so that a ticket is generated.  Those tickets need an escalation process behind them so that, if they are not investigated within a period of time, they are bumped up to the next higher rung of management, and that escalation continues until the tickets are finally addressed.  This way there is visibility for alerts that would otherwise slip through the cracks.  As a side benefit of this approach, you gain statistics to reinforce why you need more staff and/or more or better tools.
  • Strengthen your internal security measures.  As things stand, once inside most organizations’ security perimeters, there is very little standing in the way of an experienced attacker getting the data they desire.  Regardless of whether it is an insider attack or an outside attacker who has managed to get inside, there is already ample justification for organizations to beef up their internal security measures.  To address this problem, I would recommend the security architectures documented in my Fort Knox approach, Forrester’s Zero Trust Model or McGladrey’s Ultra Secure Network.  But most organizations do not have the infrastructure architecture, the application architecture or even the will to take such approaches.  That does not excuse an organization from doing nothing, however.  If anything, most organizations could vastly improve the monitoring they do on their internal networks.  Monitoring needs to be coupled with reducing the total number of ports that are open between network segments.  Most internal networks do a terrible job of this for a variety of reasons, from application people who cannot tell you which ports need to be open, to administrators avoiding operational issues by just leaving things open.  Another area of improvement is reviewing user access rights on all systems and applications, not just those in scope for PCI compliance.
  • Constantly tune your alerting system(s).  Just as attack methods are not static, neither are networks, systems and applications.  Changes are occurring all of the time in an organization’s IT environment, yet if you ask the people running the SIEM about changes, nine times out of ten nothing seems to be changing other than requests to look for a new signature or anomaly.  There is a belief in the SIEM user community that a SIEM’s update process makes the necessary changes to the policies that ship with the SIEM, as though a SIEM were similar to an anti-virus or anti-malware solution.  However, because a SIEM monitors log data, and the log data provided varies greatly from organization to organization, each organization needs to periodically review and adjust its alerting criteria to make sure they reflect the organization’s operating environment and not just some template from the SIEM vendor.  If an organization is not reviewing its SIEM alerting rules based on the changes made, at least quarterly, then it is highly likely that the SIEM is not alerting properly.
  • Establish separate consoles from your SIEM for network, system, security and application administrators.  What a network administrator is looking for is vastly different from what an application administrator is looking for, as is what any particular group needs in order to act on an alert.  As a result, having only one console is really silly and non-productive.  Yet time and again we see SIEM implementations with just that: one console, with everyone being driven by email or SMS alerts.  The people alerted then have to get to the SIEM to find out what exactly triggered the alert and then determine what to do about it.  Having your own console view simplifies things by listing only that viewer’s alerts and no one else’s.  This allows people to focus on their problems and not the whole organization’s problems.  The idea behind the single console is that, if everyone knows what is going on overall, then correlation will occur because everyone sees everything.  While you would think that would be the case, in reality people just want to fix their own problem and move on, not the entire organization’s.  Which leads to my last point.
  • Watch the overall alerting picture so that correlations can be made.  According to most sources, today’s attacks are becoming more sophisticated and multi-pronged in their approach.  For example, while most DDoS attacks are just meant to be a pain in the posterior to the target and disrupt access to the target’s Web site, there are DDoS attacks that are used as cover so that the people inside are blinded to the real attack(s).  Whether or not the DDoS was a decoy depends on what other events or incidents occurred during the DDoS attack, if your alerting system did its job.  Higher-end SIEM solutions can provide basic correlation rules, but most SIEM solutions require the end user to develop those rules.  It is these correlation rules that help organizations identify these more sophisticated attacks.  That said, the correlation rules do not have to be very sophisticated.  For example, during a DDoS attack, you really only need to look for malware attacks, failed authentication attempts and other anomalies that would be likely indicators of the DDoS attack being used to mask the real attack.
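The alert-to-ticket escalation described in the first bullet can be sketched as follows; the escalation chain and the four-hour SLA per tier are assumptions for illustration, not anyone’s actual process:

```python
from datetime import datetime, timedelta

# Assumed escalation chain and per-tier SLA; substitute your own
ESCALATION_TIERS = ["analyst", "team_lead", "manager", "ciso"]
SLA = timedelta(hours=4)

def escalate(ticket, now):
    """Bump an unacknowledged alert ticket one tier for every full SLA
    period that has elapsed since it was opened."""
    if ticket["acknowledged"]:
        return ticket
    overdue_periods = int((now - ticket["opened"]) / SLA)
    ticket["assigned_to"] = ESCALATION_TIERS[
        min(overdue_periods, len(ESCALATION_TIERS) - 1)]
    return ticket

ticket = {"opened": datetime(2013, 2, 7, 8, 0),
          "acknowledged": False, "assigned_to": "analyst"}
escalate(ticket, datetime(2013, 2, 7, 17, 0))  # nine hours, still unacknowledged
print(ticket["assigned_to"])  # prints manager
```

The point is not the particular tiers or timings but that the logic is mechanical: an ignored alert automatically becomes someone more senior’s problem, which is exactly the visibility the bullet argues for.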

Is all of this going to address your security issues?  Sorry, not a chance.  None of the above stops all breaches; it merely minimizes the possibility that a breach goes on for months or years.  Hopefully it shortens a breach to weeks, days, maybe even hours in some cases, but it will never totally eliminate them.  Security is not perfect.

There is a side benefit to all of this: it will assist you in doing RCA.  RCA is very effective in getting rid of those nagging operational issues that occur from time to time and mess up the delivery of your organization’s goods and services.  All of the information you collect for security purposes can also be used to find the needle in the haystack that is causing a database to corrupt, a network connection to drop or a server to fail, because now you have information as to what was going on leading up to the problem.

The reason an organization is not secure is that there are so many areas needing improvement that the full control triad is no longer functioning, and holes exist that allow an attacker to operate without the knowledge of the organization.  Until the controls are implemented and operating properly, it will be impossible to determine whether the organization is secure or not.  The recommendations I have made will hopefully give you a better picture of what you face and help you react to issues that need attention before your organization is the next one to be breached.

22 Dec 12

What To Focus On In 2013

It is the end of the year so, like all the other pundits, I will offer an idea of what 2013 will bring in the way of security issues.  After reading a lot of the other predictions out there, I tend to agree with those from Verizon Business Services’ Data Breach Investigation Report researchers.  While everyone else is predicting cyber-Armageddon as the biggest threat, the researchers at Verizon Business Services see a lot more of the same for 2013.

The biggest threat Verizon identifies is more attacks on authentication systems.  This is most likely because your vendor or your developers talked you into storing authentication information in an Internet-facing database.  We see this all of the time with eCommerce and Internet banking solutions.  The external users’ credentials end up stored in the database along with order entry, inventory and pricing data.  This is typically done because using a directory system for such purposes is difficult and, at times, not as functional as when authentication data is stored in and used from a database.  Given the prevalence of SQL attacks, all of that information ends up available for the taking through a SQL injection attack.  As a result, the attackers compromise the authentication system, gain access to everyone’s credentials, including administrators’, and it is likely ‘game over’ for the rest of your security measures.
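To illustrate the SQL injection route, here is a minimal sketch using an in-memory SQLite table with made-up data (real attacks target far richer schemas); it compares a concatenated query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'not-a-real-hash')")

attacker_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: concatenation lets the payload rewrite the WHERE clause,
# so the query matches every row in the table
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'").fetchall()

# Parameterized: the payload is treated purely as data and matches nothing
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)).fetchall()

print(len(vulnerable), len(safe))  # prints 1 0
```

Parameterized queries do not fix the architectural decision to put credentials in an Internet-facing database, but they blunt the most common path to dumping them.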

I want to touch next on social engineering, because it is typically directly related to compromising authentication systems, even though the second attack method most concerning Verizon researchers is application attacks.  Social engineering is all about tricking your end users into giving up key information so that an attacker can compromise your environment.  The most common piece of information an attacker tries to obtain is an end user’s credentials for logging onto the network; hence the reason I wanted to discuss this after the authentication system attacks.  Social engineering is the most insidious of attack methods because it does not involve any of an organization’s security technology.  And worst of all, if social engineering is successful, all or most of your organization’s security technology is effectively neutralized as a result.  That is because most organizations have little or no security once someone is on the inside.

Now let us look at the second most concerning attack to Verizon, which is application attacks.  Verizon is saying that application attacks are more of a threat to governments and large applications.  Regardless of the target, any organization with an application presence on the Internet is a potential target, such as with eCommerce or Internet banking.  A lot of these applications are based on online frameworks such as IBM’s Websphere or Oracle’s Application Framework.  It is not that these frameworks are insecure; it is that they still require development effort, and those custom development efforts do not guarantee a secure application.  The problem comes from the fact that a lot of people believe that using a framework means little to no security testing is needed, even though the amount of custom development done in these frameworks can be more extensive than starting from scratch.  As a result, we see a lot of organizations tossing Internet applications into production with little or no security testing and then ending up with breaches as a result.

In addition, there are third-party applications served up by application service providers (ASP).  A lot of small and mid-sized businesses (SMB) use these sorts of applications to have an online presence.  As a result, a lot of SMBs believe that these solutions do not require any security testing because the vendor and the ASP do that for them.  However, we are encountering more and more attacks on SMBs, particularly those that have wealthy clientele such as country clubs and exclusive financial institutions, because their applications are notorious for not being secure.  SMBs are constantly amazed that: (1) they were targeted and (2) the application was not better secured.  Yet attackers know that, while the take from an SMB could be significantly less than from a large organization, an SMB is usually easier to compromise because it does not have the security and monitoring resources of a large organization.  As a result, SMBs make very attractive targets.

Finally, something that concerns me as much as the previously discussed threats is mobile devices and devices not under the organization’s control, also known as ‘bring your own device’ or BYOD.  I think these devices will surpass the other three threats over the next few years because most organizations have difficulty maintaining security on their servers, desktops and notebooks, let alone something like an iPhone or an Android tablet.  The worst thing about mobile devices is that they are so easily lost, and it fascinates me how many people lose their mobile devices.  The bottom line about mobile devices and BYOD is that you must be very, very careful as to what you allow these devices to access and how you grant that access.  And you must make sure that these devices are not allowed to download information, even if encrypted, as that information is highly likely to be lost.

So what should you be doing regarding these threats?  Here are my top things organizations should be doing to minimize the risks presented by these threats.

  • Trust no one.  This is particularly true of mobile devices or BYODs, but it also applies to your own internal systems.  Forrester has promoted this in their ‘Zero Trust Model’, as have I in the ‘Fort Knox Approach’.  This is not as easy as one might think, but the approach makes sense in these days of attacks on authentication systems and social engineering approaches.
  • Classify your data.  This is usually a difficult project, but it pays dividends at the end because everyone understands why certain information cannot be allowed out of the control of the organization.  It also allows people to justify to others why data cannot be accessed by people that do not need access, as well as via mobile devices or BYOD.
  • Require encryption on mobile devices and BYOD.  Even if you do not allow data to end up on these devices, you do not even want the memory or other information that might be inadvertently stored on these devices out of your control.  As a result, if you encrypt these devices, there is a high likelihood that, if they are lost, the person finding them will just wipe them and start over.
  • When possible, use a directory system for authentication.  This is always painful for systems that operate outside of the traditional control environment of internal users.  Directory systems are usually designed to be more secure than any sort of database authentication system because they are assumed to be at risk from the start.  However, just because they are designed to be secure does not mean they cannot be implemented in an insecure manner.  Windows Active Directory takes a lot of heat for being insecure; however a lot of that heat is due to silly implementations to support insecure authentication methods for compatibility.
  • Conduct security awareness training.  The only thing that minimizes social engineering is consistent and regular security awareness training.  However, do not kid yourself or management.  Everyone has their ‘moments’ and does something they should not.  That said there are always those that just never seem to get it which is why you need other controls and monitoring to ensure you maintain your security.  However, to just throw up your hands and say it is pointless is also not a position to take.
  • Secure your applications.  This means conducting application code reviews and testing applications before they are put into production, not after the fact.  Unlike networks, which you need to put into production before testing them, applications can be tested before going into production.  It amazes me how many organizations put their applications into production and, by the time they finally get around to testing them, they have already been compromised.  And while automated application code testing solutions are all the rage, we still find that the best results come from traditional human code review, not automated tools.
  • Monitor your network and applications.  This is a double-edged sword.  You know what to look for; however, you have so many ports open that it is nearly impossible to distinguish bad traffic from good traffic.  And it is not necessarily the fault of your IT department, as most packaged applications require an inordinate number of open ports to function properly.  However, the key thing to monitor, more than anything, is any traffic going outside of your network to an unknown location.  When you see traffic going to Eastern Europe, China or any unexpected IP address, your monitoring system should generate an alert, as that is typically a key indicator that you have been compromised.
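The outbound-traffic monitoring above can be sketched as a simple filter over flow records.  This is a hypothetical illustration, not any specific monitoring product: the allowlist, the GeoIP lookup and the flow format are all assumptions.

```python
# Hypothetical sketch: flag outbound flows to destinations outside an
# approved set.  A real deployment would feed this from NetFlow or
# firewall logs and use a GeoIP database for the country lookup.
OUTBOUND_ALLOWLIST = {"203.0.113.10", "198.51.100.25"}  # known partner IPs
APPROVED_COUNTRIES = {"US", "CA"}

def alert_on_suspect_flows(flows, geoip):
    """flows: iterable of (src_ip, dst_ip); geoip: dst_ip -> country code."""
    alerts = []
    for src, dst in flows:
        if dst in OUTBOUND_ALLOWLIST:
            continue  # explicitly approved destination
        country = geoip.get(dst, "??")  # unknown country is suspicious too
        if country not in APPROVED_COUNTRIES:
            alerts.append((src, dst, country))
    return alerts

flows = [("10.0.0.5", "203.0.113.10"), ("10.0.0.7", "192.0.2.99")]
geoip = {"192.0.2.99": "RO"}  # an unexpected Eastern European destination
print(alert_on_suspect_flows(flows, geoip))  # → [('10.0.0.7', '192.0.2.99', 'RO')]
```

The point of the sketch is the shape of the control: everything not explicitly known-good gets alerted on, rather than trying to enumerate known-bad destinations.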

Have a happy holiday season and I will “see” you all next year.

29
May
12

The Failure Of PCI?

Steve Sommers of Shift4 has posted an interesting take on the PCI DSS and its value to merchants and service providers, particularly in the wake of the Global Payments breach.  Steve has asked me to comment on his post and here is my take.

This quote speaks to the frustration a lot of merchants and service providers feel.

“The only thing PCI did was round up a bunch of existing security best practices, compile them into lists, and publish these lists as “guidance” documents.”

No doubt about it, the PCI DSS is an amalgam of various security “best practices” bundled together and published by the PCI SSC.  I remember back in 2003 when the PCI DSS was the Visa CISP, fresh off the presses as an Excel spreadsheet, still embedded with the review comments from Visa and the consultants that created the CISP.

But what were the card brands to do?  Breaches were occurring all over the place when they started down this road.  The media reports that so many Visa or MasterCard accounts were breached and then talks about the merchant or service provider involved.  Visa and MasterCard are trying to protect their brand names because studies show that the public remembers the card brand names but quickly forgets the names of the merchants or service providers breached.  As a result, the card brands feel the need to develop guidelines to protect cardholder data to minimize the number and size of breaches.

Was life better when Visa, MasterCard, American Express, Discover and JCB all had their own individual standards?  You all complain now about one ROC or SAQ, how would you like to be filling out a different form for each card brand every year?  I would guess your answer is a huge ‘No’.

But the bigger question is, if not the PCI DSS, then what standard?  ISO 27002?  FISMA?  FFIEC?  HIPAA HITECH?  I have yet to find a security standard that everyone can agree on, let alone agree to follow.  People complain about every one of the information security standards.  And then there is that ugly mantra of “compliance is not security,” but I have already covered that ground.  So see my posts on why I think that saying is just a dodge.

All security standards are just a starting point or ante into the game.  A number of friends of mine have all remarked that their security programs only have to be a little bit better than everyone else’s to keep them off the target list.  However, that is the key; you need to be better than the other guy.  But will the other guy tell you what he is doing when he has the same strategy?  Standards like the PCI DSS give you that benchmark to start from so you know where you need to be better than the rest.

But the biggest problem with standards all comes down to the fact that humans are really averse to being measured or assessed against a standard.  Why?  It makes people responsible and accountable for what they do and few people want that sort of accountability – we all much prefer “wiggle room” in how our jobs are assessed.

“Then the card brands attached fines and penalties to punish merchants if they failed to comply with PCI “guidance” 100% of the time.”

“To me, the issue is this: PCI SSC promotes their work as “best practices” or ”guidance”,  and then the card brands turn around and flog merchants for not following them when they are breached.”

Steve is right on the mark with these statements.  As he stated earlier in his post and what I frequently state here, security is not perfect.  All security does is reduce the risk of a breach but does not remove that risk in its entirety.  As a result, even with PCI DSS compliance, breaches will still occur, but the goal is to make them much less frequent and a number of orders of magnitude smaller in the amount of data released.

This brings us to the heavy handedness in how the card brands handle breaches.  All I can say is that some of my friends in the forensics field are telling me that there are a number of breaches that they have investigated that were not the result of PCI non-compliance.  So Visa and PCI SSC GM Russo need to back off on their statement that “No compromised entity to date has been found to be in compliance with PCI DSS at the time of the breach.”  According to my sources, this is patently not true anymore and the card brands are not happy about that fact.

The card brands, in particular Visa, seem to refuse the premise that security is not perfect and keep pushing that the PCI DSS, if followed to the letter, is the solution to breaches.  None of this is the truth, and security professionals know those facts.  As a result, we end up with media quotes from the card brands, PCI SSC representatives and security professionals that are at times out and out asinine.  Until we can all come to grips with these facts, we will continue to be playing a game of spin.  And spin gets us nowhere.

“I personally believe that PCI is written in such a way – and interpretations among QSAs vary so much – as to make it impossible for anyone to be 100% compliant 100% of the time.”

The flexibility in the PCI DSS is there because security professionals and their employers would not have it any other way.  Would you prefer a dictatorial standard that specifically calls out solutions and vendors?  What would people be saying if only Cisco and Juniper firewalls, routers and switches were allowed?  What would Microsoft say if Windows was not allowed?  What would other vendors say if only MICROS POS solutions were approved?  What if only VLANs with specific ACLs were the only allowed method of network segmentation?  Can you say lawsuits?

The bigger problem that the PCI SSC needs to address is QSA/ISA training.  A lot of QSAs are great technologists, but would not know a good or bad control environment if it bit them in the posterior.  Fewer QSAs and most ISAs know controls, but would not know a proper firewall or router configuration to save their lives.  And finally, there are very, very few QSAs and some ISAs that know both the technology and controls.  Unfortunately, the PCI SSC has not found a way to winnow out the QSAs and ISAs so that only the ones that know both technology and controls remain.

But even in such a perfect world, each QSAC has its own tolerance of risk.  As a result, what is PCI DSS compliant to one QSAC is not necessarily going to be acceptable to another QSAC because of the risk they are being asked to accept.  Firms like mine are fairly risk averse, so we are going to be more demanding when it comes to what is PCI compliant than other QSACs.  But by the same token, I do not believe we are unreasonable in what we demand for PCI compliance.

At the end of the day, while the PCI DSS is not perfect, it does provide the following benefits to merchants and service providers.

  • It provides a way to help everyone learn from the other guy’s mistakes.  As attack tactics change, so do the PCI standards to address tactics that might not be covered.
  • It gives everyone the baseline of what they need to execute 24x7x365 if they even think they have a better than average chance at security.

Prior to the PCI DSS, Visa CISP and the other standards, it was a crap shoot as to whether or not an organization’s security was going to be up to snuff.  I do not think that is where anyone wants to go.

Steve, I understand your frustration and the frustration and pain of merchants and service providers.  But if what I have stated here is not a net positive, I do not know what is.  Is it perfect?  Nothing in the world is perfect.  But there are some changes that would improve the program and make it seem much less painful and frustrating.  We just need to continue to work on the PCI SSC and the card brands to see the light and make the necessary changes.

25
May
12

What To Do About Insiders

The first posting I did on this subject was to provide an understanding that, despite the news stories, the insider threat is a very real threat and needs to be addressed.  However, what is an organization to do?  Employees and others need to have access to certain information in order to get their jobs done.  What steps should an organization take to minimize the insider threat?

First, I need to be very clear about this.  Even when you do all of what I recommend, you are only minimizing the insider threat.  The insider threat can never be totally mitigated.  Insiders must have access to information that the general public or even your business partners do not.  As a result, should an employee get sloppy with controls or go “rogue,” you can expect to lose whatever information that person had access to.  Remember my mantra – security is not perfect.

I posted some ideas a while back on controls for automation.  Here are my minimum recommendations for manual controls to put into place to minimize the insider threat.

  • Management needs to recognize the importance of management controls.  The “tone at the top” really does mean something when it comes to controls.  However, management needs to understand that these sorts of controls are no absolute guarantee of avoiding issues.  Properly implemented, monitored and adjusted as necessary, such a control environment will confirm to the rest of the organization that management believes that controls are important.  If management does not know what to do regarding management controls, then they should consult with a public accounting firm as they are very aware of control environments and can assist in the design of a control environment.
  • Preventative controls.  Preventative controls, as their name implies, put something in place to prevent a problem.  A prime example of a manual preventative control is requiring a minimum of two signatures on checks.  The larger the amount on the check, the more people that have to sign off on the check.  Under such an approach, multiple people have to collude to defraud the system.  This sort of approach can also be taken for report reviews of inventory, cash on hand and any other metrics that are important to the survival of the organization.  The idea is to ensure that at least two people are involved in these reviews, that they physically sign off on their review, and that they document and start an investigation into any irregularities.
  • Detective controls.  As the name implies, detective controls are controls used to detect problems.  Following the example in preventative controls, the other people signing off on a check or reviewing a critical metric report is a detective control.  If the reviewer feels that something is not right with what they are reviewing, they are obligated to notify their immediate supervisor of the issue and ask the submitter to physically document the situation.  Once documented, the reviewer can then either sign off and accept the explanation, or refuse and further investigate.
  • Corrective controls.  Corrective controls are those controls used to ensure that the preventative and detective controls are focused on the right problems and are going to be able to be relied upon going forward.  Keeping to the theme, in the event of an irregularity being identified, management should then institute a root cause analysis and determine what caused the situation and make the necessary changes to the preventative and detective controls to ensure that people do not try to circumvent the control environment.
  • Hold employees responsible for the control environment.  Management may be responsible for establishing controls, but it is the employees that make the control environment actually work.  Employees should have their key controls evaluated at least annually to reinforce the importance of controls.  In our check example, the people signing off on checks should be evaluated on how many checks with problems are issued by the organization that they were required to sign.
  • Solicit control improvement ideas from employees.  The problem most organizations have with management controls is keeping them relevant.  A common example we see is a problem that occurred ten years ago has been addressed by automated controls in a new computer system, yet management continues to require the manual control to be followed.  Most of the time, employees know exactly what needs to be done, but management does not want to recognize that fact.
  • Have a third party periodically assess your controls.  In addition to employees providing ideas, organizations should periodically invite a third party, such as their accounting firm, to assess the control environment and recommend changes.  A number of years ago I worked with a large organization where we discovered that, because of the way one of their computer systems had recently been modified, checks could be generated that bypassed approvals and oversight.
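The two-signature check control described in the preventative controls bullet above can be sketched in code.  The dollar thresholds and approver names here are hypothetical; the point is that distinct signers are required and the required count scales with the check amount.

```python
# Hypothetical sketch of the multi-signature check control: the larger
# the amount, the more distinct approvers a check requires before issue.
SIGNATURE_THRESHOLDS = [(0, 1), (10_000, 2), (100_000, 3)]  # (amount, signers)

def required_signatures(amount):
    needed = 1
    for threshold, signers in SIGNATURE_THRESHOLDS:
        if amount >= threshold:
            needed = signers
    return needed

def check_approved(amount, signers):
    # Count distinct signers only -- multiple people must collude to
    # push a large check through, which is the point of the control.
    return len(set(signers)) >= required_signatures(amount)

print(check_approved(25_000, ["cfo"]))          # False: needs 2 signatures
print(check_approved(25_000, ["cfo", "ctrl"]))  # True
```

The same shape works for any dual-review control: inventory counts, cash on hand, or critical metric reports.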

For those of you that are going to recommend these minimum controls, my heart goes out to you.  The road ahead is likely to be very bumpy and contentious if your organization has a mediocre control environment.

Something to share with management as you push this sort of project is that there are very measurable benefits to implementing controls.  Every organization that I have worked with over the years has found that a byproduct of their controls projects has been fewer customer complaints and fewer employee screw ups.  Avoiding problems or making them smaller and less impactful on customers can add up to serious savings in time and money.

If you have a mature control environment, take a look at how you can make it better, more effective and more relevant.  If you do not have a mature control environment, then take baby steps.  Look to your accounting area as they will likely have the most robust control environment.  Grab one of those accountants and use them to help you look at other areas that may have problems that controls can address.

Best of luck to all of you on your journey.

30
Aug
11

Compliance Is Not Security – Busted!

You hear this as a common lament from security professionals, “Compliance is not security.”  This remark has always sounded like a dodge to me.  I suppose the reason I think this is that a majority of the people who seem to utter this phrase always seem to have questionable security practices.  But since I hear this complaint so much, I thought I should see if these people have a point.  So like Jamie and Adam on Mythbusters, I decided to see if this saying really held water.

The first thing I did was to get the definition for the term ‘compliance’.  From Webster’s dictionary, ‘compliance’ is defined as:

“Conformity in fulfilling official requirements.”

That definition seems to say it all.  If you are conforming to defined requirements, then you are compliant.  Therefore, whether or not an organization is secure comes down to whether or not the requirements they are complying with are deemed adequate to ensure the security of the organization.  So, it is the standard that is the problem, not the compliance with the standard.  Let us examine the PCI DSS for its completeness in creating a secure environment.

First, the PCI DSS is organized into six domains and twelve requirements.  The six domains are:

  • Build and Maintain a Secure Network
  • Protect Cardholder Data
  • Maintain a Vulnerability Management Program
  • Implement Strong Access Control Measures
  • Regularly Monitor and Test Networks
  • Maintain an Information Security Policy

The naysayers in the group will point to the fact that the PCI DSS seems to be exclusively focused on the security of cardholder data.  While true, I would argue that the following domains are required across an organization’s network to ensure the security of cardholder data.

  • Build and Maintain a Secure Network
  • Maintain a Vulnerability Management Program
  • Implement Strong Access Control Measures
  • Regularly Monitor and Test Networks
  • Maintain an Information Security Policy

So we only lose the second domain of “Protect Cardholder Data” when we focus on the cardholder data aspects.  There are still five of the domains remaining that are relevant to an organization’s entire network.  And even if those are focused on the cardholder data environment, in order to ensure the security of the cardholder data environment, security would have to exist on the other areas of the network not part of the cardholder data environment.  However, change cardholder data to employee data, contract data, board of director meeting minutes, customer data, product engineering data, and you begin to see that the PCI DSS can provide a framework for all of those sensitive data types as well.  As a result, I would argue that the PCI DSS does offer a reasonable framework for providing security.

Now, just so you know, I have not taken a big gulp of the “PCI Kool-Aid.”  Is the PCI DSS the perfect security framework?  Not at all.  For that matter, no security framework that has been developed is 100% perfect.  They all have their own flaws and problems, but for the most part they do an admirable job of securing an organization if properly implemented, monitored and tweaked.  But let us be clear, there is no such thing as a perfect security framework because, as I have said time and again – wait for it – security is not perfect.  Those of you that are implicitly selling security to your management as perfect need to stop.  You are doing the security profession a disservice and are likely cutting your career short as well.

The real reason I think a lot of security professionals regurgitate the saying “compliance is not security” is that they are confusing compliance with consistency.  True security requires 100% consistency in the execution of procedures.  However, not all security can be totally automated, and some security practices must involve people.  Unfortunately, people are fallible, so 100% consistency is not possible when people are involved.  Because we know people are fallible, we develop our security practices such that there are layers to protect us from the occasional missteps of people.  Those layers are structured around the control triad of preventative controls, detective controls and corrective controls.  I have written on the control triad before, so I will not bother to explain these principles.  Needless to say, if your controls do not have these three components, then you are likely not as secure as you might think.

Because at some point people are involved in any control environment, compliance programs are developed to test an organization’s controls over a period of time to ensure that all standards and procedures are executed all of the time.  Compliance programs use frameworks such as the PCI DSS, SOX, HIPAA and the like to determine what areas to assess and to ensure that the requirements documented in these frameworks are being consistently followed.  But the key is that controls are assessed over a period of time, usually six to twelve months.  Assessees are required to provide evidence over the assessment period demonstrating that controls are being enforced 100% of the time.

This is an area where the PCI DSS falls short as a lot of the requirements are only assessed as of a given point of time.  Unfortunately, that given point in time can be different for each requirement.  As a result, it is very easy for an organization to be compliant on paper, but not compliant consistently.  If the PCI SSC could make one change that would improve the PCI assessment process it would be to require assessing organizations over a period of twelve months.  I know of a few QSACs that follow such an approach, but they are few and far between.

The bottom line is that security professionals need to drop the “compliance is not security” mantra as long as the framework used addresses all aspects of security.  All of you out there that are hiding behind this saying need to find some other excuse for not securing your organizations.  “Compliance is not security” is an excuse, and a lame one at that.  Because if you are truly doing your job, then you are complying with some security framework and, therefore, compliance is security.

Myth busted!

09
Jan
11

Network Segmentation – One Last Discussion

Based on feedback I am getting, my previous posts regarding network segmentation are still not getting the point across regarding proper network segmentation.  So, this time I am going to use my post regarding the Control Triad and hopefully everyone will now understand what constitutes appropriate network segmentation.

As a quick refresher, the control triad is composed of preventative controls, detective controls and corrective controls.  All of these control types are required to ensure a secure environment.  The more individual controls you have under each of the three control types, the less likely an incident will occur and the more coverage you should be able to afford your organization should a control go temporarily out of compliance.  However, an individual control can really only appear under one of the control types; otherwise that control is diluted as it becomes a single point of failure, causing the control triad to not function properly.  With that explanation, let us look at proper network segmentation from the control triad perspective.

Preventative Controls

The following would be considered the minimum preventative controls when talking about network segmentation.

  • Firewall(s) with rules that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • Router(s) with ACLs that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • VLAN(s) with ACLs that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • Private wireless network(s) use a separate VLAN(s) from the cardholder data environment with access controls enforced for any access to the cardholder data environment from wireless.  Private wireless access points are configured with WPA2 using Enterprise authentication and AES 128-bit or greater encryption.
  • Software firewall on server(s) in the cardholder data environment that restricts traffic to a limited number of ports/services to/from the server(s).
  • Restricted administrative access to infrastructure devices in or controlling access to the cardholder data environment.
  • Access controls that restrict administrative and end-user access to applications in the cardholder data environment or that access the cardholder data environment.

Remember, when I say “limited number of ports to/from” I mean a very limited number of ports.  Yes, there may be instances where you might have 100 ports open to/from your cardholder data environment, but you better have a valid business reason for every one of those 100 ports.  And just so we are all clear, a valid business reason documents the reason why the port needs to be open, the risk presented to the cardholder data environment by the port being open, the actions that have been taken to minimize the risks, and management approval of the port being open.  And the business reason for opening a port needs to be more than just “it needs to be open” or “the application will not function unless it is open.”  You need to document why it has to be open so that in the event of a breach you can quickly rule out the ports that might have been the cause based on the type of attack.
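A minimal sketch of the port documentation described above might look like the following.  The field names and the example rule are assumptions for illustration only; the idea is simply that a rule without a reason, risk, mitigations and an approval on record is not justified.

```python
# Hypothetical record of the documentation a firewall-rule justification
# should carry; a breach investigation can then filter open ports by the
# risks and mitigations recorded here.
from dataclasses import dataclass, field

@dataclass
class PortJustification:
    port: int
    protocol: str
    business_reason: str = ""            # why the port must be open
    risk: str = ""                       # risk to the cardholder data environment
    mitigations: list = field(default_factory=list)
    approved_by: str = ""                # management sign-off

    def is_complete(self):
        # "It needs to be open" is not a reason -- require every field.
        return all([self.business_reason, self.risk,
                    self.mitigations, self.approved_by])

rule = PortJustification(443, "tcp",
                         business_reason="eCommerce storefront HTTPS",
                         risk="Exposes web tier to Internet attack",
                         mitigations=["WAF", "quarterly ASV scans"],
                         approved_by="CISO")
print(rule.is_complete())  # → True
```

An undocumented rule, such as `PortJustification(23, "tcp")`, fails the completeness check and should never make it past change management.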

When we talk about restricting access, you need to be restricting access.  In small and mid-sized organizations, restricting access might not be feasible.  In those cases, forcing personnel to go to management to gain access is the way to properly provide control.  In large organizations, what we are talking about is restricting access to fewer personnel than everyone that has access to normal production.  The idea is that not everyone in support or business users should have access to the cardholder data environment.  The rule here is the fewer the better but do not make it so few that you create issues.

If you want to go the extra mile, the following controls can further enhance your security.  However, for some organizations, they come at a cost in operational efficiency that is unacceptable.

  • Disable all unused physical jack connections on all infrastructure devices.  Any activation of a jack requires a service ticket and standard management approvals.
  • Disable dynamic host configuration protocol (DHCP) in all retail locations.
  • Public wireless in retail facilities provided by a separate third party and on a separate circuit that connects to the Internet.
  • Required use of encrypted, two-factor authenticated virtual private network (VPN) connections from any wireless network to gain access to any internal network.
  • Access to the cardholder data environment is not allowed for users connecting through any remote access connection.

Detective Controls

The following would be considered the minimum detective controls when talking about network segmentation.

  • Network and host intrusion detection/prevention systems that monitor the aforementioned firewalls, routers, VLANs and servers that are protecting the cardholder data environment and generate alerts to appropriate personnel when an intrusion or incident is detected.
  • Daily analysis of infrastructure device configurations to ensure that only approved configuration changes are made to these devices.
  • Daily monitoring of devices to alert on any foreign devices that are added or when devices are removed from the network.
  • Daily analysis of log data from the preventative controls to find potentially anomalous log entries that indicate a variance in the preventative controls or a potential incident.
  • Change management records for all infrastructure devices, servers and applications in-scope for PCI compliance.
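The daily configuration analysis in the list above boils down to diffing each device’s running configuration against its approved baseline and alerting on any drift that lacks a matching change ticket.  A minimal sketch, assuming plain-text configurations; the sample router lines are hypothetical.

```python
# Hypothetical sketch of the daily configuration check: compare today's
# device configuration against the approved baseline and return any
# line-level drift for investigation.
import difflib

def config_drift(baseline: str, current: str):
    diff = difflib.unified_diff(baseline.splitlines(),
                                current.splitlines(),
                                "baseline", "current", lineterm="")
    # Keep only added/removed configuration lines, not the diff headers.
    return [line for line in diff if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

baseline = "hostname edge-rtr\nip access-list 101 permit tcp any any eq 443"
current = ("hostname edge-rtr\n"
           "ip access-list 101 permit tcp any any eq 443\n"
           "ip access-list 101 permit tcp any any eq 23")  # unapproved telnet
print(config_drift(baseline, current))
```

Any non-empty result that cannot be matched to an approved change record is exactly the kind of anomaly the detective controls exist to surface.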

The key here is to generate alerts should any anomalous activity be detected.  But that is the rub.  What is anomalous?  Anomalies are not always the easiest things to identify or define.  As a result, your detective controls may take a while to fine tune.  However, the organizations that do the best job of managing their detective controls organize their anomalies by the PCI DSS requirements they are trying to meet.  This allows them to tweak their anomaly detection capabilities by PCI DSS requirement.

Then there is the issue of what do you do if you detect an anomaly?  Most of the time, an anomaly is not dealt with because of one of two reasons.  The first reason is because the detection solutions are new and are not functioning properly because no one has taken the time to tune them.  The second reason is that because of changes in the environment, the detective controls need to be re-tuned to reflect the changes.  Regardless of why, the detective controls need to be adjusted so that they are not generating excess false positives resulting in people chasing phantom issues.

If you want to go the extra mile, the following controls can further enhance your security.  While these sorts of tools are available as open-source solutions, there are also many commercial solutions as well.  Regardless of whether they are commercial or open-source solutions, tools that will perform these functions typically take a significant amount of time and effort to tune so that they provide the right amount of information for the right incidents.

  • Real-time analysis of infrastructure device configurations to ensure that only approved configuration changes are made to these devices.
  • Real-time monitoring of devices to alert on any foreign devices that are added or when devices are removed from the network.
  • Real-time analysis of log data from the preventative controls to find potentially anomalous log entries that indicate a variance in the preventative controls or a potential incident.
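A bare-bones sketch of the real-time log analysis idea: follow a log file as it grows and flag entries matching anomaly patterns. The patterns and the tail-follow loop are placeholders; a real deployment would use a SIEM or log-management product rather than hand-rolled code.

```python
import re
import time

# Illustrative anomaly patterns; real ones come from your tuning effort.
ANOMALY_PATTERNS = [re.compile(p) for p in (r"DENY", r"config changed", r"new MAC")]

def scan_line(line: str) -> bool:
    """True if a log line matches any anomaly pattern."""
    return any(p.search(line) for p in ANOMALY_PATTERNS)

def follow(path: str):
    """Yield anomalous lines as they are appended to the log (tail -f style)."""
    with open(path) as fh:
        fh.seek(0, 2)                 # start at the current end of the file
        while True:
            line = fh.readline()
            if not line:
                time.sleep(0.5)       # wait for new entries
                continue
            if scan_line(line):
                yield line.rstrip()
```

The difference from the daily controls above is purely latency: the same patterns, applied continuously instead of in a batch.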

All real-time provides is instantaneous alerting.  Most small and even mid-sized merchants do not need real-time analysis and alerting.  It is not that they cannot use it; it is likely overkill for their environments given the threat of attack.  However, for governmental agencies/departments, financial institutions, health care organizations and most large merchants, real-time analysis and alerting is mandatory.

And if you think tuning for daily reviews was painful, tuning real-time analysis and alerting systems is at least twice as painful.

Corrective Controls

The following would be considered the minimum corrective controls when talking about network segmentation.

  • Change management procedures.
  • Incident response plan(s) for addressing any issues identified by the detective controls.
  • Root Cause Analysis (RCA) procedures.
  • Action plans that result from the incident response process that require changes to the preventative and/or detective controls.  At a minimum, the action plans must document the correction needed, the person(s) responsible for getting the correction completed and the timeframe for the correction to occur.
  • Internal audit review of the preventative and detective controls.
  • QSA review of the preventative and detective controls.
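The action-plan requirement above can be captured in something as simple as a record with the three mandatory fields (correction, responsible person, timeframe) plus a check for overdue items. The field names are my own shorthand, not terms from the PCI DSS.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionPlan:
    correction: str   # what must change in the preventative/detective controls
    owner: str        # person responsible for completing the correction
    due: date         # timeframe in which the correction must occur
    completed: bool = False

def overdue(plans: list["ActionPlan"], today: date) -> list["ActionPlan"]:
    """Return open action plans whose due date has passed."""
    return [p for p in plans if not p.completed and p.due < today]
```

The overdue list is exactly what internal audit (and later the QSA) should be reviewing: corrections that were identified but never closed.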

Here is where a lot of organizations miss the boat.  You have detected an anomaly and you dealt with it, but you do not analyze why it occurred, or you do an analysis but then do nothing to correct the issues identified.  As a result, the anomaly continues to be encountered, but actions are not taken to minimize or even eliminate occurrences.  This is why the advanced persistent threat (APT) is successful.  APT relies on the fact that eventually all organizations get sloppy and do not take the corrective actions needed to maintain or even improve their controls.

There may be a number of preventative, detective and corrective controls that I have missed or did not consider, since everyone has a unique environment.  At a minimum, if your organization has implemented these controls and they are all operating effectively, you are going to be better off than the majority of organizations out there and much less likely to have a serious incident that could result in a breach.

And that is the problem all organizations face: keeping these controls functioning effectively every day without missing a beat.  That is why we have defense in depth.  If one control is not functioning properly, other controls will cover for it in the interim until it is back functioning properly.

Finally, as I always like to remind people, implementing all of these recommendations does not make you invincible.  All these recommendations do is make the likelihood of an incident, and the potential damage resulting from an incident, lower than if you had little or no controls in place.  How much lower depends on a number of factors, but the risk will be lower.  And after all, it is all about lowering risk.

Hopefully the issue of what constitutes appropriate network segmentation has now been put to rest.

13
Mar
10

Security Is Not Perfect

I am really getting tired of people complaining about data breaches, the lack of security and how it is all the fault of security initiatives such as the PCI DSS, the QSA involved or the carelessness of the organization that was breached.  It is not that QSAs, the PCI DSS and the like bear no responsibility in all of this, but that responsibility seems to be over-allocated to QSAs and the PCI DSS rather than focused on the real cause.  According to a recent Verizon Business Services report, the real cause is usually human error.

Worse still are the comments from credit card company executives that seem to portray the PCI standards as ‘THE’ standards for securing cardholder data.  What these people fail to either understand or acknowledge is that even if every security measure in the PCI DSS were put in place and worked exactly as it should, breaches would still occur, albeit at a much slower pace and with less information released.  Why?  Because security is not perfect.  It never was and never will be.

But, but, but …

If you have told people that your security systems are absolutely, positively protecting your organization’s information assets and that nothing can go wrong, shame on you.  You need to better manage people’s expectations regarding security.  This expectation by people that security is perfect is something that needs to be debunked immediately.  Yes, the security measures put in place are protecting the organization’s information assets, BUT …   Those measures are not perfect, never will be perfect and at some point will break down resulting in some sort of incident.  Your job is to make sure that your security measures ensure that any incident that occurs is as small as possible.

But, but, but …

No ‘buts’ about it.  Any security professional knows that security only reduces or minimizes risks; it does not remove them all.  Yes, some risks may be eliminated when proper security practices are implemented.  Nevertheless, for the most part, some security risks will always remain, regardless of the security measures put in place.  Proper management of these remaining security risks should minimize the risks as much as possible.  However, minimized risk does not imply that the threat or vulnerability cannot be leveraged.  Minimized risk means that controls are in place to make the likelihood of a compromise using any remaining vulnerability as low as possible.  As a result, security programs such as the PCI DSS are focused on keeping all but the most dedicated attackers at bay.  It is what I call the “99-1 rule.”  Security focuses on protecting your organization from 99% of attackers.  However, the remaining 1% of attackers will cost you too much time, resources and effort to keep them out.  Your hope is to at least be notified when that 1% attacks.

A prime example of security not being perfect is the banking industry.  Banks have invested heavily in a variety of security measures including vaults, silent alarms, video recording, dye packs and other deterrents.  Vaults have definitely reduced the amount of money lost in a robbery.  However, one could argue that silent alarms, video recording and dye packs have not had a significant impact on the number of bank robberies.  Where silent alarms, video recording and dye packs come into play is in reducing the number of successful robberies.

That is what the PCI DSS and similar security standards are all about.  They were developed to make a successful breach as difficult to accomplish as possible.

However, there are people out there who are dedicated to breaching security for whatever reason.  Because you never know whether you are in their sights, you rely on the control triad and defense in depth, the idea being that these will give you a “heads up” if your organization is being attacked.  But be careful and do not become complacent.  Dedicated attackers are like hunters.  They research their prey so that they know about the defenses of their target, and they develop sophisticated plans to defeat those defenses or at least keep them at bay.  These are people skilled in their craft.  They will take a part-time job on the cleaning staff of a building where their prey is located so that they can scope out their quarry and determine where the weaknesses are.  If they need other expertise, they will either acquire it through training or team with someone who has it.  In the end, if there is a way, these people will find it and exploit it.

If you want to see these concepts in action, watch any of the Ocean’s 11, 12 or 13 movies.  They may be campy, but the planning concepts used in these movies mimic just how an attacker goes about planning to obtain your databases or other information.  And if you do not believe that such people or threats really exist, read my post regarding the Advanced Persistent Threat (APT).

The key to remember is that you are never going to remove all of the risk, you are only going to manage it.  With proper controls in place, threats can be managed and the risk to your assets minimized.  However, do not let your guard down because that is when attackers will take advantage.

06
Mar
10

Network Segmentation – Take 2

I have had a couple of discussions recently regarding what constitutes good network segmentation.  Apparently, my original post was just too cryptic, so I am going to use some examples in this post to clarify where people are going wrong.

The PCI DSS gives very little guidance on network segmentation.  In fact, the only statement approaching a definition says: “Network segmentation can be achieved through internal network firewalls, routers with strong access control lists or other technology that restricts access to a particular segment of a network.”  But those are merely the mechanics of network segmentation.  This definition does not specify or illuminate the additional controls required to ensure segmentation, which is why I wrote the original post.

In my first example, the network in question is segmented by VLANs.  The cardholder data environment (CDE) is contained in one VLAN and there are another eight VLANs defined.  All VLANs are internal and none face the Internet.  Access control lists (ACL) have been implemented to control the communications between the various VLANs and the CDE VLAN.  Monitoring of all of the VLANs has been implemented through a variety of methods including network monitors, host monitors and system/event log monitoring and analysis.  Network administrators watch consoles that surface any critical alerts that could indicate a potential attack or compromise.  This sounds pretty good, does it not?  The problem is that it is all in the details, and the details tell a different story.

In reviewing the VLANs’ ACLs, we determined that two of the VLANs have TCP and UDP ports 1 through 65535 open to the CDE VLAN.  Whoa!  Every port is open to the CDE VLAN from these two VLANs?  Yes, that is correct.  This is not what the PCI SSC had in mind with “strong access control lists.”  Digging further, we inquired as to why this condition exists.  We were told, “We were unable to determine what the applications needed to have open between these VLANs, so rather than break anything, we just opened everything to be safe.”  To be safe?  ‘Safe’ is a term that means something different to each person who uses it.  In this case, because the two VLANs were internal, the other VLANs were apparently considered ‘safe’ as well.

But a lot of network administrators would point to the monitoring as the way they control things.  Are you serious?  I do not care how much monitoring you do.  With every port open, that monitoring is going to generate enough false positives to make identifying the real threats like finding a needle in a haystack.  And this was confirmed later on when we observed the network administrators who monitor the network.  They were ignoring almost everything that came up on their screens.  When we questioned them about this, they said, “We have tried to tune the alerts, but have not been able to significantly reduce the false positives.  We get around 10,000 to 25,000 alerts a day.  So we do the best we can to find the real threats.”  The best they can?  Security is not forgiving, least of all for people who are doing ‘the best they can’.

The moral of this example is that if you have every port or close to every port open, you cannot consider your network properly segmented.  I do not care what the other controls are that you believe are in place.  You have to be realistic.  And justifying having all of those ports open has to be more than implying you were too lazy and did not want to make the effort to find the real answers.
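A sanity check like the sketch below would have caught the "every port open" rules immediately. The rule representation is a simplified stand-in, not any vendor's ACL syntax, and the threshold is an arbitrary illustration of "too broad to be segmentation."

```python
def overly_broad(rules: list[dict], threshold: int = 1024) -> list[dict]:
    """Flag rules whose port range is wide enough to defeat segmentation.

    Each rule is assumed to look like:
      {"src": "VLAN20", "dst": "CDE", "port_lo": 1, "port_hi": 65535}
    A rule spanning more ports than the threshold is flagged for review.
    """
    return [r for r in rules if (r["port_hi"] - r["port_lo"] + 1) > threshold]
```

A flagged rule is not automatically wrong, but it demands the documented business justification and management approval discussed later in this post; "we opened everything to be safe" does not qualify.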

My other example involves a network that does have a limited number of ports open between the CDE VLAN and the other VLANs, albeit quite a few.  They also have monitoring in place, and their network administrators are very diligent in ensuring that alerts are addressed as quickly as possible.  Unlike my first example, these folks see around 300 to 500 alerts, of which 10% to 15% are false positives.  The problem is with their documentation.  In reviewing the firewall rules that segment the VLANs, we documented all of the ports open to/from the CDE VLAN.  We interviewed the Manager of the network management and administration department and inquired as to the business reason for each open port.  Of the 100 or so ports defined in the ACLs, they could only give us business reasons for about 20% of them.  Heaven forbid they should document the reason in the configuration file, and there is no other documentation available.  The Manager even tried to find documentation in the help desk system where all changes are logged, but even after refining the search criteria, there were just too many records to sift through in our one-hour meeting.  There is not even proof that management knows these ports are open, understands the risks involved or approved opening them.

The moral here is that documentation is the foundation from which you build.  If you have a shaky foundation, you will have shaky security and are likely a candidate for a compromise and breach.  This is why documentation is important.  If you cannot remember why ports were opened, users were allowed access to data and other security relevant issues, how can you even think you are secure?  The answer is you cannot be secure if you cannot answer basic questions.

But it gets better.  Earlier in our meeting, this same individual had confirmed that they were the one who reviewed the firewall rules quarterly, and showed us emails to prove it.  Then, as we were going through the CDE ACLs, they said, “Oh, that rule should be removed.  It was for a business partner that we have not done business with in more than four years.”  Now, do you seriously think I believe these firewall rules are really reviewed quarterly when you admit that a given rule should have been removed four years ago?  We documented four more firewall rules that should have been changed or removed.  It is situations like this that cause a QSA to shudder and then wonder what other ugly things are under the rocks, and just how far you need or want to dig to find them.

The moral here: do not tell the QSA what they want to hear when you know you will have to contradict yourself later on.  All that does is make you look incompetent.  But this situation also illustrates an important point about how a QSA conducts an assessment.  QSAs rely not only on interviews and documentation, but also on observations, to ensure that organizations not only talk the talk but also walk the walk.

So what then is proper network segmentation?  A properly segmented network is much more than just technology.

The foundation of a properly segmented network starts with the control triad of preventative, detective and corrective controls.  Preventative network controls are going to be firewall rules and VLAN ACLs and any other controls that prevent or control access.  Detective network controls are going to be related to the monitoring you implement.  Monitoring can be real time and/or log analysis after the fact, but it should not be limited to just access to/from the CDE.  Monitoring also needs to include monitoring the network traffic for anomalous traffic.  Finally, you need corrective controls to ensure that any issues discovered with the preventative and detective controls are addressed as soon as possible.  Corrective controls are usually generated as action items created from such things as the lessons learned from an incident response plan or findings from an audit.

Once you have decided on the controls you will implement, you then need to create documentation that supports those controls.  For networks, the key documentation covers every port that is open inbound to or outbound from the CDE.  Each of those ports should be formally approved by management along with an assessment of the risk presented by having the port open.  And that risk analysis needs to include not just the port in question but, if necessary, any other relevant ports, as certain combinations of ports may increase or decrease the risk.  This risk analysis is important for a number of reasons.  First, it documents the basic analysis of risk and provides the rationale for the decision made at the time.  That documentation can also save you if a breach occurs, as you can understand what people were thinking when they originally opened the port and also understand potential methods that might have been used to cause the breach.  It is also important for the quarterly reviews, as you can use it to refresh your memory as well as to assist in making changes to the rules if business conditions change.  Yes, I know firsthand that documentation is the last thing anyone wants to do.  But without it, I guarantee you will not remember six months or more down the road why you did what you did and for whom.  And in the security business, it is that sort of knowledge that can mean the difference between being secure and being a target.
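A minimal sketch of such a port register, tying every open CDE port to a business justification, a management approval and a quarterly review date. The field names and the 90-day review window are assumptions for illustration, not PCI DSS terminology.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OpenPort:
    port: int
    direction: str       # "inbound" or "outbound" relative to the CDE
    justification: str   # business reason for the port being open
    approved_by: str     # manager who accepted the documented risk
    approved_on: date
    last_reviewed: date

def review_findings(register: list["OpenPort"], today: date,
                    max_age_days: int = 90) -> list[int]:
    """Return ports whose quarterly review is overdue or that lack a
    documented business justification - the things a QSA will ask for."""
    return [p.port for p in register
            if not p.justification.strip()
            or (today - p.last_reviewed).days > max_age_days]
```

Run against the second example above, this check would have surfaced the business partner rule that sat unreviewed for four years, along with every port missing a business reason.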

The next item that needs to be documented is the users, programs, services and organizations that have access to the CDE.  In the case of programs and services, this should be tied to the aforementioned list of ports open.  In a breach, this documentation will reduce the number of likely suspects of where the breach came from.  As a result, you can see why it is important to limit the number of people, programs and organizations that have access to the CDE.

The final piece of documentation that needs to exist is what should be done in the event a problem or an alert is generated.  If people do not know their responsibilities for providing feedback, then alerts will be missed or ignored and problems may not be addressed as quickly as they should be.  Responses to problems or alerts should include detail regarding the conditions that created the problem or alert, the steps taken to address it and any issues that may have resulted from addressing it.  If the problem or alert is not addressed in the required timeframe, there needs to be an escalation process so that it receives the necessary visibility from management should it go unaddressed.
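The escalation process can be as simple as walking up a chain one level per missed response window. The four-hour window and the titles in the chain below are hypothetical placeholders; the point is that ownership changes automatically when an alert goes unacknowledged.

```python
from datetime import datetime, timedelta

RESPONSE_WINDOW = timedelta(hours=4)   # assumed SLA for a first response
ESCALATION_PATH = ["analyst", "team lead", "security manager", "CISO"]

def current_owner(raised_at: datetime, now: datetime, acknowledged: bool) -> str:
    """Return who should currently own an alert, escalating one level
    in the chain for each response window that has passed unacknowledged."""
    if acknowledged:
        return ESCALATION_PATH[0]
    missed_windows = int((now - raised_at) / RESPONSE_WINDOW)
    return ESCALATION_PATH[min(missed_windows, len(ESCALATION_PATH) - 1)]
```

The design choice worth noting is that escalation is computed from timestamps rather than requiring someone to remember to escalate; an alert cannot silently stall at the first tier.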

I hope these additional examples clarify what proper network segmentation entails.



