Posts Tagged ‘corrective controls’



09 Jan 11

Network Segmentation – One Last Discussion

Based on the feedback I am getting, my previous posts regarding network segmentation are still not getting the point across.  So, this time I am going to use my post regarding the Control Triad, and hopefully everyone will then understand what constitutes appropriate network segmentation.

As a quick refresher, the control triad is comprised of preventative controls, detective controls and corrective controls.  All three control types are required to ensure a secure environment.  The more individual controls you have under each of the three control types, the less likely an incident will occur and the more coverage you can afford your organization should a control go temporarily out of compliance.  However, an individual control should really only appear under one of the control types; otherwise that control is diluted, becomes a single point of failure and causes the control triad to not function properly.  With that explanation, let us look at proper network segmentation from the control triad perspective.

Preventative Controls

The following would be considered the minimum preventative controls when talking about network segmentation.

  • Firewall(s) with rules that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • Router(s) with ACLs that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • VLAN(s) with ACLs that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • Private wireless network(s) use a separate VLAN(s) from the cardholder data environment with access controls enforced for any access to the cardholder data environment from wireless.  Private wireless access points are configured with WPA2 using Enterprise authentication and AES 128-bit or greater encryption.
  • Software firewall on server(s) in the cardholder data environment that restricts traffic to a limited number of ports/services to/from the server(s).
  • Restricted administrative access to infrastructure devices in or controlling access to the cardholder data environment.
  • Access controls that restrict administrative and end-user access to applications in the cardholder data environment or that access the cardholder data environment.

Remember, when I say “limited number of ports to/from,” I mean a very limited number of ports.  Yes, there may be instances where you might have 100 ports open to/from your cardholder data environment, but you had better have a valid business reason for every one of those 100 ports.  And just so we are all clear, a valid business reason documents why the port needs to be open, the risk presented to the cardholder data environment by the port being open, the actions that have been taken to minimize those risks, and management’s approval of the port being open.  And the business reason for opening a port needs to be more than just “it needs to be open” or “the application will not function unless it is open.”  You need to document why it has to be open so that in the event of a breach you can quickly rule out the ports that might have been the cause based on the type of attack.
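To make this concrete, here is a minimal sketch, in Python, of what a port justification record might capture.  The field names and the completeness check are my own illustration, not anything mandated by the PCI DSS; the point is simply that every open port carries a documented reason, risk, mitigation and approval.

    from dataclasses import dataclass

    @dataclass
    class PortJustification:
        """One record per port opened to/from the cardholder data environment."""
        port: int
        protocol: str          # "tcp" or "udp"
        direction: str         # "inbound" or "outbound"
        business_reason: str   # why it must be open, not just "the app needs it"
        risk: str              # risk presented to the CDE by the open port
        mitigation: str        # actions taken to minimize that risk
        approved_by: str       # the manager who signed off
        approved_on: str       # ISO date of the approval

    def unjustified(rules):
        """Return the records missing a reason, a risk analysis or an approval."""
        return [r for r in rules
                if not (r.business_reason and r.risk and r.approved_by)]

Run a check like this quarterly; any record it returns is a port you cannot defend in front of a QSA, let alone after a breach.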

When we talk about restricting access, you need to actually be restricting access.  In small and mid-sized organizations, restricting access to a subset of personnel might not be feasible.  In those cases, forcing personnel to go to management to gain access is the way to properly provide control.  In large organizations, what we are talking about is restricting access to fewer personnel than everyone that has access to normal production.  The idea is that not everyone in support, nor every business user, should have access to the cardholder data environment.  The rule here is the fewer the better, but do not make it so few that you create issues.

If you want to go the extra mile, the following controls can further enhance your security.  However, for some organizations, they come at a cost in operational efficiency that is unacceptable.

  • Disable all unused physical jack connections on all infrastructure devices.  Any activation of a jack requires a service ticket and standard management approvals.
  • Disable dynamic host configuration protocol (DHCP) in all retail locations.
  • Public wireless in retail facilities provided by a separate third party and on a separate circuit that connects to the Internet.
  • Required use of encrypted, two-factor authenticated virtual private network (VPN) connections from any wireless network to gain access to any internal network.
  • Access to the cardholder data environment is not allowed for users connecting through any remote access connection.

Detective Controls

The following would be considered the minimum detective controls when talking about network segmentation.

  • Network and host intrusion detection/prevention systems that monitor the aforementioned firewalls, routers, VLANs and servers protecting the cardholder data environment and generate alerts to appropriate personnel when an intrusion or incident is detected.
  • Daily analysis of infrastructure device configurations to ensure that only approved configuration changes are made to these devices (a sketch of such a check follows this list).
  • Daily monitoring of devices to alert on any foreign devices that are added or when devices are removed from the network.
  • Daily analysis of log data from the preventative controls to find potentially anomalous log entries that indicate a variance in the preventative controls or a potential incident.
  • Change management records for all infrastructure devices, servers and applications in-scope for PCI compliance.
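As an illustration of the daily configuration analysis in the second bullet, here is a minimal sketch in Python.  It assumes you keep an approved baseline copy of each device configuration on disk; the directory paths are hypothetical, and a real deployment would pull the running configurations from the devices themselves.

    import difflib
    from pathlib import Path

    def config_drift(baseline_dir, current_dir):
        """Compare today's device configurations against approved baselines.

        Yields (device, diff) for every device whose running configuration
        no longer matches the approved baseline."""
        for baseline in Path(baseline_dir).glob("*.cfg"):
            current = Path(current_dir) / baseline.name
            if not current.exists():
                yield baseline.stem, ["running configuration not collected"]
                continue
            diff = list(difflib.unified_diff(
                baseline.read_text().splitlines(),
                current.read_text().splitlines(),
                lineterm=""))
            if diff:
                yield baseline.stem, diff

    # Any drift is either an approved change (so update the baseline) or an alert.
    for device, diff in config_drift("/srv/baselines", "/srv/configs/today"):
        print("ALERT: unapproved change on", device)

Any difference the check reports is either an approved change, in which case the baseline gets updated through change management, or an alert for someone to investigate.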

The key here is to generate alerts should any anomalous activity be detected.  But that is the rub.  What is anomalous?  Anomalies are not always the easiest things to identify or define.  As a result, your detective controls may take a while to fine-tune.  However, the organizations that do the best job of managing their detective controls organize their anomalies by the PCI DSS requirements they are trying to meet.  This allows them to tweak their anomaly detection capabilities by PCI DSS requirement.

Then there is the issue of what you do if you detect an anomaly.  Most of the time, an anomaly is not dealt with for one of two reasons.  The first is that the detection solutions are new and are not functioning properly because no one has taken the time to tune them.  The second is that, because of changes in the environment, the detective controls need to be re-tuned to reflect those changes.  Regardless of why, the detective controls need to be adjusted so that they are not generating excess false positives resulting in people chasing phantom issues.

If you want to go the extra mile, the following controls can further enhance your security.  While these sorts of tools are available as open-source solutions, there are many commercial solutions as well.  Regardless of whether they are commercial or open-source, tools that perform these functions typically take a significant amount of time and effort to tune so that they provide the right amount of information for the right incidents.

  • Real-time analysis of infrastructure device configurations to ensure that only approved configuration changes are made to these devices.
  • Real-time monitoring of devices to alert on any foreign devices that are added or when devices are removed from the network.
  • Real-time analysis of log data from the preventative controls to find potentially anomalous log entries that indicate a variance in the preventative controls or potential incident.

All real-time does is provide you with instantaneous alerting.  Most small and even mid-sized merchants do not need real-time analysis and alerting.  Not that they cannot use it; it is likely overkill for their environments given the threat of attack.  However, for governmental agencies/departments, financial institutions, health care organizations and most large merchants, real-time analysis and alerting is mandatory.

And if you think tuning for daily reviews was painful, tuning real-time analysis and alerting systems is at least twice as painful.

Corrective Controls

The following would be considered the minimum corrective controls when talking about network segmentation.

  • Change management procedures.
  • Incident response plan(s) for addressing any issues identified by the detective controls.
  • Root Cause Analysis (RCA) procedures.
  • Action plans that result from the incident response process that require changes to the preventative and/or detective controls.  At a minimum, the action plans must document the correction needed, the person(s) responsible for getting the correction completed and the timeframe for the correction to occur.
  • Internal audit review of the preventative and detective controls.
  • QSA review of the preventative and detective controls.

Here is where a lot of organizations miss the boat.  You have detected an anomaly and you dealt with the anomaly, but you never analyze why the anomaly occurred, or you do an analysis but then do nothing to correct the issues that were identified.  As a result, the anomaly continues to be encountered, but actions are not taken to minimize or even eliminate occurrences.  This is why the advanced persistent threat (APT) is successful.  APT relies on the fact that eventually all organizations get sloppy and do not take corrective actions to maintain or even improve their controls.

There may be a number of preventative, detective and corrective controls that I missed or did not consider, since everyone has a unique environment.  At a minimum, if your organization has implemented these controls and they are all operating effectively, you are going to be better off than the majority of organizations out there and much less likely to have a serious incident that could result in a breach.

And that is the problem all organizations face: keeping these controls functioning effectively every day without missing a beat.  That is why we have defense in depth.  If one control is not functioning properly, there are other controls that provide coverage in the interim until the failed control is functioning properly again.

Finally, as I always like to remind people, implementing all of these recommendations does not make you invincible.  All these recommendations do is make the likelihood of an incident, and the potential damage resulting from an incident, lower than if you had little or no controls in place.  How much lower depends on a number of factors, but the risk will be lower.  And after all, it is all about lowering risk.

Hopefully the issue of what constitutes appropriate network segmentation has now been put to rest.

26 Sep 10

Zero Trust Security – The Technical Discussion

With the cultural issues out of the way, let us discuss some technical details.  Given the state of security technology and where security leadership sits these days, I question whether Zero Trust can be implemented.

Essentially, with a ‘Zero Trust’ approach, we are talking about DMZs.  However, instead of our usual externally facing DMZs, we are also talking about DMZs that are internally facing.  These are no ordinary DMZs; these are highly monitored and controlled DMZs with IDS/IPS, NAC, full logging and everything else required to ensure security.  These technologies are not for the faint of heart, as they require a lot of planning in order to get them right.

Where a lot of organizations get things wrong is that they believe that all of these security technologies are like a Ronco Showtime Rotisserie oven; you just “set it and forget it.”  If only security worked that way, but it does not.  As a result, one of the first stumbling blocks organizations interested in Zero Trust face is staffing, since Zero Trust will require a significant amount of attention both from a security perspective and from the help desk.  I do not think that we are talking about a significant increase in security and help desk personnel, but existing staffing levels are likely to be insufficient in a Zero Trust environment.

The next issue that I see is with the technology itself.  Most security technology is designed for Internet-facing use, not internal use.  While these solutions can be used internally, they tend to create issues when used internally because of their severe responses to any perceived attacks.  As a result, in order to use these solutions, security professionals have to turn off or turn down certain features or functions because they get in the way of getting business done.  Then there are the applications themselves.  I cannot tell you how frustrated I get with vendor and in-house developers that cannot tell you, from a networking perspective, how their applications work.  As a result, security professionals are required to do extensive research to figure out what ports/services an application requires, if they even do such research.  That then results in what we tend to see on internal networks with internal DMZs: lots of ports/services open into the DMZ because no one wants the application to break.  In a Zero Trust approach, this is not acceptable.
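As a sketch of the kind of research involved, the snippet below snapshots the TCP ports a server is actually listening on while the application runs.  It assumes the Linux ss utility and its default column layout, and the parsing is deliberately rough; it is a starting point for building the port list the developers could not provide, not a finished tool.

    import subprocess

    def listening_ports():
        """List (port, process) pairs for every listening TCP socket.

        Runs the Linux 'ss' utility; process names require root privileges."""
        out = subprocess.run(["ss", "-tlnp"],
                             capture_output=True, text=True).stdout
        results = []
        for line in out.splitlines()[1:]:        # skip the header row
            fields = line.split()
            if len(fields) < 4:
                continue
            port = fields[3].rsplit(":", 1)[-1]  # local address is column four
            process = fields[-1] if fields[-1].startswith("users:") else "unknown"
            results.append((port, process))
        return results

Compare what the server actually listens on against what the firewall change request asked for, and you quickly find out how much of the request was guesswork.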

Then there is logging and the management and maintenance of log data.  It still amazes me the amount of push back I receive on logging and the management of log data.  Security professionals and managers complain and complain about the amount of data that needs to be retained and the length of time it needs to be retained.  Hello!  This is the only way you will ever know what went wrong and how it went wrong so that you can fix it.  But the security information and event management (SIEM) industry has not helped things by delivering solutions that can cost as much as a large Beverly Hills mansion and are as easy to implement as an ERP system.  While there are open source solutions, the usability of these solutions is questionable at best.  The PCI DSS mandates that log data be reviewed at least daily, yet merchants either cannot afford the solutions to get that done or do not have the time to invest to meet the requirement.  As a result, there is a lot of frustration that what merchants are being asked to do cannot be done.  Yet log information capture and review is possibly one of the most important aspects of an organization’s security posture, because if you do not stop an attack with your firewall and IPS, the only way you know that is from your log data.  Damned if you do, damned if you do not.

So a merchant implements all of the necessary technologies and procedures to make Zero Trust a reality.  Is that merchant more secure?  If a merchant makes such an investment, the reward will likely be improved security.  But it will take continuous effort to keep Zero Trust running, and that is where all organizations run into trouble with security initiatives.  It takes consistent execution to make security work, and people and organizations these days lose interest in things they think are fixed, so security gets swept to the back burner.  As a result, it takes strong leadership to keep security off of the back burner.  Without that leadership, security will fall into a rut and an incident will occur that makes security a front burner topic again.

So while I think Zero Trust is probably the approach we should all work towards, it will take a lot of effort to make it a reality.

21 Sep 10

Zero Trust Security – The Cultural Discussion

There is a great motto on the SR-71 Blackbird flight crew badges: “In God we trust, all others we verify.”  John Kindervag of Forrester Research has written a paper titled ‘No More Chewy Centers: Introducing The Zero Trust Model Of Information Security’ that takes this motto into the information security realm.  The premise of the paper: what if you treated everything on your network, internal or external, as untrusted?  This paper is a great read and is worth the cost to obtain a copy.

This concept may sound a bit extreme and, for some, may even seem an odd approach.  But you have to ask yourself, can you really trust all of your users?  And that is exactly the point John is making.  He points to 26 data security breaches in the first half of 2010 that were the result of “trusted” personnel deliberately or accidentally releasing information.  John’s advice: if you cannot trust your users, then you need to treat them and their network traffic as untrusted.

As a security professional, this approach sounds appropriate given today’s computing environment.  However, as a former senior IT executive, I have to say it sends chills down my spine.  For what this approach requires is that you tell your employees that they cannot be trusted.  If that does not scare the daylights out of you, it will sure scare it out of your human resources executives and probably a few, if not all, of the rest of your senior managers.

Then there is the process of selling such an approach.  And let us face it; it will be quite a sales job to get such an approach sold to senior management.  To exacerbate matters, surveys of senior managers portray security professionals as being too technical and unable to explain why security is necessary in business terms.  With that sort of disconnect, the concept of Zero Trust is going to be almost impossible for most security professionals to sell to their organizations.  In my opinion, the only way such an approach will ever be implemented is if it is suggested and driven by senior management, not IT or information security.

Then there is the fact that Zero Trust is not going to totally solve the security problem.  Remember my mantra: security is not perfect.  Zero Trust is only going to minimize risk, but it is likely to minimize it as far as it can be minimized.  Senior managers are going to be skeptical about spending the money it will take to get to this level.  However, for the financial institution and health care industries, the cost will be worth the peace of mind.  Other industries will likely struggle with justifying the expense.  But in the end, I think this is probably the only route to as secure an environment as one can have.

In a future post, I will discuss the technological ramifications of Zero Trust.

21 Aug 10

Twelve Character Long Passwords

This past week researchers from Georgia Tech released a paper saying that the days of the eight-character password are over and that the twelve-character password has arrived.  The researchers based their efforts on the use of the latest graphics cards that have the computing power of a supercomputer, have software development kits and can be programmed in C.  However, the telling quote about their research came from the CNN Web site, which stated, “The researchers used clusters of graphics cards to crack eight-character passwords in less than two hours.”

The first thing I thought of was, “What kind of system administrator lets a brute force attack on a single account run for two hours?”  The answer: no one, not even a stupid one, allows that to happen.  As a result, this seemed to be a lot of “Chicken Little” reporting if you think only about a brute force attack in the traditional sense.

But the more I thought about it, the more I could see potential uses for their work.  Wireless technologies are a method of communication where a hacker could obtain passwords without setting off alarms.  So, there is a potential threat, but not as great as the news reports would make you believe.

Then there is the portability, or lack thereof, of a system packed with a bunch of graphics cards.  Yes, we will find a way to shrink it in time, but for now, it is not a possibility.  So even though the wireless scenario is a threat, without the portability, it too is relatively minor.

This is the problem with security research.  You really have to read the research paper to understand if the threat could actually be used outside of the laboratory.  In the case of this threat, most system administrators have put the following controls in place to stop such attacks.

  • Accounts lock after three to five invalid logon attempts.  There is no running a brute force attack against accounts for two hours straight when you only get three to five logon attempts.  (A sketch of this control follows this list.)
  • Once locked, accounts can only be unlocked by contacting the help desk.  So you lock the account; you just call the help desk, right?  Think the help desk will wonder why you are constantly asking for a reset?  Eventually, you will not be able to convince the help desk to reset the account.
  • The help desk requires users to uniquely identify themselves by answering a question that only the user would know the answer to.  Now you will have to do research into the user to determine their children’s names, birthdates, pets’ names, etc.  That of course implies that you got past bullet number two.
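The lockout control in the first bullet is trivial to express in code, which is part of why the two-hour cracking figure is mostly a laboratory number.  A minimal sketch, with the threshold of five attempts being an assumption since the control allows anywhere from three to five:

    MAX_ATTEMPTS = 5   # lock after three to five invalid logons; five assumed here

    failed = {}        # username -> consecutive failed attempts
    locked = set()     # accounts requiring a help desk unlock

    def attempt_login(user, password_ok):
        if user in locked:
            return "locked: contact the help desk"
        if password_ok:
            failed.pop(user, None)      # a success resets the counter
            return "ok"
        failed[user] = failed.get(user, 0) + 1
        if failed[user] >= MAX_ATTEMPTS:
            locked.add(user)            # only the help desk can clear this
            return "locked: contact the help desk"
        return "invalid credentials"

Five guesses per help desk call is a very different economic proposition than billions of guesses per second on a graphics card.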

The bottom line is that this is why security standards such as the PCI standards are built in layers.  As researchers discover new threats, there are other controls in place to prevent the failure of the control now in question.

However, where security people frequently mess up is in connecting the dots between the current threat and threats exposed months or years ago that were written off as blue-sky thinking.  I have seen examples where, in combination, the current threat plus older threats could be used to compromise security.  It was all in how the threats were put together and the order in which they were executed.

This is why I think it is very important that security professionals understand their opponent and think like the opponent.  If you cannot understand how to put together an attack, it is very difficult to defend against it.  The best security professionals I have ever worked with thought like their adversaries.  They were always trying to see things through their opponent’s eyes and think of ways to circumvent controls.  It was through this sort of analysis that these top security people were able to create almost impenetrable defenses.  I say almost, because even these super security pros understand that security is not perfect.

02 Apr 10

PCI For Dummies

Is it just me, or is there a move afoot to make security idiot proof?  The reason I ask is that I keep getting emails from various sources indicating that they have developed a white paper entitled ‘PCI for Dummies’, ‘PCI for Idiots’ or ‘Making PCI Compliance Easy’.  Hello!  Get a clue out there!

To paraphrase Tom Hanks’ character in ‘A League of Their Own’, “There’s a reason security is hard.  If it wasn’t hard, everyone would do it.”  Security is not always simple; it is not always easy.  Security usually requires thought, diligence and consistent execution.  And in some cases, security may require a lot of thought and a lot of effort.  The reason?  The bad guys are hoping that people are complacent in the protection of their data.  They hope that we “drink the Kool-Aid” and believe the hype of the white papers.  The bad guys hope that you think that because you have the latest and greatest security widget, PCI compliance has been knocked off your to-do list and you can move on to ‘real’ work.

The thing that a lot of people get wrong about security is that they think it has a start and an end.  And that is the problem with security; it has never been a destination, it is a journey.  Security is a never ending struggle between the “haves,” in the case of PCI those organizations that have cardholder data, and the “have nots,” in this case the “bad guys” that want cardholder data.  Just when you think you are done, a new threat or risk pops up and the process of securing your organization starts all over.  And if it is not a new threat or risk, then it is someone that gave away their password, was polite and violated your physical security protocol by opening a secured door for someone, lent their access card to someone or left their netbook unsecured in their hotel room while they were at dinner or the room was cleaned.

PCI compliance is no different.  Your preventative and detective controls such as firewalls, intrusion detection and monitoring typically work fine as long as they are maintained and your users do not circumvent them.  However, monitoring also involves corrective controls and that is usually where people slip up.  It is the correction process that is so important.  While you can rely on vendor patches and maintenance to keep the widgets working, if you are not correcting problems, your security will gradually get weaker and weaker.

There is a great t-shirt out there that says, “Just when you think you have made something idiot proof, someone goes out and makes a better idiot.”  Some users are better than others.  Some users are never going to get it.  It is the users that never get it that will hurt you.  However, we all have days where we get suckered for one reason or another.  With the sophistication of some of the attacks these days, it is surprising that more people are not affected.  However, without an active security awareness program, you will never get people trained to suspect “odd” requests, to stop opening attachments and to stop falling for all of the obvious scams.  There is no security awareness program that is 100% successful.  But if you do not have a program, all of your other security efforts will be wasted.

That said, one of the biggest “dummies” is likely in that large corner office.  It is not that your CEO/CFO/COO/CIO does not care; they really do care.  It is just that they likely have no idea how to approach the problem of security and how to address it.  Even more likely, they do not realize that it is a never ending effort.  As a result, the first thing you need to do is educate the C-level people in the ways of security.  You do not need to teach them the nuts and bolts, just the 50,000’ view.  But they are also likely users, so do not pass up the teaching moment.  Make sure they participate in your security awareness program, which is another opportunity to train them.

As a lot of you have commented, I really do have a thing about security not being perfect.  It is an important message that needs to be delivered.  However, that message needs to be delivered carefully.  Security may not be perfect, but without people’s diligence there is no hope of coming close.  And that is the message today.

06 Mar 10

Network Segmentation – Take 2

I have had a couple of discussions recently regarding what constitutes good network segmentation.  Apparently, my original post was just too cryptic, so I’m going to use some examples in this post to hopefully clarify where people are going wrong.

The PCI DSS gives very little guidance on network segmentation.  In fact, the only statement near a definition says, “Network segmentation can be achieved through internal network firewalls, routers with strong access control lists or other technology that restricts access to a particular segment of a network.”  But those are the mechanics of network segmentation.  This definition does not specify or illuminate the additional controls required to ensure segmentation, which is why I wrote the original post.

In my first example, the network in question is segmented by VLANs.  The cardholder data environment (CDE) is contained in one VLAN and there are another eight VLANs defined.  All VLANs are internal and none face the Internet.  Access control lists (ACL) have been implemented to control the communications between the various VLANs and the CDE VLAN.  Monitoring of all of the VLANs has been implemented through a variety of methods including network monitors, host monitors and system/event log monitoring and analysis.  Network administrators monitor consoles that bring up any critical alerts that could indicate a potential attack or compromise.  This sounds pretty good, does it not?  The problem is that it is all in the details, and the details tell a different story.

In reviewing the VLANs’ ACLs, we determined that two of the VLANs have TCP and UDP ports 1 through 65535 open to the CDE VLAN.  Whoa!  Every port is open to the CDE VLAN from these two VLANs?  Yes, that is correct.  This is not what the PCI SSC thought was “strong access control lists.”  In digging further, we inquired as to why this condition exists.  We were told, ”We were unable to determine what the applications needed to have open between these VLANs, so rather than break anything, we just opened everything to be safe.”  To be safe?  ‘Safe’ is a term whose meaning varies with the view of each person that uses it.  In this case, because the two VLANs were internal, apparently the other VLANs were considered ‘safe’ as well.
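A check for this condition is simple enough that there is little excuse for not running it against every ACL change.  Here is a minimal sketch; it assumes the ACL entries have already been parsed into (source VLAN, first port, last port) tuples, since real ACL syntax varies by vendor, and the threshold of 100 ports is an arbitrary illustration:

    def overly_permissive(acl_entries, threshold=100):
        """Flag entries into the CDE VLAN that open too wide a port range.

        acl_entries: iterable of (source_vlan, first_port, last_port) tuples,
        assumed already parsed from the device configuration."""
        return [(vlan, first, last)
                for vlan, first, last in acl_entries
                if last - first + 1 > threshold]

    # The situation from this example: two VLANs with everything open.
    print(overly_permissive([("vlan20", 1, 65535), ("vlan30", 1, 65535),
                             ("vlan40", 443, 443)]))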

But a lot of network administrators would point to the monitoring as the way they control things.  Are you serious?  I do not care how much monitoring you do.  With every port open, that monitoring is likely going to generate enough false positives to make identifying the real threats like finding a needle in a haystack.  And this was confirmed later on when we observed the network administrators that monitor the network.  They were ignoring almost everything that came up on their screens.  When we questioned them about this, they said, “We have tried to tune the alerts, but have not been able to significantly reduce the false positives.  We get around 10,000 to 25,000 alerts a day.  So we do the best we can to find the real threats.”  The best we can?  Security is not forgiving, least of all of people that are doing ‘the best they can’.

The moral of this example is that if you have every port, or close to every port, open, you cannot consider your network properly segmented.  I do not care what other controls you believe are in place.  You have to be realistic.  And the justification for having all of those ports open has to be more than an implicit admission that you were too lazy to make the effort to find the real answers.

My other example involves a network that does have a limited number of ports open between the CDE VLAN and the other VLANs, albeit quite a few open ports.  They also have monitoring in place and their network administrators are very diligent in ensuring that alerts are addressed as quickly as possible.  Unlike my first example, these folks are seeing around 300 to 500 alerts a day, of which 10% to 15% are false positives.  The problem is with their documentation.  In reviewing the firewall rules that segment the VLANs, we documented all of the ports open to/from the CDE VLAN to the other VLANs.  We interviewed the Manager of their network management and administration department and inquired as to the business reason for each of the open ports.  Of the 100 or so ports defined in the ACLs, they could only give us business reasons for about 20% of them.  Heaven forbid they should document the reason in the configuration file, but there is no other documentation available.  The Manager even tried to find documentation in the help desk system where they log all of their changes, but even after refining the search criteria, there were just too many records to sift through in our one hour meeting to find what we needed.  There was not even proof that management knew these ports were open, understood the risks involved with the ports being open or approved that the ports be opened.

The moral here is that documentation is the foundation from which you build.  If you have a shaky foundation, you will have shaky security and are likely a candidate for a compromise and breach.  This is why documentation is important.  If you cannot remember why ports were opened, why users were allowed access to data and other security relevant decisions, how can you even think you are secure?  The answer is that you cannot be secure if you cannot answer basic questions.

But it gets better.  Earlier in our meeting, this same individual had confirmed that they were the one that reviewed the firewall rules quarterly and showed us emails to prove that fact.  Then, as we are going through the CDE ACLs, they say, “Oh, that rule should be removed.  It was for a business partner that we have not done business with in more than four years.”  Now, do you seriously think I believe that you are really reviewing these firewall rules quarterly when you admit that a given rule should have been removed four years ago?  We documented four more firewall rules that should have been changed or removed.  It is situations like this that cause a QSA to shudder and then wonder what other ugly things are under the rocks and just how far you need or want to dig to find them.

The moral here is that telling the QSA what they want to hear, when you know you will have to contradict yourself later on, only makes you look incompetent.  But this situation also points out a good point regarding the duties of a QSA in conducting their assessment.  QSAs not only rely on interviews and documentation, they also rely on observations to ensure that organizations not only talk the talk but also walk the walk.

So what then is proper network segmentation?  A properly segmented network is much more than just technology.

The foundation of a properly segmented network starts with the control triad of preventative, detective and corrective controls.  Preventative network controls are going to be firewall rules, VLAN ACLs and any other controls that prevent or control access.  Detective network controls are going to be related to the monitoring you implement.  Monitoring can be real-time and/or log analysis after the fact, but it should not be limited to just access to/from the CDE.  Monitoring also needs to include watching the network traffic for anomalies.  Finally, you need corrective controls to ensure that any issues discovered with the preventative and detective controls are addressed as soon as possible.  Corrective controls are usually generated as action items created from such things as the lessons learned from an incident response plan or findings from an audit.

Once you have decided on the controls you will implement, you then need to create documentation that supports those controls.  For networks, the key documentation is a record of every port that is open inbound to or outbound from the CDE.  Each of those ports will have been formally approved by management along with the risk presented by having the port open.  And that risk analysis needs to include not just the port in question but, if necessary, any other relevant ports, as certain combinations of ports may increase or decrease the risk.  This risk analysis is important for a number of reasons.  First, it documents the basic analysis of risk and provides the rationale for having made a decision at that time.  That documentation can also save you if a breach occurs, as you can understand what people were thinking when they originally opened the port and also understand potential methods that might have been used to cause the breach.  This documentation is also important for the quarterly reviews, as you can use it to refresh your memory as well as to assist you in making changes to the rules if business conditions change.  Yes, I know firsthand that documentation is the last thing anyone wants to do.  But without it, I guarantee you will not remember six months or more down the road why you did what you did and for whom.  And in the security business, it is that sort of knowledge that can mean the difference between being secure and being a target.

The next item that needs to be documented is the users, programs, services and organizations that have access to the CDE.  In the case of programs and services, this should be tied to the aforementioned list of ports open.  In a breach, this documentation will reduce the number of likely suspects of where the breach came from.  As a result, you can see why it is important to limit the number of people, programs and organizations that have access to the CDE.

The final piece of documentation that needs to exist is what should be done in the event a problem or an alert is generated.  If people do not know what their responsibilities are in regards to providing feedback, then alerts will be missed or ignored and problems may not be addressed as quickly as they should be.  Responses to problems or alerts should include detail regarding the conditions that created the problem or alert, the steps taken to address it and any issues that may have resulted from addressing it.  If the problem or alert is not addressed in the timeframe required, there needs to be an escalation process so that it receives the necessary management visibility should it go unaddressed.
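To illustrate the escalation process, here is a minimal sketch.  The tiers and timeframes are assumptions for illustration only; your incident response plan defines the real ones.

    from datetime import timedelta

    def escalate_to(age):
        """Given how long an alert has gone unaddressed, return who to notify."""
        if age < timedelta(hours=1):
            return "on-duty network administrator"
        if age < timedelta(hours=4):
            return "security manager"
        return "CISO"

    # An alert that has sat unaddressed for six hours lands on the CISO's desk.
    print(escalate_to(timedelta(hours=6)))

The mechanics matter less than the guarantee: an unaddressed alert keeps climbing the chain until someone with authority has to look at it.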

I hope these additional examples clarify what constitutes proper network segmentation.

15 Nov 09

Compensating Controls

A lot of organizations are relying on numerous compensating controls to achieve their compliance with the PCI DSS.  In version 1.2 of the PCI DSS, compensating controls have two new requirements, and these new requirements have the potential to cause a lot of organizations to become non-compliant.  As a result, I expect the next year to be very painful for our new clients as we explain these new rules and they attempt to come up with the documentation to keep their compensating controls.

There are now six requirements on the Compensating Controls worksheet.  The original four are:

  • The PCI DSS requirement that cannot be met;
  • The control objective(s) of the PCI DSS requirement;
  • The business reason(s) why the PCI DSS requirement cannot be met; and
  • The compensating control(s) that have been put into place to achieve the PCI DSS requirement.

In addition to the original four requirements, two more have been added (the complete worksheet is sketched after this list).  Those requirements are:

  • Validation that the compensating controls are functioning as defined; and
  • How the organization maintains the compensating controls.
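The six items map naturally onto a simple record, sketched below.  The field names are mine rather than the Council’s, but every field has to be filled in before the compensating control will hold up.

    from dataclasses import dataclass

    @dataclass
    class CompensatingControlWorksheet:
        requirement: str      # the PCI DSS requirement that cannot be met
        objective: str        # the control objective(s) of that requirement
        business_reason: str  # why the requirement cannot be met
        controls: list        # the compensating control(s) put into place
        validation: str       # evidence the controls function as defined
        maintenance: str      # how the organization maintains the controls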

The easier of the two new requirements is how an organization maintains the compensating controls.  However, while it is the easier of the two from a discussion standpoint, the fact that an organization must be able to document its maintenance processes for controls could be problematic.  In most organizations, the change control process for the control environment is typically not documented.  As a result, most organizations will have to adapt the change control processes they use for other business issues and apply them to changes to their control environment.

The other new requirement is the one that most organizations will struggle with: providing documentation that the controls they are using to compensate for not meeting the PCI DSS requirement are functioning as designed.  While this sounds simple and straightforward, I can tell you from personal experience that a lot of the compensating controls that were documented under v1.1 of the PCI DSS cannot provide this sort of proof.  In addition, a lot of compensating controls cannot withstand the scrutiny of testing required to determine that they are functioning as designed.

So, where is the problem?  An organization must prove that it walks the walk, not just talks the talk.  This is where the wheels typically come off the bus.  It is very easy to say that something is a control; it is another thing to be able to actually prove it.  Organizations will have to provide documentation, in the form of completed checklists or other formal documents, that proves the control is not only being used but that, if a control exception occurs, the exception is addressed.  In addition, the QSA will also have to observe these processes at work so that they can satisfy themselves that the control is functioning.

I am not saying that compensating controls are a bad thing.  There are instances where organizations have no other choice.  The most common issue in this category is the database that takes anywhere from 9 to 15 months or more to re-encrypt in order to change encryption keys.

In my experience, though, there are even more organizations that are using compensating controls because they are an easy out.  One of the most common reasons is that they need to avoid a large expenditure.  There are also those organizations where meeting the requirement will take a cultural change that management cannot step up to make.  There are also those organizations that just despise being told what to do and are digging in their heels.  And, finally, there are those organizations that just plain do not want to have to get off their rear and do something.  For organizations like these, the jury-rigging and hurdles they go through to make a compensating control work are mind boggling and, more often than not, cost more in time and resources than just meeting the original requirement.

If your organization is relying on compensating controls to be compliant with the PCI DSS, then you will want to make sure that you absolutely have no alternative other than a compensating control.  You should do an analysis of what it costs to maintain and comply with the compensating control versus implementing the PCI DSS requirement as written.  I would say that in about 85%+ of cases, you will find that meeting the original requirement can be achieved more easily and cheaply than using the compensating control.

19 May 09

‘Defense In Depth’ And The Control Triad

A lot has been written recently about the fact that the PCI DSS relies on the concept of ‘defense in depth’.  However, very little has been written about how ‘defense in depth’ is actually implemented by the PCI DSS.

Before going into the PCI side of things, let us discuss the concept of defense in depth itself.  Defense in depth relies on the triad of preventative, detective and corrective controls.

  • Preventative controls are those controls put in place to prevent, as best possible, a security incident.
  • As I have repeatedly pointed out, security is not perfect.  Detective controls are designed to back up preventative controls by detecting incidents that may occur due to a shortcoming in or failure of the preventative control(s).
  • Corrective controls are those controls that back up both the preventative and detective controls by providing feedback to ensure that any lapses in those controls are corrected or those controls are redesigned to better prevent or detect their respective conditions.

With that as background, let us discuss an example of the preventative, detective and corrective controls that are part of the PCI DSS.

One of the primary preventative controls in the PCI DSS is network segmentation.  Properly implemented, network segmentation physically or logically separates PCI network traffic from all other network traffic.  By separating PCI network traffic from all other network traffic, you minimize the potential that PCI traffic is corrupted by non-PCI traffic.  Network segmentation comes in many forms.  It can be the firewall rules implemented to segregate PCI network traffic from the Internet to the DMZ, from the internal network to the DMZ or from the general internal network to the PCI internal network.  It can also be the VLANs implemented across your internal network to segregate PCI traffic from all other internal network traffic.

On the detective side, logging is probably one of the biggest detective controls.  Every device can generate a log and, if properly configured, logging can provide a plethora of information regarding an organization’s network.  However, in order to detect an incident, log information must be reviewed and analyzed.  Not only must it be reviewed, but also with today’s sophisticated attacks, the log information must be correlated to other devices’ log information.  This requires a security information and event management (SIEM) system that centrally collects all log information from all devices and then conducts real-time or almost real-time analysis of that information looking for indications of any potentially anomalous behavior.  When potentially anomalous behavior is believed to be detected, the SIEM alerts the appropriate personnel to further investigate the behavior.
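Correlation is the part a single device’s log can never give you.  Here is a minimal sketch of the idea; it assumes the log entries have already been normalized into (timestamp, device, source IP, event) tuples sorted by timestamp, which is exactly the normalization work a SIEM does for you.

    from collections import defaultdict
    from datetime import timedelta

    def correlate(events, window=timedelta(minutes=5), min_devices=3):
        """Flag source IPs seen by several devices within a short window.

        events: (timestamp, device, source_ip, event) tuples, sorted by
        timestamp and normalized from the individual device logs."""
        by_ip = defaultdict(list)
        for ts, device, ip, event in events:
            by_ip[ip].append((ts, device))
        suspects = []
        for ip, hits in by_ip.items():
            latest = hits[-1][0]
            devices = {d for ts, d in hits if latest - ts <= window}
            if len(devices) >= min_devices:
                suspects.append(ip)
        return suspects

One firewall log entry is noise; the same source tripping the firewall, the IDS and a server within five minutes is a pattern worth an alert.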

Detective controls can also be corrective controls, and the analysis of log information is an example of just such a control.  It is a detective control, but it is also a corrective control, because the analysis of the log information typically results in corrective actions to address the anomalous condition that is detected.

The Report On Compliance or Self-Assessment Questionnaire process is also an example of both detective and corrective controls.  Either process asks your organization to examine the triad of controls and detect any shortcomings in how these controls function.  If the controls are not functioning properly, the process identifies those controls that need to be changed and allows for the development of plans to correct those controls.

An incident occurs when the triad breaks down.  The preventative control is not properly designed to prevent all likely incidents.  A detective control does not detect all possible flaws in the preventative control(s).  And the most common offense?  When issues are identified with preventative or detective controls, action is not taken to correct the shortcomings of the preventative or detective controls.

Trust me; regardless of how good your organization is at following its policies, standards and procedures, you will have areas where the control triad does not function properly.  A good organization seeks out and identifies control issues and addresses them as soon as they are identified.  A really good organization does not turn its efforts to seek out and identify failings into a ‘witch hunt’, because it recognizes that people are only human and always need to improve.

I could go on and on and on regarding the control triad and how it applies to the PCI DSS requirements.  However, I think you get the general idea.  So, the next time you are complaining about why a particular requirement exists, think about what it prevents, detects and/or corrects.  All of the PCI DSS requirements are there for a reason and provide cover for a significant number of the other requirements.  And that is what ‘defense in depth’ is all about.



