Archive for February, 2013

February 25, 2013

Have You Been Paying Attention?

Mandiant recently released an interesting report on APT1, the Chinese Army hackers supposedly behind most of the serious digital attacks being conducted these days.  The reason I bring this up is that, even before the Mandiant report, I had been getting questions regarding how the PCI DSS, ISO 27K, HIPAA and other standards could possibly block, or even alert an organization to, such an attack or a compromise by an advanced persistent threat (APT).  I will use the APT acronym in this post to refer to any sophisticated attacker, not just those that follow the methods attributed to APT1.

It fascinates me how many organizations jump through hoops to tightly control and monitor their inbound network traffic from the Internet, business partners and any other untrusted network.  Yet those same organizations do nothing about the outbound side of the equation.  It is the age-old belief that all internal traffic can be trusted because it originates inside the network.

It is this trust that APT relies upon to compromise your network.  Because you have tightly controlled your inbound traffic, APT uses social engineering to gain access to the internal network.  Once inside, since most organizations do nothing to control or monitor their outbound traffic, it is a relatively simple task to connect from inside the network to any external IP address and set up a communications link.

Even when organizations are controlling and managing their outbound traffic, it is still easy to connect using ports 80 (HTTP) or 443 (HTTPS).  In this day and age, it is virtually impossible to be in business without allowing ports 80 and 443 outbound.  Attackers know this and use this fact against a target in order to gain a way out.

So what can you do to minimize this risk?

  • Tighten your inbound traffic from the Internet and business partners.  I consistently find that organizations allow traffic from countries where they have no business relationships.  As a result, those IP addresses can be used to launch denial of service and hacking attacks.  If you do not limit where inbound traffic can come from, you are only asking for a denial of service attack or worse.
  • Control what external IP addresses and URLs your internal network can connect to.  While a lot of organizations today use overseas business partners, is that any reason to let your internal network connect to any IP address?  No.  This is typically set up by a network engineer who does not want to be bothered by a lot of service requests to allow IP addresses or URLs.  However, today’s sophisticated network attacks are only stopped if you control what IP addresses or URLs your users can connect to.  Yes, this potentially creates a pain in the rear when someone needs to access something you have blocked.  But would you rather stop a breach or let it go unchecked until you accidentally trip over it?
  • Control your outbound ports with the same exuberance that you use on your inbound traffic.  If all you need from your internal network outbound are ports 80 and 443, then those should be the only ports allowed out on your firewall.
  • Monitor your outbound network traffic for “anomalies.”  The biggest anomaly you should be looking for is encrypted traffic over any port other than 443 (HTTPS).  The reason is that any application using anything other than 443 for encrypted communications is likely an attacker trying to obfuscate the fact that they have compromised your network.  After all, your firewall and IDS/IPS cannot read the packets encapsulated in the encrypted stream, so the attacker effectively blinds them to what is actually going on.  Encrypted traffic that does use 443 hopefully will get blocked by your white or black list of IP addresses and URLs.
  • Analyze the log data generated by your firewalls and IDS/IPS looking for potential APT attacks.  Are there consistent indications that devices are trying to connect to blocked IP addresses or URLs?  Do those incidents occur after hours or when the user is not in the office?  These are examples of conditions that should be flagged and investigated to ensure that something bad is not occurring.  A simple sketch of this kind of log review follows this list.
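
To illustrate the last two bullets, here is a minimal sketch of the kind of outbound log review I am describing.  It assumes a plain CSV export of firewall deny events; the file name, field layout, allowed port list and thresholds are all illustrative, not taken from any particular firewall product.

```python
# Minimal sketch: flag suspicious outbound firewall denies.
# Assumes a CSV export with columns: timestamp, src_ip, dst_ip, dst_port, action
# (file name and field layout are illustrative only).
import csv
from collections import Counter
from datetime import datetime

ALLOWED_PORTS = {80, 443}          # the only outbound ports we expect
BUSINESS_HOURS = range(7, 19)      # 07:00 - 18:59 local time

def review(log_path="firewall_denies.csv"):
    repeat_offenders = Counter()
    findings = []
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["action"].lower() != "deny":
                continue
            ts = datetime.fromisoformat(row["timestamp"])
            port = int(row["dst_port"])
            repeat_offenders[row["src_ip"]] += 1
            if port not in ALLOWED_PORTS:
                findings.append(f"{row['src_ip']} tried outbound port {port} at {ts}")
            if ts.hour not in BUSINESS_HOURS:
                findings.append(f"{row['src_ip']} blocked after hours at {ts}")
    # Internal hosts repeatedly hitting the block list deserve a closer look.
    for src, count in repeat_offenders.most_common(10):
        if count > 25:
            findings.append(f"{src} generated {count} outbound denies")
    return findings

if __name__ == "__main__":
    for finding in review():
        print(finding)
```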

Remember, security is not perfect.  What these controls do is make it more difficult for APT to successfully compromise your network.  However, if APT puts their mind to it, there is likely a way to successfully compromise your network, but it will take a significant amount of work.  In the end, I would like to assume that, since there are many easier targets to get at, APT will go after those easy targets and leave your organization alone, at least for the time being.

February 24, 2013

2013 Threats To Databases

Akamai just released its third quarter 2012 Internet statistics, and it points to China as the source of at least a third of all attacks.  Not only that, but the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I got on a Webinar with Application Security Inc. where they discussed what they see as the top risks for databases in 2013 and, lo and behold, the risks align.  It is no wonder that Chinese attackers are focused on SQL Server: not only are targets plentiful, but most of the time the software was improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement,” only to never get back to it, or version 2 turns out to be a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection, it was because the protocol being used required accepting a certain amount of risk in order for the application to work.  While a few years ago that was true, it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our “I probably won’t be here in six months anyway” view of employment, it becomes someone else’s problem, so why bother?  The next person can fix it.
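
To show how small the gap is between the quick way and the secure way, here is a minimal sketch using Python’s standard library sqlite3 module; the table, column and sample values are made up for illustration, and the same parameter-binding idea applies to whatever database driver you actually use.

```python
# Minimal sketch of SQL injection versus a parameterized query.
# Uses the standard library sqlite3 module; table/column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_supplied = "alice' OR '1'='1"   # classic injection payload

# The "quick" way: string concatenation lets the payload rewrite the query,
# so this returns every row in the table.
injected = conn.execute(
    "SELECT email FROM users WHERE username = '" + user_supplied + "'"
).fetchall()

# The secure way: the driver binds the value, so the payload is treated as
# literal data and matches nothing.
parameterized = conn.execute(
    "SELECT email FROM users WHERE username = ?", (user_supplied,)
).fetchall()

print("concatenated query returned:", injected)        # leaks alice's row
print("parameterized query returned:", parameterized)  # returns []
```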

Requirement 6 and, in particular, requirement 6.6 require that applications be tested and remediated before they are placed into production.  The operative word here is ‘before’, which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker.  Better yet, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 has all of the build and security standards required.  The idea is that these standards will tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start, as opposed to finding out after the fact that one of those “just in case” services or features is insecure.

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication, which allows both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is that people seem to forget about the SQL Server managed accounts.  As a result, those accounts typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road.  These accounts also get forgotten when it comes to monitoring, so if their credentials are compromised, it might not be recognized for a while, if at all.

Even if SQL Server managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not enforced by Active Directory for those SQL Server managed accounts.  As a result, people outside of the DBA arena assume that SQL Server managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
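
One small way to keep those accounts from being forgotten is to audit them periodically.  Here is a hedged sketch that queries SQL Server’s sys.sql_logins catalog view through pyodbc; the connection string is illustrative, and logins with policy or expiration checking turned off are exactly the ones escaping the password rules applied everywhere else.

```python
# Hedged sketch: list SQL Server managed logins that bypass password policy.
# Requires the pyodbc package; the connection string below is illustrative.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db.example.internal;DATABASE=master;"
    "Trusted_Connection=yes;"
)

QUERY = """
SELECT name, is_policy_checked, is_expiration_checked, is_disabled
FROM sys.sql_logins
ORDER BY name;
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, policy, expiration, disabled in conn.execute(QUERY):
        if disabled:
            continue
        if not policy or not expiration:
            # These logins are not getting the complexity/expiration rules
            # your Active Directory accounts get -- review them.
            print(f"Review SQL login '{name}': "
                  f"policy={bool(policy)}, expiration={bool(expiration)}")
```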

An access control issue that occurs with all databases is the use of “service accounts” for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control.  Then to access the database, the application uses a single service account.  Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  When the credentials are stored in the code, changing the service account’s credentials requires an application change.  But the larger question is: who has access to the credentials, and how are you ensuring that everyone understands their responsibility to keep those credentials secure?
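
One common way to get those credentials out of the code and out of world-readable parameter files is to pull them from the environment (or a secrets manager) at runtime.  A minimal sketch follows; the variable names and error handling are purely illustrative.

```python
# Minimal sketch: load a service account's credentials at runtime instead of
# hardcoding them. The variable names and error handling are illustrative.
import os

def get_db_credentials():
    """Read the service account credentials from the environment.

    In production these values would typically be injected by a secrets
    manager or a locked-down configuration service, not a checked-in file.
    """
    user = os.environ.get("APP_DB_USER")
    password = os.environ.get("APP_DB_PASSWORD")
    if not user or not password:
        raise RuntimeError(
            "Database credentials are not configured; refusing to start."
        )
    return user, password

if __name__ == "__main__":
    user, _password = get_db_credentials()
    # The password itself is never logged or printed.
    print(f"Loaded credentials for service account '{user}'")
```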

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, it is virtually impossible to protect the information in those databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users’ credentials as well as the credentials for administrators of the e-Commerce environment.  Combine that with the other conditions above: compromise the database and you have access to the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are also likely stored in the same database or in a related database that shares administrator credentials with the compromised database.

Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because that would raise their cost to a point where they are not priced for small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will still be at risk.

One would think using Active Directory or another directory service would be an easy solution.  Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly.  On the face of it, it would appear that way, and it does work for organizations that host their own Web presences.  But for service providers it is not that easy, because each customer’s Web presence would have to have its own branch in the directory’s forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which would drive up the cost of a site.  The manual process also delays establishing a site until the directory maintenance is completed, which is totally unacceptable in our “have to have/do it now” world.

For the time being we are stuck with using the database to store credentials.  That being the case, the credential database should not be mixed with the other databases; it should stand on its own and not be accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found to be too cumbersome to implement.
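
If the database must hold credentials, at least store them as salted, slow-to-compute hashes rather than as encrypted or plaintext values, so that a stolen credential table is not immediately usable.  Here is a minimal sketch using the PBKDF2 function in Python’s standard library; the iteration count and storage layout are illustrative, and this is not a substitute for a complete credential-management design.

```python
# Minimal sketch: store and verify passwords as salted PBKDF2 hashes so that a
# stolen credential table is not immediately usable. Parameters are illustrative.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # tune to your hardware; higher is slower for attackers too

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) for storage in the credential table."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)

if __name__ == "__main__":
    salt, key = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, key)
    assert not verify_password("guess", salt, key)
    print("password hashing round-trip works")
```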

February 19, 2013

Scoping Clarification

At the 2012 PCI Community Meetings, the PCI SSC made a presentation titled ‘PCI Standards Updates and Future Insights’.  Embedded in that presentation was a series of slides titled ‘Scoping & Segmentation Clarified’.  A number of writers have referenced this clarification, but I have yet to see a discussion of the content of these slides.  So I felt someone should share that content with the world so that we are all on the same page.

“PCI DSS requirements apply to all system components, defined as any network component, server, or application that is included in or connected to the CDE [cardholder data environment]”

The first thing discussed is the set of misconceptions about PCI DSS scoping and what “connected to” really means.  The examples of misconceptions pointed out as FALSE included:

  • Encrypted cardholder data (CHD) is always out of scope
  • “Connected to connected to” systems are not in-scope
  • Only systems that connect directly to the cardholder data environment are in-scope
  • Only inbound connections are in-scope
  • Directly connected to systems are not in-scope if they pass through an in-scope firewall

Encrypted CHD Is Out Of Scope

The only way encrypted cardholder data is ever considered out of scope is if, and only if, the organization being assessed does not have the ability to decrypt the data.  That said, it is up to the organization or their QSA to prove that the organization does not have access to the keys by documenting the assessment procedures that were performed to determine that the encryption keys could not be accessed and the cardholder data could not be decrypted.  So even though it may eventually be judged out of scope, there must be some form of investigation that proves that fact.

“Connected to connected to” Systems Are Not In-Scope, et al.

The remaining four examples given as false are all related.  The guidance being provided by the PCI SSC should have been common sense to any security professional and QSA, but apparently was not as clear as we all thought.  As a result, the PCI SSC documented their thought process and provided this “guidance.”

“If a system can impact the security of the CDE (whether it is directly or indirectly connected), it is in-scope.

To be considered entirely out of scope, a network or system must be isolated from, with no connectivity into, the CDE (such that even if the system is compromised, it cannot impact the security of the CDE).

Restricting access by IP or port may limit the exposure of the CDE but does not automatically remove systems/networks from scope since there is still connectivity.

If connections are limited to specific ports or services, those systems are included in the PCI DSS scope to verify applicable controls are in place.”

Talk about opening a can of worms.  To a lot of people, this definition sounded a lot like Stalin’s “Doctor’s Plot.”  Exactly where do you draw the line on a network and what is connected to what?  To a lot of hard-line QSAs and some participating organizations (POs), this clarification essentially put everything back in-scope for PCI compliance because, in theory, any device on the internal network can be used to ultimately compromise the CDE.  To all of you out there that think this, take a pill and chill.  That is not what the clarification is saying unless there are no systems on your network that are truly isolated from the CDE.

The PCI SSC has left this decision to your QSA/ISA to determine where the boundaries of your CDE get drawn.  And that is based on the risk presented and how you have controlled access for the outliers on your network.  So, while in theory any device sitting on a network could be used as a staging point or beachhead to ultimately compromise the CDE, it all depends on the controls in place to minimize that potential risk.

As an easy example of what the PCI SSC is getting at with this clarification, any systems with access to the CDE are in-scope for PCI compliance.  According to the PCI SSC, a lot of QSAs were ruling systems out of scope such as backup systems, domain controllers, DHCP and DNS servers, management consoles and anything else used to manage, monitor or control devices inside the CDE.  The bottom line is that should any of these systems be compromised, the CDE is also likely compromised.  Limited access or not, these systems have access to the CDE and are therefore in-scope for the assessment.

The other part of the clarification is that just because a system has access to the CDE does not mean that all PCI requirements apply to it.  There are big differences between the access a backup system may have to the CDE and the access a call center operator’s or an accountant’s PC may have.  As a result, it is up to the QSA/ISA to determine what PCI requirements are relevant based on the risk presented by the system and how it accesses the CDE.  At a bare minimum, any PC that has access to the CDE needs to be properly configured and security hardened, patched current, and have anti-virus and anti-malware software and a firewall implemented.  If the PC has direct access to the CDE through something other than HTTPS, as with a backup server or domain controller, then you should be treating it no differently than a device inside the CDE.  Whether additional requirements apply will depend on the assessment of the PC and how it accesses the CDE.

Given this clarification, what should you be doing to determine the boundaries of the CDE?  Here are some of my ideas.

  • Scan your network for cardholder data (CHD) using OpenDLP, custom queries of databases and similar tools.  Document all systems that are found with actual CHD.  Make sure to check that the data found is actually CHD and not just 15 – 16 digit numbers that happen to pass a Luhn check (a sketch of the Luhn check follows this list).  We have encountered a lot of random log data from firewalls and intrusion detection/prevention solutions over the years that, upon further inspection, turned out not to be CHD.  The purpose of doing this scanning is so that you are reasonably certain that CHD does not exist on systems you did not know about.
  • Review your virtual LANs (VLAN) and related access control lists (ACL) and document what controls are in place to isolate systems from the CDE.  If ports are open between the CDE and a network segment, document why they are open.  A lot of times this sort of review results in discovery of systems that do not need CDE access but were granted access just in case.  The purpose of this review is to make sure that the CDE is truly isolated from the rest of the networks.  If not isolated, it may be possible to move systems into a few VLAN segments and isolate those segments to minimize your total CDE.
  • If possible, locate critical servers such as domain controllers, backup servers, etc. inside the CDE.  A lot of organizations have located one or two domain controllers inside their CDE and then limited communications to/from those domain controllers and domain controllers outside the CDE.  While the domain controllers outside the CDE are still in-scope if they communicate with the CDE domain controllers, such a move puts the majority of the risk on the CDE domain controllers.  In today’s SAN world, putting a backup server with a fiber channel connection back to the SAN used for virtual tape allows you to isolate your CDE backup process.
  • For those organizations that have virtual desktop technology implemented, consider creating virtual systems for providing access to the CDE.  A lot of organizations with virtual desktop technology have segregated it onto servers that only provide access to the CDE, thereby limiting which virtual servers are in-scope.  The controls over the virtual environment will determine how much of that environment needs to be included in the assessment.  These virtual desktops should be built from a standard image and be strictly locked down so that new software cannot be installed by the user, and they should be configured to log all user actions.  When using this approach, the user must have authentication credentials (i.e., user identifier and password) different from their other credentials.  You are also going to want some form of firewall between these virtual systems and the rest of the network, granting access only to those systems that require it.
  • Another opportunity to minimize the CDE is the use of “jump boxes” on the network.  Jump boxes are accessed via remote desktop technologies to then gain access to the CDE and are typically used for network and server administration and management.  The jump box is configured so that any user’s activities are recorded and logged for later review.  Jump boxes are no different than the virtual desktop solution other than jump boxes are typically physical devices versus virtual devices.  The reason for using a physical device versus a virtual device is the jump box can be physically accessed if necessary in emergencies.  As with the virtual desktop solution, users of jump boxes must have user identifiers and passwords different from their other credentials and you will also need a firewall protecting the jump box.
  • Systems that have access to the virtual desktops or jump boxes should still be security hardened, have anti-virus and anti-malware with current signatures and be patched in a timely manner.
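
For reference, here is a minimal sketch of the Luhn check mentioned in the first bullet.  Remember that a number passing the check is only a candidate primary account number, not proof that it is CHD; the sample values below are well-known test numbers, not real cards.

```python
# Minimal sketch: Luhn check for candidate card numbers found during scanning.
# Passing the check only makes a number a candidate PAN, not confirmed CHD.
def luhn_ok(candidate: str) -> bool:
    digits = [int(c) for c in candidate if c.isdigit()]
    if not 15 <= len(digits) <= 16:
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for position, digit in enumerate(reversed(digits)):
        if position % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

if __name__ == "__main__":
    print(luhn_ok("4111 1111 1111 1111"))  # True  - a well-known test number
    print(luhn_ok("4111 1111 1111 1112"))  # False - fails the checksum
```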

A lot of people would think a virtual private network (VPN) over SSL, TLS, SSH or IPsec solution would also meet the requirement of isolation.  The problem with the VPN solution is that there is no isolation or gap between the system used to access the general network and the CDE.  The VPN does provide isolation of data transmissions but, if the system is compromised, the CDE is also likely compromised.  With the other solutions there are multiple ways that the CDE systems are isolated from the rest of the network.

Now we should all be on the same page.

February 13, 2013

What If?

Here is a thought-provoking question that was posed to me recently by a former accomplice in the PCI world.

What if PCI DSS assessments were only required until a merchant proved they were PCI compliant or if a merchant had been breached?

The premise behind this idea is simple.  There are merchants that get information security and there are merchants that are never going to get information security no matter the carrots or sticks employed.  Not that merchants that get information security cannot be breached, but the likelihood is significantly lower than for merchants that do not get information security.

Merchants would go through the PCI DSS assessment process, clean up their act and ensure they are compliant, and then they would only have to go back through the process if they were breached.  As a best practice, merchants could choose to periodically assess themselves after significant changes to their cardholder data environments, or at intervals of, say, three to five years, to make sure no new security issues had been created.

In the event that a merchant were breached, the PCI assessment process would be required annually for the next three years or until all of the card brands involved agreed to drop the reporting requirement, whichever comes first.  For Level 1 merchants, they would go through the Report On Compliance (ROC) process performed by a QSA or an ISA.  For all other merchant levels, they could use the appropriate self-assessment questionnaire (SAQ) but that SAQ would have to be reviewed and signed off by a QSA.

For high risk organizations that process, store or transmit large volumes of cardholder data such as processors or service providers that do transaction reporting, statement rendering or other similar services, they would still have to go through the ROC or SAQ D as they do today based on the levels defined by the card brands.

For service providers such as managed security service providers (MSSP) or cloud service providers (CSP), they would be required to go through an annual SAQ D, at a minimum, or a ROC, if they desired, for all services that are required to be PCI compliant.  The ROC would have to be prepared by a QSA or ISA and the SAQ D would have to be reviewed and signed off by a QSA.

As with merchants, if a service provider suffers a breach, all bets are off and they must have a ROC prepared by a QSA or ISA for the next three years or until the card brands tell them to stop.

Visa and MasterCard currently maintain lists of PCI compliant service providers.  Service providers pay to be listed on those lists and the qualifications to get on those lists would also not change.

Regardless of whether an organization is a merchant or service provider, the quarterly external and internal vulnerability scanning and annual external and internal penetration testing requirements would remain the same.  Merchants would be required to file their results with the merchants’ acquiring bank or processor(s).  For service providers, their scanning and penetration testing results would be filed with the relevant card brands.  The scanning and penetration testing just help to keep everyone honest in their efforts to maintain their security.

I have to say, it sounds like a rational process to me if you accept the original premise that organizations will either do what it takes to be secure or will not.  Thoughts?

February 7, 2013

Compliance, Compliance Testing and Security

I was recently on a Webinar presented by a major security vendor and one of their points was that executive management is finally starting to realize that compliance does not equal security.  If you read this blog regularly, you know I really do not like the phrase “compliance does not equal security” and I view it as a convenient dodge by those who use it as a way to weasel out of their responsibilities.

But during this Webinar I had an epiphany regarding this topic.  The problem is the confusion between security, compliance testing and reporting, and the act of compliance by your technology, employees and business partners with your organization’s security policies, standards and procedures.

I know I am just asking for flame mail with this post, but I am so tired of people looking to blame everyone but themselves for their inadequacies surrounding information security.  As I have done before, to paraphrase Tom Hanks’ character in ‘A League of Their Own’, “There’s a reason security is hard.  If it wasn’t hard, everyone would do it.”

Security is not always easy, particularly when upper management does not have buy in.  But even when upper management supports security efforts, I have seen security personnel not take advantage of that fact and get the job done.  Security does not have to be hard, but it does take more than just slamming some firewalls and intrusion prevention gear down, tossing a SIEM into the mix and thinking you are done.  Security is a never ending journey because someone is always coming up with new ways to attack you.

Anyway, to start off, let us take a look at some definitions first so we are all on the same page.

Compliance is defined as:

“Conformity in fulfilling official requirements.”

“Official requirements?”  Could that possibly mean your organization’s security policies, standards and procedures?  You bet.  In this instance, we are talking about those that correspond to the PCI DSS, but this also applies to ISO 27K, FISMA, HIPAA, GLBA or any multitude of frameworks and regulatory requirements.

Conformity is defined as:

“Compliance with standards, rules, or laws.”

Based on these definitions, security is all predicated on complying with what are deemed an adequate set of security policies, standards and procedures.  Conversely, if you are not complying with an adequate set of security policies, standards and procedures, then your organization cannot be as secure as it could be.  As a result, compliance has to equal security as long as the security policies, standards and procedures are considered adequate.  Therefore security professionals that quote the mantra, “compliance does not equal security” either have a problem with the compliance side of the equation (most likely) or with the standards/frameworks (the dodge).

Over the years there have been a lot of discussions about the PCI DSS, ISO 27K, FISMA and other security frameworks and whether or not they are adequate.  The important thing to remember is that all of these standards or frameworks are merely ante into the information security game.  They are the bare minimum or a baseline to get to a basic level of security.  Should you be doing more?  Definitely, but what those efforts beyond the standard/framework are depends on what you are trying to secure, your network and application architectures and a multitude of other factors related to your computing environment and how it is used.  Those are factors that cannot be taken into account by any standard/framework because they would start to become impossible for others to follow and implement.  The bottom line here is that if you want someone to tell you exactly what to do to secure your networks and applications, go hire a consultant you trust and they will tell you everything you want to know.

The rub in all of this is that, based on the breach reports from Verizon Business Services, Trustwave, et al. as well as compliance testing reports I have reviewed, none of you out there are even close to 100% compliant to begin with.  Every organization I am aware of has problems complying with the basics, let alone with any advanced security requirements in the published standards/frameworks.  So if you cannot comply with what you already have, explain to me how a different framework is going to change that fact unless it is less stringent than the framework you are already trying to use?  And if that other framework is less stringent, while that may solve the compliance issue (which I seriously doubt), exactly how is a less stringent framework going to make you secure?  The answer is that it will not make you secure.

What security professionals struggle with is that compliance is a never ending, 24x7x365 effort.  Drop your guard for an instant and it can be game over.  But provided your security policies, standards and procedures are appropriate and detailed (the reason why you want to use an appropriate standard/framework), your organization is not as secure as it can be unless your personnel and devices comply 100% of the time with every defined security policy, standard and procedure.  If you want confirmation of these facts, again, just look at the breach analysis reports year after year.  The reason there are breaches is because of non-compliance with one, but usually more, of an organization’s security policies, standards and/or procedures.

This brings me to the rumblings of late regarding a rethinking of defense in depth.  Defense in depth is predicated on using layers of security devices and controls to minimize the risk that a security incident occurs, not to completely prevent an incident, although you might get lucky.  For example, firewalls are the sledge hammer of security tools.  However, because we need to have ports open for outsiders to access applications, we follow our firewalls with intrusion detection/prevention devices to ensure that no one abuses the protocols used by the ports.  We follow that up with monitoring of log data from the firewalls, IDS/IPS, routers, switches and servers to identify any “sneaky” attacks using the protocols we allow.  The layers are there to cover the various holes we need to have in order to make our networks and applications function.  The tighter and smaller we can make those holes, the more secure we will be, but there will still be some amount of risk.  So we bring in more layers to cover those risks until it is more expensive to address the risk than to accept the risk.  That remaining risk is the residual risk that we then manage and control through detection and correction.

The other thing defense in depth relies on is the control triad.  The idea being that, because you cannot entirely prevent every security incident, you need a way to detect the incident so that you can take action to stop or minimize the impact of the incident.  You follow that up with periodic assessments of your control environment to identify and correct any deficiencies or improve your program based on new information regarding security.  The follow-up assessments can be activities such as a root cause analysis (RCA) of an incident, an internal audit of user accounts and user rights or bringing in a network security team to assess your security architecture and controls.  All of these activities will result in findings and recommendations to make your security systems and controls better.

And that brings us full circle to the PCI assessment.  It is merely a tool used by the acquiring banks, card brands, processors and others to obtain reasonable assurance that your organization is doing what it can to minimize the possibility of a breach of cardholder data.  It is not meant to be, nor could it ever be, an absolute and complete assessment of an organization’s security posture, and therefore it cannot provide absolute assurance that a breach will not occur (even though the PCI SSC and card brands tend to imply that it does).  Compliance assessments are only a snapshot of personnel and device compliance at the time the reports were written.  This is no different than going to the doctor for your annual physical, which results in a snapshot of your health at that point in time.  It is not that those compliance reports are worthless; they just need to be referenced and used properly based on the fact that they are a snapshot.  Just as your doctor will tell you to lose weight or stop smoking, compliance reports provide recommendations on where you can make improvements or adjustments in your policies, standards and procedures based on what compliance evidence was found, or not found, during the assessment.

So, what are the lessons to be learned?

  • Security is not and never will be perfect; there will always be residual risk that must be managed and controlled.
  • Compliance does equal security, at least as best as your preferred standard or framework defines it plus whatever enhancements you have made.
  • Compliance assessments and reports point out where your organization was not compliant and needs to do better; they do not prove your organization is secure.

Use the tools at your disposal correctly, stay current on threats and monitor your security posture and you will likely live a long, prosperous and secure life.

Keep hiding behind “compliance does not equal security” and you will forever be living off of your “luck” until it runs out (usually sooner rather than later).

February 6, 2013

How To Be PCI Compliant And Still Be Breached

Bashas’ became the most recent example of a merchant claiming to be PCI compliant yet ending up breached.  A lot of naysayers, I am sure, are running around pointing to the PCI standards and saying, “See, they are worthless.”  But the larger question most of you have is, “How can an organization be breached if it is PCI compliant?”

The first piece of the answer is that security is not perfect.  Security controls have never, ever totally stopped an incident from happening.  If they were perfect, banks would no longer be robbed.  However, due to the security controls that have been implemented, the success of those robberies has dropped significantly.  This is the fact that the PCI SSC and the card brands seem to miss: while their standard is a good starting point, there is much more that has to be done to ensure a reasonable level of security.  And even then, an organization is never 100% secure.

The second part of the answer is that even if an organization is 100% compliant with the PCI DSS, there are still numerous ways to get around the controls and breach data as the Bashas’ breach may eventually point out.  Let us assume for this discussion that Bashas’ statement that they were PCI DSS compliant is accurate.  Then how could they have been breached?

The first clue is the statement that they discovered malware that went undetected for some period of time.  Any organization that believes that their anti-virus/anti-malware solution will address this issue is seriously lying to themselves.  AV is good, but it is also not perfect.  If the AV vendors have never seen the malware you picked up, then they have no signature to match it to, so they will likely not flag it.  This is the first indication that this attack was done by a professional.  The malware was not immediately detected which means the attacker likely developed it themselves from a variety of sources.

But how did the malware get on Bashas’ network?  The answer is social engineering and probably a spear phishing attack.  The attacker likely used PasteBin or similar Web sites, got some Bashas’ email addresses and used those to deliver the malware.  Someone unfortunately clicked on a link, opened an attachment or any other number of infection methods and the deed was done.  This is why security awareness training is so important.  Not that it stops these sorts of attacks, but it significantly reduces the likelihood that they are successful.  However, with the malware in place, now all it took was time to find the data.

But would not Bashas’ have noticed someone probing their network?  That depends on a number of factors, but based on the fact that they became aware of the malware, something eventually triggered an incident.  Unlike the security firm you hire to do vulnerability scanning and penetration testing, professional attackers do not perform their scans as quickly as possible.  They take their time and scan very, very slowly.  As a result, they usually do not generate enough traffic at once to garner an alert.  In addition to that, most of their backdoor software encrypts their external transmissions using SSL/TLS/IPsec over port 80 or 443 which are typically open to the Internet.  As a result, from a monitoring perspective, a lot of what is going on would appear “normal.”

So now your view of the PCI DSS is dashed.  What should you do to respond?

  • Admit that security is not perfect and educate management that it is not perfect.  Breaches will still occur, but security controls are meant to minimize the number of those occurrences and the extent with which they obtain sensitive data.
  • Do not save sensitive data.  Merchants typically do not need to store cardholder data these days.  If you do, then use tokenization so your systems do not store cardholder data.
  • If possible, further isolate your sensitive data.  Look at Forrester’s “Zero Trust” model or the McGladrey Ultra Secure approaches.
  • If possible, reduce the number of actual people that can access your cardholder data to as few as possible.  The fewer people that can access cardholder data, the fewer targets that can be social engineered.
  • Use a “jump box” to provide access to your cardholder data environment so that you do not allow people direct access.  Couple this with different user credentials to gain access to the cardholder data environment.  Add in full instrumentation of the jump box to capture all activity performed on the jump box and monitor the jump box tightly.
  • More tightly monitor your communications through your firewalls.  Yes HTTP/HTTPS needs to be open these days just to do business, but do your personnel need totally unrestricted access to every possible IP address or URL?  No.  So white or black list IP addresses and URLs so that an attacker cannot just use whatever URL or IP address to work from.

Will all of this prevent a breach of your sensitive data?  No.  All these controls will do is reduce the risk of a breach to the lowest possible level.  In time, an ingenious professional attacker will find a way to compromise your controls.  However, with a rigorous control environment it is hoped that you will find them before they find your data.



