Archive for the 'Requirement 7 – Restrict access to cardholder data' Category


PCI And Virtualization

I just received an invitation for a Webinar on virtualization and PCI compliance.  My friend John Kindervag is one of the panelists and, no, this is not an advertisement urging anyone to attend, even though I have provided the link to register.  For an hour they will discuss this topic because the PCI DSS v2.0 now references virtualization.  Let us be very clear: while the PCI DSS prior to v2.0 never explicitly discussed virtualization, QSAs were instructed on how to approach virtualization security.  And as you will see, virtualization security is no different from any other operating system security.

In my very humble opinion, virtualization is a one minute security issue, if that long.  Let us cut to the chase: as small an attack vector as virtualization may be, it is still a potential attack vector, so you need to secure it.  Is that clear enough?  The real issue is how to secure a virtualized environment.

There are two different forms of virtualization.  There are stand-alone hypervisors (what NIST refers to as “bare metal”) like VMware vSphere, VMware ESXi, Microsoft Hyper-V and Citrix XenServer.  Bare metal hypervisors are what we typically run into the most in our PCI compliance engagements, but that is by no means a guarantee.  There are also hypervisors like VMware Server, VMware Workstation and Microsoft Virtual PC (what NIST refers to as “hosted”) that run on a host OS as an application, no different than Microsoft Word.  Obviously, the attack vectors are wildly different for each type of virtualization.

For whatever reason, a lot of IT professionals do not recognize that a hypervisor is an operating system.  Yes, it is a very specialized operating system, but it is an operating system just like Linux or Windows.  Most hypervisors are based on Linux or UNIX and share a few security hardening similarities.  But given a hypervisor’s specialization, it has significantly different security hardening requirements from its Linux or UNIX counterparts.  As such, hypervisor vendors typically provide a security hardening standard for each of their hypervisor operating systems.  All you need to do is go to the hypervisor vendor’s Web site and download the security hardening guide for your version of the hypervisor.  Which brings up a good point: if your hypervisor vendor does not provide a security hardening guide, then you need to find a different hypervisor.

For bare metal implementations, the only thing you have to secure is the hypervisor itself.  With hosted virtualization, however, you need to secure the host operating system as well as the hypervisor.  That means following the host operating system vendor’s security hardening guide in addition to the hypervisor vendor’s to ensure that the host OS is also secure.

But hardening your virtualized operating system is not the end of the job.  You also need to implement your virtualized environment securely, and that is more than just hardening the hypervisor.  The most obvious security item is that any guest operating systems implemented also need to be securely hardened.  It still surprises me how many IT professionals seem to think that because they are implementing Windows or Linux as a virtual machine, there is something different about security and they can skip or skimp on hardening.  Security hardening procedures need to be followed completely regardless of whether the guest OS runs stand-alone or in a virtual machine.

The next area that seems to get short shrift is infrastructure security.  This is particularly true of the management of the hypervisor environment.  Most implementations I have seen do a good job of securely connecting the virtual machines, but the hypervisor infrastructure leaves a lot to be desired from a security perspective.  The first mistake I see is that the hypervisor management environment is not segregated from other networks.  In the first scenario I commonly see, the production network and the hypervisor management network are on the same segment.  If attackers compromise any virtual machine, they gain access to the hypervisor management environment and can therefore gain access to the virtual cardholder data environment.  In the other scenario, the corporate network and the hypervisor management network are one and the same, so everyone on the corporate network can also gain access to the hypervisor management network.  The way to fix both of these situations is to put the hypervisor management network on its own network segment.  I also recommend that organizations dedicate a NIC to only that segment.  However, if an organization already has an operations management network segment separate from other networks, I have no problem having the hypervisor management network in that segment as well.
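Both bad scenarios come down to management interfaces sharing a segment with something else.  For those who like to verify such things programmatically, here is a minimal sketch using only the Python standard library.  The CIDR blocks and host addresses are hypothetical examples of mine, not a recommendation for any particular addressing plan:

```python
import ipaddress

# Hypothetical addressing plan: the hypervisor management network gets
# its own dedicated segment, separate from production and corporate.
MGMT_SEGMENT = ipaddress.ip_network("10.50.0.0/24")   # hypervisor management only
OTHER_SEGMENTS = [
    ipaddress.ip_network("10.10.0.0/24"),             # production
    ipaddress.ip_network("10.20.0.0/24"),             # corporate
]

def is_properly_segmented(mgmt_host_ips):
    """Return True only if every hypervisor management interface lives in
    the dedicated management segment and none falls in another segment."""
    for ip_str in mgmt_host_ips:
        ip = ipaddress.ip_address(ip_str)
        if ip not in MGMT_SEGMENT:
            return False
        if any(ip in seg for seg in OTHER_SEGMENTS):
            return False
    return True

print(is_properly_segmented(["10.50.0.5", "10.50.0.6"]))  # True
print(is_properly_segmented(["10.50.0.5", "10.10.0.9"]))  # False: mgmt NIC on production
```

Run against an inventory of hypervisor management interfaces, a check like this makes segmentation drift visible before an assessor, or an attacker, finds it for you.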

The other scenario I frequently see is virtual machines from the cardholder data environment (CDE) intermingled with virtual machines that are not part of the CDE.  The problem here is that in the event of a compromise of a non-CDE virtual machine, CDE virtual machines may be accessible because of the configuration of the virtualization environment.  The best way to use virtualization for PCI compliance is to isolate your CDE virtual machines in a physically separate virtual environment from your non-CDE virtual machines.

For the truly paranoid, you can also fiddle with parameters such as physical/logical NIC assignments as well as SAN configurations.  While these sorts of configuration changes can provide additional security to the equation, I have my doubts as to the significance of these changes from a security perspective.  In my years of dealing with virtualization, these sorts of configuration changes have been more for performance reasons and enhanced security was just a nice byproduct.

Finally, there is the maintenance aspect of virtualization.  I think everyone gets the fact that, virtualized or not, guest operating systems need to be maintained and patched just like their stand-alone brethren.  However, when you ask organizations how often they patch their hypervisor, some will very honestly reply, “You have to patch it?”  Earlier I stated that a hypervisor is also an operating system and, as such, it needs to be patched just like any other operating system.  Granted, a hypervisor does not usually get patched every month like Windows, but hypervisor vendors do issue patches every so often.

Best of luck to John and the round table that are presenting this month on virtualization and PCI compliance.  Hopefully this post will help explain what they will be discussing as well as lead to more insightful questions on the topic.


Network Segmentation – One Last Discussion

Based on the feedback I am getting, my previous posts are still not getting the point across regarding proper network segmentation.  So, this time I am going to build on my post regarding the Control Triad, and hopefully everyone will then understand what constitutes appropriate network segmentation.

As a quick refresher, the control triad is composed of preventative controls, detective controls and corrective controls.  All of these control types are required to ensure a secure environment.  The more individual controls you have under each of the three control types, the less likely an incident will occur and the more coverage you should be able to afford your organization should a control go temporarily out of compliance.  However, an individual control should really only appear under one of the control types; otherwise that control is diluted, becoming a single point of failure that keeps the control triad from functioning properly.  With that explanation, let us look at proper network segmentation from the control triad perspective.

Preventative Controls

The following would be considered the minimum preventative controls when talking about network segmentation.

  • Firewall(s) with rules that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • Router(s) with ACLs that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • VLAN(s) with ACLs that restrict traffic to a limited number of ports to/from the cardholder data environment.
  • Private wireless network(s) use a separate VLAN(s) from the cardholder data environment with access controls enforced for any access to the cardholder data environment from wireless.  Private wireless access points are configured with WPA2 using Enterprise authentication and AES 128-bit or greater encryption.
  • Software firewall on server(s) in the cardholder data environment that restricts traffic to a limited number of ports/services to/from the server(s).
  • Restricted administrative access to infrastructure devices in or controlling access to the cardholder data environment.
  • Access controls that restrict administrative and end-user access to applications in the cardholder data environment or that access the cardholder data environment.

Remember, when I say “limited number of ports to/from” I mean a very limited number of ports.  Yes, there may be instances where you might have 100 ports open to/from your cardholder data environment, but you had better have a valid business reason for every one of those 100 ports.  And just so we are all clear, a valid business reason documents why the port needs to be open, the risk the open port presents to the cardholder data environment, the actions that have been taken to minimize those risks, and management’s approval of the port being open.  And the business reason for opening a port needs to be more than just “it needs to be open” or “the application will not function unless it is open.”  You need to document why it has to be open so that, in the event of a breach, you can quickly rule out the ports that might have been the cause based on the type of attack.
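To make the point concrete, here is a minimal sketch of auditing open ports against that kind of documentation.  The register contents, ports and reasons are hypothetical examples of mine:

```python
# Hypothetical approved-ports register: each port open to/from the CDE
# must carry a documented business reason, a risk note and an approver.
APPROVED = {
    443:  {"reason": "TLS to payment gateway",      "risk": "low",    "approved_by": "CISO"},
    1433: {"reason": "SQL from app tier only",      "risk": "medium", "approved_by": "CISO"},
}

def audit_firewall_rules(open_ports):
    """Return the ports open to/from the CDE that lack documentation."""
    return sorted(p for p in set(open_ports) if p not in APPROVED)

# A port open without paperwork is exactly what a QSA (or a forensic
# investigator after a breach) will zero in on first.
print(audit_firewall_rules([443, 1433, 8080]))  # [8080]
```

The value is not the five lines of code; it is forcing every open port through the same reason/risk/approval paperwork before it ever reaches the firewall.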

When we talk about restricting access, you need to actually restrict access.  In small and mid-sized organizations, restricting access to only a few people might not be feasible.  In those cases, forcing personnel to go to management to gain access is the way to properly provide control.  In large organizations, what we are talking about is restricting access to fewer personnel than everyone who has access to normal production.  The idea is that not everyone in support, and not every business user, should have access to the cardholder data environment.  The rule here is the fewer the better, but do not make it so few that you create issues.

If you want to go the extra mile, the following controls can further enhance your security.  However, for some organizations, they come at a cost in operational efficiency that is unacceptable.

  • Disable all unused physical jack connections on all infrastructure devices.  Any activation of a jack requires a service ticket and standard management approvals.
  • Disable dynamic host configuration protocol (DHCP) in all retail locations.
  • Public wireless in retail facilities provided by a separate third party and on a separate circuit that connects to the Internet.
  • Required use of encrypted, two-factor authenticated virtual private network (VPN) connections from any wireless network to gain access to any internal network.
  • Access to the cardholder data environment is not allowed for users connecting through any remote access connection.

Detective Controls

The following would be considered the minimum detective controls when talking about network segmentation.

  • Network and host intrusion detection/prevention systems that monitor the aforementioned firewalls, routers, VLANs and servers protecting the cardholder data environment and generate alerts to appropriate personnel when an intrusion or incident is detected.
  • Daily analysis of infrastructure device configurations to ensure that only approved configuration changes are made to these devices.
  • Daily monitoring of devices to alert on any foreign devices that are added or when devices are removed from the network.
  • Daily analysis of log data from the preventative controls to find potentially anomalous log entries that indicate a variance in the preventative controls or a potential incident.
  • Change management records for all infrastructure devices, servers and applications in-scope for PCI compliance.

The key here is to generate alerts should any anomalous activity be detected.  But there is the rub.  What is anomalous?  Anomalies are not always easy to identify or define.  As a result, your detective controls may take a while to fine tune.  However, the organizations that do the best job of managing their detective controls organize their anomalies by the PCI DSS requirements they are trying to meet.  This allows them to tweak their anomaly detection capabilities by PCI DSS requirement.

Then there is the issue of what to do when you detect an anomaly.  Most of the time, an anomaly is not dealt with for one of two reasons.  The first is that the detection solutions are new and are not functioning properly because no one has taken the time to tune them.  The second is that, because of changes in the environment, the detective controls need to be re-tuned to reflect those changes.  Regardless of the reason, the detective controls need to be adjusted so that they do not generate excess false positives that result in people chasing phantom issues.
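To illustrate what “daily analysis of log data … to find potentially anomalous log entries” can mean at its simplest, here is a deliberately naive sketch.  The event counts and the threshold are hypothetical, and real tuning is far more involved, which is exactly the point:

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=3.0):
    """Flag days whose event count deviates more than `threshold`
    standard deviations from the mean of the observed days."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:          # perfectly flat history: nothing to flag
        return []
    return [i for i, c in enumerate(daily_counts)
            if abs(c - mu) / sigma > threshold]

# Failed-logon counts per day against the CDE firewall (hypothetical).
counts = [12, 9, 11, 10, 13, 250, 12]
print(flag_anomalies(counts, threshold=2.0))  # [5] — the 250-event spike
```

With the threshold at 2.0 standard deviations, only the spike on day six is flagged.  Loosen or tighten the threshold and you trade false positives for missed incidents, which is precisely the tuning exercise described above.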

If you want to go the extra mile, the following controls can further enhance your security.  While these sorts of tools are available as open-source solutions, there are also many commercial solutions as well.  Regardless of whether they are commercial or open-source solutions, tools that will perform these functions typically take a significant amount of time and effort to tune so that they provide the right amount of information for the right incidents.

  • Real-time analysis of infrastructure device configurations to ensure that only approved configuration changes are made to these devices.
  • Real-time monitoring of devices to alert on any foreign devices that are added or when devices are removed from the network.
  • Real-time analysis of log data from the preventative controls to find potentially anomalous log entries that indicate a variance in the preventative controls or potential incident.

All real-time does is provide you with instantaneous alerting.  Most small and even mid-sized merchants do not need real-time analysis and alerting.  It is not that they cannot use it; it is that it is likely overkill for their environments given the threat of attack.  However, for governmental agencies and departments, financial institutions, health care organizations and most large merchants, real-time analysis and alerting is mandatory.

And if you think tuning for daily reviews was painful, tuning real-time analysis and alerting systems is at least twice as painful.

Corrective Controls

The following would be considered the minimum corrective controls when talking about network segmentation.

  • Change management procedures.
  • Incident response plan(s) for addressing any issues identified by the detective controls.
  • Root Cause Analysis (RCA) procedures.
  • Action plans that result from the incident response process that require changes to the preventative and/or detective controls.  At a minimum, the action plans must document the correction needed, the person(s) responsible for getting the correction completed and the timeframe for the correction to occur.
  • Internal audit review of the preventative and detective controls.
  • QSA review of the preventative and detective controls.

Here is where a lot of organizations miss the boat.  You have detected an anomaly and dealt with it, but you do not analyze why the anomaly occurred, or you do an analysis but then do nothing to correct the issues that were identified.  As a result, the anomaly continues to be encountered, but no actions are taken to minimize or even eliminate occurrences.  This is why the advanced persistent threat (APT) is successful.  APT relies on the fact that eventually all organizations get sloppy and do not take the corrective actions needed to maintain or improve their controls.

There may be a number of preventative, detective and corrective controls that I missed or did not consider, since everyone has a unique environment.  At a minimum, if your organization has implemented these controls and they are all operating effectively, you are going to be better off than the majority of organizations out there and much less likely to have a serious incident that could result in a breach.

And that is the problem all organizations face: keeping these controls functioning effectively every day without missing a beat.  That is why we have defense in depth.  If one control is not functioning properly, there are other controls that will cover in the interim until the control is functioning properly again.

Finally, as I always like to remind people, implementing all of these recommendations does not make you invincible.  All these recommendations do is make the likelihood of an incident, and the potential damage resulting from an incident, lower than if you had few or no controls in place.  How much lower depends on a number of factors, but the risk will be lower.  And after all, it is all about lowering risk.

Hopefully the issue of what constitutes appropriate network segmentation has now been put to rest.


What Are The PCI Lessons We Can Learn From The WikiLeaks Debacle?

I know, I know, there has been an overabundance of articles published on what we can learn from the WikiLeaks incident.  However, after reading an interesting article in the Washington Post regarding how the WikiLeaks debacle came about, I thought there were a number of lessons from which merchants and service providers could benefit.  The WikiLeaks documents have been traced back to the Net-Centric Diplomacy database, developed by the State Department as a result of the 9/11 terrorist attacks.

Everyone Had Access

Access to the Net-Centric Diplomacy database had become unmanageable.  According to the article, the database was accessible not only to State Department employees, but also to a number of other government departments, including Defense and Homeland Security.  The project was undertaken as a result of the 9/11 attacks to make information that the State Department was collecting available to a wider audience of analysts.  While the database was only four years old in 2010, State Department officials acknowledged that over half a million people across the government, including government contractors, had access to it.

To add insult to injury, State Department personnel admitted that user management was out of control, particularly outside of the State Department.  You see, the State Department took a distributed security approach for the Net-Centric Diplomacy database and designated persons at other entities to manage their own users.  Unfortunately, it appears there was no oversight of these people, nor was there a requirement for them to justify why all of their users required access.  This distributed data security approach is very common in the business world.  However, without oversight and periodic review, the distributed approach turns into a free-for-all, with almost anyone asking for access being granted it.

Requirement 7 of the PCI DSS is all about restricting access to cardholder data and verifying that users continue to require that access.  The user management situation with the Net-Centric Diplomacy database is exactly why requirement 7 was put into the PCI DSS.  What this situation points out is that if you do not have defined criteria that you strictly enforce for access to sensitive data, then you cannot expect to control the data, and you can probably expect a breach of that data somewhere down the line.

Users Responsible For Use

User ownership is usually a good thing, but in this case it went horribly wrong.  From an IT perspective, user ownership of an application is exactly what an IT organization wants.  However, this is a prime example of how user ownership goes wrong.  In addition to the mismanagement of user access, users were also in control of how the database got used as well as what data went into it.  Based on my reading of the article, the issues documented are symptoms of a larger problem: it is highly likely that little to no training was provided regarding the Net-Centric Diplomacy database and how it was expected to be used.

This is a problem that is endemic in business as well as government.  Vendors and IT departments leave training up to their end users in the mistaken belief that applications these days are intuitively obvious and that all that needs to be provided is a good Help system that explains “everything” a user needs to know to use the software.  While vendors typically are responsible for developing the Help system, how many of us have complained that the help topic we are trying to find is not covered?  The problem with this approach is that it is up to users to familiarize themselves with the software, which no one ever does because the application is supposedly intuitively obvious.  If Help systems are so good, why are thousands of books published each year to explain how to use everyday applications like Microsoft Office, Oracle and Lotus Notes?

The first result of this lack of education was that information that did not belong in the database ended up in it.  The input process worked by coding a mnemonic into a diplomatic message that would trigger the routing of the information into the database.  However, no one apparently explained clearly enough what did and did not belong in the database.  As a result, everything was coded to go into the database whether it belonged there or not.  From a PCI perspective, I cannot tell you how many times we run into applications being used for purposes their vendors never anticipated.  As a result, cardholder data ends up unprotected in fields, just because someone saw a need to retain it in an application never engineered to accept it.  This is also why scoping needs to be done by the organization, as cardholder data can end up all over.

The second result of this likely lack of education is that users were unaware of their responsibilities regarding the data they were now allowed to access.  Obviously, since the information in the database was leaked, users either were not aware of their responsibilities or just did not care.  Worse yet, since there was likely no feedback to users who might be misusing the data, they probably did not know that what they were doing was not allowed.  In the PCI realm, this is why policies, standards and procedures are so important, as is making sure that all users are aware of them.  While policies, standards and procedures do not in and of themselves stop a leak, most people do not want to break rules they are constantly made aware of.  It is likely that users of the Net-Centric Diplomacy database were not regularly made aware of their responsibilities as PCI DSS requirement 12.6 requires.

You Need To Go Above And Beyond

Another concern identified was that data could be downloaded at will by any user.  While the State Department could limit downloads to thumb drives, it could not control downloads by other agencies.  Based on the article, it also appears there was no limit on the amount of information that could be downloaded.  As a result, whoever downloaded the information from the Net-Centric Diplomacy database could do so without worrying about being quickly discovered.

This is one of the biggest problems with information management today: ensuring that the information within a data store is properly used and remains in the data store.  Thanks to Microsoft, Oracle, IBM and other database vendors, access to databases can be obtained in a multitude of ways, such as ODBC, direct SQL query, and tools like Microsoft Office.  The bad news is that not all of these methods require authentication, so anonymous access can be obtained.  This is why PCI DSS requirement 7 exists: to make sure that authentication is always required in order to gain access to cardholder data.  However, we constantly run across people in organizations who are doing valuable data analysis but are using access methods to databases containing cardholder data that do not require authentication.  In a few instances, we have run across organizations that have written their own access control systems for ODBC to secure their data.

The PCI DSS requires monitoring access to cardholder data in requirement 10.2.1, but there is no requirement in the PCI DSS that calls out limiting the downloading of data.  This is an area where organizations need to go above and beyond the PCI DSS.  Most database management systems will allow you to limit the amount of data returned by any query.  While this is usually used to control runaway queries, it is also a good security practice, because you can then make sure that no user can get hold of the entire database without special permission.
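As a sketch of the idea, here is how such a cap might look when enforced in application code.  SQLite is used purely for illustration, and the table, data and limit are hypothetical:

```python
import sqlite3

ROW_LIMIT = 100  # hypothetical per-query cap

# Build a throwaway database standing in for a store of card tokens.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE card_tokens (id INTEGER PRIMARY KEY, token TEXT)")
conn.executemany("INSERT INTO card_tokens (token) VALUES (?)",
                 [(f"tok-{i}",) for i in range(10_000)])

def limited_query(conn, sql, params=()):
    """Run a query but never return more than ROW_LIMIT rows; anything
    larger requires a special-permission path (and an audit trail)."""
    cur = conn.execute(sql, params)
    rows = cur.fetchmany(ROW_LIMIT + 1)   # fetch one extra to detect overflow
    if len(rows) > ROW_LIMIT:
        raise PermissionError("query exceeds row limit; request an exception")
    return rows

try:
    limited_query(conn, "SELECT * FROM card_tokens")
except PermissionError as e:
    print(e)  # no one walks off with the whole table unnoticed
```

Production systems would enforce this in the database itself rather than the client, but the principle is the same: a bulk download should be an exceptional, approved event, not the default.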

I am sure as time goes on, more and more of the details of how the WikiLeaks breach occurred will be revealed.  However, just what has been revealed to date can provide a lot of lessons that we should all take to heart.


The Reinvigoration Of Social Engineering

Social engineering did not go away, but it seems to have taken a backseat to other attack techniques over the last few years.  With the publication of the results of the social engineering contest at Defcon this year, the participants in the contest have shown that social engineering is still alive and well and a very successful attack technique.  The following quote from the report on the contest says it all.

“Targeting people has become the most cost efficient attack vector in many situations, and all indications point to this trend continuing to increase.”

Social engineering is one of the most insidious attack techniques around.  Unfortunately, organizations do little to address social engineering and have only made social engineering easier over the years.  Customer service methodologies and training over the last 30+ years have done a great disservice to organizations.  For example, organizations trip all over themselves to be the JD Power customer service leader.  Employees are assessed on their ability to solve a problem on the first customer contact.  Yet in my experience, these sorts of activities typically focus organizations on blindly providing customer service at the expense of the organization’s security.

The organizers of the contest defined 32 objectives, or flags, that contestants could obtain over a 25-minute call to the target.  These flags were assigned point values based on the perceived difficulty of obtaining them.  While the flags were not considered highly sensitive information, they were such that one has to wonder whether even more sensitive information could easily have been obtained had the contestants been allowed to go after it.

Prior to the contest, contestants were required to develop dossiers and attack scenarios on their targets that were also graded and given a value that became part of their score.  In the 25 minutes, contestants could call their target once or multiple times.

The statistics gathered as a result of the contest bear out the effectiveness of social engineering.  Of the 15 organizations targeted, 14 gave up at least one flag.  More troubling is the fact that if a contestant encountered difficulty obtaining information, all it took to get it was to hang up, call back and reach a different employee.

Another area that provides concern is the amount of information the contestants were able to obtain through their dossier development.  The use of Google, Google Earth and Google StreetView provided an amazing amount of information for the contestants.  Also used were social media sites such as Facebook, MySpace and LinkedIn.  While Facebook, MySpace and similar sites have garnered the most attention by the media, it was LinkedIn that provided the most information, in a few cases providing the contestants with the ability to develop an organization chart for the target.

Security is only as good as its weakest link.  As this contest points out, an organization’s weakest link is probably its employees, the likely cause being a lack of, or only a cursory, focus on security awareness.  The contest just magnifies the fact that organizations have done little or nothing to protect themselves from information leakage by employees.  As I constantly like to remind everyone, security is not perfect.  While you may have a fairly good security awareness program, you are still at risk from social engineering.  As P.T. Barnum liked to say, “There’s a sucker born every minute.”  Humans are fallible and, as much as we try, everyone has their moments; some people just have a lot more moments than others.

If you think this is all just a nice exercise and it really does not present a strong enough threat, then go back over the last six months and read all of the news clippings about data breaches and other exploits.  The majority of these attacks are all social engineering based or had a very strong social engineering component.

I highly recommend that you visit the Web site and obtain a copy of their report.  Share the report with your executives, particularly the leader of your customer service area.  Hopefully they will get a clue regarding the amount of information that is inadvertently leaving your organization.


Twelve Character Long Passwords

This past week researchers from Georgia Tech released a paper saying that the days of eight-character passwords are over and that twelve-character passwords have arrived.  The researchers based their efforts on the latest graphics cards, which have the computing power of a supercomputer, come with software development kits and can be programmed in C.  However, the telling quote about their research came from the CNN Web site, which stated, “The researchers used clusters of graphics cards to crack eight-character passwords in less than two hours.”
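The arithmetic behind the claim is worth seeing.  Assuming 95 printable ASCII characters per position and a guess rate of 10^12 per second (my assumption for illustration, not the researchers’ figure), the keyspace math works out as follows:

```python
# Back-of-the-envelope keyspace math.  Assumptions (mine, for
# illustration): 95 printable ASCII characters per position and a
# hypothetical GPU cluster testing 1e12 guesses per second.
RATE = 1e12  # guesses per second (assumed)

def worst_case_seconds(length, alphabet=95, rate=RATE):
    """Seconds to exhaust the full keyspace at the given guess rate."""
    return alphabet ** length / rate

for n in (8, 12):
    secs = worst_case_seconds(n)
    print(f"{n} chars: {secs:.3e} s (~{secs / 31_557_600:.3e} years)")
```

At that assumed rate, exhausting the eight-character keyspace takes under two hours, consistent with the quote above, while the twelve-character keyspace takes on the order of ten thousand years.  Each added character multiplies the attacker’s work by the alphabet size, which is why length buys so much.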

The first thing I thought of was, “What kind of system administrator lets a brute force attack on a single account run for two hours?”  The answer is no one; not even a careless administrator allows that to happen.  As a result, this seemed like a lot of “Chicken Little” reporting if you think only about a brute force attack in the traditional sense.

But the more I thought about it, the more I came up with potential uses for their work.  Wireless technologies are a method of communication where a hacker could obtain passwords without setting off alarms.  So there is a potential threat, but not as great as the news reports would make you believe.

Then there is the portability, or lack thereof, of a system packed with a bunch of graphics cards.  Yes, we will find a way to shrink it in time, but for now it is not practical.  So even though the wireless scenario is a threat, without portability it too is relatively minor.

This is the problem with security research.  You really have to read the research paper to understand whether the threat could actually be used outside of the laboratory.  In the case of this threat, most system administrators have the following controls in place to stop such attacks.

  • Accounts lock after three to five invalid logon attempts.  There is no running a brute force attack against an account for two hours straight when you only get three to five logon attempts.
  • Once locked accounts can only be unlocked by contacting the help desk.  So you lock the account, you just call the help desk right?  Think the help desk will wonder why you are constantly asking for a reset?  Eventually, you will not be able to convince the help desk to reset the account.
  • The help desk requires users to uniquely identify themselves by answering a question that only the user would know the answer.  Now you will have to do research into the user to determine their children’s’ names, birthdates, pets’ names, etc.  That of course implies that you got past bullet number two.
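The first two controls above can be sketched in a few lines of code.  This is a minimal illustration of the lockout logic, with an assumed threshold of five attempts; a real implementation would live in the authentication system, not application code.

```python
MAX_ATTEMPTS = 5  # illustrative threshold from the "three to five" range

class Account:
    """Toy account with the lockout behavior described in the bullets."""

    def __init__(self, password: str):
        self._password = password
        self.failed_attempts = 0
        self.locked = False

    def login(self, guess: str) -> bool:
        if self.locked:
            return False  # brute force stops here, not after two hours
        if guess == self._password:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked = True
        return False

    def help_desk_reset(self) -> None:
        """The only way back in: the help desk, after verifying identity."""
        self.failed_attempts = 0
        self.locked = False
```

Once `locked` is set, even the correct password fails, which is exactly what makes the researchers' two-hour attack impractical against an online system.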

The bottom line is that this is why security standards such as the PCI standards are built in layers.  As researchers discover new threats, there are other controls in place to prevent the failure of the control now in question.

However, where security people frequently mess up is in connecting the dots between the current threat and threats exposed months or years ago that were written off because they were believed to be blue sky thinking.  I have seen examples where, in combination, the current threat plus older threats could be used to compromise security.  It was just all in how the threats were put together and the order they were executed.

This is why I think it is very important that security professionals understand their opponent and think like the opponent.  If you cannot understand how to put together an attack, it is very difficult to defend against it.  The best security professionals I have ever worked with thought like their adversaries.  They were always trying to see things through their opponent’s eyes and think of ways to circumvent controls.  It was through this sort of analysis that these top security people were able to create almost impenetrable defenses.  I say almost, because even these super security pros understand that security is not perfect.


How Do You Know That Your Software Is Secure?

There is an editorial in the June 2010 issue of Digital Transactions magazine entitled “How to Beat the Traps Hackers Set in Software” that brings up this point and suggests a way to address the problem.  If you do not think this is a potential issue, think again and read this article.  I think you will change your mind.  The biggest problem with this issue is that it is not yet viewed as an issue by most organizations.

In case you have missed it, software is everywhere these days.  Most people fail to recognize that almost everything, from flat panel televisions to furnaces, relies on some amount of software to function.  As these devices get connected to networks, the risk that backdoors or “sleeper code” are used to obtain surreptitious access to these devices becomes huge.  And do not think it is not happening: appliance manufacturers are connecting their products to the Internet; automakers are putting cellular wireless access points in vehicles so that the occupants have access to the Internet.  All of this connectivity was driven home for me last year when the Air France Airbus went down in the Atlantic and it was reported that the aircraft had issued a number of messages back to Air France’s maintenance base indicating that problems with the aircraft were occurring.  The bottom line is that all of these “innovations” put homes and businesses at risk of attack or fraud.  But this is not a new problem.  As long as there have been computers, the issue of backdoors and sleeper code has existed.  It is just that in our networked world, the problem has larger ramifications than in the past.

For PCI, I have always had a problem with the fact that card terminals do not have to comply with PA-DSS.  When I was at ETA a year ago, I toured the exhibition floor and took in all of the sophisticated card terminals that were on display.  I spoke to a number of vendors that pointed out that their terminals were running embedded Linux or Windows and had Software Development Kits (SDK) for them to further customize their terminals.  Yet, according to the PA-DSS on page vii, the PA-DSS does not apply to hardware terminals, or as the PA-DSS states, “also called dumb POS terminals or standalone POS terminals,” if all of the following are true.

  • The terminal has no connections to any of the merchant’s systems or networks;
  • The terminal connects only to the acquirer or processor;
  • The payment application vendor provides secure remote 1) updates, 2) troubleshooting, 3) access and 4) maintenance; and
  • The following are never stored after authorization: the full contents of any track from the magnetic stripe (that is on the back of a card, in a chip, or elsewhere), card-validation code or value (three- or four-digit number printed on front or back of payment card), PIN or encrypted PIN block.

Time and again we find out that unencrypted PANs are stored on the device, typically until the end-of-day (EOD) function is run.  Granted, the terminals will not print out a list of the PANs, but anyone with the administrator code can scroll through the PANs line by line.  When I have approached terminal vendors with questions regarding why their terminals are not PA-DSS compliant, they throw this exclusion back at me and say they do not have to comply because it is a “dumb” device.  Although terminals come in multiple models depending on the application, i.e., standalone, integrated POS, etc., they all run the same software; it is just configured specifically for the application.  However, it appears that the terminal vendors use the lowest end of their product line to argue that the terminal does not have to comply with the PA-DSS, even though the entire product line runs the same software.  And when you bring this issue up under the PCI DSS, the terminal vendor tells you that you are being too strict in your interpretation of the standard.

Then we have open source applications.  This is probably the most problematic situation.  If you are using open source, how do you know that the application you are using is not, for example, rounding amounts down and depositing the difference in the developer’s account, or sending a copy of all credit card transactions to a system controlled by the developer?  Most users will not know.  While network monitoring would likely reveal the copying of transactions to another computer, the transfer of the rounded-off amounts could be hidden inside a company’s normal ACH or other banking traffic, making it almost impossible to find.

As software becomes even more pervasive, the integrity of that software is going to have to be proven.  How does one know that any application does not contain sleeper code or backdoors that leave any system running that application open to infiltration or fraud?  That is the problem we all face.  We can no longer just trust that software does not have backdoors or sleeper code.  We will have to extensively test and certify software to ensure that software is as “clean” as possible.  However, as the Digital Transactions editorial points out, that will still not be proof positive that the software is necessarily safe.

We are entering a very brave new world and need to be very careful until we get a handle on what we are creating.  Until we do get control, we need to be very skeptical and careful about the software we use.


Passing The Buck

When you are providing services to customers and those services are in-scope for compliance with any of the PCI standards, do not be shocked when your customer’s QSA asks you to prove that you are complying with the relevant PCI standards.  What sort of services are we talking about?  While not a completely inclusive list, here are some of the most common services I run across that are in-scope for PCI compliance.

  • Network management.  This includes management and/or monitoring of firewalls, routers, switches, etc.,
  • Server management.  This includes configuring of servers, patching of servers, add/change/delete of user accounts, monitoring of servers, management of server log files, etc., or
  • Network security management.  This includes management and/or analysis of infrastructure and/or server logs, monitoring of security devices such as firewalls and IDS/IPS, incident response, etc.

The most common point of confusion I run across is with those third parties that are providing network management services.  If the service provider is only providing a telecommunications circuit, then the service provider is not in-scope of PCI compliance.  This fact has been confirmed time and again by the PCI SSC.  However, once you start to be responsible for managing routers, switches or other networking infrastructure, those services are in-scope for PCI compliance.

What I think these service providers forget is that it is not just the storage of cardholder data that is the concern of the PCI standards; the processing and transmission of cardholder data are also covered.  Now, if cardholder data transmissions are encrypted and the third party does not have the ability to decrypt them, then the third party is not in-scope.  However, where service providers get into trouble is when the data stream is encrypted at a router that they manage, or when they manage other devices that come into contact with unencrypted data.  They think that because they are off the hook in one instance, they are off the hook for all, which is not the case.

If your company is managing customers’ networks, then explain just how your customers can respond to the following sample of network management compliance tests from the PCI DSS.

  • 1.1.1 – Verify that there is a formal process for testing and approval of all network connections and changes to firewall and router configurations.
  • 1.1.4 – Verify that firewall and router configuration standards include a description of groups, roles, and responsibilities for logical management of network components.
  • 1.2 – Examine firewall and router configurations to verify that connections are restricted between untrusted networks and system components in the cardholder data environment …
  • 1.2.2 – Verify that router configuration files are secure and synchronized—for example, running configuration files (used for normal running of the routers) and start-up configuration files (used when machines are re-booted), have the same, secure configurations.
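The 1.2.2 test above is one that a managed-service provider can readily automate.  The sketch below checks that a router's running and start-up configurations are one and the same; the plain-text configs are a simplification of mine, since in practice you would pull them over the device's management interface.

```python
import difflib

def configs_synchronized(running: str, startup: str) -> bool:
    """True when the two configurations are identical line for line."""
    return running.splitlines() == startup.splitlines()

def config_drift(running: str, startup: str):
    """Unified diff of any drift, suitable for attaching to a change ticket."""
    return list(difflib.unified_diff(
        startup.splitlines(), running.splitlines(),
        fromfile="startup-config", tofile="running-config", lineterm=""))
```

If `config_drift` comes back non-empty, either an approved change was never saved to start-up, or an unapproved change was made to the running device; both cases are findings a QSA will want explained.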

The bottom line is that your customers cannot respond to these requests if your organization is performing the services; just ask your customers.  They expect you, as part of your service agreement, to respond to these requests.  Given the ingenuity of entrepreneurs, almost anything can be outsourced for a price, which is why each outsourced service needs to be addressed individually to determine whether or not it is in-scope for PCI compliance.

For those service providers that are reading this and are still unconvinced, I would ask you this question: if your organization is not responsible, then who is?  Your customer contracted with you to perform the service; therefore, they no longer have the knowledge to respond to anything regarding these requests.  If they cannot respond, then who does?  And I would point out that if a QSA cannot obtain satisfactory responses to these requirements, then the QSA is obligated to mark them as ‘Not In Place’, which means your customer is not in compliance and must remediate the problem.

I would remind everyone that security is an all or nothing operation.  Either everyone and everything is secure in the business process chain or they are not secure.  All it takes is just one weak link and the party is over.  We live in a very interconnected world and therefore the security of any one entity can make or break the security of all others.

And if you are still unconvinced, ask your attorney what happens if a breach occurs at one of your customers and is the result of your organization’s failure to comply with one or more of the PCI DSS requirements.  My guess is that your attorney will tell you that you are legally on the hook and that all fines, penalties and any other sanctions will likely be against your organization, not your customer.

And finally, if you are still saying this is all BS, then you better get out of this business because this is what is coming down the line.  QSAs are just the messengers, so do not complain to or about us.  It is the PCI SSC and the card brands that set the rules.  And the PCI SSC is cracking down on QSAs and making sure that we all consistently interpret the PCI DSS and other standards.  So the fact that “no one has asked us about this before” is rapidly coming to an end as every QSA will begin asking for your compliance.

As they like to say, “If it’s too hot in the kitchen, then maybe it’s time to get out.”


Managed Networks And PCI Compliance

Here is a subject that keeps coming up, particularly with organizations that manage networks for merchants.  If I manage an organization’s network, is my organization in-scope for PCI compliance?  The answer is yes, the services you are providing are placing your organization and its relevant services in-scope.

The first response from the network management company is usually, “How can that be, all other telecom companies are out of scope, why not us?”  Quickly followed by, “No other QSA has ever asked us to go through this.”  Remember, the QSA is just the messenger.  Everything stated in this post is from the PCI DSS, the DSS Glossary and other PCI SSC and card brand publications.  This is what the PCI SSC is asking all QSAs to do as part of their PCI assessment work.  If you do not agree, talk to the PCI SSC as they are the only ones that can change the standards.

To answer, “How can that be, all other telecom companies are out of scope, why not us?”

It is very simple. Your organization is not providing just a circuit.  The PCI SSC has been very clear on this.  If all you are providing is a circuit and no other services, then you are out of scope.  The moment your service to a merchant or service provider goes beyond just providing a basic method of transport, you cross into PCI compliance territory.  Basically, the PCI SSC’s interpretation is that if the merchant or service provider has outsourced all or part of a role to your organization, it stands to reason that your organization has assumed that responsibility and, by default, also assumed the relevant PCI compliance responsibility.

But, what if the data is encrypted before it gets to our equipment?  As long as your organization does not have the ability to decrypt the data stream, then your services are out of scope.  However, if the cryptographic process involves your equipment or you manage cryptographic keys, then you are in-scope and must comply with the PCI DSS.

What are your compliance obligations?  Based on my analysis, your organization is involved in or responsible for at least the following PCI DSS requirements: 1, 2, 4, 6, 7, 8, 9, 10 and 12.  Here is my high-level take on what you need to be prepared to document, discuss and/or prove you are doing.

  • Provide policies, standards and procedures for device management.
  • Provide policies, standards and procedures for physical and logical security.
  • Provide a copy of your incident response plan.
  • Provide access control definitions for groups and roles that manage the devices.
  • Provide job descriptions for the personnel that manage the devices.
  • Document all protocols/services that are used for managing the devices including a business reason for each of the protocols/services.
  • Provide configurations of a sample of physical devices.  Sampling is allowed as long as the service provider can prove that it has implemented a standard process for managing the devices in question.
  • Provide documentation that supports that device configurations are properly backed up and secured.
  • Provide documentation that supports that the running device configurations and the stored device configurations are one and the same.
  • Verification that all relevant policies, standards and procedures are followed in configuring new devices.
  • Verification that documented protocols/services are the only ones configured on the managed devices.
  • Verification that security is properly implemented on all managed devices.
  • Verification that appropriate access control systems are implemented on the managed devices.
  • Verification that remote access is secure.
  • Verification that all user accounts are appropriately managed and controlled.
  • Verification that all logging is implemented and logs are reviewed at least daily.
  • Verification that log information is properly secured.
  • Verification that time synchronization is properly implemented on each device.
  • Verification that some form of critical file monitoring is being performed.
  • Verification that there is a formal change management process in place including testing.
  • Verification that any cryptographic keys are properly managed and secured.
  • Verification that all devices have been appropriately patched and that there is a patch management process in place.
  • Verification that appropriate physical security controls are in place.
  • Verification that logs are maintained for any backups stored off-site.
  • Verification that alerts are responded to as documented in the incident response plan.

Now for the second comment, “No other QSA has ever asked us to go through this.”  If no other QSA has asked you to go through this, shame on them.  This is why the PCI SSC implemented its Quality Assurance program so that all QSAs start doing the same level of work.  This is also why there is such a variance in QSA costs.  We are finding that the QSAs that are the cheapest are the ones that are not being appropriately rigorous with their assessment of an organization against the PCI DSS.  As the PCI SSC takes more QSAs through the QA process and puts them through remediation, things will change and assessment costs will become more consistent.


Network Segmentation – Take 2

I have had a couple of discussions recently regarding what constitutes good network segmentation.  Apparently, my original post was just too cryptic, so I’m going to use some examples in this post to hopefully clarify where people are going wrong.

The PCI DSS gives very little guidance on network segmentation.  In fact, the only statement near a definition says, “Network segmentation can be achieved through internal network firewalls, routers with strong access control lists or other technology that restricts access to a particular segment of a network.”  But those are the mechanics of network segmentation.  This definition does not specify or illuminate the additional controls required to ensure segmentation, which is why I wrote the original post.

In my first example, the network in question is segmented by VLANs.  The cardholder data environment (CDE) is contained in one VLAN and there are another eight VLANs defined.  All VLANs are internal and none face the Internet.  Access control lists (ACL) have been implemented to control the communications between the various VLANs and the CDE VLAN.  Monitoring of all of the VLANs has been implemented through a variety of methods including network monitors, host monitors and system/event log monitoring and analysis.  Network administrators monitor consoles that bring up any critical alerts that could indicate a potential attack or compromise.  This sounds pretty good, does it not?  The problem is that it is all in the details, and the details tell a different story.

In reviewing the VLANs’ ACLs, we determined that two of the VLANs have TCP and UDP ports 1 through 65535 open to the CDE VLAN.  Whoa!  Every port is open to the CDE VLAN from these two VLANs?  Yes, that is correct.  This is not what the PCI SSC had in mind by “strong access control lists.”  Digging further, we inquired as to why this condition exists.  We were told, “We were unable to determine what the applications needed to have open between these VLANs, so rather than break anything, we just opened everything to be safe.”  To be safe?  Safe is a term that means something different to each person who uses it.  In this case, because the two VLANs were internal, the other VLANs were apparently considered ‘safe’ as well.
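Catching this kind of "open everything" rule does not require sophisticated tooling.  Here is a sketch of an audit pass over ACL entries; the tuple rule format and the 1,000-port threshold are my own simplifications, not any vendor's syntax.

```python
# Rule format (simplified): (source_vlan, protocol, low_port, high_port)
WIDE_OPEN_THRESHOLD = 1000  # flag anything permitting 1,000+ ports to the CDE

def overly_permissive(rules):
    """Return rules whose port range is broad enough to defeat segmentation."""
    return [r for r in rules if r[3] - r[2] + 1 >= WIDE_OPEN_THRESHOLD]

rules = [
    ("VLAN20", "tcp", 1, 65535),  # the 'just open everything to be safe' rule
    ("VLAN30", "tcp", 443, 443),  # a defensible single-port rule
]
print(overly_permissive(rules))
```

Any rule this report flags should either be narrowed to the ports the applications actually need or come with a documented, management-approved business reason.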

But a lot of network administrators would point to the monitoring as the way they control things.  Are you serious?  I do not care how much monitoring you do.  With every port open, that monitoring is going to likely generate enough false positives to make identifying the real threats like finding a needle in a haystack.  And this was confirmed later on when we observed the network administrators that monitor the network.  They were ignoring almost everything that came up on their screens.  When we questioned them about this, they said, “We have tried to tune the alerts, but have not been able to significantly reduce the false positives.  We get around 10,000 to 25,000 alerts a day.  So we do the best we can to find the real threats.”  The best we can?  Security is not forgiving, let alone for people that are doing ‘the best they can’.

The moral of this example is that if you have every port or close to every port open, you cannot consider your network properly segmented.  I do not care what the other controls are that you believe are in place.  You have to be realistic.  And justifying having all of those ports open has to be more than implying you were too lazy and did not want to make the effort to find the real answers.

My other example involves a network that does have a limited number of ports open between the CDE VLAN and the other VLANs, albeit quite a few.  They also have monitoring in place, and their network administrators are very diligent in ensuring that alerts are addressed as quickly as possible.  Unlike my first example, these folks see around 300 to 500 alerts a day, of which 10% to 15% are false positives.  The problem is with their documentation.  In reviewing the firewall rules that segment the VLANs, we documented all of the ports open to/from the CDE VLAN.  We interviewed the manager of the network management and administration department and inquired as to the business reason for each of the open ports.  Of the 100 or so ports defined in the ACLs, they could only give us business reasons for about 20% of them.  Heaven forbid they should document the reason in the configuration file, and there is no other documentation available.  The manager even tried to find documentation in the help desk system where they log all of their changes, but even after refining the search criteria, there were just too many records to sift through in our one-hour meeting to find what we needed.  Nor is there any proof that management knows these ports are open, understands the risks involved with them being open, or approved opening them.

The moral here is that documentation is the foundation from which you build.  If you have a shaky foundation, you will have shaky security and are likely a candidate for a compromise and breach.  This is why documentation is important.  If you cannot remember why ports were opened, users were allowed access to data and other security relevant issues, how can you even think you are secure?  The answer is you cannot be secure if you cannot answer basic questions.

But it gets better.  This same individual earlier in our meeting had confirmed that they were the one that reviewed the firewall rules quarterly and showed us emails to prove that fact.  Then as we are going through the CDE ACLs, they say, “Oh, that rule should be removed.  It was for a business partner that we have not done business with in more than four years.”  Now, do you think I seriously believe that you are really reviewing these firewall rules quarterly when you admit that a given rule should have been removed four years ago?  We document four more firewall rules that should have been changed or removed.  It is situations like this that cause a QSA to shudder and then wonder what other ugly things are under the rocks and just how far you need or want to dig to find them.

The moral here is: do not tell the QSA what they want to hear when you know you will have to contradict yourself later on.  All it does is make you look incompetent.  But this situation also illustrates an important point regarding the duties of a QSA in conducting an assessment.  QSAs not only rely on interviews and documentation, they also rely on observations to ensure that organizations not only talk the talk but also walk the walk.

So what then is proper network segmentation?  A properly segmented network is much more than just technology.

The foundation of a properly segmented network starts with the control triad of preventative, detective and corrective controls.  Preventative network controls are going to be firewall rules and VLAN ACLs and any other controls that prevent or control access.  Detective network controls are going to be related to the monitoring you implement.  Monitoring can be real time and/or log analysis after the fact, but it should not be limited to just access to/from the CDE.  Monitoring also needs to include monitoring the network traffic for anomalous traffic.  Finally, you need corrective controls to ensure that any issues discovered with the preventative and detective controls are addressed as soon as possible.  Corrective controls are usually generated as action items created from such things as the lessons learned from an incident response plan or findings from an audit.

Once you have decided on the controls you will implement, you then need to create documentation that supports those controls.  For networks, the key documentation is a record of every port that is open inbound to or outbound from the CDE.  Each of those ports should be formally approved by management along with the risk presented by having the port open.  And that risk analysis needs to include not just the port in question but any other relevant ports, as certain combinations of ports may increase or decrease the risk.  This risk analysis is important for a number of reasons.  First, it documents the basic analysis of risk and provides the rationale for the decision made at the time.  That documentation can also save you if a breach occurs, as you can understand what people were thinking when they originally opened the port and also understand potential methods that might have been used to cause the breach.  This documentation is also important for the quarterly reviews, as you can use it to refresh your memory as well as to assist in changing the rules if business conditions change.  Yes, I know firsthand that documentation is the last thing anyone wants to do.  But without it, I guarantee you will not remember six months or more down the road why you did what you did and for whom.  And in the security business, it is that sort of knowledge that can mean the difference between being secure and being a target.
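A rule register of the kind described above can be as simple as a spreadsheet, but even a small script can flag the entries that would embarrass you in front of a QSA.  This is a sketch under assumptions of mine: the field names, the 90-day review interval and the sample entries are all illustrative.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # quarterly review, per the PCI DSS

def overdue_reviews(register, today):
    """Rules past their quarterly review, or lacking a business reason."""
    return [r for r in register
            if today - r["last_reviewed"] > REVIEW_INTERVAL
            or not r["business_reason"]]

register = [
    {"port": 1433, "business_reason": "POS application to card database",
     "approved_by": "CISO", "last_reviewed": date(2010, 5, 1)},
    {"port": 8080, "business_reason": "",  # undocumented -- gets flagged
     "approved_by": "", "last_reviewed": date(2006, 1, 15)},
]
print(overdue_reviews(register, today=date(2010, 6, 1)))
```

The second entry is exactly the four-year-old business-partner rule from the example: no documented reason and no recent review, so it surfaces immediately instead of turning up mid-assessment.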

The next item that needs to be documented is the users, programs, services and organizations that have access to the CDE.  In the case of programs and services, this should be tied to the aforementioned list of ports open.  In a breach, this documentation will reduce the number of likely suspects of where the breach came from.  As a result, you can see why it is important to limit the number of people, programs and organizations that have access to the CDE.

The final piece of documentation that needs to exist is what should be done in the event a problem or an alert is generated.  If people do not know their responsibilities in regard to providing feedback, then alerts will be missed or ignored and problems may not be addressed as quickly as they should be.  Responses to problems or alerts should include detail regarding the conditions that created the problem or alert, the steps taken to address it and any issues that may have resulted from addressing it.  If a problem or alert is not addressed in the required timeframe, there needs to be an escalation process so that it receives the necessary management visibility should it go unaddressed.

I hope these additional examples clarify proper network segmentation.


Threat Landscape Is Changing – Advanced Persistent Threat

If you are not familiar with the Advanced Persistent Threat, or APT, you had better get yourself up to speed as soon as possible.  This is a threat that will likely catch you flat-footed if you are not addressing it.  As a member of InfraGard, I was made aware of APT a year or so ago, but it was a great report recently produced by MANDIANT Corporation that really brought this threat into perspective.  I cannot stress enough how urgently you should go to their Web site and request a copy of their latest M-TRENDS report.  It covers this topic in much more detail and is very enlightening.

APT is not your usual attack.  As the name implies, it is a very skilled, long-term siege on your network and computer systems.  The attack is conducted slowly and carefully so as not to trigger any alerts at the target.  These are teams of very skilled professionals, not hacktivists, script kiddies or even organized crime groups.  As far as anyone can determine, these professionals are state sponsored, based on the scale and logistics of their operations.  Their “job,” so to speak, is to compromise networks and systems for the purpose of gaining access to information.  What makes APT particularly insidious is that they set things up so that they can keep coming back.  What makes APT even more effective is that, regardless of the countermeasures put in place to thwart attacks, these people have the resources and knowledge to work around those countermeasures.  In effect, APT brings my adage to life: “If someone wants to get you bad enough, they will do whatever it takes to make that happen regardless of what you do to prevent it.”

While I know you are likely saying to yourself that your organization would not be on the APT radar, think again.  If you have a presence on the Internet, whether that is ecommerce, a static Web site or even an email server, you are a potential target of APT.  And while you may not have information they want, you may have a business partner that they wish to compromise, and they will use your network to get a way into your business partner.  This all goes back to a post I made a while back regarding the fact that we are all interconnected these days, one network to another and so on.  So while APT may not be able to get into a target directly, they may be able to compromise a network attached to the target and get in that way.  As a result, we all need to take precautions to ensure we have each other’s backs.

The M-TRENDS report goes into great detail on the methods used, so I will not bore you here with those details.  But some of the takeaways I got from the report are as follows.

  • These are very sophisticated attacks and require a level of sophistication in information security that most organizations do not practice.  As a result, if you intend to stay out of APT’s clutches, you are going to have to raise the bar on your information security program significantly.  Raising the bar does not necessarily mean spending more money on the latest and greatest security technologies.  On the contrary.  APT wants targets that think security technology is the only way to secure an organization.  It is organizations that rely heavily on technology and ignore or belittle security training that they prey upon.  This means that you need to focus your efforts on things like training and being more diligent on log reviews and alert follow up.  The requirements in PCI DSS requirement 10 go a long way in assisting you with finding anomalous network traffic and the like.
  • APT relies on heavy reconnaissance of networks and the gathering of information to be used in their social engineering attacks.  Their initial forays into your network will likely be as innocuous as port and vulnerability scans, along with spidering all of your public Web pages and your presence on LinkedIn, Facebook, MySpace, Twitter, etc.  While you can do very little about the port and vulnerability scanning, you can do quite a bit about spidering.  Now is the time to reconsider the information you post publicly on your Web sites.  It is also time to start managing the information that is ending up on social networking sites.  A just-published study in the UK indicated that information regarding a number of top secret military projects could be found on various social networking Web sites.  If that is the case for really hush-hush projects, imagine the sorts of information that could be garnered about your own organization.  Remember, it is this sort of information gathering that has caused most of the break-ins to celebrities’ and politicians’ email and social networking accounts.  In addition, all of this ‘personal’ information is just a quick Advanced Google search away.
  • With social engineering one of the big keys to APT, it is time to get serious about training your personnel.  APT uses a number of targeted social engineering techniques, such as ‘spear phishing,’ to gain entry into an organization.  If you still think social engineering awareness training is useless, APT is the biggest reason I can give you to get serious about it.  Training does not have to be boring, but it does need to convey a sense of urgency and the extreme risk involved.  Just having people read the M-TRENDS report and then discussing it would likely go a long way toward motivating people to think before they do something they will regret later.
  • The malware used by APT is very sophisticated and is constructed to thwart most current anti-virus and anti-malware solutions.  In addition, APT malware is regularly updated to continue to blind these solutions.  As a result, relying on these solutions alone is not feasible.  You will need other measures in place to ensure your security, such as critical file monitoring, file signature hashing and similar measures.  I am not suggesting that you take these measures on all your systems, but you probably should consider them on systems that contain critical data or have access to critical data.  There are a number of PCI DSS requirements that can help you with this, the biggest being requirement 10 again, followed by requirement 11.5.
  • You will likely need to make your network segmentation even more granular.  As I stated in the last bullet, you do not want to have to put these countermeasures on every system you have.  Unfortunately, unless you further tune your network segmentation to keep sensitive and non-sensitive systems apart, you are not going to keep APT at bay.  Granularity does not necessarily mean more VLANs or segments; it more likely means more, or tighter, ACLs to control access to information.
  • To hide their activities, APT uses encrypted data streams between their malware and their command and control systems.  As a result, traditional network traffic monitoring will not help unless you are monitoring for “unknown” encrypted traffic, that is, encrypted traffic that is not part of your normal baseline.  This can also be addressed by restricting outbound traffic to unknown destinations.
  • Finally, a lot of these attacks come from known locations such as China.  If your organization does not conduct business outside of the United States, why is your firewall configured to accept traffic from anywhere on the Internet?  For that matter, why does your firewall allow outbound connections to foreign countries?  All of this is configurable if you take the time to enable it in your firewalls, but most organizations never go to that length.  Now you have a big reason to start restricting traffic in and out of your network the way you should have been doing all along.
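The file signature hashing mentioned above (the territory of requirement 11.5) is conceptually simple: record a known-good hash for each critical file, then periodically re-hash and compare.  This is only a minimal sketch of the idea, not a substitute for a real file integrity monitoring product; the file paths you would monitor are whatever your own critical system and configuration files happen to be.

```python
import hashlib
from pathlib import Path


def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_baseline(paths):
    """Record a known-good hash for each critical file."""
    return {str(p): hash_file(Path(p)) for p in paths}


def check_baseline(baseline):
    """Return the files whose current hash no longer matches the baseline
    (including files that have been deleted since the baseline was taken)."""
    changed = []
    for name, expected in baseline.items():
        p = Path(name)
        if not p.exists() or hash_file(p) != expected:
            changed.append(name)
    return changed
```

In practice the baseline itself must be stored somewhere the attacker cannot rewrite it, and the comparison should run on a schedule with alerts feeding your requirement 10 log review process.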
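Restricting outbound traffic to known destinations, as the last two bullets suggest, comes down to an allowlist check against your flow logs or firewall rules.  A minimal sketch of the idea in Python follows; the CIDR ranges are hypothetical placeholders for the handful of networks your business actually needs to reach, and a real deployment would enforce this at the firewall rather than in a script.

```python
import ipaddress

# Hypothetical allowlist: the only outbound destination networks the
# business actually needs to reach (placeholder documentation ranges).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # e.g., payment processor
    ipaddress.ip_network("198.51.100.0/24"),  # e.g., business partner VPN
]


def is_permitted(dest_ip: str) -> bool:
    """True if the destination falls inside an approved network."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)


def flag_unknown(connections):
    """Given (source, destination) pairs pulled from flow logs, return the
    sorted set of destinations that are not on the allowlist."""
    return sorted({dest for _, dest in connections if not is_permitted(dest)})
```

Running a check like this over outbound flow records is one way to surface the “unknown” encrypted sessions APT malware uses to reach its command and control systems, since the allowlist does not care whether the payload is encrypted.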

The PCI DSS has a number of controls that, if properly implemented and monitored, would go a long way toward making APT’s activities more difficult.  However, that is the rub.  Unfortunately, most organizations do not execute the PCI DSS consistently, and therefore they can end up being owned by APT.  And merely complying with the PCI DSS is not necessarily going far enough, so you need to go beyond it to ensure your network’s security.

Always remember that security is not and never will be perfect.  Your goal, then, is to make APT’s life as miserable as possible so that when they come calling, they will likely go somewhere else to get what they want.  However, if you are their ultimate target, then you need to be sharp, as they will do whatever it takes to get in.

Update: According to Jerry Dixon, director of analysis at Team Cymru, APT is no different than any other attack.
