Archive for the 'Requirement 2 – Do not use vendor-supplied defaults' Category



02 Oct 11

Defense In Depth

I have a slide in my security presentation deck that discusses the concept of defense in depth and how, when you start opening ports or running encrypted data streams through your defenses, you are punching holes into one or more of your security layers.  It amazes me how many people still do not understand how defense in depth works and how much security standards such as the PCI DSS rely on this concept.

So let us take a look at the various elements of security and the requirements of the PCI DSS and see how they bring defense in depth to bear.  Keep in mind this is an example and does not encompass everything an organization could do to increase defense in depth.

For most organizations, the first level of defense is at their firewall.  Requirements 1 and 2 talk to how you should use a firewall and secure it.  The biggest mistake that organizations make is not configuring their firewall properly.  And by configuration, I am not just talking about the configuration of the firewall’s software; I am also talking about where and how the firewall is used in the network.

The next level of defense for most networks is usually some form of intrusion detection/prevention system.  Some of the requirements in 10 and 11 talk to intrusion detection/prevention.  IDS/IPS capability may be provided in a separate appliance or may be part of an organization’s firewall.  The keys to using an IDS/IPS are keeping its attack signatures current, monitoring its log data and/or console and ensuring that it is not overwhelmed by network traffic.

One thing that continues to amaze me is how many implementations of IDS/IPS I encounter where the IDS/IPS sits in the middle of encrypted data streams.  IDS/IPS systems cannot examine encrypted data streams unless they have the decryption keys, which they typically do not.  As a result, encrypted data streams are not examined, and sensitive data and/or attacks could be going right past the IDS/IPS.

How users authenticate to your network and devices is also a level of defense.  Requirements 7 and 8 of the PCI DSS talk to this point.  And it is not just authentication to applications that process, store or transmit cardholder data, it is also authentication to infrastructure devices and to databases.  

It has been more than five years since the “sa” default password debacle and yet you still encounter applications that use service accounts to access their database and those service accounts have no password.  The rationale?  “We did not want to code the password into the application,” is the common reply.

The other big area of authentication issues that you encounter is with firewalls, routers, switches and other network infrastructure.  The problem is that the network administrators all use the same account and password.  You can understand their rationale, particularly on those networks where you are administering thousands and thousands of devices.

There are a number of ways to address this situation, but these are my favorite two.  The first is to implement 802.1X authentication using a RADIUS server.  Under this scenario, every network administrator has their own unique account and password to access the network devices.  Those unique accounts should be different from the network administrator’s account they use to get email and network access like every other user.  A lot of organizations already have the RADIUS server implemented for remote access, so adding in network administration access control is relatively easy.
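
To make the unique-account idea concrete, here is a minimal sketch of authenticating a uniquely named administrator account against a RADIUS server, assuming the open source pyrad package.  The server address, shared secret and account name are placeholders, not values from any real environment, and this is an illustration of per-administrator RADIUS authentication rather than any particular vendor’s implementation.

    # Minimal sketch: authenticate a named network administrator account against
    # a RADIUS server using the pyrad library. The server address, shared secret
    # and account name below are placeholders only.
    from pyrad.client import Client
    from pyrad.dictionary import Dictionary
    import pyrad.packet

    def radius_auth(username, password):
        # "dictionary" is the standard RADIUS attribute dictionary shipped with pyrad
        client = Client(server="192.0.2.10", secret=b"example-shared-secret",
                        dict=Dictionary("dictionary"))
        request = client.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                                          User_Name=username)
        request["User-Password"] = request.PwCrypt(password)
        reply = client.SendPacket(request)
        return reply.code == pyrad.packet.AccessAccept

    if __name__ == "__main__":
        # Each administrator uses a unique account (e.g., "netadm-jdoe"),
        # separate from the account used for email and general network access.
        granted = radius_auth("netadm-jdoe", "example-password")
        print("Access granted" if granted else "Access denied")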

The second way to address network administration access is to use a “jump box.”  In a “jump box” implementation, two or more “jump boxes” are placed at strategic points on the network and all network administration access is conducted through a “jump box.”  The “jump box” is fully instrumented in that all keystrokes, applications, etc. are logged and those logs are reviewed at least daily to ensure that network administrators are not changing things they should not be changing.  That means comparing service tickets for the network against the logs from the “jump boxes” and ensuring that only what was required to be changed was changed.  “Jump boxes” can also be used to control access for server administration.
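
As an illustration of the daily “jump box” review described above, here is a rough sketch that reconciles jump box session logs against approved change tickets.  The CSV file names and column layouts are hypothetical; substitute whatever your jump box and ticketing system actually export.

    # Rough sketch of the daily jump-box reconciliation described above.
    # The CSV layouts (session_log.csv, change_tickets.csv) are hypothetical.
    import csv

    def load_authorized_devices(ticket_file):
        """Devices that an approved change ticket authorizes someone to touch today."""
        authorized = set()
        with open(ticket_file, newline="") as f:
            for row in csv.DictReader(f):          # columns: ticket_id, device, status
                if row["status"].lower() == "approved":
                    authorized.add(row["device"])
        return authorized

    def find_unauthorized_activity(session_file, authorized):
        """Jump-box sessions against devices with no matching approved ticket."""
        with open(session_file, newline="") as f:
            for row in csv.DictReader(f):          # columns: admin, device, command
                if row["device"] not in authorized:
                    yield row

    if __name__ == "__main__":
        approved = load_authorized_devices("change_tickets.csv")
        for hit in find_unauthorized_activity("session_log.csv", approved):
            print(f"REVIEW: {hit['admin']} ran '{hit['command']}' on {hit['device']} "
                  "with no approved change ticket")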

A level of defense that usually gets little recognition is operating system (OS) hardening.  What some people seem to forget is that any computerized device has an OS whether it is a firewall, router, switch or server.  Requirement 2 talks not only to the hardening of wireless, but also firewalls, switches, routers and servers.  Every vendor publishes a guide that explains how to securely implement their OS.  Where things can get sticky is with third parties that argue that their product or software will not function if you follow the vendor’s OS hardening recommendations.  In my experience, testing a vendor’s product or software in a hardened environment typically does not have an adverse result.  However, the key is to conduct a test.

Another level of defense is anti-virus and anti-malware software.  This solution also usually includes a personal firewall on mobile devices such as notebooks, netbooks and smartphones.  Requirement 5 of the PCI DSS talks to anti-virus and anti-malware while a requirement in 1 talks to personal firewalls.  Nothing gets some people wound up more than anti-virus software.  The requirements in 5 can have compensating controls, but implementing those compensating controls consistently on mobile devices is usually just about impossible.  So while you may not have anti-virus/malware on your e-Commerce servers, you should have it on all of your desktops, notebooks, netbooks and other systems.

A level of defense that most organizations poorly manage is their collection and analysis of log data from their network devices and servers.  Requirement 10 speaks to the importance of log data.  As I have written before, log data is IT’s version of a commercial aircraft’s flight data recorder.  If you want to know why a problem occurred, log data from your devices can usually point you to the reason.  The problem most IT professionals have with log data is that they do not want to log everything because, in their opinion, that generates too much data.  However, until you have an incident, you do not know what log data will be important in identifying why the incident occurred; therefore, you need all of it.  The last thing you want to have happen is to tell management that you could not determine the cause of an incident because you did not record the critical information required to identify it.

The final defense most people think of is application development, which is covered by requirement 6.  If you are going to get push back on the PCI DSS or any other security program, this is the most likely and consistent place you will get it.  Application developers are very protective of their environments, so when you start infringing on their area, they can get rather upset.  As a result, you hear the typical lament from developers that security “restricts their creativity.”

In today’s rush to get things done, application developers usually do not have security at the front of their minds.  As a result, by the time anyone knows that there is a security issue, it is too late for it to be fixed and the application goes into production with the fix to be part of version 2.  That is exactly what the PCI DSS addresses in requirements 6.4, 6.5 and 6.6: avoid putting vulnerable software into production.  The point of these requirements is to build a certain amount of security into the development process to minimize the number of security issues that end up in production.

The real final defense is an organization’s policies, standards and procedures.  Yes, that paperwork that everyone thinks is “make-work” really does have a purpose.  An organization’s policies, standards and procedures are the rules that everyone is to follow to ensure security.  Those rules also provide a way to measure people’s compliance so that, in the event of an incident, those people that did not follow the policies, standards or procedures can be shown their mistakes and correct their actions in the future.  These rules also provide an organization’s framework for explaining to personnel how the organization defines its information assets and how it protects them.

There are a lot more options for defense in depth, but I think you get the idea.  Now that you understand how defense in depth works, you should also understand what happens when security personnel are asked to open ports for an application or change configurations in ways that reduce the number of levels in an organization’s defenses.  The fewer levels involved, the higher the likelihood that a lapse in control can result in a breach, particularly when a number of lapses in controls occur simultaneously.  This is how supposedly PCI compliant organizations end up breached.

07 Jun 11

VoIP And PCI Compliance

I have had some interesting conversations with people lately regarding voice over IP (VoIP).  It fascinates me as to how little people really know and understand how this technology works.  But what really scares me is how this lack of information is putting organizations at risk.

The most obvious problem with VoIP is segmenting it away from the cardholder data environment (CDE).  I am really disturbed by the number of organizations that have no security around their VoIP.  Yes a lot of organizations have segmented the VoIP from the rest of their network, but there are no controls that stop anyone from getting into that network segment.  As a result, anyone with the right set of tools can gain access to the traffic in the VoIP network segment.

The next thing that scares me is the lack of security surrounding the VoIP servers, including any call recording servers.  People treat these VoIP servers just like their traditional PBX.  Unlike a PBX, which likely ran a proprietary version of UNIX, VoIP servers are just Windows or Linux servers running a VoIP application.  As a result, they are susceptible to all of the viruses, malware and everything else any other server is susceptible to.  However, these servers typically do not run anti-virus (because of performance concerns), nor are they hardened to any rational security standard.  When they get infected or hacked, it seems to be a shock to the system administrator.

And what about the call recording technology?  We keep hearing from vendors of call recording solutions that they use proprietary recording methods requiring special CODECs.  While in some instances the proprietary claims are true, what we are finding more and more is that vendors are just manipulating file header information such that Windows Media Player, iTunes and the like do not recognize the file as being in a valid format.  However, tools such as VLC Media Player are able to see past the header changes and recognize these files for what they are, WAV, MP3 and the like.  Thus some proprietary formatting claims are all a bunch of smoke and cannot necessarily be relied upon for security or privacy.  Another tell on the proprietary nature of call recordings is that, when you “convert” a recording, if the conversion software seems to be copying the recording more than actually converting it, it is likely that the header is being fixed to WAV, MP3, etc.  Real audio conversions typically take more time than just copying because the file has to be fully processed.
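
If you want to test your own recordings, the check that VLC is effectively performing can be approximated in a few lines.  The sketch below simply reads the first bytes of a file and looks for the standard WAV (“RIFF”/“WAVE”) and MP3 (ID3 tag or frame sync) signatures; it makes no assumptions about any particular vendor’s product.

    # Rough sketch: inspect the first bytes of a "proprietary" call recording to
    # see whether it is really a standard container. WAV files start with
    # "RIFF....WAVE"; MP3 files start with an "ID3" tag or an 0xFF frame-sync byte.
    import sys

    def sniff_recording(path):
        with open(path, "rb") as f:
            header = f.read(12)
        if header[:4] == b"RIFF" and header[8:12] == b"WAVE":
            return "WAV"
        if header[:3] == b"ID3" or (len(header) >= 2 and header[0] == 0xFF
                                    and (header[1] & 0xE0) == 0xE0):
            return "MP3"
        return "unrecognized (possibly genuinely proprietary)"

    if __name__ == "__main__":
        for recording in sys.argv[1:]:
            print(recording, "->", sniff_recording(recording))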

But the final insult in this whole scenario is the lack of understanding security personnel have regarding the VoIP protocols.  While VoIP call setup and teardown may be conducted over TCP/IP (a stateful protocol), the call itself is carried as streaming media over UDP/IP (a connectionless, non-stateful protocol).  As a result, when you start talking to security people about VoIP security, their knee-jerk response is to tell you that VoIP is secured by the corporate firewall.  However, given that the media streams are stateless, sitting behind a firewall really does not, by itself, provide much protection.

So you have a VoIP solution in place.  What should you be doing to ensure its security if it is in-scope for PCI compliance?  Here are my thoughts.

  • Properly segment your VoIP from the rest of your network.  This means either physically or logically separating the VoIP from the rest of your network.  This also means implementing access control rules so that only those devices and people that need access to the VoIP network have access.  If you are also using your VoIP phones as the network jack for a PC, make sure to VLAN that jack to something other than the VoIP VLAN.
  • If you can, implement the VoIP segment without DNS and DHCP and use MAC filtering to avoid the accidental or deliberate plugging in of a PC into a network jack that is VoIP only.  At a minimum, use MAC filtering to at least control what gets plugged into the LAN jack.
  • Closely monitor your VoIP segment and generate alerts on any devices that are unplugged or plugged in.  Also monitor for any protocols other than the VoIP protocols that your VoIP system uses.  (A minimal monitoring sketch follows this list.)
  • Do not use the last octet or any other portion of the phone’s IP address as the extension number.  Yes, I know this is an easy way for the help desk to identify and troubleshoot phones, but it is also easy for an attacker to locate targets of interest, so keep that in mind when you are implementing your VoIP solution.
  • Never, ever connect your VoIP network to another VoIP network outside of your explicit control.  Given that VoIP primarily uses UDP/IP, you cannot expect any firewall to protect your VoIP system from anything outside of your control.  Always use plain old telephone system (POTS) circuits to connect to any foreign network.  I know that is not as sexy as VoIP, but how else can you protect your VoIP system from outside influences?
  • Work with your VoIP vendor to harden all servers that manage the VoIP system.  These are just Windows, Linux, etc. systems.  Obviously you will need to do some testing of this and you may not be able to use all of the hardening items in your server hardening standard, but you would be surprised at what you can do that the vendor says will not work.  Remember, they are just trying to cover their butts should a problem crop up.
  • Be careful implementing VoIP on traditional PBXs.  A lot of these solutions are just PCs or servers and can be easily hacked once on your network just like their full VoIP brethren.
  • Get a hold of VLC Media Player or similar tool and see if you can play a recording off of your call recording system.  We are getting about a 25% hit rate using VLC to play recordings.  A lot of the success of this approach depends on the age of the call recording system.  The newer the systems, the more likely it is that you will find that the recordings are just tweaks of existing standards.
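
Here is the minimal monitoring sketch referenced in the list above.  It assumes the scapy package and a SPAN/mirror of the voice VLAN presented on interface “eth1” (a placeholder), and it simply alerts on unknown MAC addresses and on traffic that is not SIP or RTP-range UDP.  Treat it as an illustration of the idea, not a production monitor.

    # Minimal sketch, assuming the scapy package and a mirror/SPAN of the voice
    # VLAN on interface "eth1" (a placeholder). Alerts on source MAC addresses
    # that are not registered phones and on traffic that is not SIP or RTP-range UDP.
    from scapy.all import sniff, Ether, IP, UDP

    APPROVED_PHONE_MACS = {"00:11:22:33:44:55"}      # hypothetical phone inventory
    SIP_PORTS = {5060, 5061}
    RTP_RANGE = range(16384, 32769)                   # adjust to your call manager

    def inspect(pkt):
        if not pkt.haslayer(Ether) or not pkt.haslayer(IP):
            return
        src_mac = pkt[Ether].src
        if src_mac not in APPROVED_PHONE_MACS:
            print(f"ALERT: unknown device {src_mac} ({pkt[IP].src}) on voice VLAN")
        if pkt.haslayer(UDP):
            dport = pkt[UDP].dport
            if dport in SIP_PORTS or dport in RTP_RANGE:
                return                                # expected VoIP traffic
        print(f"ALERT: non-VoIP traffic {pkt[IP].src} -> {pkt[IP].dst} on voice VLAN")

    if __name__ == "__main__":
        sniff(iface="eth1", prn=inspect, store=False)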

06 Mar 11

PCI And Virtualization

I just received an invitation for a Webinar on Virtualization and PCI compliance.  My friend, John Kindervag, is one of the panelists and, no, this is not an unpaid advertisement for anyone to attend, even though I have provided the link to register.  For an hour they will be discussing this topic because the PCI DSS v2.0 now explicitly references virtualization.  Let us be very clear: while the PCI DSS prior to v2.0 never explicitly discussed virtualization, QSAs were instructed on how to approach virtualization security.  And as you will see, virtualization security is no different than any other operating system security.

In my very humble opinion, virtualization is a one minute security issue, if that long.  Let us cut to the chase: as small an attack vector as virtualization can be, it is still a potential attack vector, so you need to secure it.  Is that clear enough?  The real issue is how to secure a virtualized environment.

There are two different forms of virtualization.  There are stand-alone hypervisors (what NIST refers to as “bare metal”) like VMware vSphere, VMware ESXi, Microsoft Hyper-V and Citrix XenServer.  Bare metal hypervisors are what we typically run into the most in our PCI compliance engagements, but that is by no means a guarantee.  There are also VMware Server, VMware Desktop and Microsoft VirtualPC (what NIST refers to as “hosted”) that require a host OS and run as an application, no different than Microsoft Word.  Obviously, the attack vectors are wildly different for each type of virtualization.

For whatever reason, it seems that a lot of IT professionals do not recognize that a hypervisor is an operating system.  Yes, it is a very specialized operating system, but it is an operating system just like Linux or Windows.  Most hypervisors are based on Linux or UNIX and have a few security hardening similarities.  But given a hypervisor’s specialization, they have significantly different security hardening requirements from their Linux or UNIX counterparts.  As such, hypervisor vendors typically provide a security hardening standard for each of their hypervisor operating systems.  All you need to do is go to the hypervisor vendor’s Web site and download the security hardening guide for your version of hypervisor.  Which brings up a good point: if your hypervisor vendor does not provide a security hardening guide, then you need to find a different hypervisor.

For bare metal implementations, the only thing you have to secure is the hypervisor itself.  However, with hosted virtualization, you need to secure the host operating system as well as the hypervisor.  In addition to the hypervisor, you will need to follow the host operating system vendor’s security hardening guide to ensure that the host OS is also secure.

But hardening your virtualized operating system is not the end of the job.  You also need to implement your virtualized environment securely, and that is more than just hardening the hypervisor.  The most obvious item is that any guest operating systems implemented need to be securely hardened as well.  It still surprises me how many IT professionals somehow seem to think that because they are implementing Windows or Linux as a virtual machine there is something different about security and you can totally skip or skimp on hardening.  Security hardening procedures need to be completely followed regardless of whether the guest OS is stand-alone or in a virtual machine.

The next area that seems to get short shrift is infrastructure security.  This is particularly true of the management of the hypervisor environment.  Most implementations I have seen do a good job of securely connecting the virtual machines, but the hypervisor management environment leaves a lot to be desired from a security perspective.  The first mistake I see is that the hypervisor management environment is not segregated from other networks.  In the first scenario I commonly see, the production network and the hypervisor management network are on the same segment.  If an attacker compromises any virtual machine, they gain access to the hypervisor management environment and can therefore gain access to the virtual cardholder data environment.  In the other scenario, the corporate network and the hypervisor management network are one and the same, and therefore everyone on the corporate network can also gain access to the hypervisor management network.  The way to fix both of these situations is to put the hypervisor management network on its own network segment.  I also recommend to organizations that they dedicate a NIC to only that segment.  However, if an organization already has an operations management network segment separate from other networks, I have no problem having the hypervisor management network in that segment as well.

The other scenario I frequently see is virtual machines from the cardholder data environment (CDE) intermingled with virtual machines that are not part of the CDE.  The problem here is that in the event of a compromise of a non-CDE virtual machine, CDE virtual machines may be accessible because of the configuration of the virtualization environment.  The best way to use virtualization for PCI compliance is to isolate your CDE virtual machines in a physically separate virtual environment from your non-CDE virtual machines.

For the truly paranoid, you can also fiddle with parameters such as physical/logical NIC assignments as well as SAN configurations.  While these sorts of configuration changes can provide additional security to the equation, I have my doubts as to the significance of these changes from a security perspective.  In my years of dealing with virtualization, these sorts of configuration changes have been more for performance reasons and enhanced security was just a nice byproduct.

Finally, there is the maintenance aspect of virtualization.  I think everyone gets the fact that, virtualized or not, the guest operating systems need to be maintained and patched just like their stand-alone brethren.  However, when you ask organizations how often they patch their hypervisor, some will say to you very honestly, “You have to patch it?”  Earlier on I stated that a hypervisor is also an operating system and, as such, it needs to be patched just like any other operating system.  Granted, a hypervisor does not usually get patched every month like Windows, but there are patches issued every so often by hypervisor vendors.

Best of luck to John and the round table that are presenting this month on virtualization and PCI compliance.  Hopefully this post will help explain what they will be discussing as well as lead to more insightful questions on the topic.

09 Dec 10

Anatomy Of A Breach

People are always asking me why complying with the PCI standards is important as in, “What’s in it for my company?”  So I thought I would take a known, documented breach and walk through where PCI compliance would have made a difference.  And for those naysayers that point to the PCI DSS and say that compliance does not matter, I intend to show that compliance does lead to security.

The breach I am going to use is the Wal-Mart breach which was documented in an article in Wired magazine back in October 2009.  Wal-Mart has what most professionals would consider a robust control environment.  However, what this breach shows is that even with such an environment, a breach can still occur.  That is not to say that Wal-Mart did not make mistakes and it is those mistakes that I want to point out so that we can all learn.

For some background, the Wal-Mart breach occurred sometime between 2005 and November 2006, when it was discovered by Wal-Mart.  The good news, at least as far as Wal-Mart has ever publicly shared, was that no cardholder data was ever released as a result of the breach.  However, the final report issued internally by Wal-Mart was never shared outside the company, so it is anyone’s guess as to whether the claim that no cardholder data was ever released is accurate.  This was not Wal-Mart’s first cardholder data breach.  In the fall of 2005, a small number of Sam’s Club gas station systems were accessed by intruders and around 600 credit card accounts were believed to have been compromised.

The breach was discovered by accident when a server crashed.  During the investigation to figure out what had happened to the server, one of the investigators found that L0phtcrack had been installed on the failed server and that it was L0phtcrack that had caused the server to fail.  Obviously, L0phtcrack was not an approved application to have installed.  As a result, this information caused an even larger investigation to be launched.

Before we discuss L0phtcrack, let us discuss file integrity monitoring.  This incident points out why the PCI DSS mandates file integrity monitoring in requirement 11.5.  But just monitoring known files is not enough.  This is where an organization needs to go above and beyond in order to better ensure its security.  While monitoring critical files, you also need to be monitoring for any new files that might be added to a system.  And alerts generated by your file integrity monitoring system need to be reconciled against all changes being made to the systems.  Any file addition, change or deletion not documented in a change record needs to be investigated to determine its cause.  Based on the timeline, while Wal-Mart may have had critical file monitoring going on, it was either monitoring only a limited number of files and directories, not monitoring for new files, or not following up on alerts in a timely manner.
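
To illustrate why monitoring for new files matters, here is a minimal file integrity monitoring sketch: it hashes a directory tree, compares the result to a stored baseline and reports additions, changes and deletions.  The monitored path and baseline file name are examples only; commercial FIM tools do the same thing with far more polish.

    # Minimal sketch of file integrity monitoring that also catches NEW files:
    # hash a directory tree, compare against a stored baseline, and report
    # additions, changes and deletions. Paths below are placeholders.
    import hashlib, json, os

    def snapshot(root):
        state = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    state[path] = hashlib.sha256(f.read()).hexdigest()
        return state

    def compare(baseline, current):
        added   = sorted(set(current) - set(baseline))
        removed = sorted(set(baseline) - set(current))
        changed = sorted(p for p in current
                         if p in baseline and current[p] != baseline[p])
        return added, removed, changed

    if __name__ == "__main__":
        current = snapshot("/opt/pos")                # monitored directory (example)
        try:
            with open("baseline.json") as f:
                baseline = json.load(f)
            added, removed, changed = compare(baseline, current)
            for p in added:   print("NEW FILE (reconcile to a change record):", p)
            for p in removed: print("DELETED FILE:", p)
            for p in changed: print("MODIFIED FILE:", p)
        except FileNotFoundError:
            pass                                      # first run: no baseline yet
        with open("baseline.json", "w") as f:
            json.dump(current, f)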

Then there is a topic not even mentioned in the PCI DSS but just as important.  Root Cause Analysis (RCA) is something that everyone should conduct in the event of a failure and needs to be an activity conducted as part of an organization’s incident response process.  Because of their RCA process, Wal-Mart found that L0phtcrack was the cause of the server failure.  Since L0phtcrack was not an approved program and was likely installed by an attacker, Wal-Mart personnel broadened their investigation to determine if L0phtcrack was installed on other systems.

While L0phtcrack should be an obvious program that should not be installed, it is not always that easy.  This is why requirements 2.2 and 12.3.7 are important: when conducting an investigation, the investigators know what software is approved and what they should expect to see installed, so they can quickly determine whether a server is running anything else.  Again, I am certain that L0phtcrack would not have been part of those standards.

That even larger investigation led Wal-Mart to determine that over 800 systems and servers had been compromised or targeted for compromise.  The compromise was traced back to a remote access VPN account that had belonged to a former Wal-Mart employee in Canada.  That account had been used by the intruder to enter Wal-Mart’s network and begin the compromise of their systems.  While investigating the breach, Wal-Mart personnel suspended that account and the intruder moved over to another terminated employee’s account.  When they disabled the second account, the intruder moved over to a third terminated employee’s account.

Requirement 8.5.4 states that accounts for terminated employees should be disabled or removed immediately, and this was obviously not followed in this case.  Requirement 8.5.5 states that inactive accounts should be removed if not used for 90 days or more.  Unfortunately, we do not know if any of the accounts had been inactive for more than 90 days.  We also do not know if any of these accounts were disabled.  However, in such a breach, if the attacker has any sort of administrative access, it takes almost no time to activate a disabled account.  That is why an organization needs to remove those accounts as soon as possible, particularly any account that might have administrative privileges.
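
A simple periodic check can catch both conditions.  The sketch below assumes an account export (accounts.csv) with hypothetical columns for enabled status, last logon date and HR termination status, and flags terminated accounts that are still enabled as well as accounts inactive for more than 90 days.

    # Minimal sketch, assuming an export (accounts.csv) with columns
    # username,enabled,last_logon,terminated -- the format is hypothetical.
    # Flags terminated accounts still enabled (8.5.4) and accounts with no
    # logon in more than 90 days (8.5.5).
    import csv
    from datetime import datetime, timedelta

    INACTIVITY_LIMIT = timedelta(days=90)

    def review_accounts(path, today=None):
        today = today or datetime.utcnow()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                enabled = row["enabled"].lower() == "true"
                terminated = row["terminated"].lower() == "true"
                last_logon = datetime.strptime(row["last_logon"], "%Y-%m-%d")
                if terminated and enabled:
                    print(f"REMOVE IMMEDIATELY: {row['username']} is terminated "
                          "but still enabled")
                elif enabled and today - last_logon > INACTIVITY_LIMIT:
                    print(f"INACTIVE > 90 DAYS: {row['username']} "
                          f"(last logon {row['last_logon']})")

    if __name__ == "__main__":
        review_accounts("accounts.csv")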

The investigation quickly focused on one particular Wal-Mart system, point-of-sale (POS).  Documentation from the investigation indicates that the intruder(s) were very focused on POS source code, executables, databases and documentation.  The intruder(s) were so focused on POS, that they even downloaded the latest technical specifications for Wal-Mart’s POS system.  As a result, investigators focused much of their efforts on POS systems at store locations and at corporate.

If not already obvious, investigators inspected log files to determine that the compromise went at least as far back as June 2005.  If you want a concrete example of why log information and proper time keeping are important, and why requirement 10 is so focused on log data and time settings, there is none better than this breach.  Thanks to an obviously large retention of log data, Wal-Mart was able to at least figure out when and where the breach started as well as trace the actions of the intruder(s) through their network and systems.  It is implied that the time settings on servers and network devices must have been fairly closely synchronized, as it is never mentioned that there were time correlation issues in the log data.  Had Wal-Mart had to rely on system and event logs that were contained only on the network devices and servers, the when, where and how of this breach might never have been known.

Unfortunately, the log data was not as complete as it could have been.  As a result, the Wal-Mart investigators were somewhat stymied in their efforts to better understand the breach.  Server logs were only configured to log unsuccessful logon attempts.  As a result, investigators were not able to track the successful logons to the disabled accounts being used by the intruder(s) and therefore could not fully trace the intruder(s)’ actions through the network and systems.  A lot of administrators save log space on internal systems by not logging all activities.  I am also guilty of doing this, as I too used to believe that missing successful attempts internally was not a big deal.  However, as the internal threat has become more and more prevalent, I have changed my opinion and now I log everything I possibly can on all systems.  Requirement 10.2.5 implies that use of all identification and authentication mechanisms is logged, but it does not specifically call out that both successful and unsuccessful attempts are to be logged.
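
To show what complete logon logging buys you, here is a small sketch that summarizes both successful and failed logons per account from an OpenSSH-style auth log.  The log path and line format are assumptions; adapt the pattern to whatever your systems actually record.

    # Small sketch: summarize BOTH successful and failed logons per account from
    # an OpenSSH-style auth log. The log path and line format are assumptions.
    import re
    from collections import Counter

    PATTERN = re.compile(
        r"sshd\[\d+\]: (Accepted|Failed) \S+ for (?:invalid user )?(\S+) from (\S+)")

    def summarize(logfile):
        success, failure = Counter(), Counter()
        with open(logfile, errors="replace") as f:
            for line in f:
                m = PATTERN.search(line)
                if not m:
                    continue
                outcome, user, _src = m.groups()
                (success if outcome == "Accepted" else failure)[user] += 1
        return success, failure

    if __name__ == "__main__":
        ok, bad = summarize("/var/log/auth.log")
        for user in sorted(set(ok) | set(bad)):
            print(f"{user}: {ok[user]} successful, {bad[user]} failed logons")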

The saddest fact of all was that none of this should have been a shock to Wal-Mart IT and security personnel.  Almost six months prior to discovering the breach, Wal-Mart’s QSA had completed their PCI assessment and had found numerous areas where Wal-Mart was not compliant.  A lot of the areas of non-compliance were the direct result of how the breach occurred.

So what are the lessons that should be learned from this incident?

  • Compliance does matter and does result in security.  I do not care whether you follow the PCI DSS, FISMA or any other well known security standard.  The purpose of all security standards is to provide guidance on how to secure hardware and software so that it is difficult to compromise.  If you comply with any of these standards, you greatly enhance your security posture.  However, the best security comes down to more than just complying with a standard.  If an organization really wants to be secure it will have to go beyond just what the standard requires.
  • Security is not perfect.  The purpose of any security program is to limit the damage of incidents when they occur so that they do not get out of control.  All we can expect to gain from a security program is minimizing the risk that an incident results in a breach of sensitive information.  A good friend of mine has a great quote on this point.  He always likes to say, “I just want my security program to be sufficient enough that it makes everyone else an easier target than my company.”  What security standards give you is the information you need to know where the bar is set so that you can make investments to do that little bit more.
  • Most breaches are discovered by accident.  It has been my experience that even with great tools and instrumentation, the discovery of a breach or compromise all comes down to the uncovering of information that results in someone becoming curious and digging further into the incident and discovering that systems and/or data have been compromised.  This is not to say that monitoring and alerting is not worthwhile.  It is just that it is very rare that a breach or compromise is uncovered when the initial alert was issued.  It takes follow up on all of the alerts to actually uncover the breach or compromise.
  • Follow up should be the standard for all alerts, and a documented Root Cause Analysis (RCA) process should be followed as part of an organization’s incident response plan.  This is where most organizations get sloppy and miss the signs of a breach or compromise.  They do not treat all alerts consistently and do not perform the RCA process every time, and therefore early warnings go undiscovered until the situation gets truly serious, such as when a production server crashes.
  • If you do not have at least a year’s worth of log data, you are probably going to be in the dark about how, when and where the compromise occurred.  There is a lot of push back from organizations about hanging onto log data, particularly more than three months’ worth.  A lot of this comes down to the cost of storing such a huge amount of data.  However, had Wal-Mart only had three months’ worth of log data, they never would have known when they had been breached nor the focus of the breach.
  • What gets logged is also very important.  Wal-Mart’s breach would have been a bit easier to investigate had the log data been complete.  Just because you are on the inside of the network is not an excuse to not log everything.  As I have pointed out before, log data is IT’s version of a commercial airliner’s flight data recorder.  Without all of the data, it can be almost impossible to isolate the cause of a compromise.
  • As soon as employees and contractors are terminated, they need to be removed from the access control system.  I know that this can cause issues with some operating environments, but there are workarounds to avoid those complications.
  • And finally, there are no easy ways to ensure security.  Security requires diligence.  Extended diligence typically results in tedium which then results in diligence faltering.  As a result, organizations interested in maintaining their security need to combat tedium by rotating security and operations personnel through positions so that tedium does not set in.  This has an added benefit in improving cross training of personnel.

21 May 10

Passing The Buck

When you are providing services to customers and those services are in-scope for compliance with any of the PCI standards, do not be shocked when your customer’s QSA asks you to prove that you are complying with the relevant PCI standards.  What sort of services are we talking about?  While not a completely inclusive list, here are some of the most common services I run across that are in-scope for PCI compliance.

  • Network management.  This includes management and/or monitoring of firewalls, routers, switches, etc.,
  • Server management.  This includes configuring of servers, patching of servers, add/change/delete of user accounts, monitoring of servers, management of server log files, etc., or
  • Network security management.  This includes management and/or analysis of infrastructure and/or server logs, monitoring of security devices such as firewalls and IDS/IPS, incident response, etc.

The most common point of confusion I run across is with those third parties that are providing network management services.  If the service provider is only providing a telecommunications circuit, then the service provider is not in-scope for PCI compliance.  This fact has been confirmed time and again by the PCI SSC.  However, once you start to be responsible for managing routers, switches or other networking infrastructure, those services are in-scope for PCI compliance.

What I think these service providers forget is that it is not just the storage of cardholder data that is the concern of the PCI standards.  The processing and transmission of cardholder data are also covered.  Now, if cardholder data transmissions are encrypted and the third party does not have the ability to decrypt those transmissions, then the third party is not in-scope.  However, where service providers get in trouble is when the data stream is encrypted at a router that they manage, or when they manage other devices that come into contact with unencrypted data.  They think that because they are off the hook in one instance, they are off the hook for all, which is not the case.

If your company is managing customers’ networks, then explain just how your customers can respond to the following sample of network management compliance tests from the PCI DSS.

  • 1.1.1 – Verify that there is a formal process for testing and approval of all network connections and changes to firewall and router configurations.
  • 1.1.4 – Verify that firewall and router configuration standards include a description of groups, roles, and responsibilities for logical management of network components.
  • 1.2 – Examine firewall and router configurations to verify that connections are restricted between untrusted networks and system components in the cardholder data environment …
  • 1.2.2 – Verify that router configuration files are secure and synchronized—for example, running configuration files (used for normal running of the routers) and start-up configuration files (used when machines are re-booted), have the same, secure configurations.  (A configuration-comparison sketch follows this list.)
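
Here is the configuration-comparison sketch referenced above.  It assumes the netmiko package and a placeholder Cisco IOS router, pulls the running and startup configurations and reports any drift between them, which is the substance of the 1.2.2 test.  Credentials and addresses are examples only.

    # Minimal illustration of the 1.2.2 test, assuming the netmiko package and a
    # placeholder Cisco IOS router; credentials and address are examples only.
    import difflib
    from netmiko import ConnectHandler

    def normalize(config_text):
        # Drop banner/comment lines that legitimately differ between the two copies.
        skip = ("!", "Building configuration", "Current configuration", "Using ")
        return [line for line in config_text.splitlines()
                if line.strip() and not line.startswith(skip)]

    def config_drift(device):
        conn = ConnectHandler(**device)
        running = normalize(conn.send_command("show running-config"))
        startup = normalize(conn.send_command("show startup-config"))
        conn.disconnect()
        return list(difflib.unified_diff(startup, running,
                                         "startup-config", "running-config",
                                         lineterm=""))

    if __name__ == "__main__":
        router = {"device_type": "cisco_ios", "host": "192.0.2.1",
                  "username": "auditor", "password": "example-only"}
        drift = config_drift(router)
        if drift:
            print("Running and startup configurations differ:")
            print("\n".join(drift))
        else:
            print("Running and startup configurations are synchronized.")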

The bottom line is that your customers cannot respond to these requests if your organization is performing these services for them; just ask your customers.  They expect you, as part of your service agreement, to respond to these requests.  Given the ingenuity of entrepreneurs, almost anything can be outsourced for a price, hence each service that is outsourced needs to be addressed individually to determine whether or not it is in-scope for PCI compliance.

For those service providers that are reading this and are still unconvinced, I would ask you this question.  If your organization is not responsible, then who is?  Your customer contracted with you to perform the service; therefore they no longer have the knowledge to respond to anything regarding these requests.  If they cannot respond, then who does?  And I would point out that if a QSA cannot obtain satisfactory responses to these requirements, then the QSA is obligated to mark them as ‘Not In Place,’ which means your customer is not in compliance and must remediate the problem.

I would remind everyone that security is an all or nothing operation.  Either everyone and everything is secure in the business process chain or they are not secure.  All it takes is just one weak link and the party is over.  We live in a very interconnected world and therefore the security of any one entity can make or break the security of all others.

And if you are still unconvinced, I would have you ask your attorney what happens if a breach occurs at one of your customers and is the result of your organization’s failure to comply with one or more of the PCI DSS requirements.  My guess is that your attorney will tell you that you are legally on the hook and that likely all fines, penalties and any other sanctions will be assessed against your organization, not your customer.

And finally, if you are still saying this is all BS, then you better get out of this business because this is what is coming down the line.  QSAs are just the messengers, so do not complain to or about us.  It is the PCI SSC and the card brands that set the rules.  And the PCI SSC is cracking down on QSAs and making sure that we all consistently interpret the PCI DSS and other standards.  So the fact that “no one has asked us about this before” is rapidly coming to an end as every QSA will begin asking for your compliance.

As they like to say, “If it’s too hot in the kitchen, then maybe it’s time to get out.”

17 Apr 10

Managed Networks And PCI Compliance

Here is a subject that keeps coming up, particularly with organizations that manage networks for merchants.  If I manage an organization’s network, is my organization in-scope for PCI compliance?  The answer is yes; the services you are providing place your organization and those services in-scope.

The first response from the network management company is usually, “How can that be, all other telecom companies are out of scope, why not us?”  Quickly followed by, “No other QSA has ever asked us to go through this.”  Remember, the QSA is just the messenger.  Everything stated in this post is from the PCI DSS, the DSS Glossary and other PCI SSC and card brand publications.  This is what the PCI SSC is asking all QSAs to do as part of their PCI assessment work.  If you do not agree, talk to the PCI SSC as they are the only ones that can change the standards.

To answer, “How can that be, all other telecom companies are out of scope, why not us?”

It is very simple. Your organization is not providing just a circuit.  The PCI SSC has been very clear on this.  If all you are providing is a circuit and no other services, then you are out of scope.  The moment your service to a merchant or service provider goes beyond just providing a basic method of transport, you cross into PCI compliance territory.  Basically, the PCI SSC’s interpretation is that if the merchant or service provider has outsourced all or part of a role to your organization, it stands to reason that your organization has assumed that responsibility and, by default, also assumed the relevant PCI compliance responsibility.

But, what if the data is encrypted before it gets to our equipment?  As long as your organization does not have the ability to decrypt the data stream, then your services are out of scope.  However, if the cryptographic process involves your equipment or you manage cryptographic keys, then you are in-scope and must comply with the PCI DSS.

What are your compliance obligations?  Based on my analysis, your organization is involved in or responsible for at least the following PCI DSS requirements: 1, 2, 4, 6, 7, 8, 9, 10 and 12.  Here is my high-level take on what you need to be prepared to document, discuss and/or prove you are doing.

  • Provide policies, standards and procedures for device management.
  • Provide policies, standards and procedures for physical and logical security.
  • Provide a copy of your incident response plan.
  • Provide access control definitions for groups and roles that manage the devices.
  • Provide job descriptions for the personnel that manage the devices.
  • Document all protocols/services that are used for managing the devices including a business reason for each of the protocols/services.
  • Provide configurations of a sample of physical devices.  Sampling is allowed as long as the service provider can prove that it has implemented a standard process for managing the devices in question.
  • Provide documentation that supports that device configurations are properly backed up and secured.
  • Provide documentation that supports that device configurations that are running and those that are stored are one and the same.
  • Verification that all relevant policies, standards and procedures are followed in configuring new devices.
  • Verification that documented protocols/services are the only ones configured on the managed devices.
  • Verification that security is properly implemented on all managed devices.
  • Verification that appropriate access control systems are implemented on the managed devices.
  • Verification that remote access is secure.
  • Verification that all user accounts are appropriately managed and controlled.
  • Verification that all logging is implemented and logs are reviewed at least daily.
  • Verification that log information is properly secured.
  • Verification that time synchronization is properly implemented on each device.
  • Verification that some form of critical file monitoring is being performed.
  • Verification that there is a formal change management process in place including testing.
  • Verification that any cryptographic keys are properly managed and secured.
  • Verification that all devices have been appropriately patched and that there is a patch management process in place.
  • Verification that appropriate physical security controls are in place.
  • Verification that logs are maintained for any backups stored off-site.
  • Verification that alerts are responded to as documented in the incident response plan.

Now for the second comment, “No other QSA has ever asked us to go through this.”  If no other QSA has asked you to go through this, shame on them.  This is why the PCI SSC implemented its Quality Assurance program so that all QSAs start doing the same level of work.  This is also why there is such a variance in QSA costs.  We are finding that the QSAs that are the cheapest are the ones that are not being appropriately rigorous with their assessment of an organization against the PCI DSS.  As the PCI SSC takes more QSAs through the QA process and puts them through remediation, things will change and assessment costs will become more consistent.

01 Feb 10

Threat Landscape Is Changing – Advanced Persistent Threat

If you are not familiar with Advanced Persistent Threat or APT, you had better get yourself up to speed as soon as possible.  This is a threat that will likely catch you flat footed if you are not addressing it.  As a member of InfraGard I was made aware of APT a year or so ago, but it was a great report recently produced by MANDIANT Corporation that really brought this threat into perspective.  I cannot stress enough how urgently you should go to their Web site and request a copy of their latest M-TRENDS report.  It covers this topic in much more detail and is very enlightening.

APT is not your usual attack.  As the name implies, it is a very skilled, long-term siege on your network and computer systems.  The attack is conducted slowly and carefully so as not to trigger any alerts at the target.  These are teams of very skilled professionals, not hacktivists, script kiddies or even organized crime groups.  As far as anyone can figure out, these professionals are state sponsored based on the scale and logistics of their operations.  Their “job,” so to speak, is to compromise networks and systems for the purpose of gaining access to information.  What makes APT particularly insidious is that they set things up so that they can keep coming back.  What makes APT even more effective is that regardless of the countermeasures put in place to thwart attacks, these people have the resources and knowledge to work around those countermeasures.  In effect, APT brings my adage to life: “If someone wants to get you bad enough, they will do whatever it takes to make that happen regardless of what you do to prevent it.”

While I know that you are likely saying to yourself that your organization would not be on the APT radar, think again.  If you have a presence on the Internet, whether that is ecommerce, a static Web site or even an email server, you are a potential target of APT.  And while you may not have information they want, you may have a business partner that they wish to compromise, and they will use your network to get a way into your business partner.  This all goes back to a post I made a while back regarding the fact that we are all interconnected these days, one network to another and so on.  So while APT may not be able to directly get into a target, they may be able to compromise a network attached to the target and get in that way.  As a result, we all need to take precautions to ensure we have each other’s backs.

The M-TRENDS report goes into great detail on the methods used, so I will not bore you here with those details.  But some of the takeaways I got from the report are as follows.

  • These are very sophisticated attacks and require a level of sophistication in information security that most organizations do not practice.  As a result, if you intend to stay out of APT’s clutches, you are going to have to raise the bar on your information security program significantly.  Raising the bar does not necessarily mean spending more money on the latest and greatest security technologies.  On the contrary.  APT wants targets that think security technology is the only way to secure an organization.  It is organizations that rely heavily on technology and ignore or belittle security training that they prey upon.  This means that you need to focus your efforts on things like training and being more diligent on log reviews and alert follow up.  The requirements in PCI DSS requirement 10 go a long way in assisting you with finding anomalous network traffic and the like.
  • APT relies on heavy reconnaissance of networks and the gathering of information to be used in their social engineering attacks.  Their initial forays into your network will likely be as innocuous as port and vulnerability scans as well as spidering all of your public Web pages and LinkedIn, Facebook, MySpace, Twitter, etc.  While you can do very little about the port and vulnerability scanning, you can do quite a bit about spidering.  Now is the time to reconsider the information you post publicly on your Web sites.  It is also time to start managing the information that is ending up out on social networking sites.  A just published study in the UK indicated that information regarding a number of top secret military projects could be found on various social networking Web sites.  If that is the case for really hush-hush projects, imagine the sorts of information that could be garnered about your own organization.  Remember, it is this sort of information gathering that has caused most of the break-ins to celebrities’ and politicians’ email and social sites.  In addition, all of this ‘personal’ information is just a quick Advanced Google search away.
  • With social engineering as one of the big keys to APT, it is time to get serious about training your personnel.  APT attackers use a number of targeted social engineering techniques such as ‘spear phishing’ to gain ways into an organization.  If you still think social engineering training is useless, here is the biggest reason I can give you to get serious about it.  Training does not have to be boring, but it does need to convey a sense of urgency and the extreme risk presented.  Just having people read the M-TRENDS report and then discussing it would likely go a long way toward motivating people to think before they do something they will regret later.
  • The malware used by APT is very sophisticated and is constructed in such a way as to thwart most current anti-virus and anti-malware solutions.  In addition, APT malware is regularly updated to continue to blind these solutions.  As a result, relying on these solutions alone is not feasible.  You will need other measures in place to ensure your security, such as critical file monitoring, file signature hashing and similar measures.  I am not suggesting that you take these measures on all your systems, but you probably should consider them on systems that contain critical data or have access to critical data.  There are a number of PCI DSS requirements that can help you with this, but the biggest is requirement 10 again, followed by requirement 11.5.
  • You will likely need to make your network segmentation even more granular.  As I stated in the last bullet, you do not want to have to put these countermeasures on every system you have.  Unfortunately, unless you further tweak your network segmentation to keep sensitive systems and non-sensitive systems apart, you are not going to keep APT at bay.  Granularity does not mean more VLANs or segments; it more likely means more or tighter ACLs to control access to information.
  • To hide their activities, APT uses encrypted data streams between their malware and their command and control systems.  As a result, traditional network traffic monitoring will not help unless you are monitoring for “unknown” encrypted traffic, that is, encrypted data traffic that is not “normal” for your network.  This can also be addressed by controlling outbound traffic to unknown destinations.  (A minimal outbound-traffic check sketch follows this list.)
  • Finally, a lot of these attacks are from known locations such as China.  If your organization is not conducting business outside of the United States, why is your firewall configured to accept traffic from anywhere on the Internet?  For that matter, why does your firewall allow outbound connections to foreign countries?  All of this is configurable if you take the time to enable it in your firewalls, but most organizations never go to that length.  Now you have a big reason why to start restricting traffic in and out of your network like you should have been doing all along.
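
Here is the minimal outbound-traffic check sketch referenced in the list above.  It assumes a hypothetical CSV export of outbound firewall connections and a short list of approved destination networks; anything headed somewhere not on the list gets flagged for review.

    # Rough sketch: flag outbound connections to destinations that are not on an
    # approved list. The log format (outbound.csv: timestamp,src,dst,dport) and
    # the allow-list are hypothetical; feed it whatever your firewall exports.
    import csv
    import ipaddress

    APPROVED_DESTINATIONS = [
        ipaddress.ip_network("203.0.113.0/24"),   # example: payment processor
        ipaddress.ip_network("198.51.100.0/24"),  # example: business partner
    ]

    def is_approved(dst):
        addr = ipaddress.ip_address(dst)
        return any(addr in net for net in APPROVED_DESTINATIONS)

    if __name__ == "__main__":
        with open("outbound.csv", newline="") as f:
            for row in csv.DictReader(f):
                if not is_approved(row["dst"]):
                    print(f"REVIEW: {row['src']} -> {row['dst']}:{row['dport']} "
                          f"at {row['timestamp']} (destination not on approved list)")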

The PCI DSS has a number of controls in it that, if properly implemented and monitored, would go a long way in making APT’s activities more difficult.  However, that is the rub.  Unfortunately, most organizations do not execute the PCI DSS consistently and therefore they can end up being owned by APT.  And just complying with the PCI DSS is not necessarily going far enough, so you need to go beyond it to ensure your network’s security.

Always remember security is not and never will be perfect.  Your goal then is to make the life of APT as miserable as possible so when they come calling, they will likely go somewhere else to get what they want.  However, if you are their ultimate target, then you need to be sharp as they will do whatever it takes to get in.

Update: According to Jerry Dixon, director of analysis at Team Cymru, APT is no different than any other attack.

20 Sep 09

Wireless Security Update

I had an opportunity this week to be involved in some testing of Motorola’s AirDefense wireless security solution at a client where we were conducting their annual PCI Security Assessment.  I wrote in a post a while back about wireless IDS/IPS and discussed my findings from that testing a couple of years ago.  With everything in technology, as time passes, things change.  Fortunately, things change for the better, but there are still potential pitfalls.  This should not be viewed as an endorsement of the AirDefense solution.  It just happens to be the solution that we were able to test.

Again, as I have stated previously, I am not going to delve into the details of how to accomplish making a network device ‘stealthy’ because I do not want to give anyone ideas or a leg up.  However, it should be noted that doing this is not difficult.

Two years ago, we configured some wireless access points to be very stealthy, located them around a facility and then had AirDefense attempt to locate and identify them.  AirDefense was able to identify half of the rogue access points as potentially rogue, but it was not able to confirm that these devices were in fact within the confines of the facility.  The bottom line: half of the devices were not identified, so you had a 50-50 chance of finding a potentially rogue access point.  Not very good odds in the security business.

Now, two years later, I had the opportunity to conduct a very similar test.  First, this test was not quite as similar as I would have liked, as we used cable/DSL routers with integrated wireless b/g access points.  The reason the testing would not match the last round is that the test devices used this time have built-in firewalls that protect the wireless, versus the access points we tested with last time, which had no built-in security features beyond WEP.  One of these devices we kept stock, using only the vendor supplied security capabilities.  On the other device we loaded DD-WRT and made significant changes to make the access point and the device itself as stealthy as possible.  In addition, we only had two devices to use versus a variety of devices two years ago.  However, using vendor firmware and DD-WRT should give us a good set of tests.

In our first test, we plugged the routers in (power as well as the network WAN port on the router) and let them run in native, non-stealthy mode so that we could see how AirDefense would respond for our baseline.  As expected, AirDefense performed flawlessly, found and identified these devices without a problem and, about two minutes later, delivered the security alerts to the client’s security and networking personnel.  Unfortunately, our client is in the process of implementing more AirDefense sensors because they just moved into their new expansion space, which is where we were located.  As a result, AirDefense could not pinpoint the location of the rogue devices.  However, with a few more sensors, they could have narrowed the search area and somewhat triangulated on the location of the rogue devices.

For our second test, we configured both devices to be as stealthy as possible.  Because of the limitations of the vendor configuration software, we were not able to configure that unit as stealthy as the one running DD-WRT.  The client reset the AirDefense database and we plugged everything in again.  The router running the vendor software was identified very quickly just as in the original test.  The DD-WRT device took some human intervention to determine that it was likely a rogue device.  However, the good news is that this time the DD-WRT device was found by AirDefense unlike two years ago when most of these devices were not found.

During our debrief with the client’s security personnel, we identified some frustrations with AirDefense.  The biggest is that, with the prevalence of wireless, the sensors are flooded with signals out in their retail locations.  While AirDefense claims that, with an appropriate number of sensors, it should be able to sift through the chaff, my client’s experience is that it does not.  They have spent a significant amount of time tuning the system so that spurious signals have a minimal impact, but they have found that all it takes is an adjacent retailer or even a nearby homeowner adding wireless and AirDefense goes on alert, regardless of the number of sensors.  This client has installed AirDefense at only 20% of their locations, but they tell us that the number of daily alerts can be mind-boggling and a lot of work to clear.  While the client’s staff has slogged through these alerts day after day, management is obviously very concerned about maintaining this level of diligence as the rollout completes.

Another problem they run into is with the coffee shops located within their retail locations.  It is not the coffee shops’ separate access points that are the issue, as those have been tuned out.  No, it is the coffee shop customers’ notebooks and netbooks.  Most of these devices’ wireless adapters are misconfigured and are acting as access points as well as wireless clients.  This creates the bulk of the alerts within their retail facilities and masks a lot of the real alerts.

The other point that the client’s security personnel wanted passed along is that an AirDefense type of solution is not a guarantee that you will identify every rogue access point.  Most of this problem is related to the human element.  All it takes is a lapse in diligence and you can end up with problems.  This was brought home the week before we arrived, when the client’s resident wireless security guru was on vacation.  While the guru was away, a couple of alerts were written off by the backup analyst because of that person’s inexperience.  It turned out that these alerts were real problems that required action once the guru returned and uncovered them.  There will be more remedial education for the other security personnel on the AirDefense system.  However, the bigger change will be making sure that the guru is not the only one making the call on what gets investigated.  With that responsibility spread across more people, the hope is that coverage will be more consistent when the team is not at full strength.

In the end, I am glad to report that wireless IDS/IPS is advancing.  However, it is not a silver bullet, nor do I expect it ever to be.  It still requires humans to make the call on what to investigate and what to ignore.  That requires skill and experience with the tool in a particular environment, and that skill and experience take time to develop.  So, just because you have implemented wireless IDS/IPS does not mean that you are immediately protected.  Your security personnel will still have to ramp up on the tool in your environment.

06
Jun
09

The Shortcomings Of Wireless IDS/IPS

In my first post, I discussed the wireless analyzer approach to complying with requirement 11.1.  I documented where I think the current techniques fall short and give organizations a false sense of security.  In this post, I am going to give you what I think are the shortcomings of wireless intrusion detection/prevention systems.

Wireless IDS/IPS solutions seem to break down into two types: those that work like the wireless analyzer approach from my previous post on this subject and those that work like a traditional IDS/IPS.  Let us discuss the analyzer-style IDS/IPS first.

Of the analyzer-style IDS/IPS products I have had demonstrated, most work essentially the same way as the wireless analyzer methods I discussed in my last post.  These products typically pair wireless sensors connected to your network with a central server that also analyzes the wired network when a suspect rogue AP is discovered.  The wireless sensor acts as a wireless spectrum analyzer to locate potential rogue APs, the idea being that multiple sensors can triangulate on a rogue AP and provide its location.  The limited ability of these sensors to accurately locate APs outside of a 15-foot radius can make things dicey and potentially expensive.  Therefore, for large facilities, you can expect to spend a lot on sensors for full protection.  For example, an average Wal*Mart is around 100,000 square feet.  To provide adequate coverage, the average Wal*Mart store would require approximately 445 sensors.
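
To show where a figure in that neighborhood comes from, here is the back-of-the-envelope arithmetic.  This is only a sketch under stated assumptions: each sensor is trusted to cover roughly a 15-foot by 15-foot cell, the store is 100,000 square feet as cited above, and the per-sensor cost is a hypothetical illustration, not a vendor price.

```python
# Rough sensor-count estimate for a large retail floor.
# Assumptions (illustrative, not vendor specifications):
#   - each sensor is only trusted to locate APs within about a 15-foot radius,
#     so we conservatively budget one sensor per 15 ft x 15 ft cell
#   - store size of 100,000 sq ft, as cited above
import math

store_area_sqft = 100_000
cell_side_ft = 15
cell_area_sqft = cell_side_ft ** 2           # 225 sq ft per sensor cell

sensors_needed = math.ceil(store_area_sqft / cell_area_sqft)
print(f"Sensors needed: {sensors_needed}")   # roughly 445 sensors

# Even at a hypothetical $500 per sensor, hardware alone tops $200,000 per
# store before installation, cabling and the central server are considered.
hypothetical_unit_cost = 500
print(f"Hardware alone: ${sensors_needed * hypothetical_unit_cost:,}")
```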

On the wired side of things, these analyzer IDS/IPS solutions, along with the exclusively wired solutions, look for rogue network traffic, ICMP responses, MAC addresses and/or SNMP information that indicate a device is a rogue AP.  In the end, they sound sophisticated, but they still rely on the rogue access point being configured in a way that lets it be discovered.
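
To make the wired-side technique concrete, here is a minimal sketch of the sort of MAC/OUI check these products perform, assuming you can export a switch CAM table or DHCP lease list to a text file.  The file name and the OUI prefixes are placeholders of my own; a real check would be populated from the IEEE OUI registry.  And, as the next paragraph points out, a rogue AP with a spoofed MAC address sails right past it.

```python
# Minimal sketch of a wired-side rogue AP check based on MAC address OUIs.
# Assumptions: seen_macs.txt (example name) contains one MAC address per line,
# exported from a switch CAM table or DHCP lease list. The OUI set below is a
# hypothetical placeholder; populate it from the IEEE OUI registry with the
# prefixes of consumer access point vendors you care about.

CONSUMER_AP_OUIS = {
    "AA:BB:CC",   # placeholder OUI for "consumer AP vendor A"
    "DD:EE:FF",   # placeholder OUI for "consumer AP vendor B"
}

def oui(mac: str) -> str:
    """Return the first three octets of a MAC address, normalized."""
    return mac.upper().replace("-", ":")[:8]

def flag_suspect_aps(mac_file: str) -> list[str]:
    suspects = []
    with open(mac_file) as f:
        for line in f:
            mac = line.strip()
            if mac and oui(mac) in CONSUMER_AP_OUIS:
                suspects.append(mac)
    return suspects

if __name__ == "__main__":
    for mac in flag_suspect_aps("seen_macs.txt"):
        print(f"Possible rogue AP on the wired network: {mac}")
```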

Attackers know how these solutions operate and configure their rogue APs to deter or even avoid these identification techniques.  As a result, these more sophisticated solutions are also blind to the truly rogue AP.

In addition to these obvious issues, false positives can be quite a problem for solutions that monitor the wireless spectrum.  This is particularly true where APs are regularly added outside of your facilities.  And with wireless becoming more and more common, that can keep your security team quite occupied sorting through false positives to find the real potential threats.

And then there is the assumption that 802.11 devices are the only source of compromise.  If an attacker is going to go to the length of compromising your network, why would they not use cellular technology and avoid 802.11 altogether?  With 3G cellular networking all the rage, the speed of these cellular solutions is no longer a limiting factor.  None of these solutions truly addresses the cellular issue, so there is still a vulnerability.  Unfortunately, the security vendors, PCI SSC and card brands seem to only react to incidents, not think ahead.  So, until a breach occurs involving cellular, we will likely not see anything to address this risk.

And what about other forms of wireless such as Bluetooth and satellite?  Before you write them off as not having enough transmission distance or as being too complicated and expensive, remember that it is exactly that short-sightedness that will get you in trouble.  Believe it or not, there are Bluetooth USB adapters that have ranges of up to 350’.  In addition, pairing and default security codes are well documented by vendors, so attaching to many Bluetooth devices is an easy proposition.  Bluetooth can be used to load malware on a system and begin the compromise process.  If you think satellite is the last safe wireless solution, at this year’s Black Hat, Adam Laurie discussed not just hacking satellite TV but also data transmissions.

In the end, the important thing to remember is that the public airwaves are just that – ‘public’.  And you must treat them as public or you will get burned.

In a future post, I will discuss my thoughts on how I think the PCI DSS should address these shortcomings.

28
May
09

The Shortcomings Of Wireless Scanning

I’m probably going to really stir the pot with this and my coming posts, but I think this is an important subject to discuss.  I don’t have all the answers on this topic, but I know that the current approaches I see out there are just not providing the level of security that I think is needed.  So, to paraphrase Bette Davis from ‘All About Eve’, “Fasten your seat belts. It’s going to be a bumpy post.”

PCI DSS requirement 11.1 states:

“Test for the presence of wireless access points by using a wireless analyzer at least quarterly or deploying a wireless IDS/IPS to identify all wireless devices in use.”

The requirement gives you two options: use some sort of wireless analyzer or implement a wireless IDS/IPS solution.  Of course, there is also the third option of meeting this requirement with a compensating control.  In this post, I will discuss the shortcomings of the wireless analyzer approach.

A lot of my clients taking the wireless analyzer approach typically have someone (usually an internal auditor or IT support person) go out to as many of their facilities as possible with a notebook computer, a wireless network card or the notebook’s built-in wireless adapter, and a free tool like NetStumbler or Kismet.  This person then walks the interior of the facility and the exterior perimeter using the tool to record what wireless is discovered, saving the results to a file.  A pretty straightforward process – quick, easy, done.
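
For what it is worth, that “quick, easy, done” capture usually amounts to nothing more sophisticated than the following sketch.  It assumes a Linux notebook with the wireless-tools package installed, an interface named wlan0 and root privileges; the output file name is just an example.

```python
# Quick-and-dirty wireless walk-around capture, roughly what the notebook
# operator ends up with. Assumes Linux with the wireless-tools package
# (the iwlist command, typically run as root) and an interface named wlan0.
import subprocess
from datetime import datetime

interface = "wlan0"                       # assumed interface name
outfile = f"wireless_scan_{datetime.now():%Y%m%d_%H%M%S}.txt"

# iwlist scans and dumps every cell it can hear: ESSID, BSSID (Address),
# channel, signal level, encryption mode, etc.
result = subprocess.run(
    ["iwlist", interface, "scan"],
    capture_output=True, text=True, check=False
)

with open(outfile, "w") as f:
    f.write(result.stdout)

print(f"Scan results saved to {outfile}")
```

The point is not that this is hard to do; it is that, as described below, the resulting file typically gets archived as “evidence” and never looked at again.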

While this process meets the PCI compliance requirement, it certainly does not ensure security or that there are no unauthorized wireless devices on the network.  This is because, in most instances, the results are never analyzed to ensure that only authorized wireless was discovered.  However, even if an organization were to analyze the results produced by NetStumbler or Kismet, it would be hard pressed to draw any conclusions from them since you really have to analyze the results in real time, not after the fact.

Besides the fact that results are not analyzed, I seriously doubt most of my clients have the technical expertise to conduct an informed analysis of output from a wireless scanner like NetStumbler or Kismet.  And, to add insult to injury, the test for 11.1.a states, “Verify that a wireless analyzer is used at least quarterly …”  Nowhere does the PCI DSS state that you must analyze the results of the analyzer; you just need to use a wireless analyzer quarterly.  The end result is that most people, even those in the information security profession, and the organization’s management believe that this is sufficient to ensure the security of their networks.  In my opinion, this is a VERY false sense of security.
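
If you are going to use the analyzer approach at all, the bare minimum analysis is to compare what was heard against a list of the access points you know you own.  Here is a minimal sketch of that comparison, assuming the scan output has been reduced to one BSSID (MAC address) per line; both file names are examples of my own, not part of any tool.

```python
# Bare-minimum analysis of a wireless walk-around: compare discovered BSSIDs
# against a list of authorized access points and flag everything else.
# Assumptions: discovered_bssids.txt and authorized_bssids.txt (example names)
# each contain one MAC address per line.

def load_macs(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

discovered = load_macs("discovered_bssids.txt")
authorized = load_macs("authorized_bssids.txt")

unknown = discovered - authorized
if unknown:
    print("Unauthorized or unknown access points heard during the scan:")
    for mac in sorted(unknown):
        print(f"  {mac}")
else:
    print("Only authorized access points were discovered.")
```

Even this trivial comparison is more analysis than most quarterly walk-arounds ever get, and it still tells you nothing about an access point configured not to be heard in the first place.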

So, what do I see as the shortcomings of just scanning with a notebook, NetStumbler/Kismet, etc.?

  • The majority of wireless scanning is done using an omni-directional antenna.  Most wireless cards use built-in antennas that are omni-directional, meaning they can receive signals from any direction, and many external antennas are omni-directional as well.  The problem is that an omni-directional antenna does not provide the best method of locating potential rogue access points since it is difficult to determine an access point’s location from the direction of its signal.  It takes a significant amount of walking around and detailed monitoring of signal strength to get a fix on a given access point (see the rough path-loss sketch after this list).  It’s not that it cannot be done; it’s that it is potentially a lot of work, which makes it difficult for all but the most experienced operators of wireless scanners.  As a result, it can take a significant amount of time to locate all of the wireless access points in a facility and prove that they are all valid.
  • This wireless scanning approach assumes the attacker wants to be found or is unaware of wireless security techniques.  One of the things that fascinates me about wireless scanning is that it assumes someone wants the access point to be found.  A smart attacker would configure their rogue access point so that it is electronically ‘hidden’ on your network (I’m being purposely vague here to avoid giving away the entire store, but be assured this can be accomplished).  Not that an AP configured this way cannot be found, but finding it using the basic scanning techniques I’m talking about will be extremely difficult.  As a result, with the right attacker, you will stay compromised until you take your approach to a higher level.
  • If you identify a rogue access point, then what?  Obviously, you want to remove it from your network as soon as possible.  However, most retailers I work with would be hard pressed to get this done as quickly as they would like because of a lack of qualified personnel in the field who can locate the rogue unit and then remove it.  As I stated earlier, it will be difficult to find a properly configured rogue access point, so the likelihood that you will even identify such a device is low.
  • Then there is the whole problem of repeat attacks: if you were hit once, what will stop the attacker from coming back?  With access points going for as little as $5 on eBay and the like, including shipping, it is highly likely that if you find an attacker’s access point, they can absorb the loss and quickly replace it.
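
To see why locating an access point from signal strength alone is so fiddly, consider the idealized free-space path-loss model below.  This is only a rough sketch with assumed numbers (the transmit power, frequency and signal levels are illustrative); real buildings add walls, shelving and multipath, so the distance estimate is even worse in practice.

```python
# Idealized free-space path-loss model: how far away is an AP for a given
# received signal strength? All numbers here are illustrative assumptions.
import math

def estimated_distance_ft(rssi_dbm: float,
                          tx_power_dbm: float = 20.0,      # assumed AP tx power
                          freq_mhz: float = 2437.0) -> float:  # 2.4 GHz channel 6
    """Distance estimate (feet) from the free-space path-loss equation:
    FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    """
    fspl_db = tx_power_dbm - rssi_dbm
    d_km = 10 ** ((fspl_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)
    return d_km * 3280.84  # kilometers to feet

for rssi in (-40, -50, -60):
    print(f"RSSI {rssi} dBm  ->  ~{estimated_distance_ft(rssi):.0f} ft")

# A 10 dB swing in the reading -- easily caused by antenna orientation or a
# shelf in the way -- roughly triples the distance estimate, which is why
# walking the floor watching signal strength takes so much patience.
```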

I’ve taken enough of everyone’s time explaining where I think the wireless analyzer approach falls short.  Coming up are my thoughts on the wireless IDS/IPS approach.



