Archive for September, 2009


Wireless Security Update

I had an opportunity this week to be involved in some testing of Motorola’s AirDefense wireless security solution at a client where we were conducting their annual PCI Security Assessment.  I wrote in a post a while back about wireless IDS/IPS and discussed my findings from similar testing a couple of years ago.  As with everything in technology, things change as time passes.  Fortunately, things have changed for the better, but there are still potential pitfalls.  This should not be viewed as an endorsement of the AirDefense solution; it just happens to be the solution that we were able to test.

Again, as I have stated previously, I am not going to delve into the details of how to make a network device ‘stealthy’ because I do not want to give anyone ideas or a leg up.  However, it should be noted that doing this is not difficult.

Two years ago, we configured some wireless access points to be very stealthy, located them around a facility and then had AirDefense attempt to find and identify them.  While AirDefense identified half of the rogue access points as potentially rogue, it was not able to confirm that these devices were in fact within the confines of the facility.  The bottom line: half of the devices were not identified at all, so you had a 50-50 chance of finding a rogue access point.  Not very good odds in the security business.

Now, two years later, we conducted a similar test.  The test was not quite as similar as I would have liked, as we used cable/DSL routers with integrated wireless 802.11b/g access points.  The reason the testing differs from last time is that the devices used this time have built-in firewalls protecting the wireless side, whereas the access points we tested last time had no built-in security features beyond WEP.  We kept one device stock, using only the vendor-supplied security capabilities.  On the other device we loaded DD-WRT and made significant changes to make the access point and the device itself as stealthy as possible.  In addition, we only had two devices to use versus a variety of devices two years ago.  However, using vendor firmware and DD-WRT should give us a good set of tests.

In our first test, we plugged the routers in (electrical power as well as the network WAN port) and let them run in native, non-stealthy mode so that we could see how AirDefense would respond as a baseline.  As expected, AirDefense performed flawlessly, finding and identifying these devices without a problem, and about two minutes later it delivered security alerts to the client’s security and networking personnel.  Unfortunately, our client is still in the process of installing additional AirDefense sensors in the expansion space they had just moved into, which is where we were located.  As a result, AirDefense could not pinpoint the wireless devices.  However, with a few more sensors, the client could have narrowed the search area and roughly triangulated the location of the rogue devices.

For our second test, we configured both devices to be as stealthy as possible.  Because of limitations in the vendor’s configuration software, we were not able to make that unit as stealthy as the one running DD-WRT.  The client reset the AirDefense database and we plugged everything in again.  The router running the vendor firmware was identified very quickly, just as in the first test.  The DD-WRT device took some human intervention to determine that it was likely a rogue device.  However, the good news is that this time the DD-WRT device was found by AirDefense, unlike two years ago when half of these devices were not found.

During our debrief with the client’s security personnel, we identified some frustrations with AirDefense.  The biggest is that, with the prevalence of wireless, the sensors at their retail locations are flooded with signals.  While AirDefense claims that, with an appropriate number of sensors, it should be able to sift through the chaff of signals, my client’s experience is that it does not.  They have spent a significant amount of time tuning the system so that spurious signals have a minimal impact, but they have found that all it takes is an adjacent retailer or even a homeowner adding wireless and AirDefense goes on alert, regardless of the number of sensors.  This client has installed AirDefense at only 20% of their locations, but they tell us that the number of daily alerts can be mind-boggling and a lot of work to clear.  While the client’s staff has slogged through these alerts day after day, management is understandably concerned about maintaining this level of diligence as the rollout completes.

Another problem that they run into is with the coffee shops located within their retail locations.  However, the issue is not with the separate access points that these shops operate, as those have been tuned out.  No, it is the coffee shop customers’ notebooks and netbooks that are the problem.  Most of these devices’ wireless adapters are misconfigured and act as access points as well as wireless clients.  This creates the bulk of the alerts within their retail facilities and masks a lot of the real alerts.

The other point that the client’s security personnel wanted passed along is that an AirDefense type of solution is no guarantee that you will identify every rogue access point.  Most of this problem is related to the human element.  All it takes is a lapse in diligence and you can end up with problems.  This was brought home the week before we arrived, when the client’s resident wireless security guru was on vacation.  A couple of alerts were written off by the guru’s backup because of that person’s inexperience.  It turned out that these alerts were real problems requiring action, which was only discovered when the guru returned.  There will be more remedial education on the AirDefense system for all of the other security personnel.  However, the bigger change will be making sure that the guru is not the only one deciding what gets investigated.  With that responsibility spread across more people, it is hoped that coverage will be more consistent when key personnel are unavailable.

In the end, I am glad to report that wireless IDS/IPS is advancing.  However, it is not a silver bullet, nor do I expect it ever to be.  It still requires humans to make the call on what to investigate and what to ignore.  That requires skill and experience with the tool in a particular environment, and that skill and experience take time to develop.  So, just because you have implemented wireless IDS/IPS does not mean you are immediately protected.  Your security personnel will still have to ramp up on the tool in your environment.


Reliance On Other QSAs’ Work

This is an under-discussed problem that I believe needs to be clarified by the PCI SSC before we all get into some real trouble.

Outsourcing and the use of third parties to perform critical business functions are now so common that very few organizations perform all of their business functions in-house.  From outsourced support and maintenance of servers and networks to contracted call centers, all sorts of outside organizations manage and take care of non-core-competency business functions.  Unfortunately, a lot of these third parties end up in scope for PCI compliance because they come into contact with cardholder data in some form.  For many of these third parties, being in scope for PCI compliance will come as a surprise.  As a result, they will have to either endure PCI assessment after assessment or have their own PCI Report On Compliance (ROC) created.

So, what is the problem with these third-party PCI ROCs?  There are no rules governing how to use them, when to use them, or whether to use them at all.  As a result, some QSAs will not accept them, some accept them with conditions, and some blindly trust them without any scrutiny.  I would like to put forth some suggested rules regarding third parties and their PCI compliance reporting requirements.

The biggest problem we have to resolve, in my opinion, is with the ROC process itself.  ROCs are a view of an organization’s compliance with the PCI DSS as of a point in time – the ROC’s reporting date.  This is troublesome because other reports on an organization’s control environment, such as those created for Sarbanes-Oxley (SOX) or a Statement on Auditing Standards (SAS) 70, cover a period of time, typically 12 months.  As a result, the question becomes how a QSA can rely on a report that only gives assurance as of a particular date when the QSA needs assurance as of a later date.  In the long term, I would suggest that the PCI SSC make the ROC assessment process a review of controls over a period of time, preferably 12 months.  In the near term, I would suggest that a third party’s ROC be used to determine the level of additional testing that needs to be performed to give the QSA comfort that the third party is truly in compliance with the PCI DSS.  In addition, if the ROC is dated more than six months before the organization’s ROC reporting date, the level of testing should be higher than if the report is less than three months old.
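The age-based tiering I am suggesting could be sketched roughly as follows.  To be clear, the thresholds and testing levels here are my own illustration, not anything mandated by the PCI SSC:

```python
from datetime import date

def additional_testing_level(roc_date: date, assessment_date: date) -> str:
    """Suggest how much extra QSA testing a third party's ROC warrants,
    based solely on the ROC's age at the assessing QSA's reporting date.
    These tiers are illustrative suggestions, not PCI SSC requirements."""
    age_months = ((assessment_date.year - roc_date.year) * 12
                  + assessment_date.month - roc_date.month)
    if age_months > 12:
        return "full"      # ROC too old to rely on; assess the services yourself
    if age_months > 6:
        return "expanded"  # substantial re-testing of key requirements
    if age_months >= 3:
        return "moderate"  # spot-check high-risk requirements
    return "minimal"       # recent report; verify scope and sampling only

# Example: a ROC dated January 2009 relied on in September 2009 (8 months old)
print(additional_testing_level(date(2009, 1, 15), date(2009, 9, 15)))  # expanded
```

The point of the sketch is simply that reliance should degrade with age in defined steps, rather than being an all-or-nothing decision left to each QSA.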

The next problem is that a QSA has the option not to accept a ROC as evidence of an organization’s compliance with the PCI DSS.  I would suggest that while this option remain available, a QSA should have to document in detail why they chose not to accept the ROC rather than simply rejecting it out of hand.  That said, a QSA should not blindly trust a ROC either.  A QSA needs to determine how much additional testing to conduct to ensure that the third party is still in compliance.  That testing should be tempered by the age of the ROC (the older the report, the more testing should be performed), the services covered by the ROC (any service not covered needs to be assessed separately), and the sampling discussion in the Executive Summary (to determine whether the third party’s QSA did the necessary testing).  Use of any third party’s ROC should be documented in the Executive Summary so that all reviewers understand that there was reliance on another QSA’s work.

Of course, all of this is predicated on every third party having a QSA prepare a ROC covering all of the services used by its PCI compliance customers.  I would suggest a ROC be the only acceptable report for a number of reasons.  Since the third party will not necessarily know which of its customers are Level 1 merchants or service providers and which are some other level, a ROC just makes sense.  A Self-Assessment Questionnaire (SAQ) is not acceptable because it does not have the level of detail needed to respond to the testing required in a ROC.  It is possible to summarize from a ROC down to an SAQ, but almost impossible to go from an SAQ up to a ROC’s level of detail without going back to the third party for additional information and clarification.  The purpose of this process is, after all, to minimize the impact on the third party from all of its customers conducting their own PCI compliance processes.

Finally, there is the issue of the age of a third party’s ROC.  Obviously, if the ROC is more than 12 months old, there will be concerns about the relevance of the report to your assessment.  This is also a problem with SOX and SAS 70 reports, so possible solutions already exist.  Auditors who produce SOX and SAS 70 reports can issue an interim letter regarding the status of the organization’s compliance.  The same process could be used for ROCs that have not yet been accepted by the card brands.  Such a letter would document any potential compliance issues with the ROC by requirement so that your organization’s ROC can take into account any compliance issues with the third party.

There needs to be much more discussion of this issue amongst QSAs, Participating Organizations, the PCI SSC and the card brands.  Until those discussions occur, however, I would suggest we operate in the interim along the lines above.

