Archive for October, 2013


If You Read Nothing Else About PCI – v1

David Froud has a great blog post out regarding his frustrations with PCI compliance and the industry’s lack of progress since he last did a Report On Compliance (ROC).  I have to say that some organizations have made a lot of progress in this area.  Unfortunately, however, there are still way too many organizations putting more effort into figuring out how to dodge compliance, or pawn it off on someone else, than into actually achieving it.




Atom AMPD And PCI Compliance

Here is a relatively new player in the network security space for small and mid-sized businesses (SMBs).  A friend of mine who does a lot of work with SMBs is encountering this solution more and more.  And is it any wonder, when it is portrayed as a godsend for SMBs?  On the Atom AMPD Web site, they explain their Kwick Key solution.

“Kwick Key™, a USB flash drive, is the delivery device behind the AtomOS.  The Kwick Key is bootable and plugs into nearly all servers.  Kwick Key users experience a significant savings achieved with a high quality “one key” solution for their networking needs.

Simply install the Kwick Key into an internet-connected server, display the web interface, configure the features and you’re done.  The server is transformed into a multi-functional networking and communication device.

The underlying operating system behind the Kwick Key AtomOS is Linux. The content stored on the server is also backed up on the Kwick Key.  Once configured, the Kwick Key can be transferred to new equipment while maintaining its configuration, providing portability in the event of equipment failure.  A redundant option is also available.”

What is wrong with this picture?

If you said, “Too good to be true,” you would be correct.  There are no silver-bullet solutions to security.  However, these sorts of “all in one” security solutions are marketed to SMBs all of the time as a cost-saving way to be secure.  And since SMBs typically do not have any significant IT personnel, they are always looking for ways to reduce IT workload and save money.  However, if you need to be PCI compliant, this is not a solution for your organization.  Why?

If you read the Savings page on their Web site, they state:

“Your current IT infrastructure is likely requiring multiple boxes to serve your network and communication needs.  This likely includes multiple boxes supporting firewalls, content filters, routing and VoIP applications; each requiring individual training, maintenance, and ongoing licensing fees.  The AtomOS provides just one platform, one interface, one operating system. It brings to bear the BEST practices via a convergent technology.  All modules are tied together by our proprietary user interface.”

That “all in one” solution approach violates PCI DSS requirement 2.2.1 which states:

“Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)”

The reason for requirement 2.2.1 is to leverage the concept of “defense in depth”.  Defense in depth relies on multiple layers of defense such that, if one layer develops a vulnerability, the other layers can still provide some security and mitigate the vulnerability until it is fixed.  Under the Atom solution, a vulnerability anywhere potentially puts the whole solution at risk because of one part’s failure.

As a result, being PCI compliant will require you to purchase multiple Kwick Keys.  I would assume that multiple keys will result in costs that negate Atom’s cost advantage over other PCI compliant solutions.

Then go to the solution’s product page for Kwick Key.  Take a look at all of the firewall features that are available.  It looks pretty good until you realize there is one notable feature missing – stateful packet inspection (SPI).  Basically, Atom has implemented port filtering, which comes standard on Linux distributions.  Not that port filtering is inherently insecure, but it does not comply with requirement 1.3.6, which explicitly requires that SPI be implemented.
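To illustrate the difference, here is a hypothetical sketch using standard Linux iptables rules (these are not Atom’s actual rules, and the port number is just an example).  Plain port filtering admits any packet aimed at an allowed port, while stateful inspection uses the kernel’s connection tracking to only admit traffic belonging to connections the firewall has already seen:

```shell
# Hypothetical illustration -- not Atom's actual configuration.

# Plain port filtering: accepts ANY packet addressed to TCP 443,
# with no notion of whether it belongs to a legitimate connection.
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Stateful packet inspection via Linux connection tracking (conntrack):
# only new connections to 443 are accepted, return traffic is allowed
# only if it matches a connection the firewall has already tracked,
# and everything else is dropped by default.
iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP
```

Both rule sets run on stock Linux, which is the point: the stateful capability is there in the kernel, but a product that only exposes port filtering is not delivering SPI.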

There are ways to add SPI to this solution.  However, that will mean you will have to support it yourself and the whole point of the Atom solution is to get out from under supporting such a solution for your organization.

My assumption is that, with an appropriate wireless adapter in the system running Kwick Key, the solution will serve as a wireless access point.  Under requirement 1.2.3, wireless is required to be segregated from an organization’s cardholder data environment (CDE) by a firewall.  Given that the wireless is operating on the same device, it is questionable whether compliance with this requirement could truly be accomplished.
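For reference, the kind of segmentation requirement 1.2.3 contemplates looks something like the following in hypothetical iptables terms (the subnets are illustrative assumptions, not anything from Atom’s documentation).  Note that the rule presumes a firewall sitting between two distinct network segments, which is hard to demonstrate when the wireless radio and the firewall are the same box:

```shell
# Hypothetical sketch of wireless-to-CDE segmentation, assuming the
# wireless segment is 192.168.10.0/24 and the CDE is 10.0.1.0/24.
# Deny all forwarded traffic from wireless into the CDE:
iptables -A FORWARD -s 192.168.10.0/24 -d 10.0.1.0/24 -j DROP
```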

The same concerns with wireless would exist with the virtual private network (VPN) solution.  Having remote access to the internal network running on the same system is not a best practice.  And how secure such a setup would be on this device is questionable.

You need to remember, this is not a purpose built networking device, this is a repurposed computer running Linux.  It is potentially susceptible to any number of Linux-based attacks and vulnerabilities depending on the services running.  And the more services you pile onto this device, the more potential for vulnerabilities.

Then there is the ability to add a voice over IP (VoIP) call manager solution.  Seriously?  What a silly and very dangerous idea.  Why?  VoIP protocols are primarily stateless (i.e., UDP), which means they cannot be protected by today’s firewall technology, which only works with stateful protocols (i.e., TCP).  I have actually had vendors “correct” me on this because VoIP call set up (pick up the handset) and tear down (hang up the handset) are conducted using TCP.  What these folks always miss is that the actual conversation – the bulk of the activity in a telephone call – is conducted over UDP so that the audio can be streamed between the phones in use.  And it is not just one or a few UDP ports that must be open; it is typically a range of thousands of UDP ports that are open to support telephony.  Talk about a target-rich environment.
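As a hypothetical illustration (the port range shown is a common media-stream convention, not anything taken from Atom’s documentation), the kind of rules a combined firewall/VoIP device ends up needing look something like this:

```shell
# Hypothetical illustration of firewall rules for on-box telephony.
# RTP media streams are commonly assigned from a wide UDP range,
# so thousands of stateless UDP ports end up open on the firewall:
iptables -A INPUT -p udp --dport 10000:20000 -j ACCEPT   # RTP media (example range)
iptables -A INPUT -p udp --dport 5060 -j ACCEPT          # SIP signaling (often UDP as well)
```

Every one of those open UDP ports is a port the firewall cannot meaningfully track state on, which is exactly the target-rich environment described above.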

Adding a VoIP call manager on top of your firewall is probably the most dangerous thing an organization could do because VoIP is so easy to attack due to the stateless nature of its protocols.  By implementing VoIP on a firewall you are essentially negating the firewall.  Running VoIP on its own dedicated server on its own dedicated network is the only way VoIP should be configured for security, regardless of any need to be PCI compliant.

Finally, there is no pricing provided for the USB “key”.  I always get concerned about “wonder” solutions that do not provide pricing without contacting the vendor’s sales operation.  Nine times out of ten, all this does is subject potential customers to relentless contact from sales people until they purchase the solution, which is likely overpriced.

This post is not to say that this solution is not appropriate for other organizations.  However, if you need to be PCI compliant, this solution is not for your organization if it is implemented as the vendor describes.


Thoughts From The 2013 PCI Community Meeting

I got lucky in that my new employer allowed me to go to this year’s PCI Community Meeting held in Las Vegas.  It is always nice to hear things first hand versus getting the slide decks, asking the people that attended about certain points in those decks, and finding out that they did not think a question was needed or cannot remember what was said.

Probably the biggest revelation out of this year’s Community Meeting was from the Qualified Security Assessor/Approved Scanning Vendor (QSA/ASV) session the day before the Community Meeting started.  It seems that the Council has finally addressed with v3 the message a lot of Qualified Security Assessor Companies (QSACs) have been putting forth regarding the time it takes to write a Report On Compliance (ROC).  Those of us complaining have argued for years that the ROC writing process occupies too much of a QSA’s time (anywhere from 50% to 70% of a project).  As a result, under today’s Reporting Instructions, QSAs tend to worry more about what to write than making sure their client is actually compliant and advising that client how to better maintain compliance.

The ROC will now have an overall ranking for the entire requirement: a check box that indicates whether the requirement is ‘In Place’, ‘In Place with Compensating Control’, ‘Not Applicable’, ‘Not In Place’ or ‘Not Tested’.  Then, under the requirement, will be the tests for the requirement.  It is with each test that the QSA will provide a list of documents reviewed, a list of people interviewed, observations made and the sampling selected.  QSAs will no longer have to write meaningless but Quality Assurance approved diatribes regarding a test.  All that will be needed is a list of documents, a list of persons interviewed, a list of observations made and a list of samples taken.

Now before some of you go nuts about the check box, the Council has not adopted a check-box assessment approach.  QSAs are still required to conduct their assessments in a process similar to what is conducted today for a PCI DSS v2 assessment.  What the Council is doing is simplifying the report generation and review processes.  The Council finally acknowledged the fact that, other than QSAs and their QA reviewers, practically no one was reading the 300+ page tomes that were being produced – not even the acquiring banks.  What they are doing is making the ROC easier to review and read, as well as lessening the writing requirements, so that QSACs can more readily adopt automated systems to facilitate conducting assessments and generating the ROC.  The idea is that this will then allow QSAs to focus on assisting their clients with PCI compliance related activities and fold into the ‘Business As Usual’ approach that is being rolled out with v3.

The most intriguing category is ‘Not Tested’.  The Council added this category for those situations where a QSA did not test the control.  This most often occurs when a QSA is relying on a Service Provider’s Attestation Of Compliance (AOC) for that control such as with firewall configuration management, network security monitoring, custom application software development or other services.  The Council also indicated that Service Provider AOCs may also be getting modified so that QSAs and others relying on them can determine what PCI requirements the Service Provider’s AOCs actually tested.

Now the bad news about this great advancement forward.  The Council has no idea when the final version of the Reporting Template will be available.  Since v3 of the PCI DSS will not be finalized until November, they do not yet have a timeline on how quickly after the release of v3 that the Reporting Instructions will be released.  As a result, QSACs will likely not be rolling out PCI DSS v3 assessments until the Reporting Template is released.  One reason will be because of the major changes involved in v3 and, probably the larger reason, because none of the QSACs want to be put in remediation by the Council for creating ROCs that do not meet the Council’s Quality Assurance requirements.

On the new requirements front, there was a lot of discussion within the QSA ranks on requirement 6.5.6 regarding protection and secure deletion of device memory when cardholder data (CHD) or sensitive authentication data (SAD) were stored in memory during a transaction.  As the Council has defined it in the QSA/ASV session, the QSA will interview developers to discuss what they are doing to secure memory and properly delete CHD/SAD.  QSAs will also observe that developers are in fact coding for this condition.  While this requirement will bring the memory scraping threat to light, most of the QSAs I talked with agreed that the proposed testing is not going to actually have any significant impact on the memory scraping threat of vSkimmer, BlackPOS and the like.  And for PA-DSS certified software (v3 of the PA-DSS addresses this threat as well), the horse has already left the barn for those applications.  The bottom line is that memory scraping attacks will continue to proliferate until the software vendors address the problem.

I asked a question about PCI DSS v3 requirement 1.3.7 and how it would impact merchants that have their card terminals connected to the Internet for transaction processing.  Requirement 1.3.7 states:

“Place system components that store cardholder data (such as a database) in an internal network zone, segregated from the DMZ and other untrusted networks.

System components that store cardholder data must not be located in a DMZ or be accessible from the Internet or other untrusted networks”

My confusion was over how all of the merchants I encounter with card terminals that directly connect to the Internet would be able to comply with 1.3.7.  After all, in most instances, these card terminals store SAD in memory until the transaction is completed.  The Council representatives admitted that the wording in 1.3.7 was a problem and that they would be looking at rewording it because of this and other similar issues that create a conflict.

Finally, the other big discussion topic was ‘Business As Usual’ or BAU.  For v3 of the PCI DSS, the Council wants organizations to integrate the requirements of the PCI DSS into their day-to-day processes and then periodically test that the integration has occurred and confirm that the integrated controls are working.  At the QSA/ASV session, BAU generated a number of questions as to what QSAs were expected to test to assess BAU.  The Council admitted that BAU is only a suggested ‘best practice’, not a requirement, which placated a lot of the audience.  But if you think about it, if an organization desires to be PCI compliant, BAU should already be occurring.  This is obviously a reminder to organizations that do not ‘get’ security, and an incentive for them to further integrate security practices into their day-to-day processes.

As usual, there was a lot of information sharing between QSAs, ASVs, POs, card brands and the Council at this year’s Community Meeting.  The Council members seemed more relaxed than at the roll out of v2 three years ago.  Hopefully the roll out of v3 will be much less eventful than v2’s was.

