Archive for May, 2009

28 May 09

The Shortcomings Of Wireless Scanning

I’m probably going to really stir the pot with this and my coming posts, but I think this is an important subject to discuss.  I don’t have all the answers on this topic, but I know that the current approaches I see out there are just not providing the level of security that I think is needed.  So, to paraphrase Bette Davis from ‘All About Eve’, “Fasten your seat belts. It’s going to be a bumpy post.”

PCI DSS requirement 11.1 states:

“Test for the presence of wireless access points by using a wireless analyzer at least quarterly or deploying a wireless IDS/IPS to identify all wireless devices in use.”

The requirement gives you two options: use some sort of wireless analyzer or implement a wireless IDS/IPS solution.  Of course, there is also the third option of meeting this requirement with a compensating control.  In this post, I will discuss the shortcomings of the wireless analyzer approach.

A lot of my clients taking the wireless analyzer approach have someone (usually an internal auditor or IT support person) go out to as many of their facilities as possible with a notebook computer, a wireless network card or the notebook’s built-in wireless adapter, and a free tool like NetStumbler or Kismet.  This person then walks the interior of the facility and the exterior perimeter using the tool to record whatever wireless is discovered, saving the results to a file.  A pretty straightforward process – quick, easy, done.

While this process meets the PCI compliance requirement, it certainly does not ensure security or that there are no unauthorized wireless devices on the network.  This is because, in most instances, the results are never analyzed to ensure that only authorized wireless was discovered.  However, even if an organization were to analyze the results produced by NetStumbler or Kismet, it would be hard-pressed to draw any conclusions from them, since those results really have to be analyzed in real time, not after the fact.

Besides the fact that results are not analyzed, I seriously doubt most of my clients have the technical expertise to conduct an informed analysis of the output of a wireless scanner like NetStumbler or Kismet.  And, to add insult to injury, the test for 11.1.a states, “Verify that a wireless analyzer is used at least quarterly …”  Nowhere does the PCI DSS state that you must analyze the results of the analyzer; you just need to use a wireless analyzer quarterly.  The end result is that most people, even those in the information security profession, and the organization’s management believe that this is sufficient to ensure the security of their networks.  In my opinion, this is a VERY false sense of security.
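
To make the quarterly scan data actually mean something, the results need to be compared against an inventory of the wireless you know you deployed.  Here is a minimal sketch of that comparison in Python; the authorized BSSID list, the CSV file name and its column names are all hypothetical, since every scanning tool has its own export format:

```python
import csv

# Hypothetical inventory of authorized access points (BSSID -> description).
AUTHORIZED_APS = {
    "00:11:22:33:44:55": "Store 101 - back office",
    "66:77:88:99:aa:bb": "Store 101 - sales floor",
}

def find_unknown_aps(scan_csv_path):
    """Compare a scan export against the authorized AP inventory.

    Assumes a CSV export with 'bssid', 'ssid' and 'signal' columns;
    real tools (Kismet, NetStumbler, etc.) use their own export formats.
    """
    unknown = []
    with open(scan_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            bssid = row["bssid"].strip().lower()
            if bssid not in AUTHORIZED_APS:
                unknown.append((bssid, row.get("ssid", ""), row.get("signal", "")))
    return unknown

if __name__ == "__main__":
    for bssid, ssid, signal in find_unknown_aps("quarterly_scan.csv"):
        print(f"UNKNOWN AP: {bssid}  SSID={ssid!r}  signal={signal}")
```

Even something this crude forces the question the PCI DSS never explicitly asks: was anything found that should not be there?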

So, what do I see as the shortcomings of just scanning with a notebook, NetStumbler/Kismet, etc.?

  • The majority of wireless scanning is done using an omni-directional antenna.  Most wireless cards use built-in antennas, and those antennas are omni-directional, meaning they can receive signals from any direction.  Many external antennas are omni-directional as well.  The problem is that an omni-directional antenna does not provide the best method of locating potential rogue access points, since it is difficult to determine the location of an access point based on the direction of its signal.  It takes a significant amount of walking around and detailed monitoring of signal strength to get a fix on a given access point (a rough distance-estimation sketch follows this list).  It is not that it cannot be done; it is potentially a lot of work, which makes it difficult for all but the most experienced operators of wireless scanners.  As a result, it can take a significant amount of time to locate all of the wireless access points in a facility and prove that they are all valid.
  • This wireless scanning approach assumes the attacker wants to be found or is unaware of wireless security techniques.  One of the things that fascinates me about wireless scanning is that it assumes that someone wants the access point to be found.  A smart attacker would configure their rogue access point so that it is electronically ‘hidden’ on your network (I’m being purposely vague here to avoid giving away the entire store, but be assured this can be accomplished).  Not that an AP configured this way cannot be found, but finding it will be extremely difficult using the basic scanning techniques I’m talking about.  As a result, with the right attacker, you will be compromised until you take your approach to a higher level.
  • If you identify a rogue access point, then what?  Obviously, you want to remove it from your network as soon as possible.  However, most retailers I work with would be hard pressed to get this done as quickly as they would like because of a lack of qualified personnel in the field who can locate the rogue unit and then remove it.  As I stated earlier, it is difficult to find a properly configured rogue access point, so the likelihood that you will even identify such a device is low.
  • Then there is the whole problem of what happens after you have been hit once: what will stop the attacker from coming back?  With access points on eBay and the like going for as little as $5 including shipping, it is highly likely that if you find an attacker’s access point, they can absorb the loss and quickly replace it.
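
To illustrate the first bullet above: locating an access point with an omni-directional antenna generally means taking repeated signal-strength readings and applying something like a log-distance path loss model, since a single reading only gives you a radius, not a direction.  The sketch below is exactly that, a rough illustration; the assumed transmit power and path loss exponent vary wildly indoors and would have to be tuned on site:

```python
def estimated_distance_m(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=3.0):
    """Rough distance estimate from RSSI using a log-distance path loss model.

    tx_power_dbm is the expected RSSI at one meter from the AP, and
    path_loss_exponent depends on the environment (about 2 in free space,
    roughly 3-4 indoors); both are assumptions unless measured on site.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# With an omni-directional antenna a single reading only yields a radius, not
# a direction, which is why several readings from different spots are needed
# before an operator can even begin to narrow down a rogue AP's location.
for rssi in (-50, -65, -80):
    print(f"RSSI {rssi} dBm -> roughly {estimated_distance_m(rssi):.1f} m")
```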

I’ve taken enough of everyone’s time explaining where I think the wireless analyzer approach falls short.  Coming next are my thoughts on the wireless IDS/IPS approach.

25 May 09

When A Business Fails, Where Does The Data Go?

When Circuit City went out of business recently, where did all of their data go?

I have seen a couple of articles lately on this and thought it would make a good discussion topic in light of PCI compliance and the fact that this topic is not discussed by the PCI DSS.  Given the economic contraction we are in, this will likely become a big point of discussion for the PCI SSC and the card brands.  I have also been part of a couple of going out of business shutdowns, so I can give you some first hand experience of what can happen.

In the case of Circuit City, Systemax, Inc., owner of the TigerDirect and CompUSA brands, purchased the online assets of Circuit City and the right to use the Circuit City logos, trademarks and other intellectual property.  However, the Circuit City online presence was only a portion of the company’s total automated presence.  What happened to all of the transaction data from Circuit City’s brick and mortar stores?  From the news releases, it is unclear what happened to the data generated by the stores, so time will tell if it was handled properly.

From my own experience, what happens to an organization’s data when it ceases to be an organization can be haphazard at best.  The reason is that many of the key people who know where all of the data resides have usually left by the time the liquidation team arrives.  For most organizations of reasonable size, documentation is usually available, but the detail necessary to point out non-obvious data locations may not be in any of it.  The gaps in documentation are not deliberate or a matter of job security; they typically occur because people forget details unless they are prompted.  This is why professional documentation analysts can be invaluable; they are trained to dig out this level of detail.  Unfortunately, most organizations cannot afford that cost and, as a result, the documentation does not contain all relevant details.

While data obviously resides on servers and data storage systems, organizations can have off-site storage as well as numerous other locations where data can be stored.  I had an organization that had data stored at three different off-site storage vendors.  The reason was that they had had four different CFOs in the last three years, and three of those individuals changed off-site storage vendors for reasons of cost and level of comfort with the vendor.  By the time I got there, I was only able to find two of those vendors, as the organization was still transitioning to the newest one.  I only learned of the third vendor because I happened to run into a former employee who mentioned it during our conversation.  Had it not been for that fluke of a run-in, I would never have known about the third vendor.

Then there is the documentation related to the applications that store data.  It is difficult enough when the business is running to get people to determine which applications store PCI in-scope data, let alone other personally identifiable information (PII).  It is 100 times more difficult during a wind-down to locate the important data and ensure that it is handled properly.  Even when a business is shutting its doors, there is certain data that still needs to be retained for historical or customer service purposes.  While you do the best you can to get it all, I guarantee you will miss something.
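
One practical way to get ahead of this problem before (or during) a wind-down is to sweep file stores for anything that looks like a primary account number.  The following is a minimal sketch, assuming plain-text files; it pairs a loose regex with a Luhn check to cut down on false positives, and the directory path is just a placeholder:

```python
import os
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_for_pans(root):
    """Walk a directory tree and report files containing Luhn-valid numbers."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for match in CARD_PATTERN.findall(text):
                digits = re.sub(r"\D", "", match)
                if 13 <= len(digits) <= 16 and luhn_valid(digits):
                    yield path, digits[-4:]
                    break  # one hit is enough to flag the file for review

if __name__ == "__main__":
    for path, last4 in scan_for_pans("/data/legacy_apps"):
        print(f"Possible PAN (ending {last4}) in {path}")
```

A sweep like this will never find everything (databases, proprietary formats and backups need their own treatment), but it is far better than relying on memory and incomplete documentation.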

And retaining data is not just about doing backups.  You also need to capture the operating system, system software such as any RDBMS, and the application software.  After all, if you only have the data, how do you make heads or tails of it if you cannot restore the application?  You need the application and its operating environment in order to get at the data intelligently.  However, because of hardware changes, your ability to recover may be severely limited or may become impossible.

Once the data that needs to be retained has been captured and backed up, it is time to properly get rid of the rest by ensuring that data that can be destroyed is properly destroyed.  Just going through and deleting files is not good enough.  If the hard drives will not be sold, or if they fail after being powered down, then they should be physically destroyed.  If the hard drives will be sold for reuse, you need to follow the Department of Defense National Industrial Security Program Operating Manual (NISPOM), DoD 5220.22-M, which states:

“Overwriting is a software procedure that replaces the data previously stored on magnetic storage media with a predefined set of meaningless data.  Overwriting is an acceptable method for clearing.  Only approved overwriting software that is compatible with the specific hardware intended for overwriting will be used.  Use of such software will be coordinated in advance with the Customer.  The success of the overwrite procedure will be verified through random sampling of the overwritten media.  The effectiveness of the overwrite procedure may be reduced by several factors: ineffectiveness of the overwrite procedures, equipment failure (e.g., misalignment of read/write heads), or inability to overwrite bad sectors or tracks or information in inter-record gaps.  To clear magnetic disks, overwrite all locations three (3) times (first time with a character, second time with its complement, and the third time with a random character). Items which have been cleared must remain at the previous level of classification and remain in a secure, controlled environment.”

There are all sorts of free programs available for all platforms that perform a NISPOM-compliant disk wipe, so there is no excuse for not properly wiping the drives before you sell them off.  While hard drives can be degaussed, I have found that the degaussing process can sometimes cause the controller board or other electronics of the hard drive to fail.  So, if you are reselling the hard drives, I recommend using a DoD-compliant wiping program to ensure that the hard drive is still in working order after the data is destroyed.
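
The three-pass pattern described in the quoted standard is simple to illustrate.  The sketch below writes a fixed byte, its complement and then random data over an ordinary file; an actual pre-resale wipe has to run against the raw device with a purpose-built tool and verify a random sample afterward, which this deliberately does not attempt:

```python
import os

def three_pass_overwrite(path, chunk_size=1024 * 1024):
    """Overwrite a file in place: a character, its complement, then random bytes.

    This only illustrates the NISPOM three-pass pattern; wiping a drive before
    resale means overwriting the raw device with a purpose-built tool and then
    verifying a random sample of the overwritten media, not overwriting files.
    """
    size = os.path.getsize(path)
    for pass_bytes in (b"\x55", b"\xaa", None):  # 0x55, its complement 0xAA, then random
        with open(path, "r+b") as f:
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                data = os.urandom(n) if pass_bytes is None else pass_bytes * n
                f.write(data)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())
```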

For magnetic tape, you need to either degauss or destroy the tapes.  If the tapes will not be resold, then they should be physically destroyed.  If they will be resold, I highly recommend hiring a professional company to degauss the tapes before you resell them.  Degaussing is not just running the tapes past a magnet a couple of times.  It requires professional degaussing equipment that costs tens of thousands of dollars to ensure that the magnetic field is strong enough to wipe the bits on all recording surfaces.

23 May 09

It Is All About Compliance, Not Security

There was a posting made recently on the SPSP Forum stating, “The PCI standards are all about compliance, not about security.”  This is a complaint I hear often, not just about the PCI DSS, but about other standards as well.  However, I would argue that compliance with the PCI DSS has everything to do with security.

I think people confuse compliance with absolute assurance that activities are performed.  Compliance is defined by Webster’s Dictionary as “conforming to a regulation.”  In this example, the PCI DSS is the regulation and the card brands are requiring your organization to conform to the PCI DSS.  Compliance does not imply assurance, which is defined as “a guarantee.”  As I continue to tell you, security is not perfect, so absolute assurance is never a possibility.

Technical people typically do not have the best relationship with executive management, so their recommendations for better security typically get ignored.  As a result, security does not have as high a profile as it should in many organizations.  In addition, in many cases, technical people are great at configuring the technology, but fail to monitor and follow up on changes and issues with the technology in a timely manner.  How many times have you heard the statement, “Yeah, we get lots of those alerts.  But we know they are all false positives, so we just ignore them.”  What that says is that rather than fix the problem, we just ignore it.  It is highly likely that amongst all of those alerts a real alert is being ignored, and that may be the difference between being secure and being breached.  It is these sorts of issues that an assessment brings to the surface.

I also think the people that throw out the “compliance is not about security” statement are threatened by the “witch hunt” that some assessors and auditors conduct.  Security people may also view the assessment as a threat to their expertise and judgment.  In some cases, they may fear that they will be shown to be not as competent as they are currently perceived.  Any assessment should never be a “witch hunt,” although, unfortunately, a lot of executives and their reports see it that way and, in some cases, executives use it in that way.  An assessment should be an honest appraisal of an organization’s security policies, standards and procedures and how well they are implemented and followed.  We are all human, so an appraisal will always find areas where policies, standards and procedures are not operating properly.  It is those areas where improvements need to be made to better ensure that security holes do not develop or are closed.  Assessments should be an opportunity to improve the organization.

I think many of the people making these excuses are looking for a standard that dictates to them exactly what they need to do.  That way it is the standard’s fault, not theirs, when a breach occurs.  Many people complain that the PCI DSS is not prescriptive enough.  I say to these people: if the PCI DSS told you that only Cisco ASA or CheckPoint firewalls were allowed, you would then complain that the standard was not flexible enough.  All security standards were developed to allow for the use of a variety of security technologies and solutions because what works for one organization may not work for another.  One solution does not fit all and never will.

The requirements in the PCI DSS are a collection of security best practices.  How would complying with the PCI DSS not be about security?  In order to comply, you need to implement and maintain proper security.  In the end, I think people that make such statements are making excuses for their own shortcomings rather than owning up to them and doing their best to correct them.

19 May 09

‘Defense In Depth’ And The Control Triad

A lot has been written recently about the fact that the PCI DSS relies on the concept of ‘defense in depth’.  However, very little has been written about how ‘defense in depth’ is actually implemented by the PCI DSS.

Before going into the PCI side of defense in depth, let us discuss the concept of defense in depth.  Defense in depth relies on the triad of preventative, detective and corrective controls.

  • Preventative controls are those controls put in place to prevent, as best possible, a security incident.
  • As I have repeatedly pointed out, security is not perfect.  Detective controls are designed to back up preventative controls by detecting incidents that may occur due to a shortcoming in or failure of the preventative control(s).
  • Corrective controls are those controls that back up both the preventative and detective controls by providing feedback to ensure that any lapses in those controls are corrected or those controls are redesigned to better prevent or detect their respective conditions.

With that as background, let us discuss an example of the preventative, detective and corrective controls that are part of the PCI DSS.

One of the primary preventative controls in the PCI DSS is network segmentation.  Properly implemented, network segmentation physically or logically separates PCI network traffic from all other network traffic.  By separating the two, you minimize the potential that PCI traffic is corrupted by non-PCI traffic.  Network segmentation comes in many forms.  It can be the firewall rules implemented to segregate PCI network traffic between the Internet and the DMZ, between the internal network and the DMZ, or between the general internal network and the PCI internal network.  It can also be the VLANs implemented across your internal network to segregate PCI traffic from all other internal network traffic.
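
To make the ‘properly implemented’ part concrete, here is a rough sketch of the kind of check a reviewer might run against an extracted rule set: does anything permit traffic from the general internal network into the PCI segment?  The subnets and the simplified rule format are invented for the example and do not come from any real device:

```python
import ipaddress

# Hypothetical addressing: the PCI cardholder data environment and the
# general corporate network live in separate subnets (or VLANs).
PCI_SEGMENT = ipaddress.ip_network("10.10.50.0/24")

# Simplified firewall rules: (source network, destination network, action).
rules = [
    ("10.10.20.0/24", "10.10.50.0/24", "deny"),     # corporate -> PCI blocked
    ("10.10.50.0/24", "192.168.1.10/32", "permit"), # PCI -> payment switch
    ("10.10.0.0/16", "10.10.50.0/24", "permit"),    # overly broad: a finding
]

def rules_breaking_segmentation(rule_list):
    """Flag permit rules that allow non-PCI traffic into the PCI segment."""
    findings = []
    for src, dst, action in rule_list:
        src_net = ipaddress.ip_network(src)
        dst_net = ipaddress.ip_network(dst)
        if (action == "permit"
                and dst_net.subnet_of(PCI_SEGMENT)
                and not src_net.subnet_of(PCI_SEGMENT)):
            findings.append((src, dst))
    return findings

for src, dst in rules_breaking_segmentation(rules):
    print(f"Segmentation gap: {src} is permitted into {dst}")
```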

On the detective side, logging is probably one of the biggest detective controls.  Every device can generate a log and, if properly configured, logging can provide a plethora of information regarding an organization’s network.  However, in order to detect an incident, log information must be reviewed and analyzed.  Not only must it be reviewed, but with today’s sophisticated attacks, the log information from one device must also be correlated with the log information from other devices.  This requires a security information and event management (SIEM) system that centrally collects all log information from all devices and then conducts real-time or near real-time analysis of that information, looking for indications of potentially anomalous behavior.  When potentially anomalous behavior is detected, the SIEM alerts the appropriate personnel to investigate further.
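
A SIEM does this correlation at scale, but the core idea fits in a few lines: normalize events from multiple devices and raise an alert when related low-level events cluster across more than one of them.  The event format, window and threshold below are invented purely for illustration:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized events gathered from several devices' logs:
# (timestamp, source_ip, device, event_type)
events = [
    (datetime(2009, 5, 19, 2, 11), "203.0.113.7", "vpn-gw", "auth_failure"),
    (datetime(2009, 5, 19, 2, 12), "203.0.113.7", "dc01", "auth_failure"),
    (datetime(2009, 5, 19, 2, 13), "203.0.113.7", "pos-app", "auth_failure"),
    (datetime(2009, 5, 19, 9, 5), "10.10.20.14", "dc01", "auth_failure"),
]

def correlate_failures(event_list, window=timedelta(minutes=10), threshold=3):
    """Alert when one source racks up authentication failures on multiple
    devices within a short window - the kind of pattern a single device's
    log review would miss but cross-device correlation catches."""
    by_source = defaultdict(list)
    for ts, src, device, etype in event_list:
        if etype == "auth_failure":
            by_source[src].append((ts, device))
    alerts = []
    for src, hits in by_source.items():
        hits.sort()
        for i in range(len(hits)):
            in_window = [h for h in hits[i:] if h[0] - hits[i][0] <= window]
            if len(in_window) >= threshold and len({d for _, d in in_window}) > 1:
                alerts.append((src, [d for _, d in in_window]))
                break
    return alerts

for src, devices in correlate_failures(events):
    print(f"ALERT: {src} failing authentication across {devices}")
```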

Detective controls can also be corrective controls, and the analysis of log information is an example of just such a control.  That is because the analysis of log information typically results in corrective actions to address the anomalous condition that is detected.

The Report On Compliance or Self-Assessment Questionnaire process is also an example of both detective and corrective controls.  Either process asks your organization to examine the triad of controls and detect any shortcomings in how these controls function.  If the controls are not functioning properly, the process identifies those controls that need to be changed and allows for the development of plans to correct those controls.

An incident occurs when the triad breaks down.  The preventative control is not properly designed to prevent all likely incidents.  A detective control does not detect all possible flaws in the preventative control(s).  And the most common offense?  When issues are identified with preventative or detective controls, action is not taken to correct the shortcomings of the preventative or detective controls.

Trust me; regardless of how good your organization is at following its policies, standards and procedures, you will have areas where the control triad does not function properly.  A good organization seeks out and identifies control issues and addresses them as soon as they are identified.  A really good organization does not turn its efforts to seek out and identify failings into a ‘witch hunt’, because it recognizes that people are only human and always need to improve.

I could go on and on and on regarding the control triad and how it applies to the PCI DSS requirements.  However, I think you get the general idea.  So, the next time you are complaining about why a particular requirement exists, think about what it prevents, detects and/or corrects.  All of the PCI DSS requirements are there for a reason and provide cover for a significant number of the other requirements.  And that is what ‘defense in depth’ is all about.

17 May 09

Is “End-To-End Encryption” Realistic? Part 3

Hopefully by this point I have pointed out that encryption, end-to-end or otherwise, is not a silver bullet.  It is just another tool to minimize the risk of data loss.  But why has it become the topic du jour?  That is what I hope to examine in this post.

There is the issue of end-to-end encryption even being feasible.  As I pointed out in my last post, while it is feasible, it may not be as secure as Mr. Carr and others desire.  In some cases, it may not be implementable given the technology in use by merchants today.  Merchants live on very thin margins, even Target and Wal-Mart, so the investment required to make changes may put some merchants out of business.  In today’s economic climate, the resulting loss of jobs would far outweigh the monetary losses from fraud.  Until the economy picks up, merchants will likely fight to minimize any expense to change their systems and networks.

Speaking of monetary losses: based on the latest statistics I could find, 7.5% of Americans (almost 23 million people) have suffered from financial fraud.  While that is a fairly large number of people, total monetary losses to fraud versus total credit card charges are still well below 1%.  Until that percentage gets higher, we will likely see the card brands and merchants accept this loss as a cost of doing business.

The fact that the US House of Representatives looked at this issue in the Committee on Homeland Security speaks volumes.  There is an assumption that card fraud funds terrorism since the bulk of fraud is now committed by criminal organizations.  I do not discount the possibility that some of this fraud money flows to terrorists, but the amount is likely so small that it is inconsequential.  Then there is the fact that Internet access in known terrorist countries and the number of attacks coming from those countries just do not support the conclusion that fraud funds terrorism.  Granted, a lot of attacks and fraud are conducted by surrogates on behalf of others.  However, based on everything I have read, there has been no correlation established between the attackers and terrorists.  Until that correlation can be shown, this is just a smoke screen in my book.

In her statement during the House hearings, Representative Yvette Clarke (D-NY) held out Chip and PIN as one of the keys to securing credit card transactions.  As I pointed out in my Chip and PIN post, this technology is not a silver bullet.  In fact, it has its own security issues, the largest being that the encryption it offers is weak at best.

Unfortunately, I think this issue is being discussed because the people discussing it believe that encryption solves the data breach problem.  If properly implemented, encryption will reduce the risk of successful data breaches, but it will not entirely get rid of them.  It will just make them more difficult to execute.  After all, banks and art museums still are robbed even with all of the security measures they have implemented.  What makes anyone think that data breaches will stop because of encryption?  That is the point, it will not.  Data breaches will continue to occur with or without encryption.  It is how successful those breaches are that will change.

16 May 09

Is “End-To-End Encryption” Realistic? Part 2

Let us examine what Robert Carr, CEO of Heartland, possibly means by ‘end-to-end encryption’.  In the Heartland press release it says, “For the past year, Carr has been a strong advocate for industry adoption of end-to-end encryption – which protects data at rest as well as data in motion – as an improved and safer standard of payments security.”

One of the keys to defining end-to-end encryption is the fact that Mr. Carr refers to it as protecting data at rest as well as data in motion.  As a former telecommunications and networking person, ‘end-to-end’ to me means from the initial point of contact with the network to the point on the network where the transmission terminates.  However, Mr. Carr is implying that he also includes the point at which the cardholder data is retrieved from its source (i.e., the card, smartphone, etc.) and the point at which it is ultimately stored.  Therefore, it is from this definition that we will work.  Technically, what Mr. Carr describes seems possible.  However, there are obstacles and limitations in the technology in place today that will make this re-engineering difficult, and possibly impossible, at least for the immediate future.

Getting true end-to-end encryption requires that the credit card itself also be encrypted.  Guess what?  We have that technology today in the Chip and PIN credit card.  If you remember from my post on Chip and PIN, this is not a technology without its flaws.  As I pointed out in that post, the chip on the Chip and PIN card is encrypted using either DES, 3DES, RSA or SHA.  Since DES is no longer considered a secure encryption method, the card brands should no longer recommend its use.  The larger problem with Chip and PIN encryption is that the encryption key is a four digit number (the PIN), which does not create a very secure cipher.  Essentially, we are talking about something in the neighborhood of 13-bit encryption versus the more robust 128-bit or better encryption required by the PCI DSS.  As a result, regardless of the encryption method used, it is the weak key (PIN) that creates weak encryption at the card.  If Mr. Carr thinks that he has part of his end-to-end solution in Chip and PIN, he needs to think again.
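
For anyone wondering where the ‘13-bit’ figure comes from, it is simply the size of the PIN space: a four-digit PIN allows only 10,000 possible values, and log2(10,000) is roughly 13.3 bits, versus the 2^128 values of a 128-bit key:

```python
import math

pin_space = 10 ** 4                  # a four-digit PIN has 10,000 possible values
print(f"4-digit PIN: {pin_space:,} possible keys = {math.log2(pin_space):.1f} bits")
print(f"128-bit key: {2 ** 128:,} possible keys")
```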

Then we have the encryption from the swiping device to the processor for approval or decline of the charge.  Now, a number of you may be saying, what about the POS system?  Hold that thought and I will discuss it in a little bit.  Let us talk about stand-alone terminals that may or may not be integrated into a POS system.  In order to get end-to-end encryption, the encryption must occur from the terminal that accepts the card all the way through to the ultimate storage location of the transaction.  The good news here is that the terminal is typically capable of using the latest encryption algorithms, so the transmission from the terminal to the processor can be properly secured.  However, the problem with encryption from the terminal to the processor is that this technology currently is an encrypted tunnel.  This means that any network devices between the terminal and the processor are unable to act on the message because it is contained in an encrypted tunnel.  For true stand-alone terminals, this is not a problem.  For terminals integrated with a POS solution, implementation of end-to-end encryption requires a separate connection from the terminal to the POS that transmits the approval/decline code and transaction amount back to the POS.  All of this is available today from some POS solutions.

However, end-to-end encryption gets trickier when it is provided as part of an integrated POS solution.  This is because these solutions typically integrate the terminal with the POS hardware and software, so we are relying on that hardware and software to provide our security.  Since most of today’s POS solutions are based on some form of Microsoft Windows, security can be haphazard at best depending on how Windows has been implemented.  All that is required to compromise such a solution is a piece of malware that positions itself between the reading and decryption of the credit card and the application that processes the transaction.  Based on what has been published, this appears to be exactly what happened in the Hannaford breach.  Therefore, the POS solution must be rigorously hardened and sufficiently monitored to ensure that it is not compromised or, if it is compromised, that an alert is generated almost immediately to notify management so the compromise can be addressed as soon as possible.  All of this technology exists today in the form of anti-virus, anti-malware and critical file monitoring solutions.  However, additional controls may also be needed to ensure that POS ghost images are not tampered with (use of hashing and periodic examination of images to ensure they hash properly) and that critical file monitoring is actually monitoring the correct files.
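
Critical file monitoring and the image hashing mentioned above both boil down to the same mechanism: hash the files you care about, keep the baseline somewhere the POS itself cannot alter, and alert when a recomputed hash differs.  A minimal sketch, with a purely hypothetical file list:

```python
import hashlib
import json

# Hypothetical list of critical POS files to watch.
CRITICAL_FILES = [
    r"C:\POS\pos_app.exe",
    r"C:\POS\card_reader.dll",
    r"C:\POS\config\terminal.cfg",
]

def sha256_of(path):
    """Hash a file in chunks so large binaries do not blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(baseline_path="baseline.json"):
    """Record known-good hashes; keep the baseline off the POS itself."""
    baseline = {p: sha256_of(p) for p in CRITICAL_FILES}
    with open(baseline_path, "w") as f:
        json.dump(baseline, f, indent=2)

def check_against_baseline(baseline_path="baseline.json"):
    """Return files whose current hash no longer matches the baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    return [p for p, digest in baseline.items() if sha256_of(p) != digest]

# In practice the check runs on a schedule and feeds an alerting system so a
# tampered POS image or file is caught quickly, as described above.
```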

Those of you thinking of using Kerberos had better think again in regards to end-to-end encryption.  Kerberos encrypts between devices and/or applications.  Therefore, Kerberos just ensures encryption from the POS application/device to the next application/device it connects to, most likely just another application/device on your own network.  So the threat of malware still exists; it may just be spread further out across other devices/applications.  The other problem with Kerberos is interoperability outside of your own environment, in this case with your processor.  While Kerberos supports such capability, in my experience, very few organizations get Kerberos implemented properly on their own networks, let alone working properly with an outside network.

Going back to encryption between the terminal or the POS system and the processor: I brought up the fact that current network encryption solutions (IPSec, PPTP, L2TP, etc.) create a tunnel that network devices between the two endpoints do not have access to.  Obviously, for those of you using MPLS (multiprotocol label switching), this is an issue because your traffic cannot be rerouted by MPLS if it is tunneled.  Supposedly, this is being addressed by an IEEE committee developing an encrypted tunnel that only encrypts packet payloads and leaves the information necessary for MPLS to operate in clear text.  When that standard will be released is anyone’s guess, but until it is, MPLS networks remain a potential problem for encrypted tunnels.

As you can see, end-to-end encryption is feasible, but its true value may not be what Mr. Carr believes or has been led to believe.

Next up: my final post on end-to-end encryption.

14 May 09

QSA Liability

There is an article in Digital Transactions News this week that set me off, so I felt obligated to give the counterpoint view.  Please pardon my rant.

That article states, “One issue Avivah Litan says has been raised by the RBS WorldPay and Heartland breaches—and others she says that haven’t been publicly disclosed—involves the responsibilities of the so-called qualified security assessors (QSAs) that do PCI assessments for processors and merchants. Assessors typically structure their contracts so that they bear little or no liability if a client is breached, she says. “

As an employee of a public accounting firm, you bet we limit our liability.  Here is why.

First, I would like to get a hold of the reporter that wrote this article and correct their use of the term “so-called.”  We are not “so-called” QSAs, we ARE QSAs.  We are certified by the PCI SSC and deserve to be referred to as QSAs.  Are there QSAs that are not doing the proper testing for PCI compliance?  Yes, I am sure there are, but you have that problem with any profession, so move on.  Besides, the PCI SSC’s QA program should correct this situation.

Ms. Litan needs to remember that the PCI assessment, for the most part, is an assessment of a specific part of an organization’s control environment at a given point in time.  The exception is for items such as quarterly vulnerability and penetration testing, but even those items only cover four distinct points in time.  As a result, there are significant gaps in the testing of controls that no lawyer, accountant or other sane individual would accept liability for.

Then there is the human factor.  How are QSACs responsible for a client’s employee making an improper change to a firewall or a consultant getting CHD for a financial analysis from the CFO anytime after the assessment is completed?  In order for that to happen, the QSAC would have to have QSAs on site, at all times, monitoring all work relevant to the PCI DSS requirements 24/7.  That is not cost effective and is not going to ever happen.

What a QSAC is responsible for is not properly conducting the testing of the PCI DSS requirements and then reporting a company is PCI DSS compliant when, in fact, the company is not compliant.  That is negligence, and I can tell you that my firm’s contracts do not protect my firm from negligence.

Just had to get this off my chest.

10 May 09

Is “End-To-End Encryption” Realistic? Part 1

A January 26 Digital Transactions News article discusses Robert Carr’s call for end-to-end encryption.  The Heartland press release that drove all of the media stories around that time states, “For the past year, Carr has been a strong advocate for industry adoption of end-to-end encryption – which protects data at rest as well as data in motion – as an improved and safer standard of payments security. While he believes this technology does not wholly exist on any payments platform today, Heartland has been working to develop this solution and is more committed than ever to deploying it as quickly as possible.”  Mr. Carr’s call for encryption appears to have been unknown to the public, as the earliest reference to encryption a Google search could uncover was in Heartland’s 2008 annual report.  Given that Heartland’s fiscal year ends on December 31, it is likely the statement on encryption in the annual report was written about the same time as the press release.

In response to Mr. Carr’s call for end-to-end encryption, Kevin Nixon argues in a March 24 article that end-to-end encryption is “bad medicine.”  So, what is meant by end-to-end encryption, the “one brief point” where cardholder data is exposed and why is this discussion creating such a stir?

In this post, I would like to discuss the “one brief point” comment.  Based on my experience with credit card processing, depending on where you are in the process, there can be more than just one “brief point.”  In fact, “brief” may also be a misnomer.  Depending on the process, there could be numerous points where cardholder data is exposed.

Let us look at this from the merchant’s perspective.  For merchants using dial-up terminals, there is truly only one potential “brief point” of exposure between the terminal and the connection with the processor.  However, this exposure requires that the dial-up line be wire tapped and the electronic transmissions be recorded.  Given that a merchant with dial-up does not have a high volume of credit card transactions and that wire-tapping is a very serious federal felony, the risk of this occurring is low as the payback is also very low.

For merchants that use an integrated POS, the first exposure point is between the credit card terminal and the POS solution.  This point exists because of the multitude of terminals that could possibly be connected to the POS.  Since POS vendors do not always have the resources to develop interfaces for every possible credit card terminal, they develop interfaces to the most popular terminals following the 80/20 rule.  This compatibility issue has become less of a problem over the years as terminal vendors adopted USB and other standard connectivity solutions to connect terminals to the POS.  However, most POS vendors have made the assumption that the connection between the terminal and the POS is secure, which may or may not be true.

Then there is the connection between the integrated POS and the processor.  For large merchants that perform their own transaction switching, the connection between the POS and the internal switch can sometimes be unencrypted and that can be another point of exposure.  Interestingly enough, the connection between the merchant and the processor can also be an exposure point.  I cannot tell you the number of large processors that even up to a year ago were still struggling with getting connections to merchants encrypted.  And if a merchant’s connection to their processor is a private circuit, it is likely not encrypted because the PCI DSS does not require it.

Another exposure point is settlement.  While settlement is usually conducted over private circuits, it is typically done using FTP to transfer files between the merchant and the processor.  Some processors have moved to transferring settlement files via secure FTP, but have not necessarily moved all of their merchants to secure FTP.  As a result, there is potential risk in the fact that standard FTP is used during the settlement process.
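
Moving a settlement batch off plain FTP is not a heavy lift.  ‘Secure FTP’ can mean FTPS or SFTP depending on the processor; the sketch below shows the FTPS flavor using only the Python standard library, with the host name, credentials and file name as placeholders:

```python
from ftplib import FTP_TLS

def upload_settlement_file(local_path="settlement_batch.dat"):
    """Upload a settlement batch over FTPS so that credentials and file
    contents are encrypted in transit, unlike plain FTP."""
    ftps = FTP_TLS("processor.example.com")      # placeholder host name
    ftps.login("merchant_id", "password")        # placeholder credentials
    ftps.prot_p()                                # encrypt the data channel, not just the login
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {local_path}", f)
    ftps.quit()
```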

These are just the obvious risks to cardholder data security.  Based on all of the custom solutions implemented by merchants and processors, there are unique risks present throughout the cardholder data flow.  As a result, each instance presents its own unique challenges to provide adequate security.  This is why securing cardholder data can be daunting.

In my next post, I will examine the definition of ‘end-to-end encryption’.

09 May 09

Changes Coming To A QSA Near You

As I stated in an earlier post, I had my recertification training back around the first part of April in Chicago.  One of the outcomes of this training was identification of a number of work program areas that are being significantly changed or expanded under v1.2 of the PCI DSS.  The purpose of this post is to give all my readers a ‘heads up’ on these changes as they have the potential for significantly increasing the amount of time it takes to get your Report On Compliance completed and, hence, likely increase the cost.

There is a reason for these changes to the procedures.  The PCI SSC is trying to improve consistency between QSAs as well as to obtain more accurate and reliable results from the information provided by the QSA in the Report On Compliance.  Consistency is probably one of the biggest complaints regarding QSAs, and this is just one of the ways that the PCI SSC is attempting to address the issue.

First, we were told point blank that we will have to document and justify our sampling strategy.  For years we had heard stories of QSAs conducting ridiculously small samples, such as two retail locations out of a 1,500-store retail chain, or one Windows, one Linux and one Unix server out of 2,000 total servers.  That is around 1/10th of one percent and, no matter how good your procedures are or how consistently you maintain configurations, a QSA cannot justify these as good samples.  In addition, some QSAs were sampling the same locations or servers year after year, which has never been allowed, but without the PCI SSC QA program there was no way to know that this sort of procedure was being followed.
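
Documenting and justifying a sample mostly means writing down the population, the selection method and why the size is defensible.  Here is a tiny sketch of a repeatable random selection that also avoids re-picking last year’s locations; the store list, sample fraction and seed are all made up for the example:

```python
import random

stores = [f"store-{n:04d}" for n in range(1, 1501)]   # hypothetical 1,500-store chain
last_year = {"store-0012", "store-0347", "store-0989"}

def pick_sample(population, previously_sampled, fraction=0.05, seed=2009):
    """Select a documented, repeatable sample (seeded so it can be reproduced
    and defended), excluding locations sampled the prior year."""
    eligible = sorted(set(population) - previously_sampled)
    k = max(1, round(len(eligible) * fraction))
    rng = random.Random(seed)
    return sorted(rng.sample(eligible, k))

sample = pick_sample(stores, last_year)
print(f"Sampling {len(sample)} of {len(stores)} stores: {sample[:5]} ...")
```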

One option to keep QSA sampling to a minimum is to get the client’s internal audit function to include PCI coverage in their retail outlet audit program.  Most large retailers conduct periodic on-site internal audits of their retail locations.  These audits are typically conducted on a three-year rotating schedule, so every retail location gets reviewed at least every three years, and potentially more often if issues are identified during a review.  We have added a number of basic PCI procedures to our large merchants’ audit programs to cover those PCI requirements that are relevant out in their retail locations.  We then review the results and conduct our own testing based on a much smaller sample size.  If issues are uncovered in our sampling, we then increase our sample size.

Another area that is being expanded is in the area of infrastructure configuration reviews.  Short of demanding that automated analysis tools be used, we were told that the analysis of firewall, router, switch and other infrastructure device configurations need to be much, much more robust and detailed.  Essentially, QSAs are being held responsible for ensuring that the analysis of the network is complete enough to give reasonable assurance that all of the relevant network security requirements can be answered from the detailed analysis of the devices’ configuration files.  We were also told that using tools such as Cisco’s Output Interpreter is not going to be acceptable.  The PCI SSC and card brands want a very detailed analysis of the network conducted to ensure that the PCI in-scope network is truly secure and controlled.  Based on my understanding of the results they are looking for, there is no way a manual assessment will meet their requirements unless the network is very simplistic.  As a result, you can expect QSAs to start using a variety of network configuration analysis tools such as Secure Passage’s FireMon or Athena Security’s FirePac and Verify.  Depending on the number of devices involved, this sort of analysis can be expensive.  If your network is consistently configured to specific standards, your QSA should be able to pick a sample of devices to obtain an end-to-end analysis of your network and, therefore, be able to get detailed answers.  If your network is not consistent, then this analysis is going to end up being expensive.
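
Tools like FireMon or FirePac do far more than this, but even a crude script makes the point that configuration review has to happen line by line rather than by eyeball.  Below is a rough sketch that flags obviously over-permissive entries in an IOS-style configuration; the patterns are simplified and no substitute for a real analysis tool:

```python
import re

RISKY_PATTERNS = [
    (re.compile(r"permit\s+ip\s+any\s+any", re.I), "permits any host to any host"),
    (re.compile(r"\bpermit\b.*\btelnet\b", re.I), "permits cleartext telnet"),
    (re.compile(r"\bpermit\b.*\beq\s+21\b", re.I), "permits cleartext FTP"),
]

def review_config(config_path):
    """Scan a saved device configuration for obviously risky access-list lines."""
    findings = []
    with open(config_path) as f:
        for lineno, line in enumerate(f, 1):
            for pattern, why in RISKY_PATTERNS:
                if pattern.search(line):
                    findings.append((lineno, line.strip(), why))
    return findings

for lineno, line, why in review_config("core-router.cfg"):
    print(f"line {lineno}: {line}  <-- {why}")
```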

We were told of a number of QSAs that are under probation for issues with their Reports On Compliance uncovered during their QA review.  We were also given the grading scale that the PCI SSC uses to assess Reports On Compliance.  This grading scale is essentially the Report On Compliance with a rating and weighting scale for each requirement.  Any QSA company not scoring at least 85% on their reports is placed on probation for at least three months.  While under probation, the QSA company must submit every Report On Compliance to the PCI SSC for review before it can be issued.  Any QSA company scoring less than 75% can be removed as a QSA company by the PCI SSC.  I can tell you that my firm is going through its QA review and we are all on pins and needles.

Finally, there is a change coming to all QSAs’ proposals and engagement/project legal agreements.  For this year’s QA reviews, all Reports On Compliance must be redacted to ensure client confidentiality is maintained.  This can be a very painful process for a QSA company that issues a lot of Reports On Compliance.  The change is to require a statement in all engagement/project legal agreements that allows the PCI SSC access to Reports On Compliance for the purposes of the QA review process.  This will effectively create a non-disclosure agreement between the PCI SSC and the QSA company’s clients.

The times they are a-changing.



