Archive for May, 2011

24 May 11

Should the PCI Guru allow for “Guest” Postings?

I am going to try out the WordPress PollDaddy feature to collect your thoughts on this topic.  I have had a number of requests lately asking for time to post to the PCI Guru blog on PCI topics.  These requests have come from individuals as well as security vendors.  I have some reservations, but I wanted to get the readership to respond on this topic.  So, let us try this out.  The poll will be open for one month.

20 May 11

Social Engineering Gains Ever More Credibility

ignosecond – ig no sec ond – \‘ig nə se kənd\ – The time between the moment one does something inherently stupid and the moment one realizes that it is too late to stop the results of that action.  Examples include pushing a locked car door closed and realizing that the keys are in the ignition, or opening an attachment or clicking on a link in an email message supposedly from a business associate or friend and then recognizing the telltale signs of a phishing scam.

It is turning out that the latest breaches were the result of an ‘ignosecond’ by one or more employees that made a security breach possible.  All it took was an email message to personnel that included a piece of malware hidden in a file attachment; the malware exploited a vulnerability that allowed the installation of a backdoor and, voila, another compromise.

This should be a wakeup call to all security professionals.  It does not matter how sophisticated your security technology is; it only takes one person to cancel all of it out.  This is why the PCI DSS dedicates requirement 12.6 to security awareness.  Requirement 12.6 states:

Implement a formal security awareness program to make all personnel aware of the importance of cardholder data security.

  • Educate personnel upon hire and at least annually.  Note: Methods can vary depending on the role of the personnel and their level of access to the cardholder data.
  • Require personnel to acknowledge at least annually that they have read and understood the security policy and procedures.

The problem is that a lot of security professionals give only lip service to security awareness training.  Let us face it; security awareness training is not as sexy as security technologies like SIEM and WIPS.  And besides, our users are, well, users.  Even if you train them, they still make mistakes, so why bother with security awareness training?  However, at the end of the day, everything in an organization’s security posture comes down to the people that interact with the information you are trying to protect.  As I stated earlier, it only takes one person having a bad day or a “bad apple” to make all of an organization’s security technology and other controls impotent.

This is why security awareness is such an important part of an organization’s security posture.  Whether you like it or not, there are human beings in the equation and human beings are fallible.  The only way to address this situation is to educate your fellow employees on how to make things secure and avoid being taken in.  But remember, human beings are fallible, so no matter how hard you press security awareness in your organization, you are still going to have incidents.  Therefore, the goal of security awareness training is to minimize the number and impact of those “ignoseconds.”

But we need to be honest about all of this.  Human beings are fallible and we all have our “moments.”  As a result, even with a lot of appropriate security awareness training and periodic reminders, one or more personnel can have a “moment” and create the possibility of a breach.  Even with defense in depth, all it takes is one well crafted attack, a fallible human and your security is breached.  As I have repeatedly stated, security is not and never will be perfect.  And this is particularly true when human beings are involved.

My favorite story about such a situation is from years ago when I was conducting a social engineering attack against a subsidiary of a Fortune 500 company.  We had crafted a very real-looking email message from a known Human Resources consulting firm indicating that they were conducting a survey of the subsidiary’s employees on behalf of the corporate parent.  We instructed recipients to log into a phony Web site and take the survey.  All they had to do was use their network logon credentials to gain access to the survey.  We only got two hits before the parent company’s HR department sent out an urgent email message telling employees that our message was bogus.  One of the two people caught was the CFO of the subsidiary that had hired us.  His comment when confronted with the fact of his “moment”?  “I suppose I shouldn’t have done that.”  What an understatement!

But this story points out the problem all security professionals face and this is one problem that technology is not going to solve.  In the end, people are always going to make mistakes and all we can do is minimize the impact of those mistakes.  Minimizing the impact means real security awareness training coupled with social engineering testing to assess how the security awareness training is working.  In addition, you need to structure your preventative, detective and corrective controls such that you address any points in your controls where one “moment” results in a compromise.  In some cases, you may need to restrict peoples’ access to certain resources or divide up responsibilities.

Most security professionals loathe social engineering tests and rightly so.  As someone famously said a while back, “When on a witch hunt, you are always going to find at least one witch.”  As I have already stated, everyone has their moments and as social engineers such as Kevin Mitnick have shown, there are always ways to social engineer your way into any organization.  Not that organizations have done themselves any favors in this area.  For the last quarter of a century, most organizations have been focused on customer service improvements.  A by-product of this customer service improvement focus has been to train employees to be customer friendly to a fault.  It is those faults that are now being used against them by social engineers.  While good customer service is necessary, customer service training needs to be coupled with a healthy dose of skepticism to ensure that information is not provided without proper authorization.

The best example of customer service gone awry is from the 2010 DEF CON “How Strong Is Your Schmooze” contest.  This contest was a social engineering exercise against large companies that resulted in some very embarrassing results.  Contestants had two weeks to prepare for their social engineering exercise by conducting research on their target.  Of the 15 organizations contacted and 25 available “flags” that could be obtained, 14 gave up one or more “flags.”  To add insult to injury, the social engineers had only 25 minutes to perform their telephone calls in front of a live audience.  If you have read the report you may have issues with the 25 “flags” that were used (God knows the FBI was very concerned and advised the DEF CON people on what they considered okay information to obtain), but you must remember that if this sort of information was obtainable, then probably just about anything could be obtained.

The lesson to be learned in all of this is that if you are not worrying about social engineering and conducting security awareness training, then you are kidding yourself if you think your organization is truly secure.  Yes, there is little you can do to stop human beings from having “ignoseconds.”  But you can take steps to minimize the impact and one of the most important is to get serious about your security awareness training and to follow that training up with social engineering testing.  Just acting on those two items can make a significant difference in the impact of a social engineering attack.

UPDATE: If you think I’m blowing smoke, here are the results of a survey that confirms what I am saying. 

http://www.csoonline.com/article/682709/to-be-breached-is-human?source=rss_data_protection

15 May 11

Doctored Credit Card Terminals

It was announced this week that the Michaels retail stores breach was much larger than originally thought.  However, to those of us in the PCI business, this breach should not have been a surprise.  This sort of breach has been going on for quite a while.

Now before everyone goes out and pillories Michaels for their bad luck, I would like all of you to consider how you would have detected such a situation.  The bottom line is that most of you would have been just as clueless until the FBI and Secret Service turned up at your doors.  So be careful about getting too sanctimonious.  More on how to detect these attacks later.

Regardless of what the PCI SSC has said in the past, credit card terminals are not the “dumb” devices as portrayed by the PCI SSC.  With the introduction of the PA-DSS v2.0, there was an indication that the PCI SSC had finally come to its senses and recognized this fact by requiring that the software used in credit card terminals be certified under the PA-DSS standard.  Let us face it, certain credit card terminals are just as sophisticated as netbooks, use a Linux or Windows Embedded OS as a base and have their own software development kits (SDK).  Not exactly the specification for a “dumb” device.

Another point that should not have been a shock is the fact that the terminal was used as the attack vector.  Those people advocating end-to-end encryption fail to remind people that wherever the endpoints are will become the new attack points.  As a result, attacks like the one at Michaels will only become more popular once end-to-end encryption is implemented.

Wait a minute, the terminal will become more popular for attacks?  After all of last year’s ballyhoo about end-to-end encryption, I can tell you that merchants are under the impression that end-to-end encryption gets them out of being breached.  It is people like Bob Carr, CEO of Heartland Payment Systems, that have been the most negligent in explaining the whole story behind end-to-end encryption.  End-to-end encryption just moves the attack points, in this case out to the terminal at the merchant’s location.  Worse yet, it also makes securing the merchant’s endpoint even more difficult than it already is because the techniques used in doctoring terminals can easily go unnoticed.

Early attacks on terminals were crude and, for the most part, remain this way.  Typically, USB thumb drives are soldered into the terminals.  This attack approach requires the criminals to swap out the doctored devices periodically to obtain their contraband.  Fortunately the “electricians” used to doctor these terminals are usually not good at their tasks.  As a result, only a few of the doctored terminals actually work and collect usable track data.  To get their doctored terminals into retail locations, the criminals hire on as night custodians or other overnight stock help and swap the devices during that time.

But times change.  The criminal element gets smarter and begins doctoring terminals using software, not hardware.  After all, these devices are just small computers.  So now the device is programmed to collect the data and then transmit it overnight to a server on the Internet or, worse yet, a server that has been compromised on your own network.

So what can a merchant do to counteract such attacks?  Actually, quite a bit.

  • Put serialized security tape on all of the seam openings on your card terminals and check it at least daily to ensure that it is still in place and that the serial numbers match.  When the tape becomes worn, replace it and record the new serial numbers.  If you ever notice the tape missing or the serial numbers do not match, take that terminal out of service.  Contact your acquiring bank or processor and obtain a terminal directly from them to replace the potentially tampered with unit.
  • Use only reliable card terminal vendors.  I know that merchants are under tremendous cost pressures, but are these savings really in your best interest when you are leaking cardholder data on every transaction?  Probably not, particularly when customers start complaining and your legal costs start ramping up.  However, even trusted vendors can become a bad source of equipment, so keep this in mind.
  • Do not trust anyone that just shows up to replace your card terminals.  If you did not ask for service, no acquiring bank or processor is going to proactively replace terminals without notifying you first.  Be very skeptical of any service person that appears out of nowhere to “fix” your terminals.
  • If your terminals are on your network, monitor your terminals for when they are disconnected.  In most organizations, terminals are rarely disconnected, so any such alert would be an indication that something abnormal has occurred and should be investigated.
  • Monitor your external network connections and connections from the terminals to devices that should not be in your cardholder data environment (CDE).  Any traffic from the terminals outside your network or to devices not in your CDE is probably someone leaking cardholder data from your terminals for their criminal use.  If you see such activity, notify your local FBI office immediately and ask them if you should stop the traffic.  At this point, you should also probably get a computer forensic analyst involved to begin gathering documentation on the attack.
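The network monitoring ideas in the last two bullets boil down to an allowlist check: terminals should only ever talk to the CDE and to a handful of approved external endpoints.  Here is a minimal sketch of that check; the subnet, the processor address and the connection-log format are all hypothetical examples, so adapt them to your own firewall or NetFlow data.

```python
# Illustrative sketch: flag card-terminal connections that leave the
# cardholder data environment (CDE). All addresses below are examples.
import ipaddress

CDE_SUBNET = ipaddress.ip_network("10.10.20.0/24")   # assumed CDE range
APPROVED_EXTERNAL = {"203.0.113.10"}                 # e.g. processor gateway

def suspicious_connections(connections):
    """Return (terminal, destination) pairs that merit investigation.

    `connections` is an iterable of (terminal_ip, dest_ip) strings,
    e.g. pulled from firewall or NetFlow logs.
    """
    flagged = []
    for terminal, dest in connections:
        dest_addr = ipaddress.ip_address(dest)
        if dest_addr not in CDE_SUBNET and dest not in APPROVED_EXTERNAL:
            flagged.append((terminal, dest))
    return flagged

log = [
    ("10.10.20.5", "10.10.20.1"),     # within the CDE: fine
    ("10.10.20.5", "203.0.113.10"),   # approved processor: fine
    ("10.10.20.7", "198.51.100.99"),  # unknown external host: flag it
]
print(suspicious_connections(log))  # → [('10.10.20.7', '198.51.100.99')]
```

Remember that, per the bullet above, an alert from a check like this is a reason to call the FBI and a forensic analyst, not necessarily to immediately cut the connection.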

These are just some ideas on how to address this situation.  You may have additional options available to you because of the way your organization is configured.  However, you need to begin considering this new attack vector as one that will only get worse.

06 May 11

Self-Assessment Questionnaires

I have received some interesting questions of late regarding various scenarios and how to fill out specific self-assessment questionnaires or SAQs.  The troubling part to these questions is that they are totally misinterpreting how to apply the SAQs to particular businesses.  As a result, I thought it was a good time to discuss the various incarnations of SAQs and how they apply to various businesses.

For those of you unfamiliar with the PCI SAQs, there are five: A, B, C, C-VT and D.  The first four are designed for very specific business scenarios and D is the catch-all when none of the previous four fits.  In the QSA trade, SAQ D is referred to as Report On Compliance (ROC) ‘Light’ because any organization that has to fill out SAQ D is essentially going through all 12 PCI DSS requirements, albeit on a reduced scale.  If your business does not fit the criteria for the other four SAQs, then you are expected to use SAQ D.

The first important fact about the SAQs is that they can only be used by merchants classified as Level 2 through 4 or by Level 2 service providers.  And here is the most important fact: while anyone can give you an opinion regarding which SAQ your organization should use, only your acquiring bank can officially determine which SAQ your organization should use.  That said, in the front of every SAQ, under a section entitled ‘Completing the Self-Assessment Questionnaire’, the SAQ documents the criteria for using that particular SAQ.  If your organization does not meet all of the criteria, then you cannot use the SAQ.

SAQ A is designed for merchants that have no brick and mortar stores such as those similar to Amazon.com.  In addition, the merchant must be totally outsourcing its processing, storing and transmission of cardholder data to a third party such as Level 3 or IBM and those providers must be PCI compliant.  Finally, the organization cannot be storing cardholder data electronically.  However, the organization can have paper reports and receipts that contain cardholder data, but those documents cannot be received electronically.

For SAQ B, your company needs to go back to the “stone age” of credit card processing.  The organization must be using stand-alone card terminals or manual embossers also known as a “knuckle buster.”  In the case of a stand-alone terminal, the terminal cannot be connected to a network or the Internet.  No cardholder data can be stored electronically.  The organization can have paper reports and receipts that contain cardholder data, but those documents cannot be received electronically.

With SAQ C, we get two versions: the standard SAQ C and SAQ C-VT.  The original SAQ C is for organizations that run integrated point of sale (POS) systems on a network that only connects to the Internet for authorization and does not store cardholder data.  To qualify to use SAQ C, the organization must meet the following criteria.

  • The payment application system and the Internet connection are on the same device and/or same local area network (LAN);

  • The payment application/Internet device is not connected to any other systems within the organization’s environment (this can be achieved via network segmentation to isolate payment application system/Internet device from all other systems);
  • The organization’s retail facility is not connected to other retail locations, and any LAN is for a single retail location only;
  • The organization retains only paper reports or paper copies of receipts;
  • The organization does not store cardholder data in electronic format; and
  • The organization’s payment application vendor uses secure techniques to provide remote support to your payment system.

Where most organizations go wrong with the original SAQ C is when they have an integrated POS that connects back to a corporate network.  Remote management is allowed in this environment, but the entity that remotely connects must not have unlimited or uncontrolled access to the POS environment.  We have run into a number of instances, particularly in the fast food and hospitality/hotel industries, where the franchisee’s POS solution appears to fit the SAQ C criteria.  However, upon further investigation, we find that SAQ C cannot be used because the POS environment is connected to and managed from the franchisee’s corporate office, or it is managed by or connected to the franchiser’s corporate office.

New for version 2.0 of the PCI DSS is SAQ C-VT.  This SAQ was developed to handle virtual terminal environments.  The virtual terminal can run either on a full thin client, such as a Wyse terminal, or on a PC where only a browser is used to process cardholder data.  However, the same connectivity requirement remains in that the thin client or PC must only connect to an acquirer, processor or third party.  Finally, and the most important aspect for this SAQ, cardholder data can only be entered manually.

So those are the rules surrounding the use of the SAQs.  Hopefully all of you small merchants can now figure out which SAQ to use.  However, remember to consult with your acquiring bank on which SAQ to use before you pick one.  If your acquiring bank gives you no guidance, then use this posting to make your choice.
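For small merchants trying to reason through the criteria described above, the selection logic can be sketched as a simple decision function.  This is purely illustrative; the function name and flags are my own shorthand for the scenarios in this post, and, as stated above, your acquiring bank remains the final authority on which SAQ applies.

```python
# Hypothetical helper mapping the business scenarios in this post to an
# SAQ suggestion. Flags are my own shorthand, not official PCI SSC terms.

def suggest_saq(card_not_present_only, fully_outsourced,
                standalone_terminals_only, isolated_pos,
                virtual_terminal_manual_entry, stores_chd_electronically):
    """Suggest an SAQ from the eligibility scenarios discussed above."""
    if stores_chd_electronically:
        return "D"   # electronic CHD storage rules out A, B, C and C-VT
    if card_not_present_only and fully_outsourced:
        return "A"   # e-commerce only, everything outsourced
    if standalone_terminals_only:
        return "B"   # stand-alone terminals or knuckle busters
    if virtual_terminal_manual_entry:
        return "C-VT"  # thin client/browser, manual entry only
    if isolated_pos:
        return "C"   # integrated POS, isolated, authorization only
    return "D"       # the catch-all when nothing else fits

# An e-commerce merchant that outsources everything:
print(suggest_saq(True, True, False, False, False, False))  # → A
```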

02 May 11

Draft PCI DSS v2.0 “Scorecard” Released

It has been a long wait, but the PCI SSC has finally given us a look at the new “scorecard” for v2.0 of the PCI DSS.  For those of you that never knew about the “scorecard,” it was given to QSAs to assess Reports On Compliance (ROCs) to ensure that QSAs have properly conducted PCI assessments.  I have not had a chance to get through all 112 pages of this document, but I have gotten through the first part of it and I wanted to share my thoughts.

The first change to the “scorecard” is its name.  It is no longer the “scorecard,” it is titled ‘ROC Reporting Instructions for PCI DSS v2.0’.  The naming seems to indicate that once the QSA review period is over it will be posted to the PCI SSC’s Web site in the Documents Library.

Overall, the document is similar to the scorecard for v1.2.1, but no longer documents the scores that the PCI SSC QA team will use to assess QSAs.  However, from the way it is written, I would assume that if a requirement in the ROC does not contain everything documented in the Reporting Instructions, that it is considered to have not met the QA requirement.

Another general comment I have is that the document is woefully lacking in examples.  While there seems to be a significant amount of guidance provided for what to write in the ROC, there are also ambiguous or unclear references that could be explained if the PCI SSC provided relevant examples of what it desires the QSAs to write.

The biggest change I have found thus far is the removal of the requirement to observe network traffic, as the Network Monitoring column is gone from the Reporting Instructions.  Prior to this point, QSAs were required to obtain network traffic via Wireshark or a similar tool to prove that network traffic is encrypted.  I reviewed requirements 1.2.1.a, 1.2.1.b, 3.2.1, 3.2.2, 3.2.3 and 11.4.a that had the Network Monitoring requirement in the v1.2.1 scorecard.  Based on the training for the 2011 QSA recertification, network monitoring testing is still needed for confirming compliance with requirement 1, so even though it has been removed as a column, it appears to still be required.  However, in the Reporting Instructions the network monitoring is not explicit, so this is one of those areas where the PCI SSC will definitely need to clarify things.
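As a rough illustration of what that kind of network monitoring test looks for: properly encrypted traffic should never contain readable card numbers, so one simple check on captured payload bytes is to scan for Luhn-valid digit runs.  Everything below (the function names, the sample payloads) is my own illustration, not anything prescribed by the Reporting Instructions.

```python
# Illustrative check: scan a captured payload (e.g. bytes exported from a
# Wireshark capture) for cleartext PAN-like digit runs. Only the Luhn
# algorithm itself is standard; the rest is a hypothetical sketch.
import re

def luhn_ok(digits):
    """Standard Luhn check-digit validation on a string of digits."""
    total, double = 0, False
    for d in reversed(digits):
        n = int(d)
        if double:
            n *= 2
            if n > 9:
                n -= 9
        total += n
        double = not double
    return total % 10 == 0

def find_cleartext_pans(payload):
    """Return Luhn-valid 13-16 digit runs found in a captured payload."""
    text = payload.decode("latin-1")  # lossless byte-to-text mapping
    return [m for m in re.findall(r"\d{13,16}", text) if luhn_ok(m)]

# A cleartext authorization request would trip the check
# (4111111111111111 is the well-known Visa test number):
print(find_cleartext_pans(b"card=4111111111111111&exp=1213"))
```

Encrypted traffic, being effectively random bytes, should produce no hits; a match like the one above would be evidence that the traffic is not actually encrypted.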

The section in the Executive Summary at the front of the ROC that discusses how a network is segmented to minimize scope will now require a fairly detailed discussion regarding that segmentation.  All network segments need to be described along with their purpose as well as a discussion of how the segments are architected and whether the segments carry cardholder data (CHD).  If access is provided to the cardholder data environment (CDE), that access needs to be described and that description needs to document how access is controlled.  It is very clear from the write up surrounding this section that QSAs and their clients will have to put much more work into this section to satisfy the PCI SSC.

Another clarification area is with the review of system configurations done as part of requirements 1 and 2.  The guidance now given by the PCI SSC is that they no longer want the documentation to be a list of configuration files that were reviewed by the QSA.  However, in the next breath, the Reporting Instructions tell the reader that a QSA must provide enough detail to prove that configuration files were reviewed.  So what is an acceptable level of detail?  Can we say that we reviewed 5 or 25 firewall configuration files?  In the past, we were told that this sort of approach was unacceptable.  The PCI SSC will need to provide one or more examples of language that they will accept.

Of all of the things I have read thus far, the one that just gets me seething is from the “Dos and Don’ts” page.  One of the “Don’ts” is “Don’t copy responses from one Testing Procedure to another.”  Further down on the list is “Don’t cross reference between responses”.  After going through our QA assessment and remediation, we were told by the QA person that we needed to do a better job of putting all of the information from earlier requirements that was relevant into every requirement as each requirement needs to be able to stand on its own.  But now, according to the Reporting Instructions, you cannot bring all of that documentation to the new requirement by using cut and paste.  What a bunch of “make do” work.

But this “make do” work is all because the PCI SSC is basically implying that it cannot trust its QSACs to do the work that is required to ensure an organization is complying with the PCI DSS.  However, just because a QSA writes something in a ROC does not mean they actually did the work.  It just means that the QSA knows how to write what the PCI SSC wants to read.  And to make matters worse, the PCI SSC provides the Reporting Instructions to provide guidance on just what to write as well as telling QSACs to develop ROC templates to speed the writing process.

A prime example of this is a new requirement in the section of the ROC where the QSA documents the list of people interviewed and/or observed.  The PCI SSC now requires the QSA to document what these people were interviewed about or were observed doing.  The purpose of this new requirement is to provide even more “proof” that a QSA did their job.  Another minor example of the PCI SSC trying to get “proof” of a QSA’s work effort is the increased level of detail being asked for regarding the dates that QSAs were on-site for fieldwork.  In the past, QSAs were only required to document the period covered by the assessment.  However, QSAs are now being required to also document all of the dates of their fieldwork as well as the duration of their fieldwork and review period.

This is one of my biggest issues with the ROC process.  The PCI SSC refuses to adopt a more intelligent and cost effective reporting process of documenting requirement exceptions.  Instead, the PCI SSC requires QSAs to document their fieldwork process in the report.  As a result, an inordinate amount of time, paper and hence money is spent on what is really, in my humble opinion, a totally worthless effort.

I understand why this was required.  When the PCI SSC did not have the right to review a QSA’s work papers and other documentation, having such documentation in the ROC was the only way the PCI SSC, card brands and acquiring banks could assess whether or not a QSA had done their job.  Now that more than a year has gone by since the PCI SSC required all QSACs to include verbiage that allows the PCI SSC to review a QSAC’s work papers, putting all of this effort into a response writing requirement should no longer be required.  QSAs should be able to mark a requirement either ‘In Place’ or ‘Not In Place’.  If a requirement is ‘Not In Place’, then the QSA should document why the requirement is not in place and what the organization is doing to remediate the problem and when the remediation will be complete.  Such an approach would make the creation of the ROC much faster and would make the ROC much quicker to read and easier to understand.  This is the approach used in the accounting industry for their SSAE 16 reports and there is no reason why the PCI SSC could not adopt the same approach.

The PCI SSC continues to cling to this inane reporting requirement because it apparently is relying on the readers of the ROCs to “rat out” those QSACs that are producing inadequate reports.  I hate to be the bearer of bad news, but based on my review of ROCs from other QSACs that I have encountered over the last year, the “bad eggs” are not being weeded out.  Based on my interaction with acquiring banks and various card brands, there are a lot of ROCs that are not being read in detail.  And even those ROCs that are being read, most comments surround anything that is determined to have been ‘Not In Place’.  Occasionally we get a question about an ‘In Place’ item.  Obviously the current approach is not working and as long as the PCI SSC continues this approach, we are not going to build trust between the PCI SSC and QSACs.

I know that this is a dilemma for the PCI SSC, but it is something that needs to be addressed and soon.  Organizations that have to go through the ROC process are pressuring QSAs to reduce costs as much as possible not only due to our current economic conditions but also because of the thin margins retailers live on.  In order to keep the PCI compliance process relevant, the PCI SSC needs to get out in front of this issue.  The PCI DSS assessment process is very labor intensive, so the only cost savings to be obtained will be in making the process less labor intensive.

UPDATE: On the morning of September 20, 2011, the PCI SSC released the final version of the Reporting Instructions along with an FAQ.  These documents can be obtained from the PCI SSC Document Library under the Additional Documents – QSA heading.



