Author Archive for PCI Guru

20 May 16

Just Had To Comment

A friend posted to LinkedIn an article from Dark Reading titled ‘Epic Security #FAILS Of The Past 10 Years’.  It is an interesting read, but I had to comment on a few premises that I found totally misguided or uninformed.

Perimeter Security

“But clinging to the old castle/moat model has been a wakeup call for many enterprises, while others (mostly SMBs) are still in denial that their old-school firewall stops hackers.”

Firewalls, intrusion detection and intrusion prevention technologies have proven effective at stopping hackers from attacking organizations over the network.  As a result, hackers changed tactics and turned to social engineering techniques such as spear phishing to get around these technologies.  But the fact that tactics changed does not mean that these technologies are worthless.  What it does mean is that organizations relying only on these technologies need to understand the changes in attacks and respond accordingly.  Firewalls, intrusion detection and intrusion prevention technologies all still have a place in the information security ecosystem.

Where they have failed is in their implementation and execution.  In too many organizations, the rules implemented on these devices are sloppy and loose.  Why?  Because of the mistaken belief that loose rules are the only way to be flexible and speedy in our ever-changing world of business.  Business needs to be educated that security is the result of thoughtful analysis of what is absolutely needed, so that known risks are properly managed.  Knee-jerk responses and just tossing solutions out the door are not the way you secure anything.  Any intelligent leader will understand those facts.

“The network perimeter is evaporating. Mobile devices, cloud, and now the Internet of Things, have sucked the life out of traditional, static “set it and forget it” network security, and the bad guys are bypassing the corporate firewall with spear phishing emails that land on the desktops or devices of end users.”

While I agree the perimeter is changing, there still needs to be a defined perimeter.  Otherwise, how do people know where their responsibilities start and end?  The media keeps talking about the disappearing perimeter, but in fact the perimeter will never disappear.  It is not my job to secure the Internet, but it sure as Hell is my job to secure my organization’s network.  I have to consider the threats presented by the Internet and by networks I do not directly control, which is why I need to define where my organization’s perimeter exists.

But even more disconcerting is this concept of a “static” or “set and forget” mentality.  When did any information security professional ever think that information security was “static” or a “set it and forget it” situation?  The mantra I was always taught and have passed along in my classes and presentations was “information security is a journey, not a destination”.

End Users

“You can’t patch end users, so what is left?”

Why is it that people throw up their hands and say such things?  There is no doubt that security awareness training has been a disaster.  But why is that?  Could it be that we only lamely implement security awareness training?  Who really, truly believes that annual security awareness training will be effective?  Anyone?

People can be “patched” BUT it takes a LOT of time, patience and persistence, attributes that never seem to exist in most of the security awareness programs I have encountered.

The example I love to trot out is from my own experience.  Right after I got married, my spouse constantly badgered me over my failure to put the toilet seat down after using the bathroom.  Years went by and the badgering continued.  It took probably three to four years, but I finally got it and began habitually putting the toilet seat down after using the facilities.  I learned my lesson and continue to put the seat down even now.

The lesson here is that it takes time and consistent and constant reminders to change human behavior.  Are you really going to put your lame annual security awareness training up against my example and claim it is actually effective?  I seriously doubt it.

Organizations have implemented all of the technological solutions they need to protect information and their networks.  The last remaining risk is the people that use and interact with those systems.  That is why hackers have switched over to social engineering techniques.  Why go after hardened systems when people can do it for you?

For the most part and unfortunately, information security professionals are great technologists and lousy people persons.  We need to partner with human resources and industrial psychologists to develop truly useful security awareness training that is focused on changing people’s behaviors so that they are more aware of the risks they present to the technology they use and interact with.  As information security professionals, we can identify the risks.  But we need to leave the actual training to HR and the psychologists so that it is done effectively.

Point-of-Sale Systems

Point of sale (POS) systems are becoming a dead end for attackers.  Just as hackers have changed tactics, so has the POS ecosystem.

With the advent of encryption or tokenization at the swipe/dip of a customer’s payment card, POS systems no longer encounter sensitive authentication data (SAD).  Add in the fact that transaction processors return a token instead of the PAN to POS solutions, and the POS is no longer a source of cardholder data (CHD).
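To make the tokenization point concrete, here is a minimal sketch, in Python and purely illustrative; the function names and token format are my own invention, not any processor’s actual API.  The POS only ever sees the token, which is worthless to an attacker because the mapping back to the PAN lives solely with the processor.

```python
import secrets

# Hypothetical processor-side token vault; the merchant never holds this.
_vault = {}  # token -> PAN

def tokenize(pan: str) -> str:
    """Issue a random token in place of the PAN at authorization time."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Only the processor can reverse the mapping."""
    return _vault[token]

# The POS stores only the token for later refunds/chargebacks:
print(tokenize("4111111111111111"))  # e.g. 'tok_9f2c4a1b8e3d5f60'
```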

Most mid-sized and large merchants have implemented or are in the process of implementing these technologies.  Within the next 12 to 18 months, these efforts will be completed and the days of huge merchant data breaches will have come to an end.

This leaves small merchants as the remaining risk for card data breaches.  The question then is whether there will be enough cards involved to make hacking small merchants worthwhile as a source of card data.  Do not get me wrong, small merchants will be breached, but the value to the attackers will not be the gold mine that breaching a Target or Home Depot was.  As a result, attackers will have to move to breaching transaction processors and banks for those big scores and will, for the most part, leave small merchants alone.

Java and Flash

I do not think I really need to comment on Flash.  I think the risks of Flash speak for themselves.

“Many developers had written applications based on older versions of Java or to a specific version of Java that if upgraded to its latest iteration, wiped out some features or functions.”

If rapid application development has a downfall, it is with Java.  In all of the years I have been doing security assessments, I have yet to encounter an organization that develops in Java that does not have a Java security problem.  It never ceases to amaze any assessor just how old some organizations’ implementations of Java can be.

But it is not just the loss of features and functions that creates issues with Java.  Typically the larger reason for antiquated versions of Java is the simple fact that organizations do not have the manpower, time and/or budget to rewrite applications for newer versions of Java every time Java gets updated.  As a result, Java applications sit at their old and potentially risky versions.

Developing mitigation plans for such environments is also challenging.  The most typical approach is to increase the log data generated by the application to increase the likelihood that an attack against it can be identified.  Changes are made to the application to identify key events that could be indicative of an attack and to generate appropriate log data or messages that alert operations personnel to the potential attack.
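As an illustration of that logging approach, here is a minimal sketch.  It is in Python for brevity, but the same pattern applies inside a Java application; the attack signatures and field names are hypothetical and would have to be tuned to the actual application.

```python
import logging
import re

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("app.security")

# Hypothetical attack signatures; a real list would be tuned to the app.
SUSPICIOUS = [re.compile(p, re.IGNORECASE) for p in (
    r"('|--|;)\s*(or|and)\b",  # crude SQL injection indicator
    r"<script",                # crude XSS indicator
    r"\.\./",                  # path traversal indicator
)]

def check_input(field: str, value: str) -> str:
    """Log input that looks like an attack before the legacy code sees it.
    The WARNING entries are what operations/SIEM rules alert on."""
    for pattern in SUSPICIOUS:
        if pattern.search(value):
            log.warning("possible attack: field=%s value=%r", field, value)
            break
    return value

# Example: this emits a WARNING for operations to act on.
check_input("username", "admin' OR 1=1 --")
```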

Another approach is to use firewall rules to lock down, as much as possible, the ports/services and devices that can communicate with or invoke the application.  Obviously this is not a good approach for anything Internet facing.

The bottom line is that whatever an organization can do to mitigate or remove the risks of Java, it should be doing.  And the risk posed by Java should be assessed often to ensure that it is properly managed.

11 May 16

Heads Up – Changes To SAQ A

I had a question this week regarding v3.2 of SAQ A that pointed out there have been some changes to that SAQ that people may not have noticed given the larger issues with the changes to the PCI DSS.  As a reminder, SAQ A contains the smallest set of PCI requirements with which any in-scope organization can comply.

Those added requirements to SAQ A are:

  • 2.1(a) – Are vendor-supplied defaults always changed before installing a system on the network?
  • 2.1(b) – Are unnecessary default accounts removed or disabled before installing a system on the network?
  • 8.1.1 – Are all users assigned a unique ID before allowing them to access system components or cardholder data?
  • 8.1.3 – Is access for any terminated users immediately deactivated or removed?
  • 8.2 – In addition to assigning a unique ID, is one or more of the following methods employed to authenticate all users?
  • 8.2.3(a) – Are user password parameters configured to require passwords/passphrases meet the following?
  • 8.5 – Are group, shared, or generic accounts, passwords, or other authentication methods prohibited?
  • 12.10.1(a) – Has an incident response plan been created to be implemented in the event of system breach?

Even when a merchant outsources all of its card processing, these controls can still apply because, in a lot of cases, the merchant is responsible for the setup, management and maintenance of its outsourced payment processes and applications.

In addition, merchant employees will also interact with an outsourced payment system to handle chargebacks and disputes.  The user accounts used in the outsourced environment will normally be managed by someone at the merchant, not necessarily by the service provider.

In regards to incident response, the merchant will be involved even when they have totally outsourced their payment environment.  The merchant will work with their outsourcer through an incident, and the merchant’s responsibilities need to be documented.

As a result, the addition of these controls should not be a surprise to anyone.

28 Apr 16

Hold Your Horses

UPDATE: The ROC Reporting Template is available as a PDF on the Document Library page under the “Reporting Template and Forms” banner, almost all the way down the page.  The Word version of the ROC Reporting Template is now available from the PCI Portal.  No word yet on the PA-DSS and its ROV Reporting Template.

Yes, the PCI SSC released the final version of the PCI DSS v3.2, an updated Glossary and a Summary of Changes document on their Web site this morning, but we are missing a key piece: the Report On Compliance (ROC) Reporting Template.

Why is that important you might ask?

The ROC Reporting Template is the document that contains all of the tests a QSA/ISA needs to conduct to prove that an organization is PCI compliant.  It tells you and your QSA/ISA what evidence needs to be gathered, how to gather it and the level of effort required.  Without that information, an assessment under v3.2 cannot be performed.  Nor do we truly know the breadth and depth of the changes the Council has made.

The Council promised on their Webinar a month ago that all documents would be released on the same date.  But as of this writing, the ROC Reporting Template is missing in action.

Until we have that document, we have nothing.

Also of note, the PA-DSS v3.2 and its related Report On Validation Reporting Template are missing in action as well.

22 Apr 16

Learning Moments From Security Conversations – Part 1

Attacker With Administrator Rights

This conversation was a discussion of an attacker gaining administrative privileges on a network.  It started out benign enough and yet rapidly escalated into a full-on war of words.  I had to spend almost 40 minutes arguing with an administrator over the fact that if an attacker had administrative rights, it was “game over” for their organization.  I could not believe the lengths that this administrator went to in order to prove me wrong.

What started this fiasco was a discussion of the results of their vulnerability scans and penetration testing reports.  The reason the conversation got tense was that the administrator was arguing about how the penetration tester was able to escalate privileges to administrator.  At the core of the argument were the “Low” rated vulnerabilities that the penetration tester used to gain access to the system and ultimately compromise the environment.

I am not sure where this idea/myth actually started, but it continues to persist even today, after around 20 years of vulnerability scanning.  That idea is that “Low” rated vulnerabilities are somehow not a threat.  It persists even when you explain that, regardless of ratings, vulnerabilities are vulnerabilities; some are just easier to exploit than others and provide quicker compromises than others.

Another reason this is an issue is that most information security personnel are not penetration testers.  Penetration testing is not so much a skill as it is an art form.  Anyone can take high and medium vulnerabilities and leverage them to compromise an environment.  That is why they are rated so high in the first place.  But it takes a true artist with a tremendous amount of knowledge of networking, operating systems and applications to look at the results of a vulnerability scan, take certain low rated vulnerabilities, pair them with certain other vulnerabilities, compromise a system and then compromise the environment.  Not that this always leads to a compromised environment, but it is not simple or easy, which is why it is a shock when it happens.

Once the penetration tester had compromised a few systems, they discovered a way to escalate their privileges to domain administrator through the use of a keyboard logger on a compromised system.  They then collected the domain administrator credentials and it was “game over”, or at least that was the penetration tester’s and my opinion.

So the first point of contention was those “Low” vulnerabilities that the penetration tester used to gain access to a system on the network.  Somehow the administrator believed that those vulnerabilities were off limits because they were rated “Low”.  I did my “vulnerabilities are vulnerabilities” spiel and pointed out that even the PCI DSS states that all vulnerabilities must be patched within 90 days (some of the “Low” vulnerabilities were over 90 days old).

Finally the administrator conceded that at least those old vulnerabilities needed to be patched, but continued to argue that using any “Low” vulnerabilities was not “fair”.  Fair?  I tossed that back in their face and asked what attacker would play “fair”?  Point taken and we moved on.

The next point from the administrator was that even if the penetration tester had domain administrator privileges, they did not have access to the databases and encryption keys.  Those rights are kept in a different group away from the domain administrators.

I could not believe what I was hearing.  So I next asked if domain administrators could modify the membership of those domain groups.  “Of course,” was the quick answer back.  So our simulated attacker could have created a new domain administrator account and added it to the database and encryption groups?  “Well, yeah, I suppose so,” was the quiet answer back as the administrator started to see where things were heading.

Then the argument moved on to control of network devices and the exfiltration of data.  This revolved around the fact that domain administrators did not have access to network devices.  However, the RADIUS server that did control access to the network devices was integrated with their Active Directory environment.  So I asked what would stop someone with domain administrator rights from creating a new account and adding it to the network administration group, which would then be replicated to the RADIUS server.

The silence created by that question was deafening.  The administrator was speechless.  They now understood the gravity of the situation.  They were owned and they really did not like that fact.  Granted we had not taken things that far because it is a pain to clean up.  But the client now understood after 40 minutes of arguing about it, that the game was over and their environment was no longer under their control.

This is the problem that most organizations face.  They see everything framed in the control paradigms they have implemented.  The problem is that attackers do not care about controls or their paradigms.  They just care about getting access to information and they structure their efforts accordingly without regard to a control environment.

This is why monitoring is so very important and why near real-time monitoring can save your life if it is configured properly.  But monitoring only works if rules have been structured around those same control paradigms so that when the paradigms are violated, alerts are generated.

In the above example, alerts that would have raised red flags include the following (a minimal triage sketch follows the list):

  • Creation of administrative accounts. Such accounts are only rarely created in most environments, so when they are created, an alert should be generated and then matched against an account creation request.
  • Addition of accounts to administrative groups. As with administrative accounts, changes to these groups are very infrequent.  Again, when such an alert is generated, there should be a corresponding change request of some sort.
  • Changes to configurations of network devices and/or servers. These can be problematic because of volume, particularly on “Patch Tuesdays” or whenever you do volume patching.  But matching changes to change tickets pays off in discovering attackers.  Since attackers do not register their changes in the change management system, any changes popping up that do not have a corresponding change ticket are likely to be part of an attack.
  • Redirection of network traffic to public IP addresses outside of your business partners or other legitimate IP addresses. Where organizations are most at risk is in communications with business partners.  Because of the speed of business these days, a lot of information security people do not sufficiently restrict network traffic between their organization and business partners so that they do not have to constantly make changes.  While that allows near-immediate communication flexibility, it also allows business partners to be a ready source of attacks and data exfiltration points.
  • Significant increases in outbound traffic volume over ports such as DNS that should not have such increases. Attackers do not obey the port protocol rules, particularly if they are trying to avoid changes to network devices.  In the Target breach, the attackers exfiltrated Target’s cardholder data through port 53 (DNS).  The reason is that, in most instances, port 53 will be open and will not have restrictions on the IP addresses allowed to communicate with it.
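As a sketch of how the first two rules might be wired up, assuming Windows security event IDs (4720 for account creation, 4728/4732 for additions to security-enabled groups) and a hypothetical change ticket store, the core logic is simply “admin activity with no matching change ticket equals alert”:

```python
# Hypothetical event records and ticket store, for illustration only.
ALERT_EVENT_IDS = {
    4720: "user account created",
    4728: "member added to security-enabled global group",
    4732: "member added to security-enabled local group",
}

open_change_tickets = {"CHG-1042"}  # tickets approved in the change system

def triage(event: dict) -> None:
    """Alert on admin-account activity that has no matching change ticket."""
    desc = ALERT_EVENT_IDS.get(event["event_id"])
    if desc is None:
        return  # not an event class we alert on
    if event.get("change_ticket") in open_change_tickets:
        return  # expected, registered change; nothing to see
    # Attackers do not file change tickets - this is the red flag.
    print(f"ALERT: {desc} on {event['host']} by {event['actor']} "
          f"with no matching change ticket")

triage({"event_id": 4720, "host": "DC01", "actor": "svc_backup",
        "change_ticket": None})
```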

But the obvious area that should receive attention is the patching of those medium and low ranked vulnerabilities.  It just amazes me the twisted logic that sometimes gets used to justify putting off applying patches until the very last possible moment, all because the vulnerabilities being addressed are not high or critical.  As I said earlier and cannot stress enough, vulnerabilities are vulnerabilities regardless of their rank.  They make devices/systems vulnerable, hence the name.

I will share another such discussion in a future post.

16 Apr 16

PCI DSS v3.2 Draft Released

On Friday, April 15, 2016, while a lot of you were probably getting your US income taxes done, the PCI SSC decided to release the draft of v3.2 of the PCI DSS.  I know the announcement message from the Council ended up in my company’s spam filter, so you may want to check there if you did not receive it.  I was lucky enough to have a colleague forward his copy along to me.  However, you need access to the PCI Portal to obtain the draft PCI DSS v3.2 and the requisite change log.

These are some of the more notable changes in the new PCI DSS version.

  • The draft provides an official sunset date for v3.1 of the PCI DSS. Regardless of the date in April that v3.2 is released, v3.1 will be withdrawn on October 31, 2016.  So any assessments done after that date will need to comply with and use v3.2.
  • Two new sections to Appendix A have been added. In addition to the Appendix for shared hosting providers (now marked A.1), we get Appendices A.2 and A.3.  A.2 covers SSL and early TLS for those of you that will miss the June 30, 2016 date.  While the deadline was extended to June 30, 2018, any organization missing the June 30, 2016 date must fill out Appendix A.2.  A.3 is where the Council added the designated entities supplemental validation (DESV) requirements.
  • There are a number of new requirements for service providers that are best practices until February 1, 2018. Those new requirements include: (1) maintain a documented description of the cryptographic architecture, (2) detect and report on failures of critical security control systems, (3) perform penetration testing on segmentation controls at least every six months, (4) executive management to establish responsibilities for the protection of cardholder data and a PCI DSS compliance program, and (5) perform reviews at least quarterly, to confirm personnel are following security policies and operational procedures.  I would bet that numbers three and five will likely create a lot of contention with service providers.  But you have until February 1, 2018 to get those in place.  However, if experience teaches us anything, service providers had better start now getting these new requirements in place and operating.
  • All organizations picked up the following new requirements that are best practices until February 1, 2018: (1) change control processes to include verification of PCI DSS requirements impacted by a change, and (2) multi-factor authentication for all personnel with non-console administrative access to the CDE. As with the aforementioned new requirements for service providers, these will also require a lot of organizations to get started now to ensure these new requirements are in place and operating.
  • The Council clarified requirement 8.1.5 to show that it is intended for all third parties with remote access, rather than only vendors. While most organizations understood the intent of this requirement, there were a few that played “legal eagle” and refused to require compliance for non-vendors.
  • Requirement 6.5 has been clarified that developers must go through secure coding training at least annually. This change will likely create some consternation for some organizations that are developing their own software that is in-scope for PCI compliance.
  • Clarified 11.5.a by removing “within the cardholder data environment” from the testing procedure for consistency with requirement, as the requirement may apply to critical systems located outside the designated CDE. This will likely expand the number of systems that require critical file monitoring.
  • Clarified 12.8.1 by saying that the list of service providers now must include a description of the service(s) provided.
  • Clarified 12.8.2 by adding guidance that service provider responsibility will depend on the particular service(s) being provided and the agreement between the two parties.
  • One of my pet peeves has finally been addressed. I have always had an issue with requirement 1.1.6 and the use of the terminology “insecure protocols”.  The reason is that in one way or another, all protocols have their insecurities whether they are known or not.  In v3.2, the Council has finally removed the “insecure” designation as, in their words, “these may change in accordance with industry standards.”  It is those small battles at times that make your day.

There are other clarifications and edits that have been made to the new version.

For all of us QSAs, we await the Reporting Template, which will detail the actual testing to be performed and allow us to assess the real impact on the effort required to conduct an assessment.  As a result, there could still be some surprises with this new version of the PCI DSS.  So stay tuned.

15 Apr 16

Multi-Factor Authentication

During the PCI SSC’s recent presentation on the coming changes in v3.2 of the PCI DSS, one of the changes discussed was the adoption of the term “multi-factor authentication” in place of “two-factor authentication”.  This change resulted in some heated discussion in the Q&A session that followed the presentation.

Even though the Council was very, very clear about what they meant by multi-factor, a lot of us have concerns about that terminology.  The reason for all of this concern?  Most of us in the information security profession dislike the term “multi-factor authentication” because it is typically used to bless stupid practices such as using two sets of credentials.  I have discussed this in a previous post, but I thought the time was right to discuss the topic again before QSAs start running into organizations trying to pawn off two passwords as valid multi-factor authentication.

Multiple factors of authentication are defined as:

  • Knowledge also referred to as ‘something you know’ such as a password or passphrase,
  • Possession also referred to as ‘something you have’ such as an RSA SecurID or Symantec VIP token, and
  • Inherence also referred to as ‘something you are’ such as a fingerprint, hand geometry or voice.

In order for multi-factor authentication to be secure, the factors used must come from different categories, hence the terms one, two and three factor authentication.  Using only a password is single or one factor authentication.  Using a PIN with a fingerprint is two factor authentication.  Using a password with a token number and a fingerprint is three factor authentication.

Therefore using two passwords is not using factors from two of the three categories.  It is using the same category twice, which is not considered secure.  This holds true for using a fingerprint and an iris scan, as those are also two items from the same category, although those biometric factors are arguably much stronger than two passwords.

Why are two passwords not considered secure?  An attacker only has to compromise your authentication system and they would likely have access to both sets of credentials.  But if you also require either of the other two factors, the attacker may have credentials but not the other factor needed to use them.  Therefore if you are using true two or three factor authentication, your security is still effective.
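To make the distinction concrete, here is a minimal sketch of true two-factor verification: knowledge (a password) plus possession (a TOTP code from a token or authenticator app, per RFC 6238).  The storage scheme is simplified for illustration; a real system would use a salted, slow KDF for the password, not a bare hash.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: proves possession of the shared-secret token."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def login(password: str, code: str, stored_hash: bytes,
          secret_b32: str) -> bool:
    """Two factors from two DIFFERENT categories must both pass."""
    knows = hmac.compare_digest(
        hashlib.sha256(password.encode()).digest(), stored_hash)
    has = hmac.compare_digest(code, totp(secret_b32))
    # A second password would merely re-test 'knows'; it adds no category.
    return knows and has
```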

So all of you out there thinking the Council has approved of using two passwords as an approved multi-factor authentication solution need to think again.  Although I know there will be some that do not get this message and will try and use it in that context anyway.

09 Apr 16

Living In PCI Denial

This was one of those weeks where you see something and all you can do is shake your head and wonder what some organizations think when it comes to PCI.  What made this case worse was that the organization arguing over PCI compliance is a manufacturer of card terminals, also known as points of interaction (POI).  It shocked me that such an organization was so clueless about PCI as a whole when you would think it is their business to know.  But to add insult to injury, my client’s transaction processor and acquiring bank are also apparently clueless.

As background, I am working on a client’s Report On Compliance (ROC).  This client has almost completed their roll out of an end-to-end encryption (E2EE) solution at all of their 4,000+ retail locations.  This E2EE solution will take all but the POI at those retail locations out of scope for PCI compliance.  That is the good news.

But if there is good news, you know there must be bad news.  In reviewing their documentation of this E2EE solution, I discovered that the POI vendor is providing management and updates to the POI through a terminal management system (TMS).  Since this TMS solution/service connects directly to my client’s cardholder data environment (CDE), I naturally asked the client for a copy of the vendor’s Attestation Of Compliance (AOC) for the TMS solution/service.

I thought those worthless PCI Certificates of Compliance took the cake.  Then, BAM!  I got the following message forwarded to me by my client from the POI vendor.  I have redacted all of the potential information that could identify the relevant parties and the TMS solution/service.

“Please see the follow up note below that you can send to your QSA for review and feedback:

  1. TMS systems in our industry do not require any type of PCI certification since PCI is concerned about card holder information that would be at risk. Since [vendor solution] does not have any card holder data at all, it falls outside of PCI requirements.  [Vendor solution] is merchant configuration and estate management tool only and as such, no payment card information passes through it, or directed to it.  In addition, no secure keys are stored on [vendor solution] so transaction data cannot be decrypted with anything on [vendor solution] or POS.
  2. [Vendor] Hardware and [vendor solution] Software are all PCI PTS compliant and certified and listed on the PCI website. Transactions are encrypted in hardware using the [encryption solution] keys which again [vendor solution] has no knowledge.  Transaction information can only be decrypted by [processor] the processor.  [Vendor solution] has no knowledge of this encrypted information being sent directly from the [vendor] to the processor.
  3. The beauty and simplicity of [vendor solution] semi-integrated terminal application is that is has all transaction data go directly to the Processor ([processor]) and no customer data is directed to the POS or [vendor solution] which makes the POS out of PCI Scope by the very nature of no card holder data in their environment.
  4. [Client] has a merchant certification with [processor] for the [encryption solution] with our [vendor solution] terminal application. Any questions regarding the certification should be directed to [acquiring bank] or a [processor] representative.

Let us know if your QSA has any further questions and we can also schedule a concall with all parties to address any concerns on [vendor solution] TMS and PCI.”

The first thing that wound me up is that this vendor is a business partner of my client’s transaction processor.  The processor is also a business partner of my client’s acquiring bank.  Those two organizations put this vendor forth to my client as being able to provide POI compatible with the processor’s E2EE and tokenization solution.  Obviously, from this vendor’s response, these two well-known institutions did nothing in the way of due diligence to ensure that this vendor and its services were PCI compliant.

The second thing that totally irritated me is that there is no excuse for this vendor’s uneducated response.  Granted, this vendor is new to the US market, but they have been supplying POI to merchants all over other parts of the world.  Which starts to make you wonder just how lame the banks, processors, card brands and other QSAs are that this vendor has not been called on the carpet about this before.  But that is a topic for another post and a good reason why the FTC is investigating the PCI compliance industry.

So let me take apart this vendor’s response.

“TMS systems in our industry do not require any type of PCI certification since PCI is concerned about card holder information that would be at risk.”

Wrong!  On page 10 of the PCI DSS the first paragraph under ‘Scope of PCI DSS Requirements’ clearly defines what is in scope for PCI compliance.

“The PCI DSS security requirements apply to all system components included in or connected to the cardholder data environment. The cardholder data environment (CDE) is comprised of people, processes and technologies that store, process, or transmit cardholder data or sensitive authentication data. “System components” include network devices, servers, computing devices, and applications.”

The operative phrase the TMS solution/service falls under is “connected to”.  The TMS solution/service directly connects to my client’s CDE.  That solution/service may not process, store or transmit cardholder data (CHD) or sensitive authentication data (SAD), but it is directly connected to my client’s CDE.  As a result, according to the above definition, the TMS solution/service is definitely in scope for PCI compliance.

“[Vendor] Hardware and [vendor solution] Software are all PCI PTS compliant and certified and listed on the PCI website.”

PTS certification is a card brand requirement, not a PCI DSS requirement.  Nowhere in the PCI DSS does it require that a PTS certified POI be used so I really do not care about this statement as it has nothing to do with my PCI DSS assessment activities.  If PTS were a PCI DSS requirement, then all of those people using Square and the like would be non-compliant.

“In addition, no secure keys are stored on [vendor solution] so transaction data cannot be decrypted with anything on [vendor solution] or POS.”

“Transaction information can only be decrypted by [processor] the processor.”

True, your TMS solution/service does not have the encryption keys.  But the firmware delivered by the TMS solution/service does have access.  (Unless you are the first POI vendor I have ever encountered that spent the huge amount of money required to truly create a hardware-only encryption solution.)  Given the low retail price and the discounting on your POI that you gave my client, I very seriously doubt that is the case.  So the firmware that your TMS solution/service delivers is what is doing the encryption and therefore has access to the encryption keys.  So while the TMS solution/service does not have the keys, it could be used to deliver rogue firmware that could obtain them.

Then there is the firmware delivery itself by your TMS solution.  If someone hacks your TMS environment, how easy would it be for them to have it deliver a rogue version of your firmware?  Since my client has no AOC, I have no idea whether your security measures surrounding your TMS solution are adequate to prevent such an attack.
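For what it is worth, the standard defense here is firmware signing: the terminal only accepts an image that verifies against a vendor public key baked into the hardware, so a compromised TMS can deliver a rogue image but cannot produce a valid signature for it.  A minimal sketch, using the third-party Python “cryptography” package; the function and its parameters are illustrative assumptions, not this vendor’s actual mechanism.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def firmware_is_authentic(image: bytes, signature: bytes,
                          vendor_pubkey_pem: bytes) -> bool:
    """True only if the firmware image was signed by the vendor's key.
    A compromised TMS can deliver a rogue image, but cannot sign it."""
    public_key = serialization.load_pem_public_key(vendor_pubkey_pem)
    try:
        public_key.verify(signature, image,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```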

“[Client] has a merchant certification with [processor] for the [encryption solution] with our [vendor solution] terminal application.”

Such a statement ranks up there with those previously mentioned worthless PCI Certificates of Compliance.  Any QSA is required to obtain an AOC for the TMS solution/service to ensure that it is PCI compliant or the solution/service must be assessed as part of the merchant’s PCI assessment.

PCI DSS requirements under 12.8 are very clear as to everything a merchant needs to be able to provide to their QSA regarding third party PCI compliance, primarily that AOC for your TMS solution/service, among other items of evidence.

So I had a conference call with my client’s bank to discuss this situation.  I pushed back very hard when they told me that my client needs a compensating control for their business partner’s incompetence.  I even got an “atta boy” from the bank for pointing out that they have a PCI compliance and potential security issue.  But I could not make the bank budge on the compensating control, so I am off to get that written.

The lesson to be learned from this post is that nothing can be taken for granted when doing a PCI assessment, even when your transaction processor and bank are involved.  A lot of people and QSAs would assume that a POI vendor would know better and that the bank and transaction processor had vetted the POI vendor.  So why worry about this vendor?  However, as I have pointed out, you can never take anything for granted, even when it involves organizations that you would think would know better.

This is just one of many ways that could result in an organization being breached.  The TMS solution/service is a gateway directly into the merchant’s CDE.  Yet there has been no PCI assessment of that solution/service to ensure that it is PCI compliant and that the risk of it being subverted has been minimized.

Thank goodness it is the weekend.  Oh, wait.  This weekend’s project is my income taxes.  Looks like I will be cranky all weekend as well.




Announcements

If you are posting a comment, be patient, as the comments will not be published until they are approved.

If your organization has a PCI opportunity, is in need of assistance with a PCI issue or if you would like the PCI Guru to speak at your meeting, you can contact the PCI Guru at pciguru AT gmail DOT com.

I do allow vendors to post potential solutions in response to issues that I bring up in posts. However, the PCI Guru does not endorse any specific products, so "Caveat Emptor" - let the buyer beware. Also, if I feel that the response is too "sales-ee", I reserve the right to edit or not even authorize the response.
