April 2, 2019

The PCI Dream Team Rides Again

On Tuesday, April 23, 2019 at 1PM ET (1700 UTC), the PCI Dream Team will ride again and tackle The Cloud as well as any other tough PCI questions you have.  If you are interested, you can register using this link: http://bit.ly/2OLhsYh.

We look forward to talking to everyone then.  If you have questions you would like the Dream Team to consider, please submit them to pcidreamteam AT gmail DOT com.

March 11, 2019

The New NIST Password Guidance

In June 2017, the US National Institute of Standards and Technology (NIST) issued new guidance on authentication in the form of four Special Publications (SP).

  • SP 800-63 is an overview of digital identity and the other three publications in the series.
  • SP 800-63A discusses digital enrollment and identity proofing.
  • SP 800-63B discusses authentication and lifecycle management.
  • SP 800-63C discusses federation and assertions.

This post is about SP 800-63B which covers the new password guidance from NIST.  In the vernacular of NIST, a password/passphrase is referred to as ‘Memorized Secret Authenticator’.  Here are the key attributes offered by this new NIST guidance:

  • A Memorized Secret Authenticator must be a minimum of eight characters in length, and systems should allow lengths of at least 64 characters.
  • All printable ASCII characters should be allowed in a Memorized Secret Authenticator.
  • A replacement Memorized Secret Authenticator used to reset a forgotten/corrupted/compromised Memorized Secret Authenticator must be at least six characters long.
  • A Memorized Secret Authenticator should be checked against a blacklist of common or compromised words/phrases to reduce the risk of guessing or brute force attacks.
  • No hints are allowed to be provided to unauthenticated users.
  • When changing a Memorized Secret Authenticator, the provider should ensure that the new Memorized Secret Authenticator is not known to be compromised, a known word or expected value such as ‘12345678password’ or similar.
  • The Memorized Secret Authenticator can be displayed if it cannot be readily observed by others.
  • Verifiers should not impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets.
  • Verifiers should not require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers shall force a change if there is evidence of compromise of the authenticator.

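To make those attributes a bit more concrete, here is a minimal, illustrative sketch of the kind of checks a verifier following SP 800-63B might perform on a candidate Memorized Secret Authenticator.  The blacklist file name and the rejection messages are made up for the example; only the rules themselves come from the guidance summarized above.

```python
# Illustrative sketch of SP 800-63B style password acceptance checks.
# The blacklist file name and messages are assumptions for the example.

MIN_LENGTH = 8      # SP 800-63B minimum for user-chosen memorized secrets
MAX_LENGTH = 64     # verifiers should accept at least 64 characters

def load_blacklist(path="compromised-passwords.txt"):
    """Load known-compromised or expected values (hypothetical file)."""
    try:
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}
    except FileNotFoundError:
        return {"12345678password", "password1", "qwertyuiop"}

def check_memorized_secret(candidate: str, blacklist: set) -> list:
    """Return a list of reasons the candidate is rejected (empty = accepted)."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if len(candidate) > MAX_LENGTH:
        problems.append(f"this verifier only accepts up to {MAX_LENGTH} characters")
    if not all(32 <= ord(ch) <= 126 for ch in candidate):
        problems.append("only printable ASCII characters are allowed in this sketch")
    if candidate.lower() in blacklist:
        problems.append("matches a known compromised or expected value")
    # Note: no composition rules (mixed case, digits, symbols) and no
    # periodic-expiry check, per SP 800-63B.
    return problems

if __name__ == "__main__":
    blacklist = load_blacklist()
    print(check_memorized_secret("12345678password", blacklist))
    print(check_memorized_secret("correct horse battery staple", blacklist))
```
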
A lot of clients are pushing hard to use these new NIST rules in place of the PCI DSS requirements.  As a reminder, the PCI DSS requires the following when it comes to passwords.

  • 8.2.3 Passwords/passphrases must meet the following: require a minimum length of at least seven characters, contain both numeric and alphabetic characters. Alternatively, the passwords/ passphrases must have complexity and strength at least equivalent to the parameters specified above.
  • 8.2.4 Change user passwords/passphrases at least once every 90 days.
  • 8.2.5 Do not allow an individual to submit a new password/passphrase that is the same as any of the last four passwords/passphrases he or she has used.
  • 8.2.6 Set passwords/passphrases for first-time use and upon reset to a unique value for each user, and change immediately after the first use.

So where are we in regard to NIST versus PCI DSS?

On length, we are good.  NIST requires a minimum of eight characters, which exceeds the PCI DSS minimum of seven.

Complexity is a sticking point: the PCI DSS imposes complexity rules on the composition of passwords, whereas NIST states that the authentication system should not impose such composition rules.

NIST is more restrictive on the checking of password/passphrase changes, including ensuring that a new password/passphrase has not been used somewhere else that was compromised.  In addition, it requires that the authentication system force a change if a credential is believed to be compromised.

However, it is when we get to changing passwords/passphrases on a specific interval that we run into trouble.  NIST advises that arbitrary (e.g., periodic) changing of passwords/passphrases is not required, whereas the PCI DSS states that passwords/passphrases must be changed at least every 90 days.  NIST relies on its requirement to monitor for compromised credentials to support not requiring an arbitrary change of passwords.

The first thing that comes to people’s minds is the guidance for requirement 8.2.3, which states:

“Strong passwords/passphrases are the first line of defense into a network since a malicious individual will often first try to find accounts with weak or non-existent passwords. If passwords are short or simple to guess, it is relatively easy for a malicious individual to find these weak accounts and compromise a network under the guise of a valid user ID.

This requirement specifies that a minimum of seven characters and both numeric and alphabetic characters should be used for passwords/ passphrases. For cases where this minimum cannot be met due to technical limitations, entities can use “equivalent strength” to evaluate their alternative. For information on variability and equivalency of password strength (also referred to as entropy) for passwords/passphrases of different formats, refer to industry standards (e.g., the current version of NIST SP 800-63.)

Note: Testing Procedure 8.2.3.b is an additional procedure that only applies if the entity being assessed is a service provider.”

What people focus on is the last sentence before that note that states:

“For information on variability and equivalency of password strength (also referred to as entropy) for passwords/passphrases of different formats, refer to industry standards (e.g., the current version of NIST SP 800-63.)”

They then refer the QSA to that statement and say that gives them license to apply that guidance to requirement 8.2.4 and the other password related requirements.  Unfortunately, that guidance only applies to 8.2.3 as it clearly references “password strength” and nothing about change interval or anything else related to password attributes.

Another key point is that the guidance for 8.2.4 makes no reference to SP 800-63.  The Council will tell you that if SP 800-63 applied to 8.2.4, they would have included the same sort of reference in the guidance for 8.2.4 as they did in 8.2.3.  Without that reference, a QSA should not be using the new NIST guidance to replace the requirements specified in 8.2.4.

So, with that path ruled out, the second thing that comes to people’s minds is to write a compensating control for following the NIST guidance.

There is only one thing wrong with the compensating control approach and that is that a compensating control must go “above and beyond” the PCI DSS requirement.  Going above and beyond a 90-day change interval means changing passwords/passphrases more frequently than every 90 days, not less frequently.  The test is very specific that the change interval must be no more than 90 days.  As a result, there is no compensating control that will get you above and beyond the intent of a 90-day change interval.

That is not to say that you and your QSA cannot write such a compensating control.  The question then becomes whether you can get your acquiring bank to sign off on it.  There are a number of banks that are not so diligent in their reviews of PCI ROC filings, and such a compensating control might sail under the radar.  But that is no guarantee.

However, such a compensating control puts your QSAC at risk of remediation if the PCI ROC is selected as part of the Council’s Assessor Quality Management (AQM) review.  Such a compensating control would not be viewed favorably by the Council because it flagrantly violates the rules of a compensating control.  Remediation, while not a death knell to a QSAC, does adversely impact sales of PCI assessments and services and makes current clients uncomfortable, so QSACs avoid going into remediation like the plague.

The bottom line is that until the Council makes a change to the PCI DSS (i.e., v4), you are stuck with its password/passphrase requirements regardless of what other standards setting bodies state.

March 8, 2019

Sins Of The Past

This was a question we got from our last PCI Dream Team session on the Cloud.

“Issue – found CVV in historical call recordings that need to be purge/delete. We are not able to purge the entire call record and still need to keep it for record retention. What tools should be evaluated to help address this issue?”

A lot of organizations are discovering that how they did things in the past did not meet PCI, GDPR or other legal or regulatory requirements for protecting the data in their possession.  Gone are those freewheeling days of collecting data without worrying about how to secure and protect it.  Customers and the public at large are realizing the value of their information and the need to protect it.  All of which starts organizations thinking about how to reduce the risk of holding all of this data now that they are being held responsible for having it.  The patchwork of state laws in the US also holds a lot of organizations at risk, some more than others.

There are also the sins that come to light down the road.  At large organizations, it is not unusual to have a PCI in-scope application crawl out of the woodwork years later.  It should have been identified way back when the organization was starting out with PCI, but somehow it was missed and has just now turned up.  Unfortunately, these discoveries tend to occur at the 11th hour of the organization’s current PCI assessment, when there is no way to include the application without delaying the issuance of the ROC.

Surprise!

So, let us talk about the last case first: the application that is uncovered very late in the PCI assessment.

What should happen, and in the example cited did happen, was a conversation with the acquiring bank.  The situation and the risk involved (the application was storing encrypted PAN) were explained, and the bank was asked: do we delay filing the ROC to assess this application (likely a delay of longer than 90 days), or do we keep moving ahead as planned and pick up the newly disclosed application in the next assessment?

The bank decided that they did not want to delay the ROC filing since it was just out of our QA process, had been sent to the client for their review and was due in around 30 days.

The client looked further into the application and determined that it could be easily remediated with tokenization from their gateway.  As a result, when time came for the next year’s assessment, the application had been remediated with tokenization.  We took a look at it and confirmed it no longer contained encrypted PAN and explained to the bank that it would no longer be in scope.

However, things do not always end that well.  I have also had occasions where no remediation was possible for a variety of reasons and had to go in the following year and assess the newly discovered application in all its PCI compliance (and in some cases non-compliance) glory.

Remediate

Getting back to our original sin, so to speak.

First and foremost, you may not be able to remediate your files due to legal or regulatory constraints.  So, before you go charging ahead on your remediation efforts, make sure you discuss it with your legal and compliance folks to ensure you are not creating an even bigger problem.  Assuming you are allowed to remediate data, you can proceed with reading the rest of this section.

Structured data is typically easy to remediate.  You find out which XML tags, fields or database columns are involved, develop a program that remediates the data by truncating the PAN to first six and/or last four digits or erasing any information you were not supposed to keep, and execute it.  Easy.  Well, easy until you take into account backups, which can complicate remediation if you cannot simply erase them.
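
As an illustration of what such a remediation program can look like, here is a sketch that masks a PAN column in a CSV export down to first six/last four.  The file and column names are placeholders for the example; a real effort would also have to cover database tables, XML files and, as noted, backups.

```python
# Illustrative sketch: mask PANs in a structured export to first six / last four.
# The file name and column name ("card_number") are assumptions for the example.
import csv

def mask_pan(pan: str) -> str:
    """Keep the first six and last four digits, mask the rest."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:          # not a plausible PAN, leave untouched
        return pan
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

def remediate_csv(in_path: str, out_path: str, column: str = "card_number"):
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if column in row and row[column]:
                row[column] = mask_pan(row[column])
            writer.writerow(row)

if __name__ == "__main__":
    remediate_csv("orders_export.csv", "orders_export_masked.csv")
```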

Unstructured data, as with call recordings and notes/comments fields, can be a nightmare to remediate.  The reason, of course, is that the data has no structure and does not necessarily occur in the same place.  Unlike XML or a database, where data is at least tagged or in a column, unstructured data exists wherever it exists, and programs to remediate the sensitive data need to find it first and then eradicate it.  That introduces the problem of false positive results.  I wrote all about the “fun” of trying to find cardholder data (CHD) five years ago and it has not necessarily gotten any better.  The bottom line with unstructured data is that it may not be possible to completely remediate the problem.
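
One common way to cut down on those false positives when scanning free-form text is to pair a PAN-looking pattern match with a Luhn check, as in the sketch below.  The sample text is invented, and a real scanner would have to cope with far more formats and transcription noise than this.

```python
# Illustrative sketch: find candidate PANs in free text and filter with the
# Luhn check to reduce false positives.  The sample text is invented.
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_likely_pans(text: str):
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append((match.start(), digits[:6] + "..." + digits[-4:]))
    return hits

if __name__ == "__main__":
    sample = "caller read card 4111 1111 1111 1111, order id 1234567890123456"
    print(find_likely_pans(sample))   # only the Luhn-valid number is reported
```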

Often the best you can do is to remediate the data as it is encountered.  Going back to call recordings: if the quality assurance review process, or any other process that has someone review recordings, encounters CHD, the reviewer redacts the information so that it is no longer in that file.  Not perfect, but slowly you reduce the amount you are storing.  You still have to encrypt the files for protection, but you are making an effort to reduce risk by reducing the amount of viable data.

Isolate It

This most commonly occurs with call recordings, but I have encountered the occasional “legacy” application to which it applied as well.

In either case, the old system is being decommissioned and a new solution (usually outsourced) is being implemented.  The question comes up, “what do we do with the old system?”  It cannot just be wiped and destroyed, because for customer service, legal and/or regulatory reasons it needs to be retained for some period of time before that can happen.

The answer is to keep the system powered up, but off any other network.  If people need access, they go to a PC or workstation that is connected to a private, air gapped, isolated network consisting of only the old system and the PCs or workstations used to access it.  No internet or other network access is provided, only a network that contains those few isolated systems.  This will allow the system and workstations to age yet remain protected because of the air gap.  Remember, the PCs and workstations will also age along with the system, because it is highly likely that newer software will not allow connectivity to the old system.  This is why everything will need to be air gapped.

I usually get asked for the reason to keep the old solution powered up.  That comes from a study done long ago by IBM.  What the IBM study found was that systems that get powered off after years of operation have a tendency to fail after being powered off for any extended length of time (i.e., long enough to cool down).  As a result, if you intend to keep the system around and available, you best keep it powered up albeit isolated as discussed earlier.

One of the larger issues with isolation will be monitoring the air gapped network to ensure it remains air gapped, and deciding how you will respond if that air gap is breached.  There are a number of ways to address this issue, so pick the solution that best fits your environment.

Isolation is not a perfect solution.  It will likely require a number of compensating control worksheets (CCW) to address the fact that you have a number of “antique” systems around.  So be prepared for that work effort as it will likely not be small.

March 1, 2019

Will The Council Kill Off TLS?

On February 6, 2019, a technical paper was published describing a newly identified attack on TLS 1.2 and 1.3.  Of course, the first thing that a lot of us wondered was, “Will the PCI SSC now kill off TLS 1.2 and 1.3?”

Before panic sets in, I am guessing that TLS 1.2/1.3 will not go away like SSL v3 and TLS 1.0/1.1 did before.  The reason is that this is just another variation of the Bleichenbacher attacks that seem to crop up every so often regarding SSL and TLS.  What is different about this attack is the new side-channel leak approach that was used.

The risk in this attack is best described from the researchers’ technical paper.

“… even though the use of RSA in secure connections is diminishing (only ≈6% of TLS connections currently use RSA [1, 51]), this fraction is still too high to allow vendors to drop this mode. Yet, as we show in Section VI, supporting this small fraction of users puts everyone at risk, as it allows the attacker to perform a downgrade attack by specifying RSA as the only public key algorithm supported by the server.”

The problem is all related to the use of RSA PKCS#1 v1.5 in TLS.  The rest of the protocol is just fine.  So, at worst, I could see the Council recommending that RSA PKCS#1 v1.5 no longer be allowed to be used.
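
If the Council did go that route, the practical change for most organizations would be a cipher suite configuration tweak rather than a protocol migration.  As a hedged example, here is what dropping static RSA key exchange (the PKCS#1 v1.5 mode at issue) might look like on a TLS server using Python’s ssl module; the cipher string and file names are assumptions about one reasonable policy, not Council guidance.

```python
# Illustrative sketch: configure a TLS server context that does not offer
# static RSA key exchange, only (EC)DHE.  Certificate/key paths are placeholders.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain(certfile="server.crt", keyfile="server.key")

# "!kRSA" removes cipher suites that use RSA key exchange; RSA certificates
# remain usable for authentication with ECDHE/DHE key exchange.
# TLS 1.3 suites are unaffected, since TLS 1.3 has no RSA key exchange at all.
context.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!kRSA:!aNULL:!eNULL")
```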

Which reminds me of years ago when the US banking regulators came out and stated that by a certain date, Internet Explorer 6 would no longer be allowed to be used for internet banking.  According to the banks at the time, such a move by the regulators would create a support nightmare or, even worse, kill off internet banking.  However, the date came, the banks turned off IE6 and little happened.  Yes, there were a few days of higher than normal support calls about customers not being able to get into their accounts, but those quickly died off.

The issue with RSA PKCS#1 v1.5 is similar to the banking story.  At what point do we draw the line on these sorts of problems?  10% of users?  2% of users?  1% of users?  In this case, 6% of internet users are putting the remaining 94% at risk.  Is it worth it?  Each organization will have to determine whether that risk is acceptable and justify why.

February 27, 2019

Bank Of America Makes NESA Mandatory

Remember the non-listed encryption solution assessment (NESA)?  Probably not, because it never really got legs.  That is, until now, and from an unlikely source – Bank of America (BoA).  QSAs that prepare a lot of merchant Reports On Compliance (ROC) that go to BoA have likely noticed that BoA has been scrutinizing those ROCs more than before.

This has been particularly true of ROCs that involve end-to-end encryption (E2EE) solutions such as Verifone Verishield or First Data TransArmor where the merchant is asking BoA for point-to-point encryption (P2PE) scope reduction.  I ran into this three years ago with a client that was implementing TransArmor at their retail stores.  After much negotiation by my client, they were finally granted P2PE scope reduction and their assessment moved on.

However, at the same client this past year, a shock.  BoA told them not so fast on P2PE scope reduction this year.  As the client and their new QSA found out, sometime in 2018 BoA introduced a whole program to deal with E2EE solutions that now requires a P2PE-QSA to assess the solution and produce a NESA report.  Surprise!

What makes this particularly sad and annoying is that First Data and BoA are joint partners in Bank of America Merchant Services (BAMS), the transaction processing arm of BoA.  BAMS relies on First Data solutions such as TransArmor for processing and securing payment transactions.  But guess what?  Think that your TransArmor solution will get a “pass” from BoA because it was recommended by BAMS?  Think again.  BoA is requiring all non-P2PE validated solutions to go through a NESA.  And that is exactly what this client has: TransArmor from First Data, a partner in BAMS.

The lesson here is: as a QSA, be prepared to deal with a new issue if your client uses E2EE, wants P2PE scope reduction and their acquiring bank is BoA.

February 25, 2019

Network Segmentation Testing

As part of penetration testing, merchants and service providers are required to test that their network segmentation is properly implemented and functioning.  Sounds like a simple enough task, but you would be amazed at the bizarre and complicated discussions that QSAs encounter when segmentation testing comes up.

As a reminder, requirement 11.3.4 states:

“If segmentation is used to isolate the CDE from other networks, perform penetration tests at least annually and after any changes to segmentation controls/methods to verify that the segmentation methods are operational and effective, and isolate all out-of-scope systems from systems in the CDE.”

For service providers, requirement 11.3.4.1 adds the requirement to test at least every six months and after any changes to segmentation controls/methods, not just “significant changes”.

Regardless of whether you are a merchant or a service provider, how segmentation testing is performed is the same.

So why all of the issues?

First, the PCI DSS does us no favors with the “guidance” for requirement 11.3.4 which states:

“Penetration testing is an important tool to confirm that any segmentation in place to isolate the CDE from other networks is effective. The penetration testing should focus on the segmentation controls, both from outside the entity’s network and from inside the network but outside of the CDE, to confirm that they are not able to get through the segmentation controls to access the CDE. For example, network testing and/or scanning for open ports, to verify no connectivity between in-scope and out-of-scope networks.”

The first point of confusion typically relates to the phrase “penetration testing” as though segmentation testing somehow requires the use of a penetration testing tool such as Metasploit or similar to conduct the segmentation testing.  Nothing could be further from the truth.  But the terminology of “penetration testing” clouds the task.

The second point that seems to confuse is the last sentence, which starts out with “For example …”.  People seem to miss the start of that sentence and take it that all they have to do is make sure that out of scope devices cannot get to the CDE and that is it.  While network segmentation testing is simple, it is not quite that simple.

What Is Segmentation Testing?

After going through the debunking of all of the mythology and rumors surrounding network segmentation testing, this is the first question asked.  I always take people back to what the purpose of network segmentation testing is – to prove network segmentation is implemented and is functioning as designed to keep the various networks logically separated.

When I say, “various networks”, I am referring to the network segments defined in the information supplement “Guidance for PCI DSS Scoping and Network Segmentation” issued in May 2017.  In that document, the following terminology is used.

  • CDE Systems – any systems/devices that directly process, store or transmit sensitive authentication data (SAD) or cardholder data (CHD) or are directly connected to such systems/devices. These systems/devices are also sometimes referred to as Tier 1 or Category 1.
  • Connected To or Security Impacting Systems – are systems that provide services to the CDE or have connections to systems/devices in the CDE that could adversely affect the security of the systems/devices in the CDE. These systems/devices can also be referred to as “Shared Services”, Tier 2 or Category 2.
  • Out of Scope Systems – are systems that cannot connect to the CDE also referred to as Tier 3 or Category 3.

For PCI compliance, all CDE Systems (Category 1) and Connected To (Category 2) systems are always in scope.  However, for network segmentation testing, Category 3 systems/devices are also included because the testing must prove that Category 3 cannot get to Category 1 and vice versa.  That is typically where network segmentation testing goes wrong: it only proves that Category 3 cannot get to Category 1 and then stops.  The guidance for requirement 11.3.4 provides some clarity in its second sentence, which states:

“The penetration testing should focus on the segmentation controls, both from outside the entity’s network and from inside the network but outside of the CDE, to confirm that they are not able to get through the segmentation controls to access the CDE.”

The Council has advised that what they want is testing from inside and outside the CDE as well as from other network segments including the internet if applicable.  The idea is to further support the analysis and findings from a QSA’s review of the firewall rules from the requirements in 1.3.x of the PCI DSS.  The reason for this is that with some breaches and the advent of “next generation” firewalls and more sophisticated security technologies, the Council felt that assessed organizations and QSAs were not necessarily proving that network segmentation was truly in place and wanted some additional testing and confirmation.

How Do I Test?

First and foremost, timing of the testing is very important.  For merchants, it should be conducted as close to annually as possible.  For service providers, it must be conducted as close to every six months as possible.  But you also need to consider the concept of “significant change”.  If there have been significant changes that affected network segmentation, then the network segmentation testing must be done as soon as possible (the Council typically recommends a maximum of 30 days) after the significant change has been implemented.

While the tool used to conduct the test can be as simple as Nmap or the like, the testing itself can be complicated depending on how your network is segmented.  I have clients with hundreds of segments, which results in a very time-consuming amount of testing.  The key here is to be thorough, but not insanely thorough.
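
At the simple end of that spectrum, the sketch below shows the kind of connectivity check a tester runs from an out-of-scope (Category 3) host toward the CDE, where every connection attempt is expected to fail.  The CDE addresses and ports are placeholders, and a real test would cover far more ports and protocols (UDP, ICMP, all 65,535 TCP ports) than this short list.

```python
# Illustrative sketch of a segmentation connectivity check, run FROM an
# out-of-scope (Category 3) host TOWARD the CDE.  Every attempt should fail.
# The CDE addresses and ports below are placeholders for the example.
import socket

CDE_HOSTS = ["10.10.1.10", "10.10.1.11"]   # placeholder CDE addresses
PORTS = [22, 443, 1433, 3389]              # placeholder sample ports

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def test_segmentation():
    """Return every (host, port) pair that unexpectedly accepted a connection."""
    return [(h, p) for h in CDE_HOSTS for p in PORTS if tcp_reachable(h, p)]

if __name__ == "__main__":
    leaks = test_segmentation()
    if leaks:
        print("Segmentation FAILED - reachable from this out-of-scope segment:", leaks)
    else:
        print("No TCP connectivity to the CDE from this segment (for the ports tested).")
```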

I have no problem with network segmentation testing that includes a review of firewall and ACL rules and uses that information to test, for example, from one representative network segment into another when the same rules apply to all of the segments covered by a particular rule.  The key is to be able to justify why you picked one segment over another and not to repeatedly test from only one segment for every test.  Provide the rules along with an explanation of your justification for what you did.  This will allow the QSA to understand how you worked and why.

But Nmap is not the only tool that can be used.  There are a number of network management/modelling/monitoring tools such as FireMon, Tufin and RedSeal that can also be used to prove out network segmentation.  In fact, these tools can provide ways to perform the network segmentation testing that do not involve scanning the network at all, merely running reports against the databases these tools create.

Regardless of the tool used, be careful.  I have seen too many reports where the tools did not actually reach the devices within the network segment, and the results did not prove segmentation was in place and functioning because, when matched up to the server configurations, other forms of communication were evident.

Segmentation Testing Reporting Requirements

Once you have completed your network segmentation testing, you need to create a proper report of those results.  At a minimum, a network segmentation testing report should have the following sections.

  • A one to two page (at most) Executive Summary of the network segmentation test, the date the testing was started, the date when testing was completed, the results (i.e., pass or fail) and a summary of all findings and recommendations.
  • Document who conducted the test including a bit of background as to why they are considered capable of conducting the test by including any information security certifications they hold and other relevant information security experience.
  • Provide the reader a frame of reference for the testing performed. At a minimum, this should include a high-level diagram of the various segments (i.e., CDE, Connected To and Out of Scope) and an overview of the IP addressing within each of those segments.
  • Document and discuss any significant changes that occurred since the last network segmentation test and what was done to prove that significant changes did or did not occur since the last segmentation test. This is necessary to confirm to the QSA and other readers that you are not just following some predefined schedule (i.e., annually or semi-annually) but are also ensuring that significant changes potentially drive segmentation testing as required by the PCI DSS.
  • Document the methodology that was followed and the tools that were used to prove out network segmentation. What is needed in this section is specificity.  Document step by step, in enough detail that someone else could conduct the testing, what you did to prove network segmentation was in place and functioning as expected.
  • Document any findings and recommendations that result from the network segmentation testing particularly those findings that prove the network is not segmented as expected resulting in a failed test. If segmentation is not in place, then you will need to remediate those findings and retest to prove that the remediation was successful.  If retesting is required, you need to keep all reports so that you have a record of everything that has been tested.

January 25, 2019

Where Is EMV When You Need It?

Juniper Research (not Juniper Networks) recently issued a report stating that card not present (CNP) fraud would reach $130B by 2023.  In response, a lot of people asked where EMV was to address this issue.  Apparently there are a lot of people confused about EMV, including some that are directly involved in PCI compliance.

First a bit of background.

People need to understand that EMV as it is implemented anywhere today was originally developed for eliminating or minimizing card present (CP) fraud.  Europe became a hotbed of CP fraud in the early 1990s after the fall of the Iron Curtain.  To address this problem, Europay, MasterCard and Visa Europe (hence the acronym “EMV”) joined forces to develop the standard in an effort to minimize the CP fraud problem in Europe.  EMV was introduced in the UK in 1996 and continued to rollout throughout Europe for the next decade.

Then there is the term “Chip and PIN” that people inadvertently confuse with EMV.  Using an EMV card with a PIN is not a requirement as consumers in the US have discovered.  The term “Chip and PIN” comes from that original UK rollout.  The banks in the UK decided on requiring a cardholder to not only put their card into the card terminal but also to require a personal identification number (i.e., PIN) in order to complete a transaction.  That standard has continued pretty much throughout the world with the exception of the US.

The next key thing to understand about EMV is that it is no more secure than the magnetic stripe it replaced.  I know that fact might shock some people given all of the press EMV has gotten regarding security.  Somewhere along the line, people began to believe that EMV by itself was more secure.  I believe a lot of this misunderstanding was the result of other security technologies that were bundled as countries converted to EMV.

The biggest security feature was the requirement of a PIN for transactions.  A PIN is essentially an implementation of multi-factor authentication (MFA).  The EMV card is something you have, and the PIN is something you know, which together are also known as two-factor authentication (2FA).  2FA is great for dramatically reducing CP fraud, but it still does not protect the data being transmitted and likely stored by any point of sale (POS) solution.

What came next in the evolution of EMV was the addition of end-to-end encryption (E2EE) between the card terminal or point of interaction (POI) and the transaction gateway or processor.  E2EE encrypts the sensitive authentication data (SAD) transmission from the POI to the processor meaning that any devices or networks between the two cannot access the data unless they have the encryption keys (which they will not if E2EE is properly implemented).

The final security feature that came to EMV was the addition of tokenization.  Tokenization takes the primary account number (PAN) and converts it to a token which can then be returned to the merchant’s POS solution without the worry that it was storing cardholder data (CHD).  Tokenization can be either be performed at the POI or by the processor upon completion of a transaction (most common).
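
To illustrate the concept (and not any particular processor’s implementation), the simplest form of tokenization is a random token plus a lookup vault, as sketched below.  The in-memory dictionary stands in for what would really be an encrypted, access-controlled vault at the processor.

```python
# Illustrative sketch of processor-side tokenization: replace a PAN with a
# random token and keep the mapping in a vault.  The in-memory dict stands
# in for that vault purely for the example.
import secrets

_vault = {}   # token -> PAN (a real vault is encrypted and access controlled)

def tokenize(pan: str) -> str:
    """Return a token that preserves the last four digits for receipts."""
    token = "tok_" + secrets.token_hex(8) + "_" + pan[-4:]
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Only the processor (vault owner) can recover the PAN."""
    return _vault[token]

if __name__ == "__main__":
    t = tokenize("4111111111111111")
    print("merchant stores:", t)              # no CHD stored by the merchant
    print("processor recovers:", detokenize(t))
```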

With that as the background, I think most readers can start to understand why EMV and its currently used security features are not going to address the CNP fraud problem.  All of those security features we are familiar with require a CP environment, so exactly how do they translate into a CNP environment?  The answer is, they do not translate, at least not easily.

It turns out that we have been here before with EMV although most people are probably not aware of that fact.  Around 2000 to 2002, a few UK banks and a card brand thought about addressing the growing CNP fraud issue with EMV.

In the UK, Barclays and Standard Chartered came up with competing application programming interface (API) standards for eCommerce sites to use.  Both Barclays and Standard Chartered paired their APIs with card readers that connected to PCs.  Their solutions relied on the new EMV cards that were being issued in the UK and used Chip and PIN for conducting transactions.

At around the same time in the US, American Express was rolling out their first iteration of their Blue card.  That card had a chip although it did not conform to the EMV standard.  Customers that were in that Blue rollout also got a handy USB chip reader along with the card.  As with the implementations in the UK, American Express also relied on Chip and PIN for completing transactions.

The idea with all of the schemes was to have consumers connect the reader to their computer and install some software for the reader.  Then when making a purchase online the consumer would insert their EMV card into the reader, key their PIN through the computer’s keyboard and complete the purchase.  No different than in a traditional brick and mortar store.

Unfortunately, there were some issues with all of these approaches.  The largest of which was that the APIs were all different.  As a result, the consumer could not make a secured payment unless the online merchant supported the payment API the consumer had installed on their local PC.  In the case of American Express, they had signed on Amazon as a merchant, but Amazon was a very small but up and coming fish in the eCommerce world at the time.  In the case of the UK, the banks had only signed a very few small online UK merchants.  As a result, with no large eCommerce merchants on board no API gained critical mass to win out.  The end result was that by 2003 the EMV CNP experiment had effectively died.

To those observant readers, I earlier alluded to the fact that there are other EMV security features that might be useful for addressing CNP fraud.

There are two features in the EMV standard that could be used and those are dynamic PAN and dynamic card verification value (CVV).  These two EMV fields are included in every EMV card but are not currently used.  The reason is that using them would require significant programming on the transaction processor’s end to make them work.  But using them would still require a card reader solution for eCommerce given the cards in circulation today.

Obviously with CNP, what is needed is a solution that would not require a card reader and therefore a standard API.

In the age of mobile applications, it would be relatively easy for an app to provide the dynamic PAN and dynamic CVV for entry into a Web site.  Granted this app would have to communicate with a bank or processor to ensure the generation of valid dynamic values, but it should be no more difficult than what RSA or Symantec do for multifactor authentication.
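
As a purely hypothetical illustration, a time-based one-time code scheme (the same idea behind those RSA tokens and TOTP authenticators) could be adapted to produce a short-lived dynamic CVV, as sketched below.  The shared secret, interval and digit count are assumptions, and real issuer dynamic CVV schemes derive values from card-specific EMV keys rather than this construction.

```python
# Hypothetical illustration only: derive a short-lived "dynamic CVV" the way
# TOTP authenticators derive one-time codes.  The secret, interval and digit
# count are assumptions; real issuer schemes use card-specific EMV keys.
import hashlib
import hmac
import struct
import time

SECRET = b"per-card shared secret provisioned by the issuer"  # assumption
INTERVAL = 60   # seconds a dynamic CVV stays valid (assumption)

def dynamic_cvv(secret: bytes = SECRET, at: float = None, digits: int = 3) -> str:
    """HOTP/TOTP-style truncation of an HMAC over the current time step."""
    counter = int((at if at is not None else time.time()) // INTERVAL)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print("Dynamic CVV for this time step:", dynamic_cvv())
```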

Another option would be to provide a browser widget or a whole PC application that would generate the dynamic PAN and dynamic CVV while the user was purchasing items online.

But what about people that do not have smartphones or prefer physical cards?  What immediately came to my mind is something like the FUZE, Edge or Dynamics cards.  While none of these is currently EMV capable, they are expected to be at some point.  They all come with displays that could easily show the dynamic PAN and dynamic CVV just as a smartphone would.  Unfortunately, all of these electronic cards currently require a smartphone, but since they already need to be charged, they could probably be adapted to be programmed from a Web site through a PC.

The bottom line is that there are solutions to the problem.





