Posts Tagged ‘Card Brands’



01 Mar 14

How Did It Happen?

This is just my supposition on how the Target breach occurred, but it is based on what has been released to date plus what limited knowledge I have of Target’s environment, the environments of other large retailers and my many years of penetration testing.

Fazio Mechanical Services

According to the latest reporting, Fazio Mechanical Services (Fazio) is believed to be the starting point of the Target breach.  From what has been reported, a phishing attack on Fazio yielded access to Fazio’s computer systems and network.  In their statement regarding the breach, Fazio says:

 “Fazio Mechanical does not perform remote monitoring or control of heating, cooling or refrigeration systems for Target.”

“Our data connection with Target was exclusively for electronic billing, contract submission and project management, and Target is the only customer for whom we manage these processes on a remote basis. No other customers have been affected by the breach.”

If we take Fazio at their word, Fazio did not have direct access to Target’s network.  That means if Fazio was breached, that breach did not result in a direct path to Target’s network.  Brian Krebs reported that he spoke with an ex-Target employee who told him that Target uses the Ariba Supplier Management solution for managing its external vendors.  The Ariba system is available publicly on the Internet but it requires credentials in order to gain access to the application.

Based on these facts, my guess is that the Fazio attackers were likely separate from the Target attackers.  Therefore, the Fazio breach is like most breaches; the attackers get in, probe around and then leave if nothing of value can be identified.  That is not to say Fazio could not have been targeted as a way into Target, but I find it unlikely that Fazio was compromised specifically for the Target breach.

The Fazio attackers likely advertised the information and credentials that they gathered to other attackers on the Internet “underground” and sold them to whoever was willing to pay including the Target attackers.

The Russians

In my opinion, the Russians that eventually sold the card information were probably not the actual attackers that retrieved the cardholder data from Target.  However, they could well have been behind the attack as the people that instigated and funded it.  Until these individuals admit their role, we will probably never know whether they were just a fence for the information retrieved or actually behind the attack.

In my scenario, the Russians began scoping out likely candidates for compromise and picked Target because they found information on the Internet “underground” and determined that it was likely possible to successfully get in and get information.  Once the research was done, they then assembled a team to get the actual attack done.

The Malware

In reading the various news accounts, the Secret Service indicated that the attack was sophisticated.  According to the infamous Microsoft case study, Target had implemented Microsoft Windows Server Update Services (WSUS), now part of Microsoft System Center Operations Manager (SCOM), at all of their stores so that they could rapidly deploy updates in the smallest possible time frame.  In the retail business, IT people get very small windows of opportunity to perform updates, so this architecture would provide IT with the ability to stage updates and then deploy those updates as quickly as possible.

A lot of people have commented throughout the numerous discussions of the breach on Google+, LinkedIn and Twitter questioning how the attackers could have compromised so many POS systems so quickly.  It is my opinion that this was done through SCOM.

But there is a huge problem with using SCOM when the software is not Microsoft’s – SCOM can be anywhere from somewhat to very temperamental when it comes to deploying non-Microsoft software and updates.  Over the years it has gotten better with some non-Microsoft solutions, but deploying malware via SCOM and having it work right the first time requires knowledge of not only SCOM but also the Target computing environment.

This brings me to the fact that I believe an insider had to have been involved in the breach.  Not necessarily an actual Target employee, although that cannot necessarily be ruled out, but more likely a knowledgeable contractor.  Like all large corporations, Target outsources development to contractors that have offices and staff located all over the world.  Those contractors also have their own contractors that are located all over the world.  It is my opinion that the Russians compromised one or more contractors with development knowledge of Target’s POS application and deployment of the POS software.  This was required to develop the malware from the BlackPOS code and develop a one-time successful deployment capability using SCOM.  Whether or not these individuals were actually part of the attack team is debatable.  They would only be needed to develop the solution and the SCOM deployment scripts and possibly procedures to avoid Target’s QA process.

Outsourced contractors in third world countries can be readily bought.  People in the West forget that these developers can be making anywhere from cents per hour to only a few dollars an hour.  That is why development work is outsourced to them; it is more cost effective than using developers who make a hundred dollars per hour or even more.

But that brings up an interesting conundrum in this breach.  If a contractor was compromised, could they not still be involved in Target’s development efforts and just deliver the malware directly as part of their deliverable?  I think that could have been a possibility, but it would have risked being discovered in Target’s code review, quality assurance and testing processes which is probably why the malware was not delivered by that method.

The Attackers

The attackers could have come from anywhere, but most likely are from Russia or one of the former Soviet states such as Ukraine or Belarus.  The reason this is most likely is that the people that sold the Target cardholder data were Russians, and they would want people with the same background to execute the attack, as well as some amount of control over the attack team.

The Attack

The attackers that broke into Target likely went shopping for ways into Target and found the Fazio Ariba credentials for Target as well as probably other credentials to other publicly available Target applications.  The attackers either bought those credentials or had their Russian bosses purchase those credentials.

I had to put my penetration testing hat on to figure out how the Ariba credentials came into play.  The reason is that if Ariba is available from the Internet to anyone, why would an attacker need credentials?  Then it dawned on me.  They needed the credentials in order to compromise Target quietly.

My rationale for this is that Target does a decent job at securing publicly facing applications, particularly since their 2007 breach.  Assuming the Ariba application was properly implemented, doing an attack without the credentials would have alerted Target’s information security personnel and it would have been game over.

As a result, the attackers needed the credentials to get into Ariba quietly.  Once in, they could compromise it with a cross site scripting attack, SQL injection or whatever they used to gain access to one or more of the Ariba servers, and from there breach the rest of Target’s network, specifically the SCOM system(s).  The reason this approach would more likely go unnoticed is that the attackers had valid credentials, so any anomalous activity would likely be written off by Target personnel.

This brings us to the next reason I believe an insider is involved.  The timeline discussed thus far gives the impression that the breach was a fairly quick operation.  The only way the breach could have been conducted so quickly is if the attackers had knowledge of where they needed to go to compromise the SCOM system.

That said, the database of Target guests that was also retrieved was likely collateral damage in that it was encountered during the attack and was taken so that the attackers did not walk away empty handed.  The other possibility is that the database was used to test the data exfiltration process to ensure it would go undetected.

Once the attackers owned the Ariba system, they would then have had access to the administrators of Ariba.  The insider would have given the attackers an idea of where the SCOM system was located and probably who had access.  It then became a process of compromising one of those administrators to gain access to SCOM.  Because they were inside Target’s network, the administrators were likely compromised using an extremely targeted phishing attack using the internal email system.  As a result, the phishing message would have looked all the more legitimate because it was internally generated and delivered.  The message likely contained some sort of Word or Excel document carrying backdoor software that would not be detected by the anti-virus solution.

However, another option could have been used once the attackers were inside.  They could have approached any of the administrators, pretended to be a contractor and asked for access to SCOM in the test environment.  From there they could have staged their malware and then sent it through the QA process.  Regardless of how they gained access to SCOM, the attackers had to have used the SCOM system to deploy their malware with the speed that they deployed it.

Creating the data dispersal server was a straightforward problem.  With the insider’s knowledge, they knew where FTP was implemented and merely compromised that server to be their own collection point so as not to arouse suspicion.  To get the data out of Target they used DNS, as every system needs access to DNS.  A lot of people have argued that Target should have seen the exfiltration of the data via DNS and have pilloried Target for their ineptitude.  However, if the attackers were as sophisticated as they have been portrayed, they likely constructed their exfiltration system to mimic the size of valid DNS packets, and thus only traffic volume would have been a possible trigger.
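
If the packet sizes look legitimate, per-client query volume is about the only trigger left, so detection reduces to counting.  Below is a minimal sketch of that idea, assuming DNS query logs are available as (client, queried name) records; the record layout and threshold are illustrative, not taken from any particular product.

```python
from collections import Counter

def flag_noisy_clients(dns_log, hourly_threshold=5000):
    """Flag clients whose hourly DNS query volume is anomalous.

    If exfiltration packets mimic the size of valid DNS queries,
    per-client volume is one of the few remaining signals.
    """
    counts = Counter(client for client, _ in dns_log)
    return {c: n for c, n in counts.items() if n > hourly_threshold}

# A POS register should make a handful of lookups an hour, not thousands.
log = [("10.1.2.3", "update.example.com")] * 12
log += [("10.9.9.9", f"{i}.exfil.example.net") for i in range(8000)]
print(flag_noisy_clients(log))  # {'10.9.9.9': 8000}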

Is this scenario correct?  We will not know until a final report is released if we ever see a final report that gives actionable information.

That said, I am sure a lot of you reading this are shaking in your boots based on this scenario.  That fear is likely based on the fact that you realize how ill equipped your organization is to deal with this sort of attack.  And you should be scared.  This is a war of escalation that we are waging.  Organizations step up their game and the attackers up the ante on their side.  The famous saying that when chased by a bear you only have to outrun the slowest person is very true in this situation as well.  Your organization’s security game only has to be better than the other organizations’.  But when an organization like Target is breached, and they were considered to be at the top of the security game, what chance does an organization with mediocre security have?

The only saving grace might be that your organization is flying under the radar.  I say “might be” because, according to the majority of reports on the state of information security, most organizations have no idea that they have been compromised.  That is because people rely on anti-virus and other technologies that have a poor track record of identifying malware and sophisticated attacks.  And then, as we learned in this past week’s report on the Neiman Marcus breach, you can have information security personnel write off discovered malware as false positives and let it re-infect systems for months without investigating or even worrying about what was going on.

It is easy to pillory the guy that got breached.  However, a lot of you should look inside your own organizations before tossing stones.  I would guess that most of you tossing those stones would not fare any better, and likely worse, than Target should your organization be breached.

08 Feb 14

Pre-Authorization Data

After a number of interactions with a variety of people over the last few weeks, it has become obvious that the concept of pre-authorization data is not clear to a lot of people.  And just because it is pre-authorization data does not mean that you are not required to protect it.  The Council has made it very clear that it is to be protected with the same rigor as post-authorization data.

Pre-authorization is defined as that time when an organization has a customer’s sensitive authentication data (SAD) but has not yet processed it for payment.  For most merchants, the time between collecting the SAD and processing it is measured in seconds.  For card present transactions, the SAD can be track or chip data.  For card not present transactions, it typically includes the cardholder name, primary account number (PAN), expiration date and CVV2/CVC2/CID2.
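
To make “track data” concrete, here is a minimal sketch of what Track 2 contains and how the PAN and expiration date sit inside it.  The track value is made up, built around the well-known 4111… Visa test PAN.

```python
import re

# Track 2 layout: PAN, '=' separator, YYMM expiry, 3-digit service
# code, then issuer discretionary data (where the stripe CVV lives).
TRACK2 = "4111111111111111=25121015432112345678"  # made-up example

m = re.match(r"(?P<pan>\d{12,19})=(?P<exp>\d{4})(?P<svc>\d{3})", TRACK2)
if m:
    print("PAN:", m.group("pan"))            # 4111111111111111
    print("Expiry (YYMM):", m.group("exp"))  # 2512
    print("Service code:", m.group("svc"))   # 101
```

Note that the CVV carried in the discretionary data is the stripe version, not the CVV2 printed on the back of the card.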

Here are some situations where that does not always happen that quickly.

Phone, facsimile and mail orders can extend the amount of time between receipt of the SAD and processing by seconds, minutes or even hours.  On the bizarre side of things, at one client I encountered people sending their physical cards through the mail for processing of their mail orders.

At the fuel pump, when a customer swipes their card, the system that manages the payment process will pre-authorize the purchase and then store the SAD until the customer finishes pumping fuel and hangs up the hose to complete the transaction.  That could be five to ten minutes depending on fuel tank size.

In some industries the time of pre-authorization can be weeks, months and in some cases even a year or more.  Where this typically occurs is in the airline and hospitality industries.

When a customer makes an airline reservation, the airline will likely pre-authorize the airfare but may not actually charge the card until 7, 14, 60 or 90 days before check in, or even until the customer checks in.  This can result in SAD being stored for weeks or even months.

In the hospitality industry, a reservation typically does not cause a charge until a customer checks out even though they are required to have a card on file to hold the reservation.  When a customer checks into the property, the hotel’s billing system records the SAD and may also pre-authorize charges, but the actual card transaction is not processed until the customer checks out.  I have encountered SAD in hospitality systems that had been stored for more than a year due to reservations for special occasions such as graduations, birthdays, family reunions and anniversaries.  New versions of hospitality management systems encrypt pre-authorization data; however, older systems did not, and the security of the pre-authorization data may not be as rigorous as it should be.

eWallet applications are another great example of pre-authorization data.  eWallets are just an electronic version of a consumer’s physical wallet.  eWallets can be applications on a smartphone/tablet or a specialized device such as Coin.  eWallets are not regulated by the Council or the card brands and never will be, just as traditional wallets are not regulated.  That said, developers of eWallet applications should follow the PA-DSS for developing secure eWallet applications.

The most confusion over pre-authorization data typically occurs over SAD that an organization receives via email.  A lot of QSAs get their “undies in a bunch” over this and point to requirement 4.2 as the reason why this is unacceptable.  Requirement 4.2 states:

 “Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, chat, etc.).”

The operative word in 4.2 is “send”.  Requirement 4.2 says nothing about receiving PANs by these methods and the reason is that it is pre-authorization data.  That does not mean that the Council recommends receiving PANs via email, IM or similar methods.  It is only recognition of what goes on in the real world.  There will always be a small percentage of people that will send their cardholder data insecurely and there is little an organization can do to stop it.  But even before you start the argument, you need to acknowledge that this data is pre-authorization data because a transaction has yet to be processed.

To keep your email system out of scope, the Council has told QSAs to make sure that organizations have a process to remove the SAD as soon as possible from the system(s) in question.  That typically involves the printing of the message, deleting the message from the system and then securely destroying the printed message once the transaction is processed.  This is all considered “incidental contact” in the eyes of the Council and the QSA can then consider the system out of scope as long as they can satisfy themselves that the manual process is reliable and consistently executed.
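
Since finding these stray messages is the hard part of that manual process, here is a minimal sketch of a sweep that could surface them.  The candidate regex plus Luhn check is a standard PAN-detection pattern; the mailbox plumbing is omitted and the function names are my own.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn check-digit validation."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:              # double every second digit
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def find_pans(text: str) -> list[str]:
    """Return Luhn-valid PAN candidates found in a message body."""
    hits = []
    for m in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits)
    return hits

print(find_pans("Please charge 4111-1111-1111-1111, exp 12/25."))
```

Anything flagged would then go through the print, delete and destroy handling described above.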

Web sites that store customers’ SAD for future purchases are also dealing in pre-authorization data.  Merchants providing such a service should contractually acknowledge their responsibility to protect that data with the same rigor as post-authorization data and that the stored data is only for use for purchases by the customer on the merchant’s Web site.

My final example of pre-authorization data is when an organization’s travel department stores employees’ card data for company travel reservations.  The systems used by these departments typically store the cardholder name, PAN, expiration date and the CVV2/CVC2/CID2.  When the PCI DSS originally was published, the card brands instructed QSAs that these sorts of systems were not in scope for an organization’s Report On Compliance (ROC).  However, as time has passed, some of the card brands now want those systems included in the ROC, so organizations need to contact the card brands to determine if their ROC should include such systems.  These systems can be interesting in how little protection they can provide for SAD, so regardless of whether they are included in the organization’s assessment, the organization should review the security of the application against the PCI DSS and mitigate any gaps.  As with the aforementioned Web sites that store pre-authorization data, an organization storing SAD for their employees should legally acknowledge their responsibility to protect that data as well as document how it may be used.

The bottom line is that all of these situations involve pre-authorization data and pre-authorization data can include everything recorded on a card’s magnetic stripe or chip.  Merchants can store pre-authorization data until a payment transaction is processed.  If they do store the pre-authorization data, they are obligated under the PCI DSS to protect it with the same rigor as post-authorization data, i.e., encrypted, extremely limited access, logging, monitoring, etc.
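
As a concrete illustration of “protect it with the same rigor,” here is a minimal sketch of encrypting a stored pre-authorization record with AES-GCM using the third-party cryptography package.  Real key management (HSMs, rotation, split knowledge) is the hard part and is assumed away here; the record format is made up.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assumption: in production the key lives in an HSM or key vault;
# generating it in-process here is purely for illustration.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def protect(record: bytes) -> bytes:
    nonce = os.urandom(12)          # must be unique per encryption
    return nonce + aead.encrypt(nonce, record, b"preauth")

def recover(blob: bytes) -> bytes:
    return aead.decrypt(blob[:12], blob[12:], b"preauth")

sad = b"4111111111111111|2512|123"  # made-up PAN|expiry|CVV2
blob = protect(sad)
assert recover(blob) == sad         # round-trips; tampering would fail
```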

Hopefully we are now all on the same page.

18 Jan 14

Why The Paradigm Must Change

The Target and Neiman Marcus breaches, and the other retailer breaches potentially to come, should be a learning moment for all of us to demand that the card brands change their business paradigm to one that is more secure.

Bolt-Ons Do Not Cut It

For all intents and purposes, how a credit card works has not changed since the late 1950s when they were introduced.  Yes, there have been advancements such as EMV, 3D Secure and end-to-end encryption (E2EE), but those are all things that just bolt onto the original concept.  The trouble is that, given today’s technologies and their capabilities, the card and the bolt-ons are just no longer providing the security they once did.

With the Target breach there has been a call to get the US to finally convert to EMV.  The trouble is that EMV would have leaked enough information for fraud to be committed as well, so it is not an answer.

Trade association spokespeople trotted out 3D Secure and other methods of securing online transactions.  The trouble is that most merchants eschew 3D Secure and its kind.  In addition, there are known vulnerabilities with these supposedly secure payment methods so they also have potential issues that could be exploited.

Then there is E2EE, also known as point-to-point encryption (P2PE) from a PCI perspective.  These also can be exploited.  It may be more difficult, but when an attacker is determined to gain access to sensitive information, that does not matter.

After the release of the PCI DSS in 2008, a lot of retailers implemented a variety of E2EE solutions.  Unfortunately, the endpoint at the retail location was the POS register and not the terminal.  This was not due to merchants’ negligence; this was due to how their POS applications operated.  This allowed for attacks such as that used in the Target breach to succeed.  All the attacker has to do is insert their malware into the POS process so that the malware can “see” the cardholder data before it gets encrypted.

Even solutions that do E2EE/P2PE to the terminal can be defeated by taking the same approach and inserting the malware into the terminal process before the terminal can encrypt the data.  Worse yet, if the terminal is breached, the attacker can capture PINs if they also have malware that captures the keystrokes on the terminal before the PIN is encrypted.  There are a number of methods to minimize these risks at the terminal, but if the terminal supply chain is compromised, as it was over a year ago in the Barnes & Noble breach, there is little a merchant can do to stop such attacks.

The bottom line is that all of these solutions are bolt-ons to the existing card paradigm and all still have risks that a breach could occur.

Using Complexity Against Us

Brian Krebs and others have wondered aloud how a sophisticated organization such as Target, with information security and forensic resources second possibly only to the government, could have been compromised – particularly after the 2007 compromise by Albert Gonzalez, when Target totally revamped and increased its security posture to minimize the likelihood of another event.

The first clue to me came when I read the iSIGHT PARTNERS report on the Target breach.  The theme that comes through loud and clear is that the attackers are using the complexity of Target’s technology infrastructure against Target.  I mean how could FTP activity and huge data transfers (internal and external) go so unnoticed?

Actually, that was likely fairly easy.  The attackers used existing network traffic to mask their own network traffic.  They sought out servers that already carried large volumes of traffic and put their data collection server on one of those – better yet, one that was already running as an FTP server.  As a result, even with diligent monitoring, the increase in traffic likely did not raise any alarms.

People assume that such breaches are like a “snatch and grab” in the real world.  The attackers break into an organization’s network, quickly take what they can off of the computers they encounter and leave.  That was the modus operandi (MO) in the past, but not today.  Sophisticated and organized attackers such as those that breached Target do what they can to remain unseen while they learn more about their victim.  They take their time mapping out the network and determining what devices they want to compromise to further their efforts to gain access to the sensitive information they seek.  Because of this, it is highly likely that the Target attackers encountered the Target customer database during their investigation of the Target network and took it first so that they would have at least something for all of their efforts.

The most insidious thing I think the attackers did was use Target’s software distribution system to disseminate their malware.  Given the number of POS systems compromised (around 51,000), I find it hard to believe that the attackers manually installed their malware on those POS systems.  That would have placed their operation at extreme risk, likely resulting in its discovery.  By using Target’s software distribution system, the attackers got an added benefit of legitimacy for their malware because Target themselves did the installation.  As such, the malware would appear valid because Target’s software management system initiated the change.

Now What?

All of this brings up an interesting conundrum.  If attackers are stepping up their game and using such techniques, how do we detect them?  It is a very good question with no good answers.  The iSIGHT report offers methods to stop and eradicate this particular attack.  However, the next attack and the attack after that will all likely use different malware and different techniques to get the data out of your network.

We are in a war of escalation with no end in sight.  Merchants step up their efforts to stop such attacks and the attackers adapt and adopt new techniques to breach organizations and gain access to their sensitive information.  What we need is a solution that stops the escalation and gets us out of this vicious circle.

That is why I am pushing the 15 – 16 character single use transaction code as that solution (a sketch of one possible construction follows the list).  My reasons are as follows.

  • The algorithms already exist, as a number of the card brands experimented with them a decade or more ago.
  • It will work with existing POS technology and applications.
  • It will work with existing eCommerce sites.
  • It can be implemented into eWallet applications.
  • It can be processed, stored and transmitted without encryption.
  • It can be generated by PCs, smartphones, tablets, credit card sized devices and any other devices that have computational capabilities.
  • It can be displayed on devices in a character format for manual entry or as 1D or 2D bar codes for scanning.
  • It can be transmitted via swipe, EMV, near field communication (NFC), Wi-Fi or even Bluetooth.
  • And best of all, it is secure by the very nature that it can only be used once.
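
To make the idea concrete, here is a minimal sketch of one way such a code could be generated, modeled on the HOTP construction (RFC 4226) widened to 16 digits.  The per-card secret, the counter handling and the widening step are illustrative assumptions, not any card brand’s actual algorithm; the issuer would hold the same secret and counter and mark each code as used.

```python
import hashlib, hmac, struct

def one_time_code(secret: bytes, counter: int, digits: int = 16) -> str:
    """HOTP-style single-use code: HMAC a counter, then truncate.

    The issuer verifies with the same secret/counter and marks the
    code used, which is what makes a replayed code worthless.
    """
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    value = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    # Fold in more MAC bytes to widen past HOTP's 31 bits (illustrative).
    value = (value << 24 | int.from_bytes(mac[:3], "big")) % 10**digits
    return str(value).zfill(digits)

print(one_time_code(b"per-card-secret", 1))
print(one_time_code(b"per-card-secret", 2))  # different every counter tick
```

Because the output is all digits and no longer than a PAN, it would drop into existing card number fields, which is the compatibility point in the list above.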

There will be some changes that would be required at the transaction processors and acquiring banks to handle such a solution.  But given that some of the card brands already have experience with this solution, there is a body of knowledge that already exists as to how it needs to be implemented.

Let the discussion begin on how we move ahead with a better, more secure solution.

12 Jan 14

More Retailers Supposedly Breached

Reuters is reporting that Target and Neiman Marcus are not the only retailers that were breached during the holidays.  At least three more retailers have also been breached.  What makes this announcement interesting is some of the information disclosed in the article.

 “Law enforcement sources have said they suspect the ring leaders are from Eastern Europe, which is where most big cyber crime cases have been hatched over the past decade.”

This was reported by Brian Krebs on Christmas Eve.  However, based on Brian Krebs’ reporting, it is the Eastern Europeans that are marketing the cards obtained, but they are not necessarily the perpetrators of the actual crime, nor are they necessarily behind it.  So whether or not Eastern Europeans are the perpetrators is pure speculation at this point.  At one point there were reports that the attackers were from Southeast Asia, but those reports are also unconfirmed.

I really do not care who did these attacks.  I am more interested in understanding how they were done so that I can advise my clients as to what they need to do to minimize the likelihood that they end up in the news.

“One of the pieces of malware they used was something known as a RAM scraper, or memory-parsing software, which enables cyber criminals to grab encrypted data by capturing it when it travels through the live memory of a computer, where it appears in plain text, the sources said.”

“Yet a law enforcement source familiar with the breach said that even if the retailer had implemented those steps, the efforts may not have succeeded in stopping the attack.”

We now have an idea of how the crime was committed.  The attackers were taking card data out of memory.  It also appears that the attackers were using a memory scraper that was already available such as vSkimmer or BlackPOS.  However, based on the unnamed law enforcement source, the attackers either modified the malware or used it as a basis for their own malware such that anti-malware solutions would not recognize it as malware.

“One of the sources who told Reuters about the recent rash of attacks said the memory parsing malware cited in the Visa reports was among the tools that the hackers had used, but said they used other techniques as well.”

I found this information the most interesting as it seems to lend credence to my theory that the software was part of an update to the card handling application installed on the POS.

 “Avivah Litan, a security analyst for Stamford, Connecticut-based Gartner information technology research firm, said she learned about a separate set of breaches, dating back no more than a few months before the November 28 Thanksgiving Day start of the holiday shopping season, from a forensics investigator. She declined to provide his name.”

“Investigators believe that the early series of attacks on retailers staged before late November were mostly used as trial attacks to help the hackers perfect new techniques they then used against Target, stealing payment cards at unprecedented speed, Litan said.”

These quotes imply that these were attacks that were traditional hacks of the retailers’ networks from the outside.  The problem I have with that is that this speculation does not square with my knowledge of the changes that Target implemented after they were a victim of Albert Gonzalez back in 2007.  Target made significant changes that minimized the ability of an outsider to successfully breach their card processing environment.  Not only that, but the PCI DSS pushes isolating cardholder data environments (CDE) from the Internet.  Assuming that all of the retailers involved followed the requirements of the PCI DSS, then they should have properly isolated their CDE and been monitoring it for such attacks.  Not that every retailer might have identified an attack on their CDE, but I know that a security-aware organization such as Target should have identified such an attack.

Not only that, but we are no longer talking about a single retailer.  We now have at least five retailers that are potentially in play and possibly even more.  It seems awfully long odds in my book that we have five retailers all hacked in one way or another and then infected with the same malware.  As a former penetration tester, I could see getting one retailer in this way, maybe two retailers.  But not five or possibly more with the same or similar methods in the same time frame.  Again, it can be done, but it would require a lot of time, coordination, people and effort.

Hackers may be sophisticated, but they are like water and typically want to find the path of least resistance to accomplish their goals.  Attacking networks with firewalls and monitoring is to be avoided, as it takes lots of time and effort and the likelihood of getting caught in the process is too high, particularly when we are talking multiple organizations.  That is why I go back to compromising the software at the source.

If I were constructing such an attack, I would either infiltrate the POS application vendors for large retailers or coerce an existing employee of those companies to insert my malware in their code.  That way my exploit comes directly from the source.  The good news for attackers is that there are a limited number of companies that develop the code that most retailers use to handle card transactions, so an attacker would just have to look for the vendor with the customers that would provide the best results.

Since these vendors issue precious few updates, their customers are typically chomping at the bit to obtain those updates and get them rolled out before the holiday season.  The updates are going to be tested heavily, but a smart attacker would have built their malware to detect that it was being tested and remain silent during testing.  Once placed into production, the malware would activate and begin collecting card data and sending it back to wherever the attacker decided to collect it.

Easy peasy.  And a lot simpler and easier than hacking networks.

Again, this is all speculation on my part.  But knowing how attackers work, I feel my scenario makes much more sense than what is being discussed.

10 Jan 14

The Economics Of EMV

There are a lot of people out there that have apparently taken big swigs of the EMV Kool-Aid and think that merchants and banks in the United States are all idiots for not believing in EMV.  Well folks, here is EMV by the numbers.  Unfortunately, the best set of complete numbers I could get is from 2009, but I know that the fraud percentages have not radically changed since then.

As this example will illustrate, EMV in the US is a non-starter, not because we do not like EMV, but because it makes no financial sense. While I am using Target as the example, these numbers are pretty much what most retailers (large or small) are looking at as they evaluate going to EMV.

  • Target had around $65B USD in revenue for 2009 as reported in their Annual Report.
  • For 2009, card fraud amounted to 0.11% according to a US Federal Reserve Bank of Kansas City report on EMV adoption. For comparison, card fraud in the UK (the best in Europe and the best among EMV countries) is 0.08%, a 0.03% improvement over the US.
  • We know that not all of Target’s revenue is in card transactions, but I will estimate that 70% of revenue was card transactions (around $45.5B USD). At 0.11%, Target then has around $50M USD in losses related to card fraud for the year.  Therefore, assuming a 0.03% improvement in fraud due to implementing EMV, Target would save around $13.5M USD a year.
  • Estimating $50M to $100M USD to replace the POS systems (possibly), terminals and software to support true EMV (for comparison, Target is already spending an estimated $25M to $30M just on new terminals), Target recovers that investment from the $13.5M USD in annual savings in around four to seven years (the arithmetic is worked through below).
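
For those that want to check the math, here is the list above spelled out; every figure is an estimate already given, nothing new.

```python
revenue       = 65e9      # 2009 revenue, USD
card_share    = 0.70      # assumed share of revenue on cards
fraud_rate_us = 0.0011    # 0.11% card fraud (US)
fraud_rate_uk = 0.0008    # 0.08% card fraud (UK, best EMV country)

card_volume  = revenue * card_share                           # ~$45.5B
annual_fraud = card_volume * fraud_rate_us                    # ~$50M
emv_savings  = card_volume * (fraud_rate_us - fraud_rate_uk)  # ~$13.7M

for cost in (50e6, 100e6):  # estimated conversion cost range
    print(f"${cost/1e6:.0f}M conversion pays back in "
          f"{cost / emv_savings:.1f} years")
# $50M -> ~3.7 years; $100M -> ~7.3 years
```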

I can tell you from experience that, if a merchant cannot get a three year or less payback, they will not even consider the investment. A two year or less payback is actually preferred and the only sure way for any project to get management’s consideration and approval.

But while the financials for EMV do not add up, there are also other factors that are causing retailers to question a conversion to EMV.

One of the largest is the fact that EMV does nothing to stem the fraud losses from card not present (CNP) transactions. Since most retailers are viewing eCommerce as their next new retail opportunity, the exponentially increasing losses due to CNP fraud do not improve the likelihood of converting to EMV. And with that larger focus on eCommerce and on maintaining brick and mortar margins, concern about investing significantly in any changes to brick and mortar operations also holds retailers back from transitioning to EMV.

Another consideration is that a lot of retailers just upgraded their terminals a few years back to comply with the PCI PTS requirement. Most retailers like to get at least seven to ten years out of their technology investments. Had Visa and MasterCard played their cards right and coordinated their EMV push with the PTS changes, the US likely would have converted to EMV.

Finally, there are concerns about EMV even surviving given the advent of new payment technologies such as eWallets as well as Bitcoin and other new forms of payments. As a result, a lot of retailers are sitting on the sidelines while technology and payment methods sort themselves out before considering making any investments in new payment process capabilities.

Those, my friends, are the cold, hard facts of why EMV is currently dead on arrival in the US.

22 Dec 13

How About We Fix The Problem?

As I pointed out in my last post, EMV would not have stemmed the loss of data in the Target breach.  All EMV would have done is restrict where the thieves could use the card data obtained.  Even though the thieves can supposedly clone cards from the data gathered, as far as anyone has reported at this point, cloned cards do not appear to be the method of fraud.  So my assumption is that all, or the vast majority, of the fraud committed to this point has been through card not present transactions.

In response to people clamoring for a solution to the breach problem, Visa and MasterCard have curiously remained silent.  I would have assumed that the card brands would have trotted out their press releases touting EMV as the savior.  Yet they have said nothing.  Could it be that the card brands are actually acknowledging that EMV would not have been the answer?  One can only hope.

So what is the answer?

To me the answer is single use transaction codes of 15 to 16 characters in length.  With the advent of smartphones and the miniaturization of electronics, creating a card or an application that generates such a code is not only possible, but has been demonstrated in recent years.  Not only that, but the card brands and banks themselves dabbled with such solutions over 10 years ago, yet for some reason backed off on pushing them.  My best guess is that without a portable method of using the single use code system, there was no point in pushing such a system.  But times and technology change.

With the capabilities of today’s technology, the single use codes could be displayed as bar codes so that existing merchant POS systems could scan them and avoid data entry errors.  Since they are no more than 16 characters in length, the codes can be stored in applications’ existing fields used to store card numbers without modification.  Since the card brands and banks have already developed the algorithms for this approach, they only have to agree on which algorithms to use.  But best of all, since the code can only be used once, it can be processed, stored and transmitted wherever and however without fear of compromise.

This is just my thought for a solution but there are other people and organizations that have their own solutions to fix this problem.  The bottom line is that it is time to fix the problem, not keep kicking the can down the road with a known format that is at the end of its life.

21 Dec 13

EMV And The Target Breach

There are a lot of people now pointing to the Europay MasterCard Visa (EMV) card (aka “Chip and PIN”) as the savior from breaches such as those at Target, and I am sure Visa and MasterCard are very pleased with that fact. Well, I hate to burst your bubble, but if the US had been using EMV like Europe and Canada, it probably would have had only a minor impact on the breach.

Are you stunned by that statement? After all, that is not how Visa and MasterCard are portraying EMV. If you read their media statements, they imply that EMV is the answer to these breaches.

To make sure I was describing the security features of EMV correctly, I reached out to my friend and EMV expert Andrew Jamieson, Security Laboratories Manager, at Underwriters Laboratories – Transaction Security in Kew, Australia. Underwriters Laboratories tests and certifies a lot of things, one of which is card terminals (magnetic stripe and EMV) to the PCI standards. As such Andrew has a lot of knowledge in the area of EMV and how it works.

I asked whether or not EMV cards are encrypted.

“EMV cards are not encrypted, per se, but instead store a couple of secret keys which are used as part of the authentication of the entire transaction. All card data can be output from the card in the clear – PAN, CVV, etc – except for the customer PIN and the secret keys. The CVV will also be different from that on a magnetic stripe, either static (called an iCVV) or can also be a dynamic value that changes with each transaction (dCVV).”

Well, there is a piece of interesting news. While the transaction is authenticated with the secret keys, an EMV card would still give up some information in a Target-like breach.

Then I asked if there is a risk even with EMV.

“So, any chip based transactions from an exposure such as the Target one would only have exposed the PAN (technically, the PAN on the card can be different from the PAN on the face/track, but in reality this never happens), not the full track. As the CVV would not have been exposed, the PAN would have limited value.”

If the magnetic stripe were not present, the CVV would not be required or recorded in the chip, so only the iCVV or dCVV would be available, and those would not be usable because the code printed on the card would not match either of those values.  Therefore the information gathered would not allow for the cloning of cards, because the information recorded in the chip is not the same as the information printed on the physical card.  But this should not be a surprise, because that is what the EMV standard was designed to do: prevent the cloning of cards.

However, in a Target-like breach where the terminal and/or POS system were compromised, the chip would have still given up enough information to be used in card not present transactions such as those conducted via eCommerce.  As a result, the attackers would be limited to defrauding online merchants, but that is where most card fraud is being committed.

EMV is not a “silver bullet” as the card brands like to imply. Yes, it is better than the magnetic stripe, but it does nothing to stem the tide of the growing fraud in online transactions. There are a number of new technologies on the horizon that will minimize the fraud risk of using credit/debit cards in both card present and card not present situations. But until the card brands get behind those solutions, they will continue to push their old solutions and not address the current problems.

09 Dec 13

Why The Continued EMV Push?

Visa and MasterCard continue their push to get merchants in the United States to install Europay, MasterCard and Visa (EMV) capable terminals so that they can push issuers to transition to what most of the world refers to as “Chip and PIN”.  Because Visa and MasterCard have a vested interest in EMV technology, they feel obligated to push this “dead horse” onto the rest of us.  The problem that merchants and everyone else outside of Visa and MasterCard have with EMV is that there is no business driver to convert, as EMV does little or nothing to address today’s card fraud issues.

As background, EMV was developed to address the rampant card present transaction fraud that occurred with the fall of the Iron Curtain back in the late 1980s.  Overnight, credit/debit card cloning of the magnetic stripe on the cards became big business in Eastern Europe.  With the rollout of EMV in Europe in the mid-1990s, card present transaction fraud plummeted to at or below the levels in the United States because the chip in the EMV card was impossible to clone (although, to be compatible, EMV cards have a magnetic stripe which can still be cloned).  Spin ahead from the mid-2000s to today and card present transaction fraud continues to be at about the same levels in the United States and Europe.

Times change and so does fraud.  With the advent of eCommerce over the Internet starting at the turn of the century, fraud has moved to card not present transactions.  As long as someone has the PAN, expiration date and cardholder name, they can shop almost anywhere.  And someone committing fraud can buy that information via the Internet for around $2 to $10 an account.  Pay more and they can get the three to four digit code (CVV2, CVC2, CID, etc.) that confirms the card is in the buyer’s possession.  Card not present fraud runs around 10 times or more that of card present fraud and is costing merchants and some consumers billions every year.

So what does EMV do to minimize card not present fraud?  Absolutely nothing.  Not that there have not been attempts to introduce EMV-based solutions for eCommerce.  A number of European banks and American Express in the early to mid-2000s tried to introduce standards that used inexpensive serial and USB EMV card readers connected to a shopper’s PC.  But none of these solutions could gain traction with eCommerce application developers and merchants, so eventually they dropped their efforts.  Had Visa and MasterCard had some foresight, they would have partnered with a few of the influential eCommerce merchants and eCommerce application developers and created an eCommerce EMV standard and related APIs, but that did not happen.

To add insult to injury, EMV probably only minimally reduces the risk of data breaches.  The reason is that EMV moves attacks to compromising terminals and POS systems at the merchant and to gaining access to systems and information at the transaction processors and financial institutions.  That is because once the information in the chip is being processed, it is handled the same way as information off of a magnetic stripe.  If it is not processed, stored or transmitted securely, an EMV card is just as susceptible to being breached as its older, less secure magnetic stripe counterpart.  And given the current state of affairs with BlackPOS, POS botnets, vSkimmer and the like, the risk with EMV is probably only slightly lower than with magnetic stripe cards.

Unfortunately for Visa and MasterCard, technology has moved on.  With the advent of smartphones and tablets, application developers created eWallet applications.  eWallet applications store a cardholder’s credit/debit card information in a secure file or database.  Some eWallet applications use these devices’ near field communication (NFC), Bluetooth or Wi-Fi capabilities to securely transmit the card information to a merchant’s POS solution.  There are also eWallet applications that display the PAN as a bar code so that merchants can use their existing POS technologies to scan it from the screen.  Coming in the near future are eWallet applications that will generate a single use 16 digit number with bar code, NFC, Bluetooth and Wi-Fi capabilities.  All of these solutions offer as much, if not more, security than EMV.

The times have changed and so has card fraud.  Yet here we are with Visa and MasterCard continuing to push EMV technology.  EMV does little to nothing to address today’s issues or issues that are down the road.  It is time for Visa and MasterCard to move on from EMV and look for the next new solution and stop pushing a dead end technology on merchants that have no good business reason to adopt it.

06 Oct 13

Thoughts From The 2013 PCI Community Meeting

I got lucky that my new employer allowed me to go to this year’s PCI Community Meeting held in Las Vegas.  It is always nice to hear things first hand versus getting the slide decks and asking the people that attended about certain points, only to find out that they did not think a question was needed or cannot remember what was said.

Probably the biggest revelation out of this year’s Community Meeting was from the Qualified Security Assessor/Approved Scanning Vendor (QSA/ASV) session the day before the Community Meeting started.  It seems that the Council has finally addressed with v3 the message a lot of Qualified Security Assessor Companies (QSACs) have been putting forth regarding the time it takes to write a Report On Compliance (ROC).  Those of us complaining have argued for years that the ROC writing process occupies too much of a QSA’s time (anywhere from 50% to 70% of a project).  As a result, under today’s Reporting Instructions, QSAs tend to worry more about what to write than making sure their client is actually compliant and advising that client how to better maintain compliance.

The ROC will now have an overall ranking for the entire requirement: a check box that indicates whether the requirement is ‘In Place’, ‘In Place with Compensating Control’, ‘Not Applicable’, ‘Not In Place’ or ‘Not Tested’.  Then, under the requirement, will be the tests for the requirement.  With each test, the QSA will provide a list of documents reviewed, a list of people interviewed, observations made and samples selected.  QSAs will no longer have to write meaningless but Quality Assurance approved diatribes regarding a test.  All that will be needed is a list of documents, a list of persons interviewed, a list of observations made and a list of samples taken.

Now before some of you go nuts about the check box, the Council has not adopted a check box assessment approach.  QSAs are still required to conduct their assessments in a process similar to what is conducted today for a PCI DSS v2 assessment.  What the Council is doing is simplifying the reporting generation and review processes.  The Council finally acknowledged the fact that, other than QSAs and their QA reviewers, practically no one was reading the 300+ page tomes that were being produced – even the acquiring banks.  What they are doing is making the ROC easier to review and read as well as lessening the writing requirements so that QSACs can more readily adopt automated systems to facilitate the conducting of assessments and generating the ROC.  The idea is that this will then allow QSAs to focus on assisting their clients with PCI compliance related activities and fold into the ‘Business As Usual’ approach that is being rolled out with v3.

The most intriguing category is ‘Not Tested’.  The Council added this category for those situations where a QSA did not test the control.  This most often occurs when a QSA is relying on a Service Provider’s Attestation Of Compliance (AOC) for that control such as with firewall configuration management, network security monitoring, custom application software development or other services.  The Council also indicated that Service Provider AOCs may also be getting modified so that QSAs and others relying on them can determine what PCI requirements the Service Provider’s AOCs actually tested.

Now the bad news about this great advancement forward.  The Council has no idea when the final version of the Reporting Template will be available.  Since v3 of the PCI DSS will not be finalized until November, they do not yet have a timeline on how quickly after the release of v3 that the Reporting Instructions will be released.  As a result, QSACs will likely not be rolling out PCI DSS v3 assessments until the Reporting Template is released.  One reason will be because of the major changes involved in v3 and, probably the larger reason, because none of the QSACs want to be put in remediation by the Council for creating ROCs that do not meet the Council’s Quality Assurance requirements.

On the new requirements front, there was a lot of discussion within the QSA ranks on requirement 6.5.6 regarding protection and secure deletion of device memory when cardholder data (CHD) or sensitive authentication data (SAD) were stored in memory during a transaction.  As the Council has defined it in the QSA/ASV session, the QSA will interview developers to discuss what they are doing to secure memory and properly delete CHD/SAD.  QSAs will also observe that developers are in fact coding for this condition.  While this requirement will bring the memory scraping threat to light, most of the QSAs I talked with agreed that the proposed testing is not going to actually have any significant impact on the memory scraping threat of vSkimmer, BlackPOS and the like.  And for PA-DSS certified software (v3 of the PA-DSS addresses this threat as well), the horse has already left the barn for those applications.  The bottom line is that memory scraping attacks will continue to proliferate until the software vendors address the problem.
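
For what it is worth, the coding pattern requirement 6.5.6 is driving at looks something like the minimal sketch below: keep SAD in a mutable buffer and overwrite it as soon as the transaction completes.  The function names are made up, and as the comments note, a garbage-collected runtime cannot guarantee no copies remain, which is part of why the QSAs I spoke with were skeptical of the impact.

```python
def authorize(data) -> None:
    pass  # stand-in for the real call out to the payment processor

def process_transaction(track_data: bytearray) -> None:
    """Hold SAD in a mutable buffer and zero it immediately after use.

    A bytearray can be overwritten in place; an immutable str cannot.
    Caveat: a garbage-collected runtime may still have made copies,
    so this narrows, rather than closes, the memory scraping window.
    """
    try:
        authorize(track_data)
    finally:
        for i in range(len(track_data)):
            track_data[i] = 0        # overwrite, do not just delete

buf = bytearray(b"%B4111111111111111^DOE/JOHN^2512...?")  # made-up track
process_transaction(buf)
assert all(b == 0 for b in buf)
```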

I asked a question about PCI DSS v3 requirement 1.3.7 and how it would impact merchants that have their card terminals connected to the Internet for transaction processing.  Requirement 1.3.7 states:

“Place system components that store cardholder data (such as a database) in an internal network zone, segregated from the DMZ and other untrusted networks.

System components that store cardholder data must not be located in a DMZ or be accessible from the Internet or other untrusted networks”

My confusion was over how all of the merchants I encounter with card terminals that directly connect to the Internet would be able to comply with 1.3.7.  After all, in most instances, these card terminals store SAD in memory until the transaction is completed.  The Council representatives admitted that the wording in 1.3.7 was a problem and that they would be looking at rewording it because of this and other similar issues that create a conflict.

Finally, the other big discussion topic was ‘Business As Usual’ or BAU.  For v3 of the PCI DSS, the Council wants organizations to integrate the requirements of the PCI DSS into their day-to-day processes and then periodically test that the integration has occurred and confirm that the integrated controls are working.  At the QSA/ASV session, BAU generated a number of questions as to what QSAs were expected to test to assess BAU.  The Council admitted that BAU is only a suggested ‘best practice’, not a requirement, which placated a lot of the audience.  But if you think about it, if an organization desires to be PCI compliant, BAU should already be occurring in that organization.  This is obviously meant as a reminder, and an incentive, for organizations that do not get security to further integrate security practices into their day-to-day processes.

As usual, there was a lot of information sharing between QSAs, ASVs, POs, card brands and the Council at this year’s Community Meeting.  The Council members seemed more relaxed than at the roll out of v2 three years ago.  Hopefully the roll out of v3 will go much less eventfully than v2’s did.

04 Jul 13

Is It The Standard Or Process?

I question whether or not there really is a need to change the PCI DSS this Fall.  Look at the requirements: if they were truly followed as designed, the vast majority of breaches would either not occur or would be fairly limited and contained.  But when you look at the breach reports from Verizon, Trustwave and the like, they tell a different story.  Most breaches were the result of one or more PCI requirements that just were not working properly.

I think where the changes need to be made is in the assessment process as I believe this is where the most impact could be realized.  The current assessment process falls way short in confirming that an organization is complying with the PCI DSS all of the time, not just when the QSA is around.  This results in the very valid complaint by PCI critics that the process is a “check the box” approach.

If you look at the current PCI assessment process, it tests a very limited number of requirements over the period of time covered by the assessment.  QSAs only ensure that an organization has had: four quarters of passing external and internal vulnerability scans (11.2), quarterly facility wireless scanning (11.1), and that any changes to the cardholder data environment have been appropriately documented and approved (6.4).  All of the other requirements are assessed only at the point in time when the QSA examines them.  The rest of the time the organization could be non-compliant and the QSA would be none the wiser.

If we are truly going to address the findings in these breach reports, we need to be ensuring that organizations comply with the PCI DSS all of the time.  That means that QSAs need to be testing more requirements over the assessment time period to ensure that an organization is actually complying with requirements.

Based on that premise, here are my recommendations.

  • Change controls.  The PCI DSS requires QSAs to request all changes to the CDE so that they can determine if more than quarterly vulnerability scanning and annual penetration testing was required during the assessment period.  The breach reports indicate that attackers are consistently finding simple misconfigurations as a way into networks which means that change control is likely not being followed consistently.  QSAs should be testing the entire change control process to ensure that changes to infrastructure are being appropriately tracked, reviewed and approved throughout the assessment period.  The reason is that we regularly encounter information security personnel that are only involved in evaluating and reviewing changes that affect PCI compliance and nothing else.  We also encounter instances where only changes that affect PCI compliance are tracked.  You have to wonder how changes are determined to affect PCI compliance.  Obvious changes to the CDE are easy to identify.  But other changes could implicitly affect the CDE but not necessarily be identified as such because the people reviewing them do not see the connection.  As a result, organizations have no idea if changes outside of the CDE could impact their PCI compliance because there is either no record of those changes or information security has not been consulted.  These organizations are typically relying only on luck to protect them.
  • Mitigation of vulnerabilities.  Most QSAs assess patching by reviewing the quarterly vulnerability scans and making sure that vulnerabilities do not appear on the next quarterly scan.  If any vulnerabilities appear on subsequent scanning reports, then QSAs are supposed to assess what mitigating controls were put in place while the vulnerability was unpatched.  QSAs typically do a pretty good job proving that organizations’ patching processes work reliably.  But when it comes to mitigation, QSAs do not necessarily do a great job determining that open vulnerabilities are truly mitigated.  This is not always the QSA’s fault as the organization may not be keeping the necessary documentation to support that open vulnerability risks are being mitigated.  The bottom line here is that the assessment process needs to assess all vulnerabilities that were left unpatched during the assessment period to ensure that they were mitigated while they remained unpatched.
  • Access controls.  As with change controls, the current PCI assessment process only requires the QSA to test those accounts that have access to the cardholder data environment (CDE).  To add insult to injury, most organizations have multiple access control systems in use for securing servers, infrastructure, third party monitoring, etc.  All of these systems are typically in-scope for assessment, but a lot of QSAs focus only on those in-house.  Access control is an all or nothing proposition; you are either doing it consistently across the board or you are not.  Based on the breach reports, attackers are using whatever accounts they can to get a foothold and then work their way to accounts that provide more privileges.  If that is the case, then how will testing only accounts that have access to cardholder data (CHD) help this situation?  It does not.  Therefore, the assessment of access controls needs to look at the whole picture, not just a privileged few.  The bulk of the testing may be relegated to those with access to CHD, but the entire process of granting and revoking access needs to be assessed to ensure that controls are being followed for everyone.  Testing of privileged accounts needs to be enhanced to address what those accounts have access to.  Do the DBAs have administrative access to the OS?  Do network administrators have administrative access to servers?  Do system administrators have administrative access to network devices?  These are just examples of questions that a lot of QSAs do not answer during their assessments.  A lot of QSAs are only interested in determining that privileged access is controlled, not who has access, why they have access and whether that access is justified.
  • Monitoring and Alerts.  QSAs are to ensure that logging and alerting are enabled as part of their testing.  QSAs are to ensure that an organization has log data online for at least three months and offline for a year.  QSAs then move to observing the various monitoring consoles to ensure that alerts are generated.  However, no testing is done over the reporting period to ensure that alerting functioned as configured, by sampling the alerts generated and then confirming that those alerts were properly investigated and either ruled out as serious or escalated for further investigation.  Based on the breach reports, anomalies are not being researched, and this is resulting in breaches taking months to identify or, worse, going unnoticed.
  • Sampling.  Finally, testing needs to be more than just testing three or four items in each of these categories.  There needs to be sampling over the entire assessment period, not just the day the QSA is investigating the requirement or one item per quarter.  A random statistical sample would be best, but given population sizes that might not be feasible in the time frame required to produce a report.  However, it is not inconceivable that this could result in at least 50 to 100 items being tested in some of these categories (see the sketch after this list).
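
As an illustration of that sampling point, here is a minimal sketch of drawing a reproducible random sample across a full assessment period; the change-ticket naming and the sample size are made up.

```python
import random

def assessment_sample(population, k=100, seed=None):
    """Draw a documented, reproducible random sample for testing.

    Recording the seed lets a QA reviewer regenerate the exact
    selection (an assumption about how a QSAC might evidence this).
    """
    if seed is None:
        seed = random.randrange(2**32)
    rng = random.Random(seed)
    return seed, rng.sample(population, min(k, len(population)))

# Hypothetical change tickets covering the whole assessment period.
tickets = [f"CHG-{n:05d}" for n in range(1, 4821)]
seed, sample = assessment_sample(tickets, k=100)
print(f"seed={seed}, first five of {len(sample)}: {sample[:5]}")
```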

These are the key tests that would go a long way in improving the assessment process and address the findings from the breach reports.  The trouble is that this sort of enhanced testing is obviously going to drive up the cost of a PCI assessment whether the QSA does the testing or an organization’s internal audit function does the testing.  If the PCI SSC and the card brands are truly serious about making the standards meaningful, then this is where I would focus, not on making changes to the standard.

Remember my mantra, “Security is not perfect.”  So while these changes are not going to absolutely, positively prevent all breaches, they will go a long way in ensuring that organizations are as secure as they can be all of the time rather than just when the QSA is around.



