
14 Mar 15

The 2015 Verizon PCI Report

A lot has been written about this year’s Verizon PCI Compliance Report, particularly about how 80% of organizations cannot maintain their compliance. At the very end of the report, Verizon raises a number of issues regarding why maintaining compliance is so difficult for most organizations. It is those issues that I would like to discuss.

Scale and Complexity of Requirements

“I just don’t understand why this ERP upgrade is going to take 18 months to complete. Can’t we just put the DVD in the drive and upgrade it like Microsoft Office?” – Anonymous Executive to IT Management

The same could be said about any security framework. If organizations are struggling with PCI compliance, imagine how they are struggling with HIPAA, FISMA or ISO 27K compliance. Compliance with any of the security frameworks is not easy.

I disagree with Verizon’s claim that this is related to the fact that most organizations do not know the PCI DSS. After six years and three versions, I rarely run into an organization today that does not have a basic, overall understanding of the PCI DSS. These organizations may have some interesting ideas on what sections and requirements of the DSS mean, but they have definitely studied it and read about it. Therefore, the idea that organizations are ignorant of the subject is far from the truth in my experience.

In my opinion, the problem lies in the fact that most organizations have not truly managed their technology environments, thanks to the disruptions of mergers and acquisitions, partially implemented applications, bring your own device (BYOD), the Cloud and a plethora of other complications. Today, IT is a very important part of any organization, but it is not managed like it was in the “good old days”. There are too many stakeholders, and the consumerization of technology has not helped the situation by making everyone an IT “expert”.

Most organizations’ IT operations these days are a hodge-podge of technologies, applications and networks. I would equate it to the technological equivalent of a house’s attic and garage combined. We all know we should clean and straighten them out, but that project always sits on the back burner because there are other, more important or more fun things to do.

As a result, for most organizations, there is just no easy way to simplify, segregate and isolate cardholder data (CHD) and comply with the PCI DSS without making the environment even more complex. Starting over is not an option for a lot of organizations.

That said, I have encountered a few very brave organizations that have done just that: started over. Management at these organizations came to the realization that fixing the problem was too complex and expensive and that starting over was the cheaper, safer and easier way to go.

Uncertainty about Scope and Impact

“I don’t know much about PCI, but I do know my scope.” – Anonymous Manager to QSA

When application developers cannot explain how their applications work on a technical level. When anyone in any department can be in the IT business. When security personnel are order takers for firewall configuration changes reviewed and approved by management that has no clue as to the implications of those changes. When network people provide a communications utility but have no idea how traffic actually traverses the network.

Is it any wonder we have no idea how to scope a PCI assessment?

But there are larger problems as to why scoping is difficult. The root cause of why scoping is such an issue is that everyone’s risk tolerance is different. I drive race cars at very obscene speeds on race tracks (mostly) that I am sure a lot of people would view as insane. However, I think that people that skydive and do rock climbing are the insane ones. All of this points to everyone’s acceptance and avoidance of risk based on their own views.

There is a sidebar in the Verizon report calling on the PCI SSC to provide guidance on scoping. Good luck with that. The Council had a scoping SIG a number of years ago that imploded due to the aforementioned differences in everyone’s risk tolerance. The result was a small band of people from the SIG that published the PCI Open Scoping Toolkit. The PCI Open Scoping Toolkit is not perfect, but it provides a framework for having an intelligent discussion about how to go about scoping and determining what is in scope and why.

The key to solving the scoping issue resides with the organization, not its QSA, acquiring bank or any other external entity. Organizations need to use the PCI Open Scoping Toolkit to come up with their scoping framework and definitions. Once that has been agreed upon, an organization needs to map out its applications and networks to determine its true scope. This is where tools from vendors such as Tufin, FireMon, SolarWinds and the like can assist by documenting the network and then simulating data flows over it.
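To make the mapping exercise concrete, here is a minimal sketch, in Python, of the reachability idea behind it, in the spirit of the Open Scoping Toolkit’s categories: systems that store, process or transmit CHD are “Category 1” (the CDE), and anything with connectivity to them is “Category 2” (connected). The system names and flows are hypothetical placeholders, not output from any of the tools named above.

```python
# Minimal scoping sketch: treat permitted network flows as a graph and
# classify systems by reachability from CHD systems. All names below are
# hypothetical placeholders.

# Hypothetical permitted flows (source, destination) from firewall rules.
flows = [
    ("pos-register", "payment-switch"),
    ("payment-switch", "acquirer-gateway"),
    ("jump-server", "payment-switch"),
    ("corp-workstation", "jump-server"),
    ("corp-workstation", "mail-server"),
]

chd_systems = {"pos-register", "payment-switch", "acquirer-gateway"}

# Connectivity in either direction can pull a system into scope.
adjacency = {}
for src, dst in flows:
    adjacency.setdefault(src, set()).add(dst)
    adjacency.setdefault(dst, set()).add(src)

scope = {system: "Category 1 (CDE)" for system in chd_systems}
for system in chd_systems:
    for neighbor in adjacency.get(system, set()):
        scope.setdefault(neighbor, "Category 2 (connected)")

for system in sorted(adjacency):
    print(f"{system}: {scope.get(system, 'Category 3 (out of scope)')}")
```

Trivial as it looks, this is exactly the discussion the Toolkit enables: once the categories are agreed upon, whether a given system is in scope becomes a question of documented connectivity rather than opinion.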

With that approach, it is incumbent on QSAs and other auditors to accept these definitions for their assessments unless there is some significant or gross error in the organization’s definitions. This will address the complaint that organizations have with QSAs. How often have we heard something such as, “The last QSA told us this was compliant.” If we all play by the same risk definitions the client has provided, then statements like that should go away.

Once an organization truly understands and has defined its scope, it can then understand the impact of existing operations and any changes.

The Compliance Cycle

This is what the Council is attempting to address with business as usual (BAU). The idea is that with security practices and monitoring embedded within an organization’s operations, security issues can be quickly identified and addressed before they become serious.

However, for this to work, organizations need to know their scope as well as how their IT environment actually works. Without that knowledge, embedding the PCI DSS into the organization is a futile exercise.

Lack of Resources

Every organization is running “lean and mean” these days. Cost control is king. As a result, resources are stretched, sometimes to the point that any additional activities just cannot be accommodated without hiring someone. And hiring is not allowed. So implementing BAU is not going to go well if it goes at all.

On the information security front, finding qualified people is nearly impossible, even for consultancies. Organizations are finding that most information security professionals are heading to consultancies because the pay is better. Since security is hard on both the mind and the body, most people want to be compensated as much as possible for their efforts. As a result, most organizations cannot pay for in-house security resources. And then, even if they do ante up, typically the person that takes the position either gets bored once they fix everything, or gets frustrated when the organization refuses to make the changes required to ensure or enhance security.

Enter the managed security services provider or MSSP. The concept is that the MSSP provides the security talent at a more reasonable price yet organizations get the quality personnel needed to enhance and stabilize their security.

Where this goes wrong is that the MSSP and the customer are not on the same page as to each other’s responsibilities. This is from a mixture of sales people over promising as well as prospective customers hearing what they want to hear. Never mind that it is all documented in a contract.

To address this situation, the PCI SSC has come up with a new requirement, 12.8.5, which states:

“Verify the entity maintains information about which PCI DSS requirements are managed by each service provider, and which are managed by the entity.”

Under the v3 Attestation of Compliance (AOC) form, this will not be as big a problem for an organization to maintain. However, if an organization has a lot of service providers and/or the service providers have v2 AOCs, this could be a very daunting task.
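For what it is worth, the artifact 12.8.5 asks for does not need to be complicated. Here is a minimal sketch of the kind of responsibility matrix I have in mind; the provider names and assignments are entirely hypothetical.

```python
# Hypothetical sketch of the information requirement 12.8.5 asks an entity
# to maintain: which PCI DSS requirements each service provider manages
# versus which the entity manages itself. Providers and assignments are
# made up for illustration.
matrix = {
    "Req 1 - Firewall configuration":    "Hosting Provider LLC",
    "Req 9 - Physical security":         "Hosting Provider LLC",
    "Req 10 - Logging and monitoring":   "entity",
    "Req 11.2 - Vulnerability scanning": "Example ASV Inc.",
    "Req 12 - Security policies":        "entity",
}

for requirement, owner in matrix.items():
    who = "managed by the entity" if owner == "entity" else f"managed by {owner}"
    print(f"{requirement}: {who}")
```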

Lack of Insight in Existing Business Processes

“I’ve only been in this position for [2, 3 or 4] months. So I’m not fully up to speed on everything we do.” – Anonymous Manager to QSA

“I’d give you an organization chart, but it would be out of date by the time I printed it.” – Anonymous Human Resources Manager to QSA

In today’s fast changing business world, people get shuffled out of departments and divisions faster than people can manage the changes. As a result, finding anyone with any sort of insight into an organization’s business processes can be extremely difficult, if not impossible.

Then we go back to my earlier comment about the lack of IT management. With the advent of the Cloud, some business divisions and departments have totally sidestepped the formal IT organization and set up their own operations in the Cloud. Did they know what they were doing? No! But that was beside the point; they at least now have IT solutions, never mind whether they are secure or implemented properly. The only way to find these rogue operations is to quiz everyone in the organization about how they operate and what they use to operate.

Even then, I have run into situations where a new payment channel pops out of the woodwork at the last moment. It becomes next year’s assessment issue; otherwise, we would never get the assessment we are currently doing out the door.

Misplaced Confidence in Existing Information Security Maturity

A lot of organizations that have been doing IT for years and years get caught in this trap. Just because you have been doing IT for an eternity does not mean that you have been doing it right for the same amount of time or that you are doing it correctly now.

In a lot of IT organizations it is an unfortunate fact of life that areas such as special projects, business continuity planning or information security were used as those “safe” places to put the former IT Vice President or Manager out to pasture so they could retire. It did not matter if the individual could handle the job; it was a place to park someone and provide a gentle way out of the organization.

A rare few individuals made the transition and actually took up the challenge of mastering their new responsibilities. However, the vast majority just checked out, collected their pay check and then retired. This left the organization with a very immature security operation compared to the rest of IT’s operations. Add into the mix the changing landscape of IT with business divisions and departments doing their own thing unbeknownst to anyone and you can see how the maturity of information security could be easily misunderstood.

Then along comes the QSA to do the PCI gap analysis and it all comes to a head as the organization comes to the rude awakening that all is not as good as they thought and that significant gaps exist. To add insult to injury, the organization finds that fixing the gaps is going to take a lot longer than the 90 days they had set aside for that activity so that they could get their Report On Compliance (ROC) done in the same year.

The Verizon report is a great read and provides a lot of insights. Everyone should get a copy and read it, take it to heart and address your organization’s security shortcomings.

21 Dec 14

Forensic Examinations And Facts

I am watching the news reports on the Sony breach and laughing at all of the “facts” that are being bandied about.  I want to use the Sony breach as a teachable moment and explain that the “facts” may not be as factual as represented by the media, forensic examiners or even the FBI.  I have done a number of forensic investigations and from my own experience there is a lot of effort required to prove conclusively that a particular device or actor is the actual attacker.

So let us take a look at the “evidence” we have at this point and see if the conclusions drawn should be treated as facts.

My first issue is how quickly the FBI and Mandiant have come out with the “fact” that North Korea is behind the attack.  According to the timelines I have seen, it was on November 21, 2014 that Sony was told by the attackers, the self-named Guardians of Peace (GOP), that Sony had been hacked.  So in around three weeks the FBI and Mandiant have figured out, definitively, that North Korea was behind the attack.  Granted, Mandiant and the Bureau could have been investigating this long before, but given the way the news reports were written, I have to believe that Sony had no idea anything was wrong until November 21.

Why do I find this timeline spurious?  It took Mandiant over three years to trace things back to the Chinese for their APT1 report last year, and we are to believe that the FBI has the skill and manpower to trace a “sophisticated attack” (Kevin Mandia’s words to Sony) back to North Korea in three weeks?  I find that hard to believe.  Not because the Bureau and Mandiant are not skilled, but because it is just impossible to cram a year’s worth of investigation into a few weeks, regardless of the manpower tossed at the investigation.

In my own experience, I typically had ideas as to what and how things happened within a few weeks, but then the difficult work of determining exactly how things went down began.  It can take months or even years to figure out an attack, if it is ever figured out.  It is why NTSB investigations of airplane crashes take at least a year before a report is issued.  Any attack may not be as simple or uncomplicated as you initially think.

Consider the evidence offered in the FBI’s statement:

“Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed. For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.”

We do know for a fact that hackers reuse other attackers’ code.  Why reinvent the wheel if you do not need to?  Hence all of the variants of attack code, created not only to evade anti-virus but also to enhance or improve techniques and methods.  Just because there are similarities in some lines of code, algorithms, methods, etc., does not mean that the North Koreans were the actual actors.  It just means that the attackers used code attributed to North Korea.  Key word: “attributed”.  To me, a far better piece of evidence would have been if the code had been written in Korean or a North Korean dialect.
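To make the “similarities in specific lines of code” point concrete, here is a minimal sketch of one generic way analysts measure code similarity: byte n-gram overlap between two samples. I am not claiming this is the FBI’s or Mandiant’s actual method, and the file names are hypothetical. The point stands either way: a high score demonstrates shared code, not shared authorship.

```python
# Minimal sketch: byte n-gram Jaccard similarity between two malware
# samples. A high score demonstrates shared code, not shared authorship.
# Generic technique for illustration only; file names are hypothetical.

def ngrams(data: bytes, n: int = 8) -> set:
    """All n-byte substrings of a sample."""
    return {data[i:i + n] for i in range(max(len(data) - n + 1, 0))}

def jaccard(a: bytes, b: bytes, n: int = 8) -> float:
    """Jaccard similarity (0.0 to 1.0) of the samples' n-gram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

with open("sample_attributed_to_dprk.bin", "rb") as f:  # hypothetical
    sample_a = f.read()
with open("sample_from_sony_attack.bin", "rb") as f:    # hypothetical
    sample_b = f.read()

# A high score says the two samples share code -- by whoever reused it.
print(f"shared-code score: {jaccard(sample_a, sample_b):.2%}")
```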

“The FBI also observed significant overlap between the infrastructure used in this attack and other malicious cyber activity the U.S. government has previously linked directly to North Korea. For example, the FBI discovered that several Internet protocol (IP) addresses associated with known North Korean infrastructure communicated with IP addresses that were hardcoded into the data deletion malware used in this attack.”

Hard coded IP addresses are evidence?  So does that mean that everyone is guilty if I write their telephone number on a napkin and that napkin turns up as evidence?  No.  A better piece of evidence would have been log data that actually ties those IP addresses to the data that was exfiltrated out of Sony.  Just because IP addresses are hardcoded in an application does not mean that those endpoints belonged to the actual attacker.  Hackers regularly own other organizations’ and governments’ servers to obfuscate their actual location.  A hardcoded IP address only means that a device could be involved.
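For perspective on how weak this evidence is, extracting hardcoded IPv4 strings from a binary is about a ten-line exercise, and anyone can embed any address they like. A minimal sketch, with a hypothetical sample name:

```python
# Minimal sketch: pull hardcoded IPv4 strings out of a binary. Anyone can
# embed any address in code, so a hit proves presence, not attribution.
import re

IPV4 = re.compile(rb"(?:\d{1,3}\.){3}\d{1,3}")

def hardcoded_ips(path: str) -> set:
    with open(path, "rb") as f:
        data = f.read()
    candidates = {m.decode() for m in IPV4.findall(data)}
    # Keep only syntactically valid addresses (every octet 0-255).
    return {ip for ip in candidates
            if all(int(octet) <= 255 for octet in ip.split("."))}

print(hardcoded_ips("wiper_sample.bin"))  # hypothetical sample name
```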

“Separately, the tools used in the SPE attack have similarities to a cyber attack in March of last year against South Korean banks and media outlets, which was carried out by North Korea.”

The attack on certain South Korean banks and TV stations in 2013 was never definitively pinned on North Korea; it was just suspected.  The prime piece of evidence was a Chinese IP address that was assumed to implicate North Korea.  So using the South Korean attack as though it was definitively proved to be the work of North Korea is not stating a fact.

While I had some issues with the Mandiant report on China and their investigation methods, the information being offered as “facts” that North Korea is behind the Sony breach are positively appalling.  People want an answer immediately and so one is given regardless of accuracy or even believability.  However, this is a technology issue and so it is easy to feed the public supposed “facts” since only the true technology people in the world will know the difference.

Unfortunately, a breach such as the one at Sony will take time to analyze, probably a lot of time.  I would not be surprised if we end up with a lot of “suspicions” and “assumptions” when a final analysis is done and released, if we ever get a definitive answer.  The reason I believe that is that I do not think Sony had the kind of security implemented and working that such an analysis requires, given the amount of information the attackers supposedly gathered.  The other clue is that it was the attackers, on November 21, who notified Sony that it had been breached.

The key take away here is that forensic examinations very rarely prove WHO the bad actor was that caused the breach.  This is particularly true when the attacker is outside the organization.  There are just too many ways that an attacker can obfuscate their actual identity/location.

What forensic examinations do provide is a road map of improvements and enhancements to an organization’s security measures and procedures to minimize future attacks.  Note that I did not say “prevent” future attacks.  I use “minimize” because security is never an absolute.  Anyone with an extreme desire to attack an organization will do so regardless of how well your security program is constructed and executed.

Bruce Schneier points out this very fact about determined attackers in his post on the Sony breach.  I have always referred to this as the ‘98-2 Rule’.  Properly implemented and managed information security keeps 98% of attackers out.  However, it is the remaining 2% that are determined enough to figure out how to work around even the best security.  All any organization can do about that remaining 2% is to put controls in place so that when the 2% get through, they are detected as soon as possible and their impact is minimized.  This is why security frameworks are so important: they provide organizations with guidance as to what it takes to have only the 2% to worry about.

Given the limited evidence provided thus far, could it be that this is all a sophisticated marketing ruse that went sideways?  Would it not be apropos if Seth Rogen and his production company did the attack as a promotional stunt and the attackers they hired found out that Sony was ripe for such an attack and then went further than what they were supposed to?

Something to think about.

04 Dec 14

It Is The QSA’s Fault

“Usually when PCI-compliant companies are breached, the real culprit is the assessor, the person who confirmed the company had met the PCI Requirements.” – Jeff Multz, Dell SecureWorks

This is a very interesting approach for an employee of a qualified security assessor company (QSAC) to use to drum up business: tossing all QSAs, including his own organization’s, under the bus.  I know that is not what he meant to do, but that is certainly what he did with this statement in his posting a few days ago.

I think most QSAs know where Mr. Multz is coming from.  He is more than likely venting over losses to QSACs that we all know are more interested in revenue generation than security.  They further that goal by incenting their QSAs to do as many PCI assessments as possible in the shortest amount of time as well as identify opportunities for selling the QSAC’s security appliances to solve compliance problems.  And to just pile on, they further their revenue generation by being the low cost provider through a focus on volume of work over quality.  As Kurt Vonnegut said in Cat’s Cradle, “In this world, you get what you pay for.”

Getting back though to Mr. Multz and his statement that QSAs are responsible for all breaches, let us see how that plays out with a few breaches.

During the Target breach, it was the QSA that was socially engineered, gave away the keys to the kingdom and missed all of the alerts generated by the FireEye software.  At Neiman Marcus, it was the QSA that missed the alerts for 60+ days while the malware was reinstalling nightly.  It was the QSA that swapped out the points of interaction (POI) at Barnes & Noble for malware-infested POI.

Sorry Mr. Multz, but it was employees and/or contractors at all of these organizations, not the QSA, that had a part in these breaches, and in all breaches for that matter.  I really do not see how you can hold a QSA responsible for the inaction and errors of employees and contractors.  Organizations are not going to pay to have QSAs on site, 24×7, babysitting all of their employees to maintain compliance with PCI or any other compliance program.  Not only that, no security framework is ever going to stop breaches; all a framework can do is hopefully minimize the impact when a breach occurs.

However, Mr. Multz was not done.

“The PCI Requirements were created so that organizations would focus on securing their networks, but many assessors only focus on meeting the requirements rather than security.”

From this statement it is painfully obvious that Mr. Multz does not understand what an assessment is about and how the assessment process works.  The job of a QSA is to execute the tests as defined in the PCI DSS Reporting Template and report the results of that testing – nothing more, nothing less.  Organizations are judged compliant with the PCI DSS by a QSA whether they are just squeaking by or have a full-on security program second to none.  Organizations do not get “extra credit” or “atta boys” if they have gone beyond the requirements.

While the original intent of the standards was to focus on securing cardholder data, that got morphed by the wonderfully misdirected marketing job done by certain card brands before the PCI standards came together.  For those of us around the security industry more than a decade ago, we advised Visa and MasterCard to stop pushing their cardholder information security program (CISP) and site data protection (SDP) standards as “The Way” that was going to stop breaches.  We explained that, properly implemented, CISP and SDP should minimize the number of PANs obtained, but they would not completely stop breaches.  It was only recently that the card brands started to realize this fact and stopped pushing the PCI standards as a security panacea.  With the rollout of EMV, you may have noticed that Visa, MasterCard and the PCI SSC have stated that EMV is not a “silver bullet” and, in other statements, that there are no “silver bullet” solutions.  That is a long way from a decade ago, when their security standards were sold as the “be-all and end-all” for stopping breaches.  Unfortunately for QSAs everywhere, that message is out there and we have to deal with it every day.

All of this is not to say that QSAs cannot and do not make recommendations to organizations regarding their security programs and how and where they need to improve.  I constantly make suggestions during my PCI assessments on how my clients can improve their security posture.  However, it is ultimately up to the organization to put such changes in place; it is not the QSA’s responsibility.  If an organization chooses inaction, I will bring it up again and again.  But as the old proverb states, “you can lead a horse to water, but you cannot make it drink”.

Where the PCI DSS assessment process truly fails is the point in time approach (with the exception of vulnerability scanning and a few other select requirements).  To address that shortcoming, the Council has introduced the concept of business as usual (BAU) and it is my guess that we will see that concept placed into the standard in the next version.  It will be then that QSAs will have to test PCI compliance over a 12 month period similar to testing procedures financial auditors perform for annual financial audits.

As a result, the inclusion of BAU as part of the PCI DSS will likely be the straw that breaks the camel’s back for a lot of organizations.  This is because BAU will require organizations to track their compliance with the PCI DSS 24x7x365 as they should have been doing all along.  But from experience, I can tell you that there is no organization I have ever encountered that was compliant with any standard all of the time because people make mistakes.  As such, BAU is designed to shed light on those mistakes and require organizations to identify them and remediate them.  For organizations just squeaking by, this will probably make PCI compliance truly impossible to achieve.  If you are one of those organizations complaining about compliance with the current PCI DSS, just wait until BAU gets added.  Organizations that are truly interested in security are already implementing BAU because they see the operational value in integrating security controls with their other business controls.  BAU will show the true colors of those organizations that want security versus those that are checking a box.

And that gets me to Mr. Multz’s actual reason for his post, what makes a good QSA?  Good QSAs understand that the world is not perfect nor is security.  Good QSAs know that compliance with the PCI DSS does not and will not eliminate breaches.  Good QSAs know that the goal of PCI compliance is to minimize security control errors, provide an ability to recognize security control errors as soon as possible and then remediate those security control errors such that the security controls are only non-compliant for the shortest possible amount of time.

But just because a company has such errors does not automatically mean that they are not PCI compliant.  A good QSA only judges an organization non-compliant when the QSA has evidence that problems are consistently recurring and are not being corrected in a timely manner or corrected at all.

I appreciate Mr. Multz’s frustration but as a QSA I do not appreciate him tossing me under the bus with the QSAs that are doing a disservice to PCI compliance.  Like any industry, there are good service providers and there are bad service providers.  Those of us in this industry all know who the bad ones are and we hope they will get weeded out.  But from my own long experience in consulting, that does not always happen.

So in my very humble opinion, Mr. Multz needs to suck it up and deal with it, but stop tossing QSAs under the bus in the process.  QSAs are only the messengers.

01 Jul 14

The Flaw In Requirement 8.5.1

Today it was unceremoniously announced that a number of major restaurant chains’ franchisees had potentially been hacked between February 28, 2014 and April 18, 2014 because their point of sale (POS) vendor’s remote access account had been compromised.  I say franchisees because I know a couple of these restaurant chains’ corporate operations and they were not using a third party to manage POS.

In a nutshell, the attackers gained access to the POS vendor’s LogMeIn account.  LogMeIn, like a lot of similar remote access facilities, has an address book where you can store remote access credentials.  So with access to LogMeIn, by default, the attackers had access to the address book that contained credentials for any customer environments in the address book (likely all customers, but possibly not).

To remind everyone, requirement 8.5.1 of the PCI DSS v3 states:

 “Additional requirement for service providers: Service providers with remote access to customer premises (for example, for support of POS systems or servers) must use a unique authentication credential (such as a password/phrase) for each customer.

Note: This requirement is not intended to apply to shared hosting providers accessing their own hosting environment, where multiple customer environments are hosted.

Note: Requirement 8.5.1 is a best practice until June 30, 2015, after which it becomes a requirement.”

The PCI SSC guidance for requirement 8.5.1 states:

 “To prevent the compromise of multiple customers through the use of a single set of credentials, vendors with remote access accounts to customer environments should use a different authentication credential for each customer. Technologies, such as two-factor mechanisms, that provide a unique credential for each connection (for example, via a single-use password) could also meet the intent of this requirement.”

It is likely that the vendor was trying to get a jump on complying with requirement 8.5.1 in the PCI DSS v3.  However, this vendor may have been using such an approach all along to manage customer remote access, which is also not uncommon with technology companies.

The first thing to note is that requirement 8.5.1 is a best practice until June 30, 2015 after which it becomes a full requirement.  However, as I pointed out in an earlier post, a lot of vendors will likely have to start rolling out a remote access solution as soon as possible to minimize service level agreement (SLA) issues.

One of the most likely ways vendors are addressing compliance with 8.5.1 is through services such as LogMeIn, GoToMyPC and the like.  These are inexpensive services available to any organization or anyone.  There are also enterprise solutions such as those from Bomgar and the like that purport to have better security.  However, all of these solutions share the concept of an address book to make gaining remote access easier for the vendors’ users that rely upon them.  And that is their Achilles’ heel.  If an attacker gains access to the remote access service, they gain access to the address book and, therefore, to the customers’ credentials stored in that address book.  Game over.

It is important to note, though, that what this vendor was doing fully complied with requirement 8.5.1.  But even though this service provider was complying with the intent of 8.5.1, the implementation was flawed.  This is just another example of how PCI compliance does not mean that security issues cannot still occur.

How easily can this happen?  Think of a spear phishing attack against any vendor that does remote support and maintenance.  Regardless of the customer credential management solution (in-house or cloud based), once access to the credential management solution is compromised, any concept of customer security is over.

So what should vendors be doing to mitigate this situation?  Exactly what our vendor who was breached did: implement two-factor authentication on the credential management system.  Spear phishing attacks will not be successful because even with the credentials to LogMeIn or similar, the attacker will need the second factor.  Yes, the attacker can still compromise the support person’s desktop, but they will not have access to customer credentials.

Trouble is, some vendors will want a cheap two-factor solution, meaning something that sends out codes via SMS, email or telephone, versus RSA SecurID or SafeNet tokens, to name a few.  Solutions using SMS, telephone and email have a variety of known vulnerabilities and can easily be intercepted or even redirected.  In the case of LogMeIn, they indicate that they only support SecurID.
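The reason code-generating tokens hold up better is worth spelling out: with a scheme along the lines of RFC 6238 (TOTP), the second factor is computed locally from a shared secret and the clock, so there is no code in transit to intercept or redirect. A minimal sketch using only the Python standard library; the secret shown is a placeholder, and in practice it lives only in the token or app and on the authentication server:

```python
# Minimal RFC 6238 (TOTP) sketch: the one-time code is derived locally
# from a shared secret and the current time, so nothing travels over SMS,
# email or telephone where it could be intercepted.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step              # 30-second time window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret; real ones are provisioned at token enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```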

Regardless, all of you service providers out there that have remote access to your customers managed by some enterprise credential management solution, please implement a strong two-factor authentication solution on your customer credential management solution before you too become a newspaper headline.

I would like to believe that this vendor thought they were doing the right thing and got burned because of how they implemented their solution.  At least they stepped up and fixed it the right way.  Unfortunately, this is how we sometimes learn, from our mistakes.

26 Apr 14

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations that were documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’ that was published in January 2013.  Specifically, SAQ A-EP addresses the ecommerce sites that do redirects to a processor’s site that does the actual payment processing.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI-informed sales people pushing redirected ecommerce solutions by claiming that they put the merchant entirely out of scope.  This was not the case and never was, particularly after the issuance of the information supplement.  However, we still encounter outsourcing vendors that continue to claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server to now point to their own payment page, collects the customer’s cardholder data (CHD) on the attacker’s page and then, optionally, passes the customer on to the original payment page at the processor so the customer and merchant are none the wiser.

Under the PCI DSS and card brands’ security programs, redirect systems are still in-scope for PCI compliance because they are a key control in the payment process even though the merchant’s server issuing the redirect does not come into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met in SAQ A-EP, which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements in SAQ A-EP are there.

The requirement 1 items are all there to ensure that there is a firewall protecting the server that does the redirect.  This is Security 101, and I doubt that any merchant would not have a firewall protecting all of their Internet-facing servers.  Routers have always been optional, and if the merchant does not have control of those devices, then they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101 stuff.  If a merchant is not doing this for Internet facing devices, they are just begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their systems because the redirect processes and transmits CHD directly between the processor and the customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc., and did not find any CHD or SAD in those files.
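One way to generate that proof is a scan of those files for PAN-like digit runs validated with the Luhn check. Here is a minimal sketch of the idea; the file name is hypothetical, and a real scan would cover logs, trace files, history files and database exports, and handle separators and binary formats:

```python
# Minimal sketch: scan a text file for PAN-like digit runs and keep only
# those that pass the Luhn check, as supporting evidence for a "no stored
# CHD" assertion. File name below is hypothetical.
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn check digit validation."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:          # double every second digit from right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

PAN_LIKE = re.compile(r"\b\d{13,19}\b")

def scan(path: str):
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for candidate in PAN_LIKE.findall(line):
                if luhn_ok(candidate):
                    # Print a masked hit, never the full number.
                    print(f"{path}:{lineno}: possible PAN "
                          f"{candidate[:6]}...{candidate[-4:]}")

scan("webserver_access.log")  # hypothetical file
```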

Requirement 4 is there to ensure that secure communications are used.  I would recommend documenting the SSL/TLS certificate information for your processor for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce-only merchants have call centers or take telephone orders and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging, etc. are never used for communicating CHD/SAD.
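For the 4.1 work papers, the processor’s certificate details can be captured in a few lines. A minimal sketch; the processor hostname is a hypothetical placeholder:

```python
# Minimal sketch: record the processor's TLS certificate details for the
# requirement 4.1 work papers. Hostname below is a hypothetical placeholder.
import socket, ssl

def cert_details(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "subject": cert.get("subject"),
                "issuer": cert.get("issuer"),
                "notAfter": cert.get("notAfter"),
                "protocol": tls.version(),   # e.g. 'TLSv1.2'
            }

print(cert_details("payments.example-processor.com"))  # hypothetical host
```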

Requirement 10 is important for forensic research should the redirect be manipulated; logging makes it possible to determine when the manipulation occurred and, therefore, the scope of any compromise.

While one would think that the vulnerability scanning and penetration testing requirements in requirement 11 would be thought of as Security 101 and self-explanatory, you would be surprised at how many merchants argue that point.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches, etc. involved.  If you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
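The core of the 11.5 control is nothing exotic: baseline a hash of the page or configuration that issues the redirect and alert on any change. A minimal sketch of that idea follows; the watched path is hypothetical, and a real file integrity monitoring product adds scheduling, tamper-resistant baselines and proper alerting:

```python
# Minimal 11.5 sketch: baseline the page that issues the payment redirect
# and alert when its hash changes. Watched path is hypothetical.
import hashlib, json, os

BASELINE = "redirect_baseline.json"
WATCHED = ["/var/www/checkout/redirect.php"]  # hypothetical redirect page

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

current = {path: sha256_of(path) for path in WATCHED if os.path.exists(path)}

if not os.path.exists(BASELINE):
    with open(BASELINE, "w") as f:
        json.dump(current, f)            # first run: record the baseline
else:
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print(f"ALERT: {path} changed - redirect may have been tampered with")
```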

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick sales person implied.

01 Mar 14

How Did It Happen?

This is just my supposition on how the Target breach occurred, but it is based on what has been released to date plus what limited knowledge I have of Target’s environment, the environments of other large retailers and my many years of penetration testing.

Fazio Mechanical Services

According to the latest reporting, Fazio Mechanical Services (Fazio) is believed to be the starting point of the Target breach.  From what has been reported, a phishing attack on Fazio yielded access to Fazio’s computer systems and network.  In their statement regarding the breach, Fazio says:

 “Fazio Mechanical does not perform remote monitoring or control of heating, cooling or refrigeration systems for Target.”

“Our data connection with Target was exclusively for electronic billing, contract submission and project management, and Target is the only customer for whom we manage these processes on a remote basis. No other customers have been affected by the breach.”

If we take Fazio at their word, Fazio did not have direct access to Target’s network.  That means if Fazio was breached, that breach did not result in a direct path to Target’s network.  Brian Krebs reported that he spoke with an ex-Target employee who told him that Target uses the Ariba Supplier Management solution for managing its external vendors.  The Ariba system is available publicly on the Internet but it requires credentials in order to gain access to the application.

Based on these facts, my guess is that the Fazio attackers were likely separate from the Target attackers.  The Fazio breach was like most breaches: the attackers get in, probe around and then leave if nothing of value can be identified.  That is not to say Fazio could not have been targeted as a way into Target, but I find it unlikely that Fazio was compromised specifically for the Target breach.

The Fazio attackers likely advertised the information and credentials that they gathered to other attackers on the Internet “underground” and sold them to whoever was willing to pay including the Target attackers.

The Russians

In my opinion, the Russians that eventually sold the card information were probably not the actual attackers that retrieved the cardholder data from Target.  However, they could well have been behind the attack as the people that instigated and funded it.  Until these individuals admit their role, we will probably never know whether they were just a fence for the information retrieved or the instigators of the attack.

In my scenario, the Russians began scoping out likely candidates for compromise and picked Target because they found information on the Internet “underground” and determined that it was likely possible to successfully get in and get information.  Once the research was done, they then assembled a team to get the actual attack done.

The Malware

In reading the various news accounts, the Secret Service indicated that the attack was sophisticated.  According to the infamous Microsoft case study, Target had implemented Microsoft Windows Server Update Services (WSUS), now part of Microsoft System Center Operations Manager (SCOM), at all of their stores so that they could deploy updates in the smallest possible time frame.  In the retail business, IT people get very small windows of opportunity to perform updates, so this architecture would provide IT with the ability to stage updates and then deploy them as quickly as possible.

A lot of people have commented throughout the numerous discussions of the breach on Google+, LinkedIn and Twitter questioning how the attackers could have compromised so many POS systems so quickly.  It is my opinion that this was done through SCOM.

But there is a huge problem with using SCOM when the software is not Microsoft’s: SCOM can be somewhat to very temperamental when it comes to deploying non-Microsoft software and updates.  Over the years it has gotten better with some non-Microsoft solutions, but deploying malware via SCOM and having it work right the first time requires knowledge of not only SCOM but also the Target computing environment.

This brings me to the fact that I believe an insider had to have been involved in the breach.  Not necessarily an actual Target employee, although that cannot necessarily be ruled out, but more likely a knowledgeable contractor.  Like all large corporations, Target outsources development to contractors that have offices and staff located all over the world.  Those contractors also have their own contractors that are located all over the world.  It is my opinion that the Russians compromised one or more contractors with development knowledge of Target’s POS application and deployment of the POS software.  This was required to develop the malware from the BlackPOS code and develop a one-time successful deployment capability using SCOM.  Whether or not these individuals were actually part of the attack team is debatable.  They would only be needed to develop the solution and the SCOM deployment scripts and possibly procedures to avoid Target’s QA process.

Outsourced contractors in third world countries can be readily bought.  People in the West forget that these developers can make anywhere from cents per hour to only a few dollars an hour.  That is why development work is outsourced to them: it is more cost effective than using developers who make one hundred dollars per hour or even more.

But that brings up an interesting conundrum in this breach.  If a contractor was compromised, could they not still be involved in Target’s development efforts and just deliver the malware directly as part of their deliverable?  I think that could have been a possibility, but it would have risked being discovered in Target’s code review, quality assurance and testing processes which is probably why the malware was not delivered by that method.

The Attackers

The attackers could have come from anywhere, but most likely are from Russia or one of the former Soviet states such as Ukraine or Belarus.  The reason this is most likely is that the people that sold the Target cardholder data were Russians, and they would want people with the same background to execute the attack, as well as to have some amount of control over the attack team.

The Attack

The attackers that broke into Target likely went shopping for ways into Target and found the Fazio Ariba credentials for Target as well as probably other credentials to other publicly available Target applications.  The attackers either bought those credentials or had their Russian bosses purchase those credentials.

I had to put my penetration testing hat on to figure out how the Ariba credentials came into play.  The reason is that if Ariba is available from the Internet to anyone, why would an attacker need credentials?  Then it dawned on me.  They needed the credentials in order to compromise Target quietly.

My rationale for this is that Target does a decent job at securing publicly facing applications, particularly since their 2007 breach.  Assuming the Ariba application was properly implemented, doing an attack without the credentials would have alerted Target’s information security personnel and it would have been game over.

As a result, the attackers needed the credentials so that they could gain access to Ariba and then compromise it with a cross-site scripting attack, SQL injection or whatever they used to gain access to one or more of the Ariba servers, from which they could breach the rest of Target’s network, specifically the SCOM system(s).  The reason this approach would be more likely to go unnoticed is that the attackers had valid credentials, so any anomalous activity would likely be written off by Target personnel.

This brings us to the next reason I believe an insider is involved.  The timeline discussed thus far gives the impression that the breach was a fairly quick operation.  The only way the breach could have been conducted so quickly is if the attackers had knowledge of where they needed to go to compromise the SCOM system.

That said, the database of Target guests that was also retrieved was likely collateral damage in that it was encountered during the attack and was taken so that the attackers did not walk away empty handed.  The other possibility is that the database was used to test the data exfiltration process to ensure it would go undetected.

Once the attackers owned the Ariba system, they would then have had access to the administrators of Ariba.  The insider would have given the attackers an idea of where the SCOM system was located and probably who had access.  It then became a process of compromising one of those administrators to gain access to SCOM.  Because the attackers were inside Target’s network, the administrators were likely compromised using an extremely targeted phishing attack over the internal email system.  As a result, the phishing message would have looked all the more legitimate because it was internally generated and delivered.  The message likely contained some sort of Word or Excel document carrying backdoor software that would not be detected by the anti-virus solution.

However another option could have been used once the attackers were inside.  They could have approached any of the administrators and pretended to be a contractor and asked for access to SCOM in the test environment.  From there they could have staged their malware and then sent it through the QA process.  Regardless of how they gained access to SCOM, the attackers had to have used the SCOM system to deploy their malware with the speed that they deployed it.

Creating the data dispersal server was a straightforward problem.  With the insider’s knowledge, the attackers knew where FTP was implemented and merely compromised that server to be their own collection point so as not to arouse suspicion.  To get the data out of Target they used DNS, as every system needs access to DNS.  A lot of people have argued that Target should have seen the exfiltration of the data via DNS and have pilloried Target for its ineptitude.  However, if the attackers were as sophisticated as they have been portrayed, they likely constructed their exfiltration system to mimic the size of valid DNS packets, and thus only traffic volume would have been a possible trigger.
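If the packets themselves look normal, per-host query volume is one of the few remaining signals. Here is a minimal sketch of that kind of check over a parsed DNS query log; the log format (one “client_ip query_name” per line) and the threshold are assumptions for illustration:

```python
# Minimal sketch: when DNS exfiltration mimics normal packet sizes,
# per-host query volume is one of the few remaining signals. The log
# format and threshold below are hypothetical.
from collections import Counter

def flag_noisy_hosts(log_path: str, threshold: int = 10000):
    queries = Counter()
    with open(log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:           # "client_ip query_name"
                queries[parts[0]] += 1
    for host, count in queries.most_common():
        if count < threshold:
            break                         # most_common is sorted descending
        print(f"investigate {host}: {count} DNS queries in this log")

flag_noisy_hosts("dns_queries.log")  # hypothetical export from the resolver
```

In practice, the threshold would be a per-host baseline learned over time rather than a fixed number; the point is simply that volume anomalies survive even when packet sizes do not stand out.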

Is this scenario correct?  We will not know until a final report is released if we ever see a final report that gives actionable information.

That said, I am sure there are a lot of you reading this and shaking in your boots based on this scenario.  That fear is likely based on the fact that you realize how ill equipped your organization is to deal with this sort of attack.  And you should be scared.  This is a war of escalation that we are waging.  Organizations step up their game and the attackers up the ante on their side.  The famous saying, “When chased by a bear, I only have to outrun the last person,” is very true in this situation as well: your organization’s security game only has to be better than the other organizations’.  But when an organization like Target, considered to be at the top of the security game, is breached, what chance does an organization with mediocre security have?

The only saving grace might be that your organization is flying under the radar.  I say “might be” because, according to the majority of reports on the state of information security, most organizations have no idea that they have been compromised.  That is because people rely on anti-virus and other technologies that have a poor track record of identifying malware and sophisticated attacks.  And then, as we learned in this past week’s report on the Neiman Marcus breach, you can have information security personnel write off discovered malware as false positive results and let it re-infect systems for months without investigating or even worrying about what was going on.

It is easy to pillory the guy that got breached.  However, a lot of you should look inside your own organizations before tossing stones.  I would guess that most of you tossing those stones would not fare any better, and likely worse, than Target should your organization be breached.

18 Jan 14

Why The Paradigm Must Change

The Target and Neiman Marcus breaches, and the potential breaches of other retailers to come, should be a learning moment for all of us to demand that the card brands change their business paradigm to one that is more secure.

Bolt-Ons Do Not Cut It

For all intents and purposes, how a credit card works has not changed since the late 1950s, when the card was introduced.  Yes, there have been advancements such as EMV, 3D Secure and end-to-end encryption (E2EE), but those are all things that just bolt onto the original concept.  The trouble is that, given today’s technologies and their capabilities, the card and the bolt-ons no longer provide the security they once did.

With the Target breach there has been a call to get the US to finally convert to EMV.  The trouble is that EMV would still have leaked enough information for fraud to be committed, so it is not the answer.

Trade association spokespeople trotted out 3D Secure and other methods of securing online transactions.  The trouble is that most merchants eschew 3D Secure and its kind.  In addition, there are known vulnerabilities with these supposedly secure payment methods so they also have potential issues that could be exploited.

Then there is E2EE, also known as point-to-point encryption (P2PE) from a PCI perspective.  These solutions can also be exploited.  Exploiting them may be more difficult, but to an attacker determined to gain access to sensitive information, that does not matter.

After the release of the PCI DSS v1.2 in 2008, a lot of retailers implemented a variety of E2EE solutions.  Unfortunately, the endpoint at the retail location was the POS register and not the terminal.  This was not due to merchants’ negligence; it was due to how their POS applications operated.  This allowed attacks such as the one used in the Target breach to succeed.  All the attacker has to do is insert their malware into the POS process so that the malware can “see” the cardholder data before it gets encrypted.

Even solutions that do E2EE/P2PE to the terminal can be defeated by taking the same approach and inserting the malware into the terminal process before the terminal can encrypt the data.  Worse yet, if the terminal is breached, the attacker can capture PINs if they also have malware that captures the keystrokes on the terminal before the PIN is encrypted.  There are a number of methods to minimize these risks at the terminal, but if the terminal supply chain is compromised, as it was over a year ago in the Barnes & Noble breach, there is little a merchant can do to stop such attacks.

The bottom line is that all of these solutions are bolt-ons to the existing card paradigm and all still have risks that a breach could occur.

Using Complexity Against Us

Brian Krebs and others have wondered aloud how a sophisticated organization such as Target, with information security and forensic resources second possibly only to the government, could have been compromised, particularly after the 2007 compromise by Albert Gonzalez, when Target totally revamped and increased its security posture to minimize the likelihood of another event.

The first clue to me came when I read the iSIGHT PARTNERS report on the Target breach.  The theme that comes through loud and clear is that the attackers are using the complexity of Target’s technology infrastructure against Target.  I mean how could FTP activity and huge data transfers (internal and external) go so unnoticed?

Actually, that was likely fairly easy.  The attackers used existing network traffic to mask their own.  They sought out servers that already carried large volumes of traffic and put their data collection server on one of them, better yet on a server that was already running as an FTP server.  As a result, even with diligent monitoring, the increase in traffic likely did not raise any alarms.

People assume that such breaches are like a “snatch and grab” in the real world.  The attackers break into an organization’s network, quickly take what they can off of the computers they encounter and leave.  That was the modus operandi (MO) in the past, but not today.  Sophisticated and organized attackers such as those that breached Target, do what they can to remain unseen while they learn more about their victim.  They take their time mapping out the network and determining what devices they want to compromise to further their efforts to gain access to the sensitive information they seek.  Because of this, it is highly likely that the Target attackers encountered the Target customer database during their investigation of the Target network and took it first so that they would have at least something for all of their efforts.

The most insidious thing I think the attackers did was to use Target’s software distribution system to disseminate their malware.  Given the number of POS systems compromised (around 51,000), I find it hard to believe that the attackers manually installed their malware on those POS systems.  It would have placed their operation at extreme risk and likely resulted in its discovery.  By using Target’s software distribution system, the attackers got an added benefit of legitimacy: the malware would appear valid because Target’s own software management system initiated the change.

Now What?

All of this brings up an interesting conundrum.  If attackers are stepping up their game and using such techniques, how do we detect them?  It is a very good question with no good answers.  The iSIGHT report offers methods to stop and eradicate this particular attack.  However, the next attack and the attack after that will all likely use different malware and different techniques to get the data out of your network.

We are in a war of escalation with no end in sight.  Merchants step up their efforts to stop such attacks, and the attackers adapt and adopt new techniques to breach organizations and gain access to their sensitive information.  What we need is a solution that stops the escalation and gets us out of this vicious circle.

That is why I am pushing the 15-16 character single-use transaction code as that solution.  My reasons are as follows; a rough sketch of the idea follows the list.

  • The algorithms already exist, as a number of the card brands experimented with them a decade or more ago.
  • It will work with existing POS technology and applications.
  • It will work with existing eCommerce sites.
  • It can be implemented into eWallet applications.
  • It can be processed, stored and transmitted without encryption.
  • It can be generated by PCs, smartphones, tablets, credit card sized devices and any other devices that have computational capabilities.
  • It can be displayed on devices in a character format for manual entry or as one or 2D bar codes for scanning.
  • It can be transmitted via swipe, EMV, near field communication (NFC), Wi-Fi or even Bluetooth.
  • And best of all, it is secure by the very nature that it can only be used once.

Some changes would be required at the transaction processors and acquiring banks to handle such a solution. But given that some of the card brands already have experience with it, a body of knowledge already exists as to how it needs to be implemented.
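For the sake of discussion, here is a minimal sketch of how such a code could be generated. It borrows the HOTP construction from RFC 4226 and simply widens the output to 16 digits; the card brands never published the algorithms they experimented with, so every name and parameter below is my own assumption, not their design.

```python
import hashlib
import hmac
import struct

def single_use_code(secret: bytes, counter: int, digits: int = 16) -> str:
    """Derive a fixed-length, one-time numeric transaction code.

    HOTP-style (RFC 4226) dynamic truncation, widened to 16 digits
    purely for illustration.
    """
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    chunk = int.from_bytes(digest[offset:offset + 8], "big") & 0x7FFFFFFFFFFFFFFF
    return str(chunk % 10**digits).zfill(digits)

# Hypothetical per-card secret provisioned by the issuer.
key = b"per-card secret provisioned by the issuer"
print(single_use_code(key, 1))  # one 16-digit code
print(single_use_code(key, 2))  # a different code from the same key
```

The issuer would track the counter for each card and reject any code it has already seen, which is what makes replaying a stolen code pointless.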

Let the discussion begin on how we move ahead with a better, more secure solution.

12
Jan
14

More Retailers Supposedly Breached

Reuters is reporting that Target and Neiman Marcus are not the only retailers that were breached during the holidays; at least three more retailers have also been breached. What makes this announcement interesting is some of the information disclosed in the article.

 “Law enforcement sources have said they suspect the ring leaders are from Eastern Europe, which is where most big cyber crime cases have been hatched over the past decade.”

This was reported by Brian Krebs on Christmas Eve. However, based on Brian Krebs’ reporting, it is Eastern Europeans who are marketing the cards obtained, but they are not necessarily the perpetrators of the actual crime, nor are they necessarily behind it. So whether or not Eastern Europeans are the perpetrators is pure speculation at this point. At one point there were reports that the attackers were from Southeast Asia, but those reports are also unconfirmed.

I really do not care who did these attacks.  I am more interested in understanding how they were done so that I can advise my clients as to what they need to do to minimize the likelihood that they end up in the news.

“One of the pieces of malware they used was something known as a RAM scraper, or memory-parsing software, which enables cyber criminals to grab encrypted data by capturing it when it travels through the live memory of a computer, where it appears in plain text, the sources said.”

“Yet a law enforcement source familiar with the breach said that even if the retailer had implemented those steps, the efforts may not have succeeded in stopping the attack.”

We now have an idea of how the crime was committed: the attackers were taking card data out of memory. It also appears that the attackers were using an already available memory scraper such as vSkimmer or BlackPOS. However, based on the unnamed law enforcement source, the attackers either modified the malware or used it as the basis for their own, such that anti-malware solutions would not recognize it.
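For those unfamiliar with how memory scrapers find card data, the technique is mostly pattern matching: scan memory for byte runs that look like Track 2 data and use a Luhn check to weed out random digits. The sketch below is purely illustrative and scans a stand-in byte buffer; real scrapers read other processes’ live memory through operating system APIs, which I am deliberately not showing.

```python
import re

# Track 2 layout: start sentinel ';', PAN (13-19 digits), '=', expiry (YYMM).
TRACK2 = re.compile(rb";(\d{13,19})=(\d{4})")

def luhn_ok(pan: str) -> bool:
    """Standard Luhn check used to weed out random digit runs."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_track_data(buffer: bytes):
    """Return (PAN, expiry) pairs found in a raw byte buffer."""
    return [(m.group(1).decode(), m.group(2).decode())
            for m in TRACK2.finditer(buffer)
            if luhn_ok(m.group(1).decode())]

# Stand-in buffer using a well-known test PAN.
sample = b"junk;4111111111111111=2512101...more junk"
print(find_track_data(sample))  # [('4111111111111111', '2512')]
```

The point for defenders is that this matching happens on plain text sitting in RAM, which is why encryption at rest and in transit did nothing to stop it.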

“One of the sources who told Reuters about the recent rash of attacks said the memory parsing malware cited in the Visa reports was among the tools that the hackers had used, but said they used other techniques as well.”

I found this information the most interesting as it seems to lend credence to my theory that the software was part of an update to the card handling application installed on the POS.

“Avivah Litan, a security analyst for Stamford, Connecticut-based Gartner information technology research firm, said she learned about a separate set of breaches, dating back no more than a few months before the November 28 Thanksgiving Day start of the holiday shopping season, from a forensics investigator. She declined to provide his name.”

“Investigators believe that the early series of attacks on retailers staged before late November were mostly used as trial attacks to help the hackers perfect new techniques they then used against Target, stealing payment cards at unprecedented speed, Litan said.”

These quotes imply that these were traditional hacks of the retailers’ networks from the outside. The problem I have with that is that this speculation does not square with my knowledge of the changes Target implemented after it was a victim of Albert Gonzalez back in 2007. Target made significant changes that minimized the likelihood of an outsider successfully breaching its card processing environment. Not only that, but the PCI DSS pushes isolating cardholder data environments (CDE) from the Internet. Assuming that all of the retailers involved followed the requirements of the PCI DSS, they should have properly isolated their CDEs and been monitoring them for such attacks. Not every retailer might have identified an attack on its CDE, but I know that a security-aware organization such as Target should have.

Not only that, but we are no longer talking about a single retailer. We now have at least five retailers potentially in play, and possibly more. It seems awfully long odds in my book that five retailers were each hacked in one way or another and then had the same malware installed. As a former penetration tester, I could see getting one retailer this way, maybe two. But not five or possibly more with the same or similar methods in the same time frame. Again, it can be done, but it would require a lot of time, coordination, people and effort.

Hackers may be sophisticated, but they are like water: they typically seek the path of least resistance to accomplish their goals. Attacking networks protected by firewalls and monitoring is to be avoided, as it takes a lot of time and effort and the likelihood of getting caught in the process is too high, particularly when we are talking about multiple organizations. That is why I keep coming back to compromising the software at the source.

If I were constructing such an attack, I would either infiltrate the POS application vendors that serve large retailers or coerce an existing employee of one of those companies to insert my malware into their code. That way my exploit comes directly from the source. The good news for attackers is that a limited number of companies develop the code most retailers use to handle card transactions, so an attacker would just have to look for the vendor whose customers would provide the best results.

Since these vendors issue precious few updates, their customers are typically chomping at the bit to obtain those updates and get them rolled out before the holiday season. The updates are going to be tested heavily, but a smart attacker would build the malware to detect when it is being tested and remain silent during testing. Once placed into production, the malware would activate and begin collecting card data, sending it back to wherever the attacker decided to collect it.
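How hard is it for malware to lie low during testing? Not very. The sketch below is entirely hypothetical; the hostname markers and activation date are invented, and real malware has used far subtler tells, but it shows how cheap the trick is.

```python
import socket
from datetime import date

# Hypothetical markers: lab and QA machines are often named for their role.
TEST_MARKERS = ("test", "qa", "uat", "lab")

def in_test_environment() -> bool:
    """Guess whether this host is a test box from its hostname."""
    host = socket.gethostname().lower()
    return any(marker in host for marker in TEST_MARKERS)

def should_activate(today: date) -> bool:
    # Stay dormant until a hard-coded date safely past the QA cycle,
    # and never run on anything that looks like a test machine.
    return today >= date(2013, 11, 27) and not in_test_environment()

print(should_activate(date.today()))
```

A few lines of code defeat weeks of pre-deployment testing, because the testing never sees the malicious behavior at all.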

Easy peasy.  And a lot simpler and easier than hacking networks.

Again, this is all speculation on my part.  But knowing how attackers work, I feel my scenario makes much more sense than what is being discussed.

22
Dec
13

How About We Fix The Problem?

As I pointed out in my last post, EMV would not have stemmed the loss of data in the Target breach. All EMV would have done is restrict where the thieves could use the card data obtained. Even though the thieves can supposedly clone cards from the data gathered, as far as anyone has reported at this point, cloned cards do not appear to be the method of fraud. So my assumption is that all, or the vast majority, of the fraud committed to this point has been through card not present transactions.

In response to people clamoring for a solution to the breach problem, Visa and MasterCard have been curiously silent. I would have assumed that the card brands would trot out their press releases touting EMV as the savior. Yet they have said nothing. Could it be that the card brands are actually acknowledging that EMV would not have been the answer? One can only hope.

So what is the answer?

To me, the answer is single-use transaction codes of 15 to 16 characters in length. With the advent of smartphones and the miniaturization of electronics, the ability to create a card or an application that generates such a code is not only possible but has been demonstrated in recent years. Not only that, but the card brands and banks themselves dabbled with such solutions over ten years ago, only to back off for some reason. My best guess is that without a portable method of using single-use codes, there was no point in pushing such a system. But times and technology change.

With the capabilities of today’s technology, the single-use codes could be displayed as bar codes so that existing merchant POS systems could scan them and avoid data entry errors. Since they are no more than 16 characters in length, the codes can be stored without modification in the fields applications already use to store card numbers. And since the card brands and banks have already developed the algorithms for this approach, they only have to agree on which algorithms to use. Best of all, because each code can only be used once, it can be processed, stored and transmitted wherever and however without fear of compromise.
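One compatibility wrinkle worth noting: many legacy systems Luhn-validate whatever lands in the card number field, so a drop-in single-use code would likely need a valid check digit. The sketch below assumes that constraint; the 15-digit sample body is made up.

```python
def luhn_check_digit(body: str) -> str:
    """Compute the Luhn check digit to append to a 15-digit code body."""
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:  # these positions get doubled once the digit is appended
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return str((10 - total % 10) % 10)

code_body = "412345678901234"  # 15 digits from whatever generator is used
single_use = code_body + luhn_check_digit(code_body)
print(single_use)  # 16 digits that pass legacy card number validation
```

With a valid check digit, the code flows through existing edits, field lengths and validation routines exactly as a PAN would, which is the whole point of the approach.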

This is just my thought for a solution but there are other people and organizations that have their own solutions to fix this problem.  The bottom line is that it is time to fix the problem, not keep kicking the can down the road with a known format that is at the end of its life.

21
Dec
13

EMV And The Target Breach

There are a lot of people now pointing to the Europay MasterCard Visa (EMV) card (aka “Chip and PIN”) as the savior from breaches such as the one at Target, and I am sure Visa and MasterCard are very pleased with that fact. Well, I hate to burst your bubble, but had the US been using EMV like Europe and Canada, it probably would have had only a minor impact.

Are you stunned by that statement? After all, that is not how Visa and MasterCard are portraying EMV. If you read their media statements, they imply that EMV is the answer to these breaches.

To make sure I was describing the security features of EMV correctly, I reached out to my friend and EMV expert Andrew Jamieson, Security Laboratories Manager, at Underwriters Laboratories – Transaction Security in Kew, Australia. Underwriters Laboratories tests and certifies a lot of things, one of which is card terminals (magnetic stripe and EMV) to the PCI standards. As such Andrew has a lot of knowledge in the area of EMV and how it works.

I asked whether or not EMV cards are encrypted.

“EMV cards are not encrypted, per se, but instead store a couple of secret keys which are used as part of the authentication of the entire transaction. All card data can be output from the card in the clear – PAN, CVV, etc – except for the customer PIN and the secret keys. The CVV will also be different from that on a magnetic stripe, either static (called an iCVV) or can also be a dynamic value that changes with each transaction (dCVV).”

Well, there is an interesting piece of news. While the transaction gets authenticated with the secret keys, an EMV card would still give up some information in a Target-like breach.

Then I asked if there is a risk even with EMV.

“So, any chip based transactions from an exposure such as the Target one would only have exposed the PAN (technically, the PAN on the card can be different from the PAN on the face/track, but in reality this never happens), not the full track. As the CVV would not have been exposed, the PAN would have limited value.”

If the magnetic stripe were not present, the stripe CVV would not be recorded in the chip; only the iCVV or dCVV would be available, and neither is usable because the code printed on the card would not match either value. Therefore, the information gathered would not allow for the cloning of cards, because the information recorded in the chip is not the same as the information printed on the physical card. But this should not be a surprise, because that is exactly what the EMV standard was designed to do: prevent the cloning of cards.

However, in a Target-like breach where the terminal and/or POS system were compromised, the chip would still have given up enough information to be used in card not present transactions, such as those conducted via eCommerce. As a result, the attackers would be limited to defrauding online merchants, but that is where most card fraud is being committed.

EMV is not the “silver bullet” the card brands like to imply. Yes, it is better than the magnetic stripe, but it does nothing to stem the growing tide of fraud in online transactions. There are a number of new technologies on the horizon that will minimize the fraud risk of using credit/debit cards in both card present and card not present situations. But until the card brands get behind those solutions, they will continue to push their old solutions and fail to address the current problems.



