Posts Tagged ‘security’

October 12, 2014

Lawyer Or Security Professional?

“It depends upon what the meaning of the word ‘is’ is. If ‘is’ means ‘is and never has been’ that’s one thing – if it means ‘there is none’, that was a completely true statement.” –President of The United States of America, William Clinton

It has been an interesting time as the December 31, 2014 deadline approaches and version 2 of the PCI DSS comes to its end of life.  Based on the discussions I have had with some of you regarding compliance with the PCI DSS, I have started to notice that a lot of security professionals and others are closet lawyers.

The first thing I want to remind people of is that if you do not want to comply with one or more of the PCI DSS requirements, all you have to do is write a position paper explaining, for each requirement you find onerous, why it is not relevant or not applicable for your environment, and get your management and acquiring bank to sign off on that paper.  But stop wasting your QSA’s or ISA’s time with your arguments.  It is not that we do not care, but without such approval from your management and acquiring bank, QSAs and ISAs cannot let you off the hook for any requirement.

With that said, the first lawyerly argument we are dealing with these days revolves around the December deadline.  We continue to get into arguments over what the deadline actually means.

It appears that the PCI SSC and card brands’ repeated statements that version 2 is done as of December 31, 2014 were not clear enough for some of you.  And further clarifications from them that any reports submitted after that date must be under version 3 are also apparently too much for some of you to handle.  I do not know how there could be any misinterpretation of ‘DEADLINE’, ‘DONE’ or ‘AFTER THAT DATE’ but apparently, there are a lot of people out in the world that do not understand such words and phrases.  Then there are the amazing contortions that some people will go through in a twisted dance to the death to get around this deadline.

Where have you been?  How could you have missed this deadline?  It has been known since the PCI SSC announced its revised schedule for standard updates with the release of the PCI DSS v2 more than three years ago.  But even assuming you were not involved back then, the PCI SSC announced the deadline over a year ago with the release of PCI DSS v3.  Either way, it certainly should not have been a surprise as there has been plenty of warning.

But do not take this out on your QSA.  QSAs are just the messengers in this process and had nothing to do with setting the deadline.  The PCI SSC and the card brands set that deadline.  If you have a problem with the deadline, complain to them.  But if you are willing to listen, I can save you that discussion.  They will politely tell you the deadline is the deadline.  You are out of luck.  If you do not like that answer, then stop taking credit/debit cards as payment for your organization’s goods and services.

The next lawyerly argument is around the June 30, 2015 deadlines for requirements 6.5.10, 8.5.1, 9.9, 11.3 and 12.9.  Again, it is as though these dates were kept from you, which they were not.  I even wrote a post about these requirements titled ‘Coming Attractions’ back in September 2013.

For those that are calendar challenged, June 30, 2015 is practically just around the corner in business terms.  If you had years to get ready for the PCI DSS v3, what makes you think that you can just turn something on in a year and a half?  Yet we continually see people arguing that until that date, they are not going to address any of these requirements.  As though, like flipping a light switch, something magical will occur on July 1, 2015 that makes them meet those requirements.

For merchants, requirements 9.9 and 11.3 are going to be huge issues, particularly for those of you with large networks and lots of retail outlets.  If you have not gotten started on them by now, there is no way you will be compliant by July 1.  Both of these requirements demand thought, planning and training.  They cannot just be started overnight with compliance as the immediate result.

For requirement 11.3, the new approach required for penetration testing is resulting in vulnerabilities being uncovered.  Organizations that did not want to get caught flat footed are finding that their network segmentation is not as segmented as they once believed.  They are also finding new “old” vulnerabilities because of these network segmentation issues.  The bottom line is that these early adopters are scrambling to address their penetration testing issues.  In some cases ACLs need to be adjusted, but I have a few clients that have found they need to re-architect their networks in order to get back to compliance.  Obviously the latter is not an overnight kind of fix.
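
If you want to know where you stand before the penetration testers arrive, a quick spot check of segmentation is not hard to do.  Below is a minimal sketch, assuming hypothetical CDE addresses and ports, that attempts connections from an out-of-scope host into the CDE; any connection that succeeds is a segmentation gap to investigate.  A real penetration test goes much further, but a check like this will tell you whether you have ACL or architecture problems to fix.

    # Minimal segmentation spot check: run from an out-of-scope host and
    # attempt TCP connections to CDE addresses.  Any connection that
    # succeeds is a segmentation gap to investigate.  Hosts and ports
    # below are placeholders, not anyone's actual environment.
    import socket

    CDE_HOSTS = ["10.10.1.20", "10.10.1.21"]   # example CDE addresses
    PORTS = [22, 80, 443, 1433, 3389]          # common admin/database ports

    def reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in CDE_HOSTS:
        for port in PORTS:
            if reachable(host, port):
                print("SEGMENTATION GAP: reached %s:%d" % (host, port))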

Requirement 9.9 is all about ensuring the security of points of interaction (POI), as card terminals are formally known.  Because of all of the POI tampering and hacks that have occurred, the Council added the requirements in 9.9 to minimize that threat.  The biggest problem early adopters are running into is getting their retail management and cashiers trained so that they understand the threats and know how to deal with them.  This requires creating new procedures for daily (or more frequent) inventorying of the POIs and visually inspecting them to ensure they have not been tampered with.  Companies are rolling out serialized security tape that must be applied to the seams of POIs so that any opening of the case can be visually determined.  Locking cradles are being installed for every POI to secure them to the counter.  Then there is implementing the new procedures themselves: performing at least daily inspections, knowing what to do if you suspect tampering and informing corporate of potential issues.  Again, not something that just happens and works day one.
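
To make the point that this is record keeping and procedure more than technology, here is a minimal sketch of what a POI inventory and daily inspection check might look like.  The lane names, serial numbers and fields are all hypothetical; commercial tools and even paper checklists accomplish the same thing.

    # Illustrative POI inventory record and daily inspection check.  The
    # point is that each device's serial number and security tape serial
    # are recorded so a device swap or case opening is detectable.
    from datetime import date

    poi_inventory = {
        "LANE-01": {"terminal_serial": "VX520-12345", "tape_serial": "ST-0001"},
        "LANE-02": {"terminal_serial": "VX520-12346", "tape_serial": "ST-0002"},
    }

    def daily_inspection(lane, observed_terminal, observed_tape):
        record = poi_inventory[lane]
        issues = []
        if observed_terminal != record["terminal_serial"]:
            issues.append("terminal serial mismatch - possible device swap")
        if observed_tape != record["tape_serial"]:
            issues.append("tape serial changed - possible case opening")
        if issues:
            print(date.today(), lane, "ESCALATE TO CORPORATE:", "; ".join(issues))
        else:
            print(date.today(), lane, "inspection passed")

    daily_inspection("LANE-01", "VX520-12345", "ST-0001")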

For service providers, besides 11.3, requirement 8.5.1 is going to be their biggest issue.  This requires the service provider to use different remote access credentials for every customer.  This is in response to the breaches that occurred at a number of restaurants in Louisiana a few years ago as well as more recent breaches.

The problem that early adopters of 8.5.1 are finding is that implementing enterprise-wide credential vaults is not as simple as it appears.  The biggest impact of these implementations is that service providers start missing their service level agreements (SLAs).  Missing SLAs typically costs money.  So these service providers are not only incurring the costs of implementing the credential vault solution, they are also suffering SLA penalties that just pile on the injuries.
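
For those wondering what 8.5.1 boils down to, here is a toy sketch of the principle: one unique remote access credential per customer, never shared.  The “vault” here is just a dictionary for illustration; a real credential vault product adds audit-logged check-out, rotation and session recording, which is exactly the part that breaks SLAs when it is bolted on in a hurry.

    # Toy illustration of the 8.5.1 principle: a unique remote access
    # credential per customer, never shared between customers.  The
    # "vault" is a dictionary for illustration only; real products add
    # audit-logged check-out, rotation and session recording.
    import secrets

    vault = {}

    def issue_credential(customer_id):
        vault[customer_id] = secrets.token_urlsafe(24)   # unique per customer
        return vault[customer_id]

    def check_out_credential(customer_id):
        return vault[customer_id]   # a real vault would log this check-out

    issue_credential("restaurant-001")
    issue_credential("restaurant-002")
    assert check_out_credential("restaurant-001") != check_out_credential("restaurant-002")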

But the final straw is all of the people that closely parse the PCI DSS and only the DSS.  You saw this with some of the questions asked at the latest Community Meeting.  You also see it in the questions I get on this blog and from the prospects and clients I deal with daily.  These people are hunting for a way to get around complying with a particular requirement.

This occurs because people only read the DSS and not the Glossary, the information supplements and the other documents provided by the Council.  At least with v3 of the DSS the Council included the Guidance for each of the requirements.  Not that adding Guidance makes a whole lot of difference based on the arguments laid out by some people.  The Council could do us all a favor if they published the Reporting Template along with all of the other documents.  Not so much because people would necessarily read it, but it would give QSAs and ISAs more ammunition to use when these discussions come up.

Successful security professionals understand the purpose of security frameworks.  These frameworks are meant to share collective knowledge and lessons learned regarding security so that everyone has a leg up on detecting and mitigating threats.  Successful security professionals use these frameworks to get things done, not to waste their time developing scholarly legal arguments or twisting the English language to explain why they do not need to meet some security requirement.  They put their heads down, review the frameworks, develop plans to implement the changes necessary to improve security, work the plan and deliver results.  Do those plans always meet requirement deadline dates?  Not always, but they are close, or as close as they can get given other business issues.

The bottom line is that security professionals are not lawyers, and good security professionals certainly do not sound like lawyers.  But if you constantly find yourself sounding like a lawyer, digging deep to split legal hairs, then in my very humble opinion you really need to re-examine your career or lack thereof.  I say lack thereof because, in my experience, security professionals that operate like lawyers do not have long careers.  They move around a lot because, once people realize that they cannot deliver, they are forced to move on.  Eventually they develop a reputation, and after that point these people end up having to find a new career because the security community knows their modus operandi.

August 8, 2014

The Dilemma Of PCI Scoping – Part 3

In Part 2 we discussed the criticality of a risk assessment and started implementing the framework by fixing monitoring and alerting so that we can properly manage the risk we will be accepting.  In this part I will deal with Category 2 and 3 systems and how to manage their risk.

One of the big problems I have with the security purists is their arguments over Category 3 systems.  Do not get me wrong, I understand the reasons for their statements.  However, to me, their arguments seem to imply that Category 3 systems are somehow not secure, or not as secure as other systems, and therefore cannot be trusted.  This further implies that organizations are implementing some sort of multi-tier security structure, which should not be the case.  Taking such an approach would create an operational and support nightmare, let alone increase the risk of compromise.

In today’s world, all devices need a certain basic level of security comprising, at a minimum, a hardened OS, minimized ports and services, access control, anti-virus and anti-malware (as appropriate and necessary), a personal firewall (if mobile), logging and other similar security measures as necessary.  This is something I like to refer to as “Security 101”.  If you are not doing Security 101 across the board, then you likely have bigger issues than just PCI compliance.  But if you are following a Security 101 approach, then all of the devices under your direct control should have a solid, basic level of security and should be able to be trusted on your network.  As a result, with the proper logical access controls in place, you should be able to trust Category 3 systems to coexist with certain Category 2 systems.

Notice I said “certain Category 2 systems” not “all” Category 2 systems.

Category 2b systems that have inbound connectivity to the CDE are the riskiest because of that direct inbound connectivity.  As such, should any Category 2b system become compromised, the CDE is also compromised by default.  Most Category 2b systems are gateways to the CDE, such as out-of-band management systems or “jump boxes” for network and system administration, and Citrix or other virtual desktop technologies used to grant business users access to CDE applications.

These gateways need to be configured such that, if they become compromised, they are immediately taken out of service until they can be rebuilt and/or reconfigured as secure.  This is typically accomplished not just with anti-virus, but also through file integrity monitoring, application whitelisting and other similar measures.  This needs to be monitored at the virtual desktop infrastructure level (i.e., the Citrix, VMware, Terminal Services, etc. servers) as well as at the virtual desktop image level.  If a file changes in the virtual environment or virtual desktop image that should not have changed, access to the virtual desktop environment is suspended until the infrastructure or image can be recertified as secure.  That means very tight monitoring of the critical files; however, this should be relatively easy to do as long as the virtual desktop image is restricted to only a single cardholder data (CHD) application.
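
For what it is worth, the baseline-and-compare idea behind file integrity monitoring is simple, which is why tight monitoring of a single-application image is achievable.  Here is a minimal sketch, assuming hypothetical paths for a certified “golden” image and the running image; real FIM and whitelisting products add tamper protection and real time hooks on top of this.

    # Minimal file integrity check for a virtual desktop image: hash every
    # file and compare against a baseline captured when the image was
    # certified as secure.  The paths are hypothetical.
    import hashlib, os

    def hash_tree(root):
        hashes = {}
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
        return hashes

    baseline = hash_tree("/images/pos-desktop-golden")    # hypothetical path
    current = hash_tree("/images/pos-desktop-running")    # hypothetical path

    for path, digest in current.items():
        if baseline.get(path) != digest:
            print("CHANGED OR NEW:", path, "- suspend access until recertified")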

The other thing that needs to happen with virtual desktops is that organizations must be extremely diligent about who has access to them.  This means very timely user management, something a lot of organizations are very lax about.  The PCI DSS states in requirement 8.1.4 that organizations must “Remove/disable inactive user accounts at least every 90 days.”  90 days is a very long time when you look at what attackers are able to do in such a timeframe.  As a result, organizations need to step up their game in this regard: when someone’s job function changes, a help desk ticket needs to be generated to change that person’s access.  A 90 day review process is fine, but if you are relying on that sort of thing as a control, you are not being diligent enough in managing your users.
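
A sweep for inactive accounts does not require anything exotic either.  Here is a sketch with hard-coded last-logon timestamps for illustration; in practice you would pull these from your directory, and I would run the sweep with a threshold well inside the DSS’s 90 day maximum.

    # Sketch of an inactive account sweep.  Last-logon timestamps are
    # hard-coded for illustration; in practice pull them from your
    # directory.  The threshold is deliberately tighter than 90 days.
    from datetime import datetime, timedelta

    THRESHOLD = timedelta(days=30)
    last_logon = {
        "jsmith": datetime(2014, 9, 30),
        "mjones": datetime(2014, 5, 2),
    }

    today = datetime(2014, 10, 12)
    for user, seen in last_logon.items():
        if today - seen > THRESHOLD:
            print("Disable and open a help desk ticket for:", user)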

But what about Category 2a systems, you might ask?  Category 2a systems are directory servers, authentication servers, DNS and DHCP servers, anti-virus master servers, system update servers and other network and system management/monitoring support systems.  These are systems to which only system or network administrators should ever have direct access.  These systems should be configured as “bastion” servers, security hardened so that they are not easily compromised by vulnerabilities.  Most vendors of these systems understand this and have set up their installation routines accordingly – restricting services, allowing only necessary applications, etc.  The bottom line with Category 2a systems is that, if any of them become compromised, it is game over because the attacker has the privileges of an administrator.

I am not a fan of separate directory systems.  These systems double as the access control management system, and the more complicated access control becomes, the less reliable and secure it becomes.  As such, I prefer a common access control system managed through an integrated directory system such as Microsoft’s Active Directory (AD).  Over the years, the number of ports required to make AD work between domain controllers has been reduced, making it much easier to implement.  Plus, starting with Windows Server 2008, there is the ability to limit and control the dynamic ports used.  As such, I recommend using a common domain approach rather than separate domains with trust relationships, as it just makes management so much easier and more reliable.  Whether or not you choose to put domain controllers inside your CDE is up to you to decide.

However, there is a larger consideration here that needs to be put on the table.  Think about what the security purists are saying with their statements about Category 3 systems.  If an attacker can jump from a Category 3 system to owning a Category 2a system such as a directory server, DNS server, DHCP server, etc., what does that say about an organization’s security posture to begin with?  If an attacker gains control of a system/network administrator’s credentials, it is game over.  That is not a device failure issue, that is a social engineering failure.  As a result, you can have whatever separation controls you want, but in the end, if an administrator gets owned, what did all of that separation do for you?  Nothing.  The best we can do is to manage and mitigate the risks, monitor for any attempts to leverage those risks and deal with any attacks as quickly as possible.

At the end of the day, organizations need to assess the risk that their Category 2 systems pose to the CDE.  If it is higher than desired, those systems need to be put into the CDE.  The rest can share networks with other Category 2 and 3 systems if necessary.

There are obviously enhancements that you can make to further improve on this strategy.  But this is just what I would consider the minimum to get by, which seems to be what a lot of you desire.

August 1, 2014

The Dilemma Of PCI Scoping – Part 2

In Part 1 I discussed how isolation is an answer, but probably not a viable one for all but the most security conscientious of organizations such as the military, defense contractors or those that can afford that sort of painstaking luxury.  So unless you consider everything in scope for PCI compliance, is there a viable way to reduce scope?

Before we get to that though, we need a quick discussion on risk management, as the solution is all predicated on the identification and management of risk.  Because if you cannot do an appropriate risk assessment, then the only choice you really have is to consider everything in scope, and I know the vast majority of you do not like that approach.

Assessing Risk

In order for my proposed solution to have a chance at working properly, an organization needs to understand its risks, what risks will be accepted and managed, and what it will take to mitigate the residual risks.  Doing a risk assessment is the way to do that, but most organizations avoid such an assessment for a variety of reasons.  The most common reasons I have heard are:

  • The risk assessment framework is too complex,
  • We tried this once before and never got any meaningful results,
  • We were never able to agree on the risks and their scores, or my personal favorite,
  • This spreadsheet is our risk assessment (it is not, but see Tom Benhave’s blog post on the topic, as he explains it much better than I could).

The reason why risk is such a tough topic is that everyone has their own perspective on risk; good, bad or otherwise.  There are numerous articles and presentations on this phenomenon, but my favorite one is from Lance Spitzner of SANS, who wrapped his presentation around security awareness training and opens by discussing why people are such poor judges of risk.  He uses various statistics regarding events that can happen in people’s lives to illustrate this fact.  My personal favorite example of just such a statistic is that people have a greater chance of dating a supermodel than of winning the PowerBall lottery.  Granted, both are tremendous long shots, but the chance of winning the PowerBall is still significantly smaller than the chance of dating a supermodel.

The bottom line is that, without a decent risk assessment, an organization has no way to know the amount of risk it is willing to accept and how it will manage that risk.  The Council has repeatedly said that PCI compliance is supposed to consider risk and take a “risk-based” approach.  However, the problem is that we each have our own opinion of risk and what risks we are willing to take on.  But at the end of the day, no matter what an organization does, there is going to be risk.  The question is, “Are these risks ones my organization is willing to take on?”  That question can only be answered by a risk assessment and an understanding of how risks can be managed and mitigated.

Which risks your organization chooses to accept and how it manages the remaining risks are up to your organization to decide.  This is why the PCI DSS and all other security frameworks require an annual risk assessment to be performed.  The risk assessment process provides a framework for an organization to document its risks, understand those risks (size, frequency of occurrence, costs, etc.) and how they can be managed or mitigated, then agree on which risks it will take on and how it will manage and/or mitigate them.

From here on we will assume that the organization has a valid risk assessment and that they are willing to take on the risks presented by the example I will discuss.

Managing Risk

Today’s integrated and connected world just does not lend itself to an isolationist approach due to the volume of information involved, the business efficiencies lost and/or the operational costs such an approach incurs.  As a result, organizations need to take a hybrid approach: heavily protecting some components while taking on and managing the risks inherent in such an approach.

When it comes to the IT side of risk management and mitigation, most organizations rely on some form of near real time monitoring, through collected system/event log data and other sources, to monitor their environment(s).  Unfortunately, where this approach comes up short is that there are too many alerts to follow up on, so alerts go unaddressed.  Almost every QSA can tell you about a discussion with operations personnel where the statement, “Oh, that’s a false positive alert, so I don’t have to worry about it” has been made.

This is the first problem you must address, and you must make sure this attitude never creeps back into the people that monitor alerts.  Anyone in operations that “knows” an alert is a false positive needs either: (1) re-education, or (2) your alerting mechanism(s) seriously re-tuned.  All you have to do is read the Target and Neiman Marcus press reports if you need examples of how bad things can get when personnel blow off alerts because they believe they are not accurate.

In my experience, a lot of these problems are the result of bad or incomplete implementations of these systems.  Unfortunately, there are a lot of people out there that think these solutions are like a Ronco Rotisserie Oven where, as they famously say in the ads, you can “set it and forget it.”  Yes, these solutions may be “appliances”, but that is where the comparison ends.

Security information and event management (SIEM) systems require fairly constant tuning and tweaking, beyond their own software and signature updates, to minimize false positive alerts in response to the changes in an organization’s networks and systems.  Yet time and again, I encounter monitoring and alerting systems that were put in place years ago (typically to meet PCI compliance) and have not been adjusted/changed since, while all around them changes have been occurring that affect their operation.

When interviewing the people responsible for these systems I hear statements such as, “Yeah, that alert started to appear when we implemented [name of change].  We were told to just ignore it.”  When asked why they have not tuned it out of the SIEM, you hear that they do not have time, they do not know how, they do not have the rights to do that or, my personal favorite, that the head of security or the security committee will not let them change it.

The reason this issue does not get addressed is that it has no visibility, since alerts are tucked away in the various monitoring tools.  So the best way to address this situation is to give it visibility by automatically feeding all alerts into an organization’s help desk system.  This gives all alerts immediate visibility by putting them into an automated tracking and escalation process.  It also allows triage and investigation activities to be documented and, based on the results of those activities, the alert to be assigned to the right people/groups to address it.
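
Mechanically, this is just forwarding every alert to your ticketing system’s API.  The sketch below shows the idea; the endpoint, authentication and field names are hypothetical, so substitute whatever your help desk product actually exposes.

    # Sketch of feeding a SIEM alert into a help desk system.  The
    # endpoint and field names are hypothetical; substitute your
    # ticketing product's actual API.
    import json
    import urllib.request

    TICKET_API = "https://helpdesk.example.com/api/tickets"   # hypothetical

    def open_ticket(alert):
        body = json.dumps({
            "title": "SIEM alert: " + alert["rule"],
            "description": alert["detail"],
            "priority": alert.get("severity", "medium"),
            "queue": "security-triage",
        }).encode("utf-8")
        request = urllib.request.Request(
            TICKET_API, data=body,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            return response.status   # ticket is now tracked and escalated

    open_ticket({"rule": "Egress to unknown host",
                 "detail": "POS segment host contacted 203.0.113.5:21",
                 "severity": "high"})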

“Whoa, let’s not get crazy here,” I am sure some of you are YELLING at the screen.  There is no doubt this is a very brave step to take, because it will potentially uncover things you probably did not want to advertise given the state of your existing alerting.  But that is typically only a short term problem.  Unfortunately, it may be the only way to get the underlying problem of tuning and tweaking the alerting systems completed and constantly addressed.

But taking such a step is not entirely a bad thing, at least in the long run.  A side benefit is that it will focus an organization on triage activities for classifying the urgency of alerts.  Not all alerts need immediate action, but a lot of them require immediate investigation and can then be put on a back burner.  It will also give visibility to the number of alerts being dealt with on a daily basis after triage.  That typically results in identifying and justifying why more staff are required to deal with the onslaught of alerts that need to be researched.

Another change that organizations should make is adding a box to their change control form that indicates what impact a change will have on the SIEM environment.  At least these three basic questions need to be answered in regard to SIEM or other monitoring systems (a simple illustration follows the list).

  • Do new alerts need to be added and, if so, what do they need to monitor and what are the alerting thresholds?
  • Do existing alerts need to be modified and, if so, what modifications are needed?
  • Are there alerts that are no longer needed?
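
To make this concrete, the answers to those three questions can be captured as required fields on the change ticket itself, so that no change is approved without them.  The field names and values below are purely illustrative.

    # Illustrative change-ticket fields capturing the three SIEM questions
    # above, so that no change can be approved without answering them.
    # Field names and values are made up for the example.
    change_ticket_siem_impact = {
        "new_alerts_needed": True,
        "new_alerts_detail": "Monitor new payment web service; alert on "
                             "more than 5 authentication failures per minute",
        "existing_alerts_modified": True,
        "existing_alerts_detail": "Raise file transfer volume threshold for "
                                  "the new nightly settlement batch",
        "alerts_no_longer_needed": ["decommissioned-ftp-server-unreachable"],
    }

    # A change review script (or the form itself) can refuse incomplete answers.
    for field in ("new_alerts_needed", "existing_alerts_modified",
                  "alerts_no_longer_needed"):
        assert field in change_ticket_siem_impact, field + " must be answered"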

If you address these areas, you should have monitoring and alerting taken care of, with a built-in feedback loop to keep it that way.

In Part 3, I am going to wrap up my discussion on PCI scoping with a discussion of Category 2 and 3 systems.

June 22, 2014

Keep It Simple

I keep reading articles and blog posts about all sorts of security solutions and how to secure an organization from the onslaught of network attacks.  However, all of these discussions seem to assume that everyone is a Fortune 500 company.  Threat intelligence, hack backs, APT, etc. are discussed as though everyone has the ability to implement the recommendations presented.

An important statistic that all of us in the security profession need to remember is that the vast majority of organizations are not Fortune 500 organizations.  As of 2008 (the latest statistics I could find from the US Census Bureau), there are almost 6 million businesses in the United States.  The Fortune 500 therefore comprises just 0.0084% of the total businesses in the US.  To make matters worse, organizations that employ fewer than 100 employees make up 98.1644% of all employers in the US.  I would guess that these statistics are relatively consistent around the world.

The reason these statistics are important is that security professionals need to pull their collective heads out of their posteriors and stop making security so hard that it is impossible for the 98.1644% to implement.

Do you now understand the frustration of most business people?

They do not have a security staff of tens or even hundreds to tackle security issues.  They are lucky if they have an IT person or two.  If they can afford it, they outsource and do everything possible to make themselves secure, but they can only do so much and their resources are extremely limited.  Particularly so given the Great Recession they just survived.

Margins for small businesses are very slim.  You can argue all you want that today’s businesses are only competitive if they leverage technology and that technology comes with prerequisites.  However, have we created an environment where the hurdle to be in business is now so high that small businesses are just going to be targets regardless?

As a result, I challenge the security community to come up with realistic solutions to security.  We need to develop common sense, simple but effective security methods so that the 98.1644% of organizations are reasonably protected.  Granted, security is not perfect, but we have got to stop discussing security and privacy as though every business is a Fortune 500.  They are not and our solutions and recommendations need to reflect that fact.

This brings me back to the PCI DSS.  If all of the requirements of the PCI DSS could be executed 99.9999% of the time every day, would it keep an organization secure (recognizing that security is not perfect)?  I believe it would.  But it is that consistency of execution that is the problem, regardless of organizational size.

So let us refocus our priorities and help the vast majority of the world get secure.

I have tried to do that with this blog, but I too have been seduced by the big dollars that the Fortune 500 toss my direction.  In my very humble opinion, I think we need to get back to our roots and do more for the vast majority that are struggling with security.

That said, in the process of simplifying, maybe there will be some opportunities for the Fortune 500 to find solutions that are less complex.  It could be a win-win for everyone.

March 1, 2014

How Did It Happen?

This is just my supposition on how the Target breach occurred, but it is based on what has been released to date plus what limited knowledge I have of Target’s environment, the environments of other large retailers and my many years of penetration testing.

Fazio Mechanical Services

According to the latest reporting, Fazio Mechanical Services (Fazio) is believed to be the starting point of the Target breach.  From what has been reported, a Phishing attack on Fazio yielded access to Fazio’s computer systems and network.  In their statement regarding the breach, Fazio says:

“Fazio Mechanical does not perform remote monitoring or control of heating, cooling or refrigeration systems for Target.”

“Our data connection with Target was exclusively for electronic billing, contract submission and project management, and Target is the only customer for whom we manage these processes on a remote basis. No other customers have been affected by the breach.”

If we take Fazio at their word, Fazio did not have direct access to Target’s network.  That means if Fazio was breached, that breach did not result in a direct path to Target’s network.  Brian Krebs reported that he spoke with an ex-Target employee who told him that Target uses the Ariba Supplier Management solution for managing its external vendors.  The Ariba system is available publicly on the Internet but it requires credentials in order to gain access to the application.

Based on these facts, my guess is that the Fazio attackers were likely separate from the Target attackers.  The Fazio breach therefore looks like most breaches: the attackers get in, probe around and then leave if nothing of value can be identified.  That is not to say that Fazio could not have been attacked as a way into Target, but I find it unlikely that Fazio was specifically targeted for the Target breach.

The Fazio attackers likely advertised the information and credentials they had gathered to other attackers on the Internet “underground” and sold them to whoever was willing to pay, including the Target attackers.

The Russians

In my opinion, the Russians that eventually sold the card information were probably not the actual attackers that retrieved the cardholder data from Target.  However, they likely could have been behind the attack as the folks that instigated and funded it.  Until these individuals admit their role, we will probably never know whether they were just a fence for the information retrieved or the people behind the attack.

In my scenario, the Russians began scoping out likely candidates for compromise and picked Target because they found information on the Internet “underground” and determined that it was likely possible to successfully get in and get information.  Once the research was done, they assembled a team to carry out the actual attack.

The Malware

In reading the various news accounts, the Secret Service indicated that the attack was sophisticated.  Based on the infamous Microsoft case study, Target had implemented Microsoft Windows Server Update Services (WSUS), now part of Microsoft System Center Operations Manager (SCOM), at all of their stores so that they could rapidly deploy updates to their stores in the smallest possible time frame.  In the retail business, IT people get very small windows of opportunity to perform updates, so this architecture would give IT the ability to stage updates and then deploy them as quickly as possible.

A lot of people have commented throughout the numerous discussions of the breach on Google+, LinkedIn and Twitter questioning how the attackers could have compromised so many POS systems so quickly.  It is my opinion that this was done through SCOM.

But there is a huge problem with using SCOM when the software is not Microsoft’s – SCOM can be somewhat to very temperamental when it comes to deploying non-Microsoft software and updates.  Over the years it has gotten better with some non-Microsoft solutions, but deploying malware via SCOM and having it work right the first time requires knowledge of not only SCOM but also the Target computing environment.

This brings me to the fact that I believe an insider had to have been involved in the breach.  Not necessarily an actual Target employee, although that cannot be ruled out, but more likely a knowledgeable contractor.  Like all large corporations, Target outsources development to contractors that have offices and staff located all over the world.  Those contractors also have their own contractors located all over the world.  It is my opinion that the Russians compromised one or more contractors with development knowledge of Target’s POS application and deployment of the POS software.  This was required to develop the malware from the BlackPOS code and to develop a one-time successful deployment capability using SCOM.  Whether or not these individuals were actually part of the attack team is debatable.  They would only be needed to develop the solution and the SCOM deployment scripts, and possibly procedures to avoid Target’s QA process.

Outsourced contractors in third world countries can be readily bought.  People in the West forget that these developers can make anywhere from cents per hour to only a few dollars an hour.  That is why development work is outsourced to them; it is more cost effective than using developers who make a hundred dollars per hour or even more.

But that brings up an interesting conundrum in this breach.  If a contractor was compromised, could they not still be involved in Target’s development efforts and just deliver the malware directly as part of their deliverables?  I think that could have been a possibility, but it would have risked discovery in Target’s code review, quality assurance and testing processes, which is probably why the malware was not delivered by that method.

The Attackers

The attackers could have come from anywhere, but most likely are from Russia or one of the former Soviet states such as Ukraine or Belarus.  The reason this is most likely is that the people that sold the Target cardholder data were Russians, and they would want people with their same background to execute the attack, as well as having some amount of control over the attack team.

The Attack

The attackers that broke into Target likely went shopping for ways in and found the Fazio Ariba credentials for Target, as well as probably other credentials to other publicly available Target applications.  The attackers either bought those credentials or had their Russian bosses purchase them.

I had to put my penetration testing hat on to figure out how the Ariba credentials came into play.  The reason is that if Ariba is available from the Internet to anyone, why would an attacker need credentials?  Then it dawned on me.  They needed the credentials in order to compromise Target quietly.

My rationale for this is that Target does a decent job at securing publicly facing applications, particularly since their 2007 breach.  Assuming the Ariba application was properly implemented, doing an attack without the credentials would have alerted Target’s information security personnel and it would have been game over.

As a result, the attackers needed the credentials to gain access to Ariba quietly.  They could then compromise it with a cross site scripting attack, SQL injection or whatever they used to gain access to one or more of the Ariba servers, and from there breach the rest of Target’s network, specifically the SCOM system(s).  The reason this approach would more likely be ignored is that the attackers would have valid credentials, so any anomalous activity would likely be written off by Target personnel.

This brings us to the next reason I believe an insider is involved.  The timeline discussed thus far gives the impression that the breach was a fairly quick operation.  The only way the breach could have been conducted so quickly is if the attackers had knowledge of where they needed to go to compromise the SCOM system.

That said, the database of Target guests that was also retrieved was likely collateral damage in that it was encountered during the attack and was taken so that the attackers did not walk away empty handed.  The other possibility is that the database was used to test the data exfiltration process to ensure it would go undetected.

Once the attackers owned the Ariba system, they would have had access to the administrators of Ariba.  The insider would have given the attackers an idea of where the SCOM system was located and probably who had access.  It then became a process of compromising one of those administrators to gain access to SCOM.  Because they were inside Target’s network, the administrators were likely compromised using an extremely targeted phishing attack over the internal email system.  The phishing message would have looked all the more legitimate because it was internally generated and delivered.  The message likely contained some sort of Word or Excel document carrying backdoor software that would not be detected by the anti-virus solution.

However, another option could have been used once the attackers were inside.  They could have approached any of the administrators, pretended to be a contractor and asked for access to SCOM in the test environment.  From there they could have staged their malware and then sent it through the QA process.  Regardless of how they gained access to SCOM, the attackers had to have used the SCOM system to deploy their malware with the speed that they did.

Creating the data dispersal server was a straightforward problem.  With the insider’s knowledge, they knew where FTP was implemented and merely compromised that server to be their own collection point so as not to arouse suspicion.  To get the data out of Target they used DNS, as every system needs access to DNS.  A lot of people have argued that Target should have seen the exfiltration of the data via DNS and have pilloried Target for their ineptitude.  However, if the attackers were as sophisticated as they have been portrayed, they likely constructed their exfiltration system to mimic the size of valid DNS packets, and thus only traffic volume would have been a possible trigger.
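
This is also why volume is worth watching.  The sketch below shows the kind of simple tally that would surface a DNS volume anomaly; the log format (“timestamp client query”) and the baseline threshold are assumptions, so adapt both to your resolver’s actual logs and your environment’s normal traffic.

    # Sketch of volume-based detection: tally DNS queries per internal
    # source and flag outliers.  The log format and threshold are
    # assumptions for illustration.
    from collections import Counter

    BASELINE_QUERIES_PER_HOUR = 500   # assumed normal ceiling per host

    counts = Counter()
    with open("dns-queries.log") as log:   # hypothetical log file
        for line in log:
            parts = line.split()
            if len(parts) >= 3:
                counts[parts[1]] += 1      # parts[1] is the client address

    for client_ip, total in counts.most_common():
        if total > BASELINE_QUERIES_PER_HOUR:
            print("DNS volume anomaly:", client_ip, total, "queries - investigate")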

Is this scenario correct?  We will not know until a final report is released, if we ever see a final report that gives actionable information.

That said, I am sure there are a lot of you reading this and shaking in your boots based on this scenario.  That fear is likely based on the fact that you realize how ill equipped your organization is to deal with this sort of attack.  And you should be scared.  This is a war of escalation that we are waging.  Organizations step up their game, and the attackers up the ante on their side.  The famous saying, “When chased by a bear, I only have to outrun the last person to save myself”, is very true in this situation as well.  Your organization’s security game only has to be better than that of other organizations.  But when an organization like Target is breached, and they were considered to be at the top of the security game, what chance does an organization with mediocre security have?

The only saving grace might be that your organization is flying under the radar.  I say “might be” because, according to the majority of reports on the state of information security, most organizations have no idea that they have been compromised.  That is because people rely on anti-virus and other technologies that have a poor track record of identifying malware and sophisticated attacks.  And then, as we learned in this past week’s report on the Neiman Marcus breach, you can have information security personnel write off discovered malware as false positives and let it re-infect systems for months without investigating or even worrying about what is going on.

It is easy to pillory the guy that got breached.  However, a lot of you should look inside your own organizations before tossing stones.  I would guess that most of you tossing those stones would not fare any better, and likely worse, than Target should your organization be breached.

January 18, 2014

Why The Paradigm Must Change

The Target and Neiman Marcus breaches, and the other retailer breaches likely to come, should be a learning moment for all of us to demand that the card brands change their business paradigm to one that is more secure.

Bolt-Ons Do Not Cut It

For all intents and purposes, how a credit card works has not changed since the late 1950s when they were introduced.  Yes, there have been advancements such as EMV, 3D Secure and end-to-end encryption (E2EE), but those are all things that just bolt onto the original concept.  The trouble is that, given today’s technologies and their capabilities, the card and the bolt-ons no longer provide the security they once did.

With the Target breach there has been a call to get the US to finally convert to EMV.  The trouble is that EMV would still have leaked enough information for fraud to be committed, so it is not the answer.

Trade association spokespeople trotted out 3D Secure and other methods of securing online transactions.  The trouble is that most merchants eschew 3D Secure and its kind.  In addition, there are known vulnerabilities with these supposedly secure payment methods so they also have potential issues that could be exploited.

Then there is E2EE, also known as point-to-point encryption (P2PE) from a PCI perspective.  These solutions can also be exploited.  It may be more difficult, but when attackers are determined to gain access to sensitive information, that does not matter.

After the release of the PCI DSS in 2008, a lot of retailers implemented a variety of E2EE solutions.  Unfortunately, the endpoint at the retail location was the POS register and not the terminal.  This was not due to merchants’ negligence; this was due to how their POS applications operated.  This allowed for attacks such as that used in the Target breach to succeed.  All the attacker has to do is insert their malware into the POS process so that the malware can “see” the cardholder data before it gets encrypted.

Even solutions that do E2EE/P2PE to the terminal can be defeated by taking the same approach and inserting the malware into the terminal process before the terminal can encrypt the data.  Worse yet, if the terminal is breached, the attacker can capture PINs if they also have malware that captures the keystrokes on the terminal before the PIN is encrypted.  There are a number of methods to minimize these risks at the terminal, but if the terminal supply chain is compromised, as it was over a year ago in the Barnes & Noble breach, there is little a merchant can do to stop such attacks.

The bottom line is that all of these solutions are bolt-ons to the existing card paradigm, and all still carry the risk that a breach could occur.

Using Complexity Against Us

Brian Krebs and others have wondered aloud how a sophisticated organization such as Target, with information security and forensic resources second possibly only to the government, could have been compromised – particularly after the 2007 compromise by Albert Gonzalez, when Target totally revamped and increased its security posture to minimize the likelihood of another event.

The first clue for me came when I read the iSIGHT Partners report on the Target breach.  The theme that comes through loud and clear is that the attackers used the complexity of Target’s technology infrastructure against Target.  I mean, how else could FTP activity and huge data transfers (internal and external) go so unnoticed?

Actually, that was likely fairly easy.  The attackers used existing network traffic to mask their own.  They sought out servers that already carried large volumes of traffic and put their data collection server on one of them – better yet, on a server that was already running as an FTP server.  As a result, even with diligent monitoring, the increase in traffic likely did not raise any alarms.

People assume that such breaches are like a “snatch and grab” in the real world.  The attackers break into an organization’s network, quickly take what they can off of the computers they encounter and leave.  That was the modus operandi (MO) in the past, but not today.  Sophisticated and organized attackers such as those that breached Target do what they can to remain unseen while they learn more about their victim.  They take their time mapping out the network and determining which devices they want to compromise to further their efforts to gain access to the sensitive information they seek.  Because of this, it is highly likely that the Target attackers encountered the Target customer database during their investigation of the Target network and took it first so that they would have at least something for their efforts.

The most insidious thing I think the attackers did was to use Target’s software distribution system to disseminate their malware.  Given the number of POS systems compromised (around 51,000), I find it hard to believe that the attackers manually installed their malware on those POS systems.  Doing so would have placed their operation at extreme risk, likely resulting in its discovery.  By using Target’s software distribution system, the attackers got the added benefit of legitimacy for their malware, because Target itself did the installation.  As such, the malware would appear valid because Target’s software management system initiated the change.

Now What?

All of this brings up an interesting conundrum.  If attackers are stepping up their game and using such techniques, how do we detect them?  It is a very good question with no good answers.  The iSIGHT report offers methods to stop and eradicate this particular attack.  However, the next attack and the attack after that will all likely use different malware and different techniques to get the data out of your network.

We are in a war of escalation with no end in sight.  Merchants step up their efforts to stop such attacks, and the attackers adapt and adopt new techniques to breach organizations and gain access to their sensitive information.  What we need is a solution that stops the escalation and gets us out of this vicious circle.

That is why I am pushing the 15 – 16 character single use transaction code as that solution (a rough sketch of how such a code could be generated follows the list below).  My reasons are as follows.

  • The algorithms already exist, as a number of the card brands experimented with them a decade or more ago.
  • It will work with existing POS technology and applications.
  • It will work with existing eCommerce sites.
  • It can be implemented into eWallet applications.
  • It can be processed, stored and transmitted without encryption.
  • It can be generated by PCs, smartphones, tablets, credit card sized devices and any other devices that have computational capabilities.
  • It can be displayed on devices in a character format for manual entry, or as one- or two-dimensional bar codes for scanning.
  • It can be transmitted via swipe, EMV, near field communication (NFC), Wi-Fi or even Bluetooth.
  • And best of all, it is secure by the very nature that it can only be used once.
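
As promised, here is a rough sketch of how such a code could be generated.  To be clear, the card brands’ experimental algorithms were never published, so this HMAC-over-a-counter construction is purely illustrative; the point is that the computation is trivial for any device with computational capabilities.  The issuer recomputes the code for the expected counter and rejects anything it has already seen, which is what makes the code single use.

    # Purely illustrative: one way a device could generate a 16 character
    # single use transaction code from a per-card secret and a counter,
    # HMAC style.  The card brands' experimental algorithms were never
    # published; this only shows the computation fits on any device.
    import hashlib, hmac

    ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"   # avoids ambiguous characters

    def transaction_code(secret, counter, length=16):
        digest = hmac.new(secret, str(counter).encode(), hashlib.sha256).digest()
        return "".join(ALPHABET[b % len(ALPHABET)] for b in digest[:length])

    secret = b"per-card-secret-provisioned-by-issuer"   # hypothetical secret
    print(transaction_code(secret, 1))   # each counter value yields a new code
    # The issuer recomputes the expected code and rejects any code it has
    # already seen, which is what makes the code single use.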

There will be some changes required at the transaction processors and acquiring banks to handle such a solution.  But given that some of the card brands already have experience with this solution, a body of knowledge already exists as to how it needs to be implemented.

Let the discussion begin on how we move ahead with a better, more secure solution.

December 7, 2013

POS Botnets

Just in time for the holidays.

An article came out this past week regarding botnets that specifically target point of sale (POS) systems.  The reason I bring this up is because of this quote:

“StarDust developers have intimate knowledge of the inner workings of PoS applications such as Clearview PoS.  As a result, the malware can ferret out where in computer memory sensitive data, in some cases in cleartext form, is stored.  StarDust can also sniff network traffic and is able to extract Track1 and Track2 card data.  To remain covert, the software transfers card details only when the terminal is inactive and the screensaver is on. It also uses the RC4 cipher to encrypt data before sending it to the control server.”

Obviously, if your organization uses Clearview POS software you should probably be examining your systems and networks to ensure that they have not been compromised by StarDust.

However, the larger issue is that most merchants do not see themselves as targets of such attacks, let alone have they constructed a secure environment for their POS systems.  Some of this is not entirely the merchant’s fault.  A lot of merchants outsource the maintenance and management of their POS systems to a value added reseller (VAR), and that VAR is the one responsible for the POS network configuration.  Regardless of responsibility, a merchant needs to be aware of these threats and take appropriate action, either internally or with their VAR, to address them and minimize risk.

Regardless of whether it is StarDust or a similar threat, here are some steps you can take to minimize and detect such threats.

  • Segment your POS network from the rest of your internal network and limit POS network traffic to communication with your processor and your internal system support and operations systems.  This will require the development of network access rules so that traffic can only reach those destinations, limiting the number of systems that could compromise your POS environment.
  • Monitor your POS network segment for any traffic that is directed to an external network other than your processor or your system support and operations systems (a small sketch of this check follows the list).  Your firewall rules should only allow secure connections between your POS network and your processor or your system support and operations systems.  Network traffic going anywhere else should be stopped and reported for further investigation.
  • Monitor your POS systems for any file or configuration changes.  Most anti-virus solutions can provide this capability, but there are also solutions specifically engineered for this task.  Regardless of which you choose, configure the tool to alert you as soon as it identifies a potential change to files or the configuration of a POS system.  If you receive an alert and no approved changes were made to the POS systems, you have likely been compromised.
  • Develop an incident response plan should you receive an alert indicating that your POS systems have been compromised.  An incident response plan provides the organization with a “battle plan” should a compromise occur.  This type of plan is key to minimizing the potential reputational impact to the organization should such an attack be confirmed.  A good incident response plan can keep you from making mistakes as you navigate the minefield that is the media circus that comes with a breach.
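
For the egress monitoring step, the logic is nothing more than an allow list check.  The sketch below uses example addresses and made-up log entries; the real work is keeping the allow list accurate and actually acting on the alerts.

    # Sketch of the egress check: flag any POS segment connection to a
    # destination other than the processor or the internal support
    # systems.  Addresses and log entries are made up for the example.
    import ipaddress

    POS_NET = ipaddress.ip_network("10.20.0.0/24")      # example POS segment
    ALLOWED = {ipaddress.ip_address("198.51.100.10"),   # processor endpoint
               ipaddress.ip_address("10.0.5.5")}        # support/ops server

    def check(src, dst):
        src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        if src in POS_NET and dst not in ALLOWED:
            print("ALERT: POS host", src, "talking to unexpected host", dst)

    # Example firewall log entries as (source, destination) pairs.
    for src, dst in [("10.20.0.15", "198.51.100.10"),
                     ("10.20.0.16", "203.0.113.77")]:
        check(src, dst)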

Three straightforward and simple steps that can minimize the threat of StarDust, plus a documented incident response process should you unfortunately be breached.

Security does not have to be rocket science.



