Archive for the 'Uncategorized' Category

05 Feb 14

Celebrating Five Years

Wow! Time really does fly when you are having fun.

Believe it or not, the PCI Guru has been doing this for five years.

Thank you to my readers for continuing to support the blog and I look forward to serving you with even more blog entries in the future.

27 Oct 13

Atom AMPD And PCI Compliance

Here is a relatively new player in the network security space for small and mid-sized businesses (SMBs).  A friend of mine who does a lot of work with SMBs is encountering this solution more and more.  And is it any wonder, when it is portrayed as a godsend for SMBs?  On the Atom AMPD Web site, they explain their Kwick Key solution.

“Kwick Key™, a USB flash drive, is the delivery device behind the AtomOS.  The Kwick Key is bootable and plugs into nearly all servers.  Kwick Key users experience a significant savings achieved with a high quality “one key” solution for their networking needs.

Simply install the Kwick Key into an internet-connected server, display the web interface, configure the features and you’re done.  The server is transformed into a multi-functional networking and communication device.

The underlying operating system behind the Kwick Key AtomOS is Linux. The content stored on the server is also backed up on the Kwick Key.  Once configured, the Kwick Key can be transferred to new equipment while maintaining its configuration, providing portability in the event of equipment failure.  A redundant option is also available.”

What is wrong with this picture?

If you said, “Too good to be true,” you would be correct.  There are no silver bullet solutions to security.  However, these sorts of “all in one” security solutions are marketed to SMBs all of the time as a cost-saving way to be secure.  And since SMBs do not typically have significant IT personnel, they are always looking for ways to reduce IT workload and save money.  However, if you need to be PCI compliant, this is not a solution for your organization.  Why?

If you read the Savings page on their Web site, they state:

“Your current IT infrastructure is likely requiring multiple boxes to serve your network and communication needs.  This likely includes multiple boxes supporting firewalls, content filters, routing and VoIP applications; each requiring individual training, maintenance, and ongoing licensing fees.  The AtomOS provides just one platform, one interface, one operating system. It brings to bear the BEST practices via a convergent technology.  All modules are tied together by our proprietary user interface.”

That “all in one” solution approach violates PCI DSS requirement 2.2.1 which states:

“Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)”

The reason for requirement 2.2.1 is to leverage the concept of “defense in depth”.  Defense in depth relies on multiple layers of defense such that if one layer develops a vulnerability, the other layers can still provide some security and mitigate the exposure until the vulnerability is fixed.  Under the Atom solution, a vulnerability anywhere potentially puts the whole solution at risk because of one part’s failure.

As a result, being PCI compliant will require you to purchase multiple Kwick Keys, one for each primary function.  I would assume that multiple keys will result in costs that negate Atom’s cost advantage over other PCI compliant solutions.

Then go to the solution’s product page for Kwick Key.  Take a look at all of the firewall features that are available.  It looks pretty good until you realize there is one notable feature missing – stateful packet inspection (SPI).  Basically, Atom has implemented port filtering, which comes standard on Linux distributions.  Not that port filtering provides no security, but it does not comply with requirement 1.3.6, which explicitly requires that SPI be implemented.
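
To make the distinction concrete, here is a minimal sketch using standard Linux iptables rules.  These are generic illustrations of the two approaches, not rules taken from the Atom product.

    # Port filtering - accepts any packet aimed at port 80, with no
    # regard for whether it belongs to a legitimate connection.
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT

    # Stateful packet inspection - the conntrack module tracks each
    # connection so that only packets belonging to a new or already
    # established connection are admitted; everything else is dropped.
    iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -j DROP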

There are ways to add SPI to this solution.  However, that will mean you will have to support it yourself and the whole point of the Atom solution is to get out from under supporting such a solution for your organization.

My assumption is that, with an appropriate wireless adapter in the system running Kwick Key, the solution will serve as a wireless access point.  Under requirement 1.2.3, wireless is required to be segregated from an organization’s cardholder data environment (CDE) by a firewall.  Given that the wireless is operating on the same device, it is questionable whether compliance with this requirement could truly be accomplished.

The same concerns with wireless would exist with the virtual private network (VPN) solution.  Having remote access to the internal network also running on the same system is not a best practice, and how secure such a situation would be on this device is questionable.

You need to remember that this is not a purpose-built networking device; it is a repurposed computer running Linux.  It is potentially susceptible to any number of Linux-based attacks and vulnerabilities depending on the services running.  And the more services you pile onto this device, the more potential for vulnerabilities.

Then there is the ability to add a voice over IP (VoIP) call manager solution.  Seriously?  What a silly and very dangerous idea.  Why?  VoIP protocols are primarily stateless (i.e., UDP), which means that they cannot be protected by today’s firewall technology, which only works with stateful protocols (i.e., TCP).  I have actually had vendors correct me on this because VoIP call set up (pick up the handset) and tear down (hang up the handset) are conducted using TCP.  What these folks always miss is that the actual conversation, which is the bulk of the activity in a telephone call, is conducted over UDP so that the conversation can be streamed between the phones in use.  And it is not just one or a few UDP ports that have to be open; it is typically a range of thousands of UDP ports that are open to support telephony.  Talk about a target rich environment.
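
To give a sense of the scale involved, here is a hypothetical illustration of the firewall openings a typical VoIP deployment requires.  The 16384-32767 range is a common RTP default; the exact ports vary by vendor and configuration.

    # SIP call set up and tear down (TCP, can be handled statefully)
    iptables -A INPUT -p tcp --dport 5060 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

    # RTP media streams (UDP, stateless) - roughly 16,000 ports held
    # open so conversations can be streamed between the phones in use
    iptables -A INPUT -p udp --dport 16384:32767 -j ACCEPT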

Adding a VoIP call manager on top of your firewall is probably the most dangerous thing an organization could do because VoIP is so easy to attack due to the stateless nature of its protocols.  By implementing VoIP on a firewall you are essentially negating the firewall.  For security, VoIP should run on its own dedicated server on its own dedicated network, regardless of any need to be PCI compliant.

Finally, there is no pricing provided for the USB “key”.  I always get concerned about “wonder” solutions that do not provide pricing without contacting the vendor’s sales operation.  Nine times out of ten, all this does is subject potential customers to relentless contact from salespeople until they purchase the solution, which is likely overpriced.

This post is not to say that this solution is not appropriate for other organizations.  However, if you need to be PCI compliant, this solution is not for your organization if it is implemented as the vendor describes.

29 Jun 13

Remembering Walt Conway

It is with deep regret that I report the passing of my friend, Walt Conway, on Tuesday, June 26, 2013.

Walt and I met at the first PCI Community Meeting in Toronto back in the fall of 2007.  We both attended the infamous card brand session on pre-authorization data at that meeting.  We bantered back and forth periodically about the ‘take aways’ from that session.  For the record, I do not think we ever completely agreed on our interpretations of the ‘take aways’.  However, we did agree that pre-authorization data was to be protected just the same as post-authorization data.

Over the years Walt would contact me about mistakes, misinterpretations or misunderstandings regarding my postings.  A number of times, I sent Walt the posting and he would edit it so that he felt it better represented the PCI SSC’s interpretation.  I always appreciated his help in that regard because the point of this blog is to provide people with accurate information.

Walt took over as the PCI commentator at StoreFrontBackTalk after the sudden passing of David Taylor a number of years back.  I always enjoyed Walt’s columns and that is why I posted the link on my blog to StoreFrontBackTalk.  While a number of times we covered the same topics, I always appreciated Walt’s take on those same topics.

Walt’s family has requested that donations be made to the Walter T. Conway, Jr. Fund at Episcopal Community Services.  You can make a donation either on-line at http://www.ecs-sf.org or by mail at Episcopal Community Services, 165 Eighth Street, 3rd Floor, San Francisco, CA 94103 to the Skills Center/CHEFS program.

Rest in peace my friend.  I will miss you.

13 Feb 13

What If?

Here is a thought provoking question that was posed to me recently by a former accomplice in the PCI world.

What if PCI DSS assessments were only required until a merchant proved they were PCI compliant or if a merchant had been breached?

The premise behind this idea is simple.  There are going to be merchants that get information security and there are those merchants that are never going to get information security no matter the carrots or sticks employed.  Not that merchants that get information security cannot be breached, but the likelihood is significantly lower than merchants that do not get information security.

Merchants would go through the PCI DSS assessment process, clean up their act and ensure they are compliant, and then they would only have to go back through the process if they were breached.  As a best practice, merchants could choose to reassess themselves after significant changes to their cardholder data environments to make sure no new security issues had been created, or at intervals of, say, three to five years.

In the event that a merchant were breached, the PCI assessment process would be required annually for the next three years or until all of the card brands involved agreed to drop the reporting requirement, whichever comes first.  Level 1 merchants would go through the Report On Compliance (ROC) process performed by a QSA or an ISA.  All other merchant levels could use the appropriate self-assessment questionnaire (SAQ), but that SAQ would have to be reviewed and signed off by a QSA.

High risk organizations that process, store or transmit large volumes of cardholder data, such as processors or service providers that do transaction reporting, statement rendering or other similar services, would still have to go through the ROC or SAQ D as they do today based on the levels defined by the card brands.

Service providers such as managed security service providers (MSSP) or cloud service providers (CSP) would be required to go through an annual SAQ D, at a minimum, or a ROC, if they desired, for all services that are required to be PCI compliant.  The ROC would have to be prepared by a QSA or ISA and the SAQ D would have to be reviewed and signed off by a QSA.

As with merchants, if a service provider suffers a breach, all bets are off and they must have a ROC done by a QSA or ISA for the next three years or until the card brands tell them to stop.

Visa and MasterCard currently maintain lists of PCI compliant service providers.  Service providers pay to be listed, and the qualifications to get on those lists would also not change.

Regardless of whether an organization is a merchant or service provider, the quarterly external and internal vulnerability scanning and annual external and internal penetration testing requirements would remain the same.  Merchants would be required to file their results with their acquiring bank or processor(s).  Service providers would file their scanning and penetration testing results with the relevant card brands.  The scanning and penetration testing just help to keep everyone honest in their efforts to maintain their security.

I have to say, it sounds like a rational process to me if you accept the original premise that organizations will either do what it takes to be secure or will not.  Thoughts?

06 Jan 13

Security And Compliance

I have written a lot about this topic over the years and was recently reviewing my Compliance Is Not Security – Busted! post and the comments that came in regarding it.

A theme of a number of the comments was that compliance does not equal security.  DUH!

I have never once said or even implied that compliance equaled security as – yes, here it comes – security is not perfect!  However, if you are complying with any security program/framework such as the PCI DSS, ISO 27K, etc., then you are likely more secure than those who are not.

Security technology such as firewalls, routers, servers, applications, etc. can all be set up with rules that are complied with 100% of the time, day in and day out, no exceptions.  The problem comes down to people who are fallible.  Their compliance is never 100% and you are probably lucky to have anyone above 90%, no matter how much security awareness training you do.  As a result, in organizations that are truly complying with the PCI standards, this is where the security breach starts, with people for one reason or another.

No, I am not necessarily talking about social engineering, although social engineering is growing precisely because organizations have invested a lot in security technologies yet people remain fallible.  People can be the root cause for any or all of the following reasons.

  • How dare you do that to me!  This is the most obvious of the people issues that comes to mind.  Face it, when backed into a corner, people lash out just like a trapped animal.  The supposedly wronged party wants their proverbial “pound of flesh.”  They get that pound of flesh by hurting the organization that has just hurt them.  This can range from taking office supplies to downloading databases to a USB drive as they empty their desk.  Obviously, a database, network or system administrator’s access is much different than a clerk’s.  However, if your security is minimal on the inside, as it is in most organizations, the clerk may actually have better access than the administrators when it comes to sensitive information.  Such a situation may not be the fault of the administrators; that old version of POS or ERP may not have the ability to be more granular regarding access to information.
  • Inundated with alerts and unable to distinguish real alerts from false positives.  This typically occurs when an automated tool is implemented but never tuned to the organization’s environment.  In this sort of an environment, finding real alerts can be like finding a needle in a haystack when there are thousands of alerts an hour scrolling by on the screen.  This usually makes management wonder why the tool was needed in the first place.
  • Saw an alert and ignored it.  We see this most often coupled with the aforementioned inundation issue.  The other most common version of this issue is with internally used SSL certificates that were generated incorrectly or are default certificates supplied by the application.  Users see the “There is a problem with this Website’s security certificate” or similar error message in their browser whenever these flawed certificates are encountered and become conditioned to ignore it.  Over time, that conditioning extends to all of these sorts of messages, including those for malware infected Web sites and, surprise, you have been compromised.  I have lost count of how many people have said to me, “We just ignore those alerts because we know they are false positives.”
  • Saw the alert but got sidetracked and never came back to it.  This is a problem we see all of the time.  For example, the person that monitors the network is also the person that manages the network and configures the network.  An alert comes in and the person begins a root cause analysis (RCA) only to get pulled away because a remote facility is offline.  The offline issue gets resolved, but other issues come up, as well as meetings and telephone calls, and the person never gets back to the RCA for the alert because there is no “tickler” to remind them to go back and complete it.  In the meantime, the attacker has gained their beachhead and is probing the network for whatever value it may contain.
  • Just did not put together all of the pieces to know they were compromised.  Like the reasons 9/11 occurred, most organizations do not correlate all of the potential incidents occurring in their networks and therefore do not understand that there is an active effort to compromise their network or that they have already been compromised until well after the incident has caused damage.  The reason this is important is that once an attacker is inside your organization’s security perimeter, it is typically game over because there are few controls to prevent access and identify that data is being taken.

If you have read the Verizon Business Services Data Breach Investigations Reports (DBIR) over the years, you know how the bulk of attacks get inside: they are the result of people.  For the last two years, the DBIR has used the VERIS Event Threat Grid to show how breaches occur.  Across the top of the grid are the categories: Malware, Hacking, Social, Misuse, Physical, Error and Environmental.  The Social, Misuse and Error categories imply mistakes or deliberate acts of people.  If you read the definitions on the VERIS Web site, Malware is also very people centric, as is Hacking.  Surprising to some, the Physical and Environmental categories also include a good number of people errors.  Based on just a quick read, it looks to be that about 60% to 70% of all of the incidents categorized by VERIS have some form of people error component.

Since we are not going to get rid of people in our organizations any time soon, what are you to do?

  • Admit that people are the problem and focus your security measures accordingly.  Every 12 step program says the first step is to admit the problem, which, in this case, is that people are fallible.  As a result, we need to construct our security measures such that this fallibility is minimized as much as possible.  One of the best solutions is to integrate alerts into your help desk or change management system so that a ticket is generated.  Those tickets need to have an escalation process behind them so that if they are not investigated within a period of time, they are bumped up to the next higher rung of management, and that escalation continues until the tickets are finally addressed.  This way there is visibility for the alerts should they slip through the cracks.  As a side benefit of this approach, you gain statistics to reinforce why you need more staff and/or more/better tools.  (A rough sketch of this escalation logic follows this list.)
  • Strengthen your internal security measures.  As things stand, once inside most organizations’ security perimeters, there is very little that stands in the way of an experienced attacker getting the data they desire.  Regardless of whether it is an insider attack or an attacker has managed to get inside, there is already justification for organizations to beef up their internal security measures.  To address this problem, I would recommend the security architectures documented in my Fort Knox approach, Forrester’s Zero Trust Model or McGladrey’s Ultra Secure Network.  But most organizations do not have the infrastructure architecture, the application architecture or even the will to take such approaches.  That does not excuse an organization from saying it cannot do anything.  If anything, most organizations could vastly improve the monitoring they do on their internal networks.  Monitoring needs to be coupled with reducing the total number of ports that are open between network segments.  Most internal networks do a terrible job of this for a variety of reasons, from application people who cannot tell which ports need to be open to administrators avoiding operational issues by just leaving things open.  Another area of improvement is reviewing user access rights on all systems and applications, not just those in scope for PCI compliance.
  • Constantly tune your alerting system(s).  Just as attack methods are not static, neither are networks, systems and applications.  Changes are occurring all of the time in an organization’s IT environment, yet if you ask the people running the SIEM about changes, nine times out of ten nothing seems to be changing other than requests to look for a new signature or anomaly.  There is a belief in the SIEM user community that a SIEM’s update process makes the necessary changes in the policies that ship with the SIEM.  To a certain extent, SIEM solutions are similar to anti-virus and anti-malware solutions.  However, because a SIEM monitors log data and the log data provided varies greatly from organization to organization, each organization needs to periodically review and adjust its alerting criteria to make sure that it reflects the organization’s operating environment and not just some template from the SIEM vendor.  If an organization is not reviewing its SIEM alerting rules based on the changes made, at least quarterly, then it is highly likely that the SIEM is not alerting properly.
  • Establish separate consoles from your SIEM for network, system, security and application administrators.  What a network administrator is looking for is vastly different from what an application administrator is looking for, and from what any particular group might be looking for to generate an alert.  As a result, to have only one console is really silly and non-productive.  Yet time and again, we see SIEM implementations with just that: one console and everyone being driven by email or SMS alerts.  The people alerted then have to get to the SIEM to find out what exactly triggered the alert and then determine what to do about it.  Having your own console view simplifies things by listing only that viewer’s alerts and no one else’s.  This allows people to focus on their own problems and not the whole organization’s problems.  The idea behind the single console is that if everyone knows what is going on overall, then correlation would occur because everyone sees everything.  While you would think that would be the case, in reality, people just want to fix their problem and move on.  Which leads to my last point.
  • Watch the overall alerting picture so that correlations can be made.  According to most sources, today’s attacks are becoming more sophisticated and multi-pronged in their approach.  For example, while most DDoS attacks are just meant to be a pain in the posterior to the target and disrupt access to the target’s Web site, there are DDoS attacks that are used as cover so that people inside are blinded to the real attack(s).  Whether or not a DDoS was a decoy depends on what other events or incidents occurred during the DDoS attack, if your alerting system did its work.  Higher end SIEM solutions can provide basic correlation rules, but most SIEM solutions require the end user to develop those correlation rules.  It is these correlation rules that help organizations identify these more sophisticated attacks.  That said, these correlation rules do not have to be very sophisticated.  For example, during a DDoS attack, you really only need to look for malware attacks, failed authentication attempts and other anomalies that would be likely indicators of the DDoS attack being used to mask the real attack.
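
As promised in the first recommendation above, here is a minimal sketch of alert-to-ticket escalation logic in Python.  The escalation ladder, time thresholds and notification hook are all hypothetical; in practice they would come from your help desk or change management system.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical escalation ladder: how long an alert ticket may sit
    # unacknowledged at each level before being bumped up the chain.
    ESCALATION_LADDER = [
        ("analyst", timedelta(minutes=30)),
        ("security manager", timedelta(hours=2)),
        ("IT director", timedelta(hours=8)),
        ("CIO", timedelta(hours=24)),
    ]

    @dataclass
    class AlertTicket:
        alert_id: str
        opened: datetime
        acknowledged: bool = False
        level: int = 0  # index into ESCALATION_LADDER

    def escalate_stale_tickets(tickets, now):
        """Bump unacknowledged tickets to the next rung of management."""
        for ticket in tickets:
            if ticket.acknowledged:
                continue
            _, max_age = ESCALATION_LADDER[ticket.level]
            if now - ticket.opened > max_age and ticket.level < len(ESCALATION_LADDER) - 1:
                ticket.level += 1
                notify(ESCALATION_LADDER[ticket.level][0], ticket)

    def notify(owner, ticket):
        # Stand-in for an email, SMS or help desk re-assignment.
        print(f"Ticket {ticket.alert_id} escalated to {owner}")

Run on a schedule, something like this guarantees that an ignored alert eventually lands on the desk of someone with the authority to act, and the escalation history doubles as the staffing statistics mentioned above.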

Is all of this going to address your security issues?  Sorry, not a chance.  None of the above stops all breaches; it merely minimizes the possibility that a breach goes on for months or years.  Hopefully it cuts a breach down to weeks, days, maybe even hours in some cases, but it will never totally eliminate them.  Security is not perfect.

There is a side benefit to all of this and that is it will assist you in doing RCA.  RCA is very effective in getting rid of those nagging operation issues that occur from time to time and mess up the delivery of your organization’s goods and services.  All of the information you collect for security purposes can also be used to find the needle in the haystack that is causing a database to corrupt, a network connection to drop or a server to fail because now you have information as to what was going on that led up to the problem.

The reason an organization is not secure is that there are so many areas needing improvement that the full control triad is no longer functioning, and holes exist that allow an attacker to operate without the organization’s knowledge.  Until the controls are implemented and operating properly, it will be impossible to determine whether the organization is secure or not.  The recommendations I have made will hopefully give you a better picture of what you face and help you react to issues that need attention before your organization is the next one to be breached.

22 Dec 12

What To Focus On In 2013

It is the end of the year and, like all of the other pundits, I have some ideas on what 2013 will bring in the way of security issues.  After reading a lot of the other predictions out there, I tend to agree with the researchers behind Verizon Business Services’ Data Breach Investigation Report.  While everyone else is predicting cyber-Armageddon as the biggest threat, the researchers at Verizon Business Services see a lot more of the same for 2013.

The biggest threat Verizon identifies is more attacks on authentication systems.  This is most likely because your vendor or your developers talked you into storing authentication information in your Internet-facing database.  We see this all of the time with eCommerce and Internet banking solutions.  The external user credentials end up being stored in the database along with order entry, inventory and pricing data.  This is typically done because using a directory system for such purposes is difficult and, at times, not as functional as when authentication data is stored in and used from a database.  Given the prevalence of SQL attacks, all of that information ends up available for the taking through a SQL injection attack.  As a result, the attackers compromise the authentication system, gain access to everyone’s credentials, including administrators’, and it is likely ‘game over’ regarding the rest of your security measures.
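
For those who have not seen the mechanics, here is a minimal sketch of the exposure in Python; the table and column names are hypothetical.  The first function builds its query by string concatenation, so crafted input can rewrite the query itself and dump every credential; the second binds the input as data, which is the standard defense.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")

    def lookup_vulnerable(conn, username):
        # String concatenation: attacker-supplied input becomes part of
        # the SQL statement and can dump every credential in the table.
        return conn.execute(
            "SELECT password_hash FROM users WHERE username = '" + username + "'"
        ).fetchall()

    def lookup_parameterized(conn, username):
        # Parameterized query: the input is bound as data, never as SQL.
        return conn.execute(
            "SELECT password_hash FROM users WHERE username = ?", (username,)
        ).fetchall()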

I want to touch on social engineering next because it is typically directly related to compromising authentication systems, even though the attack method of second-most concern to Verizon researchers is application attacks.  Social engineering is all about tricking your end users into giving up key information so that an attacker can compromise your environment.  The most common piece of information an attacker tries to obtain is an end user’s credentials for logging onto the network; hence the reason I wanted to discuss this right after the authentication system attacks.  Social engineering is the most insidious of attack methods because it does not involve any of an organization’s security technology.  And worst of all, if social engineering is successful, all or most of your organization’s security technology is effectively neutralized.  That is because most organizations have little or no security once someone is on the inside.

Now let us look at the second most concerning attack to Verizon, which is application attacks.  Verizon is saying that application attacks are more of a threat to governments and large organizations.  Regardless of the target, any organization with an application presence on the Internet is a potential target, such as with eCommerce or Internet banking.  A lot of these applications are based on online frameworks such as IBM’s Websphere or Oracle’s Application Framework.  It is not that these frameworks are insecure; it is that they still require development effort, and those custom development efforts do not guarantee a secure application.  The problem comes from the fact that a lot of people believe that using a framework means little to no security testing is needed, even though the amount of custom development done in these frameworks can be more extensive than starting from scratch.  As a result, we see a lot of organizations tossing Internet applications into production with little or no security testing and then ending up with breaches.

In addition, there are third party applications served up by application service providers (ASP).  A lot of small and mid-sized businesses (SMB) use these sorts of applications to have an online presence.  As a result, a lot of SMBs believe that these solutions do not require any security testing because the vendor and the ASP do that for them.  However, we are encountering more and more attacks on SMBs, particularly those that have wealthy clientele such as country clubs and exclusive financial institutions, because their applications are notorious for not being secure.  SMBs are constantly amazed that (1) they were targeted and (2) the application was not better secured.  Yet attackers know that while the take from an SMB could be significantly less than from a large organization, an SMB is usually easier to compromise because it does not have the security and monitoring resources of a large organization.  As a result, SMBs are becoming an increasingly attractive target.

Finally, something that concerns me as much as the previously discussed threats is mobile devices and devices not under the organization’s control, also known as ‘bring your own device’ or BYOD.  I think these devices will surpass the other three threats over the next few years because most organizations have difficulty maintaining security on their servers, desktops and notebooks, let alone something like an iPhone or an Android tablet.  The worst thing about mobile devices is that they are so easily lost; it fascinates me how many people lose their mobile devices.  The bottom line about mobile devices and BYOD is that you must be very, very careful as to what you allow these devices to access and how you grant that access.  And you must make sure that these devices are not allowed to download information, even if encrypted, as that information is highly likely to be lost.

So what should you be doing regarding these threats?  Here are my top things organizations should be doing to minimize the risks presented by these threats.

  • Trust no one.  This is particularly true of mobile devices or BYODs, but it also applies to your own internal systems.  Forrester has promoted this in their ‘Zero Trust Model’, as have I in the ‘Fort Knox Approach’.  This is not as easy as one might think, but the approach makes sense in these days of attacks on authentication systems and social engineering approaches.
  • Classify your data.  This is usually a difficult project, but it pays dividends at the end because everyone understands why certain information cannot be allowed out of the control of the organization.  It also allows people to justify to others why data cannot be accessed by people who do not need access, whether internally or via mobile devices and BYOD.
  • Require encryption on mobile devices and BYOD.  Even if you do not allow data to end up on these devices, you do not want whatever might be inadvertently stored in memory or elsewhere on these devices out of your control.  As a result, if you encrypt these devices, there is a high likelihood that, if they are lost, the person finding them will just wipe them and start over.
  • When possible, use a directory system for authentication.  This is always painful for systems that operate outside of the traditional control environment of internal users.  Directory systems are usually designed to be more secure than any sort of database authentication system because they are assumed to be at risk from the start.  However, just because they are designed to be secure does not mean they cannot be implemented in an insecure manner.  Windows Active Directory takes a lot of heat for being insecure; however a lot of that heat is due to silly implementations to support insecure authentication methods for compatibility.
  • Conduct security awareness training.  The only thing that minimizes social engineering is consistent and regular security awareness training.  However, do not kid yourself or management.  Everyone has their ‘moments’ and does something they should not.  That said there are always those that just never seem to get it which is why you need other controls and monitoring to ensure you maintain your security.  However, to just throw up your hands and say it is pointless is also not a position to take.
  • Secure your applications.  This means conducting application code reviews and testing applications before they are put into production, not after the fact.  Unlike networks, where you need to put them into production before testing them, applications can be tested before going into production.  It amazes me how many organizations put their applications into production and, by the time they finally get around to testing them, they have already been compromised.  And while automated application code testing solutions are all the rage, we still find that the best results come from the more traditional human code review, not automated tools.
  • Monitor your network and applications.  This is a double-edged sword.  You know what to look for; however, you have so many ports open that it is nearly impossible to distinguish bad traffic from good traffic.  And it is not necessarily the fault of your IT department, as most packaged applications require an inordinate number of open ports to function properly.  However, the key thing to monitor, more than anything, is any traffic going outside of your network to an unknown location.  When you see traffic going to Eastern Europe, China or any unexpected IP address, your monitoring system should generate an alert, as that is typically a key indicator that you have been compromised.  (A rough sketch of this check follows.)
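
The core of that last check can be as simple as comparing outbound destinations against a short approved list.  Here is a minimal sketch in Python; the address ranges and alert hook are assumptions for illustration only.

    import ipaddress

    # Hypothetical approved external destinations, e.g. your processor.
    APPROVED_DESTINATIONS = [
        ipaddress.ip_network("203.0.113.0/24"),  # example processor range
    ]

    def check_outbound(src_ip, dst_ip):
        """Alert on outbound traffic to any unexpected external address."""
        dst = ipaddress.ip_address(dst_ip)
        if dst.is_private:
            return  # internal traffic is covered by other controls
        if not any(dst in net for net in APPROVED_DESTINATIONS):
            raise_alert(f"Unexpected outbound traffic {src_ip} -> {dst_ip}")

    def raise_alert(message):
        # Stand-in for generating a SIEM event or help desk ticket.
        print("ALERT:", message)

    check_outbound("10.1.1.25", "198.51.100.7")  # fires an alert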

Have a happy holiday season and I will “see” you all next year.

24 Oct 12

The Barnes & Noble Breach Take Aways

On October 24, 2012 it was announced that Barnes & Noble had suffered a credit card breach that was the result of tampered credit card terminals.  As a result of the breach, Barnes & Noble pulled all of the credit card terminals out of their stores so that they could be examined.  The story published in the New York Times has some points that should be interesting to other large merchants.

“We have acted at the direction of the U.S. government and they have specifically told us not to disclose it, and there we have complied.”

This is probably the most important take away you should have because a lot of incident response plans miss this point.  While the credit card companies want to be notified immediately of a breach, law enforcement should be the first outside entity notified and then the card companies, if approved by law enforcement.  The reason is that law enforcement may want the breach to continue in an effort to more easily identify and apprehend the perpetrators, and that may include allowing the perpetrators to use the stolen cards for purchases.

But the next question that typically comes up is who in law enforcement should be notified?  If you are not a large or regional entity, then you should notify your local police department or county sheriff.  If you are a regional or large sized merchant, you should contact the United States Secret Service and/or the Federal Bureau of Investigation.  In either case, whatever law enforcement entity you contact should be consulted with before notifying anyone else outside the organization and that includes notifying the card brands.

“The company determined that only one keypad in each of the 63 stores had been hacked.  Nevertheless, the company has not reinstalled the devices.”

The 63 stores involved were all across the country, from San Diego, Miami, Chicago and New York to other locations in between.  This implies either a very organized criminal group that operates in a lot of locations or a localized group that was able to infiltrate the operation that configures and ships out the terminals for Barnes & Noble.  Based on investigations similar to this, it is most likely that a criminal operation infiltrated a centralized location that is responsible for the configuration, repair and replacement of credit card terminals for Barnes & Noble.

So what can a merchant do to minimize this sort of attack?  Here are some actions to consider.

  • Contract with only a reliable terminal supplier.  In this age of lowest cost providers, there is a big temptation to use anyone as a supplier, particularly if their costs are the lowest.  However, the old adage of “you get what you pay for” is very relevant in these situations.  As part of your vendor selection process, you should be asking a supplier of terminals what they do to ensure that terminals do not get tampered with.  If you cannot get an answer or the answer you get is “trust us,” then you should probably not consider them as a vendor.  At a minimum, vendors should put their employees through periodic background checks (at least every three to five years), track which employees work on what units, do random physical internal inspections of units and random testing of units to ensure that they are not tampered with before they are sent out.  If you are doing this activity in-house, you should also be following this process.
  • Lock down your terminals.  Anyone that has been into a Barnes & Noble might recall that terminals just sat on the counter.  As a result, they were easy to quickly swap out with a doctored unit.  I have been involved in a number of situations where merchants had terminals doctored because they were easy to swap out.  If terminals are locked in a cradle and only the manager on duty has the key, anyone trying to swap terminals is going to need a key to free the device.  This prevents swaps that occur after hours when only the cleaning people are present.  In addition, the keys to these terminal cradles need to be different for each location so that one key does not open every cradle at every location.  The common key is a lesson the gas station industry has only recently addressed.
  • Use tamper-proof serialized security tape or stickers over the seams of the terminal and check them daily.  This is a trick that has been used for quite a while with gas pumps.  The key is to check the stickers at least daily (I recommend at each manager shift change) to make sure that they are still in place, and to log that activity.  If they have been tampered with or are missing, the lane should be immediately taken out of service and your loss prevention unit contacted.
  • Confirm a terminal swap.  A lot of merchants are very lax in their terminal swap procedures.  If a terminal turns up with instructions to swap it with another or a technician appears at the location with a new terminal, the store personnel do it, no questions asked.  That is wrong.  At a minimum, a good terminal swap procedure should involve the generation of a trouble ticket in a help desk system or similar and having the store manager confirm the swap with the help desk or POS support.  No ticket, no swap, no exceptions.
  • Put video monitoring on all your POS locations.  This does not stop such a swap from occurring, but it does at least record such an event if it does occur.  This is particularly important in situations where the customer also acts as cashier as with any self checkout situation.
  • Use MAC address filtering on your store location networks.  If a device is unplugged and a new device is plugged in with a different MAC address it will not work.  Yes, I know for some of you this creates a bad situation.  But I always ask people in response, “Why should store personnel be swapping equipment in the first place?”
  • Monitor your sensitive devices.  If a credit card terminal or POS gets unplugged from your network, you should generate an alert.  That alert should then be correlated to a help desk ticket.  If there is no ticket, then someone should immediately notify loss prevention and also follow up with store management to find out why the device was unplugged.
  • Monitor your network.  Terminals or POS should only be communicating with your service provider for transaction authorization, and your router(s) and/or firewall(s) should be configured accordingly.  If a terminal or POS attempts to communicate with any other external IP address, that should generate an alert to corporate IT and security that should then be investigated immediately.  This will catch those devices that are tampered with and then transfer data to a server outside of your network.  It is highly likely that the communication will be encrypted, but the traffic will be directed to an external IP address that should be blocked if your firewall(s) or router(s) are configured properly.  (A sketch combining the last three recommendations follows this list.)
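
To show how the last three recommendations fit together, here is a minimal sketch in Python.  The inventory, store names and swap ticket source are hypothetical; in practice they would come from your asset database and help desk system.

    # Hypothetical inventory: which terminal MAC address belongs on
    # which store switch port.
    EXPECTED_MACS = {
        ("store-042", "port-03"): "00:1a:2b:3c:4d:5e",
    }

    def check_port(store, port, observed_mac, open_swap_tickets):
        """Alert when an unexpected device appears with no swap ticket."""
        expected = EXPECTED_MACS.get((store, port))
        if expected is None or observed_mac.lower() == expected.lower():
            return
        # A different device is on a payment terminal port.  Only a
        # confirmed help desk swap ticket makes that legitimate.
        if (store, port) not in open_swap_tickets:
            alert(f"{store}/{port}: unexpected device {observed_mac}, "
                  f"expected {expected}, no swap ticket on file")

    def alert(message):
        # Stand-in for notifying loss prevention and corporate IT.
        print("ALERT:", message)

    check_port("store-042", "port-03", "66:77:88:99:aa:bb", set())  # fires an alert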

UPDATE:

This post got me my “fifteen minutes of fame”, resulting in interviews with the following media sources.

Financial Times

http://www.ft.com/intl/cms/s/0/919e3292-1dd7-11e2-8e1d-00144feabdc0.html?siteedition=intl

CNBC

http://www.cnbc.com/id/49624444

USA Today

http://www.usatoday.com/story/tech/personal/2012/11/05/debit-card–numbers-pins-stolen-at-pos-terminals/1675795/

http://www.usatoday.com/story/tech/2012/11/05/debit-card-account-theft-qa/1677537/


19 Jul 12

Security Awareness Training

With the growth in social engineering used to breach organizations, there has been a growing chorus of security professionals pushing for more and better security awareness programs.  However, Dave Aitel of Immunity, Inc. recently published an article that basically states that employee security awareness training is worthless and should not be done.  While I understand Mr. Aitel’s frustration with employees being a security issue, to stop security awareness training is extremely foolish.

“The clients we typically consult with are large enterprises in financial services or manufacturing.  All of them have sophisticated employee awareness and security training programs in place – and yet even with these programs, they still have an average click-through rate on client-side attacks of at least 5 to 10 percent.”

As someone in a consulting firm that also does social engineering assessments, I can confirm Mr. Aitel’s observation of 5 to 10 percent.  However, I can also tell you that organizations we test that do not have a security awareness program, or have only limited security awareness training, average failure rates of 20 to 30 percent.  Based on our work in social engineering and discussions with other professionals that do social engineering, the 5 to 10 percent click-through rate is unfortunately about the best you will get out of people.  People are fallible, some more than others.  So to just drop security awareness training is not a good idea unless you think doubling or tripling your risk is a good idea.

“Because they’re going to do so anyway, so you might as well plan for it.”

This statement is why Forrester recommends the “Zero Trust” security approach and why I developed the Ultra Secure Network.  But while I wholeheartedly agree with Mr. Aitel’s statement, I differ with him on what that statement means.

Mr. Aitel implies that by improving all of your other security measures you can eliminate the potential for employees to screw up.  He naively believes that by auditing your periphery, improving monitoring, isolating and protecting critical data, segmenting your network, auditing employee access, improving incident response and instituting strong security leadership, organizations can prevent network threats and limit their potential range.  As I always like to say, “In theory, theory works.”

Yes, there is no doubt that organizations need to improve their security posture.  But Mr. Aitel seems to forget that employees are part and parcel of that security posture.  Ultimately, employees, as well as contractors, business partners and others, need to interact with an organization’s information.  Even if you significantly improve all of your other security controls, people still need to access and interact with an organization’s information assets.  The bad news for Mr. Aitel is that people are fallible.  To ignore that fact is foolish and to bury your head in the sand in the belief that you can prevent every social engineering attack with your other controls is sheer folly.

Security awareness training has its place, but it is not a silver bullet nor is any other security control or approach.  The world is full of risks and a security professional’s job is to minimize those risks and manage the remaining residual risk.  Any security professional that believes they can eliminate risk and sells management on that fact is not going to have a career for very long.

The ugly fact of life is that every security control only minimizes security risk and sometimes you get very lucky and the risk is minimized to zero.  In the vast majority of cases there is some amount of residual risk even when a security control is in place.  If your organization is unwilling to accept the remaining residual risk, then the business function causing that risk needs to be not performed.  As I like to tell people that complain about the PCI DSS, “If you don’t want to comply with the PCI DSS and want to totally avoid a card breach, then don’t accept credit/debit cards for payment.”

So continue to conduct security awareness training, but do not mistakenly believe that it will stop people from creating an incident.  Security awareness training only minimizes the risk that people will make a mistake, not eliminate that risk.  This is why security is done in layers, so that when people make that mistake, your other security controls catch the mistake quickly and minimize the impact.

06 May 12

How Many People Does It Take?

There are a lot of jokes that start with the phrase, “How many people does it take …”  But this post is no joke.  I have been taking some heat over my comment that you do not need to know who is attacking you, you only need to focus on what you need to protect.  As such, I felt the need to further explain myself.

The first complaint I get is that it is important for security professionals to know the tactics used by the attacker.

So my first question to all of you is, “How many people does it take to analyze attack vectors?”

We have forensic security organizations such as Verizon, Trustwave and Security Metrics that analyze attacks.  We have security companies such as IBM/ISS, Symantec, Kaspersky and McAfee that analyze attacks.  We have hardware/software vendors such as Checkpoint, Microsoft, Cisco and Palo Alto that analyze attacks.  I would venture to say there are hundreds of reliable sources for the analysis of attacks.  And yet, I am taken to task that you need to have your own analysis of attacks.  These hundreds of other sources just are not enough for you to rely on?  Really?  If you are doing the correct analysis of your vulnerability scanning and penetration testing reports, your attack vector risks should be known and you should have either patched or developed mitigations for those risks.

And while they might be put together in a slightly different sequence, DDoS is still DDoS and a SQL Injection is still a SQL Injection.  The bottom line is that the library of exploits available to an attacker is essentially finite.  This is proven out by the statistics that the forensic firms publish year after year.  As such, you should be able to monitor for all of these attacks fairly easily because they are all known quantities.  Yes, there is the rare Zero-Day that turns up every so often.  But, even those can be picked up if you have things configured and implemented properly.  If you think about it, unless an attacker is someone that can develop their own exploit code (and 99% do not), they are limited to whatever exploits are available in the public domain of exploits and that is a known quantity.  Take an inventory of what is available in Metasploit or Core Impact at any fixed point in time and you will see what I mean.

Then there is the group that argues that if you do not do analysis of the attacker, you cannot understand why you are being attacked.

So my second question is, “How many people does it take to give you an idea of why you are being attacked?”

This is pretty straightforward to figure out without some extensive and intensive analysis.  In 99% of cases, you are being attacked for one or more of the following reasons.

  • Your organization has sensitive information such as credit card numbers, bank account numbers, intellectual property or customer information that the attackers want.
  • Your organization has produced a product or service that has been perceived to be a safety hazard, overpriced or other detriment to society.
  • Your organization or an employee has publicly taken a stance on some issue(s) that has irritated some group(s) of people.
  • Your organization has donated money, time, products or services to an organization viewed by some group(s) of people as questionable.

Read the reports published by the forensic firms.  Read the news reports in the media.  If you distill down that information, the reasons for attacks break down into these four basic reasons.  Yet, security professionals continue to worry about the motivations of the attacker.  If you think your attack is unique, you are wasting your time.  The likelihood of your attack not being covered by these four primary reasons is slim to none.

I think these complaints just come down to the fact that doing the actual grunt work of security is just not very sexy work.  There is no doubt about that fact.  Ensuring the security of networks 24x7x365 is very, very monotonous work.  And it is that very monotony that is one of the primary reasons why organizations get breached.  People get bored with the monotony and they start to cut corners on procedures because, in their view, nothing is going on and, therefore, nothing will go on.  Only rotation of people and tasks will address the monotony, but that only works for so long.

This is why security professionals turn to automated tools to minimize reliance on people to flag potential anomalies.  Without tools, people get bored very quickly searching for the “needle in the haystack” through all of the data produced by all of the devices on your network.  However, even with all of the necessary tools, correlation of information still requires people to bring all of the anomalies recognized by the tools together and determine if all of these anomalies warrant further investigation.

Even with the necessary tools, you are not out of the woods.  One of the more common problems that we encounter is that organizations have not completely implemented those tools.  How many of you invested in a cool intrusion prevention system and still run it in notification mode?  Even then, those organizations that do completely implement the tools do not always keep up on the “care and feeding” of the tools to ensure that the tools recognize the anomalies.  The tools are current and up to date, but anomalies are not recognized because the tools are not properly configured and tuned to the organization’s current network configuration.  Networks are not the static environments that a lot of people think they are.  As a result, either the number of false positives is so high that personnel ignore the voluminous number of alerts generated or anomalies are just never identified by the tools.

It is not until someone finally recognizes an anomaly for a breach that it finally gets interesting.  Then things become very interesting in a hurry.  Unfortunately, the statistics from the forensic firms point to the fact that, if an anomaly does get recognized, it is often many months to years down the road from the original compromise.

And that is where security professionals need to get better.  If you look at how long it took TJX to recognize their breach (years) versus how long it took Global Payments (months, but still counting), we are headed in the right direction.  But when it takes attackers only minutes, hours or even days to get your information, months still does not cut it.  We need to get to days or, better yet, minutes.  That is the challenge security professionals face and that is where we need to focus our efforts.

The PCI DSS is a good foundation, but the requirements of the PCI DSS are not going to get us to our goal.  We must go beyond the PCI DSS to get to our goal and that is a message that the PCI SSC and the card brands have consistently delivered.  The PCI DSS is only a security baseline, the ante into the game.  If you really want to be the best, you need to take your security game beyond the PCI DSS.

So let us start using the PCI DSS properly.  If your organization can execute the requirements of the PCI DSS 24x7x365 at almost 100% compliance, then you are ready to take things to the next level.  If you cannot achieve almost 100% compliance, then you need to work with your organization to get to that level.  Breaches and data loss are never going to go away, but if all organizations followed this approach, the number of breaches and amount of data lost would significantly drop.

12 Feb 12

Cannot Say It Better

If you read nothing else this week, you need to read this posting by Daniel E. Geer, Jr., Sc.D.

People in the Loop: Are They a Failsafe or a Liability?




Announcements

FishNet Security is looking for experienced QSAs for their PCI practice. If you are an experienced QSA and are looking for a change, go to the Web site (http://www.fishnetsecurity.com/company/careers), search for 'PCI' and apply.

If you are posting a comment, be patient, as the comments will not be published until they are approved.

If your organization has a PCI opportunity, is in need of assistance with a PCI issue or if you would like the PCI Guru to speak at your meeting, you can contact the PCI Guru at pciguru AT gmail DOT com.

I do allow vendors to post potential solutions in response to issues that I bring up in posts. However, the PCI Guru does not endorse any specific products, so "Caveat Emptor" - let the buyer beware. Also, if I feel that a response is too "sales-y", I reserve the right to edit it or not authorize it at all.
