Posts Tagged ‘compliance’

01 Mar 14

How Did It Happen?

This is just my supposition on how the Target breach occurred, but it is based on what has been released to date plus what limited knowledge I have of Target’s environment, the environments of other large retailers and my many years of penetration testing.

Fazio Mechanical Services

According to the latest reporting, Fazio Mechanical Services (Fazio) is believed to be the starting point of the Target breach.  From what has been reported, a phishing attack on Fazio yielded access to Fazio’s computer systems and network.  In their statement regarding the breach, Fazio says:

 “Fazio Mechanical does not perform remote monitoring or control of heating, cooling or refrigeration systems for Target.”

“Our data connection with Target was exclusively for electronic billing, contract submission and project management, and Target is the only customer for whom we manage these processes on a remote basis. No other customers have been affected by the breach.”

If we take Fazio at their word, Fazio did not have direct access to Target’s network.  That means if Fazio was breached, that breach did not result in a direct path to Target’s network.  Brian Krebs reported that he spoke with an ex-Target employee who told him that Target uses the Ariba Supplier Management solution for managing its external vendors.  The Ariba system is available publicly on the Internet but it requires credentials in order to gain access to the application.

Based on these facts, my guess is that the Fazio attackers were likely separate from the Target attackers.  The Fazio breach was probably like most breaches; the attackers get in, probe around and then leave if nothing of value can be identified.  That is not to say Fazio could not have been targeted as a way into Target, but I find it unlikely that Fazio was specifically targeted for the Target breach.

The Fazio attackers likely advertised the information and credentials that they gathered to other attackers on the Internet “underground” and sold them to whoever was willing to pay including the Target attackers.

The Russians

In my opinion, the Russians that eventually sold the card information were probably not the actual attackers that retrieved the cardholder data from Target.  However, they could well have been behind the attack as the people that instigated and funded it.  Until these individuals admit their role, we will probably never know whether they were just a fence for the information retrieved or were actually behind the attack.

In my scenario, the Russians began scoping out likely candidates for compromise and picked Target because they found information on the Internet “underground” and determined that it was likely possible to successfully get in and get information.  Once the research was done, they then assembled a team to get the actual attack done.

The Malware

In the various news accounts, the Secret Service indicated that the attack was sophisticated.  According to the infamous Microsoft case study, Target had implemented Microsoft Windows Server Update Services (WSUS), now part of Microsoft System Center Operations Manager (SCOM), at all of their stores so that they could deploy updates in the smallest possible time frame.  In the retail business, IT people get very small windows of opportunity to perform updates, so this architecture would give IT the ability to stage updates and then deploy them as quickly as possible.

A lot of people have commented throughout the numerous discussions of the breach on Google+, LinkedIn and Twitter questioning how the attackers could have compromised so many POS systems so quickly.  It is my opinion that this was done through SCOM.

But there is a huge problem with using SCOM when the software is not Microsoft’s – SCOM can be somewhat to very temperamental when it comes to deploying non-Microsoft software and updates.  Over the years it has gotten better with some non-Microsoft solutions, but deploying malware via SCOM and having it work right the first time requires knowledge of not only SCOM but also the Target computing environment.

This brings me to the fact that I believe an insider had to have been involved in the breach.  Not necessarily an actual Target employee, although that cannot necessarily be ruled out, but more likely a knowledgeable contractor.  Like all large corporations, Target outsources development to contractors that have offices and staff located all over the world.  Those contractors also have their own contractors that are located all over the world.  It is my opinion that the Russians compromised one or more contractors with development knowledge of Target’s POS application and deployment of the POS software.  Such knowledge was required to develop the malware from the BlackPOS code and to build a SCOM deployment that would succeed on the first try.  Whether or not these individuals were actually part of the attack team is debatable.  They would only be needed to develop the solution, the SCOM deployment scripts and possibly procedures to avoid Target’s QA process.

Outsourced contractors in third world countries can be readily bought.  People in the West forget that these developers can be making anywhere from cents per hour to only a few dollars an hour.  That is why development work is outsourced to them; it is more cost effective than using developers who make one hundred dollars per hour or even more.

But that brings up an interesting conundrum in this breach.  If a contractor was compromised, could they not still be involved in Target’s development efforts and just deliver the malware directly as part of their deliverable?  I think that could have been a possibility, but it would have risked discovery in Target’s code review, quality assurance and testing processes, which is probably why the malware was not delivered by that method.

The Attackers

The attackers could have come from anywhere, but most likely were from Russia or one of the former Soviet states such as Ukraine or Belarus.  The reason this is most likely is that the people that sold the Target cardholder data were Russians, and they would want people with the same background to execute the attack so that they had some amount of control over the attack team.

The Attack

The attackers that broke into Target likely went shopping for ways into Target and found the Fazio Ariba credentials for Target as well as probably other credentials to other publicly available Target applications.  The attackers either bought those credentials or had their Russian bosses purchase those credentials.

I had to put my penetration testing hat on to figure out how the Ariba credentials came into play.  The reason is that if Ariba is available from the Internet to anyone, why would an attacker need credentials?  Then it dawned on me.  They needed the credentials in order to compromise Target quietly.

My rationale for this is that Target does a decent job at securing publicly facing applications, particularly since their 2007 breach.  Assuming the Ariba application was properly implemented, attacking it without credentials would have alerted Target’s information security personnel and it would have been game over.

As a result, the attackers needed the credentials to gain access to Ariba, compromise it with a cross site scripting attack, SQL injection or whatever technique got them onto one or more of the Ariba servers, and from there breach the rest of Target’s network, specifically the SCOM system(s).  This approach would be more likely to go unnoticed because the attackers had valid credentials, so any anomalous activity would likely be written off by Target personnel.

This brings us to the next reason I believe an insider is involved.  The timeline discussed thus far gives the impression that the breach was a fairly quick operation.  The only way the breach could have been conducted so quickly is if the attackers had knowledge of where they needed to go to compromise the SCOM system.

That said, the database of Target guests that was also retrieved was likely collateral damage in that it was encountered during the attack and was taken so that the attackers did not walk away empty handed.  The other possibility is that the database was used to test the data exfiltration process to ensure it would go undetected.

Once the attackers owned the Ariba system, they would then have had access to the administrators of Ariba.  The insider would have given the attackers an idea of where the SCOM system was located and probably who had access.  It then became a process of compromising one of those administrators to gain access to SCOM.  Because they were inside Target’s network, the administrators were likely compromised using an extremely targeted phishing attack using the internal email system.  As a result, the phishing message would have looked all the more legitimate because it was internally generated and delivered.  The message likely contained some sort of Word or Excel document carrying backdoor software that the anti-virus solution would not detect.

However, another option could have been used once the attackers were inside.  They could have approached any of the administrators, pretended to be a contractor and asked for access to SCOM in the test environment.  From there they could have staged their malware and then sent it through the QA process.  Regardless of how they gained access to SCOM, the attackers had to have used the SCOM system to deploy their malware with the speed that they deployed it.

Creating the data dispersal server was a straightforward problem.  With the insider’s knowledge, they knew where FTP was implemented and merely compromised the server to be their own collection point so as not to arouse suspicion.  To get the data out of Target they used DNS, as every system needs access to DNS.  A lot of people have argued that Target should have seen the exfiltration of the data via DNS and have pilloried Target for their ineptitude.  However, if the attackers were as sophisticated as they have been portrayed, they likely constructed their exfiltration system to mimic the size of valid DNS packets, and thus only traffic volume would have been a possible trigger.
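
To make the traffic volume point concrete, below is a minimal sketch of the kind of volume check that could have flagged the exfiltration, assuming hourly per-host DNS query counts are available from logs or flow data; the host names, baseline window and three-sigma threshold are my own illustrative assumptions, not anything reported about Target’s environment.

```python
# Minimal sketch: flag hosts whose DNS query volume spikes far above baseline.
# Field names, baseline window and the 3-sigma threshold are illustrative.
from statistics import mean, stdev

def dns_volume_alerts(baseline_counts, current_counts, sigmas=3.0):
    """baseline_counts: {host: [queries_per_hour, ...]} historical data.
    current_counts: {host: queries_this_hour}.  Returns hosts to investigate."""
    alerts = []
    for host, current in current_counts.items():
        history = baseline_counts.get(host, [])
        if len(history) < 24:  # not enough baseline to judge; skip
            continue
        mu, sd = mean(history), stdev(history)
        if current > mu + sigmas * max(sd, 1.0):
            alerts.append((host, current, round(mu, 1)))
    return alerts

# Example: a register that normally makes ~50 lookups an hour suddenly makes 5,000.
baseline = {"pos-0417": [50, 48, 55, 52] * 6}
print(dns_volume_alerts(baseline, {"pos-0417": 5000}))  # [('pos-0417', 5000, 51.2)]
```

Size-mimicking defeats per-packet inspection, but it cannot hide the sheer number of queries needed to move millions of card records.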

Is this scenario correct?  We will not know until a final report is released, if we ever see a final report that gives actionable information.

That said, I am sure there are a lot of you reading this who are shaking in your boots based on this scenario.  That fear is likely based on the fact that you realize how ill equipped your organization is to deal with this sort of attack.  And you should be scared.  This is a war of escalation that we are waging.  Organizations step up their game and the attackers up the ante on their side.  The famous saying, “When chased by a bear, I only have to outrun the slowest person to save myself,” is very true in this situation as well.  Your organization’s security game only has to be better than the other organizations’.  But when an organization like Target is breached, and they were considered to be at the top of the security game, what chance does an organization with mediocre security have?

The only saving grace might be that your organization is flying under the radar.  I say “might be” because, according to the majority of reports on the state of information security, most organizations have no idea that they have been compromised.  That is because people rely on anti-virus and other technologies that have a poor track record of identifying malware and sophisticated attacks.  And then, as we learned in this past week’s report on the Neiman Marcus breach, you can have information security personnel write off discovered malware as false positive results and let it re-infect systems for months without investigating or even worrying about what was going on.

It is easy to pillory the guy that got breached.  However, a lot of you should look inside your own organizations before tossing stones.  I would guess that most of you tossing those stones would not fare any better, and likely worse, than Target should your organization be breached.

08 Feb 14

Pre-Authorization Data

After a number of interactions with a variety of people over the last few weeks, it has become obvious that the concept of pre-authorization data is not clear to a lot of people.  And just because it is pre-authorization data does not mean that you are not required to protect it.  The Council has made it very clear that it is to be protected with the same rigor as post-authorization data.

Pre-authorization is defined as that time when an organization has a customer’s sensitive authentication data (SAD) but has not yet processed it for payment.  For most merchants, the time between collecting the SAD and processing it is measured in seconds.  For card present transactions, the SAD can be track or chip data.  For card not present transactions, it typically includes the cardholder name, primary account number (PAN), expiration date and CVV2/CVC2/CID2.

Here are some situations where that does not always happen that quickly.

Phone, facsimile and mail orders can extend the amount of time between receipt of the SAD and processing by seconds, minutes or even hours.  On the bizarre side of things, at one client I encountered people sending their physical cards in the mail for processing of their mail orders.

At the fuel pump when a customer swipes their card, the system that manages the payment process will pre-authorize the purchase and then store the SAD until the customer finishes pumping fuel and hangs up the hose to complete the transaction.  That could be five to ten minutes depending on fuel tank size.

In some industries the time of pre-authorization can be weeks, months and in some cases even a year or more.  Where this typically occurs is in the airline and hospitality industries.

When a customer makes an airline reservation, the airline will likely pre-authorize the airfare but may not actually charge the card until 7/14/60/90 days before check in or even until the customer checks in.  This can result in the SAD being stored for weeks or even months.

In the hospitality industry, a reservation typically does not cause a charge until a customer checks out, even though they are required to have a card on file to hold the reservation.  When a customer checks into the property, the hotel’s billing system records the SAD and may also pre-authorize charges, but the actual card transaction is not processed until the customer checks out.  I have encountered SAD in hospitality systems that had been stored for more than a year due to reservations for special occasions such as graduations, birthdays, family reunions and anniversaries.  New versions of hospitality management systems encrypt pre-authorization data; however, older systems did not, and the security of the pre-authorization data may not be as rigorous as it should be.

eWallet applications are another great example of pre-authorization data.  eWallets are just an electronic version of a consumer’s physical wallet.  eWallets can be applications on a smartphone/tablet or a specialized device such as Coin.  eWallets are not regulated by the Council or the card brands and never will be, just as traditional wallets are not regulated.  That said, developers of eWallet applications should follow the PA-DSS to develop secure eWallet applications.

The most confusion over pre-authorization data typically occurs over SAD that an organization receives via email.  A lot of QSAs get their “undies in a bunch” over this and point to requirement 4.2 as the reason why this is unacceptable.  Requirement 4.2 states:

 “Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, chat, etc.).”

The operative word in 4.2 is “send”.  Requirement 4.2 says nothing about receiving PANs by these methods and the reason is that it is pre-authorization data.  That does not mean that the Council recommends receiving PANs via email, IM or similar methods.  It is only recognition of what goes on in the real world.  There will always be a small percentage of people that will send their cardholder data insecurely and there is little an organization can do to stop it.  But even before you start the argument, you need to acknowledge that this data is pre-authorization data because a transaction has yet to be processed.

To keep your email system out of scope, the Council has told QSAs to make sure that organizations have a process to remove the SAD as soon as possible from the system(s) in question.  That typically involves the printing of the message, deleting the message from the system and then securely destroying the printed message once the transaction is processed.  This is all considered “incidental contact” in the eyes of the Council and the QSA can then consider the system out of scope as long as they can satisfy themselves that the manual process is reliable and consistently executed.

Web sites that store customers’ SAD for future purchases are also dealing in pre-authorization data.  Merchants providing such a service should contractually acknowledge their responsibility to protect that data with the same rigor as post-authorization data and that the stored data is only to be used for purchases by the customer on the merchant’s Web site.

My final example of pre-authorization data is when an organization’s travel department stores employees’ card data for company travel reservations.  The systems used by these departments typically store the cardholder name, PAN, expiration date and the CVV2/CVC2/CID2.  When the PCI DSS originally was published, the card brands instructed QSAs that these sorts of systems were not in scope for an organization’s Report On Compliance (ROC).  However, as time has passed, some of the card brands now want those systems included in the ROC, so organizations need to contact the card brands to determine if their ROC should include such systems.  These systems can be interesting in how little protection they can provide for SAD, so regardless of whether they are included in the organization’s assessment, the organization should review the security of the application against the PCI DSS and mitigate any gaps.  As with the aforementioned Web sites that store pre-authorization data, an organization storing SAD for their employees should legally acknowledge their responsibility to protect that data as well as document how it may be used.

The bottom line is that all of these situations involve pre-authorization data, and pre-authorization data can include everything recorded on a card’s magnetic stripe or chip.  Merchants can store pre-authorization data until a payment transaction is processed.  If they do store the pre-authorization data, they are obligated under the PCI DSS to protect it with the same rigor as post-authorization data, e.g., encryption, extremely limited access, logging, monitoring, etc.
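
As a minimal sketch of what “the same rigor” means in practice for stored pre-authorization data, the following encrypts a record before it is ever persisted; it assumes Python’s cryptography package, the PAN shown is a standard test number, and in a real system the key would come from a key management service or HSM rather than live in application code.

```python
# Minimal sketch: encrypt pre-authorization data at rest (illustrative only).
# Assumes the `cryptography` package; in production the key comes from a KMS
# or HSM, and access to it is tightly restricted, logged and monitored.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustrative; never store the key with the data
cipher = Fernet(key)

def store_preauth(record: dict) -> bytes:
    """Serialize and encrypt a pre-authorization record before persisting it."""
    plaintext = "|".join(f"{k}={v}" for k, v in sorted(record.items())).encode()
    return cipher.encrypt(plaintext)  # only this ciphertext is written to disk/DB

def load_preauth(blob: bytes) -> dict:
    """Decrypt only at the moment the payment transaction is processed."""
    fields = cipher.decrypt(blob).decode().split("|")
    return dict(f.split("=", 1) for f in fields)

token = store_preauth({"pan": "4111111111111111", "expiry": "12/15"})
print(load_preauth(token))  # {'expiry': '12/15', 'pan': '4111111111111111'}
```

Keeping decryption confined to the moment of processing is what makes the limited-access, logging and monitoring obligations enforceable.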

Hopefully we are now all on the same page.

07 Dec 13

POS Botnets

Just in time for the holidays.

An article came out this past week regarding botnets that are specifically targeting point of sale (POS) systems.  The reason I bring this up is because of this quote.

“StarDust developers have intimate knowledge of the inner workings of PoS applications such as Clearview PoS.  As a result, the malware can ferret out where in computer memory sensitive data, in some cases in cleartext form, is stored.  StarDust can also sniff network traffic and is able to extract Track1 and Track2 card data.  To remain covert, the software transfers card details only when the terminal is inactive and the screensaver is on. It also uses the RC4 cipher to encrypt data before sending it to the control server.”

Obviously, if your organization uses Clearview POS software you should probably be examining your systems and networks to ensure that they have not been compromised by StarDust.

However, the larger issue is that most merchants do not see themselves as targets of such attacks, let alone construct a secure environment for their POS systems.  Some of this is not entirely the merchant’s fault.  A lot of merchants outsource the maintenance and management of their POS systems to a value added reseller (VAR) and that VAR is the one responsible for the POS network configuration.  Regardless of responsibility, a merchant needs to be aware of these threats and take appropriate action, either internally or with their VAR, to address these threats and minimize risk.

Regardless of whether it is StarDust or a similar threat, here are some steps you can take to minimize and detect such threats.

  • Segment your POS network from the rest of your internal network and limit POS network segment traffic to communication with your processor and your internal system support and operations systems.  This will require the development of network access rules so that traffic can only reach those destinations.  This will limit the number of systems that could compromise your POS environment.
  • Monitor your POS network segment for any traffic that is directed anywhere other than your processor or your system support and operations systems.  Your firewall rules should only allow secure connections between your POS network and those destinations.  Network traffic going anywhere else should be stopped and reported for further investigation (a minimal sketch of this kind of egress check follows this list).
  • Monitor your POS systems for any file or configuration changes.  Most anti-virus solutions can provide this capability, but there are also solutions that are specifically engineered for this task.  Regardless of which you choose, configure the tool to alert you as soon as it identifies a potential change to files or the configuration of a POS system.  If you receive an alert when no approved changes were made to the POS systems, you have likely been compromised.
  • Develop an incident response plan should you receive an alert indicating that your POS systems have been compromised.  An incident response plan provides the organization with a “battle plan” should a compromise occur.  This type of plan is key to minimizing the potential reputational impact to the organization should such an attack be confirmed.  A good incident response plan can keep you from making mistakes as you navigate the minefield that is the media circus that comes with a breach.
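
As referenced in the second bullet, here is a minimal sketch of that kind of egress check, assuming flow records of (source, destination) address pairs are available from your firewall or netflow collector; the subnets are hypothetical placeholders.

```python
# Minimal sketch: flag POS-segment traffic to destinations outside an allow-list.
# The subnets and flow format are hypothetical; real deployments would read
# firewall logs or netflow and feed alerts into monitoring/SIEM tooling.
import ipaddress

POS_SEGMENT = ipaddress.ip_network("10.20.0.0/16")  # assumed POS subnet
ALLOWED_DESTS = [
    ipaddress.ip_network("203.0.113.0/24"),  # processor (example range)
    ipaddress.ip_network("10.99.1.0/24"),    # internal support/operations systems
]

def egress_alerts(flows):
    """flows: iterable of (src_ip, dst_ip) strings.  Yields suspicious pairs."""
    for src, dst in flows:
        src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        if src_ip not in POS_SEGMENT:
            continue  # only police traffic originating in the POS segment
        if not any(dst_ip in net for net in ALLOWED_DESTS):
            yield (src, dst)  # stop, report and investigate

# Example: a register talking to an unknown external host gets flagged.
for alert in egress_alerts([("10.20.4.17", "198.51.100.77")]):
    print("investigate:", alert)
```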

Three straightforward and simple steps that can minimize the threat of StarDust, plus a documented incident response process should you unfortunately be breached.

Security does not have to be rocket science.

02 Dec 13

PCI DSS v3 and PA-DSS v3 – Wait For It

There are all sorts of QSAs and other experts who are weighing in on the new versions of the PCI DSS and PA-DSS that were released around the first part of November.  In my very humble opinion, all of this discussion is speculation, at best, because the PCI SSC has not released the final pieces of the standards puzzle.  Those final pieces are the respective Reporting Templates for those standards.  Without those templates, QSAs and PA-QSAs have no idea of the true testing and reporting requirements to which they will be held.  As a result, the impact of the changes to the standards will not truly be known until QSAs and PA-QSAs have the templates and can review the new testing and evidence requirements.  While I am not expecting the sort of major changes that resulted when the version 2 Report Instructions were released, there still could be some surprises that could impact the amount of work and evidence collected.  The PCI SSC has not committed officially to a release date for the Reporting Templates, but the rumor is that they will be available around the first part of March 2014.

There are some comments that can be made and I will be covering some of those points on a Webinar sponsored by Tripwire on Monday, December 16, at 7PM UTC / 1PM CST.  If you care to attend, you can register for the session here.  “See” you there.

09 Nov 13

Removing The Drama Of A PCI DSS Assessment

I had to prepare a presentation for a client a while back giving them some tips on how to prepare for and get through a PCI assessment as easily as possible.  I thought it might be good to share those thoughts.

Trust But Verify

This famous quote from US President Ronald Reagan is the mantra of a PCI assessment.

The PCI DSS is based on the “trust” that organizations are complying with the PCI DSS.  However, self-assessment processes and QSAs are used to “verify” that the organization is, in fact, complying with the PCI DSS.  As a result, the organization being assessed not only has to produce documentation to that effect, but the QSA must also observe that the PCI DSS requirements are being followed.

The net is that just because you say something is fact does not make it so; your QSA must substantiate your statements before they, too, can treat them as fact.  If you remember nothing else but this simple truth, you will understand why a QSA must do what they do.

Scope

If PCI assessments go wrong for any reason, this is probably the primary reason.  It fascinates me that people often profess ignorance of the PCI DSS, yet somehow become experts on the subject when it comes to scoping.

Remember point number one, trust but verify.  Under that premise, the PCI SSC makes it a QSA’s primary responsibility to confirm the scope of the PCI assessment as they verify the facts.  As a result, in order to confirm that scope, the QSA must look at everything and then, through investigation and evaluation, determine that the areas you deem out of scope are, in fact, truly out of scope.

Let your QSA ask their questions and conduct their observations without arguing with them about scope.  They are only doing this because they are required to confirm the facts, and your fighting with them about scope is only going to make them wonder what you are trying to hide.  The bottom line is that arguing with your QSA about scope only makes your assessment all the more painful and time consuming.

If you truly want to avoid arguing over scoping, get a copy of the Open Source PCI Scoping Toolkit.  Go through your environment and determine the categories of all of your systems and networks.  This is a good annual exercise because you need to prove your scope every year.

Applicability

According to the PCI SSC, there are five PCI DSS requirements that can never, ever be marked as ‘Not Applicable’: 1.2.3, 3.2.1, 3.2.2, 3.2.3 and 11.1.  I have discussed these all before but they deserve another quick discussion here.

Clients will argue ad nauseam that wireless is not implemented or is out of scope and therefore refuse to discuss wireless.  For requirement 1.2.3, a QSA is required to document the procedures they followed to rule wireless in or out of scope.  That of course means the QSA must investigate any wireless networks and evaluate if the controls are rigorous enough to keep wireless out of scope.  For requirement 11.1, the QSA must investigate and evaluate if the organization’s controls surrounding the detection of rogue wireless are appropriate regardless of whether or not the organization has implemented wireless networking.

3.2.1, 3.2.2 and 3.2.3 are all related to the securing of cardholder data when it is stored.  Even if an organization is not storing cardholder data on their systems, a QSA must document the procedures they used to confirm that cardholder data is not stored on the organization’s systems.  This usually involves a review of flat files and database schemas and the running of utilities and queries against those systems and databases looking for cardholder data.
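
As an illustration of what that confirmation can look like, here is a minimal sketch of a flat-file scan for stored cardholder data, assuming simple 13- to 16-digit candidates validated with the Luhn check; real scanning utilities cover far more card formats, databases and encodings.

```python
# Minimal sketch: scan flat files for likely PANs (illustrative, not exhaustive).
# Finds 13-16 digit candidates (allowing space/dash separators) and validates
# them with the Luhn check to weed out random digit runs.
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn check: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def scan_file(path):
    """Yield (line_number, masked_pan) for every plausible PAN found."""
    with open(path, errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            for match in CANDIDATE.finditer(line):
                digits = re.sub(r"[ -]", "", match.group())
                if 13 <= len(digits) <= 16 and luhn_ok(digits):
                    yield lineno, digits[:6] + "..." + digits[-4:]  # never log full PANs
```

Masking the output matters; a scanning tool that writes full PANs into its own report has just created more cardholder data to protect.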

The bottom line is do not argue about something being ‘Not Applicable’ and then hinder the QSA’s investigation to prove it is ‘Not Applicable’.  Do not get me wrong, you need to keep your QSA on point, but remember that QSAs are required to evaluate the situation and then document the process used to determine that a particular requirement is ‘Not Applicable’.  All you do by complicating that investigation is add more time to your assessment and, potentially, cause a requirement to be marked as ‘Not In Place’ instead of ‘Not Applicable’.

Yes, I Did Kind Of Ask That Earlier

Like security, the PCI DSS also works from a ‘defense in depth’ approach.  A lot of the questions QSAs ask are very similar, just asked from a different perspective.  The people that develop assessment and audit programs will tell you that this is the most effective way to uncover the level of compliance with a given program.  The reason is that organizations that have not integrated a compliance program into their day-to-day operations will typically provide inconsistent or confusing answers to the similar questions.  Not that this is a perfect technique mind you, but it does work the majority of the time.

Please be patient with your QSA.  They did not write these procedures, but they are required to execute them.

Answer The Question

Most people, yours truly included, suck when being questioned, particularly in a legal proceeding.  Lawyers always instruct anyone that will be called to testify in a legal proceeding to take their time, focus on the question being asked and only answer the question being asked.  Never, ever, ever provide any information outside of the question, i.e., do not elaborate.  The trouble is that lawyers know that silence is a vacuum and it is human nature to fill that vacuum with extraneous information.  Hence the typically long pauses between their questions.

QSAs and auditors tend to operate under the same principle as a lawyer.  People get into trouble when they start talking about things that are outside of the question, out of scope or not relevant to the assessment.  Such responses will at first confuse the QSA for a moment as they try to reconcile your remarks.  But then, the QSA may question whether they truly understand the environment and, possibly, the scope of the assessment.  It is then that they may start quizzing you and your staff as they go back and reconfirm their understanding of the environment.  All of this takes time, time away from the assessment process as you cover old ground while the QSA re-verifies the facts.

The lesson to be learned here is that there is nothing wrong with saying, “I do not know” or “I will have to look into that question and get back to you.”  The worst thing you can do is try to “tap dance” around the question or never really answer it.  If you do not have the answer, then find out who does have the answer and point the QSA to that person.

Prepare

And finally, the best thing you can do to avoid all of these issues is to walk through the PCI assessment process and requirements with those of your staff that will be interviewed/observed and make sure they understand the questions to be asked and how they should be answered.

If you really want to know what the QSA will ask, why they will ask and the evidence they will require, get a copy of the PCI DSS ROC Reporting Instructions from the PCI SSC Document Library.  The Reporting Instructions document is the “Bible” for QSAs as it documents how they will be assessed in a PCI SSC Quality Assurance review.  Reviewing and understanding this document will go a long way to minimizing the “What do you need that for?” questions that all QSAs encounter.

For each requirement’s tests, the Reporting Instructions will tell you:

  • What observations, if any, need to be performed and documented.
  • What documents, if any, need to be collected and reviewed and what information needs to be identified in those documents.
  • What people, if any, need to be interviewed and about what topic(s).
  • What processes, actions taken or states of equipment, if any, need to be observed and documented.
  • Whether or not sampling can be used.

Using the Reporting Instructions, you can also gather a lot of the observations ahead of time.  Your QSA will still have to conduct some observations, such as that default passwords are not used, that timeouts occur, that change management operates and the like.  But gathering screen shots and documenting the testing conditions you used will go a long way toward making your assessment go much more smoothly and quickly.

Hopefully this discussion will help you get through your next PCI assessment without all of the associated drama that can come from such an exercise.

29 Oct 13

If You Read Nothing Else About PCI – v1

David Froud has a great blog post out regarding his frustrations with PCI compliance and the industry’s lack of progress since he last did a Report On Compliance (ROC).  I have to say that some organizations have made a lot of progress in this area.  However, unfortunately, there are still way too many organizations putting more effort than I would prefer into figuring out how to dodge compliance or pawn it off on someone else.

27 Oct 13

Atom AMPD And PCI Compliance

Here is a relatively new player in the network security space for small and mid-sized businesses (SMBs).  A friend of mine who does a lot of work with SMBs is encountering this solution more and more.  And is it any wonder, when it is portrayed as a godsend for SMBs?  On the Atom AMPD Web site, they explain their Kwick Key solution.

“Kwick Key™, a USB flash drive, is the delivery device behind the AtomOS.  The Kwick Key is bootable and plugs into nearly all servers.  Kwick Key users experience a significant savings achieved with a high quality “one key” solution for their networking needs.

Simply install the Kwick Key into an internet-connected server, display the web interface, configure the features and you’re done.  The server is transformed into a multi-functional networking and communication device.

The underlying operating system behind the Kwick Key AtomOS is Linux. The content stored on the server is also backed up on the Kwick Key.  Once configured, the Kwick Key can be transferred to new equipment while maintaining its configuration, providing portability in the event of equipment failure.  A redundant option is also available.”

What is wrong with this picture?

If you said, “Too good to be true,” you would be correct.  There are no silver bullet solutions to security.  However, these sorts of “all in one” security solutions are marketed to SMBs all of the time as a cost saving way to be secure.  And since SMBs do not typically have any significant IT personnel, they are always looking for ways to reduce IT workload and save money.  However, if you need to be PCI compliant, this is not a solution for your organization.  Why?

If you read the Savings page on their Web site, they state:

“Your current IT infrastructure is likely requiring multiple boxes to serve your network and communication needs.  This likely includes multiple boxes supporting firewalls, content filters, routing and VoIP applications; each requiring individual training, maintenance, and ongoing licensing fees.  The AtomOS provides just one platform, one interface, one operating system. It brings to bear the BEST practices via a convergent technology.  All modules are tied together by our proprietary user interface.”

That “all in one” solution approach violates PCI DSS requirement 2.2.1 which states:

“Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)”

The reason for requirement 2.2.1 is to leverage the concept of “defense in depth”.  Defense in depth relies on multiple layers of defense such that, if one layer develops a vulnerability, the other layers can still provide some security and mitigate the exposure until the vulnerability is fixed.  Under the Atom solution, a vulnerability anywhere potentially puts the whole solution at risk because of one part’s failure.

As a result, being PCI compliant will require you to purchase multiple Kwick Keys.  I would assume that multiple keys will result in costs that negate Atom’s cost advantage over other PCI compliant solutions.

Then go to the solution’s product page for Kwick Key.  Take a look at all of the firewall features that are available.  It looks pretty good until you realize there is one notable feature missing – stateful packet inspection (SPI).  Basically, Atom has implemented port filtering, which comes standard on Linux distributions.  Not that this is insecure, but it does not comply with requirement 1.3.6, which explicitly requires that SPI be implemented.
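
To make the distinction concrete, here is a toy sketch with hypothetical packet tuples: a stateless port filter judges each packet in isolation, while stateful inspection only admits inbound packets that belong to a connection a host on the inside actually initiated.

```python
# Toy sketch: port filtering vs. stateful packet inspection (illustrative only).
# A common stateless rule admits inbound packets from allowed service ports so
# replies get through; a stateful firewall also requires a tracked connection.

ALLOWED_SERVICE_PORTS = {443}

def port_filter(pkt):
    """pkt: (src, sport, dst, dport).  Judged in isolation: source port only."""
    return pkt[1] in ALLOWED_SERVICE_PORTS

class StatefulFirewall:
    def __init__(self):
        self.connections = set()  # connection-tracking table

    def outbound(self, src, sport, dst, dport):
        self.connections.add((dst, dport, src, sport))  # expect a reply flow

    def inbound(self, pkt):
        return pkt in self.connections  # only packets for known flows get in

fw = StatefulFirewall()
fw.outbound("10.0.0.5", 50000, "198.51.100.1", 443)  # inside host opens a connection

reply = ("198.51.100.1", 443, "10.0.0.5", 50000)     # legitimate reply
probe = ("203.0.113.9", 443, "10.0.0.5", 51515)      # unsolicited probe from port 443

print(port_filter(reply), fw.inbound(reply))  # True True
print(port_filter(probe), fw.inbound(probe))  # True False - only SPI stops the probe
```

The probe in the last line is exactly the traffic requirement 1.3.6 is worried about: it looks like a reply, but no inside host ever asked for it.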

There are ways to add SPI to this solution.  However, that will mean you will have to support it yourself and the whole point of the Atom solution is to get out from under supporting such a solution for your organization.

My assumption is that, with an appropriate wireless adapter in the system running Kwick Key, the solution will serve as a wireless access point.  Under requirement 1.2.3, wireless is required to be segregated from an organization’s cardholder data environment (CDE) by a firewall.  Given that the wireless is operating on the same device, it is questionable whether compliance with this requirement could truly be accomplished.

The same concerns that exist with wireless apply to the virtual private network (VPN) solution.  Having remote access to the internal network running on the same system is not a best practice, and how secure such a configuration would be on this device is questionable.

You need to remember, this is not a purpose built networking device, this is a repurposed computer running Linux.  It is potentially susceptible to any number of Linux-based attacks and vulnerabilities depending on the services running.  And the more services you pile onto this device, the more potential for vulnerabilities.

Then there is the ability to add a voice over IP (VoIP) call manager solution.  Seriously?  What a silly and very dangerous idea.  Why?  VoIP protocols are primarily stateless (i.e., UDP), which means that they cannot be protected by today’s firewall technology, which only works with stateful protocols (i.e., TCP).  I have actually had vendors correct me on this because VoIP call set up (pick up the handset) and tear down (hang up the handset) are conducted using TCP.  What these folks always miss is that the actual conversation is conducted over UDP so that audio can be streamed between the phones in use, and that streaming is the bulk of the activity in a telephone call.  And it is not just one or a few UDP ports that can be open; it is typically a range of thousands of UDP ports that are open to support telephony.  Talk about a target rich environment.

Adding a VoIP call manager on top of your firewall is probably the most dangerous thing an organization could do because VoIP is so easy to attack due to the stateless nature of its protocols.  By implementing VoIP on a firewall you are essentially negating the firewall.  Running VoIP on its own dedicated server on its own dedicated network is the only way VoIP should be configured, regardless of a need to be PCI compliant.

Finally, there is no pricing provided for the USB “key”.  I always get concerned about “wonder” solutions that do not provide pricing without contacting the vendor’s sales operation.  Nine times out of ten, all this does is subject potential customers to relentless contact from sales people until they purchase the solution, which is likely overpriced.

This post is not to say that this solution is not appropriate for other organizations.  However, if you need to be PCI compliant, this solution is not for your organization if it is implemented as the vendor describes.

24 Sep 13

Coming Attractions

On September 12, 2013 the PCI SSC released the drafts of version 3 of the PCI DSS and PA-DSS.  In reviewing the PCI DSS, there are six new requirements that will be considered ‘best practices’ until July 1, 2015 when they will become requirements.

  • 6.5.6 – Insecure handling of PAN and SAD in memory.
  • 6.5.11 – Broken Authentication and Session Management.
  • 8.5.1 – Service providers with access to customer environments must use a unique authentication credential (such as a password/phrase) for each customer environment.
  • 9.9 – Protect point-of-sale (POS) devices that capture payment card data via direct physical interaction with the card from tampering and substitution.
  • 11.3 – Develop and implement a methodology for penetration testing that:
      • is based on industry-accepted penetration testing approaches (for example, NIST SP800-115);
      • includes coverage for the entire CDE perimeter and critical systems;
      • includes testing from both inside the network and from outside of the network attempting to get in;
      • includes testing to validate any segmentation and scope-reduction controls;
      • defines application-layer penetration tests to include, at a minimum, the vulnerabilities listed in Requirement 6.5;
      • defines network-layer penetration tests to include components that support network functions as well as operating systems;
      • includes review and consideration of threats and vulnerabilities experienced in the last 12 months; and
      • specifies retention of penetration testing results and remediation activities results.
  • 12.9 – Additional requirement for service providers: Service providers acknowledge in writing to customers that they will maintain all applicable PCI DSS requirements to the extent the service provider handles, has access to, or otherwise stores, processes, or transmits the customer’s cardholder data or sensitive authentication data, or manages the customer’s cardholder data environment on behalf of a customer.

I will discuss requirements 6.5.6 and 11.3 in separate posts.  I am not going to discuss 6.5.6 until I have a better understanding of how the PCI SSC expects QSAs to test that memory is being managed properly.  I am avoiding 11.3 because it contains enough for a post of its own.  But the others can be addressed in this post.

First, I have to say that I was amazed that these actually had to be codified, as they are addressed through a number of other requirements.  But having run into these situations numerous times myself, I understand why the PCI SSC felt the need to explicitly codify them.

For requirement 6.5.11, the guidance provided states:

“Secure authentication and session management prevents unauthorized individuals from compromising legitimate account credentials, keys, or session tokens that would otherwise enable the intruder to assume the identity of an authorized user. “

This requirement is targeting the botnet and Trojan attacks such as those with Citadel and Zeus.  The problem here is that these are attacks on the end user, not the merchant.  As a result, this new requirement will likely look for the merchant to use methods that secure authentication and sessions such that man-in-the-middle, man-in-the-browser and similar attacks are minimized or even eliminated.  It will be interesting to see how the PCI SSC expects this to be accomplished.
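
As an illustration of one slice of what 6.5.11 covers, here is a minimal sketch of session handling that resists token guessing and session fixation; the in-memory store and timeout value are my own assumptions, not anything the PCI SSC has specified.

```python
# Minimal sketch: unguessable, expiring session tokens rotated at login
# (mitigating token guessing and session fixation).  The in-memory store
# and 15-minute idle timeout are illustrative assumptions.
import secrets
import time

SESSION_TTL = 15 * 60  # idle timeout in seconds
sessions = {}          # token -> (user, expiry); use a hardened store in production

def issue_session(user: str) -> str:
    token = secrets.token_urlsafe(32)  # 256 bits from a CSPRNG - not guessable
    sessions[token] = (user, time.time() + SESSION_TTL)
    return token

def rotate_on_login(old_token: str, user: str) -> str:
    sessions.pop(old_token, None)      # discard any pre-login token (anti-fixation)
    return issue_session(user)

def validate(token: str):
    record = sessions.get(token)
    if record is None or record[1] < time.time():
        sessions.pop(token, None)      # expired or unknown: force re-authentication
        return None
    return record[0]                   # the authenticated user
```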

It has been a long time coming for 8.5.1.  Most QSAs have encountered this situation and we never liked it.  The situation that I speak of is managed service providers and software vendors using the same user identifier and password for all of the customers they support.  While one can appreciate why this occurs, it does create a problem should those common credentials become known outside of the organization, which has been the case in a number of breaches.  As a result, the PCI DSS has been changed to include this new requirement that managed service providers and software vendors use unique authentication credentials with each customer.

Requirement 9.9 is to explicitly address a best practice that has been used by a lot of merchants.  A number of merchants have experienced the tampering of card terminals over the years.  This typically was in the form of soldering a USB thumb drive or SD card into a terminal to collect track data and then swapping the doctored terminal for a good terminal at the merchant.  This threat is typically mitigated by video monitoring of terminals as well as the use of serialized security tape or tamper evident seals over a terminal’s case seams that are checked at least daily to ensure that terminals have not been changed out or tampered with.

And finally, requirement 12.9 calls out that service providers explicitly acknowledge in a document that they will maintain compliance with the PCI DSS for all relevant services.  Apparently the existing requirements in 12.8 were not providing enough assurance that service providers were complying with the PCI DSS.  So now we are going to require that all service providers acknowledge, in writing, that they will maintain compliance with all relevant PCI DSS requirements for all services provided to their customers.

30 Aug 13

Mobile Payments Update

This past week, Bob Russo, General Manager of the PCI SSC, held Webcasts to discuss the changes coming in version 3 of the PCI DSS and PA-DSS.  For the most part, these Webcasts were nothing special.  Just a scripted regurgitation of what was published in the Version 3.0 Change Highlights document released earlier this month.

However, at around 35 minutes into the Webcast I attended, things got very interesting and an actual useful piece of information was shared.  For it was at that point that Mr. Russo launched into a discussion about mobile payments and the use of consumer devices such as smartphones and tablets for the processing of card transactions.  Mr. Russo stated:

“The fact is that many consumer mobile devices simply cannot provide the level of security needed to adequately protect payment card data.  In other words, they cannot create a trusted environment equivalent to the PCI DSS compliant cardholder data environment.”

Whoa!  Now there is a piece of information I was not expecting.  I looked at the slide; these are its bullets.

  • PCI Standards focus on merchant acceptance
  • Mobile payment acceptance still evolving
  • Understand risk and use PCI SSC resources
  • PCI SSC is working with industry

Mr. Russo went on to say:

“We encourage merchants to understand the risk of using mobile, work with their acquirer and make their own decisions about mobile and whether they are willing to accept that risk.”

“We’re working with others in the industry including: standards bodies, vendors, banks and processors.  But we are unwilling to lower the bar for security by writing a standard that current mobile devices could meet.  If we wrote a secure standard for mobile now, no consumer devices would be able to meet it.”

The bottom line in all of this is that all of you merchants using iPads, iPhones, Android phones and tablets and similar devices are using them at your own risk.  They are not PCI compliant.  Not only that, you should have cleared that decision with your acquiring bank so that they also accepted the risk that you were using mobile devices to accept payments.

Mr. Russo referred Webcast attendees to the PCI SSC Information Supplement ‘Accepting Mobile Payments with a Smartphone or Tablet’ for additional information regarding the use of such devices.

Mr. Russo also reminded everyone that mobile payments from the consumer side are not within the purview of the PCI SSC and the PCI standards.  So wallet applications, near field communication (NFC) and the like will not be regulated by the PCI standards.

These statements though leave a lot of questions for QSAs.

  • How are QSAs supposed to reference these non-compliant devices in the merchant’s Report On Compliance (ROC)?
  • Do QSAs mark this in the ROC as non-compliant and which requirements do QSAs flag as non-compliant?  Is it because the devices are not PTS compliant or are there other requirements that are not met?
  • Do QSAs need a compensating control for these devices?  And exactly what would the controls be that would make these devices acceptable?
  • Do QSAs need to obtain a letter from the merchant’s acquiring bank(s) acknowledging the use of these devices and their approval of their use?

I am sure this will create many interesting discussions at the PCI Community Meeting that is just a few weeks out.  So stay tuned.

30 Jun 13

Developers Beware – Stripe

A reader pointed out this merchant services provider to me, so I checked it out.  As I reviewed their documentation, I ran into some concerns that I want to share with you all so that you have a better understanding of what Stripe does and provides.

The first thing that I want to point out is why you need to carefully read a service provider’s Web site and understand what is what.  If you look at the Stripe General Questions page there is a discussion of PCI compliance.  Stripe states that they are a PCI certified Level 1 service provider.  What that means is that Stripe’s backend processing of card payments is PCI compliant.  This statement has nothing to do with their Javascript program or any other services that Stripe offers.

But, you say, Stripe provides a Javascript program for processing payments.  What about its PCI compliance?  Good question.  Here is a gray area with PA-DSS compliance of applications.  Because Stripe is not technically selling its Javascript file (it is provided free), I am sure they believe that the Javascript does not have to be certified.  However, the General Questions page is very clear that you must “Use Stripe.js as the only means by which you accept payment information and transmit it directly to Stripe’s servers.”  To me, that required use of the program would imply that the Javascript must be PA-DSS certified since developers have no choice but to use it.

That brings up the question of how Stripe’s Javascript is maintained.  What ensures that someone does not change the script so that it starts communicating with another server as well as Stripe’s?  This is why PA-DSS certification is necessary for applications; there is too much risk if those applications are not properly maintained and controlled.  All it takes is a single successful attack and every one of Stripe’s customers will pick up a new Javascript that will do whatever the attacker desires.
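
As a sketch of how a merchant might at least detect such a change, the following pins a hosted script to a known-good hash; the URL and digest are placeholders, and because a provider legitimately updates its script over time, a mismatch means “review before trusting” rather than proof of an attack.  In a browser, Subresource Integrity attributes provide the same idea.

```python
# Minimal sketch: detect changes to a hosted third-party script by pinning
# its hash.  The URL and known-good digest are placeholder assumptions.
import hashlib
import urllib.request

SCRIPT_URL = "https://js.example-processor.com/v1/checkout.js"  # hypothetical
KNOWN_GOOD_SHA256 = "0000...replace with the digest recorded at last review"

def script_changed(url: str, expected_hex: str) -> bool:
    """Fetch the script and compare its SHA-256 digest to the pinned value."""
    body = urllib.request.urlopen(url, timeout=10).read()
    return hashlib.sha256(body).hexdigest() != expected_hex

if script_changed(SCRIPT_URL, KNOWN_GOOD_SHA256):
    print("hosted script differs from the reviewed version - investigate before trusting it")
```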

The next concern I have regarding Stripe is their statement that by using their process, “you completely avoid handling sensitive card data, and keep your systems out of PCI scope.”  Wait a minute.  Stripe states just above that you need to “Serve your payment page over SSL.”  That implies that their Javascript runs on your server.  This gets confirmed on their SSL page where they state that their Javascript only communicates with their backend servers over an SSL connection.  It may be their code, but it is executing on your server and that means that your server is in scope for PCI compliance.

The PCI SSC really needs to get on top of service providers and their claims.  It troubles me that this service provider is making claims that are totally incorrect.  It is these sorts of claims that get good people into trouble.  Most developers and their organizations do not have the necessary detailed understanding of the PCI standards and therefore trust service providers to be the experts, which, in the case of Stripe, they are obviously not.



