Posts Tagged ‘security’



01 Mar 14

How Did It Happen?

This is just my supposition on how the Target breach occurred, but it is based on what has been released to date plus what limited knowledge I have of Target’s environment, the environments of other large retailers and my many years of penetration testing.

Fazio Mechanical Services

According to the latest reporting, Fazio Mechanical Services (Fazio) is believed to be the starting point of the Target breach.  From what has been reported, a phishing attack on Fazio yielded access to Fazio’s computer systems and network.  In their statement regarding the breach, Fazio says:

“Fazio Mechanical does not perform remote monitoring or control of heating, cooling or refrigeration systems for Target.”

“Our data connection with Target was exclusively for electronic billing, contract submission and project management, and Target is the only customer for whom we manage these processes on a remote basis. No other customers have been affected by the breach.”

If we take Fazio at their word, Fazio did not have direct access to Target’s network.  That means if Fazio was breached, that breach did not result in a direct path to Target’s network.  Brian Krebs reported that he spoke with an ex-Target employee who told him that Target uses the Ariba Supplier Management solution for managing its external vendors.  The Ariba system is available publicly on the Internet but it requires credentials in order to gain access to the application.

Based on these facts, my guess is that the Fazio attackers were likely separate from the Target attackers.  Therefore, the Fazio breach is like most breaches; the attackers get in, probe around and then leave if nothing of value can be identified.  That is not to say that Fazio could not have been targeted as a way into Target, but I find it unlikely that Fazio was specifically targeted for the Target breach.

The Fazio attackers likely advertised the information and credentials that they gathered to other attackers on the Internet “underground” and sold them to whoever was willing to pay including the Target attackers.

The Russians

In my opinion, the Russians that eventually sold the card information were probably not the actual attackers that retrieved the cardholder data from Target.  However, they could well have been behind the attack as the folks that instigated and funded it.  Beyond the fact that they sold the cardholder information, until these individuals admit their role, we will probably never know whether they were just a fence for the information retrieved or whether they were behind the attack.

In my scenario, the Russians began scoping out likely candidates for compromise and picked Target because they found information on the Internet “underground” and determined that it was likely possible to successfully get in and get information.  Once the research was done, they then assembled a team to get the actual attack done.

The Malware

In reading the various news accounts, the Secret Service indicated that the attack was sophisticated.  According to the infamous Microsoft case study, Target had implemented Microsoft Windows Server Update Services (WSUS), now part of Microsoft System Center Operations Manager (SCOM), at all of their stores so that they could deploy updates to their stores in the smallest possible time frame.  In the retail business, IT people get very small windows of opportunity to perform updates, so this architecture would provide IT with the ability to stage updates and then deploy those updates as quickly as possible.

A lot of people have commented throughout the numerous discussions of the breach on Google+, LinkedIn and Twitter questioning how the attackers could have compromised so many POS systems so quickly.  It is my opinion that this was done through SCOM.

But there is a huge problem with using SCOM when the software is not Microsoft’s – SCOM can be somewhat to very temperamental when it comes to deploying non-Microsoft software and updates.  Over the years it has gotten better with some non-Microsoft solutions, but deploying malware via SCOM and having it work right the first time requires knowledge of not only SCOM but also the Target computing environment.

This brings me to the fact that I believe an insider had to have been involved in the breach.  Not necessarily an actual Target employee, although that cannot be ruled out, but more likely a knowledgeable contractor.  Like all large corporations, Target outsources development to contractors that have offices and staff located all over the world.  Those contractors also have their own contractors that are located all over the world.  It is my opinion that the Russians compromised one or more contractors with development knowledge of Target’s POS application and deployment of the POS software.  This was required to develop the malware from the BlackPOS code and develop a one-time successful deployment capability using SCOM.  Whether or not these individuals were actually part of the attack team is debatable.  They would only be needed to develop the solution and the SCOM deployment scripts and possibly procedures to avoid Target’s QA process.

Outsourced contractors in third world countries can be readily bought.  People in the West forget that these developers can make anywhere from cents per hour to only a few dollars an hour.  That is why development work is outsourced to them; it is more cost effective than using developers who make one hundred dollars per hour or even more.

But that brings up an interesting conundrum in this breach.  If a contractor was compromised, could they not still be involved in Target’s development efforts and just deliver the malware directly as part of their deliverable?  I think that could have been a possibility, but it would have risked discovery in Target’s code review, quality assurance and testing processes, which is probably why the malware was not delivered by that method.

The Attackers

The attackers could have come from anywhere, but most likely are from Russia or one of the former Soviet states such as Ukraine or Belarus.  The reason this is most likely is that the people that sold the Target cardholder data were Russians, and they would want people with the same background to execute the attack so that they had some amount of control over the attack team.

The Attack

The attackers that broke into Target likely went shopping for ways into Target and found the Fazio Ariba credentials for Target as well as probably other credentials to other publicly available Target applications.  The attackers either bought those credentials or had their Russian bosses purchase those credentials.

I had to put my penetration testing hat on to figure out how the Ariba credentials came into play.  The reason is that if Ariba is available from the Internet to anyone, why would an attacker need credentials?  Then it dawned on me.  They needed the credentials in order to compromise Target quietly.

My rationale for this is that Target does a decent job at securing publicly facing applications, particularly since their 2007 breach.  Assuming the Ariba application was properly implemented, doing an attack without the credentials would have alerted Target’s information security personnel and it would have been game over.

As a result, the attackers needed the credentials so that they could gain access to Ariba quietly and then compromise it with a cross-site scripting attack, SQL injection or whatever they used to gain access to one or more of the Ariba servers, from which they could breach the rest of Target’s network, specifically the SCOM system(s).  The reason this approach would be more likely to be ignored is that the attackers would have valid credentials, so any anomalous activity would likely be written off by Target personnel.

This brings us to the next reason I believe an insider is involved.  The timeline discussed thus far gives the impression that the breach was a fairly quick operation.  The only way the breach could have been conducted so quickly is if the attackers had knowledge of where they needed to go to compromise the SCOM system.

That said, the database of Target guests that was also retrieved was likely collateral damage in that it was encountered during the attack and was taken so that the attackers did not walk away empty handed.  The other possibility is that the database was used to test the data exfiltration process to ensure it would go undetected.

Once the attackers owned the Ariba system, they would then have had access to the administrators of Ariba.  The insider would have given the attackers an idea of where the SCOM system was located and probably who had access.  It then became a process of compromising one of those administrators to gain access to SCOM.  Because they were inside Target’s network, the administrators were likely compromised using an extremely targeted phishing attack using the internal email system.  As a result, the phishing message would have looked all the more legitimate because it was internally generated and delivered.  The message likely contained some sort of Word or Excel document that had backdoor software that would not be detected by the anti-virus solution.

However, another option could have been used once the attackers were inside.  They could have approached any of the administrators, pretended to be a contractor and asked for access to SCOM in the test environment.  From there they could have staged their malware and then sent it through the QA process.  Regardless of how they gained access to SCOM, the attackers had to have used the SCOM system to deploy their malware with the speed that they deployed it.

Creating the data collection server was a straightforward problem.  With the insider’s knowledge, the attackers knew where FTP was implemented and merely compromised the server to be their own collection point so as not to arouse suspicion.  To get the data out of Target they used DNS, as every system needs access to DNS.  A lot of people have argued that Target should have seen the exfiltration of the data via DNS and have pilloried Target for their ineptitude.  However, if the attackers were as sophisticated as they have been portrayed, they likely constructed their exfiltration system to mimic the size of valid DNS packets, and thus only traffic volume would have been a possible trigger.
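
As an aside, detecting that kind of exfiltration is hard but not impossible.  Here is a minimal sketch of what I mean by a volume trigger; the log format, field names and threshold are purely my assumptions for illustration, not anything Target actually ran.

```python
import csv
import statistics
from collections import defaultdict

# Hypothetical input: one row per DNS query, "client_ip,query_bytes",
# exported from a DNS server or packet capture.  Field names are assumptions.
def flag_dns_outliers(log_path, sigma=3.0):
    volume = defaultdict(int)  # total DNS bytes sent per client
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            volume[row["client_ip"]] += int(row["query_bytes"])

    if len(volume) < 2:
        return []

    mean = statistics.mean(volume.values())
    stdev = statistics.pstdev(volume.values())
    # Individually valid-looking packets still add up: flag clients whose
    # total DNS volume sits far above the population baseline.
    return [(ip, total) for ip, total in volume.items()
            if total > mean + sigma * stdev]

for ip, total in flag_dns_outliers("dns_queries.csv"):
    print(f"investigate {ip}: {total} bytes of DNS traffic")
```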

Is this scenario correct?  We will not know until a final report is released, if we ever see a final report that gives actionable information.

That said, I am sure there are a lot of you reading this who are shaking in your boots based on this scenario.  That fear is likely based on the fact that you realize how ill equipped your organization is to deal with this sort of attack.  And you should be scared.  This is a war of escalation that we are waging.  Organizations step up their game and the attackers up the ante on their side.  The famous saying, “When chased by a bear, I only have to outrun the last person to save myself,” is very true in this situation as well.  Your organization’s security game only has to be better than the other organizations’.  But when an organization like Target, which was considered to be at the top of the security game, is breached, what chance does an organization with mediocre security have?

The only saving grace might be that your organization is flying under the radar.  I say “might be” because, according to the majority of reports on the state of information security, most organizations have no idea that they have been compromised.  That is because people rely on anti-virus and other technologies that have a poor track record of identifying malware and sophisticated attacks.  And then, as we learned in this past week’s report on the Neiman Marcus breach, you can have information security personnel write off discovered malware as false positives and let it re-infect systems for months without investigating or even worrying about what was going on.

It is easy to pillory the guy that got breached.  However, a lot of you should look inside your own organizations before tossing stones.  I would guess that most of you tossing those stones would not fare any better, and likely worse, than Target should your organization be breached.

18 Jan 14

Why The Paradigm Must Change

The Target and Neiman Marcus breaches, and the other retailer breaches likely to come, should be a learning moment for all of us to demand that the card brands change their business paradigm to one that is more secure.

Bolt-Ons Do Not Cut It

For all intents and purposes, how a credit card works has not changed since the late 1950s when they were introduced.  Yes, there have been advancements such as EMV, 3D Secure and end-to-end encryption (E2EE), but those are all things that just bolt onto the original concept.  The trouble is that, given today’s technologies and their capabilities, the card and the bolt-ons are just no longer providing the security they once did.

With the Target breach there has been a call to get the US to finally convert to EMV.  The trouble is that EMV would have leaked enough information for fraud to be committed as well, so it is not an answer.

Trade association spokespeople trotted out 3D Secure and other methods of securing online transactions.  The trouble is that most merchants eschew 3D Secure and its kind.  In addition, there are known vulnerabilities with these supposedly secure payment methods so they also have potential issues that could be exploited.

Then there is E2EE, also known as point-to-point encryption (P2PE) from a PCI perspective.  These solutions can also be exploited.  It may be more difficult, but when an attacker is determined to gain access to sensitive information, that does not matter.

After the release of the PCI DSS in 2008, a lot of retailers implemented a variety of E2EE solutions.  Unfortunately, the endpoint at the retail location was the POS register and not the terminal.  This was not due to merchants’ negligence; this was due to how their POS applications operated.  This allowed for attacks such as that used in the Target breach to succeed.  All the attacker has to do is insert their malware into the POS process so that the malware can “see” the cardholder data before it gets encrypted.

Even solutions that do E2EE/P2PE to the terminal can be defeated by taking the same approach and inserting the malware into the terminal process before the terminal can encrypt the data.  Worse yet, if the terminal is breached, the attacker can capture PINs if they also have malware that captures the keystrokes on the terminal before the PIN is encrypted.  There are a number of methods to minimize these risks at the terminal, but if the terminal supply chain is compromised, as it was over a year ago in the Barnes & Noble breach, there is little a merchant can do to stop such attacks.

The bottom line is that all of these solutions are bolt-ons to the existing card paradigm and all still have risks that a breach could occur.

Using Complexity Against Us

Brian Krebs and others have wondered aloud how a sophisticated organization such as Target, with information security and forensic resources second possibly only to the government, could have been compromised, particularly after the 2007 compromise by Albert Gonzalez, when Target totally revamped and increased its security posture to minimize the likelihood of another event.

The first clue to me came when I read the iSIGHT PARTNERS report on the Target breach.  The theme that comes through loud and clear is that the attackers are using the complexity of Target’s technology infrastructure against Target.  I mean how could FTP activity and huge data transfers (internal and external) go so unnoticed?

Actually, that was likely fairly easy.  The attackers used existing network traffic to mask their own network traffic.  They sought out servers that already had large volumes of traffic and put their data collection server on one of them; better yet, one that was already running as an FTP server.  As a result, even with diligent monitoring, the increase in traffic likely did not raise any alarms.

People assume that such breaches are like a “snatch and grab” in the real world.  The attackers break into an organization’s network, quickly take what they can off of the computers they encounter and leave.  That was the modus operandi (MO) in the past, but not today.  Sophisticated and organized attackers such as those that breached Target do what they can to remain unseen while they learn more about their victim.  They take their time mapping out the network and determining what devices they want to compromise to further their efforts to gain access to the sensitive information they seek.  Because of this, it is highly likely that the Target attackers encountered the Target customer database during their investigation of the Target network and took it first so that they would have at least something for all of their efforts.

The most insidious thing I think the attackers did was use Target’s software distribution system to disseminate their malware.  Given the number of POS systems compromised (around 51,000), I find it hard to believe that the attackers manually installed their malware on those POS systems.  It would have placed their operation at extreme risk, likely resulting in its discovery.  By using Target’s software distribution system, the attackers got an added benefit of legitimacy for their malware because Target itself did the installation.  As such, the malware would appear as valid because Target’s software management system initiated the change.

Now What?

All of this brings up an interesting conundrum.  If attackers are stepping up their game and using such techniques, how do we detect them?  It is a very good question with no good answers.  The iSIGHT report offers methods to stop and eradicate this particular attack.  However, the next attack and the attack after that will all likely use different malware and different techniques to get the data out of your network.

We are in a war of escalation with no end in sight.  Merchants step up their efforts to stop such attacks and the attackers adapt and adopt new techniques to breach organizations and gain access to their sensitive information.  What we need is a solution that stops the escalation and gets us out of this vicious circle.

That is why I am pushing the 15 – 16 character single use transaction code as that solution.  My reasons are as follows; a minimal sketch of the concept follows the list.

  • The algorithms already exist, as a number of the card brands experimented with them a decade or more ago.
  • It will work with existing POS technology and applications.
  • It will work with existing eCommerce sites.
  • It can be implemented into eWallet applications.
  • It can be processed, stored and transmitted without encryption.
  • It can be generated by PCs, smartphones, tablets, credit card sized devices and any other devices that have computational capabilities.
  • It can be displayed on devices in a character format for manual entry or as one- or two-dimensional bar codes for scanning.
  • It can be transmitted via swipe, EMV, near field communication (NFC), Wi-Fi or even Bluetooth.
  • And best of all, it is secure by the very nature that it can only be used once.
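
To be clear, the card brands’ actual algorithms are not public, so what follows is nothing more than a hedged sketch of the concept: derive a 16 character code from a per-card secret and a counter, and have the issuer reject anything it has already seen.  All names and parameters here are my own assumptions.

```python
import hashlib
import hmac

def one_time_code(secret: bytes, counter: int, length: int = 16) -> str:
    # Derive a code from a per-card secret and a monotonically increasing
    # counter; the issuer holds the same secret and tracks used counters.
    digest = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    # Map the digest into digits so the code fits existing PAN-style fields.
    number = int.from_bytes(digest, "big") % (10 ** length)
    return str(number).zfill(length)

# Issuer side: a code is valid exactly once.
used_counters = set()

def redeem(secret: bytes, counter: int, code: str) -> bool:
    if counter in used_counters:
        return False  # replayed code, reject it
    if not hmac.compare_digest(one_time_code(secret, counter), code):
        return False
    used_counters.add(counter)
    return True

secret = b"per-card-secret-provisioned-at-issuance"
code = one_time_code(secret, 1)
assert redeem(secret, 1, code) is True
assert redeem(secret, 1, code) is False  # second use fails
```

The last two lines are the whole point: even if an attacker captures the code in transit or scrapes it from memory, it is worthless because it has already been redeemed.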

There will be some changes that would be required at the transaction processors and acquiring banks to handle such a solution.  But given that some of the card brands already have experience with this solution, there is a body of knowledge that already exists as to how it needs to be implemented.

Let the discussion begin on how we move ahead with a better, more secure solution.

07 Dec 13

POS Botnets

Just in time for the holidays.

An article came out this past week regarding botnets that are specifically targeting point of sale (POS) systems.  The reason I bring this up is this quote:

“StarDust developers have intimate knowledge of the inner workings of PoS applications such as Clearview PoS.  As a result, the malware can ferret out where in computer memory sensitive data, in some cases in cleartext form, is stored.  StarDust can also sniff network traffic and is able to extract Track1 and Track2 card data.  To remain covert, the software transfers card details only when the terminal is inactive and the screensaver is on. It also uses the RC4 cipher to encrypt data before sending it to the control server.”

Obviously, if your organization uses Clearview POS software you should probably be examining your systems and networks to ensure that they have not been compromised by StarDust.

However, the larger issue is that most merchants do not see themselves as targets of such attacks, let alone construct a secure environment for their POS systems.  Some of this is not entirely the merchant’s fault.  A lot of merchants outsource the maintenance and management of their POS systems to a value added reseller (VAR), and that VAR is the one responsible for the POS network configuration.  Regardless of responsibility, a merchant needs to be aware of these threats and take appropriate action, either internally or with their VAR, to address these threats and minimize risk.

Regardless of whether it is StarDust or a similar threat, here are some steps you can take to minimize and detect such threats.

  • Segment your POS network from the rest of your internal network and limit POS network segment traffic to communication with your processor and your internal system support and operations systems.  This will require the development of network access rules so that traffic can only reach those destinations.  This will limit the number of systems that could compromise your POS environment.
  • Monitor your POS network segment for any traffic that is directed to an external network other than your processor or system support and operations systems.  Your firewall rules should only allow secure connections between your POS network and your processor or your system support and operations systems.  Network traffic going anywhere else should be stopped and reported for further investigation (see the sketch after this list).
  • Monitor your POS systems for any file or configuration changes.  Most anti-virus solutions can provide this capability, but there are also solutions that are specifically engineered for this task.  Regardless of which you choose, configure the tool to alert you as soon as it identifies a potential change to files or configuration of the POS system.  If approved changes were not made to the POS systems and you received an alert, you likely have been compromised.
  • Develop an incident response plan should you receive an alert indicating that your POS systems have been compromised.  An incident response plan provides the organization with a “battle plan” should a compromise occur.  This type of plan is key to minimizing the potential reputational impact to the organization should such an attack be confirmed.  A good incident response plan can keep you from making mistakes as you navigate the minefield that is the media circus that comes with a breach.
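
For the monitoring step, here is a minimal sketch of the kind of egress check I am describing.  The network ranges and the log format are purely illustrative assumptions; substitute your processor’s actual addresses and your firewall’s actual export format.

```python
import csv
import ipaddress

# Assumed allow list: the processor's network and internal support systems.
# These ranges and the log format are illustrative assumptions only.
ALLOWED = [ipaddress.ip_network("203.0.113.0/24"),  # processor (example range)
           ipaddress.ip_network("10.20.0.0/16")]    # internal support/ops

POS_SEGMENT = ipaddress.ip_network("10.50.0.0/24")  # POS VLAN (assumption)

def audit_pos_egress(log_path):
    # Hypothetical firewall export: one "src_ip,dst_ip" row per connection.
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            src = ipaddress.ip_address(row["src_ip"])
            dst = ipaddress.ip_address(row["dst_ip"])
            if src in POS_SEGMENT and not any(dst in net for net in ALLOWED):
                print(f"ALERT: POS host {src} talked to unexpected host {dst}")

audit_pos_egress("firewall_connections.csv")
```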

Three straightforward and simple steps that can minimize the threat of StarDust, plus a documented incident response process should you unfortunately be breached.

Security does not have to be rocket science.

28 May 13

BlackPOS

I got a Tweet from a friend today regarding a new piece of malware found out in the wild and dubbed ‘BlackPOS’.  BlackPOS is very similar in nature to vSkimmer.  Now before everyone goes off and panics, if you are religiously following the PCI DSS, BlackPOS should not be an issue, and here is why.

  • Requirement 11.5 – Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.  BlackPOS does a lot of manipulation around known file names, but the hash values of those files will change from the known good values, so any file monitoring system should alert on that fact (a minimal sketch of this comparison follows the list).  It also uses file names that would never exist on a production system, so those should also generate an alert.  In addition, BlackPOS creates a TXT file that should generate an alert when created.  However, if you are not alerting in real time, you should be, so that you pick up these issues as soon as possible.  This is where the bad guys are headed with their attacks, so you may as well alert as soon as an incident occurs so that you can address it before it gets out of control.
  • Requirement 1.1.5 – Documentation and business justification for use of all services, protocols, and ports allowed, including documentation of security features implemented for those protocols considered to be insecure.  BlackPOS uses FTP to move the TXT file from the POS system to their server.  If you are allowing FTP to flow freely from your POS or cardholder data environment (CDE) to anywhere on the Internet, you were not PCI compliant in my opinion, even if you had some bizarre business justification.
  • Requirement 5.1 – Deploy anti-virus software on all systems commonly affected by malicious software (particularly personal computers and servers).  While BlackPOS was only identified today, the anti-virus vendors will most likely have signatures out by the time you read this and will be looking for BlackPOS by the time you get your updated signatures.
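
For those wondering what the file comparison in requirement 11.5 boils down to, here is a minimal sketch.  Commercial file integrity monitoring tools do this continuously and far more robustly; the paths and the baseline file here are my own illustrative assumptions.

```python
import hashlib
import json
import os

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def walk_hashes(root):
    # Hash every file under the monitored tree (e.g., the POS install path).
    hashes = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            hashes[path] = hash_file(path)
    return hashes

def baseline(root, out="baseline.json"):
    # Record known-good hashes while the system is in a trusted state.
    with open(out, "w") as f:
        json.dump(walk_hashes(root), f)

def compare(root, baseline_path="baseline.json"):
    with open(baseline_path) as f:
        known = json.load(f)
    current = walk_hashes(root)
    for path in current.keys() - known.keys():
        print(f"NEW FILE: {path}")  # e.g., BlackPOS dropping its TXT file
    for path in known.keys() & current.keys():
        if known[path] != current[path]:
            print(f"MODIFIED: {path}")  # known file name, changed hash
```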

Just these three requirements can stop this sort of an attack.  Yet, time and again we see these attacks succeed because people are not properly implementing file integrity monitoring and are not restricting network traffic flowing out of their internal networks.

PCI compliance does work when you use it the way it was intended.

16 Apr 13

The Insider Threat

At this year’s PCI Community Meeting there was a great presentation by Verizon Business Services on their 2011 Data Breach Investigations Report.  However, one of the things that concerned me about their presentation is that they seemed to downplay the threat insiders pose to the breach of information.

So, I went back and reread their report because I did not get that same interpretation when I originally read it.  On page 42 of the Verizon report, there is a discussion of “Errors.”  Errors are defined as “anything done (or left undone) incorrectly or inadvertently.”  According to this section, there were 219 instances out of the total 761 breaches where insiders contributed to the breach.  That computes to almost 30% of all breaches, nearly twice the 17% quoted as a highlight in the front of the report and used to justify downplaying the insider threat.  So the insider threat is still substantial and should not be ignored.

The biggest problem with the insider threat is that it does not matter how much technology you have in place to protect your information assets as it only takes one person in the right place to neutralize every last bit of your high-tech security solutions.  Just ask anyone at any of the recently breached organizations how all of their technology functioned when they suffered their breach.  I am sure they will tell you the technology worked just great – not so much for their people.

First, there are the mistakes everyone makes that are done in the name of efficiency or politeness.

  • The sharing of a manager password to expedite a process in the name of good customer service.
  • The holding open of a door to a secure facility because someone’s arms are full.
  • The swiping of your access card and letting others tailgate through a secured door to be polite.

At the end of the day, we all are guilty at one time or another of doing these things as well as many other bad security acts.

That is the real problem we all face and why security standards focus so much on a layered approach also known as defense in depth.  The hope is that with multiple layers in place, even if one or two layers become non-functional due to the people issues, another layer will stop or at least detect the issue and the issue will be averted or minimized.  However, in most cases, if someone can get the right software onto the right user’s computer, it really does not matter what security is in place.

The first example of such a breach was RSA back in March 2011.  As the story goes, through the use of electronic mail, attackers targeted RSA network and system administrators with messages containing an Excel spreadsheet as an attachment.  The spreadsheet contained backdoor software that was surreptitiously installed on the computer, providing the attackers with remote access into RSA’s network.  With remote access, the attackers were free to scope out the RSA network at their leisure.  Over time, the attackers obtained the code for RSA’s SecurID servers and FOBs.  Once the breach was discovered, RSA was forced to replace its customers’ SecurID FOBs for free.

Right on the heels of the RSA breach was the breach of Epsilon in April 2011.  Epsilon was a quiet firm that did the electronic mail marketing and loyalty programs for such businesses as Best Buy, Kroger, Marriott and LL Bean.  The breach created the biggest opportunity for spear phishing ever seen.  Based on news reports, Epsilon was attacked in a similar fashion as RSA, although the two breaches have never been linked to the same attackers by authorities.

In May 2011, the apparent fruits of the RSA breach were unleashed on Lockheed Martin and rumored to have also been unleashed on Northrop Grumman and L3 Communications.  Using fake FOBs, the attackers broke into Lockheed Martin and attempted to gain information from Lockheed Martin’s systems.  News reports indicated that the Lockheed Martin attack was eventually repelled and no information was obtained by the attackers.

Citigroup suffered a double whammy of bad news in June 2011.  The first hit was the admission that their online banking site had been compromised and more than 350,000 customer accounts had their information leaked to the attackers.  Then on June 26, a former Citigroup executive was arrested for embezzling $19 million while working in the treasury finance department.  The first event was front page news in the papers and on the Internet.  The second event barely got a notice.

The reason I bring these breaches up is that while these computer attacks are big news, they point to the fact that the bigger threat is actually the people you employ.  The reason attackers targeted these companies’ employees is that insiders have direct access to information and, therefore, in most cases attackers do not need to hack any computer systems to gain the information.  This is borne out by the fact that security survey after survey keeps confirming that the vast majority of compromises are the result of some amount of insider involvement.

All organizations are at significant risk to the insider threat because most have done little or nothing to prevent it.  Sarbanes-Oxley and the like have done no one a favor in propagating this view of controls.  This is why I think a lot of organizations push back on complying with the PCI DSS: they abhor controls and want nothing to do with controls of any type.  You hear arguments regarding the “stifling of creativity” and “make work” which are nothing more than whining from people that have no clue as to why controls are needed.  I have documented in a previous post the three phases of a well-structured control environment, so I will leave readers to review that for reference.  Properly designed and implemented, internal controls make a big difference because if people know that someone is looking over their shoulder to ensure that their job is done properly, that in and of itself goes a long way toward keeping everyone honest as well as minimizing operational errors.

However, just because you have controls does not mean that everything will go smoothly.  A lot of organizations have a great control environment on paper, but do very little to ensure that it is executed as written.  We see time and again organizations that talk a good game, but when you start looking at their operations, the control environment is a paper tiger because no one is enforcing those controls and the controls are being followed haphazardly, if at all.

Then there are the organizations that have never reviewed and streamlined their controls since the founding of the organization.  These sorts of companies have created a controls monster.  What has happened is that as the business encountered a new issue, a new control was placed on top of other controls to address the new threat.  Over the years, all of these controls now keep a huge bureaucracy busy doing nothing but making sure that controls are controlled.

While some of this problem can be laid at the feet of management, it is not entirely all their fault.  Custom and packaged application developers have only recently started implementing security and controls into their software that allow the level of granularity necessary to meet SOX, PCI and other security requirements.  In most cases, security has always existed in applications, just not at the level necessary to properly ensure the security of sensitive information in today’s environment.

The lesson here is to make sure you have controls in place to ensure the security of your sensitive information.  If the applications that process, store or transmit your sensitive information do not have the capability to properly protect it, then you need to create manual controls to fill in any gaps.  Those manual controls need to have the ability to provide feedback so that if they begin to fail, someone is alerted to that fact and can step in and get the controls functioning again.

03 Apr 13

Merchant Beware – New Mobile Payment Solution Out In The Wild

Merchants need to be aware of a new mobile payment solution – Square from Square Inc.  A colleague pointed me to the Square site with the question, “Is this PCI compliant?”

Square appears to be a hardware/software solution for iPhones, iPads and Android devices.  It has a cute, square magnetic stripe reader for swiping cards, but also appears to provide the capability to manually enter cardholder data through these devices’ virtual keyboards.  This all appears to be similar to the payment-processing iPhone that used to appear in the first Apple iPhone commercials and that, for reasons that will become obvious, magically disappeared from those commercials very quickly and quietly.  It is also why Apple no longer uses iPhones or iPod Touches in their stores to process payments.

In referencing the PCI SSC’s PTS certification database, I could not find Square’s certification for the PTS standard.  Although, given the pictures on Square’s Web site, I really did not expect to find it certified to the PTS standard, as there is no way it could meet that standard.  Has Square submitted their solution for PTS certification?  It may have, but since the PCI SSC PTS certification database only lists those devices that have completed the certification process, there is no way for anyone to know whether Square has submitted its solution until it is certified.  However, since the use of PTS certified devices is a requirement of all of the card brands, until Square is PTS certified, use of a Square device for processing of credit cards violates a merchant’s merchant agreement.  Game over.

While not complying with the PTS standard is a deal breaker in my opinion, it is not the only PCI compliance issue.  In referencing the PCI SSC’s PA-DSS certification database, I could also not find the Square software application listed.  That situation was also not unexpected, as the PCI SSC announced in a press release on June 24, 2011 that it was suspending the PA-DSS certification review of all mobile payment applications indefinitely.  As a result, there is no way Square’s software will be PA-DSS certified for the foreseeable future, whether they submitted it for PA-DSS certification or not.  Not that the lack of PA-DSS certification is a deal breaker for merchants using the Square software, but it means that merchants using the Square software to process payments will have to have the Square software assessed to ensure it meets all of the PCI DSS requirements regarding payment applications.

And knowing what I know about all of these devices, I can guarantee that the Square software will not be PCI DSS compliant because all of these devices will store the cardholder data unencrypted for an untold amount of time until it is written over.  Even if Square’s software encrypts the data, the underlying OS will also collect the data in cleartext.  Forensic examinations of these devices have shown time and again that regardless of what the software vendor did, the data still existed in memory unencrypted.  And that unencrypted data in memory can exist in these devices for days, weeks to even months depending on transaction volume and other applications loaded on the device.  It is this surreptitious OS data collection activity, the security issues with other applications as well as other security concerns that caused the PCI SSC to suspend their PA-DSS certification activities of these applications.

There is only one solution that uses an iPhone or iPod Touch that is PTS and PA-DSS certified at this time, and it is from Verifone.  The reason that Verifone’s PAYware solution is certified is that: (1) Verifone submitted it for the PCI certifications prior to the June 24 suspension and, the bigger reason in my book, (2) it relies on a digital back separate from the iPhone/iPod that performs the card swipe and all of the card data processing/transmission in a secure manner.  The iPhone or iPod Touch is used only as a display and cellular/Wi-Fi conduit for network connectivity.

The only other mobile payment solutions I am aware of that are PTS compliant are purpose-built credit card terminals using Wi-Fi or cellular communications.  These are considered terminals by the PCI SSC, so their underlying software is not required to be PA-DSS certified at this time, but they are required to be PTS certified.  In addition, these terminals have been in use in Europe for quite some time, so they are a proven secure solution.

The bottom line is that it is the merchant’s responsibility to ask vendors the right questions and weed out non-PCI compliant solutions.  The card brands and the PCI SSC are not in the business of regulating vendors, they leave that to the marketplace.

If you are looking for a PCI compliant mobile payment solution, talk to Verifone, Equinox, Ingenico or other recognized card terminal manufacturers as those are going to be your only PCI certified mobile payment processing options at this time.

24 Mar 13

Why vSkimmer Should Not Matter

McAfee announced this week that a new threat to merchants has been discovered called vSkimmer.  This is a very insidious threat, as most merchants will likely not know they have been infected until it is too late.

The net of vSkimmer is that it is malware of the highest order built for the explicit purpose of collecting track 2 data from Windows point of sale (POS) systems.  Worse yet, whoever wrote this little gem of software intends to enhance it in 2013 to include the ability to skim EMV cards’ “track” data as well.

vSkimmer can be deployed like Stuxnet through a USB thumb drive, as malware in an email message or on a Web site or any number of ways.  When installed, vSkimmer determines the operating system and version, hostname, active username and various other operational characteristics of the POS system.  It then inventories running tasks and memory to determine where track 2 data is stored and begins recording that data.

vSkimmer works whether the POS system is connected to the Internet or not.  When the POS is connected to the Internet, it transmits the data obtained to a control server using HTTP.  When the POS is not connected to the Internet, the information is stored until someone connects a USB device labeled ‘KARTOXA007’ and copies all the information it obtained onto the USB device.

As usual, the Internet is abuzz regarding how this will be addressed by the PCI DSS.  Sorry to disappoint, but it is already addressed.  Here are some key requirements in the PCI DSS that should mitigate vSkimmer.

  • Requirement 1.2.1.a requires that only network traffic that is necessary is allowed through the firewall.  Merchants should only be allowing connectivity from the POS or card terminal to their processor and nowhere else.  Any traffic attempting to go anywhere else should be flagged and IT alerted to investigate.
  • Requirement 5.1 requires that you have anti-virus and anti-malware software installed on POS devices.  Given this is a Windows specific threat and Windows is highly susceptible to being infected, you should have done this already.  While anti-virus solutions are not perfect in always identifying such malware, since McAfee and other anti-virus solution vendors are the ones that found vSkimmer, I would imagine that they all have or will very soon have signatures for vSkimmer.
  • Requirement 6.1 requires that systems are patched current.  The problem with patching POS systems is that a lot of vendors issue POS updates for the OS and their application on a quarterly or even annual basis and do not recommend that merchants patch their POS systems directly from Microsoft because of compatibility issues.
  • Requirement 10.2 requires the logging of events.  In the case of a USB device being plugged into the POS system, at a minimum you should see the portable device enumerator service going active when a USB device is plugged in, and if the device is new, you should see system event log entries regarding the loading of device drivers to support the USB device.  None of these actions should normally appear in your log data, so if you monitor for these events, you will know when USB devices are potentially being plugged into your POS systems (see the sketch after this list).
  • Requirement 10.5.5 requires the use of file integrity monitoring which would catch the installation of vSkimmer as a foreign piece of software even though it masks itself as ‘svchost.exe’.  This would provide a backup control for requirement 5.1 if vSkimmer changes its approach as to its file name and is not caught by the anti-virus solution.
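
For the requirement 10.2 monitoring, here is a rough sketch of pulling device install events from the Windows System log.  The event source and ID are my assumptions based on common Windows versions; verify what your particular POS build actually logs before relying on it.

```python
import subprocess

# Event 20001 from Microsoft-Windows-UserPnp is written to the System log
# when a driver is installed for a newly connected device (USB included).
# Event IDs and sources vary by Windows version -- verify on your build.
QUERY = "*[System[Provider[@Name='Microsoft-Windows-UserPnp'] and (EventID=20001)]]"

def recent_device_installs(max_events=25):
    result = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{QUERY}", "/f:text", f"/c:{max_events}"],
        capture_output=True, text=True, check=True)
    return result.stdout

# On a locked-down POS register this should come back empty; anything here
# means a new device was plugged in and deserves investigation.
print(recent_device_installs())
```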

In addition to the PCI requirements, you can do the following to increase your security in regards to vSkimmer.

  • Do not allow USB devices to be connected to your POS systems.  Most card terminals are RS232 devices, but USB is becoming more common.  The Windows Group Policy function can be used to disable USB ports on Windows systems.  There are also third party solutions that will disable USB ports.  A lot of these third party solutions can offer additional granularity in what types of USB devices can be connected.  This can be very advantageous when you are using USB card terminals, which still need to connect, while other USB devices should not be allowed.  One of my more imaginative clients hot glues the ports shut on their POS systems.  (A minimal sketch of disabling USB storage follows this list.)
  • Train your staff on the vSkimmer threat.  Explain how it works and what they can do to minimize this threat such as not allowing anyone to manipulate the POS systems other than employees responsible for the care and maintenance of the systems.
  • Lock your POS systems in a sealed cabinet or cage and only allow the manager on duty to have the key.  This may also involve additional security on POS servers if those are also used by your POS solution.
  • Periodically review video of your POS stations to determine if cashiers or other personnel appear to be manipulating the POS system.
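
On the subject of disabling USB, here is a minimal registry-based sketch of blocking the USB mass storage driver on a Windows POS register.  Treat it as an illustration only, not a substitute for Group Policy in a managed environment; it requires administrator rights and should be tested before any deployment.

```python
import winreg

# Setting the USBSTOR service's Start value to 4 disables the USB
# mass-storage driver (thumb drives) while leaving USB keyboards and
# card terminals working; Start=3 (manual) restores it.
USBSTOR = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

def set_usb_storage(enabled: bool):
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD,
                          3 if enabled else 4)

set_usb_storage(False)  # block USB mass storage on this register
```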

If you adopt all of these measures, you will significantly reduce the threat presented by vSkimmer and will likely never encounter it.

24 Feb 13

2013 Threats To Databases

Akamai just released their third quarter 2012 Internet statistics and are pointing to China as the generator of at least a third of all attacks.  Not only that, but the Chinese attackers are going almost exclusively after Microsoft SQL Server.

Then I got on a Webinar with Application Security Inc. where they discussed what they see as the top risks for databases in 2013 and, lo and behold, the risks align.  It is no wonder that Chinese attackers are focused on SQL Server: not only are targets plentiful, but most of the time the software was improperly installed.  Application Security Inc. sees the following as the top risks to databases for the coming year.

  • SQL injection
  • Password attacks
  • Improper or ineffective access controls
  • Database Java exploits
  • Misconfiguration of database security settings

SQL Injection

In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else.  Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement” only to never get back to it or version 2 is a total rewrite with a new set of flaws.

I used to believe that when we found SQL injection, it was because the protocol being used required a certain amount of risk in order for the application to work.  While a few years ago that was true, it now turns out that most SQL injection vulnerabilities are there because it was quicker/easier/faster to do something stupid than to do it securely.  In our “I probably won’t be here in six months anyway” view of employment, it will be someone else’s problem, so why bother?  The next guy can fix it.
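
To make the quicker/easier/faster versus secure distinction concrete, here is a minimal sketch using SQLite purely for illustration.  Note that the secure version is not even more work, which is what makes SQL injection so inexcusable.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")

user_input = "x' OR '1'='1"

# The quicker/easier/faster way: string concatenation.  The input above
# turns the WHERE clause into a tautology and dumps the entire table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("injected:", rows)  # returns every row

# The secure way: a parameterized query.  The input is treated as data,
# never as SQL, so the injection attempt matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized:", rows)  # returns nothing
```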

Requirement 6 and, in particular, requirement 6.6 require that applications developed get tested and remediated before they are placed into production.  The operative word here is ‘before’ which seems to be a stumbling block for a lot of organizations.

Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?”  My response is always, “Show me the test network that fully simulates production.”  Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works.  That is just the nature of network changes and why they can only be tested after the fact.

In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications.  As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production.  If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production.  That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.

Database Java Exploits

Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their database products with Java.  Java is the attacker’s dream environment because you can develop an exploit in Java and it will run virtually anywhere Java is installed.

Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong.  And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker.  Better yet, it is a prime target with an ample supply of information.

This is why the server build and hardening standards are in requirement 2 of the PCI DSS.  The idea behind these standards is that they require people to stop blindly installing software without understanding what gets installed.  They also get people to think about what services they actually need from the database versus doing a default installation.

A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used.  However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future.  This approach is typically in response to the “rush” factor that I discussed in the SQL injection section.  The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.

I have grouped the last three risks together as they all relate to one another.

Misconfiguration Of Database Security Settings

In that rush to get the database system up, IT personnel just do the “default” install and move on.  The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall.  As a result, is it any wonder that databases are insecure?

One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database.  ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.

Again, this is why requirement 2 has all of the build and security standards required.  The idea is that these standards will tell the technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed.  That way the database system is secure from the start, as opposed to finding out after the fact that one of those just-in-case services or features is insecure.

Improper Access Controls

One of the most troubling settings people use for SQL Server is mixed mode authentication, which allows both Active Directory and SQL Server to control access to databases.  The first problem we see with mixed mode authentication is that people seem to forget the SQL Server managed accounts.  As a result, those accounts typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road.  These accounts can also be forgotten in monitoring, so if their credentials are compromised, the compromise might not be recognized for a while, if at all.

Even if SQL Server managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not managed by SQL Server.  As a result, people outside of the DBA arena assume that SQL Server managed user accounts are managed the same as the Active Directory accounts.  And that assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
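
If you want to see where you stand, SQL Server exposes its managed logins through the sys.sql_logins view.  Here is a rough sketch of such an audit; the connection string and credentials are placeholders, and it assumes the pyodbc package and a SQL Server ODBC driver are available.

```python
import pyodbc  # assumes the pyodbc package and a SQL Server ODBC driver

# List enabled SQL-authenticated logins and whether Windows password
# policy and expiration are enforced for them.
conn = pyodbc.connect("DSN=posdb;UID=auditor;PWD=...")  # placeholder values
cursor = conn.cursor()
cursor.execute("""
    SELECT name, is_policy_checked, is_expiration_checked
    FROM sys.sql_logins
    WHERE is_disabled = 0
""")
for name, policy, expiration in cursor.fetchall():
    if not (policy and expiration):
        print(f"review login '{name}': policy={bool(policy)}, "
              f"expiration={bool(expiration)}")
```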

An access control issue that occurs with all databases is the use of “service accounts” for database access.  Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control.  Then to access the database, the application uses a single service account.  Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials.  Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application.  However, in my experience, this is only done when pointed out during a security assessment.

Another problem with service accounts is that the credentials for those accounts may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application.  When the credentials are stored in the code, changing the service account’s credentials requires an application change.  But the larger question is who has access to the credentials and how are you ensuring that everyone understands their responsibilities to ensure the credentials’ security?
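
A minimal sketch of the alternative: pull the service account credentials from the environment (or a secrets manager) at run time instead of hardcoding them.  The variable names are illustrative assumptions.

```python
import os

# Instead of hardcoding the service account password in the application
# or an INI file, read it from the environment at run time.
def get_db_credentials():
    user = os.environ["POS_DB_USER"]          # raises KeyError if missing
    password = os.environ["POS_DB_PASSWORD"]
    if not user or not password:
        raise RuntimeError("database credentials not provisioned")
    return user, password

# Rotating the credential is now an operational task (update the secret
# store and restart) rather than an application change.
```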

The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.

Password Attacks

This all leads to the success of password attacks.  When databases are not properly configured and/or access controls are not properly constructed, it is virtually impossible to protect the information in those databases.

The leading reason password attacks are successful is that databases are used to store user credentials.  A lot of e-Commerce solutions use a table in the database to store users’ credentials as well as the credentials for administrators of the e-Commerce environment.  As a result of the other conditions, compromise the database and you have access to the user credentials stored in the credential table.  Worse yet, the encryption keys for passwords are also likely stored in the same database or in a related database that shares administrator credentials with the compromised database.
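
One way to blunt this is to stop storing anything reversible at all.  Here is a minimal sketch of salted, slow password hashing; the iteration count is an arbitrary illustrative choice, and a production system should use a vetted library rather than rolling its own.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    # Store a salted, slow hash instead of an encrypted (reversible)
    # password; then there is no decryption key in the database to steal.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```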

Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?

But the changes required to address this situation are not as easy as people think.  Most pre-packaged Web-based solutions are not engineered to address these credential security issues because that would raise their cost to a point where they are not priced for small and mid-sized merchants who are their target market.  Until this situation is resolved, these solutions will still be at risk.

One would think using Active Directory or another directory service would be an easy solution, since Active Directory and the like are designed to securely store account credentials as long as they are configured and implemented properly.  On the face of it, it would appear that way, and it does work for organizations that host their own Web presences.  But for service providers it is not that easy, because each customer’s Web presence would have to have its own branch in the directory’s forest.  Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which would drive up the cost of a site.  The manual process would also delay establishing a site until the directory maintenance was completed, which is totally unacceptable in our “have to have/do it now” world.

For the time being we are stuck with using the database to store credentials.  That being the case, the credential database should not be mixed with the other databases; it should be on its own and not accessible from the Internet.  The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient.  In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end and then found to be too cumbersome to implement.

06 Feb 13

How To Be PCI Compliant And Still Be Breached

Bashas’ became the most recent example of a merchant claiming to be PCI compliant yet ending up breached.  A lot of naysayers, I am sure, are running around pointing to the PCI standards and saying, “See, they are worthless.”  But the larger question most of you have is, “How can an organization be breached if it is PCI compliant?”

The first piece of the answer is that security is not perfect.  Security controls have never, ever totally stopped an incident from happening.  If they were perfect, banks would no longer be robbed.  However, due to the security controls that have been implemented, the success of those robberies has dropped significantly.  This is the fact that the PCI SSC and the card brands seem to miss: while their standard is a good starting point, there is much more that has to be done to ensure a reasonable level of security.  And even then, an organization is never 100% secure.

The second part of the answer is that even if an organization is 100% compliant with the PCI DSS, there are still numerous ways to get around the controls and breach data as the Bashas’ breach may eventually point out.  Let us assume for this discussion that Bashas’ statement that they were PCI DSS compliant is accurate.  Then how could they have been breached?

The first clue is the statement that they discovered malware that went undetected for some period of time.  Any organization that believes its anti-virus/anti-malware solution will address this issue is seriously lying to itself.  AV is good, but it is not perfect.  If the AV vendors have never seen the malware you picked up, they have no signature to match it to, so they will likely not flag it.  This is the first indication that this attack was done by a professional.  The malware was not immediately detected, which means the attackers likely developed it themselves from a variety of sources.

But how did the malware get on Bashas’ network?  The answer is social engineering, probably a spear phishing attack.  The attacker likely harvested Bashas’ email addresses from Pastebin or similar Web sites and used those addresses to deliver the malware.  Someone unfortunately clicked on a link, opened an attachment or fell for any number of other infection methods, and the deed was done.  This is why security awareness training is so important.  Not that it stops these sorts of attacks, but it significantly reduces the likelihood that they are successful.  With the malware in place, all it took was time to find the data.

But would Bashas’ not have noticed someone probing their network?  That depends on a number of factors, but given that they eventually became aware of the malware, something eventually triggered an incident.  Unlike the security firm you hire to do vulnerability scanning and penetration testing, professional attackers do not perform their scans as quickly as possible.  They take their time and scan very, very slowly.  As a result, they usually do not generate enough traffic at once to garner an alert.  In addition, most backdoor software encrypts its external transmissions using SSL/TLS over ports 80 or 443, which are typically open to the Internet.  As a result, from a monitoring perspective, a lot of what is going on would appear “normal.”
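
One consequence is that detection has to aggregate over long windows rather than alert on bursts.  A minimal sketch, assuming firewall logs already parsed into (timestamp, source, destination port) records; the window, threshold and data are illustrative:

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical parsed firewall log: (timestamp, source IP, destination port).
    # The probes are days apart, so burst-based alerting never fires.
    events = [
        (datetime(2013, 2, 1, 2, 0), "10.1.1.9", 22),
        (datetime(2013, 2, 2, 3, 15), "10.1.1.9", 445),
        (datetime(2013, 2, 4, 1, 30), "10.1.1.9", 3389),
    ]

    WINDOW = timedelta(days=7)  # aggregate over a week, not seconds
    THRESHOLD = 3               # distinct ports per source (illustrative)

    def slow_scan_suspects(events):
        """Count distinct ports touched per source over a long window."""
        cutoff = max(ts for ts, _, _ in events) - WINDOW
        ports_by_source = defaultdict(set)
        for ts, src, port in events:
            if ts >= cutoff:
                ports_by_source[src].add(port)
        return [src for src, ports in ports_by_source.items()
                if len(ports) >= THRESHOLD]

    print(slow_scan_suspects(events))  # ['10.1.1.9']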

So now that your view of the PCI DSS has been dashed, what should you do?

  • Admit that security is not perfect and educate management accordingly.  Breaches will still occur, but security controls are meant to minimize the number of those occurrences and the amount of sensitive data attackers obtain.
  • Do not save sensitive data.  Merchants typically do not need to store cardholder data these days.  If you think you do, use tokenization so that your systems never store actual cardholder data.
  • If possible, further isolate your sensitive data.  Look at Forrester’s “Zero Trust” model or the McGladrey Ultra Secure approach.
  • If possible, reduce the number of people who can access your cardholder data to as few as possible.  The fewer people with access to cardholder data, the fewer targets there are to social engineer.
  • Use a “jump box” to provide access to your cardholder data environment so that you do not allow people direct access.  Couple this with separate user credentials for the cardholder data environment.  Fully instrument the jump box to capture all activity performed on it and monitor it tightly.
  • More tightly monitor communications through your firewalls.  Yes, HTTP/HTTPS needs to be open these days just to do business, but do your personnel need totally unrestricted access to every possible IP address or URL?  No.  So white list or black list IP addresses and URLs so that an attacker cannot work from just any URL or IP address (see the egress filtering sketch after this list).
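
The white listing idea in the last bullet amounts to a default-deny egress policy.  A minimal sketch of such a policy check, with hypothetical destinations; a real implementation would live in your proxy or firewall, not in application code:

    # Hypothetical egress allow list: only the destinations that business
    # processes actually require; everything else is denied by default.
    ALLOWED_HOSTS = {
        "payments.example.com",
        "ariba.example.com",
    }

    def egress_permitted(host: str, port: int) -> bool:
        """Default deny: outbound traffic must match the allow list."""
        return port in (80, 443) and host.lower() in ALLOWED_HOSTS

    print(egress_permitted("ariba.example.com", 443))      # True
    print(egress_permitted("evil-drop-site.example", 443)) # False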

Will all of this prevent a breach of your sensitive data?  No.  All these controls will do is reduce the risk of a breach to the lowest possible level.  In time, an ingenious professional attacker will find a way to compromise your controls.  However, with a rigorous control environment it is hoped that you will find them before they find your data.

06
Jan
13

Security And Compliance

I have written a lot about this topic over the years and was recently reviewing my Compliance Is Not Security – Busted! post and the comments that came in regarding it.

A theme of a number of the comments was that compliance does not equal security.  DUH!

I have never once said or even implied that compliance equaled security as – yes, here it comes – security is not perfect!  However, if you are complying with any security program/framework such as the PCI DSS, ISO 27K, etc., then you are likely more secure than those who are not.

Security technology such as firewalls, routers, servers and applications can all be set up with rules that are complied with 100% of the time, day in and day out, no exceptions.  The problem comes down to people, who are fallible.  Their compliance is never 100%, and you are probably lucky to get anyone above 90%, no matter how much security awareness training you do.  As a result, in organizations that are truly complying with the PCI standards, this is where a security breach starts: with people, for one reason or another.

No, I am not necessarily talking about social engineering, although social engineering is growing precisely because organizations have invested heavily in security technologies while people remain fallible.  People can be the root cause for any or all of the following reasons.

  • How dare you do that to me!  This is the most obvious of the people issues that comes to mind.  Face it, when backed into a corner, people lash out just like a trapped animal.  The supposedly wronged party wants their proverbial “pound of flesh.”  They get that pound of flesh by hurting the organization that has just hurt them.  This can range from taking office supplies to downloading databases to a USB drive as they empty their desk.  Obviously, a database, network or system administrator’s access is much different from a clerk’s.  However, if your internal security is minimal, as it is in most organizations, the clerk may actually have better access to sensitive information than the administrators.  Such a situation may not be the fault of the administrators; that old POS or ERP version may simply not support more granular access to information.
  • Inundated with alerts and unable to distinguish real alerts from false positives.  This typically occurs when an automated tool is implemented but never tuned to the organization’s environment.  In this sort of environment, finding real alerts can be like finding a needle in a haystack when there are thousands of alerts an hour scrolling by on the screen.  This usually makes management wonder why the tool was needed in the first place.
  • Saw an alert and ignored it.  We see this most often coupled with the aforementioned inundation issue.  The other most common version of this issue involves internally used SSL certificates that were generated incorrectly or are default certificates supplied by the application.  Users see the “There is a problem with this Website’s security certificate” or similar error message in their browser whenever these flawed certificates are encountered and learn to click past it.  Over time, they become conditioned to ignore all such warnings, including those for malware-infected Web sites and, surprise, you have been compromised.  I have lost count of how many people have said to me, “We just ignore those alerts because we know they are false positives.”
  • Saw the alert but got side tracked and never came back to it.  This is a problem we see all of the time.  For example, the person that monitors the network is also the person that manages and configures the network.  An alert comes in and the person begins a root cause analysis (RCA), only to get pulled away because a remote facility is offline.  The offline issue gets resolved, but other issues come up, as well as meetings and telephone calls, and the person never gets back to the RCA because there is no “tickler” to remind them to complete it (a minimal sketch of such a tickler follows this list).  In the meantime, the attacker has gained their beachhead and is probing the network for whatever value it may contain.
  • Just did not put together all of the pieces to know they were compromised.  Like the reasons 9/11 occurred, most organizations do not correlate all of the potential incidents occurring in their networks and therefore do not understand that there is an active effort to compromise their network, or that they have already been compromised, until well after the incident has caused damage.  This is important because once an attacker is inside your organization’s security perimeter, it is typically game over; there are few controls to prevent access or to detect that data is being taken.
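
The missing “tickler” above does not require an expensive product.  A minimal sketch, assuming alerts are tracked as tickets with an owner; the escalation chain and timeout are illustrative:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    ESCALATION_CHAIN = ["network admin", "IT manager", "CISO"]  # illustrative
    ESCALATE_AFTER = timedelta(hours=4)                         # illustrative

    @dataclass
    class AlertTicket:
        description: str
        opened: datetime
        level: int = 0      # index into the escalation chain
        closed: bool = False

        def owner(self) -> str:
            return ESCALATION_CHAIN[self.level]

    def escalate_stale(tickets, now):
        """Bump any unresolved ticket whose owner has sat on it too
        long up to the next rung of management."""
        for t in tickets:
            while (not t.closed
                   and t.level < len(ESCALATION_CHAIN) - 1
                   and now - t.opened > ESCALATE_AFTER * (t.level + 1)):
                t.level += 1
                print(f"Escalated '{t.description}' to {t.owner()}")

    tickets = [AlertTicket("RCA for anomalous outbound traffic",
                           opened=datetime(2013, 1, 6, 8, 0))]
    escalate_stale(tickets, now=datetime(2013, 1, 6, 14, 0))  # six hours later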

If you have read the Verizon Business Services Data Breach Investigations Reports (DBIR) over the years, you know how the bulk of attacks get inside: they are the result of people.  For the last two years, the DBIR has used the VERIS Event Threat Grid to show how breaches occur.  Across the top of the grid are the categories: Malware, Hacking, Social, Misuse, Physical, Error and Environmental.  The Social, Misuse and Error categories imply mistakes or deliberate acts of people.  If you read the definitions on the VERIS Web site, Malware is also very people centric, as is Hacking.  It may surprise some that the Physical and Environmental categories also contain a good number of people errors.  Based on just a quick read, roughly 60% to 70% of all of the incidents categorized by VERIS have some form of people error component.

Since we are not going to get rid of people in our organizations any time soon, what are you to do?

  • Admit that people are the problem and focus your security measures accordingly.  Every 12 step program says the first step is to admit the problem which, in this case, is that people are fallible.  As a result, we need to construct our security measures so that this fallibility is minimized as much as possible.  One of the best solutions is to integrate alerts into your help desk or change management system so that a ticket is generated.  Those tickets need an escalation process behind them so that if they are not investigated within a set period of time, they are bumped up to the next higher rung of management, and that escalation continues until the tickets are finally addressed.  This way there is visibility for alerts that would otherwise slip through the cracks.  As a side benefit, you gain statistics to reinforce why you need more staff and/or more or better tools.
  • Strengthen your internal security measures.  As things stand, once inside most organizations’ security perimeters, there is very little that stands in the way of an experienced attacker getting the data they desire.  Whether it is an insider attack or an attacker who has managed to get inside, there is already ample justification for organizations to beef up their internal security measures.  To address this problem, I would recommend the security architectures documented in my Fort Knox approach, Forrester’s Zero Trust Model or McGladrey’s Ultra Secure Network.  But most organizations do not have the infrastructure architecture, the application architecture or even the will to take such approaches.  That does not excuse an organization from doing nothing, however.  If anything, most organizations could vastly improve the monitoring of their internal networks.  Monitoring needs to be coupled with reducing the total number of ports that are open between network segments.  Most internal networks do a terrible job of this for a variety of reasons, including application people who cannot say which ports need to be open and administrators who avoid operational issues by just leaving things open.  Another area of improvement is reviewing user access rights on all systems and applications, not just those in scope for PCI compliance.
  • Constantly tune your alerting system(s).  Just as attack methods are not static, neither are networks, systems and applications.  Changes occur all of the time in an organization’s IT environment, yet if you ask the people running the SIEM about changes, nine times out of ten nothing seems to be changing other than requests to look for a new signature or anomaly.  There is a belief in the SIEM user community that a SIEM’s update process makes the necessary changes to the policies that ship with the SIEM, and to a certain extent SIEM solutions are similar to anti-virus and anti-malware solutions in that regard.  However, because a SIEM monitors log data, and the log data provided varies greatly from organization to organization, each organization needs to periodically review and adjust its alerting criteria to make sure that it reflects the organization’s operating environment and not just some template from the SIEM vendor.  If an organization is not reviewing its SIEM alerting rules against the changes made, at least quarterly, then it is highly likely that the SIEM is not alerting properly.
  • Establish separate consoles from your SIEM for network, system, security and application administrators.  What a network administrator is looking for is vastly different from what an application administrator is looking for, and the same goes for what each group needs to generate an alert.  As a result, having only one console is really silly and non-productive.  Yet time and again we see SIEM implementations with just that: one console, and everyone being driven by email or SMS alerts.  The people alerted then have to get to the SIEM to find out what exactly triggered the alert and determine what to do about it.  Having their own console view simplifies things by listing only that viewer’s alerts and no one else’s.  This allows people to focus on their problems and not the whole organization’s problems.  The idea behind the single console is that if everyone knows what is going on overall, then correlation will occur because everyone sees everything.  While you would think that would be the case, in reality people just want to fix their own problem and move on.  Which leads to my last point.
  • Watch the overall alerting picture so that correlations can be made.  According to most sources, today’s attacks are becoming more sophisticated and multi-pronged in their approach.  For example, while most DDoS attacks exist just to be a pain in the posterior and disrupt access to the target’s Web site, some DDoS attacks are used as cover so that the people inside are blinded to the real attack(s).  Whether or not a DDoS was a decoy depends on what other events or incidents occurred during the attack, if your alerting system did its work.  Higher end SIEM solutions provide basic correlation rules, but most SIEM solutions require the end user to develop those rules.  It is these correlation rules that help organizations identify the more sophisticated attacks.  That said, correlation rules do not have to be very sophisticated.  For example, during a DDoS attack you really only need to look for malware attacks, failed authentication attempts and other anomalies that would likely indicate the DDoS attack is being used to mask the real attack (see the correlation sketch after this list).
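
That DDoS example can be expressed as a trivially simple correlation rule.  A minimal sketch, assuming events have already been normalized into timestamped categories by your SIEM or log collector; the category names, window and sample data are illustrative:

    from datetime import datetime, timedelta

    # Hypothetical normalized SIEM events: (timestamp, category).
    events = [
        (datetime(2013, 1, 6, 9, 0), "ddos_start"),
        (datetime(2013, 1, 6, 9, 12), "failed_auth"),
        (datetime(2013, 1, 6, 9, 14), "failed_auth"),
        (datetime(2013, 1, 6, 9, 20), "malware_detected"),
    ]

    SUSPICIOUS = {"failed_auth", "malware_detected"}
    WINDOW = timedelta(hours=1)  # illustrative correlation window

    def ddos_decoy_alerts(events):
        """Flag suspicious events that occur while a DDoS is under way;
        they may indicate the DDoS is masking the real attack."""
        ddos_starts = [ts for ts, cat in events if cat == "ddos_start"]
        return [(ts, cat) for ts, cat in events
                if cat in SUSPICIOUS
                and any(start <= ts <= start + WINDOW for start in ddos_starts)]

    for ts, cat in ddos_decoy_alerts(events):
        print(f"{ts}: {cat} during DDoS - investigate as a possible decoy")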

Is all of this going to address your security issues?  Sorry, not a chance.  None of the above stops all breaches; it merely minimizes the possibility that a breach goes on for months or years.  Hopefully it shortens a breach to weeks, days, maybe even hours in some cases, but it will never totally eliminate them.  Security is not perfect.

There is a side benefit to all of this: it will assist you in doing RCA.  RCA is very effective at getting rid of those nagging operational issues that occur from time to time and mess up the delivery of your organization’s goods and services.  All of the information you collect for security purposes can also be used to find the needle in the haystack that is causing a database to corrupt, a network connection to drop or a server to fail, because now you have information about what was going on leading up to the problem.

The reason an organization is not secure is that there are so many areas needing improvement that the full control triad is no longer functioning, and holes exist that allow an attacker to operate without the organization’s knowledge.  Until the controls are implemented and operating properly, it is impossible to determine whether the organization is secure.  The recommendations I have made will hopefully give you a better picture of what you face so that you can react to issues that need attention before your organization is the next one to be breached.



