Posts Tagged ‘vulnerability testing’

06 Nov 14

The ASV Process Is Broken – Part 3

So what are my ideas on fixing the ASV process?

Modify The ASV Program

The conditions that originally drove the ASV process made sense.  Vulnerability scanning tools were predominantly open source, and just about anyone could do, and was doing, vulnerability scanning.  The results produced by the open source tools were highly questionable at best, and the reporting was haphazard and, at times, about as trustworthy as a three dollar bill.  Even in large organizations, the people doing the vulnerability scanning did not necessarily have networking, security or even IT backgrounds.  Then there was the tremendously high false positive rate of the open source tools.  As a result, most organizations ignored the results they received because they found that the results could not be trusted.

The purpose of the ASV program was to bring some sanity and professionalism to the vulnerability scanning process.  MasterCard invented the ASV program (it was not called ASV then) back in 2005.  A test network was built and prospective ASVs were required to run their vulnerability scanners against this network and produce results which were then reviewed by MasterCard.  It was as much a test of the vulnerability scanning tool as it was of the person running the tool.  When the program transitioned to the PCI SSC, the Council added a multiple choice test to the process, but the virtual network testing and report review are still part of the process.

The trouble with this process is that the vulnerability scanning tool is no longer the problem.  Every ASV uses a commercial vulnerability scanning tool from Tenable, Qualys, SAINT, Tripwire or a similar commercial tool vendor these days because they cannot afford to do otherwise.  Since these tool vendors are also ASVs, requiring a vulnerability scan for ASV certification has become a truly pointless exercise.  Other than the possibility of improperly entering the IP addresses to be scanned or running the wrong scanning policy, there really is very little that someone can screw up with a scanning tool.

The skill in vulnerability scanning today is in reviewing the results, dealing with false positive results, working to address results with compensating controls and, with the Council’s new edict on combining reports, working to get passing quarterly scans.

Therefore, in my opinion, training and testing of ASVs should be focused on the following.

  • Determining the scope of vulnerability scanning.
  • Vulnerability scanning methodology.
  • Interpreting vulnerability scanning reports to confirm knowledge of the process and the meaning of the results.
  • What constitutes a false positive result and how to document a false positive result.
  • Development and documentation of an appropriate compensating control for a vulnerability.
  • Process for how to produce an acceptable passing scanning report from multiple reports.
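
On that last point about combining reports, here is a minimal sketch of the consolidation logic, assuming each scan report has been reduced to a set of (host, vulnerability, CVSS score) findings; the data shapes and values are hypothetical, not any ASV’s actual report format.

from datetime import date

# Hypothetical minimal shape of a scan result: a date plus a set of
# (host, vulnerability_id, cvss) tuples.  Real ASV reports carry far
# more detail; only the consolidation logic is illustrated here.
scans = [
    (date(2014, 10, 1), {("203.0.113.10", "CVE-2014-3566", 4.3),
                         ("203.0.113.10", "CVE-2014-6271", 10.0)}),
    (date(2014, 10, 20), {("203.0.113.10", "CVE-2014-3566", 4.3)}),
    (date(2014, 11, 5), set()),   # rescan after remediation
]

def combine(scans):
    """A finding counts as open only if the most recent scan still
    reports it; anything that disappears from a later scan is treated
    as remediated.  CVSS 4.0 or greater is the failing threshold."""
    scans = sorted(scans, key=lambda s: s[0])
    ever_seen = set().union(*(findings for _, findings in scans))
    still_open = {f for f in scans[-1][1] if f[2] >= 4.0}
    remediated = ever_seen - scans[-1][1]
    return still_open, remediated

open_now, fixed = combine(scans)
print("passing" if not open_now else f"failing: {sorted(open_now)}")

The hard part, as discussed in Part 1 below, is getting an ASV to agree to produce such a consolidated report in the first place.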

And let us not limit ASV certification to just independent consulting firms.  As with the internal security assessor (ISA) program, open the ASV program to internal personnel as well.  Most large companies have independent vulnerability scanning teams that are as capable as, or more capable than, their ASV brethren.  There is no longer any reason that these internal people cannot do the ASV scans, particularly if they meet the same standards and qualifications.

Approved Vulnerability Scanning Tools

I am not suggesting that the Council needs to develop a certification process for these tools, as there are already plenty of sources that assess such tools.

The Council would publish a list based on the criteria developed by one or more independent tool assessment sources.  This list would define those tools acceptable to use for ASV vulnerability scanning.  The PCI DSS should then require that the QSA confirm that the vulnerability scanner used by the ASV is on the list in addition to confirming scope and the scanning policy used.

Require A Vulnerability Scanning Methodology

With the PCI DSS v3, the Council now requires penetration testers to use a documented and industry accepted penetration testing methodology.  Yet there is no such requirement for vulnerability scanning.

Most vulnerability scanning is done using what I call the “toss it against the wall and see what sticks” approach.  Basically, every possible vulnerability is run against every device.  Most commercial vulnerability scanners interpret banners, signatures and other markers to trim the list of vulnerabilities to be tested based on what they believe the target to be.  However, when you are scanning an external network blind, scanners cannot always properly interpret what device an IP address resolves to because of the mix of responses that they receive.  As a result, scanners either do not trim their tests, increasing false positive results, or they trim them too much and the testing is incomplete.
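
To see why blind fingerprinting is unreliable, consider the kind of banner grab a scanner performs.  Here is a minimal sketch in Python; the host shown is scanme.nmap.org, which the Nmap project publishes for test scans, and anything a proxy or load balancer answers with will be attributed to the device behind it.

import socket

def grab_banner(host, port, timeout=3.0):
    """Connect and read whatever the service volunteers first.
    Scanners use banners like this to guess the platform; a load
    balancer or proxy answering on the device's behalf is exactly
    why blind external fingerprinting misidentifies targets."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            if port in (80, 8080):
                # HTTP servers only talk after a request; nudge them.
                s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            return s.recv(1024).decode("ascii", errors="replace")
    except OSError as exc:
        return f"<no banner: {exc}>"

for port in (22, 80):
    print(port, grab_banner("scanme.nmap.org", port)[:120])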

Then there is the automated nature of today’s vulnerability scanning.  While I understand the desire to reduce the costs of vulnerability scanning, the “point and click” nature of today’s ASV scanning has made it flawed.  And it gets worse once organizations obtain passing scans.  As a QSA, I cannot tell you how many passing scans I have reviewed where an organization could be hacked six ways to Sunday with the remaining vulnerabilities.  As a security professional, it scares me to death.  But as a QSA, while I can bring these vulnerabilities up, they get no play because they do not have a CVSS score of 4.0 or greater.  You hope that these vulnerabilities get picked up in an organization’s penetration test.

But there is no guarantee of that happening because the penetration tester’s vulnerability scanner may or may not pick up the same vulnerabilities.  As a result, the penetration testing methodology should include a review of all vulnerabilities found since the last penetration test, and those vulnerabilities should be tested in the current penetration test to ensure they have been addressed.

Obviously, I have a preference for the methodology I discussed back in Part 2.  But there are a number of methodologies posted out on the Internet from a variety of good sources.  All I ask is that the vulnerability scanning methodology be integrated with the penetration testing methodology so that there are no gaps in coverage.

Require Monthly External Vulnerability Scanning

Before everyone panics, I am not asking that ASV scans be run monthly.  Although, if the ASV program is modified as I suggest, that becomes a possibility for organizations with internal ASVs.  I would still require the quarterly ASV scan, but I would add monthly scans run by anyone deemed qualified, as is allowed for internal vulnerability scans.

My primary rationale for this recommendation is driven by a simple fact.  When the dominant software vendor releases patches on the second Tuesday of every month and the vast majority of those fixes have a CVSS score of 4.0 or greater, anyone who thinks quarterly scanning keeps them secure is seriously kidding themselves.  Not that many security professionals ever bought into the quarterly vulnerability scanning requirement as anything more than a bare minimum.  But without the standard requiring monthly scanning, a QSA has no leg to stand on other than to intimidate and shame people into doing it.

Even if you are not Microsoft centric in your external environment, with the breaches that have occurred and the revelations of Shellshock and POODLE, it is painfully obvious that the quarterly requirement is not going to keep organizations secure.  I got a lot of calls after both of these vulnerabilities were announced with clients asking if their passing scans were no longer valid.  My thoughts were a bit schizophrenic.  On the one hand, I was glad they were at least thinking about the security implications of these vulnerabilities.  But their concern about their passing scans just highlights how much emphasis is placed on meeting a PCI requirement and passing the PCI assessment versus actually being secure.  Because, while I only got a few calls, you know that there are too many people congratulating themselves on dodging the bullets of Shellshock and POODLE because of the fortuitous timing of their quarterly scans, which gave them an additional 30, 60 or even 90 days to address them.

Then there are those organizations that run solutions such as IBM’s WebSphere or Oracle’s eCommerce suites.  These vendors not only patch their own application frameworks, they also release the patches to the underlying operating systems that are compatible with their application frameworks.  But worse, these vendors do not release patches monthly; they do patch releases on a quarterly, semi-annual or even annual basis.  As a result, there is a high likelihood that some operating system patches could be left out of these releases due to compatibility or timing issues.  The workaround is to mitigate any remaining vulnerabilities through additional logging, additional monitoring, changes in firewall rules, changes in IDS/IPS rules and the like.  The additional vulnerability scanning would help organizations identify these issues and address them more quickly than a quarterly cycle allows.

A side benefit of monthly scanning will be improving the ability of organizations and their QSAs to determine whether an organization’s patching and mitigation processes are working according to requirement 6.1.  Quarterly scans typically document a lot of vulnerabilities, mostly those under a CVSS score of 4.0.  As a result, determining whether or not an organization is properly managing its environment can be very difficult and time consuming, leading to missed items that should be addressed.  Having reports more often can facilitate getting these issues addressed sooner rather than later while keeping the volume lower and less daunting.

The bottom line in all of this is that monthly scanning is required to even have a chance at being secure these days.  Yet the vast majority of organizations are only doing quarterly scans and thinking they are secure.  That practice must change.

So there we have it.  My thoughts on the ASV process and how I would go about fixing it.

01 Nov 14

The ASV Process Is Broken – Part 2

The next reason I believe the process is broken lies with the automated scanning processes themselves.  They do not seem to be accurately assessing the security of Web servers, firewalls, routers and other externally facing devices that form an organization’s perimeter.  In my opinion, in our drive to bring down the cost, we have created a false sense of security through the concept of a “passing” scan and by only requiring scanning on a quarterly basis.

It is hard to believe that external vulnerability scanning came into vogue more than a decade ago.  There were no automated solutions.  No popping out to a Web site, entering IP addresses and scheduling a scan.  It was all manual.  A person was required to configure the scanning solution and then launch the scanner.

There was a proven methodology used for approaching vulnerability scanning.  That methodology involved approaching the organization as an attacker would.  The security professional would go out to domain registrars and other official sources to obtain potential IP address ranges, registered domains and other public information.  Then the professional would develop a profile of the organization’s external security posture using Nmap and similar tools as well as running non-intrusive scans to identify systems and potential entry points.  But even more insidious, the security professional would do what came to be known as “Google Hacking” to find out what, if any, information was available on the Internet that might facilitate the attack.  Log data, firewall configurations and even authentication credentials could be obtained through Google Hacking.  Even today, while Google does its best to minimize the information it makes available, it is amazing the amount of information that can still be obtained with specially crafted Google searches.
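
The profiling step is easy to illustrate.  Here is a minimal sketch that shells out to Nmap for a light, non-intrusive service survey of the kind described above; scanme.nmap.org is the Nmap project’s designated test host, and you should only ever point this at addresses you are authorized to scan.

import shutil
import subprocess

def profile_target(target):
    """Run a light Nmap pass: service/version detection (-sV) without
    host discovery pings (-Pn), limited to the 100 most common ports
    to keep the scan quiet and non-intrusive."""
    if shutil.which("nmap") is None:
        raise RuntimeError("nmap is not installed or not on PATH")
    result = subprocess.run(
        ["nmap", "-sV", "-Pn", "--top-ports", "100", target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(profile_target("scanme.nmap.org"))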

Once the preliminaries were done, it was time to actually scan.  The most accepted approach used at that time was to run two scans, one fully external with security measures enabled and one with testing done inside the DMZ.  The scan done with the security measures enabled was to confirm that an organization’s firewalls, load balancers or other front ends were functioning as configured.  Essentially this scan showed what an attacker would discover in their scanning of the organization’s external presence.  There were two purposes of the second scan.  The first was to provide an inventory of vulnerabilities that were being obfuscated by the security measures so that they could be addressed or mitigated.

The second purpose of the DMZ scan was for what some today would call a form of “threat intelligence”.  The second scan provided a view of what an attacker would encounter if they were able to circumvent the organization’s external security.  It answered the question of how large the attack surface in the DMZ would be should the perimeter be compromised.  What else could be subverted and used to further an attacker’s goal of getting even deeper into a network?  The point of this part of the exercise was to provide the organization with an idea of what to look for should anomalies begin to appear in log data.
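
The correlation between the two scans boils down to set arithmetic.  A minimal sketch, with hypothetical finding names, of how the external and DMZ results would be compared:

# Hypothetical findings keyed as (host, vulnerability); in practice
# these would be parsed out of the two scan reports.
external_scan = {("203.0.113.10", "openssl-poodle")}
dmz_scan = {("203.0.113.10", "openssl-poodle"),
            ("203.0.113.10", "apache-mod_status-exposed"),
            ("203.0.113.11", "ftp-anonymous-login")}

# What the perimeter is obfuscating from the outside -- the attack
# surface an intruder inherits once past the external security.
masked = dmz_scan - external_scan

# What is visible to the world right now and needs fixing first.
exposed = external_scan & dmz_scan

print("masked by security measures:", sorted(masked))
print("directly exposed:", sorted(exposed))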

Throughout this scanning process, separate scans were run for firewalls, routers, load balancers, Windows devices, Linux devices, etc.  The rationale for that was to obtain clear results based on the type of device.  There is no reason to toss a bunch of Windows vulnerabilities against a Cisco firewall or F5 load balancer and then have to sift through the results to remove the false positive entries those tests generate.

All of this information would be collated, analyzed and an actionable report produced for management and security professionals to absorb and understand the organization’s true external security posture.  Once the draft report was delivered there would be one or more calls or meetings with the organization’s security personnel to tweak the message of the results, develop action plans to address the results and then a final meeting with management and security personnel to deliver the final report.  At the end of this process, management and security personnel knew exactly where they stood from a security perspective and what needed to be done to ensure that they remained secure going forward.

Obviously, such an approach is fairly comprehensive, and that comprehensiveness costs money to produce.  And it was the cost that drove the automation of the process so that it was less labor intensive.

In the mad dash to the lowest possible cost, the following is what typically happens today.

Organizations subscribe to an ASV vulnerability scanning service.  If the organization is lucky, they go through a scoping call with a human being at the ASV service to confirm the IP addresses that are in scope for PCI compliance.  Having been on a number of these calls with ASVs, it is more of a questionnaire approach, and there is very little done by the ASV to actually confirm the scope of the scanning.  It is more or less a conversation that explains how to enter the IP addresses into the scanning tool’s Web interface and schedule a scan, not a diligent conversation about whether or not each device is in scope.  There is typically a brief discussion on how to use the Web site to dispute scanning results that caused a non-passing scan.  In total, the call takes 20 to 30 minutes, at which point customers are essentially on their own.

Scans are run quarterly.  Scans may be rerun if vulnerabilities are identified that result in a non-passing scan.  Once a passing scan is obtained, a passing certificate is generated and the organization moves on to the next quarter.  Every now and then a result may be disputed, but those situations are typically rare.  The bottom line is that the ASV process is fairly automated with very little, if any, human intervention.

From a tool perspective, the vulnerability scanner is tossing every vulnerability at every IP address it is given.  Firewalls, load balancers, switches and routers are tested for Windows and Linux vulnerabilities.  False positive results can be rampant, although most scanners weed out the obvious ones based on OS and device signatures.  But even if a few false positives remain, as long as they have a CVSS score of less than 4.0, no one cares.  And the scanner is configured to run as quickly as possible because it needs to be available for the next customer, so that the ASV does not have to dedicate an individual scanner to every customer.

All of this is driven by customers’ desire to minimize cost as much as possible.  Thus, in order to stay in business, the ASV scanning service providers have done an admirable job of driving down cost.  But at what other costs?

While I think we can all agree that the current approach is flawed, the Council would point to the fact that the quarterly ASV scanning requirement is only a minimum and that organizations should be doing their own external vulnerability scanning more often than quarterly.  And that is what a lot of organizations miss: they need to do more than what the PCI DSS requires to be secure.  No matter how loudly that message is repeated, it seems to get missed over and over again.

However, a lot of organizations do not do anything more than the PCI DSS requires.  We have this “bare minimum” mentality: if it is not mandated in the standard, then doing only what is required must make us secure.  This results in the false sense of security that organizations have when they only do what the PCI DSS, or any other security framework for that matter, mandates.

In addition, I cannot tell you how many organizations totally write off vulnerabilities that have a CVSS score of less than 4.0 as “trivial” and representing no threat.  There is this mistaken belief that, because the CVSS score does not warrant immediate patching, a vulnerability is not a threat.  However, when you have them in relative abundance and of certain types, those “trivial” vulnerabilities can be put together in a particular sequence and used to compromise a network.  To the shock and horror of a lot of organizations, their security is breached in moments during their penetration test using those “trivial” vulnerabilities.

Then we have the limitations of the tools themselves.  Their vulnerability tests are constructed for a general audience, not a specific network.  As a result, depending on security measures and configurations, a particular vulnerability for, say, HTTP or FTP might not register due to a load balancer or other devices in the communication path.  Yet with a minor tweak, the same vulnerability could actually work.  And that was another point of doing that second scan inside the DMZ back in the “old” days.  Comparing the two scans allowed you to see that, while HTTP or FTP appeared clean from the outside, the service was in fact vulnerable because the vulnerability showed up on the internal DMZ scan.  As a result, you could take additional steps to ensure that the vulnerability could not be exploited from the Internet.

The bottom line in all of this is that ASV scans have become a pointless piece of the compliance assessment process.  It is not that the ASVs are bad; it is that the process they are forced to follow no longer does what it did at the beginning.  It does not help that merchants and service providers treat ASV scans as a “check the box” activity and believe that because they do something quarterly they are secure.

If the process is broken, what should be done to fix it?  That is the subject of my next post.

19 Oct 14

The ASV Process Is Broken – Part 1

The topic of ASV scanning came up as usual at the 2014 PCI Community Meeting.  The questions all seemed to revolve around how to obtain a passing scan.  What the Council representatives suggested is that multiple scans can be put together to create a passing scan.  Unfortunately, what the Council keeps suggesting as the solution is effectively impossible to implement, and here is why.

In a typical environment, an ASV customer logs onto their account with the ASV and schedules their ASV scans of their PCI in-scope assets.  The customer may also add or subtract the IP addresses that are scanned as the scope of their external environment changes.  Depending on a number of factors, there may be one scan or multiple scans.  The vulnerability scans are executed on the schedule and the results are returned to the customer.

If there are false positive results or results the customer does not agree with, they can apply to the ASV to have those results removed.  If there are actual vulnerabilities, the customer can contact the ASV with how they have mitigated the vulnerabilities, and the ASV can either accept those mitigations and give the customer a passing scan or allow the results to stand.

So where are the problems?

Whether the Council acted on evidence that cheating was actually occurring or on anecdote is unknown.  But because of the potential for cheating by customers, the Council mandated a number of years ago that ASVs lock down their scanning solutions so that customers cannot modify anything regarding testing other than the IP addresses involved.  The ASV Program Guide v2.0, on page 11, states:

“However, only an authorized ASV employee is permitted to configure any settings (for example, modify or disable any vulnerability checks, assign severity levels, alter scan parameters, etc), or modify the output of the scan.  Additionally, the ASV scan solution must not provide the ability for anyone other than an authorized ASV employee to alter or edit any reports, or reinterpret any results.”

So right off the bat, the Council’s recommendation of “putting together multiple reports” is not as easily accomplished as it sounds, based on the Council’s own earlier directives.  It will require the ASV’s customer to get the ASV to agree to put together multiple reports so that the customer can achieve a passing scan.  That assumes the ASV’s solution can even accommodate the request, and then the ASV needs to be agreeable to doing the task.  Based on the Council’s concerns regarding manipulation of scanning results and the threat of the Council putting ASVs in remediation, I do not believe the ASVs will be agreeable to combining reports, as that would clearly be manipulating results to achieve a passing scan.

But it gets worse.  As a lot of people have experienced, they can scan one day and get a passing scan and then scan a day or even hours later and get a failing scan.  The reason this happens is that the vulnerability scanning vendors are adding vulnerabilities to their signature sets as soon as they can, sometimes even before vendors have a patch.  As a result, it is very easy to encounter different results from scan to scan, including failing due to a vulnerability that does not yet have a solution or for which the vendor only just provided a patch.

But if that is not enough, it gets even worse.  Statistically, the odds of getting a passing scan are very poor and get even worse if you are only doing quarterly scanning.  A review of the National Vulnerability Database (NVD) shows that 94% of vulnerabilities from 2002 to 2014 have a common vulnerability scoring system (CVSS) score of 4.0 or greater.  That means that it is almost impossible to obtain a passing vulnerability scan, particularly if you are only scanning quarterly, when vulnerabilities are announced almost daily and vendors such as Microsoft are coming out monthly with patches.  Those of you scanning monthly can attest that even on a 30 day schedule, a passing scan is nearly impossible to get.
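
That 94% figure is easy to check for yourself.  A minimal sketch, assuming you have downloaded one of NVD’s JSON data feeds (the nvdcve-1.1-<year>.json files); the feed format and filename are assumptions about NVD’s current offerings, not anything mandated by the standard.

import json

def pct_failing(feed_path, threshold=4.0):
    """Share of scored CVEs in an NVD 1.1 JSON feed at or above the
    ASV failing threshold of CVSS 4.0."""
    with open(feed_path) as fh:
        items = json.load(fh)["CVE_Items"]
    scores = [
        item["impact"]["baseMetricV2"]["cvssV2"]["baseScore"]
        for item in items
        if "baseMetricV2" in item.get("impact", {})
    ]
    failing = sum(1 for s in scores if s >= threshold)
    return 100.0 * failing / len(scores), len(scores)

pct, n = pct_failing("nvdcve-1.1-2014.json")
print(f"{pct:.1f}% of {n} scored CVEs are CVSS 4.0 or greater")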

For an organization that has only one Web site, this situation is likely not a problem.  But when an organization has multiple Web sites, which a lot of organizations large and small do, getting passing scans across all of them becomes a real struggle.

But let us add insult to injury.  A lot of organizations have their eCommerce environments running on platforms such as Oracle eCommerce or IBM WebSphere.  In those cases, this situation becomes a nightmare.

Platforms such as those from Oracle and IBM may run on Windows or Linux, but Oracle and IBM do not allow the customer to patch those underlying OSes as they choose.  These vendors ship, quarterly, semi-annually or on some other schedule, a full update that patches not only their eCommerce frameworks but also the underlying OS.  The vendors test the full compatibility of their updates to ensure that the update will not break their frameworks.  In today’s 24x7x365 world, these vendors can run into serious issues if eCommerce sites stop functioning due to an update.  However, that also means there is the possibility that critical patches may be left out of an update due to compatibility and stability reasons.  As a result, it is not surprising that in some updates vulnerabilities may still be present, both new ones and ones that have been around for a while.

But if Oracle and IBM are not patching on 30 day schedules, there is a high likelihood that the scans will not be passing.  This means that the customer must go to their ASV with compensating control worksheets (CCWs) to mitigate these vulnerabilities and obtain passing scans.

The bottom line is that the deck is stacked against an organization obtaining a passing scan.  While the Council and the card brands do not recognize this, the rest of the world sure has come to that determination.

In Part 2, I will discuss the whole ASV approach and how I believe the drive to be the cheapest has turned the ASV process into a mess.

17 Feb 12

Database 2012 Threats

I attended a Webinar recently put on by Application Security Inc. regarding the threats to databases for the coming year.  If you did not attend it, you missed a good session.  But the most disturbing thing brought up was their top 10 list of database vulnerabilities and misconfigurations.  Their top 10 list is:

  1. Default or weak passwords
  2. SQL injection
  3. Excessive user and group privileges
  4. Unnecessary DBMS features enabled
  5. Broken configuration management
  6. Buffer overflows
  7. Privilege escalation
  8. Denial of Service
  9. Unpatched RDBMS
  10. Unencrypted data

If you look at my post a while back on the 2011 Verizon Business Services’ reasons for why organizations were breached, there is a great correlation between Verizon’s report and what Application Security Inc. is saying.

Their first point about weak or default passwords is very clear and should not need to be discussed.  In this day and age, we should all be ashamed that this is even on the list, let alone the first item on the list.  The bottom line here is that, if you use default or weak passwords, you deserve to be breached.

They brought up an interesting point about SQL injection attacks that a lot of organizations miss or underestimate, and that is internal SQL injection.  Most organizations are so focused on the external threat that they forget about the threat from the inside.  Worse yet, most security professionals and DBAs are unaware of the threat SQL injection poses even without the Web.  Since most of today’s attacks are perpetrated once past the perimeter, protection from the inside attack is very relevant and very important.  Because once an attacker is on the inside, it is relatively trivial to use SQL injection or other techniques to obtain data.  More and more organizations are beginning to understand the insider threat and are firewalling all of their database servers away from the general user community as well as minimizing the number of users that have direct SQL access to those servers.
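
For anyone who has not seen why direct SQL access is so dangerous, here is a minimal sketch of the injection mechanics and the standard defense.  It uses Python’s built-in sqlite3 module purely to be self-contained; the table and values are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, pan_last4 TEXT)")
conn.execute("INSERT INTO cards VALUES ('Alice', '1111'), ('Bob', '2222')")

user_input = "' OR '1'='1"   # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# so it returns every row instead of none.
vulnerable = f"SELECT * FROM cards WHERE holder = '{user_input}'"
print("concatenated:", conn.execute(vulnerable).fetchall())

# Safer: a parameterized query treats the payload as a literal value.
safe = "SELECT * FROM cards WHERE holder = ?"
print("parameterized:", conn.execute(safe, (user_input,)).fetchall())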

Excessive privileges cannot always be addressed at the DBMS level.  In today’s packaged software world, a lot of the rights are managed and maintained at the application level, and that security matrix is maintained in a database table.  The granularity that can be granted is usually where things go awry, because the application’s security system only provides an “all or nothing” approach.  Application vendors are getting better with this because of SOX, HIPAA, PCI and the like.  However, organizations typically need to be on the most current release to have access to such enhanced security granularity.  Unfortunately, there are very few organizations that can afford the most current release or can implement it due to their extensive modifications.  The simplest way to address this issue is periodic review of database privileges, minimizing the number of users that have excessive privileges.  In the longer term, I expect we will see the return of the data dictionary, with the addition of user rights and roles, to manage this problem.
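
The periodic review itself does not need to be elaborate.  A minimal sketch, with entirely hypothetical users and role definitions, of comparing a grants dump against what each account should actually hold:

# Hypothetical dump of (user, privilege) pairs, e.g. exported from the
# DBMS grants tables, checked against the privileges each account is
# supposed to have.
grants = [("app_svc", "SELECT"), ("app_svc", "DROP"),
          ("report_ro", "SELECT"), ("dba_jane", "DROP")]

allowed = {"app_svc": {"SELECT", "INSERT", "UPDATE"},
           "report_ro": {"SELECT"},
           "dba_jane": {"SELECT", "INSERT", "UPDATE", "DROP"}}

excessive = [(user, priv) for user, priv in grants
             if priv not in allowed.get(user, set())]

print("excessive grants to review:", excessive)

Run monthly or quarterly, even something this simple surfaces the accounts that have quietly accumulated rights they should not have.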

Unnecessary features being enabled is a vendor and DBA issue.  In some cases, vendors make changing features impossible or nearly impossible once the RDBMS is installed.  In some cases, there are physical reasons why a feature must be enabled at installation.  However, there are also instances where features could be enabled or even disabled at any time, but because the vendor only supports doing that at installation, that is the only time you can deal with the feature.  This results in a lot of DBAs installing the RDBMS with every feature/function available, whether it is needed or not, just in case they might need it later on.  Do not get me wrong, as I understand the drivers for this practice.  In today’s “I needed it yesterday” world, it is tough to be responsive when something will require an entire re-install of the RDBMS and migration of existing data in order to get something done.  It is time for IT people as a whole to start explaining to non-IT people that there are just some tasks that take time to do properly, no matter how quickly anyone needs them completed.  Our infrastructure has become susceptible to attack in large part because of this rapid response desire.  If we intend to make things secure, we need to stop and think things through before creating even larger issues.

The previous issue feeds directly into the next: broken configuration management.  Configuration management is broken because the vendors make it virtually impossible not to break it.  And even when configuring and changing configurations is easy and manageable, DBAs are not always as structured as other IT operational disciplines.  As a result, whether talking about the configuration of the RDBMS or the client that loads on the workstation, configurations are too broad because of the “just in case” factor.  I know it is a pain to only enable what needs to be enabled and then six months later have to reinstall everything just for a particular feature, but that is the correct way to do things if you intend to be secure.

Buffer overflows, privilege escalation and denial of service are all common vulnerabilities that organizations will have differing levels of success in mitigating.  I will tackle the easiest to address first: privilege escalation.  If there is any area where security can always be addressed, it is with privilege escalation.  Privilege escalation exists because someone, usually a developer, decided to allow users to perform a task that their privileges do not permit; if users were allowed to perform the function outright, there would be no need to escalate privileges to perform it.  The easiest thing to do is to disable those functions that require privilege escalation.  However, in some cases, that approach will create operational issues that will be unacceptable.  In those cases, monitor the daylights out of things so that you can be sure the escalated privileges were used only for the intended purpose.

In a lot of cases, there is little that can be done to address a denial of service (DoS) attack short of blocking the offender(s).  Denial of service does not compromise information; it just makes the information stored in the database unavailable.  And for most organizations, that is an important distinction.  If no information has been or can be compromised, then DoS is an annoyance and should be treated as such.  However, some DoS attacks can be used to defeat security measures in the RDBMS by causing the RDBMS to fall back to a basic operational state.  It is in these situations that one has to be careful, because information can be lost.  The easy fix is to put a firewall in front of the database and enable its DoS attack protections.

Buffer overflows are the most insidious attacks because, in some cases, there is little that can be done to stop them.  A lot of security professionals make the success of buffer overflow attacks sound like it is all the result of sloppy coding practices.  And while there is some truth to that view, the amount of which depends on the skills of your programmers, the success of buffer overflow attacks is also the result of embedding too much flexibility into our applications and leveraging the capabilities of the RDBMS.  In today’s world of open constructs such as SQL and RegEx, we have effectively made everyone a potential database programmer, all in the name of expediency.  Yes, customer service is highly improved, but at what cost?  Web application firewalls can minimize buffer overflows by “learning” how your SQL calls are typically structured, but they are not a complete answer, nor do they completely remove the risks.  The way to fix the problem is to reduce functionality, making applications more complicated and difficult to use.  For most organizations that is not an option.  As a result, we must minimize the risks but be willing to accept those that remain as a result of our desire for ease of use and flexibility.  Minimizing the risk may mean implementing that Web application firewall internally as well as externally.

While I was glad to see unpatched RDBMS software low on the top 10 list, I was very disappointed that it was still in the top 10.  One would think that with all of the discussions about the importance of patching software, this would not appear in the top 10.  I understand the issues of compatibility and testing that make patching difficult, but really?  Maybe you need to invest in more than one or two instances of the RDBMS.  This is the cost of doing business the correct way.  If you are not doing things the correct way, then do not complain when you have a breach.  While you saved yourself money on licensing costs on the front end, you likely paid for that cost savings a hundredfold on the back end.

I also understand the issues and fears with encryption.  For a lot of people, encryption is this mystical science that only certain “geeks” practice and practice well.  For others, the problem with encryption is the perceived loss of ready access to their data.  As time goes on, I would say that unencrypted data will rise to the top of the top 10 list.  Why?  Because the information age is all about the control of information.  The more information you control and can use to your advantage, the more power and control you have.  If your information can be readily obtained through public sources or the lax security surrounding your information systems, then you have little, if any, power or control.  The next 10 years will likely be spent by most organizations figuring out what information is critical to their business model and implementing the necessary protections around that information.  Critical information will be protected like the gold at Fort Knox because, to that organization, it is their “gold” and must be protected accordingly.  And that protection will likely involve encryption for some or all of it.

I know that people have a lot on their plates these days.  However, if you are a security person or a DBA, you need to leverage surveys like this one to your advantage and address the top 10 issues.  If more companies did so, less data would be breached.

20 Mar 11

PCI SSC Updates The ASV Program and Issues New Information Supplement

March 2011 has been a busy month thus far for the PCI SSC.  On Thursday, March 10, they announced a new ASV training program and on Friday, March 18, they released an Information Supplement on protecting telephone-based cardholder data.

The ASV training program blindsided the ASV community; it was a total surprise.  Yes, there has been talk over the years at the Community Meetings and in other venues regarding ASV qualifications and training, but nothing ever seemed to come from those discussions.

According to the press release, the ASV training program, “… is a direct response to the feedback we’ve received from the merchant community …”  One can only assume that the complaints that have been voiced about ASVs from merchants and service providers as well as the comments in the media have finally caught the attention of the PCI SSC and they are going to address the problem with training.

The most consistent complaint I have heard from clients over the years is that no two ASVs ever scoped their network the same for vulnerability scanning.  The next most common complaint was always about the differing results of the vulnerability scanning, which was most often voiced after a company changed ASVs.  While there are always some bad apples in the bunch, I do believe that most ASVs know what they are doing.

As a result of all of this, I am sure this new training requirement is meant to address the complaints and make the program better.  This training is in addition to the network security test ASV companies already have to pass.  For those of you that did not know, ASVs have to conduct network security scanning against a test network, with predefined vulnerabilities, operated and configured by the PCI SSC.  ASVs are expected to produce a sample ASV report and document all of the predefined vulnerabilities.

The ASV training comprises an eight hour online course and an examination.  A minimum of two employees are required to take the course and pass the examination.  Once this has been accomplished, the organization is designated as having Qualified ASV employees.  As with a lot of PCI requirements, there are some important dates involved.  ASVs renewing their status prior to June 1, 2011 need to have two personnel trained and passing the examination by June 15, 2011.  ASVs that will certify after June 1, 2011 need to have two staff trained and passing the examination by their renewal date.

Given the additional cost of this new training plus the requirement to have a minimum of two people trained, it will be interesting to see if any of the existing 130 ASVs drop out because of this new requirement.

Telephony Cardholder Data

The other issuance of interest this month was in regard to telephone conversation recordings that contain cardholder data.  While this is a longer dissertation on FAQ 5362, there is really nothing new presented in this information supplement.  The bottom line still is that if you have call recordings that contain cardholder data, you are required to: (a) not record it in the first place; (b) if recorded, redact it if possible; or (c) make sure that it cannot be searched, is encrypted and access is restricted.  The best thing about this information supplement is the flowchart that was created for this situation.
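
On the redaction point, finding primary account numbers (PANs) in transcripts or other searchable text is a well understood problem.  A minimal sketch using the standard Luhn check; the regular expression and sample text are illustrative only, and a production redaction tool would need to handle far more formats.

import re

def luhn_ok(digits):
    """Standard Luhn checksum used to validate card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text):
    """Yield 13-19 digit runs (allowing spaces/dashes) that pass the
    Luhn check -- candidates for redaction."""
    for match in re.finditer(r"(?:\d[ -]?){13,19}", text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            yield match.group()

sample = "The card number is 4111 1111 1111 1111, expiring 12/15."
print(list(find_pans(sample)))   # flags the classic test PAN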

09 Feb 11

The “Magic” Vulnerability – Revised

What started this post is that I recently received a number of calls and messages from clients and colleagues.  The conversations all went basically the same way.  They were calling to tell me that their ASV had failed their vulnerability scan because the OS detected was unsupported, and they wondered whether I had encountered this before.

My first question usually was along the lines of; “So, what vulnerabilities did they detect?”

“None,” was the confused answer at the other end of the line.

“What?  They must have detected at least one high, severe or critical vulnerability?  That is the only way you can fail,” I would ask, now also confused.

“Nope.  Nothing.  Just the fact that the OS is unsupported,” I was told.

Do not get me wrong.  I am not advocating the use of unsupported operating systems, particularly unsupported versions of Windows.  The risk, of course, is that one or more vulnerabilities show up that the vendor will not fix because the OS is no longer supported.  So there is good reason to avoid this situation.  However, there are also situations when you have no other choice, whether due to your own organization’s issues and politics or due to software vendor issues.

This situation got me thinking and doing some research, since I did not remember ever seeing or being told that an unsupported OS was an automatic vulnerability scan failure.  I no longer do external vulnerability scanning, so my recollections of training and working on the ASV side of our business are a bit fuzzy and very rusty.  However, I had never failed a client for an unsupported OS.  So when this issue came up, my only course of action was to determine what had changed.

The first thing I did was review the latest version of the PCI ASV Scanning Procedures, v1.1.  I searched for terms such as ‘old’, ‘unsupported’, ‘out of date’, ‘OS’ and ‘operating system’.  No matches.  So there is nothing in the ASV scanning procedures that fails an organization for running an unsupported OS.  Even the PCI DSS does not call out unsupported software, so, procedurally, I was thinking there is nothing explicit regarding unsupported OSes causing a failed vulnerability scan.

After I made the original posting, I got a comment from one of my readers pointing me to the ASV Program Guide.  Lo and behold, at the top of page 16 is the following:

“The ASV scan solution must be able to verify that the operating system is patched for these known exploits. The ASV scanning solution must also be able to determine the version of the operating system and whether it is an older version no longer supported by the vendor, in which case it must be marked as an automatic failure by the ASV. “

So there is no “magic” vulnerability I was missing as the PCI SSC does specify that a scan automatically fails if the OS is unsupported.

But that is not the entire story.  The key to this whole process is that the vulnerability scanner used must be able to verify the operating system.  While all vulnerability scanners attempt to identify the operating system, the reliability of this identification process is suspect at best.  I am not aware of any vendor of security testing tools that claims it will identify an operating system 100% of the time.  That is because there are many, many things beyond the tools’ control that can influence the OS signature and therefore greatly affect the ability of the tool to identify the OS, particularly when talking about external scanning.  And if an organization follows OS security hardening guidelines, a lot of unsupported OSes will not be properly or reliably identified by vulnerability scanners.  As a result, I find it hard to believe that the PCI SSC intended to have ASVs rely only on the results of a vulnerability scanner, but that seems to be the case.

With this clarification, I contacted our ASV personnel, and they told me that they too have been failing vulnerability scans if they run across unsupported operating systems.  I asked: if the OS signature is inconclusive, then there is not a failure?  Yes, if the scan comes back and does not identify the OS, then they have nothing to go on to fail the scan and the scan passes.  Given the difficulties vulnerability scanners can have identifying target operating systems, such as when scanning through network firewalls, Web application firewalls, load balancers and the like, I then asked if they felt these identifications are reliable enough to fail a scan.  I was told this is why they confirm the information with the client before issuing the report, so that the report is accurate.  So if a client is not honest, they could influence the results of their scan?  I was reluctantly told that is probably true.

Then there is the issue that not all operating systems are created equal.  Operating systems such as MVS, VMS and MCP are nowhere near as risky, if they are even risky to begin with, as Windows and Linux.  A lot of ASVs would argue that they never come across these operating systems running Web services.  However, all of them have the capability of running Web services, and I personally know of a number of organizations that run their Web services from such environments.  Organizations are running these older operating systems mostly because of the financial considerations of migrating to something else.  However, I can guarantee that none of the dozens of vulnerability scanners that I have used in the last 10 years would accurately identify any of these operating systems, let alone tell you the version, unless some service message header information was retrieved by these tools.  And even then, most tools do not parse the header to determine the OS, so it would take human intervention to make that determination.

Regardless of the failure, most ASVs do have a review or appeal process that allows organizations to dispute findings and to submit compensating controls to address any failures.  So organizations that cannot get rid of unsupported OSes can use a compensating control.  As with compensating controls for the PCI DSS, the organization is responsible for writing the compensating control, and the ASV needs to assess it to ensure that it will address the issues identified by the vulnerability scan.

So, if you can fail an organization over an unsupported OS, why is it that you do not automatically fail on unsupported application software?  I went through the Program Guide, and there are all sorts of other criteria for applications but nothing regarding what to do if they too are unsupported.  Applications such as IBM WebSphere and Oracle Commerce can become unsupported just as easily as their OS brethren.  And in my experience, use of unsupported application software is even more prevalent than unsupported OSes, under the idea that if it is not broken and does not have vulnerabilities, why upgrade?  When I asked our ASV group if they fail organizations on unsupported applications, I got silence and then the response that they will fail an application if the vulnerability scanner reports a high, severe or critical vulnerability.  To tell you the truth, while vulnerability scanners regularly return text header information for a lot of applications, I would be hard pressed, without doing a lot of research, to find out if the version being reported was unsupported.  However, scanners could provide this feedback if they were programmed to provide it.
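
Programming that feedback is not hard in principle.  A minimal sketch of checking an HTTP Server banner against an end-of-life list; the EOL table here is hypothetical, and a real implementation would need the vendors’ actual support matrices and far more robust banner parsing.

import re
import urllib.request

# Hypothetical end-of-life version prefixes; a real check would be
# driven by the vendors' published support matrices.
EOL = {"Apache": ("1.3", "2.0"), "Microsoft-IIS": ("5.0", "6.0")}

def server_banner(url):
    """Fetch only the response headers and return the Server banner,
    the same text a vulnerability scanner sees."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.headers.get("Server", "")

def flag_unsupported(banner):
    m = re.match(r"([A-Za-z-]+)/([\d.]+)", banner)
    if not m:
        return None   # nothing parseable -- the common case noted above
    product, version = m.groups()
    eol = EOL.get(product, ())
    return (product, version) if eol and version.startswith(eol) else None

banner = server_banner("http://example.com/")
print("banner:", repr(banner), "unsupported:", flag_unsupported(banner))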

Then there are all of the conspiracy theories out there that say the PCI SSC and technology companies are working together to drive software sales by forcing organizations to upgrade, and there is a lot of anecdotal evidence that would seem to support this argument.  In reality, it is not that the software companies are working with regulators such as the PCI SSC so much as that software companies operate this way in order to focus development and support resources on fewer, more current versions.  It is just happenstance that regulations cause organizations to have to update their software.

The bottom line in all of this is that you have options to avoid a failing vulnerability scan because of an unsupported OS.  The best method, and the one I most recommend, is to not use unsupported operating systems in the first place.  However, as a former CIO, I do understand the real world and the issues IT departments face.  As a result, I recommend all of the following, which may or may not require you to develop a compensating control.

  • Implement not only a network firewall, but also a Web application firewall (WAF) and make sure that the rules are extremely restrictive for servers running unsupported operating systems.
  • Configure your firewalls to block the broadcasting of any OS signature information.  Masking the OS signature provides the benefit of not advertising to the world that the OS behind a given application is unsupported.  This is not a perfect solution as, 9 times out of 10, the application itself will advertise the fact that the underlying OS is unsupported.  It is very important to note that this is only a stop gap measure and you should still be actively in the process of migrating to a supported OS.
  • Implement real-time monitoring of firewalls, servers and applications.  Define very specific alerting criteria to ensure that any suspicious activity is immediately reported and operations personnel immediately follow up on any alerts to determine whether they are a false positive.
  • Implement a host-based intrusion detection/prevention solution on any servers that run the unsupported OS.  If using a HIPS solution, you may also want to consider using its preventative capabilities for certain critical incidents.
  • Implement real-time log analysis for firewalls, servers and applications.  Define very specific alerting criteria to ensure that any suspicious activity is immediately reported and operations personnel immediately follow up on any alerts to determine whether they are a false positive.
  • Actively use your incident response procedures to address any incidents that are identified with any unsupported OS.

03 Feb 11

Intent Of Requirements – 11.2

As I discussed in my earlier post on the intent of requirement 6.1, requirement 11.2 is another requirement where there is a hard and fast metric of four “clean” quarterly vulnerability scans.  Nice idea, but tough to execute.

To remind everyone, the intent of vulnerability scanning is to: (a) ensure that if a patch has been applied, it is in fact in place and the vulnerability has been addressed and (b) identify any new vulnerabilities.  Requirement 11.2 is in response to at least two issues.  The first is that organizations that do not do vulnerability scanning have no way to assess the larger risk their technology infrastructure poses to the potential loss of data.  The second is to ensure that updates and patches are in fact truly correcting vulnerabilities.  I have personally seen situations where patches have removed other patches, thus leaving a system at risk of compromise.  Granted, these situations are rare, but without vulnerability scanning, the removal of these patches would never have been known.

Requirement 11.2 requires that vulnerability scanning is performed at least quarterly.  However, given the 30 day patching rule, the fact that most vendors have a monthly release of patches and the fact that scanning must be performed after all “significant” changes, an organization really needs to conduct monthly vulnerability scanning at a minimum just to stay compliant.  As a result, if you have an organization doing anything less than monthly scanning, you really need to question if they understand the vulnerability scanning requirement.  However, from a ROC reporting perspective, you really only need to report the results of four quarterly scans even though 12 or more scans are available.

Then there is the problem of the definition of a “clean” scan.  According to the PCI SSC, a “clean” scan is one that contains no severe, high or critical vulnerabilities.  You hear complaints about this time and again from organizations: “How do you ever get a ‘clean’ scan when new vulnerabilities are introduced almost every day?”  This is a very fair question when most security professionals know that the odds of obtaining a “clean” scan are against them because 94% of all vulnerabilities are rated severe, high or critical.  The fact that tests for vulnerabilities can show up in scanning engines long before the vendor has a patch just exacerbates the situation.  Is it any wonder security professionals are frustrated with this requirement?

This is an area that the PCI SSC has really failed to address and bring some reality to the requirement.  For a lot of large organizations, vulnerability scans that have no severe, high or critical vulnerabilities documented cannot physically be achieved without some rare alignment of the planets, stars and moons.  They could be scanning 24×7 and still could not have a complete, truly “clean” scan of their environment.  As a result, a “clean” scan is usually all in the eye of the beholder.

But that is not the entire problem.  A lot of organizations run packages such as Oracle Commerce, IBM WebSphere or MICROS Simphony.  Oracle, IBM and MICROS typically have issues with some OS or system software vendor patches, such as those for Windows, Linux or Apache.  They usually issue advisories to their customers indicating that these patches cannot be installed.  In these cases, severe, high or critical vulnerabilities may not get patched for months while these vendors develop an update to their solution and package the other vendors’ patches.  As a result, organizations with these solutions can have severe, high or critical vulnerabilities in their scans for quite a while.  In these cases, a QSA needs to determine whether the organization has followed the vendors’ recommendations, or any other recommendations, for mitigating the risks of having these vulnerabilities.  QSAs also need to see proof that once Oracle, IBM or MICROS issue their fixes for these situations, these organizations implement them in a timely manner.

As a result of these constraints, I have developed my own definition of a “clean” scan.  Again, I am not advocating that vulnerability scanning does not have to result in “clean” results.  I am advocating that QSAs bring reality to the process.  In my world, a “clean” scan is a scan that results in no severe, high or critical vulnerabilities that were not documented in any of the previous scans.  If there are repeated severe, high or critical vulnerabilities, those vulnerabilities should only be the result of vendor patching issues, and the organization must have documentation that proves that management understands that they exist, what measures the organization has taken to mitigate their existence and when the vendor anticipates having the vulnerabilities addressed.
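
Expressed as a check, my definition is simply a set difference between the current scan and the history of prior scans.  A minimal sketch, with hypothetical findings as (host, vulnerability, CVSS score):

# Hypothetical prior and current findings as (host, vuln_id, cvss).
previous = {("10.0.0.5", "CVE-2010-1234", 7.5)}   # known vendor issue
current = {("10.0.0.5", "CVE-2010-1234", 7.5),
           ("10.0.0.6", "CVE-2011-0001", 9.3)}

new_failing = {f for f in current - previous if f[2] >= 4.0}
repeats = {f for f in current & previous if f[2] >= 4.0}

# Under the definition above, the scan is "clean" only if nothing
# severe, high or critical is new; repeats must be documented vendor
# patching issues with mitigation and an anticipated fix date.
print("clean" if not new_failing else f"not clean, new: {sorted(new_failing)}")
print("repeats needing documentation:", sorted(repeats))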

Here is my rationale for this definition.  There are just too many factors that go against getting what the PCI SSC defines as a “clean” vulnerability scan.  Real world experience says that getting a “clean” scan is near to impossible, particularly in large environments.  I can count on one hand the number of organizations that I have worked with that have truly four quarters of “clean” scans and none of them are very large.  Therefore, as long as I can prove that severe, high or critical vulnerabilities are being addressed in a reliable and as timely a manner as possible, then the scans are clean.

All of you anal-retentive types, I am sure, are going to flame me for not following the standard.  However, what is a standard worth if it cannot be applied equitably to all?

21 Jan 11

Why Stuxnet Matters To PCI Compliance

There is an article in the Sunday, January 16, 2011, New York Times that says the American and Israeli governments were behind Stuxnet, confirming a rumor that has been running around ever since Stuxnet was identified.  The consensus of the experts is that Stuxnet is to cyber warfare what the airplane was to conventional warfare, a radical game changer.

So why does Stuxnet matter to merchants, processors, acquiring banks and the card brands?  A whole lot if you think about it.

First, Stuxnet proves beyond a shadow of a doubt that governments are investing in cyber war and that in cyber war anything on the Internet is fair game.  While Stuxnet was built to target the centrifuges Iran uses to refine uranium, there were a number of incidents of “collateral damage.”  This “collateral damage” was the result of Stuxnet attacking anything that used the same Siemens SCADA controllers.  Granted, Stuxnet was not as nasty to devices that were not centrifuges, but it still caused problems.  Imagine if an entity wrote an attack for a common device or protocol while hoping to actually target one particular organization.  Do you think your organization could become “collateral damage” in such an attack?  I would say it is highly likely.

Second, imagine the damage that could be done if a terrorist group or a government decided to go after another country’s financial systems using a Stuxnet type of attack.  Does such an attack sound unrealistic?  It should not, given that the quickest way to bring any country to its knees is through its financial systems.  What makes things doubly bad is that, thanks to outsourcing, most banks use a very limited number of application solutions, making such an attack that much easier.  Given the reliance of countries on their economic capabilities, such an attack would most likely be carried out by a rogue nation such as North Korea that has nothing to lose, but any other country might resort to it if provoked long and hard enough.

But what if the attack was directed against credit card terminals?  While a lot of people would say that is farfetched, it is not as wild as it might seem.  All you need is someone on the inside at Ingenico, Verifone or the like to doctor the card terminals’ software to do whatever you want it to do.  Even large merchants do not necessarily monitor their card terminals, so such an attack could go on for quite a while before it was noticed, if it was ever noticed at all.  Criminal gangs have been producing limited numbers of doctored terminals for the last four to five years.  Imagine this done on a large scale and you start to understand how nasty a threat this could be.  If doctored terminals were introduced into the distribution stream at the manufacturer, there would be no way of knowing you had been compromised unless you were monitoring your network properly, which most organizations do not do.

Finally, there is the doctoring of integrated point of sale (POS) solutions or similar applications.  Again, not as farfetched as you might think.  There have been a number of instances over the years where software was written to provide backdoors or other openings in systems that allowed information to be leaked.  This is why certain governments have gone into the software business.  This is also why there are now valid concerns about how you confirm that your software is only doing what it is supposed to be doing.

The bottom line in all of this is that these concerns are no longer the ramblings of the paranoid among us.  These once imaginary-seeming scenarios have actually come to pass, and we need to address how to mitigate them.  So from a PCI perspective, what should an organization be doing?  While all of the PCI DSS provides methods to protect an organization, the following are what I consider the most important regarding inside attacks.

  • Monitor your internal network – This is the single most important way to protect your network from doctored devices and applications.  A doctored device or application must transfer the information it has collected, either in real time or in a batch process, and that transmission may be outbound or via an inbound process.  Outbound monitoring should be the easiest because most organizations know what external IP addresses to allow.  Most organizations dispute that inbound communications are a risk, but if you remind people how GoToMyPC and others of its ilk operate, they begin to understand how their networks could easily be compromised from the outside.
  • Analyze your logs – This is probably the next area where a lot of organizations are not doing a good enough job (a minimal sketch of the idea follows this list).  Most organizations do a good job collecting log data, but then do a mediocre or poor job analyzing that data to find exceptions.  The reason for this poor performance is a failure to keep attack criteria current.  While the initial implementation may do a good job creating definitions, as time goes on the log analyses are not enhanced or updated to reflect changes in attacks and new attacks.
  • Tighten internal controls – Once inside most organizations’ security perimeters, security gets pretty loose and free, if it even exists beyond a logon.  Unfortunately, attackers understand this fact, which is why they focus on getting inside; once an attacker is inside, it is pretty much a cakewalk to get whatever they want.  This is why locking down ports, reviewing firewall and ACL rules, disabling or removing unused services and disabling or removing unused user accounts become so important.  The fewer attack points you provide on your internal network, the more resilient it will be should an attacker get inside.
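
To make the log analysis point concrete, here is the minimal sketch promised in the list above.  The log format, the OUTBOUND tag and the allow list are all hypothetical; real firewall logs vary widely, and the allow list would come from your own documentation of known external destinations.

    import re
    from ipaddress import ip_address, ip_network

    # Hypothetical allow list of known good external destinations.
    # 203.0.113.0/24 is a documentation range used as a stand-in here.
    ALLOWED_DESTINATIONS = [ip_network("203.0.113.0/24")]

    # Hypothetical log format; adjust the pattern to your firewall's output.
    LINE_RE = re.compile(r"OUTBOUND src=(\S+) dst=(\S+) dport=(\d+)")

    def flag_unexpected_outbound(log_lines):
        """Yield (src, dst, port) for outbound connections whose destination
        is not on the allow list, i.e. candidates for doctored-device traffic."""
        for line in log_lines:
            match = LINE_RE.search(line)
            if not match:
                continue
            src, dst, port = match.groups()
            if not any(ip_address(dst) in net for net in ALLOWED_DESTINATIONS):
                yield src, dst, int(port)

    sample = ["Jan 21 10:02:11 fw OUTBOUND src=10.1.2.3 dst=198.51.100.7 dport=443"]
    for hit in flag_unexpected_outbound(sample):
        print("review:", hit)  # flags the unknown destination 198.51.100.7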

Remember, while the city of Troy was protected by insurmountable walls, Troy fell because it was attacked from the inside, an attack vector the Trojans felt was not realistic or possible.  Troy’s shortsightedness was the result of arrogance and a failure to understand that an enemy determined to defeat you will find a way to overcome the insurmountable.  Learn from Troy’s mistake.

01
Dec
10

Interesting Announcements From The PCI SSC

For those of you who are not QSAs, the PCI SSC over the last year has tried to keep QSAs in the loop by issuing a monthly Assessor Update newsletter via email.  These are usually not noteworthy, but the November 2010 issue contains a number of items that need to be shared in case you missed your edition or are not a QSA.

PCI DSS Timeline Clarification

The Council apparently got the message that they did not communicate the sunset date for the PCI DSS v1.2.1 and the start date for PCI DSS v2.0 very well.  As a result, they issued a clarification in the November 2010 newsletter.  To quote the Council:

“Entities needing to comply with the PCI DSS are strongly encouraged to begin using the new standard immediately. However, version 1.2.1 will remain effective until December 31st, 2011 to allow everyone time to adopt any changes they may need to in order to maintain their PCI DSS compliance. This means that organizations assessing and reporting compliance during 2011 may validate to either version 1.2.1 or 2.0. However, the Council urges all organizations to complete their transition to the new standard as quickly as possible, especially where any new controls may enhance the protection of cardholder data.”

Since QSAs will not have the scoring template until sometime in January 2011, planning and executing any v2.0 assessments will be difficult until then.  As a result, the earliest I can see any v2.0 assessments getting started is March 2011.

PCI DSS and PA-DSS v2.0 Scoring Templates

And speaking of scoring templates, those for v2.0 of the PCI DSS and PA-DSS should be published sometime in January 2011.  It would be nice to have them a bit earlier, but better late than never.

Expiration Of PABP v1.4 Extended 90 Days

The PABP v1.4 standard that was expected to expire tomorrow, December 2, 2010, has been extended to March 2, 2011.  To quote the Council:

“This updated deadline recognizes the challenges many merchants and Payment Application end users have in implementing system changes over the busy holiday period, and allows the Payment Application vendor community to consider submitting new versions of their products for assessment against the new PA-DSS 2.0 standard.

The Council is committed to reviewing all submissions for the updated versions of expiring PABP v1.4 applications, and this new March 2nd 2011 deadline will allow the review process to be completed before previous versions of these applications expire.  This extension will also provide more time for PA-QSAs to complete reviews of those Payment Applications that are currently in process.  Finally, this extension will allow Payment Application vendors, should they choose to hold off on assessment of expiring Payment Applications and instead submit (after January 1st, 2011) their Payment Applications for assessment against the new PA-DSS v2.0 standard.”

ASV Sampling And Scanning Do Not Mix

While sampling of devices is allowed under the PCI DSS, it is not allowed for ASV scans.  To quote the Council:

“Within a given quarter, all Internet accessible systems must pass an ASV scan. It is not necessary that they all be scanned at the same time, but they all must be scanned quarterly.”

Apparently, some ASVs were only scanning a sampling of PCI in-scope devices each quarter.  I am sure this change will lead a lot of organizations to consolidate their external network presence.

2011 PCI SSC Training Schedule

The training schedule for next year should be posted to the PCI SSC’s Web site by mid-December.

Telecom Private Circuit FAQ Issued

See the end of my post on MPLS for the text of the FAQ.

21
Nov
10

The Threat Landscape Is Changing – Cloud Cracking

There was an article published on Threat Post this past week regarding a German security researcher who used a new feature of the Amazon.com EC2 cloud computing environment to crack SHA1 password hashes.  I am sure a lot of you are asking yourselves, “So what?  This is just another of the Chicken Little warnings that come out all of the time.”  I would agree that publications use highly threatening headlines to hype these sorts of articles to attract readers.  But if you read the articles closely, ignore the hype factor and think through the concepts they discuss, you can understand the threat they might bring to your environment.

The thing that caught my eye about this threat is that it cost the researcher less than an hour of their time and a whole $2.10 to crack six-character 160-bit SHA1 hashes.  Even more disconcerting was that all of the necessary hardware and software was readily available through the EC2 service.  And had the researcher desired, they could have used even more GPUs to shorten the time to crack these hashes.  Granted, the researcher only cracked 14 of these hashes, but what if those hashes were to one or more administrator accounts?
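
To see why short inputs fall so cheaply, here is a minimal single-threaded Python sketch of the same kind of exhaustive search; the character set, length cap and example password are my own assumptions.  The researcher’s GPU-backed EC2 run performed this same search, just orders of magnitude faster than this illustrative loop ever could.

    import hashlib
    import itertools
    import string

    def crack_sha1(target_hex, max_len=6, alphabet=string.ascii_lowercase):
        """Hash every candidate up to max_len characters and return the
        one whose SHA1 digest matches target_hex, if any.  Pure Python is
        slow; GPUs do this same search millions of times faster."""
        for length in range(1, max_len + 1):
            for combo in itertools.product(alphabet, repeat=length):
                word = "".join(combo)
                if hashlib.sha1(word.encode()).hexdigest() == target_hex:
                    return word
        return None

    # Example with a hypothetical six-character password.
    target = hashlib.sha1(b"secret").hexdigest()
    print(crack_sha1(target))  # prints "secret"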

I am sure a lot of you are now saying, “Yeah, but this is all theory and not a ‘real’ threat.”  No doubt about it.  I too have been known to toss out my famous “In theory, theory works” at this point.  But the only way to determine whether research represents a real threat is to read the article or research paper and then determine whether the threat can truly be applied in the real world.  Based on what I have read about this one, I would say there is a great potential for misuse of the EC2 service for all sorts of encryption attacks, not just attacks on SHA1 hashes.

A lot of you are now probably pointing to the fact that the items hashed were all six characters long.  I would agree, but then also point out that they were 160-bit hashes, not something under 128-bit.  A lot of security professionals mistakenly believe that if their hashes or encryption are above 128-bit, everything is secure.  However, the number of bits in the algorithm is not the only factor; there is also the strength of the password or key being protected.  If it is short or easy to guess, it does not matter how many bits the algorithm uses.
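
Some quick back-of-the-envelope arithmetic shows why the length of what is hashed dominates.  The hash rate below is purely my assumption for illustration, roughly the ballpark of a single GPU, not a measured figure.

    # Search cost grows with the input, not the digest width.
    RATE = 10 ** 9  # assumed hashes per second, a single-GPU ballpark

    for length in (6, 8, 12):
        keyspace = 62 ** length  # upper and lower case letters plus digits
        days = keyspace / RATE / 86400
        print(f"{length} chars: {keyspace:.2e} candidates, {days:,.1f} days to exhaust")

At six characters the search finishes in under a minute; at twelve, it takes on the order of a hundred thousand years.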

A lot of security professionals blow off threats like this because they assume that if they are scanning regularly, any new threat will be caught by their scanning.  Unfortunately, scanning only looks for vulnerabilities in devices based on known attack vectors, not a threat like this one.  This threat comes into play with any encrypted data or transmissions an attacker can come across, and it may have little to do with vulnerabilities at all.

You can argue that because you scan, comply with the PCI DSS and have no vulnerabilities with a CVSS score of 4.0 or higher, you are therefore secure.  However, as any network security professional who conducts penetration testing will attest, vulnerabilities with CVSS scores below 4.0 can be put together and used to compromise a network.  So, just because you do not have any vulnerabilities scored 4.0 or higher does not necessarily mean you are secure.

The real threat here is that should an attacker get hold of your encrypted data or data flows, they can just load it up and use Amazon.com’s EC2 cloud to attempt to break the encryption.  The supercomputer-level access that security pundits have long said attackers would need has apparently arrived with the advent of cloud computing.

But again, this is all theory, right?  You wish.  Even more shocking was a tidbit tucked away in the article.  There is a site called WPACracker.com that offers a 400-CPU cluster and a 284-million-word dictionary, and promises to crack WPA passwords in an average of 20 minutes for $17.  As a result, all of you relying on WPA to secure your wireless should consider upgrading to WPA2 as soon as you can.

The next thing that rolled through my mind is what if some enterprising individual decided to conduct some “research” for themselves using the Berkeley Open Infrastructure for Network Computing (BOINC) platform?  This is the platform that runs SETI@home, Einstein@home and other worthwhile scientific research projects that need lots of computing power.  Based on the BOINC home page, that would put over 4,000 teraflops at the command of our “researcher.”  Like all things technology based, our enterprising researcher would need to find a vulnerability in BOINC and leverage it to gain access to all of that computing power, or else make their research project look enticing enough that the BOINC user population adds the workload to their systems.  But given that BOINC is open source, it is also possible that attackers could create their own BOINC network for the purpose of cracking encryption.  I could imagine botnets being put to this purpose.

This whole threat plays out best when the attacker has inside access to an organization’s network and data.  Reading the latest reports from Verizon Business Services and Trustwave, the majority of breaches have an inside component, so it is not too farfetched that an insider would have access.  So, if you are not monitoring your network and sensitive data and not strictly controlling access to your data, then it is anyone’s guess as to whether or not someone has taken your data and is now attempting to decrypt it.

If you learned anything at all from this post, it should be that attackers are leveraging cloud computing just like the rest of us.  The unfortunate aspect is that they are leveraging the cloud to continue their questionable and sometimes illegal activities.  And in leveraging this new technology, the odds that they are successful are only going to go up.



