Archive for the 'PCI ASV' Category



26 Apr 2014

Why SAQ A-EP Makes Sense

A colleague of mine attended the PCI SSC QSA Update session at the ETA convention a couple of weeks back.  One of the big discussion items was how the Council is being pilloried over SAQ A-EP.  This SAQ was developed to address the recommendations documented in the information supplement titled ‘PCI DSS E-commerce Guidelines’, published in January 2013.  Specifically, SAQ A-EP addresses ecommerce sites that redirect customers to a processor’s site, where the actual payment processing takes place.

Based on the comments I have seen online and made in personal conversations, you would think that SAQ A-EP was heresy or a bad joke.  All of these derogatory comments are being driven by merchants that were sold a bill of goods by slick, non-PCI-informed salespeople pushing redirected ecommerce solutions with the claim that a redirect put the merchant entirely out of scope.  That was not the case and never was, particularly after the issuance of the information supplement.  Yet we still encounter outsourcing vendors that claim a redirect approach puts the merchant entirely out of scope.

To understand the rationale of SAQ A-EP, we need to understand the risk surrounding these redirect solutions.  The risk is that an attacker modifies the redirect on the merchant’s server so that it points to the attacker’s own payment page, collects the customer’s cardholder data (CHD) on that page and then, optionally, passes the customer on to the original payment page at the processor so that the customer and merchant are none the wiser.
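To make the attack concrete, here is a minimal sketch of the redirect pattern in Python using Flask; the framework choice and the processor URL are illustrative assumptions, not anything from the Council’s guidance.  The single constant holding the processor’s URL is precisely what the attacker needs to change.

    from flask import Flask, redirect

    app = Flask(__name__)

    # Hypothetical processor URL; in the attack described above, an attacker
    # who compromises the merchant's server changes this one value to point
    # at their own look-alike payment page.
    PROCESSOR_PAY_URL = "https://pay.example-processor.com/checkout"

    @app.route("/checkout")
    def checkout():
        # The merchant's server never touches CHD; it only issues this
        # redirect, which is why the redirect itself is the key control.
        return redirect(PROCESSOR_PAY_URL, code=302)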

Under the PCI DSS and card brands’ security programs, redirect systems are still in scope for PCI compliance because they are a key control in the payment process, even though the merchant’s server issuing the redirect never comes into direct contact with CHD.

With all of that said, SAQ A-EP is not a full SAQ D, but it is not as short and simple as SAQ A either.  There are a lot of requirements to be met with SAQ A-EP, which is why merchants are up in arms.  However, if you understand the aforementioned risk, you should understand why the requirements in SAQ A-EP are there.

The Requirement 1 items are all there to ensure that a firewall protects the server that issues the redirect.  This is Security 101, and I doubt any merchant would leave their Internet-facing servers without a firewall.  Routers have always been optional, and if the merchant does not control those devices, they would not be included here.

Requirement 2 is all about making sure that all devices in the cardholder data environment (CDE) are properly configured and security hardened.  Again, this is Security 101.  A merchant that is not doing this for Internet-facing devices is begging to be attacked and compromised.

The requirements called out in SAQ A-EP for requirement 3 are there to confirm that the merchant is not storing cardholder data (CHD) or sensitive authentication data (SAD).  A merchant using a redirect should be marking these as Not Applicable (NA) and documenting that they do not store CHD in their system(s) because they use a redirect that processes and transmits CHD directly between their processor and their customer.  Any merchant that answers these requirements any other way should not be using SAQ A-EP.  All of that said, merchants need to have proof that they examined logs, trace files, history files, databases, etc. and did not find any CHD or SAD in those files.
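As one hedged example of gathering that proof, the sketch below sweeps log files for PAN-like digit runs and applies a Luhn check to weed out false positives.  It assumes Python; the file scope and the simple regex are illustrative, and a real sweep would also cover databases, trace files and binary formats.

    import re
    from pathlib import Path

    PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # candidate 13-16 digit runs

    def luhn_ok(digits: str) -> bool:
        """Standard Luhn checksum; weeds out most non-PAN digit runs."""
        total = 0
        for i, ch in enumerate(reversed(digits)):
            d = int(ch)
            if i % 2 == 1:
                d = d * 2 - 9 if d > 4 else d * 2
            total += d
        return total % 10 == 0

    def sweep(root: Path) -> None:
        for path in root.rglob("*.log"):  # hypothetical scope: log files only
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                for match in PAN_RE.finditer(line):
                    digits = re.sub(r"\D", "", match.group())
                    if 13 <= len(digits) <= 16 and luhn_ok(digits):
                        print(f"Possible PAN at {path}:{lineno}")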

Requirement 4 is there to ensure that secure communications are used.  I would recommend documenting your processor’s SSL/TLS certificate information for the requirements in 4.1.  But do not pass over requirement 4.2.  A lot of ecommerce-only merchants have call centers or take telephone orders and do order entry into the same Web site used by their customers.  As a result, merchants need to make sure that email, instant messaging and the like are never used for communicating CHD/SAD.
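For documenting the processor’s certificate under 4.1, something as small as the following sketch can capture the basics; it assumes Python’s standard ssl module and a hypothetical processor hostname.

    import socket
    import ssl

    def cert_summary(host: str, port: int = 443) -> dict:
        """Capture issuer, subject, expiry and negotiated protocol for a host."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                return {
                    "subject": cert.get("subject"),
                    "issuer": cert.get("issuer"),
                    "notAfter": cert.get("notAfter"),
                    "protocol": tls.version(),
                }

    # Hypothetical processor hostname.
    print(cert_summary("pay.example-processor.com"))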

Requirement 10 matters for forensics.  Should the redirect be manipulated, logging is what allows you to determine when the event occurred and, from that, the scope of any compromise.

While one would think the vulnerability scanning and penetration testing requirements in requirement 11 would be considered Security 101 and self-explanatory, you would be surprised at how many merchants argue the point.  Again, the driver of these redirect solutions was cost reduction, and vulnerability scanning and penetration testing incur costs, sometimes significant costs depending on the number of servers, firewalls, load balancers, switches and the like involved.  But if you do not do vulnerability scanning and penetration testing as required, how do you know that the redirect system(s) are properly secured and patched?

However, the key requirement that cannot be missed is requirement 11.5 regarding critical file monitoring.  That is because the whole security of the redirect environment is pinned on detecting any modification of the redirect URL.  All of the other requirements in SAQ A-EP are there to minimize the risk of compromising the redirect.  11.5 is there to ensure that, if the other controls fail, at least the merchant would be alerted to the fact that the redirect had been changed.  If a modification to the redirect cannot be reliably detected by the critical file monitoring solution, then the security of the redirect cannot be assured.
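A minimal illustration of what 11.5 is asking for, assuming Python and a hypothetical path to the page that issues the redirect.  A real deployment would use a proper file integrity monitoring product, protect the baseline itself from tampering and route alerts to a monitored channel.

    import hashlib
    import json
    import sys
    from pathlib import Path

    BASELINE = Path("redirect_baseline.json")
    WATCHED = [Path("/var/www/checkout/redirect.php")]  # hypothetical path

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def build_baseline() -> None:
        """Run once, after verifying the redirect points where it should."""
        BASELINE.write_text(json.dumps({str(p): sha256(p) for p in WATCHED}))

    def check() -> None:
        """Run frequently; a hash mismatch means the redirect may have changed."""
        baseline = json.loads(BASELINE.read_text())
        for path_str, expected in baseline.items():
            if sha256(Path(path_str)) != expected:
                print(f"ALERT: {path_str} has been modified", file=sys.stderr)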

The remaining requirements for 5, 6, 7, 8, 9 and 12 are all Security 101 items.  If you are not following these requirements as part of best practices for security and IT operations in general, then you need to consider what exactly you are doing.

Hopefully everyone now understands SAQ A-EP and why it is not as simple as that slick salesperson implied.

01 Jan 2013

How The PCI Standards Will Really Die

Welcome to the new year.  I hope the holidays have been treating you well and the coming year is good as well.

There have been a number of articles written about why and how the PCI compliance process will die.  It is not that I look forward to the PCI standards dying, as they have brought needed visibility to information security and privacy, and PCI keeps me gainfully employed.  However, if things stay on their current trajectory, the PCI standards will eventually die, but not for the reasons quoted in today’s articles.  The real killers of the PCI compliance process will be the card brands and the PCI Security Standards Council.  Yes, the very folks that brought us the PCI standards will bring about the ultimate demise of their precious set of standards.

The first death knell I see is that it is very easy to issue edicts from on high when you do not have to implement them.  Over the years, clarifications have been issued, quality assurance reviews performed, forensic examinations conducted and a host of other activities have resulted in “enhancements” to how the PCI standards are assessed and enforced.  Do not get me wrong, a lot of what has been done was needed and appreciated.

However, by the same token, some of what has come down has been a nightmare to implement.  Any QSAC not using some sort of automated system to conduct their PCI assessments will find it impossible to meet the current and any future documentation and tracking standards now required by the PCI SSC’s QA process.  Under the current standards, QSACs need to document who they interviewed and what the persons were interviewed about as well as tying documentation and observations to the tests performed.  Without some sort of automated process, these requirements are just too intensive to perform manually.

Documentation received and reviewed needs to have its file name, date of issue and a description of its purpose in the PCI assessment process documented.  The basic PCI DSS requires a minimum of around 200 discrete documents for the assessment process.  The average we see for most of our engagements is over 600 documents, including not only policies, standards and procedures but also configuration files, interview notes and observations such as screen shots, log files and file dumps.  You really have to question any QSAC that tells you they manage this process manually.  Either they have an amazing and magically efficient project management process, they have very, very inexpensive staff (i.e., overseas labor), or they are shortcutting the process and producing a work product that does not comply with the PCI SSC QA program and have yet to be assessed by the PCI SSC (the most likely scenario).
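For what it is worth, the tracking itself is not conceptually hard; a structure as small as the hypothetical Python sketch below covers the metadata the QA process asks for.  The pain is doing this reliably for 600+ documents across dozens of engagements.

    from dataclasses import dataclass, field

    @dataclass
    class EvidenceItem:
        file_name: str      # as received from the client
        date_of_issue: str  # when the document was produced
        purpose: str        # why it matters to the assessment
        requirements: list = field(default_factory=list)  # tests it supports

    item = EvidenceItem(
        file_name="fw_config_dmz_2012-11-01.txt",  # hypothetical evidence file
        date_of_issue="2012-11-01",
        purpose="Firewall ruleset supporting the segmentation review",
        requirements=["1.2.1", "1.3.4"],
    )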

Even simple SharePoint or Lotus Notes solutions are not cheap when you consider the cost of the server(s), the storage of all of the documentation collected (which can run around 5 to 10GB per project) and all of the requisite system maintenance.  Servers and storage may be cheap, but it all adds up as the number of clients you assess grows.  And speaking of the storage of documentation, the PCI SSC requires that documentation related to PCI assessments be retained for at least three years.  For those of us with electronic work paper management systems, this is not a problem.  However, given the amount of paper generated by these projects, QSACs using traditional paper filing methods will find a lot of shelf space taken up by their PCI engagements if they are truly following the procedures required by the PCI SSC.

All of this drives up the cost of a proper PCI assessment, more than I think the card brands and the PCI SSC are willing to admit.  It is not that the card brands and PCI SSC do not care about this situation; rather, they do not understand the operational ramifications of their edicts.  The card brands and PCI SSC tread a very fine line here, and to this point they have been heavy handed in issuing their edicts.  Going forward, the PCI SSC needs to ask the QSACs, Participating Organizations and ASVs to assess the cost and time impacts of these edicts so that they can be weighed against their benefits, versus what is done now, which is more of a procedural and proofing review.  If this is not done, there will soon come a point where merchants and service providers push back hard and refuse to go through the process due to the cost and the amount of time involved in being assessed.

The next death knell is the inane process that is the PCI Report On Compliance (ROC).  When the PCI SSC did not have access to the QSACs’ work papers, the current ROC writing process made some sense, as there was no other way for the PCI SSC or the processors and acquiring banks to know whether the QSACs had really done the work they claimed to have done.  However, all of that changed a number of years ago when the PCI SSC required QSACs to add a disclaimer to their contracts stating that the PCI SSC had the right to review all work products.  Yet even with this change, we continue to have to write an insanely detailed ROC, typically running to 300+ pages for even the most basic of ROCs.

Unfortunately, there are QSACs out there that apparently have not been through the PCI SSC QA process and that most dreaded of all states – Remediation.  As a result, they have much lower costs because they are not documenting their assessment work as completely as they need to and are not sampling, observing or interviewing like QSACs that have been through the QA process.  In addition, based on some work products we have seen, they do not care about the quality of the resulting ROC either; it looks entirely like a ‘find and replace’ of a template and makes no sense when you read it.  In talking to other large QSACs that have been through the QA process multiple times, the PCI SSC has indicated that it monitors the large QSACs more than the small ones because there is more risk with the large QSACs.  While true to an extent, we have encountered a number of smaller QSACs that perform assessments for large clients due to their much lower cost structure and their willingness to ‘overlook’ compliance issues.  If the PCI SSC does not go after these QSACs soon, there will likely be a number of breaches that occur due to those QSACs’ lack of diligence in performing their assessments.

I know of a number of QSACs that would like to see Bob Russo and the representatives of the various card brands actually work as staff on a few PCI assessment engagements so that they can better appreciate the inordinate amount of work involved in generating a ROC.  I think they would be shocked at the amount of work effort they have driven into a process that is already too complicated and prone to error.

As it stands today, the ROC writing, review and proofing process is probably 50% to 60% of a good QSAC’s project costs.  To address this, the PCI SSC QA group tells QSACs to develop one or more templates for writing the ROC, which, from what we have seen from some other QSACs, means a lot of mass ‘find and replace’ to speed the ROC writing process.  For the last few years, a number of QSACs have brought the ROC writing process up at the Community Meetings.  However, the card brands continue to shoot down any sort of change to the process.  As a result, the cost of producing a ROC is driven by the size and complexity of the merchant’s or service provider’s cardholder data environment (CDE).  These costs will only continue to rise as long as the PCI SSC does not allow QSACs to mark items as ‘In Place’ with only a check box, relying on the QSAC’s work papers versus the verbosity required now.  If this sort of process can work for financial auditors, it can work here as well.

A third death knell is the PCI SSC and card brands continuing to claim that the majority of breaches are the result of organizations not complying with the PCI DSS.  In discussions with a number of the PCI forensic examination companies, I am hearing that the card brands cannot believe that more and more organizations were PCI compliant at the time of their breach.  The PCI SSC and card brands have apparently convinced themselves that the PCI standards are “perfect” and cannot imagine that an organization could be breached unless it was not complying with them.  There is no security standard I am aware of that totally prevents breaches.  So while the PCI standards are good baseline security standards, the card brands and PCI SSC seem to have forgotten that security is not perfect and that any security standard only minimizes the damage done when a breach occurs, and only if the standard is truly followed.

And as organizations have gotten the PCI “religion,” the effort required to compromise them from the outside via traditional attacks has increased significantly.  As a result, successful attackers have changed strategy and work on social engineering their way past the bulk of an organization’s security measures.  The PCI DSS only has a little bit on social engineering in requirement 12.6 regarding security awareness training.  And even those organizations with the most robust of security awareness programs will tell you that, even after extensive security awareness training, human beings are still fallible and that some people still do very questionable things that continue to put organizations at risk, sometimes significant risk.  Even when you have the most diligent of employees, they still make mistakes in judgment from time to time.

Until the human element can be totally removed, there will always be a certain amount of risk that will never go away.  Again, the PCI SSC and card brands seem to not want to acknowledge the failings of the human element and appear to believe that technology is the savior based on the focus of the PCI standards.  However time and again, every security professional has seen very sophisticated security technologies circumvented by human error or just plain apathy towards security (i.e., “it always happens to someone else, not my organization” or “we’re too small to be a target”).

Until the PCI SSC and the card brands drop the “holier than thou” attitude toward the PCI standards and stop the public pillory of organizations that have been breached, there will continue to be editorial commentary regarding the pointlessness of the standards and ever more serious push back to complying with the standards.

These are the reasons why the PCI SSC and the card brands will be the ones that kill the PCI standards.  At the moment, they are so far removed from the process that they do not understand how complicated and expensive it has become, which is why merchants and service providers are complaining about the ever increasing costs and effort related to the PCI assessment process.

The PCI SSC and card brands also seem to have forgotten that QSACs have to make money doing these assessments and, when you pile on clarifications and edicts that do nothing to streamline and simplify the process, you are only driving the costs of the process higher.  And higher costs only make merchants and service providers, who are on thin margins to begin with, even more incentivized to use the much lower cost QSACs, driving the diligent QSACs out of the market and thus increasing the likelihood of breaches.

Again, it is not that I want the PCI standards to go away as I think they have brought a real benefit.  However, if these issues are not addressed, the PCI standards will end up going away.  I fear that, with them gone, there will be no carrot to ensure the security of cardholder information and we will end up back where we were before the PCI standards existed.

20 Mar 2011

PCI SSC Updates The ASV Program and Issues New Information Supplement

March 2011 has been a busy month thus far for the PCI SSC.  On Thursday, March 10, they announced a new ASV training program and on Friday, March 18, they released an Information Supplement on protecting telephone-based cardholder data.

The ASV training program blindsided the ASV community; it was a total surprise.  Yes, there has been talk over the years at the Community Meetings and in other venues regarding ASV qualifications and training, but nothing ever seemed to come of those discussions.

According to the press release, the ASV training program, “… is a direct response to the feedback we’ve received from the merchant community …”  One can only assume that the complaints that have been voiced about ASVs from merchants and service providers as well as the comments in the media have finally caught the attention of the PCI SSC and they are going to address the problem with training.

The consistent complaint I have heard from clients over the years was that no two ASVs ever scoped their network the same way for vulnerability scanning.  The next most common complaint was about differing vulnerability scanning results, most often voiced after a company changed ASVs.  While there are always some bad apples in the bunch, I do believe that most ASVs know what they are doing.

As a result of all of this, I am sure this new training requirement is to address the complaints and make the program better.  This training is in addition to the network security test ASV companies already had to pass.  For those of you that did not know, ASVs have to conduct network security scanning against a test network with predefined vulnerabilities operated and configured by the PCI SSC.  ASVs are expected to produce a sample ASV report and document all of the predefined vulnerabilities.

The ASV training comprises an eight hour online course and an examination.  A minimum of two employees are required to take the course and pass the examination.  Once this has been accomplished, the organization is designated as having Qualified ASV employees.  As with a lot of PCI requirements, there are some important dates involved.  ASVs renewing their status prior to June 1, 2011 need to have two personnel trained and through the examination by June 15, 2011.  ASVs that certify after June 1, 2011 need to have two staff trained and through the examination by their renewal date.

Given the additional cost of this new training plus the requirement to have a minimum of two people trained, it will be interesting to see if any of the existing 130 ASVs drop out because of this new requirement.

Telephony Cardholder Data

The other issuance of interest this month was in regards to telephone conversation recordings that contain cardholder data.  While this is a longer dissertation on FAQ 5362, there is really nothing new presented in this information supplement.  The bottom line is still that if you have call recordings that contain cardholder data, you are required to either: (a) not record it in the first place; (b) if recorded, redact it if possible; or (c) make sure that it cannot be searched, is encrypted and that access is restricted.  The best thing about this information supplement is the flowchart that was created for this situation.
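For option (b), redaction of a text transcript can be as simple as masking everything but the last four digits; a hedged Python sketch (the regex is illustrative, and redacting audio is a much harder problem).

    import re

    PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def redact(text: str) -> str:
        """Mask all but the last four digits of each candidate PAN."""
        def mask(match):
            digits = re.sub(r"\D", "", match.group())
            return "*" * (len(digits) - 4) + digits[-4:]
        return PAN_RE.sub(mask, text)

    print(redact("Caller gave card 4111 1111 1111 1111 at 14:02"))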

26 Feb 2011

If Not The PCI Standards, Then What?

I have just read a couple of articles as well as attended a couple of meetings where the topic du jour was the PCI standards.  They were a bash fest of the highest order.  Frustrated, I asked the participants at my last meeting, “If not the PCI standards, then what standard do you want to follow to ensure the security of cardholder data?”  Roaring silence.

This is the frustration that I and others have with people who complain about the PCI standards or any standards.  People complain and complain, yet they offer no solutions to address their complaints.  One thing I have always stressed with, and required of, people who work with me: if you are going to complain about something, you had better have an idea for a solution.  Constructive criticism is fine, but if you do not have any ideas on how to make things better, then all you are doing is whining.  Children whine; adults have solutions.

But then you have the complainers who do offer a solution, but that solution is to let the marketplace address the problem.  Hello!  How long was it going to take before merchants and service providers got a clue about securing cardholder data?  If it was such a priority, why did it take the card brands coming out with the standards to make it happen?  For merchants and service providers, cardholder data security was not a priority; it was some other merchant’s or service provider’s problem.

The other problem with the marketplace approach is that each organization learns from its own incidents and possibly from incidents suffered by their business partners, not from the incidents experienced by all.  Under the marketplace approach, security protection only improves as each individual institution suffers a particular incident.  As a result, organizations reinvent the wheel with the majority of incidents.

Standards allow organizations to learn from the collective experience of all organizations, not just their own organization.  For example, if your organization does not have wireless networking but decides to implement wireless, a standard provides a guideline as to how to implement wireless securely.  Without a standard, you are on your own to do the best you can.  On your own you will likely get some things right, but you will also get some things wrong.  It is those mistakes due to lack of experience that come back to bite organizations.  With a standard to follow, the chance of getting bitten after the fact is often greatly minimized.

However, standards are not a guarantee.  Going back to wireless, just look at how things went wrong with WEP.  WEP is a standard, and how to implement it, supposedly securely, was well documented.  WEP was also known to have potential security problems, but those problems were not widely publicized until organizations began to have security incidents.  So a stopgap standard called WPA was provided, which turned out to have its own security issues.  Ultimately, WPA was replaced by WPA2, the secure, permanent solution.

This is why early adopters of technology can end up getting burned.  When an organization decides to hop onboard the latest and greatest technology, there is a high risk that the security learning curve is not very far advanced.  As a result, the organization will be at a higher risk of suffering a security incident than an organization using a more tried and true approach.  As a new technology matures, its security posture typically matures with it, and the more mature the security posture, the lower the likelihood that a security incident will occur.  However, that security maturity can take quite a while to develop, and it is during that window that organizations are at the highest risk.

Unfortunately, in some instances, a new technology gets quickly usurped by an even newer technology and the original new technology never matures.  The bad news is that the early adopters get stuck with a solution that will never have its security shortcomings addressed, leaving the early adopters to either convert to the newer technology or find another alternative.  Many a career has been ended over such technology leap frogging events.

The PCI standards were not developed in a vacuum.  They are a consolidation of a lot of other security standards and guidance gained through root cause analysis of security incidents gathered over the years with the express purpose of protecting cardholder data.  If you follow another security standard such as ISO 27K or FISMA, a lot of what is in those standards is also in the PCI standards.  But there are also a lot of requirements in the PCI standards that are not in other standards as well.

The bottom line is if you do not like the PCI standards, then get involved in the process to make things better and stop whining.

09 Feb 2011

The “Magic” Vulnerability – Revised

What started this post is that I have recently received a number of calls and messages from clients and colleagues.  The conversations have all gone basically the same way.  They were calling to tell me that their ASV had failed their vulnerability scan because the OS detected was unsupported, and they were wondering whether I had encountered this before.

My first question usually was along the lines of, “So, what vulnerabilities did they detect?”

“None,” was the confused answer at the other end of the line.

“What?  They must have detected at least one high, severe or critical vulnerability?  That is the only way you can fail,” I would ask, now also confused.

“Nope.  Nothing.  Just the fact that the OS is unsupported,” I was told.

Do not get me wrong.  I am not advocating the use of unsupported operating systems, particularly unsupported versions of Windows.  The risk, of course, is that one or more vulnerabilities show up that the vendor will not fix because the OS is no longer supported.  So there is good reason to avoid this situation.  However, there are also situations where you just have no other choice, whether due to your own organization’s issues and politics or to software vendor issues.

This situation got me thinking and doing some research, since I did not remember ever seeing or being told that an unsupported OS was an automatic vulnerability scan failure.  I no longer do external vulnerability scanning, so my recollections of training and working on the ASV side of our business are a bit fuzzy and very rusty.  However, I had never failed a client for an unsupported OS.  So when this issue came up, my only recourse was to determine what had changed.

The first thing I did was review the latest version of the PCI ASV Scanning Procedures, v1.1.  I searched for terms such as ‘old’, ‘unsupported’, ‘out of date’, ‘OS’ and ‘operating system’.  No matches.  So there is nothing in the ASV scanning procedures that fails an organization for running an unsupported OS.  Even the PCI DSS does not call out unsupported software, so, procedurally, I was thinking there was nothing explicit about unsupported OSes causing a failed vulnerability scan.

Then, after I made the original posting, I got a comment from one of my readers pointing me to the ASV Program Guide.  Lo and behold, at the top of page 16 is the following:

“The ASV scan solution must be able to verify that the operating system is patched for these known exploits. The ASV scanning solution must also be able to determine the version of the operating system and whether it is an older version no longer supported by the vendor, in which case it must be marked as an automatic failure by the ASV. “

So there is no “magic” vulnerability I was missing as the PCI SSC does specify that a scan automatically fails if the OS is unsupported.

But that is not the entire story.  The key to this whole process is that the vulnerability scanner used must be able to verify the operating system.  While all vulnerability scanners attempt to identify the operating system, the reliability of this identification process is suspect at best.  I am not aware of any vendor of security testing tools that claims its tools will identify an operating system 100% of the time.  That is because many things the tools cannot control can influence the OS signature and therefore greatly affect the tool’s ability to identify the OS, particularly with external scanning.  And if an organization follows OS security hardening guidelines, a lot of unsupported OSes will not be properly or reliably identified by vulnerability scanners.  As a result, I find it hard to believe that the PCI SSC intended to have ASVs rely only on the results of a vulnerability scanner, but that seems to be the case.

So with this clarification, I contacted our ASV personnel, and they told me that they too have been failing vulnerability scans when they run across unsupported operating systems.  I asked: if the OS signature is inconclusive, is there no failure?  Yes; if the scan comes back without identifying the OS, they have nothing to go on to fail the scan, and the scan passes.  Given the difficulties vulnerability scanners can have identifying target operating systems, such as when scanning through network firewalls, Web application firewalls, load balancers and the like, I then asked whether they felt these identifications were reliable enough to fail a scan.  I was told this is why they confirm the information with the client before issuing the report, so that the report is accurate.  So if a client is not honest, they could influence the results of their scan?  I was reluctantly told that is probably true.

Then there is the issue that not all operating systems are created equal.  Operating systems such as MVS, VMS and MCP are nowhere near as risky, if they are even risky to begin with, as Windows and Linux.  A lot of ASVs would argue that they never come across these operating systems running Web services.  However, all of them are capable of running Web services, and I personally know of a number of organizations that run their Web services from such environments.  Organizations run these older operating systems mostly because of the financial considerations of migrating to something else.  However, I can guarantee that none of the dozens of vulnerability scanners I have used in the last 10 years would accurately identify any of these operating systems, let alone tell you the version, unless some service message header information was retrieved by the tools.  And even then, most tools do not parse the header to determine the OS, so it would take human intervention to make that determination.
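To illustrate how thin the evidence can be, here is a hedged sketch of the kind of banner parsing involved, assuming Python.  The hint table is made up and vastly simpler than a real scanner’s fingerprint database, which is exactly the point: hardened or unusual systems come back inconclusive.

    import urllib.request

    # Illustrative banner-to-OS hints; real scanners use far richer fingerprints.
    OS_HINTS = {
        "Win32": "Windows",
        "Ubuntu": "Linux (Ubuntu)",
        "Red Hat": "Linux (Red Hat)",
    }

    def guess_os(url: str) -> str:
        with urllib.request.urlopen(url, timeout=10) as resp:
            banner = resp.headers.get("Server", "")
        for token, os_name in OS_HINTS.items():
            if token in banner:
                return f"{os_name} (from banner {banner!r})"
        return f"inconclusive (banner {banner!r})"  # the common case after hardening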

Regardless of the failure, most ASVs do have a review or appeal process that allows organizations to dispute findings and to submit compensating controls to address any failures.  So for organizations that cannot get rid of unsupported OSes, they can use a compensating control.  Like compensating controls for the PCI DSS, the organization is responsible for writing the compensating control and the ASV needs to assess the compensating control to ensure that it will address the issues identified by the vulnerability scan.

So, if you can fail an organization over an unsupported OS, why is it that you do not automatically fail on unsupported application software?  I went through the Program Guide and there are all sorts of other criteria for applications, but nothing regarding what to do if they too are unsupported.  Applications such as IBM WebSphere and Oracle Commerce can become unsupported just as easily as their OS brethren.  And in my experience, use of unsupported application software is even more prevalent than unsupported OSes, under the idea that if it is not broken and does not have vulnerabilities, why upgrade?  When I asked our ASV group if they fail organizations on unsupported applications, I got silence and then the response that they will fail an application if the vulnerability scanner reports a high, severe or critical vulnerability.  To tell the truth, while vulnerability scanners regularly return text header information for a lot of applications, I would be hard pressed, without doing a lot of research, to find out whether the version being reported was unsupported.  However, scanners could provide this feedback if they were programmed to do so.
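If scanners were programmed to provide that feedback, the logic might look something like the sketch below: parse the product and version out of the banner and compare them against a vendor lifecycle table.  Both the parsing and the end-of-life data here are hypothetical stand-ins; a real check would need maintained vendor lifecycle feeds.

    import re

    # Hypothetical end-of-support data; pretend anything older is unsupported.
    EOL_BEFORE = {
        "Apache": (2, 4),
        "nginx": (1, 20),
    }

    def unsupported(banner: str):
        """True/False if determinable from the banner, None if inconclusive."""
        m = re.match(r"([A-Za-z-]+)/(\d+)\.(\d+)", banner)
        if not m:
            return None  # inconclusive, the same problem scanners have with OSes
        product = m.group(1)
        version = (int(m.group(2)), int(m.group(3)))
        if product not in EOL_BEFORE:
            return None
        return version < EOL_BEFORE[product]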

Then there are all of the conspiracy theories that say the PCI SSC and technology companies are working together to drive software sales by forcing organizations to upgrade, and there is a lot of anecdotal evidence that would seem to support this argument.  In reality, it is not that software companies are colluding with regulators such as the PCI SSC so much as that software companies operate this way in order to focus development and support resources on fewer, more current versions.  It is just happenstance that regulations end up forcing organizations to update their software.

The bottom line in all of this is that you have options to avoid a failing vulnerability scan because of an unsupported OS.  The best method, and the one I most recommend, is do not use unsupported operating systems in the first place.  However, as a former CIO, I do understand the real world and the issues IT departments face.  As a result, I recommend all of the following which may or may not require you to develop a compensating control.

  • Implement not only a network firewall, but also a Web application firewall (WAF) and make sure that the rules are extremely restrictive for servers running unsupported operating systems.
  • Configure your firewalls to block the broadcasting of any OS signature information.  Masking the OS signature provides the benefit of not advertising to the world that the OS running whatever application is unsupported.  This is not a perfect solution as, 9 times out of 10, the application itself will likely advertise the fact that the underlying OS is unsupported (a quick way to check what your site still advertises is sketched after this list).  It is very important to note that this is only a stop gap measure and you should still be actively migrating to a supported OS.
  • Implement real-time monitoring of firewalls, servers and applications.  Define very specific alerting criteria to ensure that any suspicious activity is immediately reported, and have operations personnel immediately follow up on any alerts to determine whether they are false positives.
  • Implement a host-based intrusion detection/prevention solution on any servers that run the unsupported OS.  If using a HIPS solution, you may also want to consider using its preventative capabilities for certain critical incidents.
  • Implement real-time log analysis for firewalls, servers and applications.  Define very specific alerting criteria to ensure that any suspicious activity is immediately reported, and have operations personnel immediately follow up on any alerts to determine whether they are false positives.
  • Actively use your incident response procedures to address any incidents that are identified with any unsupported OS.
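As referenced in the second item above, here is a quick hedged check, assuming Python and a hypothetical URL, of what your site still advertises; if it returns a product and version string, the masking is not doing its job.

    import urllib.request

    def server_banner(url: str):
        """Return the Server header a site advertises, or None if suppressed."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.headers.get("Server")

    print(server_banner("https://shop.example.com/"))  # hypothetical URL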


