

PCI Compliance Certificates Rear Their Ugly Head Again

Apparently, a bad practice started a number of years ago is appearing in other parts of the world.  That practice is PCI Compliance Certificates.

I wrote a post a number of years ago about this practice and provided the direct quote from the PCI SSC’s FAQ on the subject.  If you need more proof, go to the PCI SSC Web site and click on FAQ and search for ‘PCI DSS Compliance Certificate’.

This is a marketing ploy and it needs to stop.

These certificates are not worth the paper they are printed on, and anyone claiming they have meaning is uninformed or, worse, lying.

I would highly recommend that if you encounter anyone who tells you such nonsense, you report them immediately to the PCI SSC – qsa AT pcisecuritystandards DOT org.  Include their name and the name of their organization in your message.

UPDATE: Only a few minutes after I put up this post I received just such a certificate from a major bank as proof that their business partner was PCI compliant. Unbelievable.


Celebrating Five Years

Wow! Time really does fly when you are having fun.

Believe it or not, the PCI Guru has been doing this for five years.

Thank you to my readers for continuing to support the blog and I look forward to serving you with even more blog entries in the future.


Atom AMPD And PCI Compliance

Here is a relatively new player in the network security space for small and mid-sized businesses (SMBs).  A friend of mine who does a lot of work with SMBs is encountering this solution more and more.  And is it any wonder, when it is portrayed as a godsend for SMBs?  On the Atom AMPD Web site, they explain their Kwick Key solution.

“Kwick Key™, a USB flash drive, is the delivery device behind the AtomOS.  The Kwick Key is bootable and plugs into nearly all servers.  Kwick Key users experience a significant savings achieved with a high quality “one key” solution for their networking needs.

Simply install the Kwick Key into an internet-connected server, display the web interface, configure the features and you’re done.  The server is transformed into a multi-functional networking and communication device.

The underlying operating system behind the Kwick Key AtomOS is Linux. The content stored on the server is also backed up on the Kwick Key.  Once configured, the Kwick Key can be transferred to new equipment while maintaining its configuration, providing portability in the event of equipment failure.  A redundant option is also available.”

What is wrong with this picture?

If you said, “Too good to be true,” you would be correct.  There are no silver bullet solutions to security.  However, these sorts of “all in one” security solutions are marketed to SMBs all of the time as a cost-saving way to be secure.  And since SMBs do not typically have significant IT personnel, they are always looking for ways to reduce IT workload and save money.  However, if you need to be PCI compliant, this is not a solution for your organization.  Why?

If you read the Savings page on their Web site, they state:

“Your current IT infrastructure is likely requiring multiple boxes to serve your network and communication needs.  This likely includes multiple boxes supporting firewalls, content filters, routing and VoIP applications; each requiring individual training, maintenance, and ongoing licensing fees.  The AtomOS provides just one platform, one interface, one operating system. It brings to bear the BEST practices via a convergent technology.  All modules are tied together by our proprietary user interface.”

That “all in one” solution approach violates PCI DSS requirement 2.2.1 which states:

“Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)”

The reason for requirement 2.2.1 is to leverage the concept of “defense in depth”.  Defense in depth relies on multiple layers of defense such that, if one layer develops a vulnerability, the other layers can still provide some security and mitigate the vulnerability until it is fixed.  Under the Atom solution, a vulnerability anywhere potentially puts the entire solution at risk because of one part’s failure.

As a result, being PCI compliant will require you to purchase multiple Kwick Keys.  I would assume that multiple keys result in costs that negate Atom’s cost advantage over other PCI compliant solutions.

Then go to the solution’s product page for Kwick Key.  Take a look at all of the firewall features that are available.  It looks pretty good until you realize there is one notable feature missing – stateful packet inspection (SPI).  Basically, Atom has implemented port filtering, which comes standard on Linux distributions.  That is not necessarily insecure, but it does not comply with requirement 1.3.6, which explicitly requires that SPI be implemented.

There are ways to add SPI to this solution.  However, that will mean you will have to support it yourself and the whole point of the Atom solution is to get out from under supporting such a solution for your organization.
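For illustration, the gap between the two approaches can be sketched in a few lines of Python (a toy model only, not a real firewall; real SPI lives in the kernel, and all addresses and ports below are made up):

```python
# Toy illustration of port filtering vs. stateful packet inspection (SPI).

ALLOWED_PORTS = {80, 443}

def stateless_filter(packet):
    # Port filtering judges each packet in isolation: any packet aimed
    # at an allowed port gets through, solicited or not.
    return packet["dst_port"] in ALLOWED_PORTS

class StatefulFilter:
    # SPI remembers outbound connections and only admits inbound
    # traffic that belongs to a session an inside host initiated.
    def __init__(self):
        self.sessions = set()

    def outbound(self, src, dst, dst_port):
        self.sessions.add((dst, src, dst_port))  # remember the flow

    def inbound(self, packet):
        key = (packet["src"], packet["dst"], packet["src_port"])
        return key in self.sessions

fw = StatefulFilter()
fw.outbound("10.0.0.5", "203.0.113.9", 443)

# A genuine reply to the session we opened is admitted...
reply = {"src": "203.0.113.9", "dst": "10.0.0.5", "src_port": 443}
# ...while an unsolicited packet from elsewhere is rejected by SPI,
# yet would sail through a stateless filter that simply allows port 443.
forged = {"src": "198.51.100.7", "dst": "10.0.0.5",
          "src_port": 443, "dst_port": 443}
```

The forged packet is the point: a plain port filter has no way to tell a reply from an attack, which is why requirement 1.3.6 insists on stateful inspection.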

My assumption is that with an appropriate wireless adapter in the system running Kwick Key that the solution will serve as a wireless access point.  Under requirement 1.2.3, wireless is required to be segregated from an organization’s cardholder data environment (CDE) by a firewall.  Given that the wireless is operating on the same device, it is questionable if compliance with this requirement could be truly accomplished.

The same concerns with wireless would exist with the virtual private network (VPN) solution.  Having the remote access to the internal network also running on the same system is not a best practice.  And how secure such a situation would be on this device is questionable.

You need to remember, this is not a purpose built networking device, this is a repurposed computer running Linux.  It is potentially susceptible to any number of Linux-based attacks and vulnerabilities depending on the services running.  And the more services you pile onto this device, the more potential for vulnerabilities.

Then there is the ability to add a voice over IP (VoIP) call manager solution.  Seriously?  What a silly and very dangerous idea.  Why?  VoIP protocols are primarily stateless (i.e., UDP), which means they cannot be protected by today’s firewall technology, which only works with stateful protocols (i.e., TCP).  I have actually had vendors correct me on this because VoIP call set up (pick up the handset) and tear down (hang up the handset) are conducted using TCP.  What these folks always miss is that the actual conversation is conducted over UDP so that it can be streamed between the phones in use, and the conversation is the bulk of the activity in a telephone call.  And it is not just one or a few UDP ports that must be open; it is typically a range of thousands of UDP ports that are open to support telephony.  Talk about a target rich environment.

Adding a VoIP call manager on top of your firewall is probably the most dangerous thing an organization could do because VoIP is so easy to attack due to the stateless nature of its protocols.  By implementing VoIP on a firewall you are essentially negating the firewall.  Running VoIP on anything but its own dedicated server on its own dedicated network is the only way VoIP should be configured for security, regardless of a need to be PCI compliant.
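To put a rough number on “thousands of UDP ports”: some PBX platforms default to an RTP media range of roughly 16384 through 32767 (illustrative figures only; check your own phone system’s documentation):

```python
# Illustrative only: a commonly cited default RTP/UDP media port range.
rtp_low, rtp_high = 16384, 32767
open_udp_ports = rtp_high - rtp_low + 1
print(open_udp_ports)  # 16384 UDP ports left open just for voice streams
```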

Finally, there is no pricing provided for the USB “key”.  I always get concerned about “wonder” solutions that do not provide pricing without contacting the vendor’s sales operation.  Nine times out of ten, all this does is subject potential customers to relentless contact from sales people until they purchase the solution, which is likely overpriced.

This post is not to say that this solution is not appropriate for other organizations.  However, if you need to be PCI compliant, this solution is not for your organization if it is implemented as the vendor describes.


Remembering Walt Conway

It is with deep regret that I report the passing of my friend, Walt Conway, on Tuesday, June 26, 2013.

Walt and I met at the first PCI Community Meeting in Toronto back in the fall of 2007.  We both attended the infamous card brand session on pre-authorization data at that meeting.  We bantered back and forth periodically about what the ‘take aways’ from that session were.  For the record, I do not think we ever completely agreed on our interpretations of the ‘take aways’.  However, we did agree that pre-authorization data was to be protected just the same as post-authorization data.

Over the years Walt would contact me about mistakes, misinterpretations or misunderstandings regarding postings.  A number of times, I sent Walt the posting and he would edit it so that he felt it better represented the PCI SSC’s interpretation.  I always appreciated his help in that regard because the point of this blog is to provide people with accurate information.

Walt took over as the PCI commentator at StoreFrontBackTalk after the sudden passing of David Taylor a number of years back.  I always enjoyed Walt’s columns and that is why I posted the link on my blog to StoreFrontBackTalk.  While a number of times we covered the same topics, I always appreciated Walt’s take on those same topics.

Walt’s family has requested that donations be made to the Walter T. Conway, Jr. Fund at Episcopal Community Services.  You can make a donation online or by mail at Episcopal Community Services, 165 Eighth Street, 3rd Floor, San Francisco, CA 94103, directed to the Skills Center/CHEFS program.

Rest in peace my friend.  I will miss you.


What If?

Here is a thought provoking question that was posed to me recently by a former accomplice in the PCI world.

What if PCI DSS assessments were only required until a merchant proved they were PCI compliant or if a merchant had been breached?

The premise behind this idea is simple.  There are going to be merchants that get information security and there are those merchants that are never going to get information security no matter the carrots or sticks employed.  Not that merchants that get information security cannot be breached, but the likelihood is significantly lower than merchants that do not get information security.

Merchants would go through the PCI DSS assessment process, clean up their act and ensure they are compliant, and then they would only have to go back through the process if they were breached.  As a best practice, merchants could choose to assess themselves after significant changes to their cardholder data environments to make sure no new security issues had been created, or at intervals of, say, three to five years.

In the event that a merchant were breached, the PCI assessment process would be required annually for the next three years or until all of the card brands involved agreed to drop the reporting requirement, whichever comes first.  For Level 1 merchants, they would go through the Report On Compliance (ROC) process performed by a QSA or an ISA.  For all other merchant levels, they could use the appropriate self-assessment questionnaire (SAQ) but that SAQ would have to be reviewed and signed off by a QSA.

For high risk organizations that process, store or transmit large volumes of cardholder data such as processors or service providers that do transaction reporting, statement rendering or other similar services, they would still have to go through the ROC or SAQ D as they do today based on the levels defined by the card brands.

For service providers such as managed security service providers (MSSP) or cloud service providers (CSP), they would be required to go through an annual SAQ D, at a minimum, or a ROC, if they desired, for all services that are required to be PCI compliant.  The ROC would have to be prepared by a QSA or ISA and the SAQ D would have to be reviewed and signed off by a QSA.

As with merchants, if a service provider suffers a breach, all bets are off and they must have a ROC done by a QSA or ISA for the next three years or until the card brands tell them to stop.

Visa and MasterCard currently maintain lists of PCI compliant service providers.  Service providers pay to be listed on those lists and the qualifications to get on those lists would also not change.

Regardless of whether an organization is a merchant or service provider, the quarterly external and internal vulnerability scanning and annual external and internal penetration testing requirements would remain the same.  Merchants would be required to file their results with the merchants’ acquiring bank or processor(s).  For service providers, their scanning and penetration testing results would be filed with the relevant card brands.  The scanning and penetration testing just help to keep everyone honest in their efforts to maintain their security.

I have to say, it sounds like a rational process to me if you accept the original premise that organizations will either do what it takes to be secure or will not.  Thoughts?


Security And Compliance

I have written a lot about this topic over the years and was recently reviewing my Compliance Is Not Security – Busted! post and the comments that came in regarding it.

A theme of a number of the comments was that compliance does not equal security.  DUH!

I have never once said or even implied that compliance equaled security as – yes, here it comes – security is not perfect!  However, if you are complying with any security program/framework such as the PCI DSS, ISO 27K, etc., then you are likely more secure than those who are not.

Security technology such as firewalls, routers, servers, applications, etc. can all be set up with rules that are complied with 100% of the time, day in and day out, no exceptions.  The problem comes down to people who are fallible.  Their compliance is never 100% and you are probably lucky to have anyone above 90%, no matter how much security awareness training you do.  As a result, in organizations that are truly complying with the PCI standards, this is where the security breach starts, with people for one reason or another.

No, I am not necessarily talking about social engineering, although social engineering is growing precisely because organizations have invested a lot in security technologies while people remain fallible.  People can be the root cause for any or all of the following reasons.

  • How dare you do that to me!  This is the most obvious of the people issues that comes to mind.  Face it, when backed into a corner, people lash out just like a trapped animal.  The supposedly wronged party wants their proverbial “pound of flesh.”  They get that pound of flesh by hurting the organization that has just hurt them.  This can range from taking office supplies to downloading databases to a USB drive as they empty their desk.  Obviously, a database, network or system administrator’s access is much different than a clerk’s.  However, if your security is minimal on the inside, as it is in most organizations, the clerk may actually have better access to sensitive information than the administrators.  Such a situation may not be the fault of the administrators; that old version of POS or ERP may not have the ability to be more granular regarding access to information.
  • Inundated with alerts and unable to distinguish real alerts from false positives.  This typically occurs when an automated tool is implemented but never tuned to the organization’s environment.  In this sort of environment, finding real alerts can be like finding a needle in a haystack when thousands of alerts an hour are scrolling by on the screen.  This usually makes management wonder why the tool was needed in the first place.
  • Saw an alert and ignored it.  We see this most often coupled with the aforementioned inundation issue.  The other most common version of this issue is with internally used SSL certificates that were generated incorrectly or use a default certificate supplied by the application.  Users see the “There is a problem with this Website’s security certificate” or similar error message in their browsers whenever these flawed certificates are encountered.  Over time, they become conditioned to ignore all of these sorts of messages, including those for malware-infected Web sites and, surprise, you have been compromised.  I have lost count of how many people have said to me, “We just ignore those alerts because we know they are false positives.”
  • Saw the alert but got side tracked and never came back to it.  This is a problem we see all of the time.  For example, the person that monitors the network is also the person that manages the network and configures the network.  An alert comes in and the person begins a root cause analysis (RCA) only to get pulled away because a remote facility is offline.  The offline issue gets resolved, but other issues come up as well as meetings and telephone calls and the person never gets back to the RCA for the alert because there is no “tickler” to remind them to go back and complete the RCA.  In the meantime, the attacker has gained their beachhead and is probing the network for whatever value it may contain.
  • Just did not put together all of the pieces to know they were compromised.  As with the intelligence failures leading up to 9/11, most organizations do not correlate all of the potential incidents occurring in their networks and therefore do not realize that there is an active effort to compromise their network, or that they have already been compromised, until well after the incident has caused damage.  The reason this is important is that once an attacker is inside your organization’s security perimeter, it is typically game over because there are few controls to prevent access and identify that data is being taken.
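The “tickler” missing from the side-tracked RCA scenario above does not require anything fancy.  Here is a minimal, hypothetical sketch of open RCA items that age out and escalate instead of silently falling through the cracks (the alert ID and the four-hour SLA are invented):

```python
import heapq

class Tickler:
    """Track open root cause analyses; overdue items escalate a level."""
    def __init__(self):
        self.items = []  # heap of (due_hour, alert_id, escalation_level)

    def open_rca(self, alert_id, now, sla_hours=4):
        heapq.heappush(self.items, (now + sla_hours, alert_id, 0))

    def overdue(self, now):
        # Pop everything past due, bump its escalation level,
        # and re-arm it so it keeps nagging until someone closes it.
        escalated = []
        while self.items and self.items[0][0] <= now:
            due, alert_id, level = heapq.heappop(self.items)
            escalated.append((alert_id, level + 1))
            heapq.heappush(self.items, (due + 4, alert_id, level + 1))
        return escalated

t = Tickler()
t.open_rca("IDS-4711", now=0)  # RCA opened at hour zero
print(t.overdue(now=5))        # [('IDS-4711', 1)] -- escalated one level
```

The point is simply that the reminder and the escalation are automatic; the analyst who got pulled away no longer has to remember to come back.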

If you have read the Verizon Business Services Data Breach Investigations Reports (DBIR) over the years, you know how the bulk of attacks get inside: they are the result of people.  For the last two years, the DBIR has used the VERIS Event Threat Grid to show how breaches occur.  Across the top of the grid are the categories: Malware, Hacking, Social, Misuse, Physical, Error and Environmental.  The Social, Misuse and Error categories imply mistakes or deliberate acts of people.  If you read the definitions on the VERIS Web site, Malware is also very people centric, as is Hacking.  Surprising to some will be that the Physical and Environmental categories also contain a good number of people errors.  Based on just a quick read, about 60% to 70% of all of the incidents categorized by VERIS have some form of people error component.

Since we are not going to get rid of people in our organizations any time soon, what are you to do?

  • Admit that people are the problem and focus your security measures accordingly.  Every 12 step program says the first step is to admit the problem which, in this case, is that people are fallible.  As a result, we need to construct our security measures such that this fallibility is minimized as much as possible.  One of the best solutions is to integrate alerts into your help desk or change management system so that a ticket is generated.  Those tickets need to have an escalation process behind them so that if they are not investigated within a period of time, they are bumped up to the next higher rung of management and that escalation continues until the tickets are finally addressed.  This way there is visibility for the alerts should they slip through the cracks.  As a side benefit of this approach, you gain statistics to reinforce why you need more staff and/or more/better tools.
  • Strengthen your internal security measures.  As things stand, once inside most organizations’ security perimeters, there is very little that stands in the way of an experienced attacker getting the data they desire.  Regardless of whether it is an insider attack or an attacker has managed to get inside, there is already justification for organizations to beef up their internal security measures.  To address this problem, I would recommend the security architectures documented in my Fort Knox approach, Forrester’s Zero Trust Model or McGladrey’s Ultra Secure Network.  But most organizations do not have the infrastructure architecture, the application architecture or even the will to take such approaches.  That does not excuse an organization from doing nothing, however.  If anything, most organizations could vastly improve the monitoring they do on their internal networks.  Monitoring needs to be coupled with reducing the total number of ports that are open between network segments.  Most internal networks do a terrible job of this for a variety of reasons, including application people who cannot tell which ports need to be open and so avoid operational issues by just leaving everything open.  Another area of improvement is reviewing user access rights on all systems and applications, not just those in scope for PCI compliance.
  • Constantly tune your alerting system(s).  Just as attack methods are not static, neither are networks, systems and applications.  Changes are occurring all of the time in an organization’s IT environment, yet if you ask the people running the SIEM about changes, nine times out of ten nothing seems to be changing other than requests to look for a new signature or anomaly.  There is a belief in the SIEM user community that a SIEM’s update process makes the necessary changes to the policies that ship with the SIEM, and to a certain extent SIEM solutions are similar to anti-virus and anti-malware solutions in that regard.  However, because a SIEM monitors log data, and the log data provided varies greatly from organization to organization, each organization needs to periodically review and adjust its alerting criteria to make sure they reflect the organization’s operating environment and not just some template from the SIEM vendor.  If an organization is not reviewing its SIEM alerting rules against the changes made, at least quarterly, then it is highly likely that the SIEM is not alerting properly.
  • Establish separate consoles from your SIEM for network, system, security and application administrators.  What a network administrator is looking for is vastly different from what an application administrator is looking for, as is what any particular group might want to generate an alert.  As a result, having only one console is really silly and non-productive.  Yet time and again we see SIEM implementations with just that: one console, with everyone driven by email or SMS alerts.  The people alerted then have to get to the SIEM to find out what exactly triggered the alert and then determine what to do about it.  Having your own console view simplifies things by listing only that viewer’s alerts and no one else’s.  This allows people to focus on their own problems and not the whole organization’s problems.  The idea behind the single console is that if everyone knows what is going on overall, then correlation will occur because everyone sees everything.  While you would think that would be the case, in reality people just want to fix their own problem and move on.  Which leads to my last point.
  • Watch the overall alerting picture so that correlations can be made.  According to most sources, today’s attacks are becoming more sophisticated and multi-pronged in their approach.  For example, while most DDoS attacks are just meant to be a pain in the posterior to the target and disrupt access to the target’s Web site, there are DDoS attacks that are used as cover so that people inside are blinded to the real attack(s).  Whether or not the DDoS was a decoy depends on what other events or incidents occurred during the DDoS attack, if your alerting system did its work.  Higher end SIEM solutions can provide basic correlation rules, but most SIEM solutions require the end user to develop those correlation rules.  It is these correlation rules that help organizations identify these more sophisticated attacks.  That said, these correlation rules do not have to be very sophisticated.  For example, during a DDoS attack, you really only need to look for malware attacks, failed authentication attempts and other anomalies that would likely indicate the DDoS attack is being used to mask the real attack.
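A DDoS decoy rule really can be that simple.  Here is a hypothetical sketch (the event format and the threshold of ten failed logins are made up for illustration):

```python
def ddos_decoy_suspected(events, failed_login_baseline=10):
    """Flag a DDoS as a possible decoy when failed logins spike during it."""
    ddos_active = any(e["type"] == "ddos" for e in events)
    failed_logins = sum(1 for e in events if e["type"] == "failed_login")
    return ddos_active and failed_logins > failed_login_baseline

# One alerting window: a DDoS in progress plus a burst of failed logins.
window = [{"type": "ddos"}] + [{"type": "failed_login"}] * 40
print(ddos_decoy_suspected(window))  # True -- worth a much closer look
```

In a real SIEM the baseline would come from the organization’s own history, not a hard-coded number, but the shape of the rule is the same.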

Is all of this going to address your security issues?  Sorry, not a chance.  None of the above stops all breaches; it merely minimizes the chance that a breach goes on for months or years.  Hopefully it shortens a breach to weeks, days, maybe even hours in some cases, but it will never totally eliminate them.  Security is not perfect.

There is a side benefit to all of this: it will assist you in doing RCA.  RCA is very effective in getting rid of those nagging operational issues that occur from time to time and mess up the delivery of your organization’s goods and services.  All of the information you collect for security purposes can also be used to find the needle in the haystack that is causing a database to corrupt, a network connection to drop or a server to fail, because now you have information on what was going on leading up to the problem.

The reason an organization is not secure is that there are so many areas needing improvement that the full control triad no longer functions, and holes exist that allow an attacker to operate without the organization’s knowledge.  Until the controls are implemented and operating properly, it will be impossible to determine whether the organization is secure.  The recommendations I have made will hopefully give you a better picture of what you face so you can react to issues that need attention before your organization is the next one breached.


What To Focus On In 2013

It is the end of the year and, like all other pundits, here is another idea on what 2013 will bring in the way of security issues.  After reading a lot of the other predictions out there, I tend to agree with those from Verizon Business Services’ Data Breach Investigation Report researchers.  While everyone else is predicting cyber-Armageddon as the biggest threat, the researchers at Verizon Business Services see a lot more of the same for 2013.

The biggest threat Verizon identifies is more attacks on authentication systems.  This is most likely because your vendor or your developers talked you into storing authentication information in a database that is Internet facing.  We see this all of the time with eCommerce and Internet banking solutions.  The external user credentials end up being stored in the database along with order entry, inventory and pricing data.  This is typically done because using a directory system for such purposes is difficult and, at times, not as functional as when authentication data is stored in and used from a database.  Given the prevalence of SQL attacks, all of that information ends up available for the taking through a SQL injection attack.  As a result, the attackers compromise the authentication system, gain access to everyone’s credentials, including administrators’, and it is likely ‘game over’ for the rest of your security measures.
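The mechanics of such a SQL injection attack are easy to demonstrate with an in-memory SQLite database (a sketch only; the table, column names and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '5f4dcc3b...')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenating input lets the attacker rewrite the query,
# dumping every credential row in the table.
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safer: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(leaked), len(safe))  # 1 0
```

The concatenated version matches every row; the parameterized version matches none, because the attacker’s string is compared literally instead of being executed.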

I want to touch next on social engineering because it is typically directly related to compromising authentication systems even though the second place attack method most concerning Verizon researchers is application attacks.  Social engineering is all about tricking your end users into giving up key information so that an attacker can compromise your environment.  The most common piece of information an attacker tries to obtain is an end user’s credentials for logging onto the network.  Hence the reason why I wanted to discuss this after the authentication system attacks.  Social engineering is the most insidious of attack methods because it does not involve any of an organization’s security technology.  And worst of all, if social engineering is successful, all or most of your organization’s security technology is effectively neutralized as a result.  That is because most organizations have little or no security once someone is on the inside.

Now let us look at the second most concerning attack to Verizon, which is application attacks.  Verizon is saying that application attacks are more of a threat to governments and large applications.  Regardless of the target, any organization with an application presence on the Internet is a potential target, such as with eCommerce or Internet banking.  A lot of these applications are based on online frameworks such as IBM’s WebSphere or Oracle’s Application Framework.  It is not that these frameworks are insecure; it is that they still require development effort, and those custom development efforts do not guarantee a secure application.  The problem comes from the fact that a lot of people believe that using a framework means little to no security testing is needed, even though the amount of custom development done in these frameworks can be more extensive than starting from scratch.  As a result, we see a lot of organizations tossing Internet applications into production with little or no security testing and then ending up with breaches.

In addition, there are third party applications served up by application service providers (ASP).  A lot of small and mid-sized businesses (SMB) use these sorts of applications to have an online presence.  As a result, a lot of SMBs believe that these solutions do not require any security testing because the vendor and the ASP do that for them.  However, we are encountering more and more attacks on SMBs, particularly those with wealthy clientele such as country clubs and exclusive financial institutions, because their applications are notorious for not being secure.  SMBs are constantly amazed that: (1) they were targeted and (2) the application was not better secured.  Yet attackers know that while the take from an SMB could be significantly less than from a large organization, an SMB is usually easier to compromise because it does not have the security and monitoring resources of a large organization.

Finally, something that concerns me as much as the previously discussed threats is mobile devices and devices not under the organization’s control, also known as ‘bring your own device’ or BYOD.  I think these devices will surpass the other three threats over the next few years because most organizations have difficulty maintaining security on their servers, desktops and notebooks, let alone something like an iPhone or an Android tablet.  The worst thing about mobile devices is that they are so easily lost; it fascinates me how many people lose their mobile devices.  The bottom line about mobile devices and BYOD is that you must be very, very careful as to what you allow these devices to access and how you grant that access.  And you must make sure that these devices are not allowed to download information, even if encrypted, as that information is highly likely to be lost.

So what should you be doing regarding these threats?  Here are my top things organizations should be doing to minimize the risks presented by these threats.

  • Trust no one.  This is particularly true of mobile devices and BYOD, but it also applies to your own internal systems.  Forrester has promoted this in their ‘Zero Trust Model’, as have I in the ‘Fort Knox Approach’.  This is not as easy as one might think, but the approach makes sense in these days of attacks on authentication systems and social engineering.
  • Classify your data.  This is usually a difficult project, but it pays dividends in the end because everyone understands why certain information cannot be allowed out of the control of the organization.  It also allows people to justify to others why data cannot be accessed by people who do not need access, as well as why it cannot be accessed via mobile devices or BYOD.
  • Require encryption on mobile devices and BYOD.  Even if you do not allow data to end up on these devices, you do not want memory contents or other information that might be inadvertently stored on them to fall out of your control.  If you encrypt these devices, there is a high likelihood that, if they are lost, the person finding them will just wipe them and start over.
  • When possible, use a directory system for authentication.  This is always painful for systems that operate outside of the traditional control environment of internal users.  Directory systems are usually designed to be more secure than any sort of database authentication system because they are assumed to be at risk from the start.  However, just because they are designed to be secure does not mean they cannot be implemented in an insecure manner.  Windows Active Directory takes a lot of heat for being insecure; however, much of that heat is due to silly implementations that enable insecure authentication methods for backward compatibility.
  • Conduct security awareness training.  The only thing that minimizes social engineering is consistent, regular security awareness training.  However, do not kid yourself or management: everyone has their ‘moments’ and does something they should not.  That said, there are always those that just never seem to get it, which is why you need other controls and monitoring to ensure you maintain your security.  But to just throw up your hands and say training is pointless is not a position to take either.
  • Secure your applications.  This means conducting application code reviews and testing applications before they are put into production, not after the fact.  Unlike networks, which you need to put into production before you can fully test them, applications can be tested before going into production.  It amazes me how many organizations put their applications into production and, by the time they finally get around to testing them, they have already been compromised.  And while automated application code testing solutions are all the rage, we still find that the best results come from a traditional human code review, not automated tools.
  • Monitor your network and applications.  This is a double-edged sword.  You know what to look for; however, you have so many ports open that it is nearly impossible to distinguish bad traffic from good traffic.  And it is not necessarily the fault of your IT department, as most packaged applications require an inordinate number of open ports to function properly.  However, the key thing to monitor, more than anything, is any traffic going outside of your network to an unknown location.  When you see traffic going to Eastern Europe, China or any unexpected IP address, your monitoring system should generate an alert, as that is typically a key indicator that you have been compromised.
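The last bullet, alerting on outbound traffic to unexpected destinations, can be sketched in a few lines.  This is a minimal illustration only, assuming you can extract destination IP addresses from your firewall or flow logs; the network ranges shown are hypothetical placeholders, and a real deployment would feed this logic from your SIEM or log collector rather than a hard-coded list.

```python
import ipaddress

# Destinations your systems are expected to talk to (illustrative values only).
EXPECTED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # internal address space
    ipaddress.ip_network("203.0.113.0/24"),  # hypothetical known partner range
]

def flag_unexpected(destinations):
    """Return the destination IPs that fall outside every expected network."""
    alerts = []
    for dest in destinations:
        addr = ipaddress.ip_address(dest)
        if not any(addr in net for net in EXPECTED_NETWORKS):
            alerts.append(dest)  # candidate for an alert / investigation
    return alerts

# Two expected destinations and one unknown external address.
print(flag_unexpected(["10.1.2.3", "203.0.113.45", "198.51.100.7"]))
```

The point of the sketch is the design choice: maintain a known-good allowlist and alert on everything else, rather than trying to enumerate every "bad" destination, which is exactly the problem the bullet describes with having too many open ports to tell good traffic from bad.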

Have a happy holiday season and I will “see” you all next year.


FishNet Security is looking for experienced QSAs for their PCI practice. If you are an experienced QSA and are looking for a change, go to the FishNet Security Web site, search for 'PCI' and apply.

If you are posting a comment, be patient, as the comments will not be published until they are approved.

If your organization has a PCI opportunity, is in need of assistance with a PCI issue or if you would like the PCI Guru to speak at your meeting, you can contact the PCI Guru at pciguru AT gmail DOT com.

I do allow vendors to post potential solutions in response to issues that I bring up in posts. However, the PCI Guru does not endorse any specific products, so "Caveat Emptor" - let the buyer beware. Also, if I feel that a response is too "sales-y", I reserve the right to edit it or not authorize it at all.

