03 Jul 16

The Council Issues A New Information Supplement

Back in May, the PCI SSC issued a new information supplement titled ‘Effective Daily Log Monitoring’.  It probably slipped under most people’s radar because of the release of v3.2 of the PCI DSS.  My friend Anton Chuvakin will have this on his reading list immediately, if he has not read it already, because his book, ‘Logging and Log Management: The Authoritative Guide to Understanding the Concepts Surrounding Logging and Log Management’, is listed in the supplement’s bibliography and, from the way the supplement reads, figured prominently in its content.

First, some overall impressions.  Unlike a lot of other information supplements, this one actually clarifies its subject and provides a significant amount of knowledge and great advice.  Every organization that must comply with the PCI DSS should read it.  More importantly, any organization that does not fully understand the collection and analysis of log data should read it to get a foundation in those practices.

One of the best recommendations comes in section 5.2 of the supplement, which instructs organizations to develop a baseline from their log data.  This is the biggest problem when an organization gets serious about analyzing log data: they fail to establish a baseline, so they cannot tell exceptions from normal activity.  I cannot tell you the number of times that I have discussed logging with the personnel responsible for monitoring alerts and they cannot tell me what is “normal” activity versus “abnormal” activity.  As a result, they either: (1) chase every alert as though it is an attack (most are not), or (2) ignore alerts because they assume they are “normal” activity (some are not).  The result in case #1 is that the staff burn out, and in case #2 the organization is likely being attacked and compromised without knowing it.
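
To make the idea concrete, here is a minimal sketch of baselining in Python.  This is entirely my own illustration, not an example from the supplement: historical log data establishes an average daily count per event type, and today’s activity is flagged when it deviates sharply from those averages.  The event names and the three-times threshold are arbitrary assumptions.

```python
from collections import Counter

def build_baseline(events, window_days):
    """Average daily count per event type, computed from a list of
    (day, event_type) tuples covering the history window."""
    totals = Counter(event_type for _, event_type in events)
    return {event: total / window_days for event, total in totals.items()}

def flag_anomalies(todays_counts, baseline, threshold=3.0):
    """Flag event types whose count today exceeds the baseline average
    by more than `threshold` times; event types never seen before are
    always flagged."""
    anomalies = {}
    for event, count in todays_counts.items():
        expected = baseline.get(event, 0.0)
        if expected == 0.0 or count > expected * threshold:
            anomalies[event] = (count, expected)
    return anomalies

# 30 days of history at roughly 5 failed logins per day is "normal";
# 40 in one day, or any brand-new event type, is not.
history = [(day, "failed_login") for day in range(30) for _ in range(5)]
baseline = build_baseline(history, window_days=30)
print(flag_anomalies({"failed_login": 40, "new_admin_user": 1}, baseline))
```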

I cannot overstate the importance of this activity, not only in getting a log monitoring project going, but also as an ongoing practice.  No organization’s IT environment is static, so it is important to continuously re-baseline the log data and adjust for changes.  This can be done by adding an item to change control tickets to evaluate each change’s impact on log data, or by re-baselining every quarter.  The bottom line is that baselining is not something done once; the baseline must be periodically updated.

The most glaring omission in this document is a reference on page 27 to Appendix B that promises to provide readers with a list of “sample tools” for normalizing log data.  But guess what?  There is no Appendix B.  Hopefully the Council will issue a revision to this document and add Appendix B to that future revision.

If I had to ding this information supplement for anything, it is the downplaying of the need for one or more tools to collect and analyze the log data.  The Council obviously has to walk a fine line on this, but let us be honest: anyone who believes that log monitoring and analysis can be done without one or more commercial tools is kidding themselves.  Yes, a person can put up a centralized log collection server and then parse that collection of data with tools such as grep, Perl or Python.  But again, let us be honest: no one has the time to write and maintain all of the scripts and search patterns needed to find every condition worth finding.
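
As an illustration of what that hand-rolled approach looks like, here is a minimal Python sketch.  The log format and the two patterns are hypothetical; a real environment would need hundreds of such patterns, each written and maintained by hand, which is exactly why this approach does not scale.

```python
import re
import sys

# Hypothetical patterns for just two conditions of interest.
# Every additional condition means another pattern and more upkeep.
PATTERNS = {
    "failed_login": re.compile(r"authentication failure.*user=(\S+)"),
    "priv_escalation": re.compile(r"sudo: .* COMMAND="),
}

def scan(log_path):
    """Scan a log file line by line and report matching conditions."""
    with open(log_path) as log:
        for lineno, line in enumerate(log, start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{log_path}:{lineno}: {name}: {line.rstrip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)
```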

It used to be that free editions of tools such as Splunk and similar were available for log gathering and analysis.  However, those days disappeared a number of years back, and now essentially only commercial licensing is available.  So everyone needs to be prepared to spend a certain amount of money on a toolset that provides the necessary log gathering and analysis capabilities.

I also fault this document for focusing too much on log data normalization.  This is exactly why organizations need to invest in a commercial tool.  Normalization was the big complaint with using open source tools: people found that they got basic functionality but then had to conduct what was essentially a full-fledged system development effort to get the analysis and alerting done.  That was the big selling point of commercial tools such as LogRhythm and ArcSight, which came with the basics as well as a whole host of analysis templates, both free and for purchase.
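
For readers unfamiliar with the term, normalization means mapping the differently formatted records from each log source onto one common layout so they can be analyzed together.  Here is a minimal sketch of the idea in Python; both input formats and the field names are assumptions of mine, not examples from the supplement.

```python
import re

# Two hypothetical sources that record the same kind of event in
# different formats.  Normalization maps both onto one common record
# (timestamp, source, event, user) so the data can be analyzed together.
SSH_RE = re.compile(r"^(\w{3} +\d+ [\d:]+) \S+ sshd.*Failed password for (\S+)")
WIN_RE = re.compile(r"^EventID=4625 .*TargetUserName=(\S+) .*TimeCreated=(\S+)")

def normalize(line, source):
    """Return a normalized record for a raw log line, or None if the
    line does not match the known format for that source."""
    if source == "ssh":
        m = SSH_RE.match(line)
        if m:
            return {"timestamp": m.group(1), "source": "ssh",
                    "event": "failed_login", "user": m.group(2)}
    elif source == "windows":
        m = WIN_RE.match(line)
        if m:
            return {"timestamp": m.group(2), "source": "windows",
                    "event": "failed_login", "user": m.group(1)}
    return None

print(normalize("Jul  3 10:15:01 host1 sshd[123]: Failed password for root", "ssh"))
print(normalize("EventID=4625 TargetUserName=admin TimeCreated=2016-07-03T10:15:01Z",
                "windows"))
```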

Another ding I have is the minimal number of requirements that the information supplement identifies as being satisfied by log monitoring and alerting.  I can find at least 15 PCI DSS requirements that can be met by monitoring and alerting, not just the eight identified in the supplement.

Even with these concerns and shortcomings, this is possibly one of the better information supplements issued by the Council and is a must-read for everyone.

To get your own copy of the information supplement, go to the PCI SSC Document Library, select ‘Guidance Documents’ and scroll down to the ‘Logging’ banner.
