Akamai just released their third quarter 2012 Internet statistics and are pointing to China as the generator of at least a third of all attacks. Not only that, the Chinese attackers are going almost exclusively after Microsoft SQL Server.
Then I got on a Webinar with Application Security Inc. where they discussed what they see as the top risks for databases in 2013 and, lo and behold, the two lists align. It is no wonder that Chinese attackers are focused on SQL Server: not only are targets plentiful, but most of the time the software was improperly installed. Application Security Inc. sees the following as the top risks to databases for the coming year.
- SQL injection
- Password attacks
- Improper or ineffective access controls
- Database Java exploits
- Misconfiguration of database security settings
In our rush to be “first” and to “get applications out the door” we seem to neglect security, privacy, good management practices and everything else. Management repeatedly says, “we’ll fix it later” or “that is a version 2 enhancement” only to never get back to it or version 2 is a total rewrite with a new set of flaws.
I used to believe that when we found SQL injection, it was because the protocol being used forced the application to accept a certain amount of risk in order to work. While that may have been true a few years ago, it now turns out that most SQL injection vulnerabilities exist because it was quicker/easier/faster to do something stupid than to do it securely. With our "I probably won't be here in six months anyway" view of employment, it becomes someone else's problem, so why bother? The next person can fix it.
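To make the quick-versus-secure distinction concrete, here is a minimal sketch in Python using SQLite; the table and column names are illustrative, not taken from any particular application. The string-concatenation version is the "quicker" approach that invites injection; the parameterized version takes barely any extra effort and closes the hole.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card_number TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is concatenated directly into the SQL text,
# so the payload rewrites the WHERE clause and matches every row.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(rows))  # 1 -- the payload matched the whole table

# Safe: a parameterized query treats the input strictly as data,
# so the payload matches nothing.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0
```

The secure version is not slower to write; it simply requires the developer to know the idiom, which is exactly the kind of knowledge that gets skipped in the rush to ship.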
Requirement 6 and, in particular, requirement 6.6 require that applications be tested and remediated before they are placed into production. The operative word here is 'before,' which seems to be a stumbling block for a lot of organizations.
Application developers point to the requirements in 11 and ask, “How come the network folks get to test after the fact?” My response is always, “Show me the test network that fully simulates production.” Do not get me wrong, there is usually infrastructure that provides something similar to production, but once a firewall, router or switch is implemented in production, testing of production changes amounts to making a change and seeing if it works. That is just the nature of network changes and why they can only be tested after the fact.
In the case of application development, developers usually have at least one, if not a few, development and testing environments that they can use to determine if applications are working properly and that they integrate with other applications. As a result, applications have the opportunity to be vulnerability scanned and penetration tested before they are moved to production. If vulnerabilities are found, they can either be remediated or mitigated prior to being moved to production. That is what the requirements in 6 are all about – making sure that what ends up in production is as secure as possible.
Database Java Exploits
Unbeknownst to a lot of database administrators as well as IT personnel, database vendors now ship their products with Java. Java is an attacker's dream environment because an exploit developed in Java will run virtually anywhere Java is installed.
Since most IT professionals do not realize Java installs by default with the database management system, Java ends up where it does not belong. And since they do not realize it is installed, it also never gets patched resulting in a prime target for an attacker. Better yet, it is a prime target with an ample supply of information.
This is why the server build and hardening standards are in requirement 2 of the PCI DSS. The idea behind these standards is to get people to stop blindly installing software without understanding what actually gets installed. They also get people to think about which services they actually need from the database, versus doing a default installation.
A lot of vulnerabilities with databases would be eliminated if IT departments did some research into database security and set forth installation standards that removed services and features that are never used. However, in a lot of organizations, unnecessary services and features are installed just in case they are needed sometime in the future. This approach is typically in response to the “rush” factor that I discussed in the SQL injection section. The DBA does not want to be the critical point in a new application, so they just install everything and then complain about getting beat up over all of the security issues such an installation creates.
I have grouped the last three risks together as they all relate to one another.
Misconfiguration Of Database Security Settings
In that rush to get the database system up, IT personnel just do the “default” install and move on. The problem with that approach is that a lot of security settings and features/functions are set at the time of installation and cannot be changed without a reinstall. As a result, is it any wonder that databases are insecure?
One of the biggest security holes we find is the implementation of open database connectivity (ODBC) on a database. ODBC has no security capabilities (unless you are talking about the IBM iSeries), so if you have ODBC installed (typically by default), you have essentially installed a backdoor into your databases for anyone on your network.
Again, this is why requirement 2 calls for all of those build and security standards. The idea is that the standards tell technicians how to do a correct installation and avoid installing services and features/functions that are insecure or not needed. That way the database system is secure from the start, as opposed to finding out after the fact that one of those just-in-case services or features is insecure.
Improper Access Controls
One of the most troubling settings people use for SQL Server is mixed mode authentication, which allows both Active Directory and SQL Server to control access to databases. The first problem we see with mixed mode authentication is that people seem to forget about the SQL Server-managed accounts. As a result, those accounts typically do not get the account management and review activities they deserve until they are remembered again, possibly years down the road. These accounts can also be forgotten in monitoring, so if their credentials are compromised, the compromise might not be recognized for a while, if at all.
Even if the SQL Server-managed user accounts are remembered and monitored, the second problem with mixed mode is that people forget that password change reminders, password complexity and the like are not managed by SQL Server. As a result, people outside of the DBA arena assume that SQL Server-managed user accounts are managed the same way as Active Directory accounts. That assumption gets a lot of organizations into trouble when those SQL Server account credentials are compromised.
An access control issue that occurs with all databases is the use of “service accounts” for database access. Under these scenarios, an application controls access to the information stored in the database by performing the user management functions and access control. Then to access the database, the application uses a single service account. Those single accounts are typically configured as administrators and provide unimpaired access to the data stored in the database, making the theft of that information a relatively simple affair if someone gains access to the service account’s credentials. Some databases have the capability to set up these service accounts so that they cannot be used by anything other than the application. However, in my experience, this is only done when pointed out during a security assessment.
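To illustrate the capability mentioned above (constraining what a service account's connection can actually do), here is a sketch using SQLite's authorizer hook rather than any particular commercial product's mechanism; the table name and the read-only policy are illustrative assumptions. The connection is limited to reads, so even a stolen credential cannot delete or alter the data through it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 9.99)")

def read_only(action, arg1, arg2, db_name, trigger):
    # Permit SELECT statements and column reads; deny everything
    # else (inserts, deletes, DDL, and so on).
    if action in (sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ):
        return sqlite3.SQLITE_OK
    return sqlite3.SQLITE_DENY

conn.set_authorizer(read_only)

# Reads still work through the constrained connection.
print(conn.execute("SELECT total FROM orders").fetchone())  # (9.99,)

# Writes are refused by the database, not by the application.
try:
    conn.execute("DELETE FROM orders")
except sqlite3.Error as exc:
    print("denied:", exc)
```

The point is that the restriction lives in the database layer, so it holds even if the application code (or an attacker holding the service account's credentials) misbehaves.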
Another problem with service accounts is that their credentials may be stored in a database table, stored in a parameter file (e.g., INI or CFG) or, worse yet, hardcoded in the application. When the credentials are hardcoded, changing them requires an application change. But the larger questions are: who has access to the credentials, and how are you ensuring that everyone understands their responsibility to keep those credentials secure?
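As a sketch of one alternative to hardcoding, the service account's credentials can be pulled from the environment (or a vault) at runtime; the variable names and the `svc_orders` account here are illustrative assumptions, not a standard. Rotating the password then means updating the deployment environment or vault entry, not changing and redeploying the application.

```python
import os

def get_db_credentials():
    """Read the service account's credentials from the environment
    instead of hardcoding them, so rotation needs no code change."""
    user = os.environ.get("APP_DB_USER")
    password = os.environ.get("APP_DB_PASSWORD")
    if not user or not password:
        # Fail loudly rather than falling back to a baked-in default.
        raise RuntimeError("database credentials are not configured")
    return user, password

# In practice the deployment tooling (not the source tree) sets these.
os.environ["APP_DB_USER"] = "APP_DB_USER_example"
os.environ["APP_DB_PASSWORD"] = "example-only"

user, password = get_db_credentials()
print(user)
```

This does not answer the "who has access" question by itself, but it narrows it: access to the credentials becomes access to the runtime environment, which can be restricted and audited separately from the source code.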
The PCI DSS has requirements in 2 (configuration standards), 7 (access control methods) and 8 (account management) that deal with these issues.
Password Attacks

This all leads to the success of password attacks. When databases are not properly configured and/or access controls are not properly constructed, it is virtually impossible to protect the information in those databases.
The leading reason password attacks are successful is that databases are used to store user credentials. A lot of e-Commerce solutions use a table in the database to store users' credentials as well as the credentials for the administrators of the e-Commerce environment. Combine that with the other conditions above, and compromising the database gives an attacker the user credentials stored in the credential table. Worse yet, the encryption keys for the passwords are likely stored in the same database, or in a related database that shares administrator credentials with the compromised one.
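One way to reduce that exposure, sketched here with Python's standard library `hashlib.pbkdf2_hmac`, is to store a salted, slow hash of each password instead of a reversibly encrypted value, so there is no decryption key sitting next to the table for an attacker to steal. The iteration count and the two-column salt/hash layout are illustrative assumptions.

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, digest). Only these two values go in the
    credential table; nothing stored can be decrypted back to
    the original password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    # Recompute the hash and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))    # False
```

A stolen table of salted hashes still has to be brute-forced one guess at a time, which is a very different proposition from decrypting every password with a key found in the next table over.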
Given the ease with which SQL injections and other database attacks can be conducted, the fact that most Internet facing databases are used for managing user accounts, the misconfiguration of databases and the improper access controls, is it any wonder that password attacks are so successful?
But the changes required to address this situation are not as easy as people think. Most pre-packaged Web-based solutions are not engineered to address these credential security issues because doing so would raise their cost beyond what the small and mid-sized merchants who are their target market will pay. Until that changes, these solutions will remain at risk.
One would think that using Active Directory or another directory service would be an easy solution. Active Directory and the like are designed to securely store account credentials, as long as they are configured and implemented properly. On the face of it, it would appear that way, and it does work for organizations that host their own Web presences. But for service providers it is not that easy, because each customer's Web presence would need its own branch in the directory's forest. Since there are no automated domain provisioning tools for directory applications, creating or removing branches in a forest has to be done manually, which drives up the cost of a site. The manual process also delays the establishment of a site until the directory maintenance is completed, which is totally unacceptable in our "have to have/do it now" world.
For the time being, we are stuck with using the database to store credentials. That being the case, the credential database should not be mixed in with the other databases; it should stand on its own and not be accessible from the Internet. The applications that manage the credentials need to be properly engineered so that they are secure as well as efficient. In addition, the development effort should be reviewed by someone with a security focus so that security and privacy are not left to the very end, only to be found too cumbersome to implement.