Security: Dirty COW Security Hole Discovered

A fairly nasty security bug, CVE-2016-5195, nicknamed Dirty COW, was found by Phil Oester, a Linux security researcher.  The flaw is relatively easy to exploit, so it's important to patch your systems ASAP; everyone running Linux is affected.  RedHat's description can be found here, and it states:

"A race condition was found in the way the Linux kernel's memory subsystem handled the copy-on-write (COW) breakage of private read-only memory mappings.  An unprivileged local user could use this flaw to gain write access to otherwise read-only memory mappings and thus increase their privileges on the system."

The security issue has been in the Linux kernel for a long time but has only recently been uncovered.  It is not known whether the problem has already been exploited; there are no known instances, but that's not to say it hasn't happened.  However, now that this vulnerability is public knowledge we can expect attacks to follow, so anyone running a Linux environment should patch as soon as possible.

Hornbill’s own cloud environment runs entirely on the CentOS distribution of Linux, so our systems were affected by this vulnerability.  We are fortunate in a sense, because we do not provide direct access to our systems from outside our own network.  All end-user access is provided through our application services and APIs, which are not vulnerable to the problem.  In theory an attacker could chain this exploit with another security issue to make it exploitable, but that is a pretty unlikely scenario.

Of course, as soon as we learned about the issue we reviewed our systems to make sure there was no direct way for anyone outside our own network to exploit this, and to confirm there was no immediate risk to our customers' data.  We then waited patiently for a patch to be developed by the security experts in the Linux community.

We use http://spacewalk.redhat.com/ to manage all our internal Linux servers, so pushing out the patch and confirming it has been applied is trivial.  We always push changes to our development and test environments first, to confirm that our systems are still working as expected, and then push the change into our production systems, generally 48 hours later.  This particular change is a little trickier than usual: because it is a kernel update, the servers must be restarted for the patch to actually take effect, and this needs to be done without disrupting service.
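Because the fix only takes effect once a host is running the new kernel, a quick sanity check after each reboot is to read the running kernel release and compare it with the version your distribution's advisory lists as fixed for CVE-2016-5195.  A minimal sketch of that check is below; the release string shown is only an example for CentOS 7 and should be replaced with whatever your vendor's advisory specifies.

```c
/* kernel_check.c - print the running kernel release and compare it
 * against the release you expect after patching.
 * Build: cc kernel_check.c -o kernel_check */
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) != 0) { perror("uname"); return 1; }

    /* Placeholder: substitute the patched release listed in your
     * distribution's advisory for CVE-2016-5195. */
    const char *expected = "3.10.0-327.36.3.el7";

    printf("running kernel:  %s\n", u.release);
    printf("expected kernel: %s\n", expected);

    if (strcmp(u.release, expected) == 0)
        puts("host is running the expected patched kernel");
    else
        puts("running kernel differs from the expected release -- "
             "verify against your distribution's advisory and reboot "
             "into the updated kernel if necessary");
    return 0;
}
```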

We have now patched all of our servers at all global locations and everything is up and running without issue.
