Ops Script of the week - sanity check that iptables rules are current

So, I just made a classic n00b mistake: rebooted a server without confirming that the saved iptables.rules matched what was currently active. Yep, upon reboot I got a weird error when starting one of the services, and it turned out to be caused by a test change that should have been persisted but never was.

Two lessons come from this kind of mistake:

  1. Never make changes to your servers from the command line, and if you have to, always confirm that your configuration management gets updated to match
  2. Before rebooting a server, ensure that your current environment is the same as what the restarted server will use

Now, lesson 1 varies depending on what configuration management tool you use - Chef, Puppet or whatever - and most of the time in those environments you simply don't make local changes. In my case I was working on a server for our beta product that I then use as a reference for the configs, but that's just an excuse, not a reason :/
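
For what it's worth, when /etc/iptables.rules is the file your boot process restores from (as the check script below assumes), the bare minimum after any command-line change is to re-save the active rules over it. A rough sketch, assuming your startup runs something like iptables-restore < /etc/iptables.rules:

# After testing a change at the command line, persist the active rules
# back to the file the server restores from at boot (this assumes your
# startup runs something like: iptables-restore < /etc/iptables.rules)
iptables-save > /etc/iptables.rules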

Lesson 2, though, should have been handled by running some sort of sanity check, and that is the crux of this post - how to sanity check your iptables environment (or at least how I do it).

So without any further self-flagellation, here is my check_iptables.sh script:

#!/bin/bash

# Dump the currently active rules, stripping the comment and chain/counter
# lines (anything starting with # or :) since those change between runs.
iptables-save | sed -e '/^[#:]/d' > /tmp/iptables.check

if [ -e /etc/iptables.rules ]; then
    # Normalise the saved rules the same way, then compare.
    sed -e '/^[#:]/d' /etc/iptables.rules > /tmp/iptables.rules
    diff -q /tmp/iptables.rules /tmp/iptables.check
else
    echo "unable to check, /etc/iptables.rules does not exist"
    exit 1
fi
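
I run this by hand before a reboot. When the saved and active rule sets have drifted, diff -q prints something like "Files /tmp/iptables.rules and /tmp/iptables.check differ" and the script exits non-zero; when they match it stays quiet. Here's a sketch of how I'd wire it into a pre-reboot routine - the wrapper itself is hypothetical, not part of the script:

# Hypothetical pre-reboot wrapper: only reboot when the sanity check passes.
# With the version above, check_iptables.sh exits non-zero when the rule
# sets differ or when /etc/iptables.rules is missing.
if ./check_iptables.sh; then
    sudo reboot
else
    echo "iptables rules have drifted - fix /etc/iptables.rules first"
fi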
