Automating Node.js Testing when HAProxy is involved
Automating your Node.js project’s testing is a solved problem if you’re working with a normal app that has a server.js process listening on a socket for HTTP traffic. It gets even easier when you use the Ploy module from Substack: you push code to be tested to Ploy’s git service, your tests run, and QA becomes possible.
But what happens when your app needs to sit behind HAProxy or some other reverse proxy? Then it gets more complicated, because Ploy assigns your app a random port by setting the PORT environment variable. Often your choice is to pre-configure HAProxy with dozens of ports, but then you also need to map sub-domains to those ports… it quickly gets complicated, and your QA environment starts to impose a structure that isn’t really wanted or required.
The solution I came up with was a bash script, used as npm’s scripts.start in package.json, that creates my HAProxy config dynamically and then tells HAProxy to restart itself. HAProxy’s ability to restart gracefully is not obvious at all and is currently not found in the docs, but it works. Using scripts.start, rather than scripts.prestart or one of the other pre-/post- variants, is required because Ploy currently only passes the PORT environment variable to its spawn of scripts.start.
Ok, enough chatter, let’s see some code :)
This is the replacement for what most people have in their package.json for scripts.start:
#!/bin/bash
if [ -f /etc/haproxy/build_config ]; then
  if [ ! "${PORT}" = "" ]; then
    echo -e "backend ${BRANCH}_myapp\nbalance roundrobin\n server myapp_${BRANCH} 127.0.0.1:${PORT} weight 1 maxconn 2500 check\n" > /tmp/backend-${BRANCH}.cfg
    echo -e "acl is_${BRANCH} hdr_beg(Host) -i ${BRANCH}\n" > /tmp/acl-${BRANCH}.cfg
    echo -e "use_backend ${BRANCH}_myapp if is_${BRANCH}\n" > /tmp/use-${BRANCH}.cfg
    sudo /etc/haproxy/build_config ${BRANCH}
  fi
fi
node server.js
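To make the fragment format concrete, here is a runnable sketch of what the three files end up containing for a hypothetical branch. The branch name feature-login and port 3000 are made up for illustration, and printf is used instead of echo -e purely for portability:

```shell
#!/bin/bash
# Hypothetical values standing in for what Ploy would provide.
BRANCH=feature-login
PORT=3000

# Backend definition: one server entry pointing at the app's port.
printf 'backend %s_myapp\nbalance roundrobin\n server myapp_%s 127.0.0.1:%s weight 1 maxconn 2500 check\n' \
  "$BRANCH" "$BRANCH" "$PORT" > /tmp/backend-${BRANCH}.cfg

# ACL: match requests whose Host header begins with the branch name.
printf 'acl is_%s hdr_beg(Host) -i %s\n' "$BRANCH" "$BRANCH" > /tmp/acl-${BRANCH}.cfg

# Routing rule: send matching requests to that branch's backend.
printf 'use_backend %s_myapp if is_%s\n' "$BRANCH" "$BRANCH" > /tmp/use-${BRANCH}.cfg

cat /tmp/backend-${BRANCH}.cfg /tmp/acl-${BRANCH}.cfg /tmp/use-${BRANCH}.cfg
```

The net effect: each branch gets a Host-header ACL, a routing rule, and a backend pointing at whatever port Ploy handed out.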
The script first detects whether it’s being run in the QA environment by checking if the HAProxy build_config script is present; this lets the script always be present without hindering any non-QA environments.
If HAProxy is detected, we write out the three pieces of information most HAProxy configs require when working with sub-domains: the ACL, the use_backend rule, and the backend definition. These will be combined by the build_config script into a single HAProxy config.
The HAProxy build_config script is below. It assumes it has been added to your sudoers list so that the deploy user can run it, and only it (that’s also why all of the files are placed into /tmp/* – it’s one place a non-privileged user can write files).
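As a sketch, the sudoers entry might look like the following (assuming the deploy user is literally named deploy; adjust to taste):

```
# Hypothetical /etc/sudoers.d entry: allow the deploy user to run
# build_config as root, with no password prompt, and nothing else.
deploy ALL=(root) NOPASSWD: /etc/haproxy/build_config
```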
#!/bin/bash
# usage: build_config BRANCH
if [ -f /tmp/backend-${1}.cfg ]; then
  cp /tmp/backend-${1}.cfg /etc/haproxy/conf.d/
  cp /tmp/acl-${1}.cfg /etc/haproxy/conf.d/
  cp /tmp/use-${1}.cfg /etc/haproxy/conf.d/
  cat /etc/haproxy/haproxy.base /etc/haproxy/haproxy.acls /etc/haproxy/conf.d/acl-*.cfg /etc/haproxy/haproxy.uses /etc/haproxy/conf.d/use-*.cfg /etc/haproxy/conf.d/backend*.cfg > /etc/haproxy/haproxy.cfg
  if [ -f /var/run/haproxy.pid ]; then
    service haproxy restart
  fi
fi
The core job of build_config is to combine the different parts of an HAProxy config into a single file. We accomplish this by splitting it into static templates (haproxy.base, haproxy.acls and haproxy.uses) and then appending all of the dynamic pieces at build time. The cat command does all of that for us; we just have to get the templates correct ;)
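To see the ordering in action, here is a runnable sketch of that concatenation step using a scratch directory (/tmp/haproxy-demo) and one-line stand-in templates. The file contents are placeholders, not a working HAProxy config:

```shell
#!/bin/bash
set -e
# Scratch directory standing in for /etc/haproxy.
dir=/tmp/haproxy-demo
mkdir -p "$dir/conf.d"

# Stand-in static templates.
printf '# base: global + defaults sections\n' > "$dir/haproxy.base"
printf '# frontend + static acls\n'           > "$dir/haproxy.acls"
printf '# static use_backend rules\n'         > "$dir/haproxy.uses"

# Stand-in dynamic fragments for one branch, "demo".
printf 'acl is_demo hdr_beg(Host) -i demo\n' > "$dir/conf.d/acl-demo.cfg"
printf 'use_backend demo_myapp if is_demo\n' > "$dir/conf.d/use-demo.cfg"
printf 'backend demo_myapp\n'                > "$dir/conf.d/backend-demo.cfg"

# Same ordering as build_config: base, static acls, dynamic acls,
# static uses, dynamic uses, then all the backends.
cat "$dir/haproxy.base" "$dir/haproxy.acls" "$dir"/conf.d/acl-*.cfg \
    "$dir/haproxy.uses" "$dir"/conf.d/use-*.cfg "$dir"/conf.d/backend*.cfg \
    > "$dir/haproxy.cfg"

cat "$dir/haproxy.cfg"
```

The ordering matters: HAProxy needs ACLs declared in the frontend before the use_backend rules that reference them, and the backends can come last.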
Our example uses Ubuntu’s upstart method of service management, and the upstart config for haproxy is:
# HAProxy
description "HAProxy"

start on runlevel [2345]
stop on runlevel [016]

env CONF=/etc/haproxy/haproxy.cfg

pre-start script
  [ -r $CONF ]
end script

exec /usr/local/sbin/haproxy -f $CONF -sf $(cat /var/run/haproxy.pid)
The magic is in the haproxy exec line. The "-sf" command line option does the required work: it tells the running HAProxy process to stop accepting new connections, drain its existing sockets, and hand off to the new process. We just have to pass it the old PID, which happens to be the current PID at the time the command line is evaluated.
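The same graceful handoff can be triggered by hand; this is just the exec line above run manually (paths as in the upstart config):

```
# The new process reads the config and binds the sockets, then -sf tells
# the old PID(s) to finish serving in-flight connections and exit.
/usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
```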