Monthly Archives: January 2012

Never Work on a Machine, Apply a Configuration

If you’ve worked with Linux as a systems administrator, or even for your own services, you’ve almost certainly fiddled with a few config files on whatever machines you’re in charge of. You’ve also probably configured the same thing many times over on every new system you get. On every machine I’ve ever owned, I’ve made a user account for myself. They also all have ntp and ssh running. Every machine I work with has good reason to have my public ssh key, so I have to copy that. If I keep going for a while, I’ve suddenly got a list of things to take care of.

When I first got started playing with Linux over a decade ago, I evolved through a few systems until I started playing with Linux From Scratch as a distribution sometime in late 2000 or early 2001 (I’m registered user #23… and 553 because I forgot I registered the first time). After I built my box a few times over, including with KDE and kernel compiles that I remember taking 24 hours, I started getting really excited about the idea of packaging my Linux From Scratch work. Next time I wanted to get all the new versions, why would I want to go back and try to remember all the flags I set? I never had more than one system to deal with at a time, but I knew I’d want to simplify the work because it would come up again.

Since then I’ve mostly lived in a Debian and Ubuntu world because I don’t want to go back and figure out those dependencies, wait 24 hours for KDE to build, or focus on all the software options. Yet even when I’ve whittled myself down (at times) to just one server, the package management system still leaves me needing to do mostly repetitive configuration work. Further, if I want to be good, I’ll keep my iptables configuration up to date. Scripting that is a nuisance, and it’s unfortunate when things are inconsistent.

Putting all my configurations into Puppet is quite a bit of work. The answer to that is to be incremental. Wikimedia, like Rome, didn’t build their intense implementation in a day. The suggestion I’ve got is to just do it with the next change you make. Install Puppet on that one machine and use it to deploy that one configuration file. Drop it in the templates directory and don’t sweat the idea of variables. It won’t cost you more than a few minutes of extra time. Don’t worry about making everything, or anything, a variable until you need it.

class base {
 package { "ntp":
  ensure => latest,
 }

 service { $operatingsystem ? {
   "Debian" => "ntp",
   default  => "ntpd",
  }:
  ensure => running,
 }
}
I’ll never again install ntp from the command line, or worry about whether it actually runs. If I create a config file, I can repoint all my servers to different time masters at once. If I don’t want to worry about that, I can just leave it at the defaults that the package uses.
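As a sketch of what that repointing looks like, the same class can manage the file itself. The template path and the Debian service name here are assumptions for illustration:

```puppet
class base {
 # Deploy a shared ntp.conf; editing the one template repoints every server.
 file { "/etc/ntp.conf":
  ensure  => file,
  content => template("base/ntp.conf.erb"),  # hypothetical template in the module
  require => Package["ntp"],
  notify  => Service["ntp"],                 # restart ntp when the file changes
 }
}
```

Leave the file resource out entirely and the package’s default config stays in place, which is the other option mentioned above.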

It’s going to pay off the next time you upgrade or replace that server and you don’t have to select all the different packages that need to be there. You won’t be fiddling with extra options or uninstalling excess software, because a bare-minimum install plus a Puppet client will put all the packages you need in place and drop in your configs. Next time I want to test something, I can make a dev server on EC2 in minutes.

You get security because you get consistency. That tripwire, apparmor, or iptables setting you learned years ago and didn’t carry on to your next system because it takes a long time to get right can now be copied everywhere without thought. You get time back without an upfront cost as you slowly roll every change in, even with just a few machines. You become a better systems admin, even if you’re just working on your one server.
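The iptables case can be as simple as shipping the rules file you already trust to every node. This is a sketch; the module name, file paths, and resource titles are made up for the example:

```puppet
class firewall {
 # Copy a hand-maintained rules file out to every node.
 file { "/etc/iptables.rules":
  ensure => file,
  source => "puppet:///modules/firewall/iptables.rules",  # hypothetical module file
 }

 # Reload the rules only when the file actually changes.
 exec { "reload-iptables":
  command     => "/bin/sh -c '/sbin/iptables-restore < /etc/iptables.rules'",
  refreshonly => true,
  subscribe   => File["/etc/iptables.rules"],
 }
}
```

The `refreshonly`/`subscribe` pair keeps the exec from firing on every run, so the rules are reapplied exactly when the file is updated.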

At some point, you’ll find that putting a new machine in place with security, backups, user accounts, monitoring, dns, and the software you need at the start will be nothing more than a case of

node "newhost" {
 include base
}
Now, next time you upgrade that PHP script that’s in a .tar.gz for the 14th time this year and do the WordPress dance of “keep this directory, but copy all new files”, ask yourself: might it be worth it if I write a packaging definition file and just run that against the .tar.gz file next time?