My SaltStack Deployment
I really like my open source world. I have been doing a bit of IT work for a company with a small IT shop. Their environment is a couple of orders of magnitude smaller than some of the other environments I've worked in, and they still performed many operational tasks manually. A simple task like adding my account to each of the Linux servers required someone to log in to each server, create the account, and set the initial password. That's not too bad for a small environment, but it's neither scalable nor efficient. In addition, much of the work was performed by outside vendors.
One of the first major changes I made was to introduce configuration management and automation. I had a bit more flexibility since this was a greenfield exercise. I wanted something that was scalable, efficient, cross platform, and open source if possible. I'm familiar with CFEngine and Puppet, but I wanted to see what else was available, so I added Chef and SaltStack as options. After some testing, I decided to roll out a pair of SaltStack servers with a Subversion repository to manage the promotion of state files from non-prod to prod servers. SaltStack met my technical requirements, and I felt that the learning curve for my peers would be much less steep.
The biggest challenge of rolling out a configuration management tool into an existing IT environment is reverse engineering the current state of each server and merging those states into the tool. I can't just assume that the sshd_config file on the first server will work for all the others. This is where SaltStack grains come into play. I was soon managing configuration files based on several grains – location, role (test, dev, or prod), function (db, app), or any other identifier that I wanted.
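As a rough sketch, that kind of grain-based targeting lives in the top file. The grain names below mirror the ones I mentioned; the state names themselves are just illustrative placeholders, not my actual tree:

```yaml
# /srv/salt/top.sls -- map minions to states by grain (illustrative names)
base:
  '*':
    - common.sshd            # baseline sshd_config for every server
  'role:dev':
    - match: grain           # match on a single grain
    - dev.app
  'G@location:westcoast and G@function:db':
    - match: compound        # combine multiple grains in one target
    - westcoast.db
```

Each custom grain can be set per minion (for example in /etc/salt/grains), so new servers pick up the right configuration as soon as they're classified.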
SaltStack isn't just blindly pushing configuration files to every server. It's a tool for managing servers efficiently. Need to update the ntp.conf file on the west coast servers because the NTP server was replaced? No problem. Push the new ntp.conf to the west coast cities and restart the daemon. Need to patch the dev app servers? Again, no problem. Just send the correct yum command. With a small time investment to figure out the rules for how things are managed, I can save a lot of time in the future and spend it on more challenging issues.
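From the Salt master, those two one-off tasks might look something like this (the targets and the `ntp` state name are illustrative, and the first command assumes an ntp state whose service.running requisite watches the config file):

```shell
# Push the updated ntp.conf to the west coast minions; the watch on
# the file triggers an ntpd restart only where the file changed
salt -G 'location:westcoast' state.apply ntp

# Patch the dev app servers -- compound match on two grains;
# pkg.upgrade calls the right package manager (yum, here) per platform
salt -C 'G@role:dev and G@function:app' pkg.upgrade
```

The same grain targeting used in the top file works on the command line, so ad hoc maintenance and declared state share one classification scheme.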