Whereas automated testing, version control systems (VCS) and design patterns have been part of Software Engineering for a respectable time, the same rigour and accountability have, until quite recently, unfortunately been absent from the world of Operations and Systems Administration.
Additionally, setting up systems has – for the longest time – been a menial task. Each server or workstation would be set up manually, using CDs, USB keys, disk images, or booting from a PXE server in the best case. Partitioning, installing core libraries and the required applications, configuring and securing the system would have to be repeated, over and over again, on different machines, often taking hours on end.
Doing away with the tedious task of going from server to server, configuring one instance after another, Software Configuration Management (SCM) tools such as Ansible, Chef, Puppet or Salt – to name only the most popular – allow us to provision infrastructure and its configuration using machine-processable definition files – in other words: code!
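To give a taste of what such a definition file looks like, here is a minimal Puppet manifest (the resource names are illustrative): it declares the desired state – a package installed, a service running, a file with known content – and leaves the *how* to the tool.

```puppet
# ntp.pp -- a complete, if tiny, definition file.
# We declare *what* the machine should look like; Puppet works out *how*.
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],  # only start the service once the package is in place
}

file { '/etc/motd':
  ensure  => file,
  content => "This machine is managed by Puppet.\n",
}
```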
Defining your infrastructure as code opens up a range of new possibilities and advantages: suddenly, we can apply the same configuration to a large computing infrastructure in parallel, while still allowing individual hosts to diverge from each other. The software configuration management utility will make sure the result matches your definition, even if the underlying infrastructure varies in hardware architecture, installed components or even operating system.
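As a sketch of how a single Puppet definition absorbs such differences, consider a web server whose desired state is identical everywhere while the platform-specific package and service names are derived from the node's own facts (shown here for Debian and RedHat derivatives only):

```puppet
# Same desired state on every node; only the platform-specific name differs.
$httpd = $facts['os']['family'] ? {
  'Debian' => 'apache2',
  'RedHat' => 'httpd',
}

package { $httpd:
  ensure => installed,
}

service { $httpd:
  ensure  => running,
  enable  => true,
  require => Package[$httpd],
}
```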
At the same time, when using SCM tools to apply configurations across entire data centres, we not only distribute configurations to the devices, we also identify the systems we are provisioning. With Puppet – the tool of choice at TenTwentyFour1024 – a dedicated utility called Facter collects facts about the target system and stores them in a central database, PuppetDB. From this database, using various Puppet techniques and third-party clients, we can now automatically pipe this information to our monitoring tools and have them add and watch new infrastructure on the fly.
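One such Puppet technique is exported resources: each node exports a monitoring entry built from its own facts, and the monitoring server collects them all from PuppetDB. A minimal sketch, assuming the Nagios resource types (provided by the puppetlabs/nagios_core module on recent Puppet versions):

```puppet
# On every managed node: export a monitoring entry derived from its facts.
@@nagios_host { $facts['networking']['fqdn']:
  ensure  => present,
  address => $facts['networking']['ip'],
  use     => 'generic-host',
}

# On the monitoring server: collect all exported hosts from PuppetDB, so new
# machines appear in the monitoring system on their next Puppet run.
Nagios_host <<| |>>
```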
We can just as easily query the collected data from our internal wiki. Instead of manually updating server information in the wiki – and inevitably lagging behind – we created a tool that queries PuppetDB directly, retrieving facts about each server such as its CPU model, assigned IP addresses, operating system or even uptime.
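The sketch below illustrates the idea behind such a tool using PuppetDB's v4 query API; the PuppetDB host and certname are hypothetical, and a real deployment would typically use TLS client certificates rather than plain HTTP:

```python
import requests

# Hypothetical PuppetDB instance; adjust host, port and TLS to your setup.
PUPPETDB = "http://puppetdb.internal:8080"

def node_facts(certname: str) -> dict:
    """Return all facts PuppetDB has stored for one node, keyed by fact name."""
    resp = requests.get(f"{PUPPETDB}/pdb/query/v4/nodes/{certname}/facts")
    resp.raise_for_status()
    # The endpoint yields a JSON list of {certname, name, value, environment}.
    return {fact["name"]: fact["value"] for fact in resp.json()}

facts = node_facts("web01.example.com")              # hypothetical certname
print(facts["processors"]["models"][0])              # CPU model
print(facts["networking"]["ip"])                     # primary IP address
print(facts["os"]["name"], facts["os"]["release"]["full"])  # operating system
print(facts["system_uptime"]["days"], "days up")     # uptime
```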
Using definition files allows us to model the actual infrastructure through files and directories. Current design patterns encourage you to think of your infrastructure in terms of roles and profiles, thereby keeping both your code and your infrastructure clean and concise.
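In that pattern, a profile wraps one piece of technology together with our site-specific choices for it, while a role composes profiles to describe what a machine *is* – and each node receives exactly one role. A minimal sketch, with illustrative module and class names:

```puppet
# A profile wraps one technology and our site-specific configuration of it.
class profile::webserver {
  class { 'nginx':  # assumes a component module such as puppet/nginx
    worker_processes => $facts['processors']['count'],
  }
}

class profile::firewall {
  include firewall  # assumes puppetlabs/firewall
}

# A role describes what a machine *is*, by composing profiles.
# Roles contain nothing but includes; each node gets exactly one role.
class role::ecommerce_frontend {
  include profile::webserver
  include profile::firewall
}
```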
Defining infrastructure as code, however, has more benefits than mere abstraction and automation. By keeping our definition files in version control, not only can we easily revert changes that turned out to be unsatisfactory, we can also apply all the tools and methodologies we have learned to appreciate when working with code – ideally so that we never have to revert anything in the first place.
So yes, we can retrace exactly who introduced a configuration change and when – and usually even why, by looking at the respective commit message. We can peer-review any change before it is accepted into the repository’s main branch, making configuration changes to our infrastructure easily auditable. We can run the changes through a Continuous Integration (CI) environment, linting the definitions, gathering metrics and applying automated tests, just as we would with software. And we can apply the modified configuration to a test environment at no additional cost, making sure that the introduced changes do not break the running infrastructure.
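As an example of what such a pipeline can look like – here a hypothetical GitLab CI configuration; the same steps translate directly to any other CI server – every change is syntax-checked, linted and unit-tested before a human even begins the review:

```yaml
# .gitlab-ci.yml -- hypothetical pipeline for a Puppet control repository.
stages: [validate, test]

lint:
  stage: validate
  image: ruby:3.2
  script:
    - gem install puppet puppet-lint
    - puppet parser validate manifests/          # catch syntax errors early
    - puppet-lint --fail-on-warnings manifests/  # enforce the Puppet style guide

unit:
  stage: test
  image: ruby:3.2
  script:
    - bundle install          # assumes a Gemfile with puppetlabs_spec_helper
    - bundle exec rake spec   # run the rspec-puppet unit tests
```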
We have several years of experience provisioning infrastructure using Puppet – allow us to analyse your current infrastructure and suggest how configuration management could make it easier to maintain. Get in touch with us!