Our AWS setup is automated using Terraform and Ansible. We recently upgraded to the next major release, Terraform 0.12, and needed to refactor our setup to support it. This post is a write-up of how to set up Terraform 0.12, AWS and Ansible using dynamic inventory.
Before the upgrade our setup was as follows:
- Ansible
- Terraform 0.11
- terraform-inventory
Terraform-inventory was used to expose the terraform state as dynamic inventory for Ansible. Unfortunately the terraform-inventory library has multiple issues with the latest terraform release. Therefore we needed to figure out how to set up the dynamic inventory again.
There is a pretty good upgrade guide from Terraform describing which steps to take in order to upgrade to the latest 0.12 version. Since we needed to deal with multiple Terraform versions, we started using Terraform version manager, inspired by rbenv, to switch between them easily. In the end, following the upgrade guide did not give us many problems, other than fixing some deprecation warnings.
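For reference, a workflow like ours could look roughly like this with tfenv (the rbenv-inspired version manager; the exact version numbers below are only examples), combined with the 0.12upgrade helper that ships with Terraform 0.12:

# install the old and the new Terraform release side by side
tfenv install 0.11.14
tfenv install 0.12.29

# switch to 0.12 and let Terraform rewrite the configuration in this directory
tfenv use 0.12.29
terraform init
terraform 0.12upgrade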
We have Terraform and Ansible code in the same git repository. The terraform structure is basically as follows:
# Modules
terraform/modules/web-server
# Config per environment
terraform/acceptance/main.tf
terraform/acceptance/inventory/
terraform/production/main.tf
terraform/production/inventory/
A simplified version of the web-server terraform module:
resource "aws_instance" "web_server" {
ami = "abc"
key_name = "ben"
tags = {
Name = var.server_name
ServerRole = "web"
Env = var.environment
}
}
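The module references two input variables that are not shown in the simplified snippet; a minimal variables.tf for the module could look like this (the types and descriptions are our own addition):

variable "server_name" {
  type        = string
  description = "DNS name of the instance, also used for the Name tag"
}

variable "environment" {
  type        = string
  description = "Environment this instance belongs to, e.g. acceptance or production"
}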
Then we can create instances per environment as follows:
In terraform/acceptance/main.tf
module "web-server" {
source = "../modules/web-server"
environment = "acceptance"
server_name = "web01.acceptance.server.nl"
}
In terraform/production/main.tf
module "web-server" {
source = "../modules/web-server"
environment = "production"
server_name = "web01.production.server.nl"
}
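Applying a change to a single environment is then just a matter of running Terraform from that environment's directory, roughly:

cd terraform/acceptance
terraform init
terraform plan
terraform apply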
Next we want to connect the Terraform setup with our Ansible playbooks in order to provision the servers. The Ansible setup is roughly:
# Ansible setup
group_vars/app-web.yml
group_vars/app-web-acceptance.yml
group_vars/app-web-production.yml
app-web.yml
The app-web.yml playbook looks like:
---
- hosts: app_web
# rest omitted
The group_vars/app-web.yml file contains shared variables which can be overridden by the more specific group_vars/app-web-***.yml files. This overriding is not something that works out of the box when using dynamic inventory.
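As an illustration (the variable name and values are made up), the shared file could set a default that the acceptance file overrides:

In group_vars/app-web.yml
---
web_worker_count: 4

In group_vars/app-web-acceptance.yml
---
web_worker_count: 2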
In order to have dynamic inventory we chose to use the aws_ec2 inventory source plugin.
To use the aws_ec2 plugin you need to enable inventory plugins in your ansible.cfg file:
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml
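The aws_ec2 plugin uses the boto3 library to talk to the AWS API, so boto3 and botocore have to be installed for the Python interpreter that runs Ansible, for example:

pip install boto3 botocore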
Then create a file ending with aws_ec2.yml in your inventory directory; the auto plugin enabled above will load the aws_ec2 plugin based on the plugin key inside that file. In our case this is terraform/acceptance/inventory/acceptance.aws_ec2.yml:
plugin: aws_ec2
strict: True
regions:
  - eu-central-1
boto_profile: my-aws-profile-corresponding-to-aws-credentials
keyed_groups:
  - key: tags.ServerRole
    prefix: app
filters:
  tag:Env: acceptance
  vpc-id: vpc-123456
compose:
  ansible_host: private_ip_address
hostnames:
  - tag:Name
  - private-ip-address
The boto_profile points to the name of your AWS profile, typically defined in your ~/.aws/credentials file.
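Such a profile looks roughly like this (the profile name matches the boto_profile value above; the keys are placeholders):

[my-aws-profile-corresponding-to-aws-credentials]
aws_access_key_id = AKIA...
aws_secret_access_key = ...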
The keyed_groups setting creates groups that Ansible can use to target multiple hosts at once. In this case it creates a group called app_web: the prefix app joined, with the default underscore separator, to the value of the ServerRole tag (web).
With filters you control which servers end up in this inventory file. In our case we create an inventory file per environment, each corresponding to a vpc-id (a Virtual Private Cloud in AWS) and, to be super sure, a tag called Env with the value acceptance.
Finally, compose.ansible_host ensures that Ansible will use the private_ip_address to connect to the server. This means you have to be connected to the VPC to be able to provision the servers.
To test our setup, run ansible-inventory -i terraform/acceptance/inventory/acceptance.aws_ec2.yml --graph. This displays the hosts per group created by the aws_ec2 plugin:
@all:
|--@aws_ec2:
| |--web01.acceptance.server.nl
|--@app_web:
| |--web01.acceptance.server.nl
|--@ungrouped:
As expected this created the group defined in keyed_groups called app_web, which of course corresponds to the host group we defined in our Ansible playbook app-web.yml: hosts: app_web.
This however will not load our group_vars files and hierarchy, so the environment-specific overrides mentioned earlier will not work yet.
Luckily Ansible allows us to use multiple inventory files, so you can combine static and dynamic inventory. You can specify each of them individually, like ansible-inventory -i terraform/acceptance/inventory/acceptance.aws_ec2.yml -i terraform/acceptance/inventory/hosts --graph, or just point to a directory: ansible-inventory -i terraform/acceptance/inventory/ --graph.
Please note that the files are read in alphabetical order, so to ensure this setup always works as expected it is wise to prefix them, for instance with 1.acceptance.aws_ec2.yml and 2.hosts.
In terraform/acceptance/inventory/2.hosts
# This will ensure that group_vars/app-web-acceptance.yml
# overrides variables defined in the group_vars/app-web.yml
[app-web:children]
app-web-acceptance
# This connects our Ansible groups with the app_web group
# created by the aws_ec2 plugin (the underscored one).
[app-web-acceptance:children]
app_web
Now when running ansible-inventory -i terraform/acceptance/inventory --graph it shows all the groups with the correct hierarchy we want:
@all:
|--@aws_ec2:
| |--web01.acceptance.server.nl
|--@app-web:
| |--@app-web-acceptance:
| | |--@app_web:
| | | |--web01.acceptance.server.nl
To sum up, the final directory structure looks like this:
# Terraform
terraform/modules/web-server
terraform/acceptance/main.tf
terraform/acceptance/inventory/1.acceptance.aws_ec2.yml
terraform/acceptance/inventory/2.hosts
terraform/production/main.tf
terraform/production/inventory/1.production.aws_ec2.yml
terraform/production/inventory/2.hosts
# Ansible
group_vars/app-web.yml
group_vars/app-web-acceptance.yml
group_vars/app-web-production.yml
app-web.yml
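With this in place, provisioning an environment comes down to pointing the playbook at the corresponding inventory directory, along the lines of:

ansible-playbook -i terraform/acceptance/inventory/ app-web.yml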