Using Ansible with AWS – Creating Multiple EC2 Instances, Tagging Them and ELK (AWS/Ansible)

An Ansible playbook to create multiple AWS EC2 instances

In my previous blog post I used Ansible to create an AWS EC2 instance and discussed how to do this via the Ansible ec2 module. Today I am going to go slightly more in depth, covering the creation of multiple instances, inventory groups and tagging.

The playbook I am using for this section is “create_multiple_ec2_instances.yml” and can be found on my GitHub (https://github.com/geektechdude/AWS_Ansible_Playbooks).

Creating Multiple Instances, Inventory Groups and Tagging

[Image: An Ansible playbook to create multiple AWS EC2 instances]

The playbook may look familiar to readers of my previous post, and the “Create AWS EC2 Instances” section is nearly identical except for two lines:

count: has been increased from 1 to 2. This will create two EC2 instances instead of 1.

register: has been added to the section. This registers the facts about the EC2 instances in a variable called ec2. The variable name could be anything, but I recommend keeping it related to the information it contains. If I had named the variable geek_facts, then the loop would need to call geek_facts.instances instead of ec2.instances.
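As a rough sketch, the task ends up looking something like this (values such as the key pair, AMI, security group and region below are placeholders rather than the exact values in my playbook):

- name: Create AWS EC2 Instances
  ec2:
    key_name: geektechstuff_key        # placeholder key pair name
    instance_type: t2.micro
    image: ami-0example1234567890      # placeholder AMI ID
    region: eu-west-2                  # placeholder region
    group: geektechstuff_sg            # placeholder security group
    wait: yes
    count: 2                           # two instances instead of one
  register: ec2                        # instance facts stored in the ec2 variable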

The “Add EC2 Instances To Host Group” section uses Ansible’s add_host module to add the EC2 instances to an inventory group. It uses the facts registered in the ec2 variable to loop through the instances (items). I am using the public_ip of the EC2 instances as their hostnames and adding them to an inventory group named geektechstuff_ec2. Note: this inventory group is only active whilst the playbook is running.
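A minimal sketch of that section, assuming the ec2 variable registered above:

- name: Add EC2 Instances To Host Group
  add_host:
    hostname: "{{ item.public_ip }}"   # public IP used as the inventory hostname
    groups: geektechstuff_ec2          # in-memory group, only exists for this run
  with_items: "{{ ec2.instances }}"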

The “Tag EC2 Instances” section uses Ansible’s ec2_tag module to tag both of the EC2 instances with appropriate tags.

resource: requires the resource’s AWS ID, so I used another loop over the EC2 instances (ec2.instances) to pass in each instance’s ID (item.id).

tags: requires the tags to be a key and a value (i.e. key: value) and I recommend using tagging to help keep resources organised. In this instance I created a key called “env” (short for environment) and one called “purpose”. I then assigned the keys the values geek_dev (as it’s my geektechstuff development environment) and testing (as the EC2 instances are for testing).

env: geek_dev
purpose: testing
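Putting that together, the tagging task looks roughly like this (the region is a placeholder):

- name: Tag EC2 Instances
  ec2_tag:
    region: eu-west-2                  # placeholder region
    resource: "{{ item.id }}"          # the AWS ID of each instance
    state: present
    tags:
      env: geek_dev
      purpose: testing
  with_items: "{{ ec2.instances }}"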

So with the above Ansible can be used to create multiple EC2 instances, add them to inventory groups and add tags. With that in mind, I’m going to use that information to deploy some software.

ELK

The ELK (Elastic, Logstash, Kibana) stack is a powerful tool for data analytics and I’ve already looked at using Ansible to create a stack, and also at using roles to tidy that playbook up. Both of those playbooks ran against virtual machines (i.e. acting as networked computers); now to see if I can use a similar playbook in the AWS cloud.

The playbook for this section is under the “aws_elk” folder in the GitHub repository. Note: For this playbook I had to increase my AWS EC2 instances from a t2.micro to a t2.small as I needed some more memory for the Java Runtime Environment (JRE). This may lead to an increase in my AWS bill, as I don’t think t2.small is in the free usage tier.

[Image: Ansible roles with the ec2-user]

ansible.cfg – I’ve added a line that points Ansible to the expected AWS private SSH key.
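For example, in ansible.cfg (the key path here is a placeholder for wherever your AWS .pem file lives):

[defaults]
# point Ansible at the AWS private key used to SSH to the EC2 instances
private_key_file = ~/.ssh/geektechstuff_aws.pem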

create_elk_stack_playbook.yml – is where the play starts. It spins up three t2.small instances, tags one as Elastic, one as Kibana and one as Logstash, and then calls the various roles I’ve created for Elastic, Kibana and Logstash.
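A rough outline of the instance creation part (the key pair, AMI and region are placeholders, and the variable names here are illustrative rather than the exact ones in the repository):

- hosts: localhost
  connection: local
  tasks:
    - name: Create the Elastic, Kibana and Logstash instances
      ec2:
        key_name: geektechstuff_key        # placeholder key pair
        instance_type: t2.small            # t2.small for the extra JRE memory
        image: ami-0example1234567890      # placeholder AMI ID
        region: eu-west-2                  # placeholder region
        wait: yes
        count: 1
        instance_tags:
          purpose: "{{ item }}"            # tag each instance with its ELK role
      with_items:
        - elastic
        - kibana
        - logstash
      register: elk_instances              # illustrative variable name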

I’ve included a 3 minute pause in the playbook to give the EC2 instances some time to start, pass AWS status checks and start the SSH service.
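The pause is just Ansible’s pause module, something along these lines:

- name: Give the EC2 instances time to boot, pass status checks and start SSH
  pause:
    minutes: 3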

I have also told Ansible to use the remote_user “ec2-user” as it is the default user for AWS EC2 instances (on the Amazon Linux AMI) and I’ve not yet created another user.
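The plays that run against the new instances then set the remote user, roughly like this (the group and role names are illustrative):

- hosts: elastic
  remote_user: ec2-user    # default user on the Amazon Linux AMI
  become: true             # the install tasks need root
  roles:
    - elastic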

The three role files are very much the same as in my previous blog post, but with the apt module replaced by yum and the service module replaced by systemd (the EC2 instances run Amazon Linux rather than Debian/Ubuntu). In future, using Ansible’s OS facts/variables would fix this and allow one role to work across different operating systems (OS).
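For example, where the original Elastic role used apt and service, the AWS version has tasks along these lines (a sketch, assuming the Elasticsearch repository has already been added to the instance):

- name: Install Elasticsearch via yum
  yum:
    name: elasticsearch
    state: present

- name: Start and enable Elasticsearch via systemd
  systemd:
    name: elasticsearch
    state: started
    enabled: yes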

I’m still working on the Logstash role for my AWS deployment, and I have a line to resolve in the Kibana role (so that it points to the Elastic EC2 instance’s IP address).

Security Group Note: The various parts of the ELK stack need to communicate with each other over specific ports. Make sure that Kibana and Logstash can reach Elastic on port 9200 (the default Elastic port) and that Kibana’s web interface can be reached on port 5601.
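If the security group is also managed from Ansible, the rules can be sketched out with the ec2_group module, something like this (the group name, region and CIDR ranges are placeholders to adjust for your own VPC):

- name: ELK security group rules
  ec2_group:
    name: geektechstuff_elk_sg         # placeholder group name
    description: Allow ELK traffic
    region: eu-west-2                  # placeholder region
    rules:
      - proto: tcp
        from_port: 9200                # Kibana/Logstash to Elastic
        to_port: 9200
        cidr_ip: 10.0.0.0/16           # placeholder VPC range
      - proto: tcp
        from_port: 5601                # Kibana web interface
        to_port: 5601
        cidr_ip: 0.0.0.0/0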
