- Overview
- Commands
- Inventory
- Playbook
- Tasks
- Variables
- Loops
- Testing
- Tags
- Limit
- Error handling
- Debugging
- Roles
- Collections
- Importing and including
- Secrets
- Configs
- Work with Azure
- Work with Windows hosts
- AAP
- Interaction with Terraform
## Overview

- Sponsored by Red Hat
- Automates cloud provisioning, configuration management and application deployments
- Agentless, but the nodes and the control machine need Python
- Connects to each node through SSH (WinRM for Windows)
- Idempotent
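Idempotent means a task declares a desired state rather than an action, so re-running it is safe. A minimal sketch (the `ansible.builtin.file` module is real; the path is just an example):

```yaml
# Running this task twice reports "changed" the first time and "ok" the second,
# because the directory already exists in the desired state
- name: Ensure a directory exists
  ansible.builtin.file:
    path: /tmp/demo-dir # example path
    state: directory
    mode: "0755"
```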
## Commands

| Command | Purpose |
| --- | --- |
| `ansible` | run ad-hoc tasks |
| `ansible-inventory` | list hosts |
| `ansible-playbook` | run playbooks |
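For example, a quick ad-hoc run of the `ping` module against every host (assuming an inventory is already configured):

```sh
# -m selects the module; 'all' is the host pattern
ansible all -m ping
```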
## Inventory

The inventory is a file that defines the hosts upon which the tasks in a playbook operate.
- Static inventory

  ```yaml
  hosts:
    vm1:
      ansible_host: 13.79.22.89
    vm2:
      ansible_host: 40.87.135.194
  ```
- Dynamic inventory, see https://docs.ansible.com/ansible/latest/collections/azure/azcollection/azure_rm_inventory.html

  ```yaml
  plugin: azure_rm
  include_vm_resource_groups:
    - learn-ansible-rg
  auth_source: auto

  keyed_groups:
    # places each host in a group named 'tag_(tag name)_(tag value)' for each tag on a VM
    - prefix: tag
      key: tags

  # adds variables to each host found by this inventory plugin
  hostvar_expressions:
    my_host_var:
    # A statically-valued expression has to be both single and double-quoted, or use escaped quotes,
    # since the outer layer of quotes will be consumed by YAML. Without the second set of quotes,
    # it interprets 'staticvalue' as a variable instead of a string literal.
    some_statically_valued_var: "'staticvalue'"
    # overrides the default ansible_host value with a custom Jinja2 expression, in this case the first
    # DNS hostname, or if none are found, the first public IP address
    ansible_host: (public_dns_hostnames + public_ipv4_addresses) | first
  ```

  - The file name needs to end with `azure_rm.(yml|yaml)`
  - Uses Azure CLI for auth by default
Verify that Ansible can discover your inventory:

```sh
ansible-inventory --inventory azure_rm.yml --graph

# @all:
# |--@tag_env_dev:
# |  |--vm_dev_1
# |  |--vm_dev_2
# |--@tag_env_prod:
# |  |--vm_prod_1
# |  |--vm_prod_2
# |--@ungrouped:
You can use the `ping` module to verify that Ansible can connect to each VM and that Python is correctly installed on each node (`ping` actually connects over SSH, not ICMP as the name suggests):

```sh
# ping a specified group
ansible \
  --inventory azure_rm.yml \
  --user azureuser \
  --private-key ~/.ssh/ansible_rsa \
  --module-name ping \
  tag_Ansible_mslearn
```
## Playbook

Here is an example playbook that configures service accounts:

```yaml
# playbook.yml
---
- hosts: all
  become: yes # apply with `sudo` privilege
  tasks:
    - name: Add service accounts
      user: # 'user' module
        name: "{{ item }}"
        comment: service account
        create_home: no
        shell: /usr/sbin/nologin
        state: present
      loop: # looping
        - testuser1
        - testuser2
```
Run the playbook:

```sh
ansible-playbook \
  --inventory azure_rm.yml \
  --user azureuser \
  --private-key ~/.ssh/ansible_rsa \
  playbook.yml
```
Verify by running a command on each host:

```sh
ansible \
  --inventory azure_rm.yml \
  --user azureuser \
  --private-key ~/.ssh/ansible_rsa \
  --args "/usr/bin/getent passwd testuser1" \
  tag_Ansible_mslearn
```
## Tasks

Example modules:

```yaml
---
- hosts: demoGroup
  become: true
  tasks:
    - name: Ping
      ping: # 'ping' module, no arguments required

    - name: Install nginx
      apt: # 'apt' module, install a package
        name: nginx
        state: present

    - name: Find nginx configs
      find: # 'find' module, find files
        path: /etc/nginx/conf.d/
        file_type: file

    - name: Ensure Nginx is running
      service: # 'service' module
        name: nginx
        state: started
```
There are many ways to filter tasks: by tags, by conditions, etc.

```yaml
---
...
tasks:
  - name: Install nginx
    apt: # 'apt' module, install a package
      name: nginx
      state: present
    tags: nginx

  - name: Find nginx configs
    find: # 'find' module, find files
      path: /etc/nginx/conf.d/
      file_type: file
    tags: config

  - name: Ensure Nginx is running
    service: # 'service' module
      name: nginx
      state: started
    tags: nginx-start
```

```sh
# start from a task
ansible-playbook playbook.yml --start-at-task 'Find nginx configs'

# run tasks one-by-one
ansible-playbook playbook.yml --step
```
Task conditions using `when`:

```yaml
tasks:
  - name: Upgrade in RedHat
    when: ansible_os_family == "RedHat"
    yum: name=* state=latest

  - name: Upgrade in Debian
    when: ansible_os_family == "Debian"
    apt: upgrade=dist update_cache=yes
```
Blocks could be used to apply common directives to a group of tasks, such as `tags`, `when`, `become`, `ignore_errors`. Note that `when` is checked for each task in the block, not at the block level:
```yaml
tasks:
  - name: Install, configure, and start Apache
    block:
      - name: Install httpd and memcached
        ansible.builtin.yum:
          name:
            - httpd
            - memcached
          state: present

      - name: Apply the foo config template
        ansible.builtin.template:
          src: templates/src.j2
          dest: /etc/foo.conf

      - name: Start service bar and enable it
        ansible.builtin.service:
          name: bar
          state: started
          enabled: True
    when: ansible_facts['distribution'] == 'CentOS'
    become: true
    become_user: root
    ignore_errors: true
    tags: [tag1, tag2]
```
## Variables

Variables could be defined in the `group_vars` folder:

```
group_vars
|- all/
|- group1/
|- group2/
```

Variables in `all/` will be applied to all hosts, those in `group1/` to hosts in `group1`, and so on.
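A minimal sketch of such a file (the file name and values are hypothetical):

```yaml
# group_vars/group1/vars.yml -- applies to every host in 'group1'
temp_file: /tmp/temp-group1
ntp_server: ntp1.example.com
```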
Variables could also be defined in inventory files directly:

```ini
[my-hosts]
vm1
vm2
vm3
vm4

[webservers]
vm1
vm2

# variables for 'all'
[all:vars]
temp_file=/tmp/temp1

# variables for 'webservers'
[webservers:vars]
temp_file=/tmp/temp2
```
Use variables in playbooks:

```yaml
- hosts: webservers
  tasks:
    - name: Create a file
      file:
        dest: '{{ temp_file }}'
        state: '{{ file_state }}'
      when: temp_file is defined
```
You can also pass in a variable on the command line:

```sh
ansible-playbook demo.yml -e file_state=touch
```
The `ansible_facts` variable contains info about the remote system:

```yaml
- name: Print all available facts
  ansible.builtin.debug:
    var: ansible_facts
```
Some important facts:

```json
"ansible_facts": {
    "all_ipv4_addresses": [
        "172.16.16.16"
    ],
    "date_time": {
        "date": "2023-05-05",
        ...
    },
    "default_ipv4": {
        "address": "172.16.16.16",
        ...
    },
    "distribution": "Ubuntu",
    "distribution_file_variety": "Debian",
    "distribution_major_version": "20",
    "domain": "gary.com",
    "env": {
        "HOME": "/home/gary",
        ...
    },
    "fqdn": "demo.gary.com",
    "hostname": "demo",
    "machine": "x86_64",
    "pkg_mgr": "apt",
    "python_version": "3.8.2",
    "system": "Linux",
    "user_id": "gary",
    ...
}
```
- You can reference an env variable with `{{ ansible_facts['env']['HOME'] }}`
- Facts are cached (in memory by default) and available to all hosts; you can access a fact of one remote host from another host like `{{ hostvars['vm1']['ansible_facts']['os_family'] }}`
- `set_fact` sets a fact about the current host; by default, you can not access it via `ansible_facts`, unless you set `cacheable: yes`

  ```yaml
  - name: Set a temporary fact
    set_fact:
      my_fact: "my value"
      cacheable: yes

  - debug:
      var: my_fact

  - debug:
      var: ansible_facts['my_fact']
  ```
- The information gathering could be disabled by `gather_facts: false`

  ```yaml
  ---
  - name: Testing
    gather_facts: false
    hosts: all
  ```
Magic variables contain information about Ansible operations:
- `hostvars` contains all variables about each host, including variables defined in inventory files and playbooks, and `ansible_facts` gathered for each host

  ```json
  "hostvars": {
      "vm1": {
          "my_custom_var": "my custom value",
          "group_names": ["group1", "group2"],
          "inventory_file": "/path/to/inventory.ini",
          "ansible_facts": { ... },
          "ansible_run_tags": [
              "all"
          ],
          "ansible_skip_tags": [],
          "ansible_verbosity": 0,
          "ansible_version": {
              "full": "2.9.6",
              "major": 2,
              "minor": 9,
              "revision": 6,
              "string": "2.9.6"
          },
          ...
      }
      ...
  }
  ```
- `groups`
- `group_names`: a list of group names the current host is a member of
- `inventory_hostname`: the name defined in the inventory file (could be an alias, FQDN, IP, etc.)
  - `ansible_host`: IP or FQDN in inventory files
  - `ansible_hostname`: the short hostname gathered from the remote host
- `ansible_play_hosts`: the list of all hosts still active in the current play
- `ansible_play_batch`: a list of hostnames that are in scope for the current 'batch' of the play. The batch size is defined by `serial`; when not set it is equivalent to the whole play (making it the same as `ansible_play_hosts`)
- `inventory_dir`
- `inventory_file`
- `playbook_dir`
- `role_path`
- `ansible_check_mode`: a boolean, set to True if you run Ansible with `--check`
- `ansible_version`
- `default` filter

  ```jinja
  # default to 5
  {{ some_variable | default(5) }}

  # default value if any field is undefined
  {{ foo.bar.baz | default('DEFAULT') }}

  # default to "admin" if the variable exists, but evaluates to false or an empty string
  {{ lookup('env', 'MY_USER') | default('admin', true) }}

  # making a variable optional
  {{ item.mode | default(omit) }}

  # making a variable mandatory, when `DEFAULT_UNDEFINED_VAR_BEHAVIOR` is set to false
  {{ item.mode | mandatory }}
  ```
## Loops

```yaml
- hosts: localhost
  vars:
    fruits: [apple, orange, banana]
  tasks:
    - name: Show fruits
      debug:
        msg: I have a {{ item }}
      with_items: '{{ fruits }}'
```
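`with_items` is the older style; the same loop with the newer `loop` keyword, plus `loop_control` to expose the index (a minimal sketch):

```yaml
- hosts: localhost
  vars:
    fruits: [apple, orange, banana]
  tasks:
    - name: Show fruits with their index
      debug:
        msg: "{{ idx }}: I have a {{ item }}"
      loop: "{{ fruits }}"
      loop_control:
        index_var: idx # exposes the current loop index as 'idx'
```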
## Testing

Using the `--check` flag:

```sh
ansible-playbook demo.yml --check
```
It's often easier to test with localhost first:

- Create an inventory file, specify `ansible_connection=local`; you may need to specify a password if you use `become` in your playbook:

  ```ini
  localhost ansible_connection=local ansible_become_password=<password>
  ```

- Test your playbook with `ansible-playbook -i ./localhost.localonly.ini my_playbook.yml`
- Use `-K, --ask-become-pass` if it needs the root password: `ansible-playbook -i ./localhost.localonly.ini -K my_playbook.yml`
## Tags

You can add tags to a task or play, and filter tasks by tags:

```sh
# list available tags in a playbook
ansible-playbook playbook.yml --list-tags

# run tasks tagged 'tag1' or 'tag2'
ansible-playbook playbook.yml --tags "tag1,tag2"

# skip tags
ansible-playbook playbook.yml --skip-tags nginx
```
There are two special tags: `always` and `never`:

- an `always` tagged task is always run, unless skipped specifically with `--skip-tags always`
- a `never` tagged task is always skipped, unless included specifically with `--tags never`
```yaml
tasks:
  - name: Run the rarely-used debug task
    ansible.builtin.debug:
      msg: '{{ aVar }}'
    tags:
      - never
      - debug
```
If you specify `--tags debug`, the task will run as well.
- With plays, blocks, `role`, and `import_*`, tags are inherited.
- With `include_role` and `include_tasks`, tags are not inherited.
  - This means if you add `--tags myTag` when running a playbook, for a task in an included file or role to run, the `include_*` statement itself needs to have `myTag`, and the included task needs to have the same tag as well (see an example here)
  - There are two ways to work around this:
    - Use the `apply` keyword:

      ```yaml
      - name: Apply the db tag to the include and to all tasks in db.yml
        include_tasks:
          file: db.yml
          # adds 'db' tag to tasks within db.yml
          apply:
            tags: db
        # adds 'db' tag to this 'include_tasks' itself
        tags: db
      ```

    - Use a block:

      ```yaml
      - block:
          - name: Include tasks from db.yml
            include_tasks: db.yml
        tags: db
      ```
- If you run or skip certain tags by default, you can use the `TAGS_RUN` and `TAGS_SKIP` options in Ansible configuration to set those defaults.
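A sketch of what that looks like in `ansible.cfg` (assuming `TAGS_RUN`/`TAGS_SKIP` map to the `run` and `skip` keys of the `[tags]` section; the tag names are made up):

```ini
# ansible.cfg
[tags]
# tags to always run / always skip (example values)
run = setup,configure
skip = slow
```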
## Limit

When you run a playbook, you can selectively choose (with the `--limit` flag) which managed nodes or groups in your inventory to target.

A pattern can refer to a single host, an IP address, an inventory group, a set of groups, or all hosts in your inventory:

- `all` or `*`: all hosts
- `host1`
- `host1:host2` or `host1,host2`
- `192.0.*`, `one*.com`: wildcards are allowed
- `group1:group2`: multiple groups
- `group1:!group2`: all hosts in group1, but not in group2
- `group1:&group2`: all hosts in both group1 and group2
- `~(web|db).*\.example\.com`: use `~` for a regex pattern
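For example, to run a playbook only against hosts in `group1` that are not also in `group2` (hypothetical group names; note the quotes so the shell does not interpret `!`):

```sh
ansible-playbook playbook.yml --limit 'group1:!group2'
```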
## Error handling

- By default, if a command returns a non-zero code, the task fails, but this could be customized:

  ```yaml
  - name: Fail task when both files are identical
    ansible.builtin.raw: diff foo/file1 bar/file2
    register: diff_cmd
    failed_when: (diff_cmd.rc == 0) or (diff_cmd.rc >= 2)
  ```

- By default an error would stop tasks on the host, but you can change this behavior:

  ```yaml
  - name: Do not count this as a failure
    ansible.builtin.command: /bin/false
    ignore_errors: true
  ```

- Customize `changed`:

  ```yaml
  - name: Report 'changed' when the return code is not equal to 2
    ansible.builtin.shell: /usr/bin/billybass --mode="take me to the river"
    register: bass_result
    changed_when: "bass_result.rc != 2"
  ```
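Not covered in the notes above but closely related: a block can recover from errors with the standard `rescue`/`always` keywords (a minimal sketch; the task contents are made up):

```yaml
- name: Attempt a command, recover on failure
  block:
    - name: This task may fail
      ansible.builtin.command: /bin/false
  rescue:
    - name: Runs only if a task in the block failed
      ansible.builtin.debug:
        msg: "recovered from the failure"
  always:
    - name: Runs regardless of the outcome
      ansible.builtin.debug:
        msg: "cleanup"
```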
## Debugging

To enable the debugger:

- use the `debugger` keyword (at task, block, play or role level)
- in configuration or an environment variable
- as a strategy

```yaml
- name: My play
  hosts: all
  tasks:
    - name: Execute a command
      debugger: always
      ansible.builtin.command: "true"
      when: False
```

When the debugger is triggered, you can print out and update task variables and arguments, then rerun the task (see https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_debugger.html#available-debug-commands):

```
p task.args
p task_vars
task.args['arg1'] = 'new arg value'
redo
```
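The other two ways to enable it, sketched (the `ANSIBLE_ENABLE_TASK_DEBUGGER` env var and the `debug` strategy are standard Ansible; the play itself is made up):

```yaml
# export ANSIBLE_ENABLE_TASK_DEBUGGER=True   # via environment variable
---
- name: My play
  hosts: all
  strategy: debug # via a strategy: drop into the debugger on failed tasks
  tasks:
    - name: Execute a command
      ansible.builtin.command: "true"
```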
## Roles

- Roles are a way to organize your Ansible code and make it reusable and modular.
- Ansible Galaxy is a public repository for Ansible roles and collections.
- Use the `ansible-galaxy` command to create a role, or install roles/collections from Galaxy.

```sh
# init a role, this creates a bunch of folders and files
ansible-galaxy init testrole1

ls -AF
# .travis.yml README.md defaults/ files/ handlers/ meta/ tasks/ templates/ tests/ vars/
```
- `defaults/`: default variable values
  - could be overwritten by group_vars or host_vars
- `vars/`: override default variable values
  - could NOT be overwritten by group_vars or host_vars
  - CAN be overwritten by block/task vars, `-e`
  - see https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#understanding-variable-precedence for full details
- `tasks/`: tasks for this role
- `templates/`: Jinja2 template files for the `template` task
- `handlers/`: handlers are only fired when certain tasks report changes, and are run at the end of each play
  - usually used to restart services/machines
  - handlers could be in the same playbook file
  - use the handler `name` field in `notify`

  ```yaml
  ---
  - name: This is a play within a playbook
    hosts: all
    tasks:
      - name: Task 1
        module_name:
          param1: "foo"
        notify: restart a service

      - name: Task 2
        module_name_2:

    handlers:
      - name: restart a service
        ansible.windows.win_service:
          name: service_a
          state: restarted
          start_mode: auto
  ```
Define and use a role called `webserver-config`.

File structure:

```
playbook.yml
roles
|- webserver-config
   |- tasks
      |- main.yml
```

`main.yml` for the `webserver-config` role, defining two tasks:

```yaml
---
- name: Install Apache
  apt:
    name: apache2
    state: latest

- name: Enable Apache service
  systemd:
    name: apache2
    enabled: yes
```

`playbook.yml` using the `webserver-config` role:

```yaml
---
- name: Playbook with Role Example
  hosts: webserver
  become: yes
  roles:
    - webserver-config
```
## Collections

Example: using `azure.azcollection` to read resource groups.

```sh
# install
ansible-galaxy collection install azure.azcollection

# install required Python packages of the collection
pip install -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt

ansible-playbook rg.yml --extra-vars "subscription_id=<sub-id> client_id=<client-id> secret=<secret> tenant=<tenant> cloud_environment=AzureCloud"
```
```yaml
# rg.yml
# a playbook using the `azure.azcollection.azure_rm_resourcegroup_info` module
---
- name: Example playbook using the azure.azcollection
  hosts: localhost
  tasks:
    - name: Get resource groups in the Azure subscription
      azure.azcollection.azure_rm_resourcegroup_info:
        cloud_environment: "{{ cloud_environment }}"
        tenant: "{{ tenant }}"
        subscription_id: "{{ subscription_id }}"
        client_id: "{{ client_id }}"
        secret: "{{ secret }}"
      register: rg_info

    - name: Print the list of resource groups
      debug:
        var: rg_info.resourcegroups
```
To install all collections in a requirements file:

```sh
ansible-galaxy collection install -r collections/requirements.yml
```
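A minimal sketch of what `collections/requirements.yml` might contain (the collection names and version constraint are examples):

```yaml
# collections/requirements.yml
collections:
  - name: azure.azcollection # install the latest version
  - name: ansible.windows
    version: ">=1.0.0" # optionally pin or constrain the version
```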
## Importing and including

Compare `include_*` and `import_*`:

| | `include_*` | `import_*` |
| --- | --- | --- |
| Type of re-use | Dynamic | Static |
| When processed | At runtime, when encountered | Pre-processed during playbook parsing |
| Keywords | `include_role`, `include_tasks`, `include_vars` | `import_role`, `import_tasks`, `import_playbook` |
| Context | task | task or play (`import_playbook`) |
| Tags | Not inherited; you can filter which tasks to run by adding tags to both the `include_*` task and tasks in the included file | Inherited, applies to all imported tasks |
| Task options | Apply only to the include task itself | Apply to all child tasks in the import |
| Calling from loops | Executed once for each loop item | Cannot be used in a loop |
| Works with `--list-tags`, `--list-tasks`, `--start-at-task` | No | Yes |
| Notifying handlers | Cannot trigger handlers within includes | Can trigger individual imported handlers |
| Using inventory variables | Can `include_*: {{ inventory_var }}` | Cannot `import_*: {{ inventory_var }}` |
| With variables files | Can include variables files | Use `vars_files:` to import variables |
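For example, since `include_*` is dynamic it can be called from a loop, executing the included file once per item (the file name and items are hypothetical):

```yaml
- name: Include the same task file once for each user
  include_tasks: add_user.yml # hypothetical task file; 'item' is available inside it
  loop:
    - alice
    - bob
```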
- Playbooks can be imported (static):

  ```yaml
  - import_playbook: "/path/to/{{ import_from_extra_var }}"

  - import_playbook: "{{ import_from_vars }}"
    vars:
      import_from_vars: /path/to/one_playbook.yml
  ```

- The bare `include` keyword is deprecated.
- Avoid using both includes and imports in a single playbook, it can lead to difficult-to-diagnose bugs.
- If you use roles at play level, they are treated as static imports, and tags are applied to all tasks within the role:

  ```yaml
  ---
  - hosts: webservers
    roles:
      - role: my-role
        vars:
          app_port: 5000
        tags: typeA
      - role: another-role
  ```
"Vault" is a feature that allows you to encrypt sensitive data in your playbooks and roles. The encrypted data can then be safely stored in a source control system.
Two types:

- Vaulted files
  - the full file is encrypted, could contain variables, tasks, etc.
  - decrypted when loaded or referenced
  - can be used for inventory, anything that loads variables (`vars_files`, `group_vars`, `include_vars`, etc.)
- Single encrypted variable
  - only works for variables
  - decrypted on demand, so you can have vaulted variables with different vault secrets and only provide those needed
  - you can mix vaulted and non-vaulted variables in the same file, even inline in a play or role
When using `-v` (verbose) mode, you can hide a secret value by adding `no_log: true` to the task:

```yaml
- name: secret task
  shell: /usr/bin/do_something --value={{ secret_value }}
  no_log: True
```

Note that the use of the `no_log` attribute does not prevent data from being shown when debugging Ansible itself via the `ANSIBLE_DEBUG` environment variable.
You can encrypt any file:

```sh
# encrypt a file
ansible-vault encrypt demo-playbook.yml

# edit
ansible-vault edit demo-playbook.yml
```

When using this encrypted playbook, you could provide the password by prompt or a file:

```sh
# provide the vault password by prompt
ansible-playbook demo-playbook.yml --ask-vault-pass

# use a password file
ansible-playbook demo-playbook.yml --vault-password-file ~/.vault_pass.txt

# use a script, which outputs the password to standard output
ansible-playbook demo-playbook.yml --vault-password-file ~/.vault_pass.py
```

You can also specify the password file with the environment variable `ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt`.
Or encrypt a single string:

```sh
ansible-vault encrypt_string --vault-id @prompt mysupersecretstring
```
Use the secret in a playbook:

```yaml
- hosts: localhost
  vars:
    secret: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      66613333646138636537363536373431633333353631646164353031303933316533326437366564
      6430613461323339316130626533336165376238316134310a303836356162633363666439353534
      39653865646130346239316137373565623934663238343061663239383139613032636262363565
      6138613861613031650a326230616637396232623630323362386430326464373364323531303631
      32393362326164343566383936633838336166363535383333366237636639636535
  tasks:
    - name: Test variable
      debug:
        var: secret
```
The `!vault` tag is needed, so both Ansible and YAML are aware of the need to decrypt.

Run the playbook:

```sh
ansible-playbook use-secret.yml --ask-vault-pass
```
Multiple vaults could be encrypted with different passwords, and different vaults can be given a label to distinguish them. You can use `--vault-id` to specify the password for each vault:

```sh
# use a password file for vault "label1"
# use prompt for vault "label2"
ansible-playbook site.yml --vault-id label1@~/.vault_pass.txt --vault-id label2@prompt
```

By default the vault label ("label1", "label2", etc.) is just a hint: Ansible will try to decrypt each vault with every provided password. Setting the config option `DEFAULT_VAULT_ID_MATCH` will change this behavior so that each password is only used to decrypt data that was encrypted with the same label.
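In `ansible.cfg` that option is set like this (assuming `DEFAULT_VAULT_ID_MATCH` maps to the `vault_id_match` key of the `[defaults]` section):

```ini
[defaults]
# only use each vault password for data encrypted with the matching label
vault_id_match = True
```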
## Configs

Ansible configs are in `/etc/ansible/ansible.cfg`:

```ini
[defaults]
# control the output format
stdout_callback = yaml
```
You can also configure your own Ansible collection repo URLs in the config (like the Ansible Automation Hub):

```ini
[galaxy]
server_list = community, rh-verified

[galaxy_server.community]
url=https://my-ansible-hub.example.com/api/galaxy/content/community/
token=<token>

[galaxy_server.rh-verified]
url=https://my-ansible-hub.example.com/api/galaxy/content/rh-verified/
token=<token>
```
## Work with Azure

```sh
# Install Ansible az collection for interacting with Azure
ansible-galaxy collection install azure.azcollection

# Install Ansible modules for Azure
sudo pip3 install -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt
```
Use a service principal; it should have proper permissions on the target subscription. Two ways to provide credentials:

- Put them in `~/.azure/credentials`:

  ```ini
  [default]
  subscription_id=<subscription_id>
  client_id=<service_principal_app_id>
  secret=<service_principal_password>
  tenant=<service_principal_tenant_id>
  ```

- Environment variables:

  ```sh
  export AZURE_SUBSCRIPTION_ID=<subscription_id>
  export AZURE_CLIENT_ID=<service_principal_app_id>
  export AZURE_SECRET=<service_principal_password>
  export AZURE_TENANT=<service_principal_tenant_id>
  ```
You could use ad-hoc commands or playbooks:

- Ad-hoc command:

  ```sh
  ansible localhost \
    --module-name azure.azcollection.azure_rm_resourcegroup \
    --args "name=rg-by-ansible-001 location=australiaeast"
  ```

- Playbook:

  ```yaml
  # create-rg.yml
  - hosts: localhost
    connection: local
    collections:
      - azure.azcollection
    tasks:
      - name: Creating resource group
        azure_rm_resourcegroup:
          name: "rg-ansible-002"
          location: "westus"
  ```

  ```sh
  ansible-playbook create-rg.yml
  ```
## Work with Windows hosts

Full details: https://docs.ansible.com/ansible/latest/os_guide/windows_setup.html

Quick setup:

```powershell
wget https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1 -OutFile .\x.ps1
.\x.ps1

# check the listeners are running
winrm enumerate winrm/config/Listener
# Listener
#     Address = *
#     Transport = HTTP
#     Port = 5985
#     Hostname
#     Enabled = true
#     URLPrefix = wsman
#     CertificateThumbprint
#     ListeningOn = ...
#
# Listener
#     Address = *
#     Transport = HTTPS
#     Port = 5986
#     Hostname = vm-demo
#     Enabled = true
#     URLPrefix = wsman
#     CertificateThumbprint = 832A4CA59901EE8DD4060123E2D669F9FB71C578
#     ListeningOn = ...
```
- Install the collection and Python package:

  ```sh
  # install collection
  ansible-galaxy collection install ansible.windows

  # install pywinrm
  pip install pywinrm
  ```

- Set appropriate host variables, like `ansible_connection` etc.:

  ```ini
  [win]
  172.16.2.5
  172.16.2.6

  [win:vars]
  ansible_user=vagrant
  ansible_password=<password>
  ansible_connection=winrm
  # ignore cert validation
  ansible_winrm_server_cert_validation=ignore
  ```

- The password could be passed in on the command line, or you could encrypt the inventory file:

  ```sh
  ansible -i ../inventories/windows-hosts -m win_ping all -e "ansible_password=<pass>"
  # 20.5.202.140 | SUCCESS => {
  #     "changed": false,
  #     "ping": "pong"
  # }
  ```

- Windows reboot playbook example:

  ```yaml
  ---
  - name: win_reboot module demo
    hosts: all
    become: false
    gather_facts: false
    tasks:
      - name: reboot host(s)
        ansible.windows.win_reboot:
          msg: "Reboot by Ansible" # this message will show in a popup
          pre_reboot_delay: 120 # how long to wait before rebooting
          shutdown_timeout: 3600
          reboot_timeout: 3600
  ```
## AAP

- Project: usually a link to a Git repo
- Inventories:
  - could be added manually, from supported cloud providers, or through dynamic inventory scripts
  - you can add variables for groups and individual hosts
- Credentials:
  - secrets could be saved externally, such as in Azure Key Vault
  - could be:
    - password for logging in to a host
    - password for decrypting a vault file/string
- Job Templates:
  - what inventory to run against
  - what playbook to run, and a survey for variables
  - what credentials to use
- Workflow Templates:
  - you can build a workflow by joining multiple steps together (each step could be a job template, another workflow template, a repo sync, an inventory source sync, an approval, etc.), similar to Azure Logic Apps
  - the included job templates could have different inventories, playbooks, credentials, etc.
  - you could add a survey to a workflow template
- RBAC:
  - entity hierarchy: organization -> team -> user
  - built-in roles: Normal User, Administrator, Auditor
  - scenarios: give a user read and execute access to a job template, with no permission to change anything
- Automation Hub:
  - host private Ansible collections
  - and execution environments
## Interaction with Terraform

You could use the `cloud.terraform.terraform` Ansible module to run Terraform commands:

```yaml
- name: Run Terraform Deploy
  cloud.terraform.terraform:
    project_path: '{{ project_dir }}'
    state: present
    force_init: true
```
- Use the `local-exec` provisioner to run Ansible playbooks (see the sketch after this list)
- Use the `ansible/ansible` provider in Terraform; it could create Ansible inventory hosts/groups, playbooks, vaults
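A minimal sketch of the `local-exec` approach (the resource and command are illustrative only):

```hcl
resource "aws_instance" "my_ec2" {
  # ... instance arguments ...

  # run the playbook against the new instance once it is created;
  # the trailing comma makes the IP a one-host inline inventory
  provisioner "local-exec" {
    command = "ansible-playbook -i '${self.public_ip},' -u ec2-user playbook.yml"
  }
}
```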
Example: add a VM to the Ansible inventory:

```hcl
resource "aws_instance" "my_ec2" {
  ...
}

resource "ansible_host" "my_ec2" {
  name   = aws_instance.my_ec2.public_dns
  groups = ["nginx"]
  variables = {
    ansible_user                 = "ec2-user",
    ansible_ssh_private_key_file = "~/.ssh/id_rsa",
    ansible_python_interpreter  = "/usr/bin/python3",
  }
}
```
Then in `inventory.yml`, use the `cloud.terraform.terraform_provider` plugin, which reads the Terraform state file to get the host information:

```yaml
---
plugin: cloud.terraform.terraform_provider
```
You can validate this by:

```sh
ansible-inventory -i inventory.yml --graph --vars
# @all:
# |--@nginx:
# |  |--ec2-13-41-80-241.eu-west-2.compute.amazonaws.com
# |  |  |--{ansible_python_interpreter = /usr/bin/python3}
# |  |  |--{ansible_ssh_private_key_file = ~/.ssh/id_rsa}
# |  |  |--{ansible_user = ec2-user}
# |--@ungrouped:
```
Build a workflow template with these steps:

1. Build Terraform config files with variables
2. Run Terraform to provision VMs
3. Sync the dynamic inventory (this adds the new VM instance to the inventory)
4. Run an Ansible playbook to configure the new VM