Getting Started with Ansible on Command Line

Channel: Linux

Ansible is an open-source software platform for configuration management, provisioning, application deployment and service orchestration. It can be used to configure servers in production, staging and development environments, and to manage application servers such as web servers, database servers and many others. Other configuration management systems include Chef, Puppet, Salt and Distelli; compared to these, Ansible is the simplest and most easily managed tool. The main advantages of using Ansible are as follows:

1. Modules can be written in any programming language.
2. No agent runs on the client machine.
3. Easy to install and manage.
4. Highly reliable.
5. Scalable.

In this article, I'll explain some of the basics about your first steps with Ansible.

Understanding the hosts file

Once you've installed Ansible, the first thing to understand is its inventory file. This file contains the list of target servers which are managed by Ansible. The default hosts file location is /etc/ansible/hosts. We can edit this file to include our target systems. This file specifies several groups in which you can classify your hosts according to your preference.

Important things to note when creating the hosts file:

# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or IP addresses
# - A hostname/ip can be a member of multiple groups
# - Remote hosts can have assignments in more than one group
# - Include host ranges in one string as server-[01:12]-example.com

PS: It's not recommended to modify the default inventory file; instead, we can create our own custom inventory files at any location, as per our convenience.
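For illustration, a minimal custom inventory might look like this (the group names and hosts below are hypothetical):

```ini
# ~/hosts -- a custom inventory file
[webservers]
web-[01:02].example.com    ; host range, expands to web-01 and web-02

[dbservers]
192.168.1.50

; the same host can belong to several groups
[production]
web-01.example.com
192.168.1.50
```

You would then point Ansible at this file with the -i option, as in the commands shown later in this article.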

How does Ansible work?

First of all, the Ansible admin client connects to the target server using SSH. We don't need to set up any agents on the client servers; all that's needed is Python and a user that can log in and execute scripts. Once the connection is established, Ansible starts gathering facts about the client machine, such as its operating system, running services and installed packages. We can easily execute commands, copy/modify/delete files and folders, and manage or configure packages and services using Ansible. I'll demonstrate this with the help of my demo setup.

My client servers are 45.33.76.60 and 139.162.35.39. I created my custom inventory hosts file under my user. Please see my inventory file below, with three groups, namely webservers, production and Testing.

In webservers, I've included both of my client servers, and then separated them into two other groups: one in production and the other in Testing.

linuxmonty@linuxmonty-Latitude-E4310:~$ cat hosts
[webservers]
139.162.35.39
45.33.76.60

[production]
139.162.35.39

[Testing]
45.33.76.60

Establishing SSH connections

We need to create SSH keys on the admin server and copy the public key over to the target servers to enable passwordless SSH connections. Let's take a look at how I did that for my client servers.

linuxmonty@linuxmonty-Latitude-E4310:~$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2e:2f:32:9a:73:6d:ba:f2:09:ac:23:98:0c:fc:6c:a0 linuxmonty@linuxmonty-Latitude-E4310
The key's randomart image is:
+--[ RSA 4096]----+
| |
| |
| |
| |
|. S |
|.+ . |
|=.* .. . |
|Eoo*+.+o |
|o.+*=* .. |
+-----------------+
Copying the SSH keys

This is how we copy the SSH keys from Admin server to the target servers.

Client 1:

linuxmonty@linuxmonty-Latitude-E4310# ssh-copy-id [email protected]
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.

linuxmonty@linuxmonty-Latitude-E4310#

Client 2:

linuxmonty@linuxmonty-Latitude-E4310# ssh-copy-id [email protected]
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.

linuxmonty@linuxmonty-Latitude-E4310#

Once you execute these commands from your admin server, your public key will be added to the target servers and saved in their authorized_keys files.

Familiarizing yourself with some basic Ansible modules

Modules control system resources, configuration, packages, files and so on. There are more than 450 modules available in Ansible. First of all, let's use a module to check the connectivity between your admin server and the target servers. We can run the ping module from the admin server to confirm connectivity.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m ping -u root
139.162.35.39 | success >> {
"changed": false,
"ping": "pong"
}

45.33.76.60 | success >> {
"changed": false,
"ping": "pong"
}
-i : Specifies the inventory file to use.
-m : Specifies the module to run.
-u : Specifies the remote user to execute as.

Since you're running this command as a regular user, the -u option tells Ansible to connect to the target servers and execute the module as the root user.

This is how to check the inventory status in the hosts file.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts webservers --list-hosts
139.162.35.39
45.33.76.60
linuxmonty@linuxmonty-Latitude-E4310:~$
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production --list-hosts
139.162.35.39
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts Testing --list-hosts
45.33.76.60
Setup Module

Now run the setup module to gather more facts about your target servers; these help you organize your playbooks. The module provides information about the server's hardware, network and some Ansible-related software settings. The gathered facts can be referenced in the playbooks section and represent discovered variables about your system. They can also be used to implement conditional execution of tasks.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m setup -u root



You can view the server architecture, CPU information, python version, memory, OS version etc by running this module.
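Since facts are just variables, they can gate tasks in a playbook. Here is a minimal sketch, assuming a RedHat-family target; the package name is only illustrative:

```yaml
- hosts: all
  remote_user: root
  tasks:
    - name: Install httpd only on RedHat-family systems
      yum: pkg=httpd state=installed
      when: ansible_os_family == "RedHat"
```

The when clause reads the ansible_os_family fact gathered by the setup module, so the task is simply skipped on other platforms.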
Command Module

Here are some examples of command module usage. We can pass any command as an argument for this module to execute.

uptime:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'uptime' -u root
139.162.35.39 | success | rc=0 >>
14:55:31 up 4 days, 23:56, 1 user, load average: 0.00, 0.01, 0.05

45.33.76.60 | success | rc=0 >>
14:55:41 up 15 days, 3:20, 1 user, load average: 0.20, 0.07, 0.06

hostname:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'hostname' -u root
139.162.35.39 | success | rc=0 >>
client2.production.com

45.33.76.60 | success | rc=0 >>
client1.testing.com

w:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'w' -u root
139.162.35.39 | success | rc=0 >>
08:07:55 up 4 days, 17:08, 2 users, load average: 0.00, 0.01, 0.05
USER TTY LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 07:54 7:54 0.00s 0.00s -bash
root pts/1 08:07 0.00s 0.05s 0.00s w

45.33.76.60 | success | rc=0 >>
08:07:58 up 14 days, 20:33, 2 users, load average: 0.03, 0.03, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 101.63.79.157 07:54 8:01 0.00s 0.00s -bash
root pts/1 101.63.79.157 08:07 0.00s 0.05s 0.00s w

Similarly, we can execute any Linux command across multiple target servers using the command module in Ansible.

Managing Users and Groups

Ansible provides a module called "user" which serves this purpose. The "user" module allows easy creation and manipulation of user accounts, as well as removal of existing user accounts as per our needs.

Usage : # ansible -i inventory selection -m user -a "name=username1 password=<crypted password here>"

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m user -a "name=adminsupport password=<default123>" -u root
45.33.76.60 | success >> {
"changed": true,
"comment": "",
"createhome": true,
"group": 1004,
"home": "/home/adminsupport",
"name": "adminsupport",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1004
}

On the server above, this command created the adminsupport user. But on the server 139.162.35.39 this user's home directory already existed, so Ansible warns about it instead of recreating those files:

139.162.35.39 | success >> {
"changed": true,
"comment": "",
"createhome": true,
"group": 1001,
"home": "/home/adminsupport",
"name": "adminsupport",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"state": "present",
"stderr": "useradd: warning: the home directory already exists.\nNot copying any file from skel directory into it.\nCreating mailbox file: File exists\n",
"system": false,
"uid": 1001
}

Usage : ansible -i inventory selection -m user -a 'name=username state=absent'

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts Testing -m user -a "name=adminsupport state=absent" -u root
45.33.76.60 | success >> {
"changed": true,
"force": false,
"name": "adminsupport",
"remove": false,
"state": "absent"
}

This command deletes the user adminsupport from our Testing server 45.33.76.60.

File Transfers

Ansible provides a module called "copy" to handle file transfers across multiple servers. It can securely transfer many files to multiple servers in parallel.

 

Usage : ansible -i inventory selection -m copy -a "src=file_name dest=file path to save"

I'm copying a shell script called test.sh from my admin server to the /root directory on all my target servers. Please see the command usage below:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m copy -a "src=test.sh dest=/root/" -u root
139.162.35.39 | success >> {
"changed": true,
"dest": "/root/test.sh",
"gid": 0,
"group": "root",
"md5sum": "d910e95fdd8efd48d7428daafa7706ec",
"mode": "0755",
"owner": "root",
"size": 107,
"src": "/root/.ansible/tmp/ansible-tmp-1463040011.67-93143679295729/source",
"state": "file",
"uid": 0
}

45.33.76.60 | success >> {
"changed": true,
"dest": "/root/test.sh",
"gid": 0,
"group": "root",
"md5sum": "d910e95fdd8efd48d7428daafa7706ec",
"mode": "0755",
"owner": "root",
"size": 107,
"src": "/root/.ansible/tmp/ansible-tmp-1463040013.85-235107847216893/source",
"state": "file",
"uid": 0
}

Output Result

[root@client2 ~]# ll /root/test.sh
-rwxr-xr-x 1 root root 107 May 12 08:00 /root/test.sh

If you use a playbook, you can take advantage of the template module to perform the same task.
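As a hedged sketch, such a playbook task could look like this (the template file name is hypothetical):

```yaml
- name: Deploy the index page from a Jinja2 template
  template: src=index.html.j2 dest=/usr/share/nginx/html/index.html owner=root mode=0644
```

Unlike copy, the template module renders any Jinja2 variables in the source file before transferring it to the target.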

Ansible also provides a module called "file", which helps us change the ownership and permissions of files across multiple servers. The same options can also be passed directly to the "copy" module. The "file" module can likewise be used to create or delete files and folders.

Example :

I've modified the ownerships and groups for an existing file test.sh on the destination server and changed its permission to 600.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/test.sh mode=600 owner=adminsupport group=adminsupport" -u root
139.162.35.39 | success >> {
"changed": true,
"gid": 1001,
"group": "adminsupport",
"mode": "0600",
"owner": "adminsupport",
"path": "/root/test.sh",
"size": 107,
"state": "file",
"uid": 1001
}

Output :

[root@client2 ~]# ll | grep test.sh
-rw------- 1 adminsupport adminsupport 107 May 12 08:00 test.sh
Creating a folder

Now, I need to create a folder with the desired ownership and permissions. Let's see the command to achieve that. I'm creating a folder "ansible" on my production server group and assigning it to the owner "adminsupport" with 755 permissions.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/ansible mode=755 owner=adminsupport group=adminsupport state=directory" -u root
139.162.35.39 | success >> {
"changed": true,
"gid": 1001,
"group": "adminsupport",
"mode": "0755",
"owner": "adminsupport",
"path": "/root/ansible",
"size": 4096,
"state": "directory",
"uid": 1001
}

Output :

[root@client2 ~]# ll | grep ansible
drwxr-xr-x 2 adminsupport adminsupport 4096 May 12 08:45 ansible
[root@client2 ~]# pwd
/root
Deleting a folder

We can even use this module for deleting folders/files from multiple target servers. Please see how I did that.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/ansible state=absent" -u root
139.162.35.39 | success >> {
"changed": true,
"path": "/root/ansible",
"state": "absent"
}

The only parameter that determines the operation is "state"; setting it to absent deletes that particular folder from the server.

Managing Packages

Let's see how we can manage packages using Ansible. We need to identify the platform of the target servers and use the package manager module that suits it, such as yum or apt, according to the target server's OS. Ansible also has modules for managing packages on many other platforms.
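For example, on Debian/Ubuntu targets the same installation would go through the apt module instead of yum; a minimal sketch, assuming Debian-family hosts:

```yaml
- name: Install vsftpd on Debian-family systems
  apt: name=vsftpd state=present update_cache=yes
  when: ansible_os_family == "Debian"
```

The update_cache=yes option refreshes the package index (the equivalent of apt-get update) before installing.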

Installing a VsFTPD package on my production server in my inventory.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=present' -u root
139.162.35.39 | success >> {
"changed": true,
"msg": "",
"rc": 0,
"results": [
"Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: mirrors.linode.com\n * epel: mirror.wanxp.id\n * extras: mirrors.linode.com\n * updates: mirrors.linode.com\nResolving Dependencies\n--> Running transaction check\n---> Package vsftpd.x86_64 0:3.0.2-11.el7_2 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n============================\n Package Arch Version Repository Size\n===========================================\nInstalling:\n vsftpd x86_64 3.0.2-11.el7_2 updates 167 k\n\nTransaction Summary\n===============================================\nInstall 1 Package\n\nTotal download size: 167 k\nInstalled size: 347 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n Verifying : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n\nInstalled:\n vsftpd.x86_64 0:3.0.2-11.el7_2 \n\nComplete!\n"
]
}

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=present' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"vsftpd-3.0.2-11.el7_2.x86_64 providing vsftpd is already installed"
]
}

If you notice, when I executed the ansible command to install the package the first time, the "changed" variable was "true", which means the command installed the package. But when I ran the command again, it reported "changed" as "false": the command checked for the package, found it already installed, and so did nothing on that server.

Similarly, we can update or remove a package; the only parameter that determines this is state, which can be set to latest to install the latest available package, or absent to remove the package from the server.

Examples:

Updating the package
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=latest' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"All packages providing vsftpd are up to date"
]
}

This shows that the installed package is already the latest version and there are no updates available.

Removing the package
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=absent' -u root
139.162.35.39 | success >> {
"changed": true,
"msg": "",
"rc": 0,
"results": [
"Loaded plugins: fastestmirror\nResolving Dependencies\n--> Running transaction check\n---> Package vsftpd.x86_64 0:3.0.2-11.el7_2 will be erased\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nRemoving:\n vsftpd x86_64 3.0.2-11.el7_2 @updates 347 k\n\nTransaction Summary\n================================================================================\nRemove 1 Package\n\nInstalled size: 347 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Erasing : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n Verifying : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n\nRemoved:\n vsftpd.x86_64 0:3.0.2-11.el7_2 \n\nComplete!\n"
]
}

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=absent' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
 "vsftpd is not installed"
]
}

The first time we ran the ansible command it removed the VsFTPD package; we then ran it again to confirm that the package no longer exists on the server.

Managing Services

It is necessary to manage the services installed on the target servers. Ansible provides the service module for this. We can use it to enable services on boot and to start/stop/restart them. Please see the examples for each case.

Starting/Enabling a Service
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx enabled=yes state=started' -u root
45.33.76.60 | success >> {
"changed": false,
"enabled": true,
"name": "nginx",
"state": "started"
}

139.162.35.39 | success >> {
"changed": false,
"enabled": true,
"name": "nginx",
"state": "started"
}
Stopping a Service
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx state=stopped' -u root
139.162.35.39 | success >> {
"changed": true,
"name": "nginx",
"state": "stopped"
}

45.33.76.60 | success >> {
"changed": true,
"name": "nginx",
"state": "stopped"
}
Restarting a Service
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx state=restarted' -u root
139.162.35.39 | success >> {
"changed": true,
"name": "nginx",
"state": "started"
}

45.33.76.60 | success >> {
"changed": true,
"name": "nginx",
"state": "started"
}

As you can see, the state parameter is set to started, stopped or restarted to manage the service.
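Within playbooks, service restarts are usually wired up as handlers, so a service is only restarted when some task actually changed its configuration. A minimal sketch (the configuration file names here are illustrative):

```yaml
- hosts: webservers
  remote_user: root
  tasks:
    - name: Deploy nginx configuration
      template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify: restart nginx

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
```

The notify line queues the handler; it runs once at the end of the play, and only if the template task reported a change.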

Playbooks

Playbooks are Ansible's configuration, deployment, and orchestration language. They can assign different roles, perform tasks like copying or deleting files and folders, make use of mature modules that carry most of the functionality, or substitute variables to make your deployments dynamic and reusable.

Playbooks define your deployment steps and configuration. They are modular, can contain variables, and can be used to orchestrate steps across multiple machines. They are configuration files written in simple YAML, which is Ansible's automation language, and can contain multiple tasks making use of "mature" modules.

Here is an example of a simple Playbook.

linuxmonty@linuxmonty-Latitude-E4310:~$ cat simpleplbook.yaml
---
- hosts: production
  remote_user: root

  tasks:
    - name: Setup FTP
      yum: pkg=vsftpd state=installed
    - name: start FTP
      service: name=vsftpd state=started enabled=yes

This is a simple playbook with two tasks:

  1. Install the FTP server
  2. Ensure the service is running

Let's look at each statement in detail.

- hosts: production  -   This selects the inventory host to initiate this process.

remote_user: root  - This specifies the user which is meant to execute this process on the target servers.

tasks:
1. - name: Setup FTP
2. yum: pkg=vsftpd state=installed
3. - name: start FTP
4. service: name=vsftpd state=started enabled=yes

These specify the two tasks to be performed when running this playbook. We can divide them into four statements for clarity: the first statement names the task, setting up an FTP server, and the second performs it by installing the package on the target server; the third statement names the next task, and the fourth ensures the service status by starting the FTP server and enabling it on boot.

Now let's see the output of this playbook.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts simpleplbook.yaml

PLAY [production] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]

TASK: [Setup FTP] *************************************************************
changed: [139.162.35.39]

TASK: [start FTP] *************************************************************
changed: [139.162.35.39]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=3 changed=2 unreachable=0 failed=0

We can see that playbooks are executed sequentially according to the tasks specified. First Ansible selects the inventory, then it performs the tasks one by one.

Application Deployments

I'm going to set up my web servers using a playbook. I created a playbook for my "webservers" inventory group. Please see the playbook details below:

linuxmonty@linuxmonty-Latitude-E4310:~$ cat webservers_setup.yaml
---
- hosts: webservers
  vars:
    - Welcomemsg: "Welcome to Ansible Application Deployment"

  tasks:
    - name: Setup Nginx
      yum: pkg=nginx state=installed
    - name: Copying the index page
      template: src=index.html dest=/usr/share/nginx/html/index.html
    - name: Enable the service on boot
      service: name=nginx enabled=yes
    - name: start Nginx
      service: name=nginx state=started

Now let us run this playbook from my admin server to deploy it.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts -s webservers_setup.yaml -u root

PLAY [webservers] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup Nginx] ***********************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Copying the index page] ************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Enable the service on boot] ********************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [start Nginx] ***********************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=5 changed=4 unreachable=0 failed=0
45.33.76.60 : ok=5 changed=4 unreachable=0 failed=0

This playbook describes four tasks, as evident from the result. After running it, we can confirm the status by checking the target servers in a browser.

 

Now, I'm planning to add a PHP module, namely php-gd, to the target servers. I can edit my playbook to include that task too and run it again. Let's see what happens now. My modified playbook is below:

linuxmonty@linuxmonty-Latitude-E4310:~$ cat webservers_setup.yaml
---
- hosts: webservers
  vars:
    - Welcomemsg: "Welcome to Nginx default page"
    - WelcomePHP: "PHP GD module enabled"

  tasks:
    - name: Setup Nginx
      yum: pkg=nginx state=installed
    - name: Copying the index page
      template: src=index.html dest=/usr/share/nginx/html/index.html
    - name: Enable the service on boot
      service: name=nginx enabled=yes
    - name: start Nginx
      service: name=nginx state=started
    - name: Setup PHP-GD
      yum: pkg=php-gd state=installed

As you can see, I appended the final task, Setup PHP-GD, to my playbook. So this is how it goes now.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts -s webservers_setup.yaml -u root

PLAY [webservers] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup Nginx] ***********************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Copying the index page] ************************************************ 
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Enable the service on boot] ********************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [start Nginx] ***********************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup PHP-GD] ********************************************************** 
changed: [45.33.76.60]
changed: [139.162.35.39]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=6 changed=2 unreachable=0 failed=0
45.33.76.60 : ok=6 changed=2 unreachable=0 failed=0

On close analysis of this result, you can see that only two tasks reported changes on the target servers: the index file modification and the installation of our additional PHP module. Now we can verify the changes on the target servers in the browser.

Roles

Ansible roles are a special kind of playbook that is fully self-contained and portable. A role contains the tasks, variables, configuration templates and other supporting files needed to complete complex orchestration, and roles can be used to simplify more complex operations. You can create different roles such as common, webservers and db_servers, categorized by purpose, and include them in the main playbook by simply mentioning the roles. This is how we create a role:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-galaxy init common
common was created successfully

Now I've created a role named common to perform some of the common tasks on all my target servers. Each role contains its own tasks, configuration templates, variables, handlers and so on. Its directory structure looks like this:

total 40
drwxrwxr-x 9 linuxmonty linuxmonty 4096 May 13 14:06 ./
drwxr-xr-x 34 linuxmonty linuxmonty 4096 May 13 14:06 ../
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 defaults/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 files/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 handlers/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 meta/
-rw-rw-r-- 1 linuxmonty linuxmonty 1336 May 13 14:06 README.md
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 tasks/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 templates/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 vars/

We can create YAML files inside each of these folders as per our purpose. Later on, we can run all these tasks by simply specifying the roles inside a playbook. You can get more details on Ansible roles in the official documentation.
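Once a role's tasks are filled in, applying it from a playbook is just a matter of listing it. A hedged sketch (the second role name is hypothetical):

```yaml
---
- hosts: webservers
  remote_user: root
  roles:
    - common
    - webservers
```

Ansible then automatically picks up the tasks, handlers, variables and templates from each role's directory in order.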

I hope this article has provided you with basic knowledge of how to manage your servers with Ansible. Thank you for reading; your valuable suggestions and comments are most welcome.

Happy Automation!!

Ref From: linoxide