Multinode GlusterFS with All-in-1 Tendrl for Monitoring: Vagrant Intern Edition Notes
Author: Nathan Weinberg Date: 6 June 2018
Based heavily on the README by julienlim found here
Install VirtualBox and Vagrant (rpm/yum)
You may have to run the command $ sudo dnf install kernel-devel dkms kernel-headers
Put Ju's "Vagrantfile" and "bootstrap.sh" in a new directory.
You'll have to make changes to "bootstrap.sh" to reflect your specific ntpdate config requirements.
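For example, the ntpdate line in bootstrap.sh needs to point at an NTP server reachable from your network; the server name below is only a placeholder:
$ ntpdate -u your.ntp.server.example.com   # replace with an NTP server reachable from your site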
Navigate to your new directory and run $ vagrant up --provider virtualbox
By this point you should have 4 VMs (node0...node3) up and running.
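You can confirm the VMs are up from the directory containing the Vagrantfile:
$ vagrant status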
$ vagrant ssh node0
$ ssh-keygen
$ cat /root/.ssh/id_rsa.pub
Copy the contents of this file to your clipboard.
On all nodes, run $ ssh-keygen, then paste your clipboard contents (the SSH key from node0) into a new file located at /root/.ssh/authorized_keys
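A minimal sketch of that step on each node (as root; the chmod ensures sshd will accept the file):
$ vi /root/.ssh/authorized_keys      # paste the node0 public key, save, and quit
$ chmod 600 /root/.ssh/authorized_keys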
On all nodes, do the following:
- Edit /etc/ssh/sshd_config such that the following settings are configured:
  - PermitRootLogin yes
  - RSAAuthentication yes
  - PubkeyAuthentication yes
  - PasswordAuthentication no
- Edit /etc/hosts such that each node has the IP address of every other node followed by that node's name (IP addresses can be found with $ ip a); see the example sketch just after this list
- Run $ service sshd restart
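For illustration only, assuming Vagrant assigned addresses in the 172.28.128.0/24 range as in the inventory example later in these notes (the addresses for node1 through node3 below are made up; substitute the ones reported by $ ip a on each node), the /etc/hosts entries might look like:
172.28.128.3 node0
172.28.128.4 node1
172.28.128.5 node2
172.28.128.6 node3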
By this point you should be able to SSH from node0 to all other nodes without a password.
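As a quick sanity check (not part of the original steps), run the following from node0 as root; each command should print the remote hostname without prompting for a password:
$ for n in node1 node2 node3; do ssh $n hostname; done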
On nodes 1-3, do the following:
$ fdisk /dev/sdb
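fdisk is interactive; a typical keystroke sequence to create a single primary partition spanning the whole disk (a sketch, prompts may vary slightly with your fdisk version) is:
n      (new partition)
p      (primary)
1      (partition number)
Enter  (accept default first sector)
Enter  (accept default last sector)
w      (write table and exit)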
$ mkfs.xfs /dev/sdb1
$ parted /dev/sdb print
Then uncomment the "# /dev/sdb1..." line in /etc/fstab and run $ mount -a
Verify with $ df -k
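For reference, the uncommented fstab entry should end up looking something like the line below; the mount point /bricks/brick1 is an assumption based on the brick path used in the gluster volume create step later:
/dev/sdb1 /bricks/brick1 xfs defaults 0 0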
SSH into node1 and run
$ gluster peer probe node2
$ gluster peer probe node3
$ gluster peer status
$ gluster volume create vol1 node1:/bricks/brick1 node2:/bricks/brick1 node3:/bricks/brick1 force
$ gluster volume start vol1
$ gstatus -a
By this point your gluster cluster should be up and running.
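Optionally (not in the original notes), you can sanity-check the volume by mounting it with the GlusterFS native client from one of the gluster nodes:
$ mount -t glusterfs node1:/vol1 /mnt
$ df -h /mnt
$ umount /mnt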
Exit back into node0 and run
$ cd /etc/yum.repos.d
$ wget https://copr.fedorainfracloud.org/coprs/tendrl/release/repo/epel-7/tendrl-release-epel-7.repo
$ yum install tendrl-ansible
Run $ cp /usr/share/doc/tendrl-ansible-VERSION/site.yml . where VERSION is your installed version of tendrl-ansible.
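If you are unsure which version is installed, listing the documentation directory reveals the exact name to use:
$ ls -d /usr/share/doc/tendrl-ansible-*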
Then create a new file "inventory_file" that looks as follows (IP address may vary):
[gluster_servers]
node1
node2
node3
[tendrl_server]
node0
[all:vars]
# Mandatory variables. In this example, 172.28.128.3 is the IP address of the
# tendrl server (node0); replace it with your node0 address if it differs.
etcd_ip_address=172.28.128.3
etcd_fqdn=172.28.128.3
graphite_fqdn=172.28.128.3
configure_firewalld_for_tendrl=false
# when direct ssh login of root user is not allowed and you are connecting via
# non-root cloud-user account, which can leverage sudo to run any command as
# root without any password
#ansible_become=yes
#ansible_user=cloud-user
Run $ ansible-playbook -i inventory_file site.yml
If you run into issues, try running $ ansible -i inventory_file -m ping all and ensure all nodes are able to communicate with one another.
You should now be able to access the Tendrl dashboard from your machine via a browser at this URL: http://<node0-ip-address>/
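To find node0's IP address from your host machine, one option (a sketch; pick the address on the Vagrant private network from the output) is:
$ vagrant ssh node0 -c "ip a"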