
Configuring basic High-Availability clusters in RHEL/CentOS using ansible

For those who are lazy and don't want to set up an rgmanager or pacemaker cluster in RHEL/CentOS by hand, I have good news: you don't have to anymore. Tired of doing the same stuff over and over again, I have created 2 ansible roles that should help with setting up High-Availability clusters in a RHEL/CentOS environment with a minimum of effort. Check the links below for the roles:

  • OndrejHome.ha-cluster-rgmanager
  • OndrejHome.ha-cluster-pacemaker

How to use them

Step 1: Install the roles from Ansible Galaxy as root, depending on which cluster stack you want to use:

$ ansible-galaxy install OndrejHome.ha-cluster-rgmanager
$ ansible-galaxy install OndrejHome.ha-cluster-pacemaker

Step 2: Create the inventory file containing the hosts on which you want to create a cluster. An example inventory file for a 3-node cluster is below. Note that you have to provide the hypervisor_hostname variables if the ansible role should set up fencing devices using fence_xvm for you.

[cluster_nodes]
fastvm-c6.8-51
fastvm-c6.8-52
fastvm-c6.8-53
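If the role should also create the fence_xvm stonith devices, each node entry can carry a hypervisor_hostname host variable pointing at the hypervisor that runs it. A sketch of such an inventory (the hypervisor names here are made-up placeholders):

[cluster_nodes]
fastvm-c6.8-51 hypervisor_hostname=hypervisor1.example.com
fastvm-c6.8-52 hypervisor_hostname=hypervisor1.example.com
fastvm-c6.8-53 hypervisor_hostname=hypervisor2.example.com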

Step 3: Create the ansible playbook with the desired role for creating the cluster. In the example below the created cluster will be a pacemaker one.

- hosts: cluster_nodes
  remote_user: root
  roles:
       - { role: OndrejHome.ha-cluster-pacemaker, cluster_name: 'test-1-cluster' }

Step 4: Run the playbook and wait for the cluster to get created.

$ ansible-playbook -i ansible_hosts.txt ansible_playbook.yml
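Once the playbook finishes, you can check the result on any of the nodes; for a pacemaker cluster, for example:

# pcs status

For an rgmanager cluster, clustat gives a similar overview.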

Requirements and limitations of ha-cluster-* roles

  • Future cluster nodes must be accessible over SSH from the machine on which you run ansible (preferably using SSH keys)
  • In case of RHEL systems you need to register the systems prior to running the ansible playbook and ensure that the channels or repositories for High-Availability are enabled.
  • Currently the roles don't support detection of changes in pacemaker clusters, as only node authorization and stonith device creation are implemented as custom modules
  • The rgmanager role will replace any existing cluster configuration with the one from the role, so it is not suitable for existing clusters, but rather only for new deployments.
  • The current goal is to have roles focused on deploying new clusters rather than editing existing ones; the roles will ultimately fail if they need to change an already running cluster on the cluster nodes.
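For the SSH access mentioned in the first point, the easiest approach is to distribute an SSH key to all future cluster nodes before running the playbook, for example:

$ ssh-copy-id root@fastvm-c6.8-51

repeated for every node in the inventory.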

Feel free to create issues on GitHub if you encounter any problems with the above roles or if you would like some special feature to be included in them. Just please note the current limitations above.
