Ondrej's blog about various things
Refreshing the Ansible roles around clusters led me to revisit the nearly
1.5-year-old playbook for setting up a cluster with HA-LVM. What has changed
since then? The playbook is longer :) But it does more things in a proper way
than previous versions did, and it can mostly run in "dry-run" (check)
mode. Also, this time the rgmanager cluster is omitted in favour of
pacemaker.
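Check mode can be exercised with ansible-playbook's built-in flag; the playbook and inventory file names below are placeholders for illustration:

```shell
# Run the playbook in "dry-run" (check) mode: tasks report what they
# *would* change without actually touching the target machines.
# 'ha-lvm.yml' and 'hosts' are placeholder names.
ansible-playbook -i hosts ha-lvm.yml --check

# Add --diff to also see the file changes that would be made.
ansible-playbook -i hosts ha-lvm.yml --check --diff
```

Note that not every module supports check mode, which is why only "mostly" everything can run this way.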
Thanks to Ted Won I was able to give a short presentation on fast-vm and Ansible playbooks
for pacemaker cluster creation to the
JBUG Korea community in Seoul. More details and the presentation can be accessed via the links below.
Below you will find what new and exciting things were added or changed in
fast-vm version 1.3.
The initial installation and configuration of fast-vm is
not the easiest and has several places where choices can be made. However,
for some people this might not be interesting, or some would just like to get
the thing installed with some kind of "recommended configuration ready for
use". To achieve this I have created 2 Ansible roles which, when combined, produce
all the needed setup, ready for use. If you don't care about any of the
settings, this is what you will get by applying these roles:
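Applying the two roles can be sketched roughly as follows; the role names used here are placeholders for the actual roles linked in this post:

```shell
# Hypothetical sketch: install the two roles from Ansible Galaxy and
# apply them to the local machine. 'author.libvirt-host' and
# 'author.fast-vm' are placeholder names -- use the roles linked here.
ansible-galaxy install author.libvirt-host author.fast-vm

cat > fast-vm-setup.yml <<'EOF'
- hosts: localhost
  become: true
  roles:
    - author.libvirt-host   # prepare the libvirt/KVM prerequisites
    - author.fast-vm        # install and configure fast-vm itself
EOF

ansible-playbook fast-vm-setup.yml
```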
With the upcoming version 1.2 of fast-vm you will be able to use custom-sized
images, and therefore I have decided to rebuild some of the older ones.
The new images are smaller than the previous ones (6 GB vs. 10 GB) and are generated
by scripts rather than by hand. This provides more consistent results
and hopefully eliminates human error where possible.
Performance-wise, the images import 30-40% faster on my machine compared to the 10 GB versions
and are up to 30% smaller than previous versions when compressed.
The scripts used to create these images are available in the same repository
as the XML and hack files for the other public fast-vm images, in the 'ks' subfolder.
Read further on how to use them.
Since the initial version of the Ansible roles for configuring a pacemaker cluster,
there has been a need for modules that interact with pacemaker in an idempotent
way. The initial version of 'pcs-modules' tried to achieve this by importing
functions used by the 'pcs' utility from Python. Unfortunately this was a
fragile approach, as there is no stable API for interacting with pcs. Still,
pcs is a good tool to build on, as it handles most of the sanity checking before the
actual cluster-altering commands are run, so I wanted to stick with it.
The new 'pcs-modules-2' are a complete rewrite that calls the 'pcs' script instead
of importing parts of it. This works much better and still allows
idempotency to be achieved.
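The wrapping approach boils down to a query-then-change pattern: ask pcs for the current state and only run the altering command when the state differs from what is desired. A shell sketch of the idea, with an illustrative resource name and IP:

```shell
# Idempotency sketch: query the state with pcs first and only alter
# the cluster when needed. 'WebVIP' and the IP are illustrative values.
if ! pcs resource show WebVIP >/dev/null 2>&1; then
  # Resource is absent -> create it (a change is made and reported).
  pcs resource create WebVIP ocf:heartbeat:IPaddr2 ip=192.168.1.10
else
  # Resource already exists -> do nothing, so running the same
  # playbook twice leaves the cluster unchanged.
  :
fi
```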
Thanks to the people who attended the workshop presented at DevConf 2017
on Sunday from 11:00 in A112. Below you will find links to the materials
that were used during the workshop, including a README file describing the
changes needed for some of the issues that we hit.
One of the noticeable features of fast-vm 1.0 is support for UEFI booting.
From a technical point of view, the only change needed on the fast-vm side was the
automatic deletion of the nvram file.
To get a UEFI image working we first need to provide UEFI firmware to
libvirt. Libvirt actually needs 2 files for UEFI: the UEFI firmware itself and a
UEFI variables template file. On first VM start, libvirt makes a writable
copy of the UEFI variables template file and provides it to the VM as the UEFI
variables store. On VM deletion, fast-vm instructs libvirt to delete this UEFI
variables store file. Below is an example of how to add the OVMF firmware to
libvirt so that it can also be used with fast-vm.
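A minimal sketch of wiring OVMF into libvirt is the `nvram` setting in qemu.conf, which takes "firmware:variables-template" pairs; the file paths below are the usual locations from an OVMF package and may differ on your distribution:

```shell
# Tell libvirt about an OVMF firmware + variables-template pair via
# the 'nvram' option in qemu.conf. Paths are the common OVMF package
# locations and may differ on your system.
cat >> /etc/libvirt/qemu.conf <<'EOF'
nvram = [
  "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd"
]
EOF

# Restart libvirt so the firmware is offered to new guests.
systemctl restart libvirtd
```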
Thanks to linux.cz, the fast-vm public images are now mirrored!
The page about
fast-vm was updated to contain a table with copy&paste commands
using the new mirror, for better convenience.
The title explains quite well what the following Ansible playbook is capable
of. Despite still being a basic solution focused on just bringing up a
fairly basic configuration, you get an HA cluster with a proper HA-LVM configuration
for any combination of rgmanager/pacemaker and clvm/tagging HA-LVM type, which can
save time and headaches. Links to the roles used are below:
With the ultimate goal of setting up something more complex in clusters, I
came across the question of shared storage. The immediate answer was to use
iSCSI as a simple solution, and as I wanted to stay at least a bit up to
date with the CentOS/RHEL approach, this means targetcli.
I found several roles for this, but as expected they were using direct
shell commands with sometimes questionable detection of the current state.
So the result was a quick look into doing this in a more proper way.
In the end, the creation contains one Ansible role that is capable
of setting up a targetcli-based iSCSI target (server) on CentOS/RHEL 7.2.
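What the role automates can be sketched with targetcli directly; the backing device, target IQN, and initiator IQN below are illustrative values:

```shell
# Manual sketch of what the role automates. The device path and the
# IQNs are illustrative -- adjust them for your environment.

# Expose a block device as a backstore.
targetcli /backstores/block create name=disk1 dev=/dev/vdb

# Create the iSCSI target and attach the backstore as a LUN.
targetcli /iscsi create iqn.2003-01.org.linux-iscsi.example:target1
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.example:target1/tpg1/luns \
  create /backstores/block/disk1

# Allow a specific initiator to connect.
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.example:target1/tpg1/acls \
  create iqn.1994-05.com.redhat:client1

# Persist the configuration across reboots.
targetcli saveconfig
```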
Check the link below for the role:
For those who are lazy and don't want to set up an rgmanager or pacemaker
cluster in RHEL/CentOS by hand, I have good news: you don't have to anymore.
In pursuit of not doing the same stuff over and over again, I have created 2 Ansible roles
that should help in setting up High Availability clusters in a RHEL/CentOS
environment with a minimum of effort. Check the links below for the roles:
After a long time I have resurrected the blog and moved it to my home server at a
different address. Enjoy that the blog is now accessible through HTTPS. Moreover,
there are small style changes which make the page "responsive" (= mobile
friendly), thanks to my friend Dman.
I'm in search of the perfect wireless AP (Access Point) for the 802.11n standard, which would be based on a highly configurable system like OpenWRT or similar.
The main requirements are:
This article is meant to be a crash-course guide to the problems that may
arise when building good, fast, cheap and large JBOD storage.
Recently I have been configuring a new HP switch (2910al series) and I decided to try its IPv6 support.
My expectations were along the lines of "it's the same as IPv4, except the addresses are
For comparison, the IPv4 configuration looks like this:
As part of the preparation for the upcoming semester, I decided to try configuring
NetworkManager on Fedora 15 as the primary source of network configuration.
There are basically two ways to accomplish network configuration on Fedora using NetworkManager.
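For the command-line way, the basics can be sketched with nmcli (shipped with NetworkManager 0.9, which Fedora 15 moved to); the connection name is the usual default and may differ on your machine:

```shell
# List the connections NetworkManager knows about.
# (Older nmcli used 'con list'; newer releases use 'con show'.)
nmcli con list

# Bring a connection up or down by its id. "Wired connection 1" is
# the typical auto-generated name; yours may differ.
nmcli con up id "Wired connection 1"
nmcli con down id "Wired connection 1"

# Show the overall status of network devices.
nmcli dev status
```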