Ondrej's blog about various things
Following up on my earlier experience with the Odroid M1 board, here is a guide to getting Gentoo with an upstream 6.1.x kernel running on the Odroid M1 board on top of a LUKS-encrypted disk with LVM (examples for both eMMC and NVMe installation are included).
After waiting 9 months since pre-ordering with the pre-order discount, my Radxa ROCK 5B order finally came. I had some hopes for this one, as on paper it looked like something that could be "good enough(tm)" as a day-to-day desktop replacement for me. So as soon as it arrived, the first order of business was to get Gentoo running on it. Following is a guide for getting Gentoo installed on the eMMC of the ROCK 5B while running another system from an SD card.
A few months ago I got the Odroid-M1 board (8GB version) in order to get some hands-on experience with the ARM platform. As a long-term Gentoo user who couldn't find a manual for the Odroid-M1 board, I have prepared a short Gentoo installation guide here - while simple, it gets the system to boot from an SD card into a minimal environment.
Tired of remembering where to look for public fast-vm images (here)? The new fast-vm-repo script comes to help by providing the same content as the website with public fast-vm images, without the need to open a browser. Bonus feature: auto-completion in the terminal for repository and image names.
While working with a customer last year, I got interested in making fence_kdump work better in a pacemaker cluster with multiple heartbeats (rings), sometimes also called an RRP cluster. The motivation was to allow fence_kdump to accept messages not only from the primary IP of the crashed node (ring0) but also from any additional IPs (ring2, …). Recently the changes for this were accepted and merged in PR 374. Here you will find example configurations showing how to take advantage of this new feature.
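As a rough illustration of the idea, a kdump configuration listing every IP of the peer node might look like the sketch below; the node names, addresses and port are made up, and the exact options depend on your fence-agents version:

```shell
# /etc/kdump.conf on node1 (hypothetical addresses)
# List all IPs of the peer node, one per ring, so the fence_kdump
# message from the crashed kernel can reach the survivor over any ring:
#
#   fence_kdump_nodes 192.168.1.2 192.168.2.2
#   fence_kdump_args -p 7410 -f auto -c 0 -i 10
#
# After editing, rebuild the kdump initramfs:
systemctl restart kdump
```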
I have two servers, of which only one has a long-term stable IP address, which makes it inconvenient to reach the other one every time its IP changes. To deal with this I previously used a free DynDNS service hosted on the Internet, but to remove that dependency I wanted to try configuring this completely "in-house". I run my DNS zone on a Knot DNS server on CentOS 7, and the server with the non-stable IP address runs CentOS 8.
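The usual building block for this kind of in-house dynamic DNS is an RFC 2136 dynamic update authenticated with a TSIG key that the Knot server is configured to accept. A minimal sketch from the dynamic host's side, using nsupdate from bind-utils (all names, the key, and the address below are hypothetical):

```shell
# Push the current address into the zone over an authenticated
# dynamic update; Knot must allow updates for this TSIG key.
nsupdate -y "hmac-sha256:home-ddns:bXktc2VjcmV0LWtleS1kYXRhCg==" <<'EOF'
server ns.example.com
zone example.com
update delete home.example.com A
update add home.example.com 300 A 198.51.100.7
send
EOF
```

In practice this would run from a small script or cron/systemd timer that first detects the current public IP and only sends the update when it changed.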
As a longtime Gentoo user, I like to try out on Gentoo some of the things I have worked with on other distros. This time, High Availability clustering with pacemaker came to mind. I have worked mainly with the Pacemaker 1.1.x stack on RHEL/CentOS 6/7. Looking at the official Gentoo repos, these seem to be present, but there is nothing for Pacemaker 2.0, which now requires Corosync 3.0 and can be tested on Fedora 28 or newer. So let's have a look at how to get this onto Gentoo.
As projects come and go, I got to a situation where I needed to replace the old, unmaintained Blosxom script generating this website with something comparably simple but still maintained. The system should be reasonably easy to install on a currently supported Linux distribution and allow me to keep the layout and theme I have used on the site before. After a little search-and-try exercise, I ended up with the Jekyll project, which fit the requirements and seemed reasonably installable and customizable.
Refreshing the Ansible roles around clusters made me revisit the nearly 1.5-year-old playbook for setting up a cluster with HA-LVM. What has changed since then? The playbook is longer :) But it does more things in a proper way than previous versions did, and it can mostly run in "dry-run" mode. Also, this time the rgmanager cluster is omitted, based on the adoption of pacemaker instead.
Thanks to Ted Won, I was able to give a short presentation on fast-vm and Ansible playbooks for pacemaker cluster creation to the JBUG Korea community in Seoul. More details and the presentation can be accessed via the links below.
Below you will find what new and exciting things were added or changed in fast-vm version 1.3.
The initial installation and configuration of fast-vm is not the easiest, and there are several places where choices can be made. However, some people might not find this interesting, or would just like to get the thing installed with some kind of "recommended configuration ready for use". To achieve this, I have created 2 Ansible roles which, combined, provide all the needed setup ready for use. If you don't care about any of the settings, then applying these roles will give you:
With the upcoming version 1.2 of fast-vm you will be able to use custom-sized images, and therefore I have decided to rebuild some of the older ones. The new images are smaller than the previous ones (6GB vs 10GB) and are generated by scripts rather than by hand. This provides more consistent results and hopefully eliminates human error where possible. Performance-wise, the images import 30-40% faster on my machine compared to the 10GB versions and are up to 30% smaller than the previous versions when compressed. The scripts used to create these images are available in the same repository as the XML and hack files for the other public fast-vm images, in the 'ks' subfolder. Read further on how to use them.
Since the initial version of the Ansible roles for configuring a pacemaker cluster, there has been a need for modules that interact with pacemaker in an idempotent way. The initial version of 'pcs-modules' tried to achieve this by importing functions used by the 'pcs' utility from Python. Unfortunately, this was a fragile approach, as there is no stable API for interacting with pcs. Still, pcs is a good tool to use, as it handles most of the sanity checking before the actual cluster-altering commands are run, so I wanted to stick with it. The new 'pcs-modules-2' are a complete rewrite that calls the 'pcs' script instead of importing parts of it. This works much better and still allows idempotency to be achieved.
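The core of that approach can be sketched in a few lines of shell: query pcs for the current state first, and only run the altering command when the object is missing. The resource name and agent parameters below are made up for illustration:

```shell
# Idempotent sketch: create the resource only if pcs does not know it yet.
if ! pcs resource show WebFS >/dev/null 2>&1; then
    pcs resource create WebFS ocf:heartbeat:Filesystem \
        device=/dev/vg1/lv1 directory=/var/www fstype=xfs
fi
```

The real modules additionally compare the desired and current configuration so that a changed parameter is updated rather than silently skipped.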
Thanks to everyone who attended the workshop presented at DevConf 2017 on Sunday from 11:00 in A112. Below you will find links to the materials used during the workshop, including a README file describing the changes needed for some of the issues we hit.
One of the noticeable features of fast-vm 1.0 is support for UEFI booting. From a technical point of view, the only change needed on the fast-vm side was the automatic deletion of the nvram file. To get a UEFI image working, we first need to provide UEFI firmware to libvirt. Libvirt actually needs 2 files for UEFI: the UEFI firmware itself and a UEFI variables template file. On first VM start, libvirt makes a writable copy of the UEFI variables template file and provides it to the VM as the UEFI variables store. On VM deletion, fast-vm instructs libvirt to delete this UEFI variables store file. Below is an example of how to add the OVMF firmware to libvirt so that it can also be used with fast-vm.
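For libvirt versions of that era, registering a firmware/template pair was done via the nvram list in qemu.conf; the paths below are typical for the OVMF package but may differ on your distribution:

```shell
# Install the OVMF firmware package first (package name varies by distro,
# e.g. 'edk2-ovmf' on Fedora), then pair the firmware binary with its
# variables template in /etc/libvirt/qemu.conf:
#
#   nvram = [
#     "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd"
#   ]
#
# Restart libvirtd so the new firmware entry is picked up:
systemctl restart libvirtd
```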
Thanks to linux.cz, the fast-vm public images are now mirrored! The page about fast-vm was updated to contain a table with copy&paste commands using the new mirror, for better convenience.
The title explains quite well what the following Ansible playbook is capable of. Despite still being a basic solution, focused on just bringing up a fairly basic configuration, you get an HA cluster with a proper HA-LVM configuration for any combination of rgmanager/pacemaker and clvm/tagging HA-LVM types, which can save time and headaches. Links to the roles used are below:
With the ultimate goal of setting up something more complex in clusters, I came across the question of shared storage. The immediate answer was to use iSCSI as a simple solution, and as I wanted to stay at least a bit up to date with the CentOS/RHEL approach, this means targetcli. I found several roles for this, but as expected they were using direct shell commands with sometimes questionable detection of the current status. So the result was a quick look into doing this in a more proper way. In the end, the creation contains one Ansible role that is capable of setting up a targetcli-based iSCSI target (server) on CentOS/RHEL 7.2/7.7. Check the link below for the role:
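For context, the manual steps such a role automates boil down to a handful of targetcli commands; the backing device, target IQN and client IQN below are made up for illustration:

```shell
# Minimal targetcli-based iSCSI target (hypothetical names):
# block backstore -> target IQN -> LUN -> client ACL, then persist.
targetcli /backstores/block create name=disk01 dev=/dev/vg_data/lv_iscsi
targetcli /iscsi create iqn.2003-01.com.example:target01
targetcli /iscsi/iqn.2003-01.com.example:target01/tpg1/luns \
    create /backstores/block/disk01
targetcli /iscsi/iqn.2003-01.com.example:target01/tpg1/acls \
    create iqn.1994-05.com.redhat:client01
targetcli saveconfig
```

The value of wrapping this in a role is precisely the status detection: rerunning the play should not fail on already-existing backstores or IQNs.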
For those who are lazy and don't want to set up an rgmanager or pacemaker cluster in RHEL/CentOS by hand, I have good news: you don't have to anymore. In pursuit of not doing the same stuff over and over again, I have created 2 Ansible roles that should help with setting up High Availability clusters in a RHEL/CentOS environment with a minimum of effort. Check the links below for the roles:
After a long time, I have resurrected the blog and moved it to my home server at a different address. Enjoy: the blog is now accessible through HTTPS. Moreover, there are small style changes which make the page "responsive" (= mobile friendly), thanks to my friend Dman.
I'm in search of the perfect wireless AP (Access Point) for the 802.11n standard, one based on a highly configurable system like OpenWRT or similar.
Main requirements are:
This article is meant to be a crash course guide to problems that may
arise when building good, fast, cheap and large JBOD storage.
Recently I have been configuring a new HP switch (2910al series) and I decided to try its IPv6 support.
My expectations were along the lines of "it's the same as IPv4, except the addresses are
For comparison, IPv4 configuration looks like this:
As part of the preparation for the upcoming semester, I decided to try configuring NetworkManager on Fedora 15 as the primary source of network configuration. There are basically two ways to accomplish network configuration on Fedora using NetworkManager.