You can do a limited form of object-oriented programming in the C programming language. It is restricted compared to C++ or Java, but it is also very easy and straightforward.

Let's take a look at the following sample program, methods-in-c.c. Basically, all you have to do is add a function pointer such as calculate to the definition of struct something_s. Later, during the struct initialization, assign a function address to that function pointer:

#include <stdio.h>

typedef struct {
    double (*calculate)(const double, const double);
    char *name;
} something_s;

double multiplication(const double a, const double b) {
    return a * b;
}

double division(const double a, const double b) {
    return a / b;
}

int main(void) {
    something_s mult = (something_s) {
        .calculate = multiplication,
        .name = "Multiplication"
    };

    something_s div = (something_s) {
        .calculate = division,
        .name = "Division"
    };

    const double a = 3, b = 2;

    printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b));
    printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b));
}

As you can see, you can call the function (pointed to by the function pointer) the same way as a method in C++ or Java:

    printf("%s(%f, %f) => %f\n", mult.name, a, b, mult.calculate(a,b));
    printf("%s(%f, %f) => %f\n", div.name, a, b, div.calculate(a,b));

However, that is just syntactic sugar for:

    printf("%s(%f, %f) => %f\n", mult.name, a, b, (*mult.calculate)(a,b));
    printf("%s(%f, %f) => %f\n", div.name, a, b, (*div.calculate)(a,b));

Output:

pbuetow ~/git/blog/source [38268]% gcc methods-in-c.c -o methods-in-c
pbuetow ~/git/blog/source [38269]% ./methods-in-c
Multiplication(3.000000, 2.000000) => 6.000000
Division(3.000000, 2.000000) => 1.500000

Not complicated at all, but nice to know, and it helps to make the code easier to read!
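
To take the pattern one step further, you can pass the struct itself as an explicit self parameter, so the "method" can access the object's state. This is essentially what C++ does for you behind the scenes. A minimal sketch (not part of the original sample program):

#include <stdio.h>

/* Sketch: emulating a method with access to object state
   by passing the object explicitly as the first argument. */
typedef struct counter_s {
    int count;
    void (*increment)(struct counter_s *self);
} counter_s;

void increment_impl(struct counter_s *self) {
    self->count++;
}

int main(void) {
    counter_s c = { .count = 0, .increment = increment_impl };
    c.increment(&c); /* comparable to c.increment() in C++ */
    c.increment(&c);
    printf("count => %d\n", c.count);
}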

Finally, I had time to deploy my own authoritative DNS servers (Master and Slave) for my domains buetow.org and buetow.zone. My domain name provider is Schlund Technologies. They allow their customers to manually edit the DNS records (BIND files), and they also give you the opportunity to specify your own authoritative DNS servers for your domains. From now on, I am making use of that option.

In order to set up my authoritative DNS servers, I installed a FreeBSD Jail dedicated to DNS with Puppet on my root machine as follows:

  include freebsd
  freebsd::ipalias { '2a01:4f8:120:30e8::14':
    ensure    => up,
    proto     => 'inet6',
    preflen   => '64',
    interface => 're0',
    aliasnum  => '5',
  }

  include jail::freebsd

  class { 'jail':
    ensure              => present,
    jails_config        => {
      dns                     => {
        '_ensure'             => present,
        '_type'               => 'freebsd',
        '_mirror'             => 'ftp://ftp.de.freebsd.org',
        '_remote_path'        => 'FreeBSD/releases/amd64/10.1-RELEASE',
        '_dists'              => [ 'base.txz', 'doc.txz', ],
        '_ensure_directories' => [ '/opt', '/opt/enc' ],
        'host.hostname'       => "'dns.ian.buetow.org'",
        'ip4.addr'            => '192.168.0.15',
        'ip6.addr'            => '2a01:4f8:120:30e8::15',
      },
      .
      .
    }
  }

Please note that dns.ian.buetow.org is just the Jail name of the Master DNS server (and caprica.ian.buetow.org the name of the Jail for the Slave DNS server), and that I am using the DNS names dns1.buetow.org (Master) and dns2.buetow.org (Slave) as the actual "service names" (the DNS servers visible to the public). Please also note that the IPv4 address is an internal one: I set up PF to use NAT and PAT, and the DNS port is forwarded (both TCP and UDP) to that Jail. By default all ports are blocked, so I am adding an exception rule for the IPv6 address as well. These are the PF rules in use:

% cat /etc/pf.conf
.
.
# dns.ian.buetow.org 
rdr pass on re0 proto tcp from any to $pub_ip port {53} -> 192.168.0.15
rdr pass on re0 proto udp from any to $pub_ip port {53} -> 192.168.0.15
pass in on re0 inet6 proto tcp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
pass in on re0 inet6 proto udp from any to 2a01:4f8:120:30e8::15 port {53} flags S/SA keep state
.
.

In manifests/dns.pp (the Puppet manifest for the Master DNS Jail itself) I configured the BIND DNS server as follows:

  class { 'bind_freebsd':
    config         => "puppet:///files/bind/named.${::hostname}.conf",
    dynamic_config => "puppet:///files/bind/dynamic.${::hostname}",
  }

The Puppet module is actually a pretty simple one. You can find it here. It works with Puppet 4.4 or newer. It installs the file /usr/local/etc/namedb/named.conf and populates the /usr/local/etc/namedb/dynamic directory with all my zone files.

Once applied (via Puppet) inside of the Jail, I get this:

paul uranus:~/git/blog/source [4268]% ssh admin@dns1.buetow.org.buetow.org pgrep -lf named
60748 /usr/local/sbin/named -u bind -c /usr/local/etc/namedb/named.conf
paul uranus:~/git/blog/source [4269]% ssh admin@dns1.buetow.org.buetow.org tail -n 13 /usr/local/etc/namedb/named.conf
zone "buetow.org" {
    type master;
    notify yes;
    allow-update { key "buetoworgkey"; };
    file "/usr/local/etc/namedb/dynamic/buetow.org";
};

zone "buetow.zone" {
    type master;
    notify yes;
    allow-update { key "buetoworgkey"; };
    file "/usr/local/etc/namedb/dynamic/buetow.zone";
};
paul uranus:~/git/blog/source [4277]% ssh admin@dns1.buetow.org.buetow.org cat /usr/local/etc/namedb/dynamic/buetow.org
$TTL 3600
@    IN   SOA   dns1.buetow.org. domains.buetow.org. (
     25       ; Serial
     604800   ; Refresh
     86400    ; Retry
     2419200  ; Expire
     604800 ) ; Negative Cache TTL
; Infrastructure domains
@ IN NS dns1
@ IN NS dns2
* 300 IN CNAME web.ian
buetow.org. 86400 IN A 78.46.80.70
buetow.org. 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:11
buetow.org. 86400 IN MX 10 mail.ian
dns1 86400 IN A 78.46.80.70
dns1 86400 IN AAAA 2a01:4f8:120:30e8:0:0:0:15
dns2 86400 IN A 164.177.171.32
dns2 86400 IN AAAA 2a03:2500:1:6:20::
.
.
.
.

That is my Master DNS server. My Slave DNS server runs in another Jail on another bare metal server. Everything is set up similarly to the Master DNS server, but that server is located in a different DC and in different IP subnets. The only real difference is the named.conf: it is configured to be a Slave, and the dynamic directory gets populated by BIND itself while doing zone transfers from the Master.

paul uranus:~/git/blog/source [4279]% ssh admin@dns2.buetow.org tail -n 11 /usr/local/etc/namedb/named.conf
zone "buetow.org" {
    type slave;
    masters { 78.46.80.70; };
    file "/usr/local/etc/namedb/dynamic/buetow.org";
};

zone "buetow.zone" {
    type slave;
    masters { 78.46.80.70; };
    file "/usr/local/etc/namedb/dynamic/buetow.zone";
};

The end result looks like this now:

% dig -t ns buetow.org
; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> -t ns buetow.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37883
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;buetow.org.            IN  NS

;; ANSWER SECTION:
buetow.org.     600 IN  NS  dns2.buetow.org.
buetow.org.     600 IN  NS  dns1.buetow.org.

;; Query time: 41 msec
;; SERVER: 192.168.1.254#53(192.168.1.254)
;; WHEN: Sun May 22 11:34:11 BST 2016
;; MSG SIZE  rcvd: 77

% dig -t any buetow.org @dns1.buetow.org
; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> -t any buetow.org @dns1.buetow.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49876
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 7

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;buetow.org.            IN  ANY

;; ANSWER SECTION:
buetow.org.     86400   IN  A   78.46.80.70
buetow.org.     86400   IN  AAAA    2a01:4f8:120:30e8::11
buetow.org.     86400   IN  MX  10 mail.ian.buetow.org.
buetow.org.     3600    IN  SOA dns1.buetow.org. domains.buetow.org. 25 604800 86400 2419200 604800
buetow.org.     3600    IN  NS  dns2.buetow.org.
buetow.org.     3600    IN  NS  dns1.buetow.org.

;; ADDITIONAL SECTION:
mail.ian.buetow.org.    86400   IN  A   78.46.80.70
dns1.buetow.org.    86400   IN  A   78.46.80.70
dns2.buetow.org.    86400   IN  A   164.177.171.32
mail.ian.buetow.org.    86400   IN  AAAA    2a01:4f8:120:30e8::12
dns1.buetow.org.    86400   IN  AAAA    2a01:4f8:120:30e8::15
dns2.buetow.org.    86400   IN  AAAA    2a03:2500:1:6:20::

;; Query time: 42 msec
;; SERVER: 78.46.80.70#53(78.46.80.70)
;; WHEN: Sun May 22 11:34:41 BST 2016
;; MSG SIZE  rcvd: 322

For monitoring I am using Icinga2 (I am operating two Icinga2 instances in two different DCs). I may write another blog article about Icinga2, but to give you the idea, these are the snippets added to my Icinga2 configuration:

apply Service "dig" {
    import "generic-service"

    check_command = "dig"
    vars.dig_lookup = "buetow.org"
    vars.timeout = 30

    assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
}

apply Service "dig6" {
    import "generic-service"

    check_command = "dig"
    vars.dig_lookup = "buetow.org"
    vars.timeout = 30
    vars.check_ipv6 = true

    assign where host.name == "dns.ian.buetow.org" || host.name == "caprica.ian.buetow.org"
}

Whenever I have to change a DNS entry, all I have to do is:

  • Git clone or update the Puppet repository
  • Update, commit and push the zone file (e.g. buetow.org)
  • Wait for Puppet: it will deploy the updated zone file and reload the BIND server
  • The BIND server will notify all Slave DNS servers (at the moment only one), which will then transfer the new version of the zone
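
To verify that a change has propagated, a quick check is to compare the SOA serials served by the Master and the Slave (a sketch; the serial is the first number of the SOA record):

% dig +short -t soa buetow.org @dns1.buetow.org
% dig +short -t soa buetow.org @dns2.buetow.org
# Both should report the same serial once the zone transfer has completed.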

That's much more convenient now than manually clicking through some web UIs at Schlund Technologies.

You should read the first part before reading any further here…

I enhanced the procedure a bit. From now on, I have two external 2TB USB hard drives, both set up exactly the same way. To decrease the probability that they will fail at about the same time, the two drives are of different brands. One drive is stored at the secure location; the other one is stored at home, right next to my HP MicroServer.

Whenever I have to update the offsite backup, I update the drive which is kept locally. Once done, I bring it to the secure location, swap the drives, and bring the other one back home. This ensures that I always have an offsite backup, even while updating it.

Furthermore, I added scrubbing (zpool scrub ...) to the script. It ensures that the file system is consistent and that there are no bad blocks on the disk. To increase reliability, I also ran zfs set copies=2 zroot. That setting is automatically synchronized to zoffsitetank when I run the backup script. ZFS now stores every data block on disk twice. Yes, it consumes twice as much disk capacity, but it makes the data more fault tolerant against physical disk sector errors.

It seems that copies=2 does not affect my zroot much (as it is already a ZFS mirror of 3 disks), but the disk usage on the external disk drives has doubled. Maybe, one day, I will reduce the number of copies again. It really depends on how much data I want to keep in the offsite backup.
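
A quick way to double-check this (a sketch; pool names as used in the script):

# The copies property only affects newly written blocks:
zfs get copies zroot
zfs get copies zoffsitetank
# Lowering it again later would not rewrite existing data:
# zfs set copies=1 zoffsitetank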

Here is the updated offsite backup script:

#!/bin/bash

# Nice howto: http://daveeddy.com/2015/12/04/zfs-zpool-encryption-with-geli-on-freebsd/

readonly NOSCRUB=$1 ; shift
readonly SRCPOOL=ztank
readonly DSTPOOL=zoffsitetank
readonly GELI_KEY=/some/secret/destination/to/the/geli.key
readonly TODAY=$(/bin/date +'%Y%m%d')

readonly USB_NAME0='Samsung M3 Portable'
readonly USB_NAME1='Seagate BUP Slim RD 0107'
declare  USB_DEV=''
declare  LAST=''

function error {
  echo "$@" >&2
  exit 1
}

function exists {
  local -r path=$1  ; shift
  local -r abort=$1 ; shift

  test -z "$path" && return 1

  echo "Checking for existance of $path"
  if [ ! -e $path ]; then
    message="$path does not exist"
    if [ "$abort" = "abort" ]; then
      error "$message"
    else
      echo "$message"
      return 1
    fi
  fi
  return 0
}

function get_usb_dev {
  local -r usb_name="$1"; shift
  camcontrol devlist |
    awk -v name="$usb_name" '$0 ~ name { print $NF }' |
    sed 's#[(),]##g; s#pass[0-9]*##; s#^#/dev/#;'
}

function scrub {
  local -r pool=$1; shift
  zpool status $pool | grep -q 'scrub in progress'
  if [ $? -ne 0 ]; then
    echo "Scrubbing $pool"
    zpool scrub $pool
  fi
  while : ; do
    zpool status $pool | grep -q 'scrub in progress'
    if [ $? -eq 0 ]; then
      echo "$pool is being scrubbed at the moment"
      sleep 1800
    else
      echo "Scrubbing completed"
      zpool status $pool
      break
    fi
  done
}

echo "Getting device with name '$USB_NAME0'" >&2
USB_DEV=$(get_usb_dev "$USB_NAME0")
exists $USB_DEV
if [ $? -ne 0 ]; then
  echo "Getting device with name '$USB_NAME1'" >&2
  USB_DEV=$(get_usb_dev "$USB_NAME1")
  exists $USB_DEV abort
fi

echo "Checking for GELI device"
exists $USB_DEV.eli
if [ $? -ne 0 ]; then
  echo "Checking for GELI key"
  exists $GELI_KEY abort

  echo "Attaching GELI"
  geli attach -k $GELI_KEY $USB_DEV
  exists $USB_DEV.eli abort
fi

echo "Checking wheter $DSTPOOL exists"
zpool list | grep -q $DSTPOOL

if [ $? -ne 0 ]; then
  echo "Importing $DSTPOOL"
  zpool import $DSTPOOL

  test $? -ne 0 && error "Could not import $DSTPOOL"
fi

echo "Checking if $DSTPOOL exists"
zpool list | grep -q $DSTPOOL

test $? -ne 0 && error "$DSTPOOL does not exist"

echo "Determine last snapshot on $DSTPOOL"
LAST=$(zfs list -t snapshot -o name |
  grep $DSTPOOL@ |
  sort -r | head -n 1 |
  sed "s/$DSTPOOL@//")

echo "Sending incremental update $SRCPOOL@$LAST...$TODAY -> $DSTPOOL"
zfs send -R -i $SRCPOOL@$LAST $SRCPOOL@$TODAY | zfs receive -v -F $DSTPOOL

test -z "$NOSCRUB" && scrub $DSTPOOL

echo "Exporting $DSTPOOL"
zpool export $DSTPOOL

echo "Detaching GELI"
geli detach $USB_DEV

Over the last few years I have written quite a few Puppet modules to manage my server infrastructure. One of them manages FreeBSD Jails, and another one manages ZFS file systems. I thought I would give a brief overview of how they look and feel.

ZFS

The ZFS module is a pretty basic one. It does not manage zpools yet, as I am not creating them often enough to justify automating that. But let's see how we can create a ZFS file system (on a given pool):

zfs::create { 'ztank/foo':
  ensure     => present,
  filesystem => '/srv/foo',

  require => File['/srv'],
}
admin alphacentauri:/opt/git/server/puppet/manifests [1212]% puppet.apply
Password:
Info: Loading facts
Info: Loading facts
Info: Loading facts
Info: Loading facts
Notice: Compiled catalog for alphacentauri.home in environment production in 7.14 seconds
Info: Applying configuration version '1460189837'
Info: mount[files]: allowing * access
Info: mount[restricted]: allowing * access
Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[ztank/foo_create]/returns: executed successfully
Notice: Finished catalog run in 25.41 seconds
admin alphacentauri:~ [1213]% zfs list | grep foo
ztank/foo                     96K  1.13T    96K  /srv/foo
admin alphacentauri:~ [1214]% df | grep foo
ztank/foo                  1214493520        96 1214493424     0%    /srv/foo
admin alphacentauri:~ [1215]% 

Destroying the file system just requires setting ensure to absent:

zfs::create { 'ztank/foo':
  ensure     => absent,
  filesystem => '/srv/foo',

  require => File['/srv'],
}
admin alphacentauri:/opt/git/server/puppet/manifests [1220]% puppet.apply
Password:
Info: Loading facts
Info: Loading facts
Info: Loading facts
Info: Loading facts
Notice: Compiled catalog for alphacentauri.home in environment production in 6.14 seconds
Info: Applying configuration version '1460190203'
Info: mount[files]: allowing * access
Info: mount[restricted]: allowing * access
Notice: /Stage[main]/Main/Node[alphacentauri]/Zfs::Create[ztank/foo]/Exec[zfs destroy -r ztank/foo]/returns: executed successfully
Notice: Finished catalog run in 22.72 seconds
admin alphacentauri:/opt/git/server/puppet/manifests [1221]% zfs list | grep foo
zsh: done       zfs list | 
zsh: exit 1     grep foo
admin alphacentauri:/opt/git/server/puppet/manifests [1222:1]% df | grep foo
zsh: done       df | 
zsh: exit 1     grep foo

Jails

Here is an example of how we can create Jails on FreeBSD. The Jail will have its own public IPv6 address, and it will have its own internal IPv4 address with NAT to the internet (this is due to the limitation that the host server only has one public IPv4 address, which has to be shared between all the Jails).

Furthermore, Puppet will ensure that the Jail has its own ZFS file system (internally it is using the ZFS module mentioned earlier). Please note that the NAT requires the packet filter to be set up correctly (this blog post does not cover how to do that, but see the sketch right below).
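
For reference, a minimal sketch of what a matching pf.conf NAT rule might look like (the interface re0, the Jail subnet 192.168.0.0/24, and the $pub_ip macro are assumptions based on the examples in this post):

# Sketch only, not my complete ruleset:
nat on re0 inet from 192.168.0.0/24 to any -> ($pub_ip)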

  include jail::freebsd

  # Cloned interface for Jail IPv4 NAT
  freebsd::rc_config { 'cloned_interfaces':
    value => 'lo1',
  }
  freebsd::rc_config { 'ipv4_addrs_lo1':
    value => '192.168.0.1-24/24'
  }

  freebsd::ipalias { '2a01:4f8:120:30e8::17':
    ensure    => up,
    proto     => 'inet6',
    preflen   => '64',
    interface => 're0',
    aliasnum  => '8',
  }

  class { 'jail':
    ensure              => present,
    jails_config        => {
      sync                     => {
        '_ensure'             => present,
        '_type'               => 'freebsd',
        '_mirror'             => 'ftp://ftp.de.freebsd.org',
        '_remote_path'        => 'FreeBSD/releases/amd64/10.1-RELEASE',
        '_dists'              => [ 'base.txz', 'doc.txz', ],
        '_ensure_directories' => [ '/opt', '/opt/enc' ],
        '_ensure_zfs'         => [ '/sync' ],
        'host.hostname'       => "'sync.ian.buetow.org'",
        'ip4.addr'            => '192.168.0.17',
        'ip6.addr'            => '2a01:4f8:120:30e8::17',
      },
    }
  }

This is how the result looks:

admin sun:/etc [1939]% puppet.apply
Info: Loading facts
Info: Loading facts
Info: Loading facts
Info: Loading facts
Notice: Compiled catalog for sun.ian.buetow.org in environment production in 1.80 seconds
Info: Applying configuration version '1460190986'
Notice: /Stage[main]/Jail/File[/etc/jail.conf]/ensure: created
Info: mount[files]: allowing * access
Info: mount[restricted]: allowing * access
Info: Computing checksum on file /etc/motd
Info: /Stage[main]/Motd/File[/etc/motd]: Filebucketed /etc/motd to puppet with sum fced1b6e89f50ef2c40b0d7fba9defe8
Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Zfs::Create[zroot/jail/sync]/Exec[zroot/jail/sync_create]/returns: executed successfully
Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/File[/jail/sync/opt/enc]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Ensure_zfs[/sync]/Zfs::Create[zroot/jail/sync/sync]/Exec[zroot/jail/sync/sync_create]/returns: executed successfully
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/etc/fstab.jail.sync]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/File[/jail/sync/.jailbootstrap/bootstrap.sh]/ensure: created
Notice: /Stage[main]/Jail/Jail::Create[sync]/Jail::Freebsd::Create[sync]/Exec[sync_bootstrap]/returns: executed successfully
Notice: Finished catalog run in 49.72 seconds
admin sun:/etc [1942]% ls -l /jail/sync
total 154
-r--r--r--   1 root  wheel  6198 11 Nov  2014 COPYRIGHT
drwxr-xr-x   2 root  wheel    47 11 Nov  2014 bin
drwxr-xr-x   7 root  wheel    43 11 Nov  2014 boot
dr-xr-xr-x   2 root  wheel     2 11 Nov  2014 dev
drwxr-xr-x  23 root  wheel   101  9 Apr 10:37 etc
drwxr-xr-x   3 root  wheel    50 11 Nov  2014 lib
drwxr-xr-x   3 root  wheel     4 11 Nov  2014 libexec
drwxr-xr-x   2 root  wheel     2 11 Nov  2014 media
drwxr-xr-x   2 root  wheel     2 11 Nov  2014 mnt
drwxr-xr-x   3 root  wheel     3  9 Apr 10:36 opt
dr-xr-xr-x   2 root  wheel     2 11 Nov  2014 proc
drwxr-xr-x   2 root  wheel   143 11 Nov  2014 rescue
drwxr-xr-x   2 root  wheel     6 11 Nov  2014 root
drwxr-xr-x   2 root  wheel   132 11 Nov  2014 sbin
drwxr-xr-x   2 root  wheel     2  9 Apr 10:36 sync
lrwxr-xr-x   1 root  wheel    11 11 Nov  2014 sys -> usr/src/sys
drwxrwxrwt   2 root  wheel     2 11 Nov  2014 tmp
drwxr-xr-x  14 root  wheel    14 11 Nov  2014 usr
drwxr-xr-x  24 root  wheel    24 11 Nov  2014 var
admin sun:/etc [1943]% zfs list | grep sync;df | grep sync
zroot/jail/sync                 162M   343G   162M  /jail/sync
zroot/jail/sync/sync            144K   343G   144K  /jail/sync/sync
/opt/enc                                                 5061624     84248    4572448     2%    /jail/sync/opt/enc
zroot/jail/sync                                        360214972    166372  360048600     0%    /jail/sync
zroot/jail/sync/sync                                   360048744       144  360048600     0%    /jail/sync/sync
admin sun:/etc [1944]% cat /etc/fstab.jail.sync
# Generated by Puppet for a Jail.
# Can contain file systems to be mounted during jail start.
admin sun:/etc [1945]% cat /etc/jail.conf
# Generated by Puppet

allow.chflags = true;
exec.start = '/bin/sh /etc/rc';
exec.stop = '/bin/sh /etc/rc.shutdown';
mount.devfs = true;
mount.fstab = "/etc/fstab.jail.$name";
path = "/jail/$name";

sync {
      host.hostname = 'sync.ian.buetow.org';
      ip4.addr = 192.168.0.17;
      ip6.addr = 2a01:4f8:120:30e8::17;
}
admin sun:/etc [1955]% sudo service jail start sync
Password:
Starting jails: sync.
admin sun:/etc [1956]% jls | grep sync
   103  192.168.0.17    sync.ian.buetow.org           /jail/sync
admin sun:/etc [1957]% sudo jexec 103 /bin/csh
root@sync:/ # ifconfig -a
re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
     options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
     ether 50:46:5d:9f:fd:1e
     inet6 2a01:4f8:120:30e8::17 prefixlen 64
     nd6 options=8021<PERFORMNUD,AUTO_LINKLOCAL,DEFAULTIF>
     media: Ethernet autoselect (1000baseT <full-duplex>)
     status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
     options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
     nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
     options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
     inet 192.168.0.17 netmask 0xffffffff
     nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

To automatically set up the applications running in the Jail, I am using Puppet as well. I wrote a few scripts which bootstrap Puppet inside a newly created Jail: they mount an encrypted container (containing the secret Puppet manifests in a git repository), activate pkgng, install Puppet and all dependencies, update the system to the latest version via freebsd-update, restart the Jail, and run Puppet. Puppet then schedules a periodic cron job for the subsequent Puppet runs. A condensed sketch of this sequence follows after the log output below.

admin sun:~ [1951]% sudo /opt/snonux/local/etc/init.d/enc activate sync
Starting jails: dns.
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
[sync.ian.buetow.org] Installing pkg-1.7.2...
[sync.ian.buetow.org] Extracting pkg-1.7.2: 100%
Updating FreeBSD repository catalogue...
[sync.ian.buetow.org] Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
[sync.ian.buetow.org] Fetching packagesite.txz: 100%    5 MiB   5.6MB/s    00:01   
Processing entries: 100%
FreeBSD repository update completed. 25091 packages processed.
Updating database digests format: 100%
The following 20 package(s) will be affected (of 0 checked):

  New packages to be INSTALLED:
          git: 2.7.4_1
          expat: 2.1.0_3
          python27: 2.7.11_1
          libffi: 3.2.1
          indexinfo: 0.2.4
          gettext-runtime: 0.19.7
          p5-Error: 0.17024
          perl5: 5.20.3_9
          cvsps: 2.1_1
          p5-Authen-SASL: 2.16_1
          p5-Digest-HMAC: 1.03_1
          p5-GSSAPI: 0.28_1
          curl: 7.48.0_1
          ca_root_nss: 3.22.2
          p5-Net-SMTP-SSL: 1.03
          p5-IO-Socket-SSL: 2.024
          p5-Net-SSLeay: 1.72
          p5-IO-Socket-IP: 0.37
          p5-Socket: 2.021
          p5-Mozilla-CA: 20160104

          The process will require 144 MiB more space.
          30 MiB to be downloaded.
[sync.ian.buetow.org] Fetching git-2.7.4_1.txz: 100%    4 MiB   3.7MB/s    00:01    
[sync.ian.buetow.org] Fetching expat-2.1.0_3.txz: 100%   98 KiB 100.2kB/s    00:01    
[sync.ian.buetow.org] Fetching python27-2.7.11_1.txz: 100%   10 MiB  10.7MB/s    00:01    
[sync.ian.buetow.org] Fetching libffi-3.2.1.txz: 100%   35 KiB  36.2kB/s    00:01    
[sync.ian.buetow.org] Fetching indexinfo-0.2.4.txz: 100%    5 KiB   5.0kB/s    00:01    
[sync.ian.buetow.org] Fetching gettext-runtime-0.19.7.txz: 100%  148 KiB 151.1kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-Error-0.17024.txz: 100%   24 KiB  24.8kB/s    00:01    
[sync.ian.buetow.org] Fetching perl5-5.20.3_9.txz: 100%   13 MiB   6.9MB/s    00:02    
[sync.ian.buetow.org] Fetching cvsps-2.1_1.txz: 100%   41 KiB  42.1kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-Authen-SASL-2.16_1.txz: 100%   44 KiB  45.1kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-Digest-HMAC-1.03_1.txz: 100%    9 KiB   9.5kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-GSSAPI-0.28_1.txz: 100%   41 KiB  41.7kB/s    00:01    
[sync.ian.buetow.org] Fetching curl-7.48.0_1.txz: 100%    2 MiB   2.2MB/s    00:01    
[sync.ian.buetow.org] Fetching ca_root_nss-3.22.2.txz: 100%  324 KiB 331.4kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-Net-SMTP-SSL-1.03.txz: 100%   11 KiB  10.8kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-IO-Socket-SSL-2.024.txz: 100%  153 KiB 156.4kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-Net-SSLeay-1.72.txz: 100%  234 KiB 239.3kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-IO-Socket-IP-0.37.txz: 100%   27 KiB  27.4kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-Socket-2.021.txz: 100%   37 KiB  38.0kB/s    00:01    
[sync.ian.buetow.org] Fetching p5-Mozilla-CA-20160104.txz: 100%  147 KiB 150.8kB/s    00:01    
Checking integrity...
[sync.ian.buetow.org] [1/12] Installing libyaml-0.1.6_2...
[sync.ian.buetow.org] [1/12] Extracting libyaml-0.1.6_2: 100%
[sync.ian.buetow.org] [2/12] Installing libedit-3.1.20150325_2...
[sync.ian.buetow.org] [2/12] Extracting libedit-3.1.20150325_2: 100%
[sync.ian.buetow.org] [3/12] Installing ruby-2.2.4,1...
[sync.ian.buetow.org] [3/12] Extracting ruby-2.2.4,1: 100%
[sync.ian.buetow.org] [4/12] Installing ruby22-gems-2.6.2...
[sync.ian.buetow.org] [4/12] Extracting ruby22-gems-2.6.2: 100%
[sync.ian.buetow.org] [5/12] Installing libxml2-2.9.3...
[sync.ian.buetow.org] [5/12] Extracting libxml2-2.9.3: 100%
[sync.ian.buetow.org] [6/12] Installing dmidecode-3.0...
[sync.ian.buetow.org] [6/12] Extracting dmidecode-3.0: 100%
[sync.ian.buetow.org] [7/12] Installing rubygem-json_pure-1.8.3...
[sync.ian.buetow.org] [7/12] Extracting rubygem-json_pure-1.8.3: 100%
[sync.ian.buetow.org] [8/12] Installing augeas-1.4.0...
[sync.ian.buetow.org] [8/12] Extracting augeas-1.4.0: 100%
[sync.ian.buetow.org] [9/12] Installing rubygem-facter-2.4.4...
[sync.ian.buetow.org] [9/12] Extracting rubygem-facter-2.4.4: 100%
[sync.ian.buetow.org] [10/12] Installing rubygem-hiera1-1.3.4_1...
[sync.ian.buetow.org] [10/12] Extracting rubygem-hiera1-1.3.4_1: 100%
[sync.ian.buetow.org] [11/12] Installing rubygem-ruby-augeas-0.5.0_2...
[sync.ian.buetow.org] [11/12] Extracting rubygem-ruby-augeas-0.5.0_2: 100%
[sync.ian.buetow.org] [12/12] Installing puppet38-3.8.4_1...
===> Creating users and/or groups.
Creating group 'puppet' with gid '814'.
Creating user 'puppet' with uid '814'.
[sync.ian.buetow.org] [12/12] Extracting puppet38-3.8.4_1: 100%
.
.
.
.
.
Looking up update.FreeBSD.org mirrors... 4 mirrors found.
Fetching public key from update4.freebsd.org... done.
Fetching metadata signature for 10.1-RELEASE from update4.freebsd.org... done.
Fetching metadata index... done.
Fetching 2 metadata files... done.
Inspecting system... done.
Preparing to download files... done.
Fetching 874 patches.....10....20....30....
.
.
.
Applying patches... done.
Fetching 1594 files... 
Installing updates...
done.
Info: Loading facts
Info: Loading facts
Info: Loading facts
Info: Loading facts
Could not retrieve fact='pkgng_version', resolution='': undefined method `pkgng_enabled' for Facter:Module
Warning: Config file /usr/local/etc/puppet/hiera.yaml not found, using Hiera defaults
Notice: Compiled catalog for sync.ian.buetow.org in environment production in 1.31 seconds
Warning: Found multiple default providers for package: pkgng, gem, pip; using pkgng
Info: Applying configuration version '1460192563'
Notice: /Stage[main]/S_base_freebsd/User[root]/shell: shell changed '/bin/csh' to '/bin/tcsh'
Notice: /Stage[main]/S_user::Root_files/S_user::All_files[root_user]/File[/root/user]/ensure: created
Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/userfiles]/ensure: created
Notice: /Stage[main]/S_user::Root_files/S_user::My_files[root]/File[/root/.task]/ensure: created
.
.
.
.
Notice: Finished catalog run in 206.09 seconds
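
Here is that bootstrap sequence as a condensed, hedged sketch (the jail name, the key, device, and repository paths are illustrative; this is not the actual script):

#!/bin/sh
# Rough outline of the Jail bootstrap described above (sketch only).
JAIL=sync
service jail start $JAIL
# Mount the encrypted container holding the secret Puppet manifests
# (device and key path are hypothetical):
geli attach -k /secure/enc.key /dev/md0
mount /dev/md0.eli /jail/$JAIL/opt/enc
# Activate pkgng and install Puppet plus dependencies inside the Jail:
jexec $JAIL env ASSUME_ALWAYS_YES=yes pkg bootstrap
jexec $JAIL pkg install -y puppet38
# Update the Jail's userland from the host, then restart it:
freebsd-update -b /jail/$JAIL fetch install
service jail restart $JAIL
# First Puppet run; later runs happen via the cron job Puppet schedules:
jexec $JAIL puppet apply /opt/enc/puppet/manifests/site.pp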

Of course, I am operating multiple Jails. It is possible to add as many Jails as desired.


When it comes to data storage and potential data loss, I am a paranoid person. That is not just due to my job (engineer in a backend team at a storage and archive company) but also due to a private experience from over 10 years ago: a single drive failure and the loss of all my data (pictures, music, …).

A little bit about my private infrastructure: I am running my own (mostly FreeBSD based) root servers across several countries (two in Germany, one in Canada, one in Bulgaria) which store all my email and my git repositories. I am syncing incremental (and encrypted) ZFS snapshots between them. One server (let's call it the master server) replicates all the data to two disks via a ZFS mirror.

Additionally, I am operating a local server (an HP MicroServer) at my home in London, England. That one rsyncs ZFS snapshots over SSH: encrypted full snapshots of every ZFS mount point every other week, and encrypted incremental snapshots every day. The local server has a ZFS mirror across 3 disks, so all the data is stored 3 times on physical disks, with daily ZFS snapshots dating back half a year. That local server also holds all my offline data, such as pictures, private documents, videos, books, various other backups, etc.

Once a week, all the data of that local server is backed up to two attached USB drives (without the snapshots). For simplicity, and in case an operating system upgrade ever introduces a bug in ZFS, these USB drives are not formatted with ZFS but with good old UFS. This gives me a chance to recover from a (potential) ZFS disaster. ZFS is a complex thing, and sometimes it is good not to trust complex things!

Now I was thinking about an offsite backup of all this local data. The problem is that all the data remains in a single physical location: my local MicroServer. What happens if the house burns down, or if someone steals my server including the internal disks and the attached USB disks? My first thought was to back up everything into the "cloud". The major issue there, however, is the limited upload bandwidth available to me (only 1 MBit/s). The solution was another USB drive (2TB) with an encryption container (GELI) and a ZFS pool on it. The GELI encryption requires a secret key and a secret passphrase. I sync the data to that drive every 3 months (Google Calendar reminds me to do it), and afterwards I keep the drive at a secure location outside of my apartment. All the information needed to decrypt it (i.e. to mount the GELI container) is stored at yet another secure place, with key and passphrase kept at different places. Even if someone knew of it, he would not be able to decrypt the data, as some insider knowledge is required as well.

Click here for a nice tutorial for initially setting up ZFS encryption on FreeBSD with GELI.
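
For reference, here is a minimal sketch of such an initial setup (the device name /dev/da0, the sector size, and the key path are assumptions, not my actual values):

# Generate a key file and initialize the GELI container (asks for a passphrase):
dd if=/dev/random of=/secure/geli.key bs=64 count=1
geli init -s 4096 -K /secure/geli.key /dev/da0
# Attach the container and create the ZFS pool on top of it:
geli attach -k /secure/geli.key /dev/da0
zpool create zoffsitetank /dev/da0.eli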

And below is the script I do my offsite backup with. It auto-detects the device name of the USB drive, attaches GELI, imports the ZFS pool (named zoffsitetank), determines the last snapshot, incrementally updates zoffsitetank from ztank (the latter being the pool used on the local server), exports zoffsitetank, and detaches GELI.

I am thinking of buying a second 2TB USB drive and setting it up the same way as the first one, so I could alternate the backups: one drive would be at the secure location and the other drive would be at home, and the drives would swap locations after each cycle. This would give me some protection against a failure of either drive, and I would have to go to the secure location only once per cycle (swapping the drives) instead of twice (picking the drive up in order to update the data, then bringing it back to the secure location).

#!/bin/bash

readonly SRCPOOL=ztank
readonly DSTPOOL=zoffsitetank
readonly GELI_KEY=/some/secure/path/with/the/geli/encryption.key
readonly TODAY=$(/bin/date +'%Y%m%d')

readonly USB_NAME='Samsung M3 Portable'
declare  USB_DEV=''
declare  LAST=''

function error {
  echo "$@"
  exit 1
}

function exists {
  local -r path=$1  ; shift
  local -r abort=$1 ; shift

  echo "Checking for existance of $path"
  if [ ! -e $path ]; then
    message="$path does not exist"
    if [ "$abort" = "abort" ]; then
      error "$message"
    else
      echo "$message"
      return 1
    fi
  fi
  return 0
}

echo "Getting device with name $USB_NAME"
USB_DEV=$(camcontrol devlist |
  awk -v name="$USB_NAME" '$0 ~ name { print $NF }' |
  sed 's#.*,#/dev/#; s/)//;')
exists $USB_DEV abort

echo Checking for GELI device
exists $USB_DEV.eli
if [ $? -ne 0 ]; then
  echo Checking for GELI key
  exists $GELI_KEY abort

  echo Attaching GELI
  geli attach -k $GELI_KEY $USB_DEV
  exists $USB_DEV.eli abort
fi

echo Checking if $DSTPOOL exists
zpool list | grep -q $DSTPOOL

if [ $? -ne 0 ]; then
  echo Importing $DSTPOOL
  zpool import $DSTPOOL

  if [ $? -ne 0 ]; then
    error Could not import $DSTPOOL
  fi
fi

echo Checking if $DSTPOOL exists
zpool list | grep -q $DSTPOOL

if [ $? -ne 0 ]; then
  error $DSTPOOL does not exist
fi

echo Determine last snapshot on $DSTPOOL
LAST=$(zfs list -t snapshot -o name |
  grep $DSTPOOL@ |
  sort -r | head -n 1 |
  sed "s/$DSTPOOL@//")

echo "Sending incremental update $SRCPOOL@$LAST...$TODAY -> $DSTPOOL"
zfs send -R -i $SRCPOOL@$LAST $SRCPOOL@$TODAY | zfs receive -v -F $DSTPOOL

echo Exporting $DSTPOOL
zpool export $DSTPOOL

echo Detaching GELI
geli detach $USB_DEV

After buying a Lenovo ThinkPad X240, I was a bit disappointed, as I hated the TrackPad (very much, actually). The main issue was that there were no separate mouse buttons. Lenovo seems to have realized that later on, as the X250 fixed it.

But there is a solution. Why not replace it yourself?

[Photo gallery: replacing the X240 TrackPad]

Well, I have lost the warranty now (it would have run out soon anyway). The replacement part can be bought on eBay for about 50 Euros, and you have to install it yourself. It's not cheap, and it's not easy to do either (a little tricky), but it is worth it!

PerlDaemon is a minimal daemon for Linux and other UNIX-like operating systems, written in Perl (by me, some time ago). It can be extended to fit any task…

It supports:

  • Automatic daemonizing
  • Logging and logrotate support (SIGHUP)
  • Clean shutdown support (SIGTERM)
  • Pidfile support (incl. check on startup)
  • Easy to configure
  • Easy to extend (writing your own modules within PerlDaemonModules::)

The PerlDaemon website is located at https://perldaemon.buetow.org and the source is located on GitHub.

Quick Guide

# Starting
 ./bin/perldaemon start (or shortcut ./control start)

# Stopping
 ./bin/perldaemon stop (or shortcut ./control stop)

# Alternatively: Starting in foreground 
./bin/perldaemon start daemon.daemonize=no (or shortcut ./control foreground)

To stop the daemon then just hit Ctrl+C. To see more available startup options enter ./control without any argument.

Configuration can be set in ./conf/perldaemon.conf. If you want to change a property only once, it is possible to specify it on the command line. All available properties can be listed using ./control keys (caution: the list in this document may be outdated, so please also run ./control keys yourself!).

pb@titania:~/svn/utils/perldaemon/trunk$ ./control keys
# Path to the logfile
daemon.logfile=./log/perldaemon.log

# The amount of seconds until the next event loop run takes place
daemon.loopinterval=1

# Path to the modules dir
daemon.modules.dir=./lib/PerlDaemonModules

# Specifies whether the daemon should run in daemon or foreground mode
daemon.daemonize=yes

# Path to the pidfile
daemon.pidfile=./run/perldaemon.pid

# Each module should run every runinterval seconds
daemon.modules.runinterval=3

# Path to the alive file (is touched every loopinterval seconds, usable to monitor)
daemon.alivefile=./run/perldaemon.alive

# Specifies the working directory
daemon.wd=./

So let's start the daemon with a loop interval of 10 seconds:

$ ./control keys | grep daemon.loopinterval
daemon.loopinterval=1
$ ./control keys daemon.loopinterval=10 | grep daemon.loopinterval
daemon.loopinterval=10
$ ./control start daemon.loopinterval=10; sleep 10; tail -n 2 log/perldaemon.log
Starting daemon now...
Mon Jun 13 11:29:27 2011 (PID 2838): Triggering PerlDaemonModules::ExampleModule 
(last triggered before 10.002106s; carry: 7.002106s; wanted interval: 3s)
Mon Jun 13 11:29:27 2011 (PID 2838): ExampleModule Test 2
$ ./control stop
Stopping daemon now...

If you want to change that property permanently, either edit perldaemon.conf or do this:

$ ./control keys daemon.loopinterval=10 > new.conf; mv new.conf conf/perldaemon.conf

PerlDaemon uses Time::HiRes to make sure that all events run at the correct intervals. On each loop run, a time carry value is recorded and added to the next loop run in order to catch up on lost time (in the future there will be an option to turn that on and off).
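
Here is a minimal sketch of that carry idea (not PerlDaemon's actual code):

use strict;
use warnings;
use Time::HiRes qw(time sleep);

my $interval = 1;  # seconds between event loop runs
my $carry    = 0;  # how far we are behind schedule

while (1) {
    my $start = time;
    # ... trigger the modules here ...
    my $behind = (time - $start) + $carry;
    if ($behind < $interval) {
        sleep($interval - $behind);  # Time::HiRes sleep takes fractional seconds
        $carry = 0;
    } else {
        $carry = $behind - $interval;  # catch up on the next run
    }
}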

Writing your own modules

 cd ./lib/PerlDaemonModules/
 cp ExampleModule.pm YourModule.pm
 vi YourModule.pm
 cd -
 ./bin/perldaemon restart (or shortcut ./control restart)

Now watch ./log/perldaemon.log to see that everything runs fine. It is good practice to test your modules in 'foreground mode' (see above for how to do that).

BTW: you can install as many modules in parallel as you wish, but they are run in sequential order (in the future they may also run in parallel, using several threads or processes).

Gotop (https://gotop.buetow.org) is a simple disk I/O stats program written in Go for Linux. It can be used as a replacement for iotop. I wrote Gotop the other day in order to learn a bit more about the Go programming language.

Gotop reads procfs (the proc file system) to gather all the stats needed, which makes it very efficient. However, there are limits to this approach, as Linux does not export everything to procfs. To go further, I would recommend SystemTap or sysdig: two profiling instruments which use probes inside the Linux kernel to expose more information from the system (e.g. the top N busiest files, etc.).
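
To illustrate the approach, here is a minimal sketch of reading disk stats from procfs in Go (this is not Gotop's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/diskstats: field 3 is the device name, field 6 the
	// sectors read, and field 10 the sectors written (1-indexed).
	data, err := os.ReadFile("/proc/diskstats")
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(data), "\n") {
		f := strings.Fields(line)
		if len(f) < 14 {
			continue
		}
		fmt.Printf("%s: sectors read=%s written=%s\n", f[2], f[5], f[9])
	}
}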

I like Go, but at the moment there is no real use case for me to take this programming language any further. At work, most of my tasks are handled with Puppet DSL, Ruby, and shell scripting. Privately, I mostly use Perl, Puppet DSL, shell, and the C programming language.

On GitHub I put a small howto for installing a full-blown Debian chroot environment on your CyanogenMod-powered Android smartphone.

This basically turns the smartphone into a full-blown, ARM-powered Linux device.

It works with Fedora 23 (the system you use to install the chroot environment onto the phone) and Debian GNU/Linux Jessie (the OS of the chroot environment) on an LG G3 D855 smartphone (which runs the CyanogenMod Android distribution).
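
The general approach, as a hedged sketch (this is not the exact howto from GitHub; the mirror URL and the target directory are assumptions):

# On the Fedora host: build an armhf Debian Jessie tree, first stage only:
debootstrap --arch=armhf --foreign jessie debian-jessie http://httpredir.debian.org/debian
# Copy the tree to the phone (e.g. to /data/debian). Then, as root on
# the phone, finish the second stage and enter the chroot:
chroot /data/debian /debootstrap/debootstrap --second-stage
chroot /data/debian /bin/bash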

Enjoy!

Wondering whether you are using IPv6 or IPv4? Check out the IPv6/IPv4 connectivity site at ipv6.buetow.org.

IPv6 Connectivity Test

Check out the Puppet module at Github.