RPI4 debian server

This page describes the various steps to install my rpi4 debian server.

Hardware configuration and initial setup

I selected an rpi4 with 4GB of RAM, together with an external NVMe SSD in a USB-C enclosure.
I followed these instructions to set up USB boot.

The starting point is to install the image on both the muSD and the SSD.
Note that both disks have the same partition UUIDs at this point.

Do not forget to create an empty ssh file (via touch) on the boot partition so that the rpi4 boots headless with the ssh server running.

Here is the partitioning that I applied to the disk and how I changed the UUID:

fdisk /dev/sda

Check change via:
fdisk -l 
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk model: JMS583
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xd34db33f

Device     Boot  Start     End Sectors  Size Id Type
/dev/sda1         8192  532479  524288  256M  c W95 FAT32 (LBA)
/dev/sda2       532480 4390911 3858432  1.9G 83 Linux

Check the ids via blkid, then on the muSD change the root device by editing /boot/cmdline.txt (vi /boot/cmdline.txt), replacing root=PARTUUID=6c586e13-02 with root=PARTUUID=d34db33f-02.
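The same edit can be done non-interactively with sed; the demo below runs on a scratch copy (on the real system the file is /boot/cmdline.txt, and the PARTUUIDs are the ones from this example):

```shell
# work on a scratch copy of cmdline.txt to illustrate the PARTUUID swap
cmdfile=/tmp/cmdline.txt
echo 'console=tty1 root=PARTUUID=6c586e13-02 rootfstype=ext4 rootwait' > "$cmdfile"
# replace the muSD root PARTUUID with the SSD one
sed -i 's/root=PARTUUID=6c586e13-02/root=PARTUUID=d34db33f-02/' "$cmdfile"
cat "$cmdfile"
```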

At this point update the root mount point UUID in the SSD's fstab:
mkdir /mnt/rootfs
mount /dev/sda2 /mnt/rootfs
vi /mnt/rootfs/etc/fstab
change only the rootfs line from
PARTUUID=6c586e13-02  /               ext4    defaults,noatime  0       1
to
PARTUUID=d34db33f-02  /               ext4    defaults,noatime  0       1

Check that it boots, then resize the partitions via fdisk /dev/sda to match the following layout:

fdisk -l
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: D3B3EDDF-DC19-473E-A358-D1BDA74FE6D1

Device         Start       End   Sectors   Size Type
/dev/sda1       8192    532479    524288   256M Microsoft basic data
/dev/sda2     532480 480583167 480050688 228.9G Linux filesystem
/dev/sda3  480583168 488397134   7813967   3.7G Linux swap

Set labels:
fatlabel /dev/sda1 boot
e2label /dev/sda2 rootfs
Then upgrade boot eeprom:

apt install rpi-eeprom
vi /etc/default/rpi-eeprom-update
rpi-eeprom-update -a

Now we are ready to clone the muSD to the SSD; thus, booted from the muSD:
apt update
apt upgrade
apt install git
rpi-clone sda

Edit fstab for /boot on muSD:
proc            /proc           proc    defaults          0       0
PARTUUID=6c586e13-01  /boot           vfat    defaults          0       2
PARTUUID=d34db33f-02  /               ext4    defaults,noatime  0       1
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that

Modify on muSD /boot/cmdline.txt
console=serial0,115200 console=tty1 root=PARTUUID=d34db33f-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

And you should be good to go. For reference, here are the manual bootloader configuration steps and the copy of the boot partition to the SSD:
cp /lib/firmware/raspberrypi/bootloader/beta/pieeprom-2020-05-15.bin pieeprom.bin
vcgencmd bootloader_config
rpi-eeprom-config pieeprom.bin > bootconf.txt
vi bootconf.txt
rpi-eeprom-config --out pieeprom-new.bin --config bootconf.txt pieeprom.bin
rpi-eeprom-update -d -f ./pieeprom-new.bin
mount /dev/sda1 /mnt/boot
cp -r /boot/* /mnt/boot/
vi /etc/fstab

To update the Raspbian distribution itself, a plain apt update && apt full-upgrade does the job.
To get the latest 64-bit kernel, add arm_64bit=1 in /boot/config.txt.

All boot config options are detailed here: https://www.raspberrypi.org/documentation/configuration/config-txt/boot.md

Get temperature

vcgencmd measure_temp

Debian distribution setup

Set locale, timezone and hostname:
localectl set-locale LANG=fr_FR.UTF-8
timedatectl set-timezone Europe/Paris
hostnamectl set-hostname $MYHOSTNAME

hostname $MYHOSTNAME
vi /etc/hostname
vi /etc/hosts   (update the 127.0.1.1 entry to the new hostname, e.g. hyperion)

Configure the right locales:

apt install locales
locale-gen en_US en_US.UTF-8 en_US.ISO-8859-1 en_US.ISO-8859-15 fr_FR fr_FR.UTF-8 fr_FR.ISO-8859-15 fr_FR.ISO-8859-1
dpkg-reconfigure locales
Generating locales (this might take a while)...
  en_US.ISO-8859-1... done
  en_US.ISO-8859-15... done
  en_US.UTF-8... done
  fr_FR.UTF-8... done
  fr_FR.ISO-8859-15@euro... done

Reconfigure your time zone to get proper time:
dpkg-reconfigure tzdata

Basic installation:
apt install mosh mutt irssi vim subversion-tools rsync less tig git openssh-server build-essential python python-setuptools python-pip convmv sqlite urlview w3m par metastore curl most ispell ifrench mtp-tools apg dnsutils retext obconf apt-file octave maxima locate tmux nmap catdoc wv elinks links lynx pmount mpc ncmpc epstool gnupg pinentry-tty pinentry-curses silversearcher-ag pwgen

Install DNS service: bind

Install bind9 DNS daemon:

apt install bind9
service bind9 restart

Configure your domain entries by creating files e.g. /etc/bind/db.0.168.192 and /etc/bind/db.yourdomain.com, and reference them inside /etc/bind/named.conf.local
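As a sketch, a minimal forward zone file and its declaration could look like this (all names and addresses are placeholders to adapt):

```
; /etc/bind/db.yourdomain.com
$TTL    604800
@       IN      SOA     ns.yourdomain.com. admin.yourdomain.com. (
                        2       ; Serial
                        604800  ; Refresh
                        86400   ; Retry
                        2419200 ; Expire
                        604800 ); Negative Cache TTL
@       IN      NS      ns.yourdomain.com.
ns      IN      A       192.168.0.1
nas     IN      A       192.168.0.2

// /etc/bind/named.conf.local
zone "yourdomain.com" {
        type master;
        file "/etc/bind/db.yourdomain.com";
};
```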

Install DHCP daemon

Install and configure dhcp daemon:

apt install isc-dhcp-server
vi /etc/dhcp/dhcpd.conf
service isc-dhcp-server restart

Configure the ethernet MAC address to host mappings in /etc/dhcp/dhcpd.conf, in line with the DNS bind configuration files.

Process to add a new host
  • edit subdomain dns entries
  • add ether MAC address in dhcp table
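A typical static mapping in /etc/dhcp/dhcpd.conf looks like this (MAC and IP are placeholders; the hostname should match the DNS zone entry):

```
host nas {
    hardware ethernet aa:bb:cc:dd:ee:ff;
    fixed-address 192.168.0.2;
}
```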

Access the NAS seamlessly

The solution is to use autofs between the rpi4 and the synology NAS:

apt install autofs
cat /etc/auto.master
/media/nfs /etc/auto.nfs --ghost
cat /etc/auto.nfs
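The content of /etc/auto.nfs is not reproduced above; an entry typically looks like this (NAS hostname and share path are placeholders for your own setup):

```
# mounts the share under /media/nfs/ds414 on first access
ds414   -fstype=nfs,rw,soft,intr   ds414:/volume1/video
```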

Map synology domain users to debian ones:

adduser --system --no-create-home --uid 1026 media
addgroup --gid 65536 mediagroup

On the synology, as root in the synology domain (not the chroot), do:

vi /etc/exports
exportfs -a
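An NFS export line in the synology's /etc/exports typically looks like this (subnet and share path are placeholders):

```
/volume1/video  192.168.0.0/24(rw,sync,no_root_squash)
```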

Install web server

apt install nginx apache2-utils

Configuration files are located in /etc/nginx/sites-available/default


htpasswd -c /etc/nginx/htpasswd username   (use -c only the first time, to create the file)
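The htpasswd file can then protect any location in the nginx config (the same mechanism is used later for transmission); for instance:

```
location /private/ {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
}
```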

Install proper signed certificates with letsencrypt

In order to use the letsencrypt free ssl certificate service, I went through the following manual steps to avoid any errors:
apt install python-certbot-nginx certbot
certbot --nginx

ssl cert defaults:

cd /etc/ssl
cat openssl.cnf | grep _default
countryName_default             = FR
stateOrProvinceName_default     = IDF
localityName_default            = Paris
0.organizationName_default      = Courville.org
organizationalUnitName_default  = Software
commonName_default              = courville.org
emailAddress_default            = software@courville.org

Install news feed

In order to have centralized rss feeds one can install the tt-rss (Tiny Tiny RSS) server:

Mariadb is now the default db server under debian (bye bye mysql).

apt install mariadb-server

Install php:
apt install nginx php7.3-fpm php7.3-gd php7.3-mysql php7.3-curl php7.3-imap php7.3-mbstring php7.3-xml
apt install php-intl
vi /etc/php/7.3/fpm/php.ini

add in /etc/nginx/sites-available/default
location ~ \.php$ {
    root /var/www/html;
    try_files $uri =404;
    fastcgi_index index.php;
    fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Do not forget to add too:
index index.html index.htm index.nginx-debian.html index.php;
Start php:
service php7.3-fpm restart
service nginx restart
Create database
mysql -p -u root
mysql> CREATE USER 'ttrss'@'localhost' IDENTIFIED BY 'somepassword';
mysql> CREATE DATABASE ttrss;
mysql> GRANT ALL PRIVILEGES ON ttrss.* TO "ttrss"@"localhost" IDENTIFIED BY 'somepassword';

Get tt-rss:
cd /var/www/html
git clone https://tt-rss.org/git/tt-rss.git tt-rss
chown -R www-data:www-data tt-rss
cd tt-rss

vi config.php

Enable service with systemd:
vi /etc/systemd/system/ttrss-update.service

A minimal unit, assuming the stock update_daemon2.php entry point (adapt paths to your install):

[Unit]
Description=Tiny Tiny RSS update daemon
After=network.target mysql.service

[Service]
User=www-data
ExecStart=/usr/bin/php /var/www/html/tt-rss/update_daemon2.php

[Install]
WantedBy=multi-user.target



To activate at start
systemctl enable ttrss-update.service
systemctl --system daemon-reload

Activate it for testing:
systemctl start ttrss-update.service
systemctl status ttrss-update.service

For an update of tt-rss code just do:
cd /var/www/html/tt-rss
sudo git pull origin master
sudo chown -R www-data:www-data .
In order to add a user go through admin account configuration menu.
If an update of the db is required:
su www-data -c "/var/www/html/tt-rss/update.php --update-schema"
Test if all is fine:
./update.php --feeds --quiet


Install domoticz home automation

Follow https://www.domoticz.com/wiki/Raspberry_Pi and run through the installer:

sudo adduser domoticz
curl -sSL install.domoticz.com | sudo bash
Ports 8080 and 8443 have been chosen.
Get a list of all the switches
curl -s "http://localhost:8080/json.htm?type=command&param=getlightswitches" | grep 'Name\|idx' | paste -d " " - -

I added a couple of shell scripts, launched by a scene switch, to disable heaters and roller-shutter timers while I am present or away on vacation; they are launched via script:///usr/local/bin/domoticz-vacances.sh 0

cat /usr/local/bin/domoticz-presence.sh
#!/bin/sh
# domoticz-presence
# 1 enable timers 0 disable timers

# ACTION and level values are assumptions: adapt the timer command and the
# selector idx (XX) and levels to your own Domoticz setup
if [ "$1" = "1" ]; then
  ACTION=enabletimer
  level="http://localhost:8080/json.htm?type=command&param=switchlight&idx=XX&switchcmd=Set%20Level&level=40"  # COMFORT
elif [ "$1" = "0" ]; then
  ACTION=disabletimer
  level="http://localhost:8080/json.htm?type=command&param=switchlight&idx=XX&switchcmd=Set%20Level&level=20"  # ECO
else
  curl "http://localhost:8080/json.htm?type=command&param=addlogmessage&message=Invalid%20parameter"
  echo "Invalid parameter"
  exit 1
fi

# update heater timers
for t in 9 13; do
  curl "http://localhost:8080/json.htm?type=command&param=$ACTION&idx=$t"
done
# set heater to ECO/COMFORT
curl "$level"

cat /usr/local/bin/domoticz-vacances.sh
#!/bin/sh
# domoticz-vacances
# 1 enable timers 0 disable timers

# ACTION1/ACTION2 and level values are assumptions: adapt the timer commands
# and the selector idx (XX) and levels to your own Domoticz setup
if [ "$1" = "1" ]; then
  ACTION1=enabletimer        # heater device timers
  ACTION2=enablescenetimer   # roller shutter scene timers
  level="http://localhost:8080/json.htm?type=command&param=switchlight&idx=XX&switchcmd=Set%20Level&level=40"  # COMFORT
elif [ "$1" = "0" ]; then
  ACTION1=disabletimer
  ACTION2=disablescenetimer
  level="http://localhost:8080/json.htm?type=command&param=switchlight&idx=XX&switchcmd=Set%20Level&level=20"  # ECO
else
  curl "http://localhost:8080/json.htm?type=command&param=addlogmessage&message=Invalid%20parameter"
  echo "Invalid parameter"
  exit 1
fi

# update heater timers
for t in 2 3 4 8; do
  curl "http://localhost:8080/json.htm?type=command&param=$ACTION1&idx=$t"
done
# set heater to ECO/COMFORT
curl "$level"
# disable scene roller shutters timers
for t in 2 3 4; do
  curl "http://localhost:8080/json.htm?type=command&param=$ACTION2&idx=$t"
done

I use a Qubino Z-Wave DIN Pilot Wire to control my electric heaters through a virtual selector (multi-level) switch with the following settings:

OFF       0
HG       10
ECO      20
C-2      30
C-1      40

Now, in order to automate closing the roller shutters at a suitable time, i.e. at civil dusk but not before 8pm and not after 10pm, and only if not on vacation (checked via the vacation switch), here is the little shell script, relying on the sunwait tool, that I use to update the scene timer every day:

#!/bin/sh
# domoticz-fermeturevolets
# use sunwait to get civil dusk time and compute when to close the roller shutters,
# constrained to the interval [20h, 22h], i.e. min(max(civildusk,20h),22h)

tdusk=`/usr/local/bin/sunwait  -p 48.866667N 2.333333W | grep Civil | sed "s/^.* ends \([0-9]*\) .*$/\1/g"`
hdusk=`echo $tdusk | cut -c 1-2`
mdusk=`echo $tdusk | cut -c 3-4`
sdusk=`date -d"$hdusk:$mdusk" +%s`
notbefore=`date -d"20:00" +%s`
notafter=`date -d"22:00" +%s`
# max(sdusk,notbefore)
result=$sdusk
[ $notbefore -gt $result ] && result=$notbefore
# min(result,notafter)
[ $notafter -lt $result ] && result=$notafter
hdown=`date -d@$result +%H`
mdown=`date -d@$result +%M`

# ACTION and t are assumptions: the scene timer update command and the scene timer idx
ACTION=updatescenetimer
t=1

isvacances=`curl -s "http://localhost:8080/json.htm?type=devices&rid=43" | grep Status | sed 's/^.*Status.* : "\([^"]*\)",$/\1/g'`
if [ "$isvacances" = "Off" ]; then
  echo we are not on vacation, fine: setting shutter closing time to $hdown:$mdown
  # close roller shutters
  curl "http://localhost:8080/json.htm?type=command&param=$ACTION&idx=$t&active=true&timertype=2&hour=$hdown&min=$mdown&randomness=false&command=0&days=1234567"
else
  echo lucky us: we are on vacation, no change
fi
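The min/max clamping used above can be checked standalone; with a civil dusk of 21:37 the closing time stays at 21:37, while a later dusk would be clamped to 22:00:

```shell
# clamp a dusk time into the [20:00, 22:00] interval: min(max(dusk,20h),22h)
sdusk=$(date -d "21:37" +%s)
notbefore=$(date -d "20:00" +%s)
notafter=$(date -d "22:00" +%s)
result=$sdusk
[ "$notbefore" -gt "$result" ] && result=$notbefore   # max(dusk, 20:00)
[ "$notafter" -lt "$result" ] && result=$notafter     # min(result, 22:00)
date -d "@$result" +%H:%M
```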

Get the rpi temperature into domoticz by creating a virtual dummy sensor in the hardware tab, identifying its idx, then running this script from a crontab every 5 minutes (*/5 * * * *):

cpuTemp0=$(cat /sys/class/thermal/thermal_zone0/temp)
cpuTemp1=$(($cpuTemp0/1000))
cpuTemp2=$(($cpuTemp0/100))
cpuTempM=$(($cpuTemp2 % $cpuTemp1))
echo CPU temp"="$cpuTemp1"."$cpuTempM"'C"
# IDX is the virtual temperature sensor idx identified above
curl -s "http://localhost:8080/json.htm?type=command&param=udevice&idx=IDX&nvalue=0&svalue=$cpuTemp1.$cpuTempM"
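To see why the divide/modulo trick yields tenths of a degree, here is the arithmetic on a fixed sample value (thermal_zone0 reports millidegrees):

```shell
cpuTemp0=47312                       # sample value: 47.312°C in millidegrees
cpuTemp1=$((cpuTemp0/1000))          # integer part: 47
cpuTemp2=$((cpuTemp0/100))           # 473 = cpuTemp1*10 + first decimal digit
cpuTempM=$((cpuTemp2 % cpuTemp1))    # 473 % 47 = 3, the first decimal digit
echo "$cpuTemp1.$cpuTempM"
```

The modulo works because cpuTemp2 = 10*cpuTemp1 + d, so cpuTemp2 % cpuTemp1 = d whenever d < cpuTemp1, which holds for any temperature of 10°C or more.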

rfplayer plugin installation

The basic rfplayer support is very limited, in order to get a better support of this RF dongle install the following plugin: https://github.com/sasu-drooz/Domoticz-Rfplayer

su - domoticz
mkdir -p domoticz/plugins/rfplayer
curl -L https://raw.githubusercontent.com/sasu-drooz/Domoticz-Rfplayer/master/plugin.py > domoticz/plugins/rfplayer/plugin.py
chmod 755 domoticz/plugins/rfplayer/plugin.py
sudo systemctl restart domoticz.service
Now install the RFplayer plugin under domoticz setup (to be distinguished from the legacy ZiBlue RFPlayer USB) and make sure you tick the "Enable learning mode" option.

Link your sensors/actuators with Google Home via google actions

Expose domoticz to Google Home using dzga https://github.com/DewGew/Domoticz-Google-Assistant and following https://www.domoticz.com/wiki/Google_Assistant instructions.
Simply run the installer:
su - domoticz
bash <(curl -s https://raw.githubusercontent.com/DewGew/dzga-installer/master/install.sh)
sudo systemctl enable dzga
sudo systemctl start dzga
sudo systemctl stop dzga
Check if service is running:
sudo systemctl status dzga
To update run installer again:
bash <(curl -s https://raw.githubusercontent.com/DewGew/dzga-installer/master/install.sh)
Make domoticz visible from outside:
vi /etc/nginx/sites-available/default
location /domoticz/ {
rewrite ^/domoticz/(.*)$ /$1 break;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_pass http://localhost:8080/; # local domoticz port
proxy_read_timeout 90;
}

To configure the google actions, follow ONLY the instructions available on the "Setup Actions on Google Console Instructions" tab and triple check every step. Additionally, use a capital 'C' in ClientID and ClientSecret.

In order to resync the devices you can call https://home.courville.org/assistant/sync

Solve usb serial devices enumeration issue at boot

The problem is that I have zwave, rfxcom RFXtrx433E USB, tic, Conbee II zigbee and smartreader usb serial dongles, and I want to be sure that I can distinguish each of them independently of the enumeration process.
In order to overcome this, create proper udev rules by editing /etc/udev/rules.d/50-usb-marc.rules with the following content:

SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="ttyACM-zwave", GROUP="root", MODE="0666"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6015", SYMLINK+="ttyUSB-tic", GROUP="root", MODE="0666"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{product}=="Smartreader2 plus", SYMLINK+="ttyUSB-smartreader", GROUP="root", MODE="0666"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{product}=="RFXtrx433", SYMLINK+="ttyUSB-rfxcom", GROUP="root", MODE="0666"
SUBSYSTEM=="tty", ATTRS{idVendor}=="1cf1", ATTRS{idProduct}=="0030", ATTRS{product}=="ConBee II", SYMLINK+="ttyACM-zigbee", GROUP="root", MODE="0666"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{product}=="RFPLAYER", SYMLINK+="ttyACM-rfplayer", GROUP="root", MODE="0666"

In your case, adapt the idVendor and idProduct values depending on the output of lsusb.
Note that the SYMLINK name needs to be prefixed by ttyUSB in order to be recognized by domoticz (no exotic names), as specified here.

Zigbee gateway on the raspberry and domoticz integration

Conbee II USB stick replaces my Philips Hue bridge, installation instructions are available here: https://phoscon.de/en/raspbee/install#raspbian
On a headless configuration:
wget -O - http://phoscon.de/apt/deconz.pub.key | sudo apt-key add -
sh -c "echo 'deb http://phoscon.de/apt/deconz $(lsb_release -cs) main' > /etc/apt/sources.list.d/deconz.list"
apt update
apt install deconz
systemctl disable deconz-gui
systemctl stop deconz-gui
vi /etc/default/deconz
DECONZ_OPTS="--http-port=8090 --ws-port=50733 --upnp=0 --auto-connect=1 --dbg-error=1 --dev=/dev/ttyACM-zigbee"
vi /etc/systemd/system/multi-user.target.wants/deconz.service
# run deCONZ as the domoticz user
User=domoticz
ExecStart=/usr/bin/deCONZ -platform minimal ${DECONZ_OPTS}
systemctl daemon-reload
systemctl enable deconz
systemctl start deconz
systemctl status deconz
curl -H 'Content-Type: application/json' -X PUT -d '{"websocketnotifyall": true}' http://localhost:8090/api/APIKEY/config

Configure the zigbee gateway via the Phoscon web app served on port 8090.
In order to create an API key to remotely access the gateway:
python3 API_KEY.py create
If you are lost you can retrieve the gateway configuration from the LAN querying: https://dresden-light.appspot.com/discover 
Gateway configuration can be viewed via
curl -X GET http://localhost:8090/api/$APIKEY/config  | jq '.' | less
Manual testing:
deCONZ -platform minimal --http-port=8090 --ws-port=50733 --upnp=0 --auto-connect=1 --dbg-error=1 --dev=/dev/ttyACM-zigbee
Integration into domoticz is achieved following https://github.com/Smanar/Domoticz-deCONZ 
su - domoticz
cd domoticz/plugins
git clone https://github.com/Smanar/Domoticz-deCONZ
chmod +x Domoticz-deCONZ/plugin.py

Getting away from a fixed IP survival guide (sigh)
Since my beloved ISP (free) was taking too long to provide me with fiber optic access (yes, xDSL is too slow), I switched back to the evil Orange, which tricked me with an attractive web offer.

Of course I had to give up my fixed IP address in this process which was a big problem for me.
In order to overcome this problem I had to go through the following steps:
  • modify gandi DNS entry to redirect www.courville.org to ghs.googlehosted.com using a CNAME
  • have google apps map my sub site to https://sites.google.com/a/courville.org/courville in the google apps console -> applications -> sites -> mapping
  • create in gandi a DNS entry home.courville.org for my IP address, with a small TTL (e.g. 300s i.e. 5 min)
  • have this record updated automatically by the dyn-gandi python3 script, using a gandi livedns production API key, run from a crontab job
git clone https://github.com/Danamir/dyn-gandi
cd dyn-gandi
python3 ./setup.py install
python3 ./setup.py develop

vi /etc/dyn-gandi.ini
url = https://dns.api.gandi.net/api/v5
; Generate your Gandi API key via : https://account.gandi.net/en/users/<user>/security
key = lalala

domain = courville.org
; comma-separated records list
records = @,home
ttl = 3600

; Choose an IP resolver : either plain text, or web page containing a single IP
resolver_url = http://ipecho.net/plain
; resolver_url = http://ifconfig.me/ip
; resolver_url = http://www.mon-ip.fr

; Optional alternative IP resolver, called on timeout
resolver_url_alt =

crontab -e
*/5 *   * * *                   /usr/local/bin/dyn_gandi --conf=/etc/dyn-gandi.ini >/dev/null 2>&1
  • since I am a happy user of synology, also use the synology dyndns free service and make courville.synology.me point to my home

Install git server
The easiest lightweight solution is gitolite, following the instructions of https://www.vultr.com/docs/setup-git-repositories-with-gitolite-on-debian-wheezy

adduser gitolite
su gitolite
git clone git://github.com/sitaramc/gitolite
mkdir -p $HOME/bin
gitolite/install -to $HOME/bin
As user
scp yourkey.pub gitolite@localhost:yourname.pub
As gitolite
bin/gitolite setup -pk yourname.pub
As user clone gitolite admin repo
git clone gitolite@localhost:gitolite-admin
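Repositories and access rights are then managed by editing conf/gitolite.conf inside the cloned gitolite-admin repo and pushing; a minimal sketch (repo and user names are placeholders):

```
# conf/gitolite.conf
repo myproject
    RW+     =   yourname
    R       =   otheruser
```

Pushing the gitolite-admin repo creates the new repository automatically.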

Subtitles downloader
subliminal is a python tool that downloads subtitles from various sources for video files.

cd /agraver/git
apt install python python-setuptools python-pip
git clone https://github.com/Diaoul/subliminal
cd subliminal
python setup.py install

Subtitle download can be automated: first download english subs if they are the only ones available, then replace them with french ones once available, using a little script like this one:


#!/bin/sh
# GETSUB path and the single-mode subtitle name are assumptions: adapt to your install
GETSUB=/usr/local/bin/subliminal

[ -z "$TV_DIR" ] && TV_DIR=/media/nfs/ds414/video/serie
[ -z "$BACKTIME" ] && BACKTIME=14
cd $TV_DIR
# try to get the subtitle only for 14 days to avoid exploding list of files to process
for file in $(find . -mtime -${BACKTIME} -type f \( -iname \*.avi -o -iname \*.mkv -o -iname \*.mp4 \) ); do
  subtitle="${file%.*}.srt"
  if [ ! -f "$subtitle" ]; then
    echo processing $file
    # download without .fr.srt or .en.srt to be sure it is from addic7ed in french first in -s single mode
    $GETSUB -s -l fr -p addic7ed --addic7ed-username username --addic7ed-password password -q "$file" && rm "$file".en.srt
    # if it failed with addic7ed get from any other source but with .fr.srt and .en.srt
    [ ! -f "$subtitle" ] && $GETSUB -l en --addic7ed-username username --addic7ed-password password -q "$file"
  fi
done

If you want to update subliminal in the future just do:

cd /agraver/git/subliminal
git pull
python setup.py install

TV series file renamer
tvnamer is a very powerful python program that enables to rename tv series.

cd /agraver/git
git clone https://github.com/dbr/tvnamer
cd tvnamer
python setup.py install

An example of user configuration of tvnamer capturing the most useful features of the tool can be found below:

cat .tvnamer.json
{
    "language": "en",
    "search_all_languages": false,
    "always_rename": false,
    "batch": true,
    "episode_separator": "-",
    "episode_single": "%02d",
    "filename_with_date_and_episode": "%(seriesname)s-e%(episode)s-%(episodename)s%(ext)s",
    "filename_with_date_without_episode": "%(seriesname)s-e%(episode)s%(ext)s",
    "filename_with_episode": "%(seriesname)s-s%(seasonno)02de%(episode)s-%(episodename)s%(ext)s",
    "filename_with_episode_no_season": "%(seriesname)s-e%(episode)s-%(episodename)s%(ext)s",
    "filename_without_episode": "%(seriesname)s-s%(seasonno)02de%(episode)s%(ext)s",
    "filename_without_episode_no_season": "%(seriesname)s-e%(episode)s%(ext)s",
    "lowercase_filename": false,
    "move_files_confirmation": true,
    "move_files_destination": "/media/nfs/ds414/video/serie/%(seriesname)s/%(seriesname)s-s%(seasonnumber)02d",
    "move_files_enable": true,
    "multiep_join_name_with": ", ",
    "normalize_unicode_filenames": false,
    "recursive": false,
    "custom_filename_character_blacklist": ":<>?*",
    "replace_invalid_characters_with": "_",
    "move_files_fullpath_replacements": [
        {"is_regex": false, "match": " ", "replacement": "_"},
        {"is_regex": false, "match": "_-_", "replacement": "-"},
        {"is_regex": false, "match": ":", "replacement": ""},
        {"is_regex": false, "match": "Serie's_name", "replacement": "Series_Name"}
    ],
    "output_filename_replacements": [
        {"is_regex": false, "match": "?", "replacement": ""},
        {"is_regex": false, "match": " ", "replacement": "_"},
        {"is_regex": false, "match": "_-_", "replacement": "-"},
        {"is_regex": false, "match": ":", "replacement": ""},
        {"is_regex": false, "match": "Serie's_name", "replacement": "Series_Name"}
    ],
    "input_filename_replacements": [
        {"is_regex": true, "match": "Serie.US", "replacement": "Serie"},
        {"is_regex": true, "match": "Serie([^:])", "replacement": "Serie_expanded_name\\1"}
    ]
}
If you want to update tvnamer in the future just do:

cd /agraver/git/tvnamer
git pull
python setup.py install

Transmission tweaks
Install transmission and command line related tools via:

apt install transmission transmission-daemon transmission-cli transmission-remote-cli python-transmissionrpc

Transmission daemon configuration files are located in /var/lib/transmission-daemon/info. Make sure to stop transmission via its web interface or with the commands below before editing this file, so that your modifications are taken into account:

service transmission-daemon stop
vi /var/lib/transmission-daemon/info/settings.json
service transmission-daemon start
  • learn to use transmission-remote script
  • in order to execute some actions at the end of a file download a script can be triggered e.g. for moving files around based on tracker name
    "script-torrent-done-enabled": true,
    "script-torrent-done-filename": "/usr/local/bin/transmission-done.sh",

With /usr/local/bin/transmission-done.sh for instance being:





#!/bin/sh
# values below are placeholders/assumptions: adapt credentials, paths and the
# tv-show matching pattern to your own setup
DBGF=/var/log/transmission-done.log
HOST=localhost:9091
DOWNLOAD_DIR=/media/nfs/ds414/download/transmission
TV_SHOW_DIR=serie
TV_SHOW_ID='[sS][0-9]+[eE][0-9]+'
isprivate=false; istrackera=false; istrackerb=false; istvshow=false

if [ -z "${TR_TORRENT_NAME}" -o -z "${TR_TORRENT_ID}" -o -z "${TR_TORRENT_DIR}" ]; then
  # not called by transmission: a torrent id must be given as argument
  [ -z "$1" ] && exit 1
  TR_TORRENT_ID=$1
fi

tremote () {
  /usr/bin/transmission-remote -n USER:PASSWORD $HOST -t ${TR_TORRENT_ID} $* # 2>&1 | tee -a $DBGF
}

if [ -z "${TR_TORRENT_NAME}" -o -z "${TR_TORRENT_DIR}" ]; then
  TR_TORRENT_NAME=$( tremote -t $1 -i | grep Name: | tr -s ' ' | sed 's/^ *//' | cut -s -d " " -f 2 )
  TR_TORRENT_DIR=$( tremote -t $1 -i | grep Location: | tr -s ' ' | sed 's/^ *//' | cut -s -d " " -f 2)
fi

echo `date +%Y-%m-%d\ %H:%M` processing id ${TR_TORRENT_ID} file ${TR_TORRENT_NAME} in dir ${TR_TORRENT_DIR} 2>&1 | tee -a $DBGF

( tremote -i | grep "Public torrent" | grep -qi No ) && isprivate=true
( tremote -it | grep -qi "tracker.a" ) && istrackera=true
( tremote -it | grep -qi "tracker.b" ) && istrackerb=true
# if flexget sets Location to serie then it must be a tvshow
( tremote -i | grep Location | grep -q $TV_SHOW_DIR ) && istvshow=true
if ( echo "${TR_TORRENT_NAME}" | grep -Eqi "${TV_SHOW_ID}" ); then
  ( ( echo "${TR_TORRENT_NAME}" | grep -Eqi "REPACK" ) || ( echo "${TR_TORRENT_NAME}" | grep -Eqi "PROPER" ) ) && istvshowrpk=true || istvshowrpk=false
fi

if $isprivate; then
  # private torrent
  if $istrackera; then
    echo "trackera"
    mkdir -p "$DOWNLOAD_DIR/trackera"
    tremote --move "$DOWNLOAD_DIR/trackera"
  elif $istrackerb; then
    echo "trackerb"
    mkdir -p "$DOWNLOAD_DIR/trackerb"
    tremote --move "$DOWNLOAD_DIR/trackerb"
  fi
else
  # public torrent
  if $istvshow; then
    echo "tvshow detected, move and remove ${TR_TORRENT_DIR}/${TR_TORRENT_NAME}" "${TV_SHOW_DIR}" 2>&1 | tee -a $DBGF
    tremote --move "$DOWNLOAD_DIR/serie"
    tremote -r
  else
    echo "other public"
    tremote -S
    mkdir -p "$DOWNLOAD_DIR/public"
    tremote --move "$DOWNLOAD_DIR/public"
  fi
fi
  • periodically you can check if your download and seed is complete based on your seedratio and then decide to move the completed files somewhere and remove it from transmission. This script does this for you:

tremote () {
  transmission-remote -n user:password $*
}

for t in `tremote -l | grep Done | sed -e 's/^ *//' | sed 's/\*//' | cut -s -d " " -f1`; do
  if ( tremote -t $t -it | grep -qi "tracker.a" ); then
    echo processing $t
    tremote -t $t --move "/media/nfs/ds414/files/trackera"
    tremote -t $t -r
  fi
done
  • enabling the blocklist in transmission is a sensible thing to do; for this add the following options to /var/lib/transmission-daemon/info/settings.json
    "blocklist-enabled": true,
    "blocklist-url": "http://list.iblocklist.com/?list=bt_level1&fileformat=p2p&archiveformat=gz",
You can periodically update the blocklist via a cron job by editing /etc/crontab (the line below uses the system crontab format, with a user field) and adding:
30      19      *       *       *               root    /usr/bin/transmission-remote -n user:password --blocklist-update
and restarting cron:
service cron restart

  • to keep a fair use of the ADSL connection and let others have good traffic without too much latency, edit /var/lib/transmission-daemon/info/settings.json
"download-queue-size": 5,
"peer-limit-per-torrent": 5,
"peer-limit-global": 30,

In order to have good permission rights, in the synology domain add a transmission group with gid 107 in /etc/group, matching the gid of debian-transmission in the debian domain. Add the users that should have access to the download directory to this group in the synology domain. Then, in the debian domain, fix the directory rights to allow writing for debian-transmission:

addgroup debian-transmission mediagroup
chown -R debian-transmission:mediagroup /media/nfs/ds414/download
chmod g+rwX -R /media/nfs/ds414/download

  • transmission logs suggest udp and utp buffer optimizations (logs are enabled by adding --logfile /media/nfs/ds414/download/transmission/transmission.log to OPTIONS in /etc/default/transmission-daemon); put in /etc/sysctl.conf:
echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf
echo "net.core.wmem_max = 1048576" >> /etc/sysctl.conf
sysctl -p

Test with transmission-daemon in foreground via transmission-daemon --foreground

Make sure ownership is correct via chown -R debian-transmission:debian-transmission /var/lib/transmission/* /etc/transmission-daemon/*

Make transmission visible from outside:
vi /etc/nginx/sites-available/default
location /transmission/ {
  auth_basic "Restricted";
  auth_basic_user_file /etc/nginx/htpasswd;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Scheme $scheme;
  proxy_pass http://localhost:9091/transmission/; # local transmission rpc port
  proxy_read_timeout 90;
}

Automatic rss based file downloading

flexget is a good tool and easy to configure to automate file download based on rss feed matching names.
It can be installed by:
apt install sqlite
easy_install flexget
easy_install --upgrade transmissionrpc
A typical user configuration can be the following:
cat .flexget/config.yml
# nesting below is reconstructed and approximate: adapt to your flexget version
templates:
  tv:
    transmission:
      host: localhost
      port: 9091
      username: username
      password: password
      addpaused: no
      path: /media/nfs/ds414/download/transmission/serie
      include_subs: yes
    email:
      active: True
      from: flexget@yourdomain.com
      to: user@yourdomain.com
      smtp_host: smtp.yourdomain.com
      smtp_port: 25
      smtp_login: false
    quality: webrip|webdl|hdtv <720p
    content_filter:
      reject:
        - '*.avi'
        - '*.nfo'
        - '*.sfv'
        - '*[sS]ample*'
        - '*.txt'
    series:
      - Blah Confidential
      - Serie name one
      - Serie name two

tasks:
  siteone:
    rss: http://streamsiteone.com/feed/
    template: tv
    priority: 10
  sitetwo:
    rss: http://streamsitetwo.com/?cat=9
    template: tv
    priority: 20

In order to check flexget configuration you can issue (when it fails have a look here):

flexget check

Launching the flexget task can be automated with a cron job; edit /etc/crontab (system crontab format, with a user field) and add:

0 07 * * * root /usr/local/bin/flexget --cron >/dev/null 2>&1

Relaunch cron service through:

service cron restart

If you want to update flexget in the future just do:

pip install --upgrade flexget
#pip install --force-reinstall --ignore-installed flexget
#pip install --force-reinstall --ignore-installed transmissionrpc

If you experience some issues with seen status or database you can reset it with the following command that should be used only as a last resort:

flexget database reset --sure

Backup your data with history: rsnapshot

rsnapshot is an rsync based backup system that I like: it is simple yet efficient. 

  • Install it with:
apt install rsnapshot
My configuration is the following /etc/rsnapshot.conf:

config_version  1.2
snapshot_root   /media/nfs/backup/rsnapshot
cmd_cp          /bin/cp
cmd_rm          /bin/rm
#cmd_rsync      /usr/syno/bin/rsync
cmd_rsync       /usr/bin/rsync
cmd_ssh /opt/bin/ssh
cmd_du          /usr/bin/du
interval        daily   7
interval        weekly  4
interval        monthly 6
link_dest       1
verbose         2
loglevel        3
logfile         /media/nfs/backup/rsnapshot.log
#rsync_long_args -av --progress --delete --numeric-ids --relative --delete-excluded --syno-acl
rsync_long_args -av --progress --delete --numeric-ids --relative --delete-excluded
lockfile        /var/run/rsnapshot.pid
# little trick in .ssh/config to map syno to host localhost with port 2222. WARNING: you need to enable backup services i.e. rsync on synology's web interface
backup    root@syno:/    synosys/       exclude_file=/etc/incexcl-synosys
backup    /              debiansys/     exclude_file=/etc/incexcl-debiansys
backup    /              debiandata/    exclude_file=/etc/incexcl-debiandata
backup    /              synodata/      exclude_file=/etc/incexcl-synodata

In order to back up the synology system domain, I use an alias for syno in the .ssh/config file:

# for rsnapshot to access ds414:2222
host syno
  port 2222
  hostname ds414

You will also need to enable rsync (network backup) service on synology web interface through main menu->backup & replication->backup services specifying SSH encryption port as 2222.
You can notice that, to be safe, I am performing the backup to an external USB disk that I attach to the synology (located in /volumeUSB1/usbshare).
All the directories (include and exclude) are specified in the /etc/incexcl-debiansys file which has the following format:

+ /etc/
+ /root/
+ /usr/
+ /usr/local/
+ /usr/local/etc/
- /usr/local/*
- /usr/*
+ /var/
+ /var/www/
+ /var/spool/
+ /var/spool/cron/
- /var/spool/*
+ /var/lib/
+ /var/lib/mysql/
- /var/lib/*
- /var/*
- /*
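The ordering of these rules matters: rsync applies the first matching pattern, so parent directories must be included before their siblings are excluded, and anything unmatched is transferred by default. A quick sanity check of that logic with throwaway directories (a sketch; assumes rsync is installed, and the paths are made up):

```shell
#!/bin/sh
# Toy check of rsync first-match-wins filter ordering, same shape as the
# file above: keep /var/lib/mysql, drop the rest of /var/lib and of /.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/var/lib/mysql" "$src/var/lib/apt" "$src/tmp"
echo db > "$src/var/lib/mysql/data"
echo cache > "$src/var/lib/apt/lists"
rsync -a \
  --include='/var/' --include='/var/lib/' --include='/var/lib/mysql/' \
  --exclude='/var/lib/*' --exclude='/var/*' --exclude='/*' \
  "$src/" "$dst/"
find "$dst" -type f   # only var/lib/mysql/data should remain
```

Swapping the include and exclude lines would exclude /var/lib/mysql as well, since /var/lib/* would then match first.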

Synology's system configuration file tree to be backed up is described in /etc/incexcl-synosys:

+ /etc/
+ /root/
+ /usr/
+ /usr/syno/
+ /usr/syno/etc/
+ /usr/syno/apache/
- /usr/syno/*
- /usr/*
+ /opt/
+ /opt/etc/
+ /opt/share/
+ /opt/share/lib/
- /opt/share/*
+ /opt/lib/
+ /opt/lib/ipkg/
- /opt/lib/*
- /opt/*
- /*
  • Install and configure a simple mail client: mutt to receive notifications of failure
apt install mutt

Edit /root/.muttrc to reflect your smtp server and credentials (here the domain is a google apps business one):

unset metoo
unset save_empty
my_hdr From: backup <user@yourdomain.com>
my_hdr X-Organization: YOURDOMAIN
set from="user@yourdomain.com"
set imap_user="user@yourdomain.com" imap_pass="yourPassword"
set smtp_url="smtp://user\@yourdomain.com@smtp.gmail.com:587/" smtp_pass="yourPassword"
set folder=imaps://user\@yourdomain.com@imap.gmail.com/

  • Automate your backups by launching rsnapshot from root's crontab: issue sudo crontab -e and add the following (note that, unlike /etc/crontab, the user crontab has no who column; mutt sends a report only when rsnapshot fails):
#minute hour    mday    month   wday    command
55      23      *       *       *       /usr/bin/rsnapshot -v daily || mutt -s "bkp `date +%Y%m%d` daily" user@yourdomain.com < /media/nfs/backup/rsnapshot.log
0       02      *       *       0       /usr/bin/rsnapshot -v weekly || mutt -s "bkp `date +%Y%m%d` weekly" user@yourdomain.com < /media/nfs/backup/rsnapshot.log
0       04      1       *       *       /usr/bin/rsnapshot -v monthly || mutt -s "bkp `date +%Y%m%d` monthly" user@yourdomain.com < /media/nfs/backup/rsnapshot.log

Restart cron:

service cron restart
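The || mutt construct in the crontab entries above only fires when rsnapshot exits with a non-zero status. Here is the pattern in isolation (a sketch using echo in place of mutt):

```shell
#!/bin/sh
# "Notify only on failure" pattern: the right-hand side of || runs only
# when the left-hand command exits with a non-zero status.
log=$(mktemp)
notify() { echo "backup failed: $*" >> "$log"; }
true  || notify daily     # success: no notification
false || notify weekly    # failure: one line appended
cat "$log"                # -> backup failed: weekly
```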

Full manual backup

On the synology, in the synology domain:

rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/incexcl-imac marc@imac.courville.org:/ /volumeUSB2/usbshare/backup/imac/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-synodata / /volumeUSB2/usbshare/backup/synodata/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-synosys / /volumeUSB2/usbshare/backup/synosys/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-debiansys / /volumeUSB2/usbshare/backup/debiansys/
rsync -av --progress --delete --numeric-ids --exclude-from=/volume1/incexcl-mediatiny /volume1/video/hd/ /volumeUSB2/usbshare/video/

for i in ds211-6T-ipkg hyperion-linux ggdrive mail; do
  rsync -av --progress --delete --numeric-ids --relative /volume1/backup/$i/ /volumeUSB2/usbshare/backup/$i/
done

On imarc:
rsync -av --progress --delete --exclude-from=/Users/marc/dev/backup/incexcl-imarc / /Volumes/Backup/backup/mba/

To sync two disks:

rsync -av --progress --delete --numeric-ids source/ destination/

For maximum speed, following https://gist.github.com/KartikTalwar/4393116, the fastest setting on the synology is to disable compression and use the chacha20 cipher:
rsync -aHXxv --numeric-ids --delete --progress --exclude-from=/volume1/incexcl-synomum -e "ssh -T -c chacha20-poly1305@openssh.com -o Compression=no -x" /volume1/ moulinvert:/volume1/
For fast local backups you can use tar with buffer: (cd / && tar cPpSslf -) | buffer -m 64m | (cd /mnt/toto/backup/ && tar xvPpSslf -)
Fast non-incremental backup: on the synology, synology domain, install:
ipkg install pv netcat less gcc make zlib rsync tar wget buffer
wget http://zlib.net/pigz/pigz-2.3.3.tar.gz
tar zxvf pigz-2.3.3.tar.gz
cd pigz-2.3.3
sed -i -e "s/^CC=cc/CC=gcc/g" Makefile
make
cp -f pigz unpigz /opt/bin/

# non compressible source (run on the sending host)
dir=directory; tar cPpSslf - $dir | pv -s `du -sb $dir | cut -f 1` | nc -l -p 12121
# on the receiving host (replace senderhost with the sender's hostname)
nc senderhost 12121 | tar xvPpSslf -

# compressible source (run on the sending host)
dir=directory; tar cPpSslf - $dir | pv -s `du -sb $dir | cut -f 1` | pigz | nc -q 10 -l -p 12121
nc -w 10 senderhost 12121 | pigz -d | tar xvPpSslf -
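The same tar-to-tar piping works purely locally, without netcat; a minimal sketch between two throwaway directories:

```shell
#!/bin/sh
# Local tar-to-tar pipe: same idea as the netcat variants above, but the
# archive stream never leaves the machine.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file.txt"
mkdir "$src/sub"; echo nested > "$src/sub/inner.txt"
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)
cat "$dst/sub/inner.txt"   # -> nested
```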

Backup your emails with offlineimap

Though there is a debian package for offlineimap, you are better off installing the latest version from git since it can sync the gmail labels:
git clone git://github.com/OfflineIMAP/offlineimap.git
cd offlineimap
python ./setup.py install
cd ..
easy_install --upgrade offlineimap
Now you can automate sync for an everyday scheduling:
crontab -e
00 22  * * *    /usr/local/bin/offlineimap -o > /dev/null 2>&1

Mutt is the ultimate email client

I have been using mutt since 1995 and still use it (less extensively though). Please refer to the following good links in order to have it running smoothly with gmail: consolify your gmail with mutt and the homely mutt.

Here are the tools that I installed:

aptitude install mutt-patched urlview muttprint muttprint-manual w3m notmuch par
easy_install goobook

Files renaming

I use two tools that I find handy:
  • perl-file-rename, which I find extremely useful and powerful since it handles regexps (it is installed by default on debian). One example is renaming, in a directory, a series of episodes whose names start with the episode number:

rename 's/^([0-9]*) - /serie-s01e$1-/g' *.mp4
rename 's/ /_/g' *
rename "s/l_homme/l\'homme/g" *
  • convmv to get rid of non-UTF-8 characters:
apt install convmv

Here is an example of how to use convmv recursively on current directory to fix encoding:

convmv -f iso-8859-1 -t utf8 --notest -r .

  • files with question marks really seem to upset samba shares; to remove them you can use the following renaming one-liner:
for i in `find . | grep "\?"`; do rename "s/\?//g" "$i"; done
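For ad-hoc cases, the same kind of cleanup (spaces to underscores, question marks dropped) can be done with plain shell and tr, without perl rename (a sketch on a throwaway directory; the filename is made up):

```shell
#!/bin/sh
# Rename files containing spaces and question marks using portable tools.
dir=$(mktemp -d)
touch "$dir/my file?.mp4"
for f in "$dir"/*; do
  base=$(basename "$f")
  # replace spaces with underscores, delete question marks
  clean=$(printf '%s' "$base" | tr ' ' '_' | tr -d '?')
  [ "$base" != "$clean" ] && mv "$f" "$dir/$clean"
done
ls "$dir"   # -> my_file.mp4
```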

Low power Ethernet based headless server
In order to optimize power consumption, let's disable Wi-Fi and Bluetooth:

echo "dtoverlay=disable-wifi" | sudo tee -a /boot/config.txt
echo "dtoverlay=disable-bt" | sudo tee -a /boot/config.txt
sudo systemctl disable hciuart
LEDs consume power too, so dark mode it is: add in /boot/config.txt:
##turn off ethernet port LEDs
##turn off mainboard LEDs
##disable ACT LED
##disable PWR LED
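The comments above correspond to dtparam lines in /boot/config.txt; on an rpi4 the commonly documented values are the following (a sketch; double-check against your firmware documentation, since LED parameter names differ across models):

```ini
## turn off ethernet port LEDs (rpi4)
dtparam=eth_led0=4
dtparam=eth_led1=4
## disable ACT LED
dtparam=act_led_trigger=none
dtparam=act_led_activity=low
## disable PWR LED
dtparam=pwr_led_trigger=none
dtparam=pwr_led_activity=low
```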
Power off the HDMI output (-p to turn it on again):
/usr/bin/tvservice -o

Automount known exfat-formatted USB disks and make them available via samba

Add exfat support:
apt install exfat-fuse exfat-utils
Add a removable section in /etc/auto.master:
vi /etc/auto.master
/media/removable /etc/auto.removable --timeout=2,sync,nodev,nosuid,uid=1031
Automount only known disks; identify the disk first:
blkid /dev/sda1
/dev/sda1: UUID="A5DA-5739" TYPE="exfat" PTTYPE="dos" PARTUUID="5fc68fae-7d99-49d0-8a3c-2d56497223dd"
udevadm info --query all --path /sys/block/sda/sda1/ | grep UUID
E: ID_PART_TABLE_UUID=7bbc5ab5-8ae1-4405-9981-cc9d458bb09b
E: ID_PART_ENTRY_UUID=5fc68fae-7d99-49d0-8a3c-2d56497223dd
Now activate it in autofs:
vi /etc/auto.removable
nas     -fstype=auto    UUID=A5DA-5739
Install samba
apt install samba samba-common-bin
vi /etc/samba/smb.conf
security = user
read only = no
[nas]
  comment = NAS
  path = /media/removable/nas
  valid users = media
  force group = users
  create mask = 0660
  directory mask = 0771
  read only = no
smbpasswd -a media
/etc/init.d/smbd restart

Make rpi easily discoverable via bonjour

For this use avahi:
apt install avahi-daemon
vi /etc/avahi/avahi-daemon.conf
service avahi-daemon restart
Now you can browse to http://hyperion.local or ping the rpi via ping hyperion.local

newsbin support: sabnzbd

wget http://www.rarlab.com/rar/unrarsrc-5.9.3.tar.gz
tar zxvf unrarsrc-5.9.3.tar.gz
cd unrar
make
install -v -m755 unrar /usr/local/bin
apt install sabnzbdplus
vi /etc/default/sabnzbdplus
cd /media/nfs/ds414/download
chown -R media:mediagroup sabnzb
chmod -R ug+rwX sabnzb
cd sabnzb
mkdir tv movie audio software complete incomplete incoming
vi /etc/nginx/sites-available/default
    location /sabnzbd {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://localhost:6080/sabnzbd;
        proxy_read_timeout 90;
    }
Alternatively, you can use nzbget: apt install nzbget

sonarr for nzb TV shows

Sonarr is a smart PVR for newsgroup and bittorrent users https://github.com/Sonarr/Sonarr
apt install apt-transport-https dirmngr
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
apt install mono-devel
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0xA236C58F409091A18ACA53CBEBFF6B99D9B78493
echo "deb http://apt.sonarr.tv/ master main" | sudo tee /etc/apt/sources.list.d/sonarr.list
apt update
apt install nzbdrone
chown -R media:mediagroup /opt/NzbDrone
vi /etc/systemd/system/sonarr.service
[Unit]
Description=Sonarr Daemon
After=syslog.target network-online.target

[Service]
# running as the media user, matching the chown above
User=media
Group=mediagroup
ExecStart=/usr/bin/mono --debug /opt/NzbDrone/NzbDrone.exe
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl enable sonarr.service
systemctl start sonarr.service
systemctl status sonarr.service
vi /etc/nginx/sites-available/default
    location /sonarr {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://localhost:8989;
        proxy_read_timeout 90;
    }

In sonarr web interface do set in settings general url base to /sonarr

radarr for nzb movies

Radarr is a fork of Sonarr that handles movies à la Couchpotato https://github.com/Radarr/Radarr
cd /opt/
apt install curl
curl -L -O $( curl -s https://api.github.com/repos/Radarr/Radarr/releases | grep linux.tar.gz | grep browser_download_url | head -1 | cut -d \" -f 4 )
tar -xvzf Radarr.develop.*.linux.tar.gz
chown -R media:mediagroup /opt/Radarr
vi /etc/systemd/system/radarr.service
[Unit]
Description=Radarr Daemon
After=syslog.target network.target

[Service]
# running as the media user, matching the chown above
User=media
Group=mediagroup
ExecStart=/usr/bin/mono --debug /opt/Radarr/Radarr.exe -nobrowser
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl enable radarr.service
systemctl start radarr.service
systemctl status radarr.service
vi /etc/nginx/sites-available/default
    location /radarr {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://localhost:7878;
        proxy_read_timeout 90;
    }

In radarr web interface do set in settings general url base to /radarr

Acrarium backend

Acrarium is a crash-report backend for Android applications using acra.

mysql -u root -p
create database acrarium;
GRANT ALL PRIVILEGES ON acrarium.* TO 'acrarium'@'localhost' identified by 'PASSWORD';

cat /home/media/.acra/application.properties
apt install default-jdk

wget https://github.com/F43nd1r/Acrarium/releases/download/v0.10.7/acrarium-0.10.7.jar
mkdir /var/lib/acrarium
cp acrarium-0.10.7.jar /var/lib/acrarium
vi /etc/systemd/system/acrarium.service
[Unit]
Description=Acrarium Daemon
After=syslog.target network.target

[Service]
# running as the media user, consistent with the other services above
User=media
Group=mediagroup
ExecStart=/usr/bin/java -jar /var/lib/acrarium/acrarium-0.10.7.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl enable acrarium.service
systemctl start acrarium.service
systemctl status acrarium.service
vi /etc/nginx/sites-available/default
    location /acrarium {
        rewrite ^/acrarium/?(.*) /$1 break;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://localhost:9090;
        proxy_read_timeout 90;
    }

BD (comic book) streaming

YACReader seems to be one solution:

cd unarr-deb
dpkg-buildpackage -b -rfakeroot -us -uc
dpkg -i ../libunarr_1.0.1-2_armhf.deb
dpkg -i ../libunarr-dev_1.0.1-2_armhf.deb
cd yacreader-deb
apt install qtmultimedia5-dev qtscript5-dev qtdeclarative5-dev libqt5multimedia5
dpkg-buildpackage -b -rfakeroot -us -uc
vi /etc/systemd/system/YACReaderLibraryServer.service
[Unit]
Description=YACReaderLibraryServer Daemon
After=syslog.target network.target

[Service]
# running as the media user, consistent with the other services above
User=media
Group=mediagroup
ExecStart=/usr/bin/YACReaderLibraryServer start
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl enable YACReaderLibraryServer.service
systemctl start YACReaderLibraryServer.service
systemctl status YACReaderLibraryServer.service
cat /home/media/.local/share/YACReader/YACReaderLibrary/YACReaderLibrary.ini


Docker support

Install docker and docker-compose (the pip install of docker-compose needs the libffi and libssl headers):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi
# you need to logout/login
sudo apt install libffi-dev libssl-dev
sudo apt remove python-configparser
sudo pip3 install docker-compose