
Synology debian chroot

This page collects the steps I followed to turn my synology into my "low power" home server using a debian chroot.

Why not the optware/ipkg way?

I used to stick with the optware way, but when I switched to a synology based on the armadaxp arm hard-float (armhf) architecture, I spent nights recompiling my own bootstrap and deprecated packages before realizing it was time to switch to something actively maintained and make the leap: I now run debian on the synology in a chrooted environment. These notes collect the steps I followed to make it happen.

Just for the record of this tiring and ultimately useless effort to get optware/ipkg working on the armadaxp architecture, here are the diff modifications I made to the optware svn to create a new armadaxp architecture.

I was not able to find a good ipkg repository for armadaxp (armhf), and using the usual soft-float arm ones resulted in compatibility (segfault) issues with /lib/libc.so.

Nevertheless, if you want to recompile the whole ipkg packages you will need the armadaxp toolchain from synology:

wget http://sourceforge.net/projects/dsgpl/files/DSM%205.0%20Tool%20Chains/Marvell%20armada%20xp%20Linux%203.2.40/gcc464_glibc215_hard_armada-GPL.tgz/download
tar zxf gcc464_glibc215_hard_armada-GPL.tgz

Note that suitable compilation flags for armadaxp are: -mcpu=marvell-pj4 -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=hard -DSYNO_MARVELL_ARMADA370 -DSYNO_PLATFORM=MARVELL_ARMADA370 (borrowed from synocommunity git).
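If you want a quick sanity check of the toolchain, the cross compiler can be driven with those flags from the shell. The install prefix and compiler name below are assumptions (the tarball customarily unpacks to an arm-marvell-linux-gnueabi directory); adjust them to wherever you extracted the toolchain:

```shell
# hypothetical install prefix for the unpacked toolchain
export PATH=/usr/local/arm-marvell-linux-gnueabi/bin:$PATH
export CC=arm-marvell-linux-gnueabi-gcc
export CFLAGS="-mcpu=marvell-pj4 -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=hard \
-DSYNO_MARVELL_ARMADA370 -DSYNO_PLATFORM=MARVELL_ARMADA370"
# cross-compile a test program for the armadaxp target
$CC $CFLAGS -o hello hello.c
```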

In order to compile all the packages you need to apply the patch provided above and follow these instructions.
However, that is not the purpose of this page, which instead details how to use a debian chroot on synology.

Bootstrap your synology: install debian and get it to coexist with synology's proprietary linux OS

The first step is to install a debian armhf system in a clean chroot environment and get it to coexist with synology.
This has to be done on a debian linux host or virtual machine (note that since these first install notes were written, the system has been upgraded from wheezy to jessie):

apt install debootstrap
mkdir debian
debootstrap --foreign --arch armhf wheezy debian
tar czvpf debian.tar.gz debian

On the synology, untar the created archive: temporarily enable telnet access in the web interface (control panel, terminal section), log in to the station via telnet, and run:

cd /volume1
tar zxvpf debian.tar.gz

Create debian/usr/bin/policy-rc.d:

vi debian/usr/bin/policy-rc.d
#!/bin/sh
exit 101
#EOF

chmod a+rx debian/usr/bin/policy-rc.d
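The exit status 101 is what matters here: invoke-rc.d consults policy-rc.d and treats 101 as "action forbidden", which prevents package installations from starting daemons inside the chroot. A quick rehearsal of the script's behavior on a scratch copy:

```shell
# scratch copy of the policy script, just to observe its exit status
tmp=$(mktemp -d)
printf '#!/bin/sh\nexit 101\n' > "$tmp/policy-rc.d"
chmod a+rx "$tmp/policy-rc.d"
rc=0
"$tmp/policy-rc.d" || rc=$?
echo "exit status: $rc"   # prints: exit status: 101
```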

Mount the various relevant partitions:

CHROOT=/volume1/debian
mount -o bind /dev $CHROOT/dev
mount -o bind /proc $CHROOT/proc
mount -o bind /dev/pts $CHROOT/dev/pts
mount -o bind /sys $CHROOT/sys

Get DNS resolution:

cp /etc/resolv.conf $CHROOT/etc/resolv.conf

Chroot in order to bootstrap second stage:

chroot $CHROOT /bin/bash
unset LD_LIBRARY_PATH
/debootstrap/debootstrap --second-stage
passwd root

Here we go: we can now install some must-have packages (note that since these first install notes were written, the system has been upgraded from wheezy to jessie):

vi /etc/apt/sources.list
## wheezy
deb http://ftp.fr.debian.org/debian/ wheezy main contrib non-free
deb-src http://ftp.fr.debian.org/debian/ wheezy main contrib non-free
## wheezy security
deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free
# wheezy update
deb http://ftp.fr.debian.org/debian/ wheezy-updates main contrib non-free
deb-src http://ftp.fr.debian.org/debian/ wheezy-updates main contrib non-free
#EOF

apt update
apt install openssh-server rsync screen less vim mosh htop most uptimed mutt irssi unrar tig

Configure the right locales:

apt install locales
locale-gen en_US en_US.UTF-8 en_US.ISO-8859-1 en_US.ISO-8859-15 fr_FR fr_FR.UTF-8 fr_FR.ISO-8859-15 fr_FR.ISO-8859-1
dpkg-reconfigure locales
Generating locales (this might take a while)...
  en_US.ISO-8859-1... done
  en_US.ISO-8859-15... done
  en_US.UTF-8... done
  fr_FR.UTF-8... done
  fr_FR.ISO-8859-15@euro... done

Run some necessary services:

/etc/init.d/rsyslog start
/etc/init.d/mtab.sh start
/etc/init.d/cron start
/etc/init.d/ssh start
/etc/init.d/uptimed start

Automate launching the debian chroot at each synology boot: /etc/rc.local starts the chroot.sh script, which runs the basic services; those services are killed when chroot.sh terminates (from this tutorial). Create the following files in the synology linux OS domain (note that starting with DSM 5.1 you also need to modify /etc/synoinfo.conf):

vi /etc/rc.local
#!/bin/sh
# Optware setup
#[ -x /etc/rc.optware ] && /etc/rc.optware start
sed -e "s/^#Port 22$/Port 2222/g" -i /etc.defaults/ssh/sshd_config /etc/ssh/sshd_config
sed -e s/ssh_port=\"22\"/ssh_port=\"2222\"/ -e s/sftpPort=\"22\"/sftpPort=\"2222\"/ -e s/rsync_sshd_port=\"22\"/rsync_sshd_port=\"2222\"/ -i /etc/synoinfo.conf
/usr/syno/sbin/synoservicecfg --restart ssh-shell
#DSM5.1 way
#sed -e "s/^Listen 80$/Listen 8080/g" -i /etc/httpd/conf/httpd.conf-user
#sed -e "s/^Listen 443/Listen 8443/g" -i /etc/httpd/conf/extra/httpd-ssl.conf-user
#/usr/syno/sbin/synoservicecfg --restart httpd-user
#DSM6.x way
# wait for nginx to be launched to override configuration otherwise changes will be erased by nginx-conf-generator that needs to be escaped
while (! ps -auwxx | grep nginx | grep -v grep | grep -v conf-generator )
do
  echo waiting for nginx to be running >> /volume1/debian/root/chroot.log 2>&1
  sleep 5
done

sed -e "s/listen 80 default_server;$/listen 8080 default_server;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen \[::\]:80 default_server;$/listen [::]:8080 default_server;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen 443 default_server ssl;$/listen 8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen \[::\]:443 default_server ssl;$/listen [::]:8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
nginx -s reload
# Launch chroot
sh /volume1/debian/root/chroot.sh
exit 0
#EOF
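Note that the sed at the top of rc.local only rewrites the literal stock line #Port 22; if the Port line has already been edited it will not match. You can preview the substitution on a scratch file (hypothetical contents):

```shell
# scratch sshd_config with the stock commented default port
cfg=$(mktemp)
printf '%s\n' '#Port 22' 'PermitRootLogin yes' > "$cfg"
sed -e "s/^#Port 22$/Port 2222/g" -i "$cfg"
cat "$cfg"   # first line is now: Port 2222
```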

vi /volume1/debian/root/chroot.sh
#!/bin/sh

CHROOT=/volume1/debian

mount -o bind /dev $CHROOT/dev
mount -o bind /proc $CHROOT/proc
mount -o bind /dev/pts $CHROOT/dev/pts
mount -o bind /sys $CHROOT/sys
[ ! -d $CHROOT/volume1 ] && mkdir -p $CHROOT/volume1
[ ! -d $CHROOT/volumeUSB1/usbshare ] && mkdir -p $CHROOT/volumeUSB1/usbshare
mount -o bind /volume1 $CHROOT/volume1
mount -o bind /volumeUSB1/usbshare $CHROOT/volumeUSB1/usbshare
cp /etc/resolv.conf $CHROOT/etc/resolv.conf
grep -v rootfs /proc/mounts > $CHROOT/etc/mtab
chroot $CHROOT /root/chroot-services.pl
#EOF

vi /volume1/debian/root/chroot-services.pl
#!/usr/bin/perl -W

use strict;

my @services = ('/etc/init.d/rsyslog',
                '/etc/init.d/sudo',
                '/etc/init.d/uptimed',
                '/etc/init.d/cron',
                '/etc/init.d/bind9',
                '/etc/init.d/isc-dhcp-server',
                '/etc/init.d/sslh',
                '/etc/init.d/apache2',
                '/etc/init.d/transmission-daemon',
                '/etc/init.d/ssh',
                '/etc/init.d/rsync',
                '/etc/init.d/mysql',
                '/etc/init.d/tt-rss');

$SIG{'INT'} = $SIG{'TERM'} = 'kill_running_services';

my @running_services = ();

foreach my $s (@services) {
        print "Starting $s ... ";
        `$s start`;
        if ($? == 0) {
                push @running_services, $s;
                print "done.\n";
        }
        else {
                print "failed.\n";
        }
}
while(my $in = <STDIN>) {};
exit;


sub kill_running_services
{
        my @services = reverse @running_services;
        while (my $s = shift @services) {
                print "Stopping $s ... ";
                `$s stop`;
                print "done.\n";
        }
        exit;
}
#EOF

chmod a+rx /volume1/debian/root/chroot*

Now, to launch debian, you just need to call the script /volume1/debian/root/chroot.sh from the synology linux OS.

Get the synology default linux ssh to respond on port 2222 and the debian one on port 22, so that we end up with the "two domains" we created: i) the synology one and ii) the debian one. For this you need to enable ssh in the synology web interface and perform the following modifications (note that starting with DSM 5.1 you also need to modify /etc/synoinfo.conf):

#synology domain
vi /etc/ssh/sshd_config
Port 2222
vi /etc/synoinfo.conf
ssh_port="2222"
sftpPort="2222"
rsync_sshd_port="2222"
#EOF

/usr/syno/sbin/synoservicecfg --restart ssh-shell
Then disable the telnet service in the synology web interface (control panel, terminal section).

Reconfigure your time zone to get proper time:
dpkg-reconfigure tzdata

Create users in debian domain

Now you need to recreate all the synology users in the debian domain, using the same uid and gid; the following commands can help:

adduser userwhologsin --uid xxxx --home /volume1/homes/userwhologsin
adduser userwhoneverlogsin --uid xxxx --home /volume1/homes/userwhoneverlogsin --disabled-login
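With many users this is worth scripting. Below is a dry-run sketch that prints the adduser commands derived from a copy of the Synology passwd file; the sample file is made up, and the uid >= 1024 cut-off for regular DSM accounts is an assumption to verify on your box:

```shell
# made-up extract of the Synology /etc/passwd
cat > /tmp/synology-passwd <<'EOF'
root:x:0:0:root:/root:/bin/ash
alice:x:1026:100:Alice:/volume1/homes/alice:/sbin/nologin
EOF
# print (rather than run) the matching adduser commands
while IFS=: read -r name pw uid gid gecos home shell; do
  [ "$uid" -ge 1024 ] || continue   # skip system accounts (assumed cut-off)
  echo adduser "$name" --uid "$uid" --home "$home" --disabled-login
done < /tmp/synology-passwd
```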

Now create a special group in the synology web interface, e.g. mediagroup, to which all users that should have access to media files belong, and identify which gid it was assigned through this command:

#synology domain
grep mediagroup /etc/group
mediagroup:x:65536:user1,user2

In debian domain create the same gid:

#debian domain
addgroup --gid 65536 mediagroup
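The two steps can be glued together so the gid is never copied by hand. A sketch against a made-up copy of the Synology /etc/group (dry run: the addgroup command is only echoed):

```shell
# made-up extract of the Synology /etc/group
cat > /tmp/synology-group <<'EOF'
users:x:100:
mediagroup:x:65536:user1,user2
EOF
# extract the gid of mediagroup and print the matching addgroup command
gid=$(awk -F: '$1 == "mediagroup" {print $3}' /tmp/synology-group)
echo addgroup --gid "$gid" mediagroup
```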

Install an apache server coexisting with synology's internal http server

Synology domain: DSM 6.x way

In order to get coexistence you need to restrict synology's nginx server to listen respectively to ports 8080 and 8443 instead of 80 and 443:

#/sbin/initctl stop nginx
sed -e "s/listen 80 default_server;$/listen 8080 default_server;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen \[::\]:80 default_server;$/listen [::]:8080 default_server;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen 443 default_server ssl;$/listen 8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen \[::\]:443 default_server ssl;$/listen [::]:8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
#/usr/syno/sbin/synoservicecfg --restart nginx
nginx -s reload
#/sbin/initctl start nginx

Note that nginx.conf is generated on the fly by /usr/syno/etc.defaults/rc.sysv/nginx-conf-generator.sh

Synology domain: DSM 5.x old way

In order to get coexistence you need to restrict synology's "user" apache server to listen respectively to ports 8080 and 8443 instead of 80 and 443:

#syno domain
vi /etc/httpd/conf/httpd.conf-user /etc.defaults/httpd/conf/httpd.conf-user
Listen 8080
#EOF
vi /etc/httpd/conf/extra/httpd-ssl.conf-user /etc.defaults/httpd/conf/extra/httpd-ssl.conf-user
Listen 8443
#EOF

/usr/syno/sbin/synoservicecfg --restart httpd-user

Debian domain

What follows is to be performed in debian domain:
  • Install apache
apt install apache2
  • disable the default apache site before configuring another one, and enable both http and https (ports 80 and 443), but limit listening to 127.0.0.1:443 for the sslh multiplexer to work:
a2dissite 000-default
cd /etc/apache2/sites-available/
cp default 0080-main
cp default-ssl 0443-main
a2ensite 0080-main
a2ensite 0443-main
a2enmod ssl
vi /etc/apache2/ports.conf
<IfModule ssl_module>
        Listen 127.0.0.1:443
</IfModule>
<IfModule mod_gnutls.c>
        Listen 127.0.0.1:443
</IfModule>
#EOF
service apache2 reload

  • do not forget to add this new service to the ones launched at debian chroot startup by editing /root/chroot-services.pl and adding '/etc/init.d/apache2', to the @services array
  • install the sslh multiplexer to be able to use ssh over port 443 when you are in an unfriendly environment such as an airport:
apt install sslh
vi /etc/default/sslh
RUN=yes
DAEMON_OPTS="--user sslh --listen 192.168.0.2:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
#EOF
service sslh restart

Install self signed custom certificates for your domain on synology domain

There is now a dedicated script to generate your own self-signed certificate; edit the answers in mkcert.sh (the values below are mine) and run it:
cd /usr/syno/etc/ssl
vi mkcert.sh
FR
IDF
Paris
Courville.org
Courville
www.courville.org
nospam@courville.org
EOT
./mkcert.sh

An alternative to avoid self signed certificates is to use the letsencrypt way.

Install proper signed certificates with letsencrypt

  • In order to use the letsencrypt free ssl certificate service, I went through the following manual steps to avoid any errors:
apt install python-certbot-apache letsencrypt
certbot certonly --manual -d home.courville.org
  • In another terminal, follow the instructions output by the above command before hitting enter:
webdir=/var/www/html
mkdir -p $webdir/.well-known/acme-challenge
printf "%s" COMPLEXSTRINGFROMABOVECOMMAND > $webdir/.well-known/acme-challenge/COMPLEXSTRINGFROMABOVECOMMAND
  • The certs are now stored in /etc/letsencrypt/live/home.courville.org/ and you can include them in the apache2 configuration by editing /etc/apache2/sites-available/0443-main:
SSLCertificateFile /etc/letsencrypt/live/home.courville.org/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/home.courville.org/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/home.courville.org/fullchain.pem
  • reload the apache2 config with: service apache2 reload
  • to renew the certificate use certbot renew, but it needs to be done manually through certbot certonly --manual -d home.courville.org again and cannot be automated via crontab

    Install self signed custom certificates for your domain on debian domain

    Since I was tired of chrome complaining that my domain was using synology.com certificates, I decided to generate my own custom ones following this link: http://forum.synology.com/wiki/index.php/How_to_generate_custom_SSL_certificates. Here is what I did:

    • generate the certificate authority key ca.key (remember the passphrase)
    cd /etc/ssl
    openssl genrsa -des3 -out ca.key 2048
    • generate the certificate signing request for the CA key
    openssl req -new -key ca.key -out ca.csr
    • generate the final certificate authority key (valid 10 years):
    openssl x509 -days 3650 -signkey ca.key -in ca.csr -req -out ca.crt
    • generate the server-certificate key:
    openssl genrsa -out server.key 2048
    • generate the server certificate signing request; note that the CommonName needs to match your DNS domain name (wildcards are allowed, e.g. *.yourdomain.com)
    openssl req -new -key server.key -out server.csr
    • generate the server certificate
    openssl x509 -days 3650 -CA ca.crt -CAkey ca.key -set_serial 01 -in server.csr -req -out server.crt
    • be sure that apache uses them by editing /etc/apache2/sites-available/0443-main and checking that the right paths are set
    SSLCertificateFile /etc/ssl/server.crt
    SSLCertificateKeyFile /etc/ssl/server.key
    • restart apache
    service apache2 restart

    • now install the same certificates on your synology; the following actions need to be performed in the synology domain (not the debian one)
    # synology domain not debian one
    cd /usr/syno/etc/ssl
    mkdir bak
    cp -r ssl.crt bak
    cp -r ssl.csr bak
    cp -r ssl.key bak
    DEB=/volume1/debian
    cp $DEB/etc/ssl/ca.crt ssl.crt
    cp $DEB/etc/ssl/server.crt ssl.crt
    cp $DEB/etc/ssl/ca.csr ssl.csr
    cp $DEB/etc/ssl/server.csr ssl.csr
    cp $DEB/etc/ssl/ca.key ssl.key
    cp $DEB/etc/ssl/server.key ssl.key
    cp $DEB/etc/ssl/ca.crt /volume1/public
    Note that there is now a script to regenerate your synology certificates:
    cd /usr/syno/etc/ssl/
    ./mkcert.sh
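    As a sanity check, the whole CA/server chain above can be rehearsed in a throwaway directory; this condensed variant skips the passphrase and uses a one-shot req -x509 for the CA (instead of the separate CSR plus -signkey step) so it runs non-interactively:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# throwaway CA (no -des3 passphrase, so the example is non-interactive)
openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -subj "/CN=Example CA" -out ca.crt
# server key and CSR; in real use the CN must match your DNS name
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=*.yourdomain.com" -out server.csr
openssl x509 -days 3650 -CA ca.crt -CAkey ca.key -set_serial 01 -in server.csr -req -out server.crt
# the server certificate must verify against the CA
openssl verify -CAfile ca.crt server.crt
```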

    Protect via https some internal http services

    Sometimes you have a service with a web interface on a specific port (e.g. http://192.168.0.2:1234/servicehttp) that does not support https and that you do not want to expose directly to the outside world, so as not to send your password in the clear. To overcome this you can use the proxy feature of apache by editing /etc/apache2/sites-available/0443-main the following way:

      SSLProxyEngine on
      ProxyRequests Off
      ProxyVia Off
      <Proxy *>
         Order deny,allow
         Allow from all
      </Proxy> 
     <Location /servicehttp>
        Order deny,allow
        Allow from all
        AuthName "Private area"
        AuthType Basic
        AuthUserFile /etc/apache2/htpasswd
        Require valid-user
        ProxyPass http://192.168.0.2:1234/servicehttp
        ProxyPassReverse http://192.168.0.2:1234/servicehttp
        SSLRequireSSL
      </Location> 
    and enabling the apache modules proxy, proxy_http and rewrite:
    a2enmod proxy rewrite proxy_http
    service apache2 restart
    Create a login and password using the following command:
    htpasswd -c /etc/apache2/htpasswd username 

    Perform some https redirections towards synology default services without opening the ports on router

    If you do not want to open some ports on your router and forward them to your synology NAS to enable remote access (e.g. 7001 and 5001) you can use the proxy feature of apache to do it by simply editing /etc/apache2/sites-available/0080-main the following way:

      ProxyPass /filestation https://192.168.0.2:7001/
      ProxyPassReverse /filestation/ https://192.168.0.2:7001/
      ProxyPass /synology https://192.168.0.2:5001/
      ProxyPassReverse /synology/ https://192.168.0.2:5001/ 

    Redirect your main web server towards your own google site or some secure links

    If you want to redirect all http request towards your web server to another one (e.g. google sites) you can use the rewrite feature of apache by simply editing /etc/apache2/sites-available/0080-main the following way:

    RedirectMatch ^/$ http://sites.google.com/a/website.org/website/  
    Redirect permanent /filestation https://www.website.org/filestation

    Install subversion server

    Install the subversion packages, enable the webdav modules, and create the repository area:

    apt install libapache2-svn subversion subversion-tools
    a2enmod dav dav_svn
    service apache2 restart
    mkdir -p /volume1/svn/
    chown -R www-data:www-data /volume1/svn/

    Enable svn webdav service in /etc/apache2/sites-available/0443-main

      <Location /svn>
        DAV svn
        SVNParentPath /volume1/svn
        Order deny,allow
        Allow from all
        AuthName "Subversion repository"
        AuthType Basic
        AuthUserFile /etc/apache2/htpasswd
        Require valid-user
        SSLRequireSSL
      </Location>

    Install git server

    Do not use webdav/apache: use the ssh backend. Here is how to create a repository:
    apt install git-core
    adduser git
    su - git
    cd /home/git
    mkdir -p depot.git
    cd depot.git
    git init --bare
    git update-server-info

    In order to import a project (existing code) into depot.git from a remote site where code is stored in depot directory:

    cd depot
    git init
    git add .
    git commit -m "First commit"
    git remote add origin git@gitserver.com:depot.git
    git remote -v
    git push -u origin master

    In order to clone the project:
    git clone git@gitserver.com:depot.git

    For group contributions and multiple repositories, you need to install a gitosis replacement: gitolite, or gogs for a fully self-hosted github-like experience.

    Install private github: gogs

    Gogs is a github clone for privately hosting git repositories. I kept the local version of go instead of downloading a new one:
    su - git
    cd $HOME
    mkdir repositories
    echo 'export GOPATH=$HOME/go' >> $HOME/.bashrc
    echo 'export PATH=$PATH:$GOPATH/bin' >> $HOME/.bashrc
    source $HOME/.bashrc
    go get -u github.com/gogits/gogs
    cd $GOPATH/src/github.com/gogits/gogs
    go build
    wget https://raw.githubusercontent.com/gogits/gogs/master/scripts/mysql.sql
    mysql --user=root --password=YOURPASSWORD < mysql.sql
    ./gogs web

    Configure via the URL http://localhost:3000; the configuration file is then stored in custom/conf/app.ini, where you should make the following modification:

    vi custom/conf/app.ini
    ROOT_URL = https://your.web.server/gogs/

    Link the whole via apache2 proxy configuration:

    vi /etc/apache2/sites-available/0443-main.conf
      <Location /gogs>
        Order deny,allow
        Allow from all
        AuthName "Private area"
        AuthType Basic
        AuthUserFile /etc/apache2/htpasswd
        Require valid-user
        ProxyPass http://192.168.0.2:3000/
        ProxyPassReverse http://192.168.0.2:3000/
        SSLRequireSSL
      </Location>

    Launch the service automatically:
    cd /etc/init.d
    wget https://raw.githubusercontent.com/gogits/gogs/master/scripts/init/debian/gogs
    chmod a+rx gogs
    vi gogs
    #WORKINGDIR=/home/git/gogs
    WORKINGDIR=/home/git/go/src/github.com/gogits/gogs
    /etc/init.d/gogs start

    Add gogs service to chroot debian domain: add the appropriate entry to

    vi /root/chroot-services.pl
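    i.e. append the gogs init script to the @services list, keeping it inside the closing parenthesis (services are started top to bottom and stopped in reverse order), so the tail of the array becomes:

```perl
# tail of the @services array in /root/chroot-services.pl
                '/etc/init.d/tt-rss',
                '/etc/init.d/gogs');
```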

    Install other sources of packages on synology web interface

    Synocommunity is a great source of packages to add to the ones provided by synology.
    Interesting packages can be found there, like transmission, which is a good replacement for synology's download station.
    Note that I do not use these packages since I prefer to stick with official debian ones.

    Get ready to compile native

    Install all the tools you need:
    apt install build-essential

    Rsync version that detects renaming

    If you reorder your files and photos, rsync would normally take ages to perform the backup, but there is a patch, not merged into mainstream, that deals with that:

    cd /volume1/@src/
    wget "http://www.samba.org/ftp/rsync/src/rsync-3.1.1.tar.gz"
    wget "http://www.samba.org/ftp/rsync/src/rsync-patches-3.1.1.tar.gz"
    #wget "https://bugzilla.samba.org/attachment.cgi?id=7435" -O detect-renamed.diff
    tar zxf rsync-3.1.1.tar.gz
    tar zxf rsync-patches-3.1.1.tar.gz
    #cp detect-renamed.diff rsync-3.0.9/patches
    cd rsync-3.1.1
    patch -p1 <patches/detect-renamed.diff 
    patch -p1 <patches/detect-renamed-lax.diff
    ./configure --prefix=/usr/local
    make install
    strip /usr/local/bin/rsync
    find /usr/local -mmin -5 > ../install-rsync-files.lst
    Now you can use the new option: rsync -av --detect-renamed

    Backup your data with history: rsnapshot

    rsnapshot is an rsync based backup system that I like: it is simple yet efficient. 
    • Install it with:
    apt install rsnapshot
    • Note that the debian-domain rsync lacks one option specific to synology's /usr/syno/bin/rsync: --syno-acl, used to preserve windows ACLs in the backup process (not needed in my case, because I want the rename detection feature explained above).
    My configuration in /etc/rsnapshot.conf is the following:

    config_version  1.2
    snapshot_root   /volumeUSB1/usbshare/backup/rsnapshot
    cmd_cp          /bin/cp
    cmd_rm          /bin/rm
    #cmd_rsync      /usr/syno/bin/rsync
    cmd_rsync       /usr/bin/rsync
    cmd_ssh /opt/bin/ssh
    cmd_du          /usr/bin/du
    interval        daily   7
    interval        weekly  4
    interval        monthly 6
    link_dest       1
    verbose         2
    loglevel        3
    logfile         /volume1/backup/rsnapshot.log
    #rsync_long_args -av --progress --delete --numeric-ids --relative --delete-excluded --syno-acl
    rsync_long_args -av --progress --delete --numeric-ids --relative --delete-excluded
    lockfile        /var/run/rsnapshot.pid
    # little trick in .ssh/config to map syno to host localhost with port 2222. WARNING: you need to enable backup services i.e. rsync on synology's web interface
    backup    root@syno:/    synosys/       exclude_file=/etc/incexcl-synosys
    backup    /              debiansys/     exclude_file=/etc/incexcl-debiansys
    backup    /              synodata/      exclude_file=/etc/incexcl-synodata

    In order to back up the synology system domain, I use an alias syno in the .ssh/config file:

    # for rsnapshot to access localhost:2222
    host syno
      port 2222
      hostname localhost

    You will also need to enable the rsync (network backup) service in the synology web interface through main menu->backup & replication->backup services, specifying 2222 as the SSH encryption port.
    Notice that, to be safe, I perform the backup to an external USB disk attached to the synology (located in /volumeUSB1/usbshare).
    All the included and excluded directories are specified in the /etc/incexcl-debiansys file, which has the following format:

    + /etc/
    + /root/
    + /usr/
    + /usr/local/
    + /usr/local/etc/
    - /usr/local/*
    - /usr/*
    + /var/
    + /var/www/
    + /var/spool/
    + /var/spool/cron/
    - /var/spool/*
    + /var/lib/
    + /var/lib/mysql/
    - /var/lib/*
    - /var/*
    - /*

    Synology's system configuration file tree to be backed up is specified in /etc/incexcl-synosys:

    + /etc/
    + /root/
    + /usr/
    + /usr/syno/
    + /usr/syno/etc/
    + /usr/syno/apache/
    - /usr/syno/*
    - /usr/*
    + /opt/
    + /opt/etc/
    + /opt/share/
    + /opt/share/lib/
    - /opt/share/*
    + /opt/lib/
    + /opt/lib/ipkg/
    - /opt/lib/*
    - /opt/*
    - /*
    • Install and configure a simple mail client, heirloom-mailx, to receive notifications of failure:
    apt install heirloom-mailx

    Edit /etc/nail.rc to reflect your smtp server and credentials (here the domain is a google apps business one):

    set hold
    set append
    set ask
    set crt
    set dot
    set keep
    set emptybox
    set indentprefix="> "
    set quote
    set sendcharsets=iso-8859-1,utf-8
    set showname
    set showto
    set newmail=nopoll
    set autocollapse
    ignore received in-reply-to message-id references
    ignore mime-version content-transfer-encoding
    fwdretain subject date from to
    set smtp=smtp.gmail.com:587
    set smtp-use-starttls
    set ssl-verify=ignore
    set ssl-ca-file=/etc/ssl/certs/thawte_Primary_Root_CA.pem
    set from="user@yourdomain.com"
    set smtp-auth-user=user@yourdomain.com
    set smtp-auth-password="YOURPASSWORD"
    • Automate your backups by launching rsnapshot from the crontab: issue sudo crontab -e and add:
    #minute hour    mday    month   wday    command
    55      23      *       *       *       /usr/bin/rsnapshot -v daily || heirloom-mailx -s "bkp `date +%Y%m%d` daily" user@yourdomain.com < /volume1/backup/rsnapshot.log
    0       02      *       *       0       /usr/bin/rsnapshot -v weekly || heirloom-mailx -s "bkp `date +%Y%m%d` weekly" user@yourdomain.com < /volume1/backup/rsnapshot.log
    0       04      1       *       *       /usr/bin/rsnapshot -v monthly || heirloom-mailx -s "bkp `date +%Y%m%d` monthly" user@yourdomain.com < /volume1/backup/rsnapshot.log

    Restart cron:

    service cron restart

    Files renaming

    I use two tools that I find handy:
    • perl-file-rename, which I find extremely useful and powerful since it handles regexps (it is installed by default on debian). One example of usage is renaming a series in a directory where the files start with the episode number:
    rename 's/^([0-9]*) - /serie-s01e$1-/g' *.mp4
    rename 's/ /_/g' *
    rename "s/l_homme/l\'homme/g" *
    • convmv to fix file names with broken (non UTF-8) encodings
    apt install convmv

    Here is an example of how to use convmv recursively on current directory to fix encoding:

    convmv -f iso-8859-1 -t utf8 --notest -r .

    • files with question marks really seem to upset samba shares; to remove them you can use the following renaming script:
    #!/bin/bash
    # -depth renames files before their parent directories
    find . -depth -name '*\?*' -exec rename 's/\?//g' {} +

    Subtitles downloader

    subliminal is a python tool that downloads subtitles from various sources for video files.

    cd /root/src
    apt install python python-setuptools python-pip
    git clone https://github.com/Diaoul/subliminal
    cd subliminal
    python setup.py install

    Subtitle downloading can be automated, fetching english subs first if only those are available and replacing them with french ones once available, using a little script like this one:

    #!/bin/bash

    TV_DIR=$1
    BACKTIME=$2
    [ -z "$TV_DIR" ] && TV_DIR=/volume1/video/serie
    [ -z "$BACKTIME" ] && BACKTIME=14
    GETSUB=/usr/local/bin/subliminal
    cd $TV_DIR
    # try to get the subtitle only for 14 days to avoid exploding list of files to process
    for file in $(find . -mtime -${BACKTIME} -type f \( -iname \*.avi -o -iname \*.mkv -o -iname \*.mp4 \) )
    do
      subtitle="${file%.*}.srt"
      if [ ! -f "$subtitle" ]
      then
        echo processing $file
        # download without .fr.srt or .en.srt to be sure it is from addic7ed in french first in -s single mode
        $GETSUB -s -l fr -p addic7ed --addic7ed-username username --addic7ed-password password -q "$file" && rm "$file".en.srt
        # if it failed with addic7ed, get it from any other source, but with .fr.srt and .en.srt
        [ ! -f "$subtitle" ]  && $GETSUB -l en --addic7ed-username username --addic7ed-password password -q "$file"
      fi
    done

    If you want to update subliminal in the future just do:

    cd /root/src/subliminal
    git pull
    python setup.py install

    TV series file renamer

    tvnamer is a very powerful python program that enables you to rename tv series episodes.

    cd /root/src
    git clone https://github.com/dbr/tvnamer
    cd tvnamer
    python setup.py install

    An example of user configuration of tvnamer capturing the most useful features of the tool can be found below:

    cat .tvnamer.json
    {
        "language": "en",
        "search_all_languages": false,
        "always_rename": false,
        "batch": true,
        "episode_separator": "-",
        "episode_single": "%02d",
        "filename_with_date_and_episode": "%(seriesname)s-e%(episode)s-%(episodename)s%(ext)s",
        "filename_with_date_without_episode": "%(seriesname)s-e%(episode)s%(ext)s",
        "filename_with_episode": "%(seriesname)s-s%(seasonno)02de%(episode)s-%(episodename)s%(ext)s",
        "filename_with_episode_no_season": "%(seriesname)s-e%(episode)s-%(episodename)s%(ext)s",
        "filename_without_episode": "%(seriesname)s-s%(seasonno)02de%(episode)s%(ext)s",
        "filename_without_episode_no_season": "%(seriesname)s-e%(episode)s%(ext)s",
        "lowercase_filename": false,
        "move_files_confirmation": true,
        "move_files_destination": "/volume1/video/serie/%(seriesname)s/%(seriesname)s-s%(seasonnumber)02d",
        "move_files_enable": true,
        "multiep_join_name_with": ", ",
        "normalize_unicode_filenames": false,
        "recursive": false,
        "custom_filename_character_blacklist": ":<>?*",
        "replace_invalid_characters_with": "_",
        "move_files_fullpath_replacements": [
            {"is_regex": false, "match": " ", "replacement": "_"},
            {"is_regex": false, "match": "_-_", "replacement": "-"},
            {"is_regex": false, "match": ":", "replacement": ""},
            {"is_regex": false, "match": "Serie's_name", "replacement": "Series_Name"}
        ],
        "output_filename_replacements": [
            {"is_regex": false, "match": "?", "replacement": ""},
            {"is_regex": false, "match": " ", "replacement": "_"},
            {"is_regex": false, "match": "_-_", "replacement": "-"},
            {"is_regex": false, "match": ":", "replacement": ""},
            {"is_regex": false, "match": "Serie's_name", "replacement": "Series_Name"}
        ],
        "input_filename_replacements": [
            {"is_regex": true, "match": "Serie.US", "replacement": "Serie"},
            {"is_regex": true, "match": "Serie([^:])", "replacement": "Serie_expanded_name\\1"}
        ]
    }

    If you want to update tvnamer in the future just do:

    cd /root/src/tvnamer
    git pull
    python setup.py install

    Transmission tweaks

    Install transmission and command line related tools via:

    apt install transmission transmission-daemon transmission-cli transmission-remote-cli python-transmissionrpc

    Do not forget to add this new service to the ones launched at debian chroot startup by editing /root/chroot-services.pl and adding to the @service array:  '/etc/init.d/transmission-daemon',

    Transmission daemon configuration files are located in /var/lib/transmission-daemon/info. Make sure to stop transmission from the synology web interface or with the commands below before editing this file, so that your modifications are taken into account:

    service transmission-daemon stop
    vi /var/lib/transmission-daemon/info/settings.json
    service transmission-daemon start
    • learn to use the transmission-remote command
    • in order to execute actions at the end of a file download, a script can be triggered, e.g. to move files around based on tracker name; add to settings.json:
        "script-torrent-done-enabled": true,
        "script-torrent-done-filename": "/usr/local/bin/transmission-done.sh",

    With /usr/local/bin/transmission-done.sh for instance being:

    #!/bin/bash

    # 'TR_TORRENT_NAME'
    # 'TR_TORRENT_DIR'
    # 'TR_TORRENT_ID'
    # 'TR_APP_VERSION'
    # 'TR_TORRENT_HASH'
    # 'TR_TIME_LOCALTIME'

    TV_SHOW_ID="hdtv"
    MEDIA_DIR="/volume1"
    DOWNLOAD_DIR="${MEDIA_DIR}/download"
    TV_SHOW_DIR="${MEDIA_DIR}/download/serie"
    DBGF=${DOWNLOAD_DIR}/log

    istvshow=false
    istvshowrpk=false
    isprivate=false
    istrackera=false
    istrackerb=false

    if [ -z "${TR_TORRENT_NAME}" -o -z "${TR_TORRENT_ID}" -o -z "${TR_TORRENT_DIR}" ]
    then
      TR_TORRENT_ID=$1
      [ -z "$1" ] && exit 1
    fi

    tremote ()
    {
      /usr/bin/transmission-remote -n USER:PASSWORD $HOST -t ${TR_TORRENT_ID} $* # 2>&1 | tee -a $DBGF
    }

    if [ -z "${TR_TORRENT_NAME}" -o -z "${TR_TORRENT_DIR}" ]
    then
      TR_TORRENT_NAME=$( tremote -t $1 -i | grep Name: | tr -s ' ' | sed 's/^ *//' | cut -s -d " " -f 2 )
      TR_TORRENT_DIR=$( tremote -t $1 -i | grep Location: | tr -s ' ' | sed 's/^ *//' | cut -s -d " " -f 2)
    fi

    echo `date +%Y-%m-%d\ %H:%M` processing id ${TR_TORRENT_ID} file ${TR_TORRENT_NAME} in dir ${TR_TORRENT_DIR} 2>&1 | tee -a $DBGF

    ( tremote -i | grep "Public torrent" | grep -qi No ) && isprivate=true
    ( tremote -it | grep -qi "tracker.a" ) && istrackera=true
    ( tremote -it | grep -qi "tracker.b" ) && istrackerb=true
    # if flexget sets Location to serie then it must be a tvshow
    ( tremote -i | grep Location | grep -q $TV_SHOW_DIR ) && istvshow=true
    if ( echo "${TR_TORRENT_NAME}" | grep -Eqi "${TV_SHOW_ID}" )
    then
      istvshow=true
      ( ( echo "${TR_TORRENT_NAME}" | grep -Eqi "REPACK" ) || ( echo "${TR_TORRENT_NAME}" | grep -Eqi "PROPER" ) ) && istvshowrpk=true || istvshowrpk=false
    fi

    if $isprivate
    then
      # private torrent
      if $istrackera
      then
        echo "trackera"
        mkdir -p "$DOWNLOAD_DIR/trackera"
        tremote --move "$DOWNLOAD_DIR/trackera"
      elif $istrackerb
      then
        echo "trackerb"
        mkdir -p "$DOWNLOAD_DIR/trackerb"
        tremote --move "$DOWNLOAD_DIR/trackerb"
      fi
    else
      # public torrent
      if $istvshow
      then
        echo "tvshow detected, move and remove ${TR_TORRENT_DIR}/${TR_TORRENT_NAME}" "${TV_SHOW_DIR}" 2>&1 | tee -a $DBGF
        tremote --move "$DOWNLOAD_DIR/serie"
        tremote -r
      else
        echo "other public"
        tremote -S
        mkdir -p "$DOWNLOAD_DIR/public"
        tremote --move "$DOWNLOAD_DIR/public"
      fi
    fi
    • periodically you can check whether a download has finished seeding based on your seed ratio, and then move the completed files somewhere and remove them from transmission. This script does that for you:
    #!/bin/bash

    tremote ()
    {
     transmission-remote -n user:password $*
    }

    for t in `tremote -l | grep Done | sed -e 's/^ *//' | sed 's/\*//' | cut -s -d " " -f1`
    do
      if ( tremote -t $t -it | grep -qi "tracker.a" );
      then
        echo processing $t
        tremote -t $t --move "/volume1/files/trackera"
        tremote -t $t -r
      fi
    done
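    The cleanup script above can itself be scheduled from root's crontab (sudo crontab -e); a sketch, where the path /usr/local/bin/transmission-seed-cleanup.sh is an assumption, save the script wherever you like:

```
0 * * * * /usr/local/bin/transmission-seed-cleanup.sh >/dev/null 2>&1
```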
    • enabling a blocklist in transmission is a sensible thing to do; for this, add the following options to /var/lib/transmission-daemon/info/settings.json
        "blocklist-enabled": true,
        "blocklist-url": "http://list.iblocklist.com/?list=bt_level1&fileformat=p2p&archiveformat=gz",
    You can periodically update the blocklist via cron job by issuing sudo crontab -e and adding:
    30 19 * * * /usr/bin/transmission-remote -n user:password --blocklist-update
    and restarting cron:
    service cron restart

    • to keep a fair use of the ADSL connection and let others enjoy good traffic without too much latency, edit /var/lib/transmission-daemon/info/settings.json
    "download-queue-size": 5,
    "peer-limit-per-torrent": 5,
    "peer-limit-global": 30,

    In order to get the permissions right, in the synology domain add a transmission group with gid 107 to /etc/group; it should match the gid of debian-transmission in the debian domain. Add the users that should get access to the download directory to this group in the synology domain. In the debian domain, fix the directory rights to allow writing for debian-transmission:

    addgroup debian-transmission mediagroup
    chown -R debian-transmission:mediagroup /volume1/download
    chmod g+rwX -R /volume1/download

    • transmission logs suggest udp and utp buffer optimizations (visible after adding --logfile /volume1/download/transmission/transmission.log to OPTIONS in /etc/default/transmission-daemon and reading the logs); apply them via /etc/sysctl.conf:
    echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf
    echo "net.core.wmem_max = 1048576" >> /etc/sysctl.conf
    sysctl -p

    Automatic rss based file downloading

    flexget is a good tool, and easy to configure, to automate file downloads based on rss feeds matching names.
    It can be installed by:
    apt install sqlite
    easy_install flexget
    easy_install --upgrade transmissionrpc
    A typical user configuration can be the following:
    cat .flexget/config.yml
    templates:
      global:
        transmission:
          host: localhost
          port: 9091
          username: username
          password: password
          addpaused: no
        email:
          active: True
          from: flexget@yourdomain.com
          to: user@yourdomain.com
          smtp_host: smtp.yourdomain.com
          smtp_port: 25
          smtp_login: false
      tv:
        quality: webrip|webdl|hdtv <720p
        regexp:
          reject:
            - Blah Confidential
        content_filter:
          reject: '*.avi'
        set:
          path: /volume1/download/transmission/serie
          skip_files:
            - '*.nfo'
            - '*.sfv'
            - '*[sS]ample*'
            - '*.txt'
          include_subs: yes
        series:
         - Serie name one
         - Serie name two
    tasks:
      stream1:
        rss: http://streamsiteone.com/feed/
        template: tv
        priority: 10
      stream2:
        rss: http://streamsitetwo.com/?cat=9
        template: tv
        priority: 20

    In order to check flexget configuration you can issue (when it fails have a look here):

    flexget check

    Launch of flexget task can be automated using cron job, by issuing sudo crontab -e and adding:

    0 07 * * * /usr/local/bin/flexget --cron >/dev/null 2>&1

    Relaunch cron service through:

    service cron restart

    If you want to update flexget in the future just do:

    pip install --upgrade flexget
    #pip install --force-reinstall --ignore-installed flexget
    #pip install --force-reinstall --ignore-installed transmissionrpc

    If you experience some issues with seen status or database you can reset it with the following command that should be used only as a last resort:

    flexget database reset --sure

    Fill mkv stereo-mode field correctly for Archos Video Player to turn your TV into right 3D mode

    If you happen to have an Archos TV connect and a 3D capable TV (I strongly recommend 3D passive technology) and want to have your TV switched to the appropriate 3D mode automagically you can fill the right field value of your mkv file using this process:
    • install mkv tools: 
    apt install mkvtoolnix
    • find the right video track using:
    mkvinfo file.mkv
    Let's assume for the sake of the example that track 1 contains the video
    • assign the right value for the stereo-mode field of the mkv depending on the type of video: 
    mkvpropedit --edit track:1 -s stereo-mode=1 file.mkv
    The main values for stereo-mode are 0 for mono, 1 for side-by-side (left first), 11 for side-by-side (right first), 2 for top-bottom (right first) and 3 for top-bottom (left first); cf. the full list below (http://3dvision-blog.com/tag/mkvtoolnix/)

    StereoMode field values:
    0: mono
    1: side by side (left eye is first)
    2: top-bottom (right eye is first)
    3: top-bottom (left eye is first)
    4: checkerboard (right is first)
    5: checkerboard (left is first)
    6: row interleaved (right is first)
    7: row interleaved (left is first)
    8: column interleaved (right is first)
    9: column interleaved (left is first)
    10: anaglyph (cyan/red)
    11: side by side (right eye is first)
    12: anaglyph (green/magenta)
    13: both eyes laced in one Block (left eye is first) (field sequential mode)
    14: both eyes laced in one Block (right eye is first) (field sequential mode)
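    The table above can be wrapped in a small helper so you don't have to remember the numeric values; this is only a sketch (the function name and the friendly labels are my own shorthands, not mkvtoolnix terminology):

```shell
# Map a human-readable 3D layout label to the StereoMode numeric value
stereo_mode_value() {
  case "$1" in
    mono)      echo 0  ;;  # 2D
    sbs_left)  echo 1  ;;  # side by side, left eye first
    sbs_right) echo 11 ;;  # side by side, right eye first
    tb_right)  echo 2  ;;  # top-bottom, right eye first
    tb_left)   echo 3  ;;  # top-bottom, left eye first
    *) echo "unknown mode: $1" >&2; return 1 ;;
  esac
}
```

    Usage would then be: mkvpropedit --edit track:1 -s stereo-mode=$(stereo_mode_value sbs_left) file.mkv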

    Install DNS service: bind

    Install bind9 DNS daemon:

    apt install bind9
    service bind9 restart

    Configure your domain entries by creating files, e.g. /etc/bind/db.0.168.192 and /etc/bind/db.yourdomain.com, and reference them inside /etc/bind/named.conf.local
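    For reference, the named.conf.local entries look like this (zone names mirror the example file names above; this is a sketch, not my actual config):

```
zone "yourdomain.com" {
    type master;
    file "/etc/bind/db.yourdomain.com";
};

zone "0.168.192.in-addr.arpa" {
    type master;
    file "/etc/bind/db.0.168.192";
};
```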

    Do not forget to add this new service to the ones launched at debian chroot startup by editing /root/chroot-services.pl and adding to the @service array:  '/etc/init.d/bind9',

    Install DHCP daemon

    Install and configure dhcp daemon:

    apt install isc-dhcp-server
    vi /etc/dhcp/dhcpd.conf
    service isc-dhcp-server restart

    Configure the ethernet MAC address to host mappings in /etc/dhcp/dhcpd.conf, consistently with the bind DNS configuration files.
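    A static mapping in dhcpd.conf looks like this (MAC address, host name and IP are placeholders; keep the name consistent with the bind zone files):

```
host examplehost {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.0.10;
}
```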

    Do not forget to add this new service to the ones launched at debian chroot startup by editing /root/chroot-services.pl and adding to the @service array:  '/etc/init.d/isc-dhcp-server',

    Process to add a new host
    • edit subdomain dns entries
    • add ether MAC address in dhcp table

    Tasks to be performed at each synology update

    Since all the configuration files are overwritten at each synology update, you need to redo the following tasks in the synology domain (note that starting with DSM 5.1 you also need to modify /etc/synoinfo.conf):
    sed -e "s/^#Port 22$/Port 2222/g" -i /etc.defaults/ssh/sshd_config /etc/ssh/sshd_config
    sed -e s/ssh_port=\"22\"/ssh_port=\"2222\"/ -e s/sftpPort=\"22\"/sftpPort=\"2222\"/ -e s/rsync_sshd_port=\"22\"/rsync_sshd_port=\"2222\"/ -i /etc/synoinfo.conf
    /usr/syno/sbin/synoservicecfg --restart ssh-shell
    # DSM5.x
    #sed -e "s/^Listen 80$/Listen 8080/g" -i /etc/httpd/conf/httpd.conf-user
    #sed -e "s/^Listen 443/Listen 8443/g" -i /etc/httpd/conf/extra/httpd-ssl.conf-user
    #/usr/syno/sbin/synoservicecfg --restart httpd-user
    # DSM6.x
    sed -e "s/listen 80 default_server;$/listen 8080 default_server;/g" -i /etc/nginx/nginx.conf
    sed -e "s/listen \[::\]:80 default_server;$/listen [::]:8080 default_server;/g" -i /etc/nginx/nginx.conf
    sed -e "s/listen 443 default_server ssl;$/listen 8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
    sed -e "s/listen \[::\]:443 default_server ssl;$/listen [::]:8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
    nginx -s reload

    You can even automate this by adding these lines in the synology domain /etc/rc.local file to be safe (before the chroot.sh call).

    Full manual backup

    On synology, synology domain:

    rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/incexcl-imac marc@imac.courville.org:/ /volumeUSB2/usbshare/backup/imac/
    rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-synodata / /volumeUSB2/usbshare/backup/synodata/
    rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-synosys / /volumeUSB2/usbshare/backup/synosys/
    rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-debiansys / /volumeUSB2/usbshare/backup/debiansys/
    rsync -av --progress --delete --numeric-ids --exclude-from=/volume1/incexcl-mediatiny /volume1/video/hd/ /volumeUSB2/usbshare/video/

    for i in ds211-6T-ipkg hyperion-linux ggdrive mail
    do
      rsync -av --progress --delete --numeric-ids --relative /volume1/backup/$i/ /volumeUSB2/usbshare/backup/$i/
    done

    On imarc:
    rsync -av --progress --delete --exclude-from=/Users/marc/dev/backup/incexcl-imarc / /Volumes/Backup/backup/mba/

    To sync two disks, use rsync with:

    -av --progress --delete --numeric-ids 

    Fast non-incremental backup: on the synology (synology domain), install
    ipkg install pv netcat less gcc make zlib rsync tar wget buffer
    wget http://zlib.net/pigz/pigz-2.3.3.tar.gz
    tar zxvf pigz-2.3.3.tar.gz
    cd pigz-2.3.3
    sed -i -e "s/^CC=cc/CC=gcc/g" Makefile
    make
    cp -f pigz unpigz /opt/bin/

    # non compressible source
    #SOURCE: 192.168.0.2
    dir=directory; tar cPpSslf - $dir | pv -s `du -sb $dir | cut -f 1` | nc -l -p 12121
    #DESTINATION:
    nc 192.168.0.2 12121 | tar xvPpSslf -

    # compressible source
    #SOURCE: 192.168.0.2
    dir=directory; tar cPpSslf - $dir | pv -s `du -sb $dir | cut -f 1` | pigz | nc -q 10 -l -p 12121
    #DESTINATION:
    nc -w 10 192.168.0.2 12121 | pigz -d | tar xvPpSslf -

    Backup your emails with offlineimap

    Though there is a debian package for offlineimap, you had better install the latest git version since it can sync the gmail labels:
    git clone git://github.com/OfflineIMAP/offlineimap.git
    cd offlineimap
    python ./setup.py install
    cd ..
    easy_install --upgrade offlineimap
    Now you can automate the sync on a daily schedule:
    crontab -e
    00 22  * * *    /usr/local/bin/offlineimap -o > /dev/null 2>&1
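    For completeness, a minimal ~/.offlineimaprc sketch for a gmail account (the account name, user and local folder are placeholders; credentials handling is left out):

```
[general]
accounts = gmail

[Account gmail]
localrepository = local
remoterepository = remote

[Repository local]
type = Maildir
localfolders = ~/Maildir

[Repository remote]
type = Gmail
remoteuser = user@gmail.com
```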

    Mutt is the ultimate email client

    I have been using mutt since 1995 and still use it (less extensively though). Please refer to the following good links in order to have it running smoothly with gmail: consolify your gmail with mutt and the homely mutt.

    Here are the tools that I installed:

    aptitude install mutt-patched urlview muttprint muttprint-manual w3m notmuch par
    easy_install goobook

    Sync your ggdrive

    Various tools exist to backup your google drive: I started with grive(2) and now I am using rclone.

    grive method

    Under linux you need grive to sync your ggdrive folders, which is handy for backups.
    Installation is not that trivial since it depends on yajl, for which there is no debian package (yet). Here is what I had to do:

    git clone https://github.com/lloyd/yajl
    cd yajl
    ./configure
    cmake .
    make
    checkinstall
    dpkg -i yajl_20140719-1_armhf.deb

    Then for grive itself:
    • get grive source code and install some pre-requisite packages for compilation:
    git clone https://github.com/vitalif/grive2
    cd grive2
    apt install git cmake libgcrypt11-dev libjson0-dev libcurl4-openssl-dev libexpat1-dev libboost-filesystem-dev libboost-program-options-dev libboost-all-dev build-essential automake autoconf libtool pkg-config libcurl4-openssl-dev intltool libxml2-dev libgtk2.0-dev libnotify-dev libglib2.0-dev libevent-dev checkinstall libqt4-dev
    • patch the source code following this diff
    • now compile and install:
    cmake .
    make
    cp ./grive/grive /usr/local/bin
    • get setup with cd /volume1/backup/ggdrive && grive -a
    • automate the backup (download only to avoid conflicts)
    crontab -e 
    00 22 * * * cd /volume1/backup/ggdrive && /usr/local/bin/grive -f > /dev/null 2>&1

    rclone way

    I have now switched to rclone since it supports the google docs formats (i.e. it converts them to office formats during sync).
    apt install golang
    mkdir $HOME/work
    export GOPATH=$HOME/work
    go get -u -v github.com/ncw/rclone
    cp $HOME/work/bin/rclone /usr/local/bin
    rclone config
    At this point please follow rclone wiki instructions for proper configuration.
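    Once configured, the download can be automated just like the grive one; a crontab sketch (the remote name "gdrive" and the destination path are assumptions, rclone copy only downloads and never deletes locally):

```
00 22 * * * /usr/local/bin/rclone copy gdrive: /volume1/backup/ggdrive >/dev/null 2>&1
```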

    Tiny tiny RSS server

    In order to have centralized rss feeds one can install the tt-rss server:

    apt install mysql-server mysql-client php5 php5-mysql php5-curl
    cd /var/www/html
    git clone https://tt-rss.org/git/tt-rss.git tt-rss
    cd tt-rss
    vi config.php
    chmod -R 777 cache/images cache/upload/ cache/export/ cache/js/ feed-icons lock
    cd ..
    chown -R www-data:www-data tt-rss
    # secure directories
    vi /etc/apache2/sites-available/0443-main
    <Directory /var/www/html/tt-rss/cache>
      Require all denied
    </Directory>

    <Directory /var/www/html/tt-rss>
      <Files "config.php">
        Require all denied
      </Files>
    </Directory>
    # FOR UPDATE git pull origin master
    chown -R www-data:www-data tt-rss
    diff config.php-dist config.php # manually merge the new configs
    # FOR UPDATE ./update.php --update-schema (su www-data -c "/var/www/html/tt-rss/update.php --update-schema")
    # FOR TESTING ./update.php --feeds --quiet
    • reload apache2 /etc/init.d/apache2 reload
    • open https://www.website.org/tt-rss/install for configuration then login as admin:password and change the password before defining new users
    To automate start of the service at bootup create /etc/init.d/tt-rss and /etc/default/tt-rss scripts following https://github.com/biapy/howto.biapy.com/tree/master/web-applications/tiny-tiny-rss example:

    cat /etc/default/tt-rss
    ## Defaults for Tiny Tiny RSS update daemon init.d script

    # Set DISABLED to 1 to prevent the daemon from starting.
    DISABLED=0

    # Emplacement of your Tiny Tiny RSS installation.
    TTRSS_PATH="/var/www/html/tt-rss"

    # Set FORKING to 1 to use the forking daemon (update_daemon2.php) instead of
    # the standard one.
    # This option is only available for Tiny Tiny RSS 1.2.20 and over.
    FORKING=0
    #EOF

    cat /etc/init.d/tt-rss
    #! /bin/bash
    ### BEGIN INIT INFO
    # Provides:          ttrss-DOMAIN
    # Required-Start:    $local_fs $remote_fs networking mysql
    # Required-Stop:     $local_fs $remote_fs
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Tiny Tiny RSS update daemon for DOMAIN
    # Description:       Update the Tiny Tiny RSS subscribed syndication feeds.
    ### END INIT INFO

    # Author: Pierre-Yves Landuré <pierre-yves@landure.org>

    # Do NOT "set -e"

    # PATH should only include /usr/* if it runs after the mountnfs.sh script
    PATH="/sbin:/usr/sbin:/bin:/usr/bin"
    DESC="Tiny Tiny RSS update daemon"
    NAME="$(command basename "$(command readlink -f "${0}")")"
    DISABLED=0
    FORKING=0

    # Read configuration variable file if it is present
    [ -r "/etc/default/${NAME}" ] && . "/etc/default/${NAME}"

    DAEMON_SCRIPT="update.php --daemon"

    if [ "${FORKING}" != "0" ]; then
            DAEMON_SCRIPT="update_daemon2.php"
    fi

    DAEMON="/usr/bin/php"
    DAEMON_ARGS="${TTRSS_PATH}/${DAEMON_SCRIPT}"
    DAEMON_DIR="${TTRSS_PATH}"
    PIDFILE="/var/run/${NAME}.pid"
    SCRIPTNAME="/etc/init.d/${NAME}"
    USER="www-data"
    GROUP="www-data"

    # Exit if the package is not installed
    [ -x "${DAEMON}" ] || exit 0

    # Load the VERBOSE setting and other rcS variables
    . /lib/init/vars.sh
    # Define LSB log_* functions.
    # Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
    . /lib/lsb/init-functions

    if [ "${DISABLED}" != "0" -a "${1}" != "stop" ]; then
            command log_warning_msg "Not starting ${DESC} - edit /etc/default/tt-rss-DOMAIN and change DISABLED to be 0.";
            exit 0;
    fi

    #
    # Function that starts the daemon/service
    #
    do_start()
    {
            # Return
            #   0 if daemon has been started
            #   1 if daemon was already running
            #   2 if daemon could not be started
            command start-stop-daemon --start --make-pidfile --chuid "${USER}" --group "${GROUP}" --background --quiet --chdir "${DAEMON_DIR}" --pidfile "${PIDFILE}" --exec "${DAEMON}" --test > /dev/null \
                    || return 1

            command start-stop-daemon --start --make-pidfile --chuid "${USER}" --group "${GROUP}" --background --quiet --chdir "${DAEMON_DIR}" --pidfile "${PIDFILE}" --exec "${DAEMON}" -- \
                    ${DAEMON_ARGS} \
                    || return 2
            # Add code here, if necessary, that waits for the process to be ready
            # to handle requests from services started subsequently which depend
            # on this one.  As a last resort, sleep for some time.
    }

    #
    # Function that stops the daemon/service
    #
    do_stop()
    {
            # Return
            #   0 if daemon has been stopped
            #   1 if daemon was already stopped
            #   2 if daemon could not be stopped
            #   other if a failure occurred
            command start-stop-daemon --stop --make-pidfile --user "${USER}" --group "${GROUP}" --quiet --retry=TERM/1/KILL/5 --pidfile "${PIDFILE}"
            RETVAL="$?"
            [ "${RETVAL}" = 2 ] && return 2
            # Wait for children to finish too if this is a daemon that forks
            # and if the daemon is only ever run from this initscript.
            # If the above conditions are not satisfied then add some other code
            # that waits for the process to drop all resources that could be
            # needed by services started subsequently.  A last resort is to
            # sleep for some time.
            command start-stop-daemon --stop --quiet --user "${USER}" --group "${GROUP}" --oknodo --retry=0/1/KILL/5 --pidfile "${PIDFILE}"
            [ "$?" = 2 ] && return 2
            # Many daemons don't delete their pidfiles when they exit.
            command rm -f "${PIDFILE}"
            return "${RETVAL}"
    }


    case "${1}" in
      start)
            [ "${VERBOSE}" != no ] && log_daemon_msg "Starting ${DESC}" "${NAME}"
            do_start
            case "$?" in
                    0|1) [ "${VERBOSE}" != no ] && log_end_msg 0 ;;
                    2) [ "${VERBOSE}" != no ] && log_end_msg 1 ;;
            esac
            ;;
      stop)
            [ "${VERBOSE}" != no ] && log_daemon_msg "Stopping ${DESC}" "${NAME}"
            do_stop
            case "$?" in
                    0|1) [ "${VERBOSE}" != no ] && log_end_msg 0 ;;
                    2) [ "${VERBOSE}" != no ] && log_end_msg 1 ;;
            esac
            ;;
      restart|force-reload)
            #
            # If the "reload" option is implemented then remove the
            # 'force-reload' alias
            #
            log_daemon_msg "Restarting ${DESC}" "${NAME}"
            do_stop
            case "$?" in
              0|1)
                    do_start
                    case "$?" in
                            0) log_end_msg 0 ;;
                            1) log_end_msg 1 ;; # Old process is still running
                            *) log_end_msg 1 ;; # Failed to start
                    esac
                    ;;
              *)
                    # Failed to stop
                    log_end_msg 1
                    ;;
            esac
            ;;
      *)
            echo "Usage: ${SCRIPTNAME} {start|stop|restart|force-reload}" >&2
            exit 3
            ;;
    esac
    #EOF

    Add in /root/chroot-services.pl in services started /etc/init.d/mysql and /etc/init.d/tt-rss 
    • For an update of tt-rss code just do:
    cd /var/www/html/tt-rss
    sudo git pull origin master
    sudo chown -R www-data:www-data .
    • In order to add a user go through admin account configuration menu.

    Automount usb disk on debian domain

    This section is still WIP. The lead that I found: follow http://forum.synology.com/enu/viewtopic.php?f=90&t=29677 by modifying /etc/rc.scanusbdev, /etc.defaults/rc.scanusbdev and /usr/syno/hotplug.d/default/ in the synology domain.

    Samba: get rid of mangled names

    Samba does not like special characters (>, <, *, :, ", ?, \, and |) and presents such file names to samba clients in "mangled" form...
    Find the files that will be mangled by evil samba:
    find . | grep "[\*\?\:<>\\\"\|]"
    Perform the renaming (this can be automated via a daily cron job):
    find . -type d -name \*"[\*\?\:<>\\\"\|]"\* -print0 | xargs -0 rename 's/[\*\?\:<>\\\"\|]/_/g'
    find . -type f -name \*"[\*\?\:<>\\\"\|]"\* -print0 | xargs -0 rename 's/[\*\?\:<>\\\"\|]/_/g'
    #touch toto\|tutu.mkv tata\<tutu.mkv titi\|tutu.mkv tete\"tutu.mkv
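    If the perl rename utility is not available, the same cleanup can be sketched with plain tr; here is a demo in a throwaway directory (GNU tr pads the replacement set by repeating its last character):

```shell
# Create a scratch dir with samba-hostile file names and sanitize them with tr
tmp=$(mktemp -d)
cd "$tmp"
touch 'toto|tutu.mkv' 'tata<tutu.mkv'
for f in *; do
  clean=$(printf '%s' "$f" | tr '*?:<>\\"|' '_')  # map each evil char to _
  [ "$f" != "$clean" ] && mv -- "$f" "$clean"
done
ls
```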
    NOT WORKING for me(tm): some claim that it can be fixed through special parameters in the samba config files, but I never managed to get it to work because of the lack of support in the samba clients that I use on embedded devices. Anyway, the way to go would have been to add the following in the [global] section of /usr/syno/etc.defaults/smb.conf and /usr/syno/etc/smb.conf:
    mangled names = no
    mangled map = (: _)
    Restart samba
    /usr/syno/etc/rc.sysv/S80samba.sh restart

    Remove all nfo and posters files

    find . \( -name \*.nfo -o -name \*archos.jpg \) -exec rm {} \; 
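    Beware of find's operator precedence here: without parentheses around the two -name predicates, -exec binds only to the second one and the .nfo files survive. A quick check of the grouped form in a scratch dir:

```shell
# Scratch dir: the grouped form removes both kinds of files and keeps the rest
tmp=$(mktemp -d)
cd "$tmp"
touch episode.nfo cover-archos.jpg movie.mkv
find . \( -name '*.nfo' -o -name '*archos.jpg' \) -exec rm {} \;
ls   # movie.mkv
```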

    Debian upgrade process

    • Wheezy to jessie debian upgrade
    cp /etc/apt/sources.list{,.wheezy-bak}
    sed -i -e 's/ \(stable\|wheezy\)/ jessie/ig' /etc/apt/sources.list
    apt update
    apt --download-only dist-upgrade
    apt dist-upgrade
    cd /etc/apache2/sites-available
    mv 0080-main 0080-main.conf
    mv 0443-main 0443-main.conf
    a2ensite 0080-main
    a2ensite 0443-main
    service apache2 reload

    Getting away from a fixed IP survival guide (sigh)

    Since my beloved ISP (free) was taking too long to provide me with fiber optic access (yes, xDSL is too slow), I switched back to the evil Orange, which tricked me with an attractive web offer.
    Of course I had to give up my fixed IP address in this process which was a big problem for me.
    In order to overcome this problem I had to go through the following steps:
    • modify gandi DNS entry to redirect www.courville.org to ghs.googlehosted.com using CNAME
    • have google apps to map my sub site to https://sites.google.com/a/courville.org/courville in google apps console -> applications -> sites -> mapping
    • create in gandi a DNS entry for my home IP address, home.courville.org, with a small TTL (e.g. 300s, i.e. 5 min)
    • have this record updated automatically with the gandyn python3 script, using a gandi API production key, triggered through a crontab job
    git clone https://github.com/Chralu/gandyn
    apt install python3-aioxmlrpc
    cd gandyn
    python3 ./setup.py install

    vi /etc/gandyn.conf
    #API key generated by Gandi
    API_KEY = 'USEGANDIAPIKEYFORYOURDOMAIN'
    #Name of the domain to update
    DOMAIN_NAME = 'courville.org'
    #Time to live of the updated record
    TTL = 300
    #Filters used to find the record to update.
    #By default, the updated record is "@   A   xxx.xxx.xxx.xxx"
    #Where 'xxx.xxx.xxx.xxx' is the updated value
    #RECORD = {'type':'A', 'name':'@'}
    RECORD = {'type':'A', 'name':'home'}
    #Log level of the script. Values are :
    #   logging.DEBUG
    #   logging.INFO
    #   logging.WARNING
    #   logging.ERROR
    #   logging.CRITICAL
    LOG_LEVEL = logging.DEBUG
    #Path of the log file
    LOG_FILE = '/var/log/gandyn.log'

    crontab -e
    */5 *   * * *                   gandyn.py --config /etc/gandyn.conf >/dev/null 2>&1
    • since I am a happy synology user, use the free synology dyndns service and make courville.synology.me point to my home
    • change svn repositories from www to home: svn info | grep "Repository Root" | cut -d ' ' -f 3 | sed "s/www/home/g" | xargs svn relocate

    ftp server

    Install proftpd launched from inetd:

    apt install proftpd-basic
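    The package defaults to standalone mode; to actually run it from inetd as intended here, a sketch (assuming openbsd-inetd and the default debian paths) is to set ServerType inetd in /etc/proftpd/proftpd.conf and add to /etc/inetd.conf:

```
ftp     stream  tcp     nowait  root    /usr/sbin/proftpd proftpd
```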

    Android IPv6 nightmare

    It seems that android (check with getprop | grep dns) sometimes chooses an IPv6 resolver for net.dns1, at least with the awful Orange Livebox.
    Even if you disable IPv6 support in the web interface at 192.168.1.1, it gets re-enabled periodically (yay, sigh, sniff): you thus need to disable it again whenever it stops working.
    To disable IPv6 support periodically, you can automate the task with the following script added to crontab:
    #!/bin/sh
    curl -o /tmp/livebox_context -X POST -i -H "Content-type: application/json" -c /tmp/livebox_cookies.txt "http://192.168.1.1/authenticate?username=admin&password=YOURPASSWORDHERE"
    ID=$(tail -n1 /tmp/livebox_context | sed 's/{"status":0,"data":{"contextID":"//1'| sed 's/"}}//1')
    curl -i -b /tmp/livebox_cookies.txt -X POST -H 'Content-Type: application/json' -H 'X-Context: '$ID'' -d '{"parameters":{"Enable":false}}' http://192.168.1.1/sysbus/NMC/IPv6:set
    rm /tmp/livebox_context /tmp/livebox_cookies.txt
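    The contextID extraction above depends on the exact JSON layout; a slightly more tolerant variant with grep/cut can be sketched as follows (demoed on a canned response; in the real script the file comes from the curl call above):

```shell
# Sample of the livebox authenticate response (shape as observed above)
printf '{"status":0,"data":{"contextID":"abc123"}}\n' > /tmp/livebox_sample
# Pull out the contextID value regardless of surrounding fields
ID=$(grep -o '"contextID":"[^"]*"' /tmp/livebox_sample | cut -d '"' -f 4)
echo "$ID"   # abc123
rm /tmp/livebox_sample
```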

    More fun stuff can be done with livebox via their sysbus interface and there is even a python framework for this: https://github.com/rene-d/sysbus
    On your server, also stop the ipv6 side of isc-dhcp-server, which confuses android:

    netstat -apn|grep -i dhcp
    service isc-dhcp-server stop
    vi /etc/default/isc-dhcp-server
    OPTIONS="-4"
    INTERFACESv4="eth0"

    Also you need to disable ipv6 on the synology network interface...

    Home automation system: domoticz

    Let's install an open source home automation system. I picked domoticz. I bought an RFXCOM RFXtrx433E USB HA controller and an Aeotec Z-Stick S2 to play with the various sensors/actuators of my home.

    apt install build-essential nano cmake git libboost-dev libboost-thread-dev libboost-system-dev libsqlite3-dev curl libcurl4-openssl-dev libssl-dev libusb-dev zlib1g-dev python3-dev openzwave usbutils
    git clone https://github.com/domoticz/domoticz.git domoticz
    cd domoticz
    #cmake -DCMAKE_BUILD_TYPE=Release .
    cmake -DCMAKE_BUILD_TYPE=Release -DUSE_PYTHON_PLUGINS=NO CMakeLists.txt
    make


    end
    Attachment: optware-armadaxp.diff (62k), Marc de Courville, Jul 5, 2014