I have not been able to find a good ipkg repository for armadaxp (armhf), and using the usual softfp arm ones resulted in compatibility (segfault) issues with /lib/libc.so.
Nevertheless, if you want to recompile all the ipkg packages, you will need the armadaxp toolchain from synology:
This has to be done on a debian linux host or virtual machine (note that since these first install notes, the system has been upgraded from wheezy to jessie):
On the synology, untar the created archive after temporarily enabling telnet access in the web interface (control panel, terminal section) and logging in to the synology station via the telnet command:
Get the synology default linux ssh to respond on port 2222 and the synology debian one on port 22, so that we have created "two domains": i) the synology one and ii) the debian one. For this you need to enable ssh in the synology web interface and perform the following modifications (note that starting with DSM v5.1 you also need to modify /etc/synoinfo.conf):
Then disable telnet service on synology web interface control panel in terminal section.
Now you need to recreate all the synology users in the debian domain using the same uid and gid; the following command can help you:
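No exact command survives here, but a helper in that spirit can be sketched: turn a passwd-format file into adduser commands that preserve uid/gid. The sample file, the 1024-uid cutoff for "regular" users, and the adduser flags are assumptions to adapt.

```shell
#!/bin/sh
# Sketch: emit (do not run) one adduser line per regular user (uid >= 1024),
# preserving uid and gid. A sample file stands in for the synology /etc/passwd.
cat > /tmp/syno-passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/sh
user1:x:1026:100:user1:/var/services/homes/user1:/sbin/nologin
user2:x:1027:100:user2:/var/services/homes/user2:/sbin/nologin
EOF
awk -F: '$3 >= 1024 {
  printf "adduser --uid %s --gid %s --disabled-password --gecos \"\" %s\n", $3, $4, $1
}' /tmp/syno-passwd.sample > /tmp/recreate-users.sh
cat /tmp/recreate-users.sh
```

Review the generated /tmp/recreate-users.sh before running it on the debian side (the target gid must already exist there for --gid to succeed).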
Now create a special group in the synology web interface: mediagroup, to which all the users that you wish to grant access to media files will belong, and identify which gid it was assigned through this command:
#synology domain
grep mediagroup /etc/group
mediagroup:x:65536:user1,user2
In the debian domain create the same gid:
#debian domain
addgroup --gid 65536 mediagroup
Install an apache server coexisting with synology's internal http server
Synology domain: DSM 6.x way
In order to get coexistence you need to restrict synology's nginx server to listen on ports 8080 and 8443 instead of 80 and 443:
#/sbin/initctl stop nginx
sed -e "s/listen 80 default_server;$/listen 8080 default_server;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen \[::\]:80 default_server;$/listen [::]:8080 default_server;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen 443 default_server ssl;$/listen 8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen \[::\]:443 default_server ssl;$/listen [::]:8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
#/usr/syno/sbin/synoservicecfg --restart nginx
nginx -s reload
#/sbin/initctl start nginx
Note that nginx.conf is generated on the fly by /usr/syno/etc.defaults/rc.sysv/nginx-conf-generator.sh
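Before touching the live file, the sed rewrites above can be rehearsed on a throwaway copy of the listen directives (the sample lines below mirror the DSM 6.x defaults and are an assumption):

```shell
#!/bin/sh
# Dry-run the port rewrite on a throwaway copy of the listen directives.
cat > /tmp/nginx.conf.sample <<'EOF'
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 default_server ssl;
    listen [::]:443 default_server ssl;
EOF
sed -e "s/listen 80 default_server;$/listen 8080 default_server;/g" \
    -e "s/listen \[::\]:80 default_server;$/listen [::]:8080 default_server;/g" \
    -e "s/listen 443 default_server ssl;$/listen 8443 default_server ssl;/g" \
    -e "s/listen \[::\]:443 default_server ssl;$/listen [::]:8443 default_server ssl;/g" \
    -i /tmp/nginx.conf.sample
cat /tmp/nginx.conf.sample
```

If the output shows 8080/8443 everywhere, the same expressions are safe to apply to /etc/nginx/nginx.conf.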
Synology domain: DSM 5.x old way
In order to get coexistence you need to restrict synology's "user" apache server to listen on ports 8080 and 8443 instead of 80 and 443:
#syno domain
vi /etc/httpd/conf/httpd.conf-user /etc.defaults/httpd/conf/httpd.conf-user
Listen 8080
#EOF
vi /etc/httpd/conf/extra/httpd-ssl.conf-user /etc.defaults/httpd/conf/extra/httpd-ssl.conf-user
Listen 8443
#EOF
/usr/syno/sbin/synoservicecfg --restart httpd-user
Debian domain
What follows is to be performed in the debian domain: - disable the default apache site before configuring another one and enable both http and https (ports 80 and 443), but limit the listen to 127.0.0.1:443 for the sslh multiplexer to work:
a2dissite 000-default
cd /etc/apache2/sites-available/
cp default 0080-main
cp default-ssl 0443-main
a2ensite 0080-main
a2ensite 0443-main
a2enmod ssl
vi /etc/apache2/ports.conf
<IfModule ssl_module>
Listen 127.0.0.1:443
</IfModule>
<IfModule mod_gnutls.c>
Listen 127.0.0.1:443
</IfModule>
#EOF
service apache2 reload
- do not forget to add this new service to the ones to be launched at debian chroot startup by editing /root/chroot-services.pl and adding in the my @service array: '/etc/init.d/apache2',
- install the sslh multiplexer to be able to use ssh over port 443 when you are in an unfriendly environment such as an airport
apt install sslh
vi /etc/default/sslh
RUN=yes
DAEMON_OPTS="--user sslh --listen 192.168.0.2:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
#EOF
service sslh restart
Install self-signed custom certificates for your domain on the synology domain
There is now a dedicated script to generate your own self-signed certificate; all you need to do is:
cd /usr/syno/etc/ssl
vi mkcert.sh
FR
IDF
Paris
Courville.org
Courville
www.courville.org
nospam@courville.org
EOT
./mkcert.sh
An alternative to avoid self signed certificates is to use the letsencrypt way.
Install proper signed certificates with letsencrypt
- In order to use the letsencrypt free ssl certificate service, I went through the following manual steps to avoid any errors
apt install python-certbot-apache letsencrypt
/usr/bin/certbot certonly -a webroot -i apache -w /var/www/html -n -d home.courville.org
- In another terminal, follow the above command's output before hitting enter
webdir=/var/www/html
mkdir -p $webdir/.well-known/acme-challenge
printf "%s" COMPLEXSTRINGFROMABOVECOMMAND > $webdir/.well-known/acme-challenge/COMPLEXSTRINGFROMABOVECOMMAND
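The challenge layout can be rehearsed locally with a placeholder token (everything below is throwaway; the real token comes from the certbot output):

```shell
#!/bin/sh
# Rehearse the webroot challenge layout with a placeholder token.
webdir=/tmp/webroot-demo
token=PLACEHOLDERTOKEN
mkdir -p "$webdir/.well-known/acme-challenge"
printf "%s" "$token" > "$webdir/.well-known/acme-challenge/$token"
# The ACME server would fetch http://yourdomain/.well-known/acme-challenge/<token>
cat "$webdir/.well-known/acme-challenge/$token"
```

The key point being that the file name and its content are both the token, with no trailing newline (hence printf rather than echo).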
- Now the certificates are stored in
/etc/letsencrypt/live/home.courville.org/
and you can include them in the apache2 configuration by editing /etc/apache2/sites-available/0443-main
SSLCertificateFile /etc/letsencrypt/live/home.courville.org/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/home.courville.org/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/home.courville.org/fullchain.pem
- reload apache2 config with
service apache2 reload
- to renew certificate use
certbot renew
but it needs to be done manually through /usr/bin/certbot certonly -a webroot -i apache -w /var/www/html -n -d home.courville.org again and cannot be automated via crontab. Automatic renewal of the certificate should work with:
sudo /usr/bin/certbot certonly -a webroot -i apache -w /var/www/html -n -d home.courville.org
Install self-signed custom certificates for your domain on the debian domain
- generate the certificate authority key ca.key (remember the passphrase)
cd /etc/ssl
openssl genrsa -des3 -out ca.key 2048
- generate the certificate signing request:
openssl req -nodes -newkey rsa:2048 -sha256 -keyout ca.key -out ca.csr
- generate the final certificate authority key (valid 10 years):
openssl x509 -days 3650 -signkey ca.key -in ca.csr -req -out ca.crt
- generate the server-certificate key:
openssl genrsa -out server.key 2048
- generate the certificate signing request; note that the CommonName needs to match your DNS domain name (wildcards are allowed, e.g. *.yourdomain.com):
openssl req -nodes -newkey rsa:2048 -sha256 -keyout server.key -out server.csr
- generate the server certificate
openssl x509 -days 3650 -CA ca.crt -CAkey ca.key -set_serial 01 -in server.csr -req -out server.crt
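The whole chain can be exercised in a scratch directory and checked with openssl verify; -nodes and -subj are used here to skip the passphrase and the interactive prompts, and all the names are throwaway values:

```shell
#!/bin/sh
# Build a throwaway CA + server certificate and verify the chain.
set -e
d=$(mktemp -d)
cd "$d"
# CA key + CSR (no passphrase, batch subject), then self-signed CA certificate
openssl req -nodes -newkey rsa:2048 -sha256 -keyout ca.key -out ca.csr \
  -subj "/C=FR/O=Demo/CN=Demo CA"
openssl x509 -days 3650 -signkey ca.key -in ca.csr -req -out ca.crt
# server key + CSR (the CN must match your DNS name in real use)
openssl req -nodes -newkey rsa:2048 -sha256 -keyout server.key -out server.csr \
  -subj "/C=FR/O=Demo/CN=*.yourdomain.com"
# sign the server CSR with the CA
openssl x509 -days 3650 -CA ca.crt -CAkey ca.key -set_serial 01 \
  -in server.csr -req -out server.crt
openssl verify -CAfile ca.crt server.crt
```

openssl verify reporting OK confirms server.crt really chains to ca.crt, which is worth checking before deploying the files.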
- be sure that apache does use them by editing /etc/apache2/sites-available/0443-main and checking that the right paths are used
SSLCertificateFile /etc/ssl/server.crt
SSLCertificateKeyFile /etc/ssl/server.key
- now install the same certificates on your synology; the following actions need to be performed in the synology domain (not the debian one)
# synology domain not debian one
cd /usr/syno/etc/ssl
mkdir bak
cp -r ssl.crt bak
cp -r ssl.csr bak
cp -r ssl.key bak
DEB=/volume1/debian
cp $DEB/etc/ssl/ca.crt ssl.crt
cp $DEB/etc/ssl/server.crt ssl.crt
cp $DEB/etc/ssl/ca.csr ssl.csr
cp $DEB/etc/ssl/server.csr ssl.csr
cp $DEB/etc/ssl/ca.key ssl.key
cp $DEB/etc/ssl/server.key ssl.key
cp $DEB/etc/ssl/ca.crt /volume1/public
Note that now there is a script to regenerate your synology certificates
cd /usr/syno/etc/ssl/
./mkcert.sh
Protect via https some internal http services
Sometimes you have a service with a web interface on a specific port (e.g. http://192.168.0.2:1234/servicehttp) that does not support https and that you do not want to expose directly to the outside world, in order not to have your password sent in the clear. To overcome this you can use the proxy feature of apache by editing /etc/apache2/sites-available/0443-main the following way:
SSLProxyEngine on
ProxyRequests Off
ProxyVia Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
<Location /servicehttp>
Order deny,allow
Allow from all
AuthName "Private area"
AuthType Basic
AuthUserFile /etc/apache2/htpasswd
Require valid-user
ProxyPass http://192.168.0.2:1234/servicehttp
ProxyPassReverse http://192.168.0.2:1234/servicehttp
SSLRequireSSL
</Location>
and enabling apache modules proxy and rewrite:
a2enmod proxy rewrite proxy_http
service apache2 restart
Create login and password using the following command:
htpasswd -c /etc/apache2/htpasswd username
Perform some https redirections towards synology default services without opening the ports on router
If you do not want to open some ports on your router and forward them to your synology NAS to enable remote access (e.g. 7001 and 5001) you can use the proxy feature of apache to do it by simply editing /etc/apache2/sites-available/0080-main the following way:
ProxyPass /filestation https://192.168.0.2:7001/
ProxyPassReverse /filestation/ https://192.168.0.2:7001/
ProxyPass /synology https://192.168.0.2:5001/
ProxyPassReverse /synology/ https://192.168.0.2:5001/
Redirect your main web server towards your own google site or some secure links
If you want to redirect all http request towards your web server to another one (e.g. google sites) you can use the rewrite feature of apache by simply editing /etc/apache2/sites-available/0080-main the following way:
RedirectMatch ^/$ http://sites.google.com/a/website.org/website/
Redirect permanent /filestation https://www.website.org/filestation
Install subversion server
Install the subversion server, enable the apache webdav modules and create the repository directory:
apt install libapache2-svn subversion subversion-tools
a2enmod dav dav_svn
service apache2 restart
mkdir -p /volume1/svn/
chown -R www-data:www-data /volume1/svn/
Enable svn webdav service in /etc/apache2/sites-available/0443-main
<Location /svn>
DAV svn
SVNParentPath /volume1/svn
Order deny,allow
Allow from all
AuthName "Subversion repository"
AuthType Basic
AuthUserFile /etc/apache2/htpasswd
Require valid-user
SSLRequireSSL
</Location>
Install git server
Do not use webdav/apache: use the ssh backend. Here is how to create a repository:
apt install git-core
adduser git
su - git
cd /home/git
mkdir -p depot.git
cd depot.git
git init --bare
git update-server-info
In order to import a project (existing code) into depot.git from a remote site where code is stored in depot directory:
cd depot
git init
git add .
git commit -m "First commit"
git remote add origin git@gitserver.com:depot.git
git remote -v
git push -u origin master
In order to clone the project:
git clone git@gitserver.com:depot.git
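The bare-repository workflow can be rehearsed entirely locally, a filesystem path standing in for git@gitserver.com (all paths below are throwaway):

```shell
#!/bin/sh
# Local rehearsal of the bare-repo workflow: create, import, push, clone.
set -e
base=$(mktemp -d)
git init --bare "$base/depot.git"
# import an existing "depot" directory
mkdir "$base/depot" && cd "$base/depot"
git init
git config user.email you@example.com && git config user.name You
echo hello > README
git add . && git commit -m "First commit"
git remote add origin "$base/depot.git"
git push -u origin HEAD
# clone it elsewhere
git clone "$base/depot.git" "$base/clone"
ls "$base/clone"
```

HEAD is pushed instead of master so the sketch works whatever the default branch name is on your git version.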
For group contributions and multiple repositories, you need to install a gitosis replacement: gitolite, or gogs for a fully personal hosted github-like experience.
Install private github: gogs
Gogs is a clone of github for private hosting of git repositories. I kept the local version of go instead of downloading a new one:
su - git
cd $HOME
mkdir repositories
echo 'export GOPATH=$HOME/go' >> $HOME/.bashrc
echo 'export PATH=$PATH:$GOPATH/bin' >> $HOME/.bashrc
source $HOME/.bashrc
go get -u github.com/gogits/gogs
cd $GOPATH/src/github.com/gogits/gogs
go build
wget https://raw.githubusercontent.com/gogits/gogs/master/scripts/mysql.sql
mysql --user=root --password=YOURPASSWORD < mysql.sql
./gogs web
Configure via the URL http://localhost:3000; then, in the gogs configuration file, make the following modification:
ROOT_URL = https://your.web.server/gogs/
Link the whole via apache2 proxy configuration:
vi /etc/apache2/sites-available/0443-main.conf
<Location /gogs>
Order deny,allow
Allow from all
AuthName "Private area"
AuthType Basic
AuthUserFile /etc/apache2/htpasswd
Require valid-user
ProxyPass http://192.168.0.2:3000/
ProxyPassReverse http://192.168.0.2:3000/
SSLRequireSSL
</Location>
Launch the service automatically:
cd /etc/init.d
wget https://raw.githubusercontent.com/gogits/gogs/master/scripts/init/debian/gogs
chmod a+rx gogs
vi gogs
#WORKINGDIR=/home/git/gogs
WORKINGDIR=/home/git/go/src/github.com/gogits/gogs
/etc/init.d/gogs start
Add the gogs service to the chroot debian domain: add the appropriate entry to /root/chroot-services.pl
Install other sources of packages on synology web interface
Synocommunity is a great source of packages to add to the ones provided by synology.
Interesting packages can be found, like transmission, which is a good replacement for synology's download station.
Note that I do not use these packages since I prefer to stick with official debian ones.
Get ready to compile native
Install all the tools you need:
apt install build-essential
Rsync version that detects renaming
If you reorder your files and photos, rsync would normally take ages to perform the backup, but there is a patch, not integrated in mainstream, that deals with that:
cd /volume1/@src/
wget "http://www.samba.org/ftp/rsync/src/rsync-3.1.1.tar.gz"
wget "http://www.samba.org/ftp/rsync/src/rsync-patches-3.1.1.tar.gz"
#wget "https://bugzilla.samba.org/attachment.cgi?id=7435" -O detect-renamed.diff
tar zxf rsync-3.1.1.tar.gz
tar zxf rsync-patches-3.1.1.tar.gz
#cp detect-renamed.diff rsync-3.0.9/patches
cd rsync-3.1.1
patch -p1 <patches/detect-renamed.diff
patch -p1 <patches/detect-renamed-lax.diff
./configure --prefix=/usr/local
make install
strip /usr/local/bin/rsync
find /usr/local -mmin -5 > ../install-rsync-files.lst
Now you can use the --detect-renamed option: rsync -av --detect-renamed
Backup your data with history: rsnapshot
rsnapshot is an rsync based backup system that I like: it is simple yet efficient.
apt install rsnapshot
- Note that the debian domain rsync will lack one option that is special to synology's /usr/syno/bin/rsync: the --syno-acl option, used to preserve windows ACLs in the backup process (not my case, because I want the rename detection feature explained above).
My configuration in /etc/rsnapshot.conf is the following:
config_version 1.2
snapshot_root /volumeUSB1/usbshare/backup/rsnapshot
cmd_cp /bin/cp
cmd_rm /bin/rm
#cmd_rsync /usr/syno/bin/rsync
cmd_rsync /usr/bin/rsync
cmd_ssh /opt/bin/ssh
cmd_du /usr/bin/du
interval daily 7
interval weekly 4
interval monthly 6
link_dest 1
verbose 2
loglevel 3
logfile /volume1/backup/rsnapshot.log
#rsync_long_args -av --progress --delete --numeric-ids --relative --delete-excluded --syno-acl
rsync_long_args -av --progress --delete --numeric-ids --relative --delete-excluded
lockfile /var/run/rsnapshot.pid
# little trick in .ssh/config to map syno to host localhost with port 2222.
WARNING: you need to enable backup services i.e. rsync on synology's web interface
backup root@syno:/ synosys/ exclude_file=/etc/incexcl-synosys
backup / debiansys/ exclude_file=/etc/incexcl-debiansys
backup / synodata/ exclude_file=/etc/incexcl-synodata
In order to backup the synology system domain, I use an alias for syno in the .ssh/config file
# for rsnapshot to access localhost:2222
host syno
port 2222
hostname localhost
You will also need to enable rsync (network backup) service on synology web interface through main menu->backup & replication->backup services specifying SSH encryption port as 2222.
You can notice that, to be safe, I am performing the backup to an external USB disk attached to the synology (located in /volumeUSB1/usbshare).
All the directories (include and exclude) are specified in the /etc/incexcl-debiansys file which has the following format:
+ /etc/
+ /root/
+ /usr/
+ /usr/local/
+ /usr/local/etc/
- /usr/local/*
- /usr/*
+ /var/
+ /var/www/
+ /var/spool/
+ /var/spool/cron/
- /var/spool/*
+ /var/lib/
+ /var/lib/mysql/
- /var/lib/*
- /var/*
- /*
Synology's system configuration file tree to be backed up is stored in /etc/incexcl-synosys:
+ /etc/
+ /root/
+ /usr/
+ /usr/syno/
+ /usr/syno/etc/
+ /usr/syno/apache/
- /usr/syno/*
- /usr/*
+ /opt/
+ /opt/etc/
+ /opt/share/
+ /opt/share/lib/
- /opt/share/*
+ /opt/lib/
+ /opt/lib/ipkg/
- /opt/lib/*
- /opt/*
- /*
- Install and configure a simple mail client (heirloom-mailx) to receive notifications of failures
apt install heirloom-mailx
Edit /etc/nail.rc to reflect your smtp server and credentials (here the domain is a google apps business one):
set hold
set append
set ask
set crt
set dot
set keep
set emptybox
set indentprefix="> "
set quote
set sendcharsets=iso-8859-1,utf-8
set showname
set showto
set newmail=nopoll
set autocollapse
ignore received in-reply-to message-id references
ignore mime-version content-transfer-encoding
fwdretain subject date from to
set smtp=smtp.gmail.com:587
set smtp-use-starttls
set ssl-verify=ignore
set ssl-ca-file=/etc/ssl/certs/thawte_Primary_Root_CA.pem
set from="user@yourdomain.com"
set smtp-auth-user=user@yourdomain.com
set smtp-auth-password="YOURPASSWORD"
- Automate your backups launching rsnapshot in the crontab: issue
sudo crontab -e
to add:
#minute hour mday month wday who command
55 23 * * 0,1,2,3,4,5,6 root /usr/bin/rsnapshot -v daily || heirloom-mailx -s "bkp `date +%Y%m%d` daily" user@yourdomain.com < /volume1/backup/rsnapshot.log
0 02 * * 0 root /usr/bin/rsnapshot -v weekly || heirloom-mailx -s "bkp `date +%Y%m%d` weekly" user@yourdomain.com < /volume1/backup/rsnapshot.log
0 04 1 * * root /usr/bin/rsnapshot -v monthly || heirloom-mailx -s "bkp `date +%Y%m%d` monthly" user@yourdomain.com < /volume1/backup/rsnapshot.log
Restart cron: service cron restart
Files renaming
I use two tools that I find handy:
- perl-file-rename, which I find extremely useful and powerful since it handles regexps (it is installed by default on debian). One example of usage is renaming a series in a directory where files start with the episode number:
rename 's/^([0-9]*) - /serie-s01e$1-/g' *.mp4
rename 's/ /_/g' *
rename "s/l_homme/l\'homme/g" *
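When perl rename is not at hand, the first renaming above can be approximated with plain shell parameter expansion (the sample filenames are made up for the demonstration):

```shell
#!/bin/bash
# Approximate  rename 's/^([0-9]*) - /serie-s01e$1-/g' *.mp4  without perl rename.
d=$(mktemp -d); cd "$d"
touch "01 - pilot.mp4" "02 - escape.mp4"
for f in [0-9]*" - "*.mp4; do
  num=${f%% - *}    # leading episode number, before the first " - "
  rest=${f#* - }    # title part, after the first " - "
  mv -- "$f" "serie-s01e${num}-${rest}"
done
ls
```

The perl version remains more robust (anchored regexp, no reliance on the " - " separator appearing exactly once).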
- convmv to get rid of non UTF-8 characters
Here is an example of how to use convmv recursively on current directory to fix encoding:
convmv -f iso-8859-1 -t utf8 --notest -r .
- files with question marks really do upset samba shares, it seems; in order to remove these you can use the following renaming script:
#!/bin/bash
# -exec handles file names containing spaces, which a backtick loop would split
find . -name '*\?*' -exec rename 's/\?//g' {} +
Subtitles downloader
subliminal is a python tool that downloads subtitles from various sources for video files.
cd /root/src
apt install python python-setuptools python-pip
git clone https://github.com/Diaoul/subliminal
cd subliminal
python setup.py install
Subtitle downloading can be automated, fetching english subs first if they are the only ones available, to be replaced by french ones once available, using a little script like this one:
#!/opt/bin/bash
TV_DIR=$1
BACKTIME=$2
[ -z "$TV_DIR" ] && TV_DIR=/volume1/video/serie
[ -z "$BACKTIME" ] && BACKTIME=14
GETSUB=/opt/bin/subliminal
cd $TV_DIR
# try to get the subtitle only for 14 days to avoid exploding list of files to process
for file in $(find . -mtime -${BACKTIME} -type f \( -iname \*.avi -o -iname \*.mkv -o -iname \*.mp4 \) )
do
subtitle="${file%.*}.srt"
if [ ! -f "$subtitle" ]
then
echo processing $file
# download without .fr.srt or .en.srt to be sure it is from addic7ed in french first in -s single mode
$GETSUB -s -l fr -p addic7ed --addic7ed-username username --addic7ed-password password -q "$file" && rm "$file".en.srt
# if it failed with addic7ed, get from any other source but with .fr.srt and .en.srt
[ ! -f "$subtitle" ] && $GETSUB -l en --addic7ed-username username --addic7ed-password password -q "$file"
fi
done
If you want to update subliminal in the future just do:
cd /root/src/subliminal
git pull
python setup.py install
TV series file renamer
tvnamer is a very powerful python program that enables renaming of tv series.
cd /root/src
git clone https://github.com/dbr/tvnamer
cd tvnamer
python setup.py install
An example of user configuration of tvnamer capturing the most useful features of the tool can be found below:
cat .tvnamer.json
{
"language": "en",
"search_all_languages": false,
"always_rename": false,
"batch": true,
"episode_separator": "-",
"episode_single": "%02d",
"filename_with_date_and_episode": "%(seriesname)s-e%(episode)s-%(episodename)s%(ext)s",
"filename_with_date_without_episode": "%(seriesname)s-e%(episode)s%(ext)s",
"filename_with_episode": "%(seriesname)s-s%(seasonno)02de%(episode)s-%(episodename)s%(ext)s",
"filename_with_episode_no_season": "%(seriesname)s-e%(episode)s-%(episodename)s%(ext)s",
"filename_without_episode": "%(seriesname)s-s%(seasonno)02de%(episode)s%(ext)s",
"filename_without_episode_no_season": "%(seriesname)s-e%(episode)s%(ext)s",
"lowercase_filename": false,
"move_files_confirmation": true,
"move_files_destination": "/volume1/video/serie/%(seriesname)s/%(seriesname)s-s%(seasonnumber)02d",
"move_files_enable": true,
"multiep_join_name_with": ", ",
"normalize_unicode_filenames": false,
"recursive": false,
"custom_filename_character_blacklist": ":<>?*",
"replace_invalid_characters_with": "_",
"move_files_fullpath_replacements": [
{"is_regex": false, "match": " ", "replacement": "_"},
{"is_regex": false, "match": "_-_", "replacement": "-"},
{"is_regex": false, "match": ":", "replacement": ""},
{"is_regex": false, "match": "Serie's_name", "replacement": "Series_Name"}
],
"output_filename_replacements": [
{"is_regex": false, "match": "?", "replacement": ""},
{"is_regex": false, "match": " ", "replacement": "_"},
{"is_regex": false, "match": "_-_", "replacement": "-"},
{"is_regex": false, "match": ":", "replacement": ""},
{"is_regex": false, "match": "Serie's_name", "replacement": "Series_Name"}
],
"input_filename_replacements": [
{"is_regex": true, "match": "Serie.US", "replacement": "Serie"},
{"is_regex": true, "match": "Serie([^:])", "replacement": "Serie_expanded_name\\1"}
]
}
If you want to update tvnamer in the future just do:
cd /root/src/tvnamer
git pull
python setup.py install
Transmission tweaks
Install transmission and command line related tools via:
apt install transmission transmission-daemon transmission-cli transmission-remote-cli python-transmissionrpc
Do not forget to add this new service to the ones to be launched at debian chroot startup by editing /root/chroot-services.pl
and adding in my @service
array: '/etc/init.d/transmission-daemon',
Transmission daemon configuration files are located in /var/lib/transmission-daemon/info. Make sure to stop transmission on the synology web interface or using the commands below before editing this file, so that your modifications are taken into account:
service transmission-daemon stop
vi /var/lib/transmission-daemon/info/settings.json
service transmission-daemon start
- learn to use transmission-remote script
- in order to execute some actions at the end of a download, a script can be triggered, e.g. for moving files around based on tracker name:
"script-torrent-done-enabled": true,
"script-torrent-done-filename": "/usr/local/bin/transmission-done.sh",
With /usr/local/bin/transmission-done.sh for instance being:
#!/bin/bash
# 'TR_TORRENT_NAME'
# 'TR_TORRENT_DIR'
# 'TR_TORRENT_ID'
# 'TR_APP_VERSION'
# 'TR_TORRENT_HASH'
# 'TR_TIME_LOCALTIME'
TV_SHOW_ID="hdtv"
MEDIA_DIR="/volume1"
DOWNLOAD_DIR="${MEDIA_DIR}/download"
TV_SHOW_DIR="${MEDIA_DIR}/download/serie"
DBGF=${DOWNLOAD_DIR}/log
istvshow=false
istvshowrpk=false
isprivate=false
istrackera=false
istrackerb=false
if [ -z "${TR_TORRENT_NAME}" -o -z "${TR_TORRENT_ID}" -o -z "${TR_TORRENT_DIR}" ]
then
TR_TORRENT_ID=$1
[ -z "$1" ] && exit 1
fi
tremote ()
{
/usr/bin/transmission-remote -n USER:PASSWORD $HOST -t ${TR_TORRENT_ID} $* # 2>&1 | tee -a $DBGF
}
if [ -z "${TR_TORRENT_NAME}" -o -z "${TR_TORRENT_DIR}" ]
then
TR_TORRENT_NAME=$( tremote -t $1 -i | grep Name: | tr -s ' ' | sed 's/^ *//' | cut -s -d " " -f 2 )
TR_TORRENT_DIR=$( tremote -t $1 -i | grep Location: | tr -s ' ' | sed 's/^ *//' | cut -s -d " " -f 2)
fi
echo `date +%Y-%m-%d\ %H:%M` processing id ${TR_TORRENT_ID} file ${TR_TORRENT_NAME} in dir ${TR_TORRENT_DIR} 2>&1 | tee -a $DBGF
( tremote -i | grep "Public torrent" | grep -qi No ) && isprivate=true
( tremote -it | grep -qi "tracker.a" ) && istrackera=true
( tremote -it | grep -qi "tracker.b" ) && istrackerb=true
# if flexget sets Location to serie then it must be a tvshow
( tremote -i | grep Location | grep -q $TV_SHOW_DIR ) && istvshow=true
if ( echo "${TR_TORRENT_NAME}" | grep -Eqi "${TV_SHOW_ID}" )
then
istvshow=true
( ( echo "${TR_TORRENT_NAME}" | grep -Eqi "REPACK" ) || ( echo "${TR_TORRENT_NAME}" | grep -Eqi "PROPER" ) ) && istvshowrpk=true || istvshowrpk=false
fi
if $isprivate
then
# private torrent
if $istrackera
then
echo "trackera"
mkdir -p "$DOWNLOAD_DIR/trackera"
tremote --move "$DOWNLOAD_DIR/trackera"
elif $istrackerb
then
echo "trackerb"
mkdir -p "$DOWNLOAD_DIR/trackerb"
tremote --move "$DOWNLOAD_DIR/trackerb"
fi
else
# public torrent
if $istvshow
then
echo "tvshow detected, move and remove ${TR_TORRENT_DIR}/${TR_TORRENT_NAME}" "${TV_SHOW_DIR}" 2>&1 | tee -a $DBGF
tremote --move "$DOWNLOAD_DIR/serie"
tremote -r
else
echo "other public"
tremote -S
mkdir -p "$DOWNLOAD_DIR/public"
tremote --move "$DOWNLOAD_DIR/public"
fi
fi
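The name-based routing at the heart of the script can be pulled out into a small helper for testing; classify is a made-up name and the sample torrent names are invented:

```shell
#!/bin/sh
# Standalone sketch of the name-based routing used in transmission-done.sh
TV_SHOW_ID="hdtv"
classify () {   # hypothetical helper: prints a category for a torrent name
  name=$1
  if echo "$name" | grep -Eqi "${TV_SHOW_ID}"; then
    if echo "$name" | grep -Eqi "REPACK|PROPER"; then
      echo tvshow-repack
    else
      echo tvshow
    fi
  else
    echo other
  fi
}
classify "Some.Serie.S01E02.HDTV.x264"
classify "Some.Serie.S01E02.REPACK.HDTV.x264"
classify "Some_Linux_ISO"
```

Keeping the matching logic in one function like this makes it easy to try new tracker or quality patterns without touching a live torrent.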
- periodically you can check if your download and seeding are complete based on your seed ratio, and then decide to move the completed files somewhere and remove them from transmission. This script does this for you:
#!/opt/bin/bash
tremote ()
{
transmission-remote -n user:password $*
}
for t in `tremote -l | grep Done | sed -e 's/^ *//' | sed 's/\*//' | cut -s -d " " -f1`
do
if ( tremote -t $t -it | grep -qi "tracker.a" );
then
echo processing $t
tremote -t $t --move "/volume1/files/trackera"
tremote -t $t -r
fi
done
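The id-extraction pipeline can be dry-run against a canned listing (the sample below imitates, but is not verbatim, transmission-remote -l output; a grep '^[0-9]' is added to skip the header line, which also contains the word Done):

```shell
#!/bin/sh
# Exercise the "Done" id extraction on a canned listing.
cat > /tmp/tr-list.sample <<'EOF'
    ID   Done       Have  ETA           Up    Down  Ratio  Status       Name
     1   100%    1.40 GB  Done         0.0     0.0    1.2  Idle         some.file.a
     2*   63%    700 MB   2 hrs        0.0   250.0    0.0  Downloading  some.file.b
     3   100%    2.10 GB  Done         0.0     0.0    2.0  Idle         some.file.c
EOF
ids=$(sed -e 's/^ *//' /tmp/tr-list.sample | grep '^[0-9]' | grep Done \
      | sed 's/\*//' | cut -s -d " " -f1)
echo "$ids"
```

Only ids 1 and 3 should come out: id 2 is still downloading, and the header never reaches cut.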
- enabling the blocklist in transmission is a sensible thing to do; for this add the following options to /var/lib/transmission-daemon/info/settings.json
"blocklist-enabled": true,
"blocklist-url": "http://list.iblocklist.com/?list=bt_level1&fileformat=p2p&archiveformat=gz",
You can periodically update the blocklist via cron job by issuing sudo crontab -e
and adding:
30 19 * * * root /usr/bin/transmission-remote -n user:password --blocklist-update
and restarting cron: service cron restart
- to keep fair use of the ADSL connection, allowing others to have good traffic without too much latency, edit /var/lib/transmission-daemon/info/settings.json:
"download-queue-size": 5,
"peer-limit-per-torrent": 5,
"peer-limit-global": 30,
In order to have good permission rights, in the synology domain add a transmission group with gid 107 in /etc/group, which should match the gid of debian-transmission in the debian domain. Add users that should get access to the download directory to this group in the synology domain. On the debian domain, fix the directory rights to allow writing for debian-transmission:
addgroup debian-transmission mediagroup
chown -R debian-transmission:mediagroup /volume1/download
chmod g+rwX -R /volume1/download
- transmission logs suggest udp and utp buffer optimizations (seen when adding --logfile /volume1/download/transmission/transmission.log to the OPTIONS in /etc/default/transmission-daemon and reading the logs); put the following in /etc/sysctl.conf:
echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf
echo "net.core.wmem_max = 1048576" >> /etc/sysctl.conf
sysctl -p
Automatic rss based file downloading
flexget is a good tool, easy to configure, to automate file downloads based on rss feeds matching names.
It can be installed by:
apt install sqlite
easy_install flexget
easy_install --upgrade transmissionrpc
A typical user configuration can be the following:
cat .flexget/config.yml
templates:
global:
transmission:
host: localhost
port: 9091
username: username
password: password
addpaused: no
email:
active: True
from: flexget@yourdomain.com
to: user@yourdomain.com
smtp_host: smtp.yourdomain.com
smtp_port: 25
smtp_login: false
tv:
quality: webrip|webdl|hdtv <720p
regexp:
reject:
- Blah Confidential
content_filter:
reject: '*.avi'
set:
path: /volume1/download/transmission/serie
skip_files:
- '*.nfo'
- '*.sfv'
- '*[sS]ample*'
- '*.txt'
include_subs: yes
series:
- Serie name one
- Serie name two
tasks:
stream1:
rss: http://streamsiteone.com/feed/
template: tv
priority: 10
stream2:
rss: http://streamsitetwo.com/?cat=9
template: tv
priority: 20
In order to check flexget configuration you can issue (when it fails have a look here):
flexget check
Launch of flexget task can be automated using cron job, by issuing sudo crontab -e
and adding:
0 07 * * * root /usr/local/bin/flexget --cron >/dev/null 2>&1
Relaunch the cron service through: service cron restart
If you want to update flexget in the future just do:
pip install --upgrade flexget
#pip install --force-reinstall --ignore-installed flexget
#pip install --force-reinstall --ignore-installed transmissionrpc
If you experience some issues with seen status or database you can reset it with the following command that should be used only as a last resort:
flexget database reset --sure
Fill mkv stereo-mode field correctly for Archos Video Player to turn your TV into right 3D mode
If you happen to have an Archos TV connect and a 3D capable TV (I strongly recommend 3D passive technology) and want to have your TV switched to the appropriate 3D mode automagically you can fill the right field value of your mkv file using this process:
apt install mkvtoolnix
- find the right video track using:
mkvinfo file.mkv
Let's assume for the example's sake that track 1 contains the video.
- assign the right value for the stereo-mode field of the mkv depending on the type of video:
mkvpropedit --edit track:1 -s stereo-mode=1 file.mkv
StereoMode field values:
0: mono
1: side by side (left eye is first)
2: top-bottom (right eye is first)
3: top-bottom (left eye is first)
4: checkerboard (right is first)
5: checkerboard (left is first)
6: row interleaved (right is first)
7: row interleaved (left is first)
8: column interleaved (right is first)
9: column interleaved (left is first)
10: anaglyph (cyan/red)
11: side by side (right eye is first)
12: anaglyph (green/magenta)
13: both eyes laced in one Block (left eye is first) (field sequential mode)
14: both eyes laced in one Block (right eye is first) (field sequential mode)
Install DNS service: bind
Install bind9 DNS daemon:
apt install bind9
service bind9 restart
Configure your domain entries by creating files, e.g. /etc/bind/db.0.168.192 and /etc/bind/db.yourdomain.com, and reference them inside /etc/bind/named.conf.local
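A minimal forward zone file could look like the sketch below; every name, address and serial in it is a placeholder to adapt:

```
; /etc/bind/db.yourdomain.com -- illustrative values only
$TTL    86400
@       IN      SOA     ns.yourdomain.com. admin.yourdomain.com. (
                        2016010101 ; serial
                        3600       ; refresh
                        600        ; retry
                        604800     ; expire
                        86400 )    ; negative cache TTL
@       IN      NS      ns.yourdomain.com.
ns      IN      A       192.168.0.2
nas     IN      A       192.168.0.2
```

Remember to bump the serial each time you edit the zone, otherwise secondaries and caches will not pick up the change.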
Do not forget to add this new service to the ones to be launched at debian chroot startup by editing /root/chroot-services.pl
and adding in my @service
array: '/etc/init.d/bind9',
Install DHCP daemon
Install and configure dhcp daemon:
apt install isc-dhcp-server
vi /etc/dhcp/dhcpd.conf
service isc-dhcp-server restart
Configure ethernet MAC address and host mapping in compliance with DNS bind configuration files: /etc/dhcp/dhcpd.conf
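A matching dhcpd.conf fragment, pinning a host to the address the DNS zone advertises, could look like this (MAC, names and addresses are placeholders):

```
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.100 192.168.0.200;
  option routers 192.168.0.1;
  option domain-name-servers 192.168.0.2;
}
host nas {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.0.2;
}
```

Keeping the fixed-address outside the dynamic range avoids the daemon handing the same address to another client.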
Do not forget to add this new service to the ones to be launched at debian chroot startup by editing /root/chroot-services.pl
and adding in my @service
array: '/etc/init.d/isc-dhcp-server',
Process to add a new host:
- edit subdomain dns entries
- add the ethernet MAC address in the dhcp table
Tasks to be performed at each synology update
Since all the configuration files are overwritten at each synology update, you need to redo the following tasks in the synology domain (note that starting with DSM v5.1 you also need to modify /etc/synoinfo.conf):
sed -e "s/^#Port 22$/Port 2222/g" -i /etc.defaults/ssh/sshd_config /etc/ssh/sshd_config
sed -e s/ssh_port=\"22\"/ssh_port=\"2222\"/ -e s/sftpPort=\"22\"/sftpPort=\"2222\"/ -e s/rsync_sshd_port=\"22\"/rsync_sshd_port=\"2222\"/ -i /etc/synoinfo.conf
/usr/syno/sbin/synoservicecfg --restart ssh-shell
# DSM5.x
#sed -e "s/^Listen 80$/Listen 8080/g" -i /etc/httpd/conf/httpd.conf-user
#sed -e "s/^Listen 443/Listen 8443/g" -i /etc/httpd/conf/extra/httpd-ssl.conf-user
#/usr/syno/sbin/synoservicecfg --restart httpd-user
# DSM6.x
sed -e "s/listen 80 default_server;$/listen 8080 default_server;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen \[::\]:80 default_server;$/listen [::]:8080 default_server;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen 443 default_server ssl;$/listen 8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
sed -e "s/listen \[::\]:443 default_server ssl;$/listen [::]:8443 default_server ssl;/g" -i /etc/nginx/nginx.conf
nginx -s reload
To be safe, you can even automate this by adding these lines to the synology domain /etc/rc.local file (before the chroot.sh call).
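Since these substitutions must be replayed after every DSM update, it is worth dry-running them on a scratch copy before touching the live nginx.conf; a minimal sketch (the temporary file stands in for /etc/nginx/nginx.conf):

```shell
# dry-run the DSM 6.x port rewrites on a scratch copy of nginx.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
listen 80 default_server;
listen [::]:80 default_server;
listen 443 default_server ssl;
listen [::]:443 default_server ssl;
EOF
sed -e "s/listen 80 default_server;$/listen 8080 default_server;/g" \
    -e "s/listen \[::\]:80 default_server;$/listen [::]:8080 default_server;/g" \
    -e "s/listen 443 default_server ssl;$/listen 8443 default_server ssl;/g" \
    -e "s/listen \[::\]:443 default_server ssl;$/listen [::]:8443 default_server ssl;/g" \
    -i "$conf"
# every listen line should now point at 8080/8443
cat "$conf"
```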
Full manual backup
On synology, synology domain:
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/incexcl-imac marc@imac.courville.org:/ /volumeUSB2/usbshare/backup/imac/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-synodata / /volumeUSB2/usbshare/backup/synodata/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-synosys / /volumeUSB2/usbshare/backup/synosys/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-debiansys / /volumeUSB2/usbshare/backup/debiansys/
rsync -av --progress --delete --numeric-ids --exclude-from=/volume1/incexcl-mediatiny /volume1/video/hd/ /volumeUSB2/usbshare/video/
for i in ds211-6T-ipkg hyperion-linux ggdrive mail
do
rsync -av --progress --delete --numeric-ids --relative /volume1/backup/$i/ /volumeUSB2/usbshare/backup/$i/
done
On imarc:
rsync -av --progress --delete --exclude-from=/Users/marc/dev/backup/incexcl-imarc / /Volumes/Backup/backup/mba/
To sync two disks use rsync with:
-av --progress --delete --numeric-ids
For local fast backups you can use buffer with tar: (cd / && tar cPpSslf -) | buffer -m 64m | (cd /mnt/toto/backup/ && tar xvPpSslf -)
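The tar pipe can be exercised on throwaway directories first; the sketch below drops buffer and the synology-specific tar flags so it runs with any POSIX tar:

```shell
# copy a tree through a tar pipe, as in the buffered backup one-liner
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sub"
echo "hello" > "$src/sub/file.txt"
# archive on one side of the pipe, extract on the other
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)
cat "$dst/sub/file.txt"
```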
Fast non incremental backup: on synology, synology domain, install
ipkg install pv netcat less gcc make zlib rsync tar wget buffer
wget http://zlib.net/pigz/pigz-2.3.3.tar.gz
tar zxvf pigz-2.3.3.tar.gz
cd pigz-2.3.3
sed -e "s/^CC=cc/CC=gcc/g" -i Makefile
make
cp -f pigz unpigz /opt/bin/
# non compressible source
#SOURCE: 192.168.0.2
dir=directory; tar cPpSslf - $dir | pv -s `du -sb $dir | cut -f 1` | nc -l -p 12121
#DESTINATION:
nc 192.168.0.2 12121 | tar xvPpSslf -
# compressible source
#SOURCE: 192.168.0.2
dir=directory; tar cPpSslf - $dir | pv -s `du -sb $dir | cut -f 1` | pigz | nc -q 10 -l -p 12121
#DESTINATION:
nc -w 10 192.168.0.2 12121 | pigz -d | tar xvPpSslf -
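The compression stage of that pipeline can be checked locally without netcat; gzip stands in for pigz here (pigz is command-line compatible with gzip):

```shell
# source side: tar + compress; destination side: decompress + untar
src=$(mktemp -d); dst=$(mktemp -d)
echo "payload" > "$src/data.txt"
(cd "$src" && tar cf - .) | gzip | gzip -d | (cd "$dst" && tar xf -)
cat "$dst/data.txt"
```

On the synology the two middle stages run on different hosts, connected by nc as shown above.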
Backup your emails with offlineimap
Though there is a debian package for offlineimap, you had better install the latest version from git since it enables syncing the gmail labels:
git clone git://github.com/OfflineIMAP/offlineimap.git
cd offlineimap
python ./setup.py install
cd ..
easy_install --upgrade offlineimap
Now you can automate sync for an everyday scheduling:
crontab -e
00 22 * * * /usr/local/bin/offlineimap -o > /dev/null 2>&1
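For completeness, a minimal ~/.offlineimaprc sketch for a gmail account (account name, local folder and credential file are placeholder assumptions; see the offlineimap documentation for the full option set):

```
[general]
accounts = Gmail

[Account Gmail]
localrepository = Local
remoterepository = Remote

[Repository Local]
type = Maildir
localfolders = ~/Mail

[Repository Remote]
type = Gmail
remoteuser = user@gmail.com
remotepassfile = ~/.offlineimap-pass
sslcacertfile = /etc/ssl/certs/ca-certificates.crt
```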
Mutt is the ultimate email client
I have been using mutt since 1995 and still use it (less extensively though). Please refer to the following good links in order to have it running smoothly with gmail: consolify your gmail with mutt and the homely mutt.
Here are the tools that I installed:
aptitude install mutt-patched urlview muttprint muttprint-manual w3m notmuch par
Various tools exist to backup your google drive: I started with grive(2) and now I am using rclone.
grive method
Under linux you need to use grive to be able to sync up your ggdrive folders, which is handy for backup's sake.
Installation is not that trivial since it has a dependency on yajl and there is no debian package (yet). Here is what I had to do:
- first clone, build and install yajl:
git clone <yajl-repository-url>
cd yajl
./configure
cmake .
make
checkinstall
dpkg -i yajl_20140719-1_armhf.deb
Then for grive itself:
- get grive source code and install some pre-requisite packages for compilation:
git clone https://github.com/vitalif/grive2
cd grive2
apt install git cmake libgcrypt11-dev libjson0-dev libcurl4-openssl-dev libexpat1-dev libboost-filesystem-dev libboost-program-options-dev libboost-all-dev build-essential automake autoconf libtool pkg-config libcurl4-openssl-dev intltool libxml2-dev libgtk2.0-dev libnotify-dev libglib2.0-dev libevent-dev checkinstall libqt4-dev
- patch the source code following this diff
- now compile and install:
cmake .
make
cp ./grive/grive /usr/local/bin
- get set up with
cd /volume1/backup/ggdrive && grive -a
- automate the backup (download only to avoid conflicts)
crontab -e
00 22 * * * cd /volume1/backup/ggdrive && /usr/local/bin/grive -f > /dev/null 2>&1
rclone way
I have now switched to rclone since it supports the google docs format (i.e. it converts docs to office formats during sync).
apt install golang
mkdir $HOME/work
export GOPATH=$HOME/work
go get -u -v github.com/ncw/rclone
cp $HOME/work/bin/rclone /usr/local/bin
rclone config
At this point please follow rclone wiki instructions for proper configuration.
Tiny tiny RSS server
In order to have centralized rss feeds one can install a tt-rss server:
apt install mysql-server mysql-client php5 php5-mysql php5-curl
cd /var/www/html
git clone https://tt-rss.org/git/tt-rss.git tt-rss
cd tt-rss
vi config.php
chmod -R 777 cache/images cache/upload/ cache/export/ cache/js/ feed-icons lock
cd ..
chown -R www-data:www-data tt-rss
# secure directories
vi /etc/apache2/sites-available/0443-main
<Directory /var/www/html/tt-rss/cache>
Require all denied
</Directory>
<Directory /var/www/html/tt-rss>
<Files "config.php">
Require all denied
</Files>
</Directory>
# FOR UPDATE git pull origin master
chown -R www-data:www-data tt-rss
diff config.php-dist config.php # manually merge the new configs
# FOR UPDATE ./update.php --update-schema (su www-data -c "/var/www/html/tt-rss/update.php --update-schema")
# FOR TESTING ./update.php --feeds --quiet
- reload apache2:
/etc/init.d/apache2 reload
- open https://www.website.org/tt-rss/install for configuration, then login as admin:password and change the password before defining new users
To automate start of the service at bootup create /etc/init.d/tt-rss and /etc/default/tt-rss scripts following the https://github.com/biapy/howto.biapy.com/tree/master/web-applications/tiny-tiny-rss example:
cat /etc/default/tt-rss
## Defaults for Tiny Tiny RSS update daemon init.d script
# Set DISABLED to 1 to prevent the daemon from starting.
DISABLED=0
# Location of your Tiny Tiny RSS installation.
TTRSS_PATH="/var/www/html/tt-rss"
# Set FORKING to 1 to use the forking daemon (update_daemon2.php) instead of
# the standard one.
# This option is only available for Tiny Tiny RSS 1.2.20 and over.
FORKING=0
#EOF
cat /etc/init.d/tt-rss
#! /bin/bash
### BEGIN INIT INFO
# Provides: ttrss-DOMAIN
# Required-Start: $local_fs $remote_fs networking mysql
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Tiny Tiny RSS update daemon for DOMAIN
# Description: Update the Tiny Tiny RSS subscribed syndication feeds.
### END INIT INFO
# Author: Pierre-Yves Landuré <pierre-yves@landure.org>
# Do NOT "set -e"
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH="/sbin:/usr/sbin:/bin:/usr/bin"
DESC="Tiny Tiny RSS update daemon"
NAME="$(command basename "$(command readlink -f "${0}")")"
DISABLED=0
FORKING=0
# Read configuration variable file if it is present
[ -r "/etc/default/${NAME}" ] && . "/etc/default/${NAME}"
DAEMON_SCRIPT="update.php --daemon"
if [ "${FORKING}" != "0" ]; then
DAEMON_SCRIPT="update_daemon2.php"
fi
DAEMON="/usr/bin/php"
DAEMON_ARGS="${TTRSS_PATH}/${DAEMON_SCRIPT}"
DAEMON_DIR="${TTRSS_PATH}"
PIDFILE="/var/run/${NAME}.pid"
SCRIPTNAME="/etc/init.d/${NAME}"
USER="www-data"
GROUP="www-data"
# Exit if the package is not installed
[ -x "${DAEMON}" ] || exit 0
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions
if [ "${DISABLED}" != "0" -a "${1}" != "stop" ]; then
command log_warning_msg "Not starting ${DESC} - edit /etc/default/tt-rss-DOMAIN and change DISABLED to be 0.";
exit 0;
fi
#
# Function that starts the daemon/service
#
do_start()
{
# Return
# 0 if daemon has been started
# 1 if daemon was already running
# 2 if daemon could not be started
command start-stop-daemon --start --make-pidfile --chuid "${USER}" --group "${GROUP}" --background --quiet --chdir "${DAEMON_DIR}" --pidfile "${PIDFILE}" --exec "${DAEMON}" --test > /dev/null \
|| return 1
command start-stop-daemon --start --make-pidfile --chuid "${USER}" --group "${GROUP}" --background --quiet --chdir "${DAEMON_DIR}" --pidfile "${PIDFILE}" --exec "${DAEMON}" -- \
${DAEMON_ARGS} \
|| return 2
# Add code here, if necessary, that waits for the process to be ready
# to handle requests from services started subsequently which depend
# on this one. As a last resort, sleep for some time.
}
#
# Function that stops the daemon/service
#
do_stop()
{
# Return
# 0 if daemon has been stopped
# 1 if daemon was already stopped
# 2 if daemon could not be stopped
# other if a failure occurred
command start-stop-daemon --stop --user "${USER}" --group "${GROUP}" --quiet --retry=TERM/1/KILL/5 --pidfile "${PIDFILE}"
RETVAL="$?"
[ "${RETVAL}" = 2 ] && return 2
# Wait for children to finish too if this is a daemon that forks
# and if the daemon is only ever run from this initscript.
# If the above conditions are not satisfied then add some other code
# that waits for the process to drop all resources that could be
# needed by services started subsequently. A last resort is to
# sleep for some time.
command start-stop-daemon --stop --quiet --user "${USER}" --group "${GROUP}" --oknodo --retry=0/1/KILL/5 --pidfile "${PIDFILE}"
[ "$?" = 2 ] && return 2
# Many daemons don't delete their pidfiles when they exit.
command rm -f "${PIDFILE}"
return "${RETVAL}"
}
case "${1}" in
start)
[ "${VERBOSE}" != no ] && log_daemon_msg "Starting ${DESC}" "${NAME}"
do_start
case "$?" in
0|1) [ "${VERBOSE}" != no ] && log_end_msg 0 ;;
2) [ "${VERBOSE}" != no ] && log_end_msg 1 ;;
esac
;;
stop)
[ "${VERBOSE}" != no ] && log_daemon_msg "Stopping ${DESC}" "${NAME}"
do_stop
case "$?" in
0|1) [ "${VERBOSE}" != no ] && log_end_msg 0 ;;
2) [ "${VERBOSE}" != no ] && log_end_msg 1 ;;
esac
;;
restart|force-reload)
#
# If the "reload" option is implemented then remove the
# 'force-reload' alias
#
log_daemon_msg "Restarting ${DESC}" "${NAME}"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
echo "Usage: ${SCRIPTNAME} {start|stop|restart|force-reload}" >&2
exit 3
;;
esac
#EOF
Add /etc/init.d/mysql and /etc/init.d/tt-rss to the services started in /root/chroot-services.pl.
- For an update of tt-rss code just do:
cd /var/www/html/tt-rss
sudo git pull origin master
sudo chown -R www-data:www-data .
- In order to add a user go through admin account configuration menu.
Automount usb disk on debian domain
Samba: get rid of mangled names
Samba does not like special characters (>, <, *, :, ", ?, \ and |): file names containing them are perceived as "mangled" by samba clients...
Find the files that will be mangled by evil samba:
find . | grep "[\*\?\:<>\\\"\|]"
Perform the renaming (can be automated every day via cron job):
find . -type d -name \*"[\*\?\:<>\\\"\|]"\* -print0 | xargs -0 rename 's/[\*\?\:<>\\\"\|]/_/g'
find . -type f -name \*"[\*\?\:<>\\\"\|]"\* -print0 | xargs -0 rename 's/[\*\?\:<>\\\"\|]/_/g'
#touch toto\|tutu.mkv tata\<tutu.mkv titi\|tutu.mkv tete\"tutu.mkv
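The renaming can be sanity-checked on a scratch directory first; this sketch uses a plain shell loop with tr instead of the perl rename tool, so it is an illustration of the substitution, not the command used above:

```shell
# replace samba-hostile characters with '_' (sketch; the notes above use
# the perl rename tool with the same character class)
d=$(mktemp -d)
touch "$d/movie|part1.mkv" "$d/what?.mkv"
for f in "$d"/*; do
  base=${f##*/}
  clean=$(printf '%s' "$base" | tr '*?:<>\\"|' '_')
  [ "$base" = "$clean" ] || mv "$f" "$d/$clean"
done
ls "$d"
```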
NOT WORKING for me(tm): some claim that it can be fixed through special parameters in the samba config files, but I never managed to get it to work because of the lack of support in the samba clients that I use on embedded devices. Anyway, the way to go would have been to add the following to the global section of /usr/syno/etc.defaults/smb.conf and /usr/syno/etc/smb.conf to solve the issue:
mangled names = no
mangled map =(: _)
Restart samba
/usr/syno/etc/rc.sysv/S80samba.sh restart
Remove all nfo and posters files
find . \( -name \*.nfo -o -name \*archos.jpg \) -exec rm {} \;
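Note that -o binds looser than the implicit -a, so without parentheses -exec would apply only to the second -name test; a quick check on a scratch directory:

```shell
d=$(mktemp -d)
touch "$d/a.nfo" "$d/b-archos.jpg" "$d/keep.mkv"
# parentheses group the two -name tests so -exec applies to both
find "$d" \( -name '*.nfo' -o -name '*archos.jpg' \) -exec rm {} \;
ls "$d"
```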
Debian upgrade process
- Wheezy to jessie debian upgrade
cp /etc/apt/sources.list{,.wheezy-bak}
sed -i -e 's/ \(stable\|wheezy\)/ jessie/ig' /etc/apt/sources.list
apt update
apt --download-only dist-upgrade
apt dist-upgrade
cd /etc/apache2/sites-available
mv 0080-main 0080-main.conf
mv 0443-main 0443-main.conf
a2ensite 0080-main
a2ensite 0443-main
service apache2 reload
Getting away from a fixed IP survival guide (sigh)
Since my beloved ISP (free) was taking too long to provide me with fiber optic access (yes, xDSL is too slow), I switched back to the evil Orange, which tricked me with an attractive web offer.
Of course I had to give up my fixed IP address in the process, which was a big problem for me.
In order to overcome this problem I had to go through the following steps:
- modify gandi DNS entry to redirect www.courville.org to ghs.googlehosted.com using CNAME
- have google apps to map my sub site to https://sites.google.com/a/courville.org/courville in google apps console -> applications -> sites -> mapping
- create in gandi a DNS entry for my IP address home.courville.org with a small TTL (e.g. 300s i.e. 5 min)
- have this record updated automatically with the dyn-gandi python3 script, using a gandi LiveDNS production API key, run periodically from a crontab job
git clone https://github.com/Danamir/dyn-gandi
cd dyn-gandi
python3 ./setup.py install
vi /etc/dyn-gandi.ini
[api]
url = https://dns.api.gandi.net/api/v5
; Generate your Gandi API key via : https://account.gandi.net/en/users/<user>/security
key = lalala
[dns]
domain = courville.org
; comma-separated records list
records = @,home
ttl = 3600
[ip]
; Choose an IP resolver : either plain text, or web page containing a single IP
resolver_url = http://ipecho.net/plain
; resolver_url = http://ifconfig.me/ip
; resolver_url = http://www.mon-ip.fr
; Optional alternative IP resolver, called on timeout
resolver_url_alt =
crontab -e
*/5 * * * * /usr/local/bin/dyn_gandi --conf=/etc/dyn-gandi.ini >/dev/null 2>&1
- since I am a happy user of synology, use the synology dyndns free service and make courville.synology.me point to my home
- change svn repositories from www to home:
svn info | grep "Repository Root" | cut -d ' ' -f 3 | sed "s/www/home/g" | xargs svn relocate
ftp server
Install proftpd launched from inetd:
apt install proftpd-basic
Android IPv6 nightmare
It seems that android (check with getprop | grep dns) sometimes chooses to use net.dns1 as an IPv6 resolver, at least with the awful Orange Livebox.
Even if you disable IPv6 support on the web interface at 192.168.1.1, it will re-enable it periodically (yay, sigh, sniff): you thus need to disable it again whenever it does not work.
In order to periodically disable IPv6 support you can automate this task via the following script to add in crontab:
#!/bin/sh
curl -o /tmp/livebox_context -X POST -i -H "Content-type: application/json" -c /tmp/livebox_cookies.txt "http://192.168.1.1/authenticate?username=admin&password=YOURPASSWORDHERE"
ID=$(tail -n1 /tmp/livebox_context | sed 's/{"status":0,"data":{"contextID":"//1'| sed 's/"}}//1')
curl -i -b /tmp/livebox_cookies.txt -X POST -H 'Content-Type: application/json' -H 'X-Context: '$ID'' -d '{"parameters":{"Enable":false}}' http://192.168.1.1/sysbus/NMC/IPv6:set
rm /tmp/livebox_context /tmp/livebox_cookies.txt
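The contextID extraction can be sanity-checked offline against a fabricated response of the shape the script expects (the JSON below is a placeholder, not a real Livebox reply):

```shell
# parse the contextID out of a sample authenticate response,
# using the same two sed passes as the crontab script
sample='{"status":0,"data":{"contextID":"abc123"}}'
ID=$(printf '%s\n' "$sample" | sed 's/{"status":0,"data":{"contextID":"//1' | sed 's/"}}//1')
echo "$ID"
```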
On your server also stop the IPv6 isc-dhcp-server instance that is confusing android:
netstat -apn|grep -i dhcp
service isc-dhcp-server stop
vi /etc/default/isc-dhcp-server
OPTIONS="-4"
INTERFACESv4="eth0"
Also you need to disable ipv6 on the synology network interface...
Home automation system: domoticz
Let's install an open source home automation system. I picked domoticz. I bought an RFXCOM RFXtrx433E USB HA controller and a Z-Stick S2 from Aeotec to play with the various sensors/actuators of my home.
apt install build-essential nano cmake git libboost-dev libboost-thread-dev libboost-system-dev libsqlite3-dev curl libcurl4-openssl-dev libssl-dev libusb-dev zlib1g-dev python3-dev openzwave usbutils libopenzwave1.5-dev
mkdir /volume1/src
cd /volume1/src
git clone https://github.com/OpenZWave/open-zwave.git
ln -s open-zwave open-zwave-read-only
cd open-zwave
apt install libudev-dev
make
make install
cd ..
git clone https://github.com/domoticz/domoticz.git domoticz
cd domoticz
cmake CMakeLists.txt
make
make install
# install is in /opt
sudo cp domoticz.sh /etc/init.d/domoticz
sudo chmod +x /etc/init.d/domoticz
sudo update-rc.d domoticz defaults
sudo vi /etc/init.d/domoticz
DAEMON=/opt/domoticz/$NAME
DAEMON_ARGS="$DAEMON_ARGS -www 8084"
DAEMON_ARGS="$DAEMON_ARGS -sslwww 8443"
#EOF
sudo service domoticz start
Update domoticz and open-zwave by:
cd /volume1/src/open-zwave
git pull
make
cd ..
cd domoticz
sudo service domoticz stop
git pull
cmake CMakeLists.txt
make
make install
sudo service domoticz start
Note that the above method does not work since it needs to recompile kernel modules. I thus used the synology domain jadahl package.
I added a couple of shell scripts, launched by a scene switch, to disable heaters and roller shutter timers while I am present or away on vacation; they are launched via script:///usr/local/bin/domoticz-vacances.sh 0
cat /usr/local/bin/domoticz-presence.sh
#!/bin/sh
#
# domoticz-presence
# 1 enable timers 0 disable timers
if [ "$1" = "1" ]
then
ACTION="disabletimer"
LEVEL=25
elif [ "$1" = "0" ]
then
ACTION="enabletimer"
LEVEL=55
else
curl "http://localhost:8084/json.htm?type=command&param=addlogmessage&message=Invalid%20parameter"
echo "Invalid parameter"
exit 1
fi
# update heater timers
for t in 9 13
do
curl "http://localhost:8084/json.htm?type=command&param=$ACTION&idx=$t"
done
# set heater to ECO/COMFORT
curl "http://127.0.0.1:8084/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=$LEVEL"
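The argument-to-action mapping at the top of domoticz-presence.sh can be factored into a small function and exercised in isolation (a sketch, not the script as deployed):

```shell
# map the script argument to "timer action, heater level":
# 1 = present (disable timers, level 25), 0 = away (enable timers, level 55)
presence_mode() {
  case "$1" in
    1) echo "disabletimer 25" ;;
    0) echo "enabletimer 55" ;;
    *) echo "invalid" >&2; return 1 ;;
  esac
}
presence_mode 1
presence_mode 0
```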
cat /usr/local/bin/domoticz-vacances.sh
#!/bin/sh
#
# domoticz-vacances
# 1 enable timers 0 disable timers
if [ "$1" = "1" ]
then
ACTION1="disabletimer"
ACTION2="disablescenetimer"
LEVEL=25
elif [ "$1" = "0" ]
then
ACTION1="enabletimer"
ACTION2="enablescenetimer"
LEVEL=55
else
curl "http://localhost:8084/json.htm?type=command&param=addlogmessage&message=Invalid%20parameter"
echo "Invalid parameter"
exit 1
fi
# update heater timers
for t in 2 3 4 8
do
curl "http://localhost:8084/json.htm?type=command&param=$ACTION1&idx=$t"
done
# set heater to ECO/COMFORT
curl "http://127.0.0.1:8084/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=$LEVEL"
# disable scene roller shutters timers
for t in 2 3 4
do
curl "http://localhost:8084/json.htm?type=command&param=$ACTION2&idx=$t"
done
I use a Qubino Z-Wave DIN Pilot Wire to control my electric heaters through a virtual selector (multi-level) switch with the following settings (label, selector level, action URL):
OFF, 0: http://127.0.0.1:8084/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=0
HG, 10: http://127.0.0.1:8084/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=15
ECO, 20: http://127.0.0.1:8084/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=25
C-2, 30: http://127.0.0.1:8084/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=35
C-1, 40: http://127.0.0.1:8084/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=45
COMFORT, 50: http://127.0.0.1:8084/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=55
Now in order to automate closing the roller shutter at a suitable time i.e. at civil dusk but not before 8pm and before 10pm only if not on vacations (by checking vacation switch), here is the little shell script I use to update the scenetimer everyday that relies on sunwait tool:
#!/bin/sh
#
# domoticz-fermeturevolets
# use sunwait to get the civil dusk time and calculate when to close the roller shutters,
# constrained to the interval [20h, 22h], i.e. min(max(civildusk, 20h), 22h)
tdusk=`/usr/local/bin/sunwait -p 48.866667N 2.333333W | grep Civil | sed "s/^.* ends \([0-9]*\) .*$/\1/g"`
hdusk=`echo $tdusk | cut -c 1-2`
mdusk=`echo $tdusk | cut -c 3-4`
sdusk=`date -d"$hdusk:$mdusk" +%s`
notbefore=`date -d"20:00" +%s`
notafter=`date -d"22:00" +%s`
# max(sdusk,notbefore)
temp=$(($sdusk>$notbefore?$sdusk:$notbefore))
# min(ans,notafter)
result=$(($temp>$notafter?$notafter:$temp))
hdown=`date -d@$result +%H`
mdown=`date -d@$result +%M`
isvacances=`curl -s "http://localhost:8084/json.htm?type=devices&rid=43" | grep Status | sed 's/^.*Status.* : "\([^"]*\)",$/\1/g'`
if [ "$isvacances" = "Off" ]
then
echo we are not on vacation, fine: setting shutter closing time to $hdown:$mdown
# close roller shutters
ACTION=updatescenetimer
t=4
curl "http://localhost:8084/json.htm?type=command&param=$ACTION&idx=$t&active=true&timertype=2&hour=$hdown&min=$mdown&randomness=false&command=0&days=1234567"
else
echo lucky us: we are on vacation, no change
fi
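The min/max arithmetic above reduces to min(max(dusk, 20h), 22h) on epoch seconds and can be checked standalone:

```shell
# clamp dusk ($1) into the window [$2, $3], all values in epoch seconds
clamp() {
  t=$(( $1 > $2 ? $1 : $2 ))   # max(dusk, notbefore)
  echo $(( t > $3 ? $3 : t ))  # min(..., notafter)
}
clamp 100 200 300   # dusk before 20:00 -> window start
clamp 250 200 300   # dusk inside window -> dusk itself
clamp 400 200 300   # dusk after 22:00 -> window end
```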
Solve usb serial devices enumeration issue at boot
The problem is that I have zwave, rfxcom, tic and smartreader usb serial dongles and I want to be sure that I can distinguish each of them independently of the enumeration process.
In order to overcome this, create in the synology domain: /lib/udev/rules.d/50-usb-marc.rules
containing the following:
# udevadm info -a -n /dev/input/js0
# on synology lsusb gives idVendor:idProduct:bcdDevice
# have a look at e.g. /sys/bus/usb/devices/3-2.1
# 0658:0200:0302
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="ttyACM-zwave", GROUP="root", MODE="0666"
# 0403:6015:1000
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6015", SYMLINK+="ttyUSB-tic", GROUP="root", MODE="0666"
# 0403:6001:0500
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{product}=="Smartreader2 plus", SYMLINK+="ttyUSB-smartreader", GROUP="root", MODE="0666"
# 0403:6001:0600
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{product}=="RFXtrx433", SYMLINK+="ttyUSB-rfxcom", GROUP="root", MODE="0666"
In your case adapt the idVendor and idProduct values depending on the output of lsusb.
Note that SYMLINK needs to be prefixed by ttyUSB in order to be recognized by domoticz (no exotic names) as specified here.
OSCAM server
In order to decode Orange TV streams with a proper card, use OSCAM.
Recompile oscam:
svn checkout http://www.streamboard.tv/svn/oscam/trunk oscam-svn
cd oscam-svn
make
make install
Configure OSCAM in /usr/local/etc/oscam.{conf,server,services,srvid2}
with port 8888
Write a dedicated init script in /etc/init.d/oscam
#! /bin/sh
### BEGIN INIT INFO
# Provides: Oscam
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Oscam init script
# Description: Launch oscam at startup
### END INIT INFO
DAEMON=/usr/local/bin/oscam
DAEMON_OPTS="-b -r 2"
PIDFILE=/var/run/oscam.pid
test -x ${DAEMON} || exit 0
. /lib/lsb/init-functions
case "$1" in
start)
log_daemon_msg "Starting OScam"
start-stop-daemon --start --quiet --background --pidfile ${PIDFILE} --make-pidfile --exec ${DAEMON} -- ${DAEMON_OPTS}
log_end_msg $?
;;
stop)
log_daemon_msg "Stopping OScam"
start-stop-daemon --stop --exec ${DAEMON}
log_end_msg $?
;;
force-reload|restart)
$0 stop
$0 start
;;
*)
echo "Usage: /etc/init.d/oscam {start|stop|restart|force-reload}"
exit 1
;;
esac
exit 0
HA-bridge server
ha-bridge is nice for exposing domoticz devices to alexa as a hue bridge for voice operation (google home does not work with it).
Download the latest release of ha-bridge here.
wget https://github.com/bwssytems/ha-bridge/releases/download/v5.2.1/ha-bridge-5.2.1.jar
sudo cp ha-bridge-5.2.1.jar /usr/local/bin
Disable http on apache (essential to enable alexa discovery as hue bridge) by commenting out Listen 80 in /etc/apache2/ports.conf.
Enable an /etc/init.d/habridge init script launching: /opt/jdk/bin/java -jar -Dserver.port=80 -Dconfig.file=/usr/local/etc/habridge/data/habridge.config /usr/local/bin/ha-bridge-5.2.1.jar
and putting all configurations into /usr/local/etc/habridge/data.
Acrarium
Acrarium is a debug backend for Android applications using acra.
mysql -u root -p
create database acra;
CREATE USER acrarium IDENTIFIED BY 'PASSWORD';
GRANT ALL PRIVILEGES ON acra.* TO 'acrarium'@'localhost' identified by 'PASSWORD';
cat $HOME/.config/acrarium/application.properties
spring.datasource.url=jdbc:mysql://localhost:3306/acra?useSSL=false
spring.datasource.username=acrarium
spring.datasource.password=PASSWORD
spring.jpa.database-platform=org.hibernate.dialect.MySQL57Dialect
server.port=9090
cat /etc/apache2/sites-available/0443-main.conf
RewriteRule ^/acrarium$ acrarium/ [L,R=301]
ProxyPass /acrarium http://localhost:9090
ProxyPassReverse /acrarium http://localhost:9090
wget https://github.com/F43nd1r/Acrarium/releases/download/v0.9.19/acrarium-0.9.19.war
java -Xmx256m -Xms256m -jar acrarium-0.9.19.war