This page describes the steps I followed to install my rpi4 Debian server.
Hardware configuration and initial setup
The starting point is to install the image on both the muSD and the SSD.
Note that both disks have the same partition UUIDs at this point.
Do not forget to create an empty ssh file (via touch) on the boot partition so that the rpi4 boots headless with its ssh server running.
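The headless-enable step can be scripted; here is a minimal sketch where BOOT_MNT is a placeholder defaulting to a throwaway directory (point it at the real mount point of the FAT boot partition instead, e.g. /media/$USER/boot):

```shell
# Enable headless ssh: the boot partition just needs an empty file named "ssh".
# BOOT_MNT is an assumption; substitute the actual boot partition mount point.
BOOT_MNT="${BOOT_MNT:-/tmp/boot-demo}"
mkdir -p "$BOOT_MNT"
touch "$BOOT_MNT/ssh"
ls "$BOOT_MNT/ssh"
```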
Here is the partitioning that I applied to the disk and how I changed the disk identifier:
fdisk /dev/sda
p (print the partition table)
x (enter expert mode)
i (change the disk identifier)
0xd34db33f (the new identifier)
r (return to the main menu)
w (write changes and exit)
Check change via:
fdisk -l
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk model: JMS583
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xd34db33f
Device Boot Start End Sectors Size Id Type
/dev/sda1 8192 532479 524288 256M c W95 FAT32 (LBA)
/dev/sda2 532480 4390911 3858432 1.9G 83 Linux
Check the ids via blkid, then on the muSD point the kernel at the SSD root partition by editing /boot/cmdline.txt and changing root=PARTUUID=6c586e13-02 to root=PARTUUID=d34db33f-02.
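The cmdline.txt edit can also be done non-interactively with sed; here is a sketch demonstrated on a throwaway copy (on the real system, run the sed as root against /boot/cmdline.txt):

```shell
# Swap the root PARTUUID non-interactively, demonstrated on a copy of cmdline.txt.
demo=/tmp/cmdline.txt
echo 'console=serial0,115200 console=tty1 root=PARTUUID=6c586e13-02 rootfstype=ext4 fsck.repair=yes rootwait' > "$demo"
sed -i 's/root=PARTUUID=6c586e13-02/root=PARTUUID=d34db33f-02/' "$demo"
grep -o 'root=PARTUUID=[^ ]*' "$demo"   # prints root=PARTUUID=d34db33f-02
```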
At this point update the mount points on the SSD to match the new PARTUUID:
mkdir /mnt/rootfs
mount /dev/sda2 /mnt/rootfs
vi /mnt/rootfs/etc/fstab
Change only the rootfs line:
PARTUUID=6c586e13-02 / ext4 defaults,noatime 0 1
becomes:
PARTUUID=d34db33f-02 / ext4 defaults,noatime 0 1
Check that it boots, then resize the partitions via fdisk /dev/sda to align with the following result:
fdisk -l
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: D3B3EDDF-DC19-473E-A358-D1BDA74FE6D1
Device Start End Sectors Size Type
/dev/sda1 8192 532479 524288 256M Microsoft basic data
/dev/sda2 532480 480583167 480050688 228.9G Linux filesystem
/dev/sda3 480583168 488397134 7813967 3.7G Linux swap
Set labels:
fatlabel /dev/sda1 boot
e2label /dev/sda2 rootfs
Then upgrade boot eeprom:
apt install rpi-eeprom
vi /etc/default/rpi-eeprom-update
FIRMWARE_RELEASE_STATUS="beta"
Now we are ready to clone the muSD to the SSD. Thus, from the muSD:
apt update
apt upgrade
apt install git
etc.
Edit the fstab on the muSD for /boot:
proc /proc proc defaults 0 0
PARTUUID=6c586e13-01 /boot vfat defaults 0 2
PARTUUID=d34db33f-02 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
Modify /boot/cmdline.txt on the muSD:
console=serial0,115200 console=tty1 root=PARTUUID=d34db33f-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait
And you should be good to go.
rpi-update
cp /lib/firmware/raspberrypi/bootloader/beta/pieeprom-2020-05-15.bin pieeprom.bin
vcgencmd bootloader_config
rpi-eeprom-config pieeprom.bin > bootconf.txt
vi bootconf.txt
BOOT_ORDER=0xf14
#EOF
rpi-eeprom-config --out pieeprom-new.bin --config bootconf.txt pieeprom.bin
rpi-eeprom-update -d -f ./pieeprom-new.bin
mount /dev/sda1 /mnt/boot
cp -r /boot/* /mnt/boot/
vi /etc/fstab
The raspbian distrib itself is updated via apt update and apt upgrade; rpi-update above updates the firmware and kernel.
To get latest 64bit kernel add in /boot/config.txt
kernel=kernel8.img
arm_64bit=1
All boot config options are detailed here: https://www.raspberrypi.org/documentation/configuration/config-txt/boot.md
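Once rebooted on the 64-bit kernel, a quick sanity check:

```shell
# Verify the running kernel after enabling arm_64bit=1 and rebooting.
uname -m          # expect aarch64 on a 64-bit kernel (armv7l on the 32-bit one)
getconf LONG_BIT  # expect 64
```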
Debian distribution setup
Set hostname, locale and timezone:
MYHOSTNAME=hyperion
localectl set-locale LANG=fr_FR.UTF-8
timedatectl set-timezone Europe/Paris
hostnamectl set-hostname $MYHOSTNAME
hostname $MYHOSTNAME
vi /etc/hostname
hyperion
EOF
vi /etc/hosts
127.0.0.1 localhost hyperion
Configure the right locales:
apt install locales
locale-gen en_US en_US.UTF-8 en_US.ISO-8859-1 en_US.ISO-8859-15 fr_FR fr_FR.UTF-8 fr_FR.ISO-8859-15 fr_FR.ISO-8859-1
dpkg-reconfigure locales
Generating locales (this might take a while)...
en_US.ISO-8859-1... done
en_US.ISO-8859-15... done
en_US.UTF-8... done
fr_FR.UTF-8... done
fr_FR.ISO-8859-15@euro... done
Reconfigure your time zone to get proper time:
dpkg-reconfigure tzdata
Basic installation of the usual packages:
apt install mosh mutt irssi vim subversion-tools rsync less tig git openssh-server build-essential python python-setuptools python-pip convmv sqlite urlview w3m par metastore curl most ispell ifrench mtp-tools apg dnsutils retext obconf apt-file octave maxima locate tmux nmap catdoc wv elinks links lynx pmount mpc ncmpc epstool gnupg pinentry-tty pinentry-curses silversearcher-ag pwgen
Install DNS service: bind
Install bind9 DNS daemon:
apt install bind9
service bind9 restart
Configure your domain entries by creating files, e.g. /etc/bind/db.0.168.192 and /etc/bind/db.yourdomain.com, and reference them inside /etc/bind/named.conf.local.
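As a sketch, such files can look like the following; all names, addresses and the serial below are placeholders, not my actual configuration:

```
; /etc/bind/db.yourdomain.com -- placeholder values
$TTL 86400
@   IN  SOA ns1.yourdomain.com. admin.yourdomain.com. (
        2020051501 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; negative cache TTL
    IN  NS   ns1.yourdomain.com.
ns1 IN  A    192.168.0.3
nas IN  A    192.168.0.2

// /etc/bind/named.conf.local
zone "yourdomain.com" {
    type master;
    file "/etc/bind/db.yourdomain.com";
};
zone "0.168.192.in-addr.arpa" {
    type master;
    file "/etc/bind/db.0.168.192";
};
```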
Install DHCP daemon
Install and configure dhcp daemon:
apt install isc-dhcp-server
vi /etc/dhcp/dhcpd.conf
service isc-dhcp-server restart
Configure the ethernet MAC address to host mapping in /etc/dhcp/dhcpd.conf, in compliance with the DNS bind configuration files.
Process to add a new host:
- edit the subdomain dns entries
- add the ethernet MAC address in the dhcp table
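A minimal dhcpd.conf sketch with placeholder values, showing one fixed-address host entry keyed on its MAC, consistent with the bind example above:

```
# /etc/dhcp/dhcpd.conf -- placeholder values
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.100 192.168.0.199;
  option routers 192.168.0.254;
  option domain-name-servers 192.168.0.3;
}
# one fixed-address entry per known host, keyed on its ethernet MAC
host nas {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.0.2;
}
```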
Access the NAS seamlessly
The solution is to use autofs between the rpi4 and the synology NAS:
apt install autofs
cat /etc/auto.master
/media/nfs /etc/auto.nfs --ghost
#EOF
cat /etc/auto.nfs
ds414 192.168.0.2:/volume1
#EOF
Map synology domain users to debian ones:
adduser --system --no-create-home --uid 1026 media
addgroup --gid 65536 mediagroup
On the synology as root in the synology domain (not chroot) do:
vi /etc/exports
/volume1 192.168.0.3(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys,anonuid=65534,anongid=65534)
#EOF
exportfs -a
Install web server
apt install nginx apache2-utils
Configuration files are located in /etc/nginx/sites-available/default.
Create an htpasswd entry (use -c only the first time, to create the file):
htpasswd -c /etc/nginx/htpasswd username
Install proper signed certificates with letsencrypt
In order to use the letsencrypt free ssl certificate service, I went through the following manual steps to avoid any errors:
apt install python-certbot-nginx certbot
certbot --nginx
To prefill the certificate request fields, set the _default entries in /etc/ssl/openssl.cnf:
cd /etc/ssl
grep _default openssl.cnf
countryName_default = FR
stateOrProvinceName_default = IDF
localityName_default = Paris
0.organizationName_default = Courville.org
organizationalUnitName_default = Software
commonName_default = courville.org
emailAddress_default = software@courville.org
Install news feed
In order to have centralized rss feeds one can install the tt-rss (Tiny Tiny RSS) server.
Mariadb is now the default db server under debian (bye bye mysql).
apt install mariadb-server
mysql_secure_installation
apt install nginx php7.3-fpm php7.3-gd php7.3-mysql php7.3-curl php7.3-imap php7.3-mbstring php7.3-xml
apt install php-intl
vi /etc/php/7.3/fpm/php.ini
Then add in /etc/nginx/sites-available/default:
location ~ \.php$ {
root /var/www/html;
try_files $uri =404;
fastcgi_index index.php;
fastcgi_pass unix:/run/php/php7.3-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
Do not forget to add too:
index index.html index.htm index.nginx-debian.html index.php;
Start php:
service php7.3-fpm restart
service nginx restart
Create database
mysql -p -u root
mysql> CREATE USER 'ttrss'@'localhost' IDENTIFIED BY 'somepassword';
mysql> CREATE DATABASE ttrss;
mysql> GRANT ALL PRIVILEGES ON ttrss.* TO 'ttrss'@'localhost' IDENTIFIED BY 'somepassword';
Get tt-rss:
cd /var/www/html
git clone https://tt-rss.org/git/tt-rss.git tt-rss
chown -R www-data:www-data tt-rss
cd tt-rss
Enable service with systemd:
vi /etc/systemd/system/ttrss-update.service
[Unit]
Description=Tiny Tiny RSS update daemon
After=network.target mysql.service
[Service]
User=www-data
ExecStart=/var/www/html/tt-rss/update_daemon2.php
[Install]
WantedBy=multi-user.target
EOF
To activate it at startup:
systemctl --system daemon-reload
systemctl enable ttrss-update.service
Activate it for testing:
systemctl start ttrss-update.service
systemctl status ttrss-update.service
For an update of tt-rss code just do:
cd /var/www/html/tt-rss
sudo git pull origin master
sudo chown -R www-data:www-data .
In order to add a user go through admin account configuration menu.
If an update of the db is required:
su www-data -c "/var/www/html/tt-rss/update.php --update-schema"
Test if all is fine:
./update.php --feeds --quiet
Domoticz
sudo adduser domoticz
curl -sSL install.domoticz.com | sudo bash
Ports 8080 and 8443 have been chosen.
Start the service automatically:
vi /etc/systemd/system/domoticz.service
[Unit]
Description=domoticz_service
[Service]
User=domoticz
Group=domoticz
ExecStart=/home/domoticz/domoticz/domoticz -www 8080 -sslwww 8443
WorkingDirectory=/home/domoticz
Restart=on-failure
RestartSec=1m
[Install]
WantedBy=multi-user.target
#EOF
systemctl daemon-reload
systemctl enable domoticz.service
systemctl start domoticz.service
Get a list of all the switches:
curl "http://192.168.0.3:8080/json.htm?type=command&param=getlightswitches" | grep 'Name\|idx' | paste -d " " - -
I added a couple of shell scripts, launched by a scene switch, to disable the heater and roller shutter timers while I am present or away on vacation; they are launched via script:///usr/local/bin/domoticz-vacances.sh 0
cat /usr/local/bin/domoticz-presence.sh
#!/bin/sh
#
# domoticz-presence
# 1 disables the timers, 0 enables them
if [ "$1" = "1" ]
then
ACTION="disabletimer"
LEVEL=25
elif [ "$1" = "0" ]
then
ACTION="enabletimer"
LEVEL=55
else
curl "http://localhost:8080/json.htm?type=command&param=addlogmessage&message=Invalid%20parameter"
echo "Invalid parameter"
exit 1
fi
# update heater timers
for t in 9 13
do
curl "http://localhost:8080/json.htm?type=command&param=$ACTION&idx=$t"
done
# set heater to ECO/COMFORT
curl "http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=$LEVEL"
cat /usr/local/bin/domoticz-vacances.sh
#!/bin/sh
#
# domoticz-vacances
# 1 disables the timers, 0 enables them
if [ "$1" = "1" ]
then
ACTION1="disabletimer"
ACTION2="disablescenetimer"
LEVEL=25
elif [ "$1" = "0" ]
then
ACTION1="enabletimer"
ACTION2="enablescenetimer"
LEVEL=55
else
curl "http://localhost:8080/json.htm?type=command&param=addlogmessage&message=Invalid%20parameter"
echo "Invalid parameter"
exit 1
fi
# update heater timers
for t in 2 3 4 8
do
curl "http://localhost:8080/json.htm?type=command&param=$ACTION1&idx=$t"
done
# set heater to ECO/COMFORT
curl "http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=$LEVEL"
# disable scene roller shutters timers
for t in 2 3 4
do
curl "http://localhost:8080/json.htm?type=command&param=$ACTION2&idx=$t"
done
I use a Qubino Z-Wave DIN Pilot Wire to control my electric heaters through a virtual selector (multi-level) switch with the following settings (selector level, then action URL):
OFF (0): http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=0
HG (10): http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=15
ECO (20): http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=25
C-2 (30): http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=35
C-1 (40): http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=45
COMFORT (50): http://127.0.0.1:8080/json.htm?type=command&param=switchlight&idx=146&switchcmd=Set%20Level&level=55
Now, in order to automate closing the roller shutters at a suitable time, i.e. at civil dusk but never before 8pm and never after 10pm, and only when not on vacation (checked via the vacation switch), here is the little shell script I use to update the scene timer every day; it relies on the sunwait tool:
#!/bin/sh
#
# domoticz-fermeturevolets
# use sunwait to get the civil dusk time and compute when to close the roller shutters, clamped to the interval [20h, 22h]: min(max(civildusk, 20h), 22h)
tdusk=`/usr/local/bin/sunwait -p 48.866667N 2.333333W | grep Civil | sed "s/^.* ends \([0-9]*\) .*$/\1/g"`
hdusk=`echo $tdusk | cut -c 1-2`
mdusk=`echo $tdusk | cut -c 3-4`
sdusk=`date -d"$hdusk:$mdusk" +%s`
notbefore=`date -d"20:00" +%s`
notafter=`date -d"22:00" +%s`
# max(sdusk,notbefore)
temp=$(($sdusk>$notbefore?$sdusk:$notbefore))
# min(ans,notafter)
result=$(($temp>$notafter?$notafter:$temp))
hdown=`date -d@$result +%H`
mdown=`date -d@$result +%M`
isvacances=`curl -s "http://localhost:8080/json.htm?type=devices&rid=43" | grep Status | sed 's/^.*Status.* : "\([^"]*\)",$/\1/g'`
if [ "$isvacances" = "Off" ]
then
echo we are not on vacation, fine: setting shutter closing time to $hdown:$mdown
# close roller shutters
ACTION=updatescenetimer
t=4
curl "http://localhost:8080/json.htm?type=command&param=$ACTION&idx=$t&active=true&timertype=2&hour=$hdown&min=$mdown&randomness=false&command=0&days=1234567"
else
echo lucky us: we are on vacation, no change
fi
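The clamp logic of the script can be checked in isolation with illustrative values; here dusk falls at 19:30, before the 20:00 floor, so it gets clamped up (the seconds-since-midnight values are made up for the demo):

```shell
# illustrative seconds-since-midnight values: dusk 19:30, window 20:00-22:00
sdusk=70200
notbefore=72000
notafter=79200
temp=$(( sdusk > notbefore ? sdusk : notbefore ))   # max(sdusk, notbefore)
result=$(( temp > notafter ? notafter : temp ))     # min(temp, notafter)
echo "$result"   # 72000, i.e. clamped up to 20:00
```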
Get the rpi temperature by creating a virtual dummy sensor in the hardware tab and identifying its index, then run this script from a crontab every 5 minutes (*/5 * * * *):
#!/bin/bash
cpuTemp0=$(cat /sys/class/thermal/thermal_zone0/temp)
cpuTemp1=$(($cpuTemp0/1000))
cpuTemp2=$(($cpuTemp0/100))
cpuTempM=$(($cpuTemp2 % 10))
echo CPU temp"="$cpuTemp1"."$cpuTempM"'C"
curl -S "http://192.168.0.3:8080/json.htm?type=command&param=udevice&idx=75&nvalue=0&svalue=$cpuTemp1.$cpuTempM"
rfplayer plugin installation
The basic rfplayer support is very limited; in order to get better support of this RF dongle, install the following plugin: https://github.com/sasu-drooz/Domoticz-Rfplayer
su - domoticz
mkdir -p domoticz/plugins/rfplayer
curl -L https://raw.githubusercontent.com/sasu-drooz/Domoticz-Rfplayer/master/plugin.py > domoticz/plugins/rfplayer/plugin.py
chmod 755 domoticz/plugins/rfplayer/plugin.py
sudo systemctl restart domoticz.service
Now, under the domoticz hardware setup, install the RFPlayer plugin (to be distinguished from the legacy ZiBlue RFPlayer USB) and make sure you tick the "Enable learning mode" option.
Link your sensors/actuators with Google Home via google actions
Simply run the installer:
su - domoticz
bash <(curl -s https://raw.githubusercontent.com/DewGew/dzga-installer/master/install.sh)
sudo systemctl enable dzga
sudo systemctl start dzga
sudo systemctl stop dzga
Check if service is running:
sudo systemctl status dzga
To update run installer again:
bash <(curl -s https://raw.githubusercontent.com/DewGew/dzga-installer/master/install.sh)
Make domoticz visible from outside:
vi /etc/nginx/sites-available/default
location /domoticz/ {
rewrite ^/domoticz/(.*)$ /$1 break;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_pass http://localhost:8080/; # local domoticz port
proxy_read_timeout 90;
}
To configure the google actions, follow ONLY the instructions available at http://192.168.0.3:3030/ on the 'Setup Actions on Google Console Instructions' tab and triple check every step. Additionally, use a capital 'C' in ClientID and ClientSecret.
In order to resync the devices you can call https://home.courville.org/assistant/sync
Solve the usb serial device enumeration issue at boot
The problem is that I have zwave, rfxcom RFXtrx433E USB, tic, Conbee II zigbee and smartreader usb serial dongles, and I want to be sure that I can distinguish each of them independently of the enumeration order.
In order to overcome this, create proper udev rules by editing /etc/udev/rules.d/50-usb-marc.rules containing the following:
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="ttyACM-zwave", GROUP="root", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6015", SYMLINK+="ttyUSB-tic", GROUP="root", MODE="0666", RUN+="/bin/stty -F /dev/%k 1200 sane evenp parenb cs7 -crtscts"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{product}=="Smartreader2 plus", SYMLINK+="ttyUSB-smartreader", GROUP="root", MODE="0666"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{product}=="RFXtrx433", SYMLINK+="ttyUSB-rfxcom", GROUP="root", MODE="0666"
SUBSYSTEM=="tty", ATTRS{idVendor}=="1cf1", ATTRS{idProduct}=="0030", ATTRS{product}=="ConBee II", SYMLINK+="ttyACM-zigbee", GROUP="root", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{product}=="RFPLAYER", SYMLINK+="ttyACM-rfplayer", GROUP="root", MODE="0666"
In your case, adapt the idVendor and idProduct values depending on the result of lsusb.
Note that the SYMLINK needs to be prefixed by ttyUSB in order to be recognized by domoticz (no exotic names) as specified here.
Note that you need to avoid interference with ModemManager by adding ENV{ID_MM_DEVICE_IGNORE}="1" to all ttyACM devices.
Reload the rules with udevadm control --reload-rules.
Update: ModemManager seems to ignore ID_MM_DEVICE_IGNORE thus I eradicated ModemManager:
systemctl list-dependencies multi-user.target | grep Modem
systemctl disable ModemManager.service
systemctl stop ModemManager.service
Update your UZB Z-Wave stick:
wget https://service.z-wave.me/expertui/uzb/bootloader_UZB_from_05_05_to_7278_2MB.bin
wget https://service.z-wave.me/expertui/uzb/firmware_UZB_from_05_05_to_05_06.bin
./ZMESerialUpdater serialapi_uzbupdate -d /dev/ttyACM1 -b bootloader_UZB_from_05_05_to_7278_2MB.bin
#FAILS ./ZMESerialUpdater serialapi_uzbupdate -f firmware_UZB_from_05_05_to_05_06.bin -d /dev/ttyACM1
Zigbee gateway on the raspberry and domoticz integration
On a headless configuration:
wget -O - http://phoscon.de/apt/deconz.pub.key | sudo apt-key add -
sh -c "echo 'deb http://phoscon.de/apt/deconz $(lsb_release -cs) main' > /etc/apt/sources.list.d/deconz.list"
apt update
apt install deconz
systemctl disable deconz-gui
systemctl stop deconz-gui
vi /etc/default/deconz
DECONZ_OPTS="--http-port=8090 --ws-port=50733 --upnp=0 --auto-connect=1 --dbg-error=1 --dev=/dev/ttyACM-zigbee"
#EOF
vi /etc/systemd/system/multi-user.target.wants/deconz.service
[Unit]
Description=deCONZ: ZigBee gateway -- REST API
Wants=deconz-init.service deconz-update.service
StartLimitIntervalSec=0
Wants=nginx.service
After=nginx.service
[Service]
User=1002
EnvironmentFile=/etc/default/deconz
ExecStart=/usr/bin/deCONZ -platform minimal ${DECONZ_OPTS}
Restart=on-failure
RestartSec=30
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_KILL CAP_SYS_BOOT CAP_SYS_TIME
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable deconz
systemctl start deconz
systemctl status deconz
curl -H 'Content-Type: application/json' -X PUT -d '{"websocketnotifyall": true}' http://localhost:8090/api/APIKEY/config
In order to create an API key to remotely access the gateway:
python3 API_KEY.py 127.0.0.1:8090 create
export APIKEY=KEY
If you are lost, you can retrieve the gateway configuration from the LAN by querying https://dresden-light.appspot.com/discover. The gateway configuration can be viewed via:
curl -X GET http://localhost:8090/api/$APIKEY/config | jq '.' | less
Manual testing:
deCONZ -platform minimal --http-port=8090 --ws-port=50733 --upnp=0 --auto-connect=1 --dbg-error=1 --dev=/dev/ttyACM-zigbee
su - domoticz
cd domoticz/plugins
git clone https://github.com/Smanar/Domoticz-deCONZ
chmod +x Domoticz-deCONZ/plugin.py
In order to debug deconz, the daemon can be launched manually to observe the logs: deCONZ -platform minimal --dbg-info=1 --dbg-error=1 --dbg-aps=1 --dbg-zdp=1
Getting away from a fixed IP survival guide (sigh)
Since my beloved ISP (free) was taking too long to provide me with fiber optic access (yes, xDSL is too slow), I switched back to the evil Orange, which tricked me with an attractive web offer.
Of course I had to give up my fixed IP address in this process which was a big problem for me.
In order to overcome this problem I had to go through the following steps:
- modify gandi DNS entry to redirect www.courville.org to ghs.googlehosted.com using CNAME
- have google apps map my sub site to https://sites.google.com/a/courville.org/courville in the google apps console -> applications -> sites -> mapping
- create in gandi a DNS entry for my IP address home.courville.org with a small TTL (e.g. 300s, i.e. 5 min)
- have this record updated automatically by the dyn-gandi python3 script through a crontab job, using a gandi livedns production API key
git clone https://github.com/Danamir/dyn-gandi
cd dyn-gandi
python3 ./setup.py install
python3 ./setup.py develop
vi /etc/dyn-gandi.ini
[api]
url = https://dns.api.gandi.net/api/v5
; Generate your Gandi API key via : https://account.gandi.net/en/users/<user>/security
key = lalala
[dns]
domain = courville.org
; comma-separated records list
records = @,home
ttl = 3600
[ip]
; Choose an IP resolver : either plain text, or web page containing a single IP
resolver_url = http://ipecho.net/plain
; resolver_url = http://ifconfig.me/ip
; resolver_url = http://www.mon-ip.fr
; Optional alternative IP resolver, called on timeout
resolver_url_alt =
crontab -e
*/5 * * * * /usr/local/bin/dyn_gandi --conf=/etc/dyn-gandi.ini >/dev/null 2>&1
- since I am a happy user of synology, use the synology dyndns free service and make courville.synology.me point to my home
Install git server: gitolite
adduser gitolite
su gitolite
ssh-keygen
git clone git://github.com/sitaramc/gitolite
mkdir -p $HOME/bin
gitolite/install -to $HOME/bin
As user
scp yourkey.pub gitolite@localhost:yourname.pub
As gitolite
bin/gitolite setup -pk yourname.pub
As user, clone the gitolite admin repo:
git clone gitolite@localhost:gitolite-admin
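Repositories and users are then managed by editing conf/gitolite.conf inside the cloned gitolite-admin repository and pushing; a sketch with placeholder repo and key names:

```
# conf/gitolite.conf in the gitolite-admin clone -- names are placeholders
repo gitolite-admin
    RW+     =   yourname

repo testing
    RW+     =   @all

repo myproject
    RW+     =   yourname
    R       =   otheruser
```

Commit and push the change; gitolite creates and applies the repositories server-side on push.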
Subtitles downloader
subliminal is a python tool that downloads subtitles for video files from various sources.
cd /agraver/git
apt install python python-setuptools python-pip
git clone https://github.com/Diaoul/subliminal
cd subliminal
python setup.py install
Subtitle downloading can be automated: english subs are downloaded first if they are the only ones available, then replaced by french ones once those show up, using a little script like this one:
#!/bin/bash
TV_DIR=$1
BACKTIME=$2
[ -z "$TV_DIR" ] && TV_DIR=/media/nfs/ds414/video/serie
[ -z "$BACKTIME" ] && BACKTIME=14
GETSUB=/opt/bin/subliminal
cd $TV_DIR
# try to get the subtitle only for 14 days to avoid exploding list of files to process
for file in $(find . -mtime -${BACKTIME} -type f \( -iname \*.avi -o -iname \*.mkv -o -iname \*.mp4 \) )
do
subtitle="${file%.*}.srt"
if [ ! -f "$subtitle" ]
then
echo processing $file
# download without .fr.srt or .en.srt to be sure it is from addic7ed in french first in -s single mode
$GETSUB -s -l fr -p addic7ed --addic7ed-username username --addic7ed-password password -q "$file" && rm "$file".en.srt
# if it failed with addict7ed get from any other source but with .fr.srt and .en.srt
[ ! -f "$subtitle" ] && $GETSUB -l en --addic7ed-username username --addic7ed-password password -q "$file"
fi
done
If you want to update subliminal in the future just do:
cd /agraver/git/subliminal
git pull
python setup.py install
TV series file renamer
tvnamer is a very powerful python program for renaming tv series episodes.
cd /agraver/git
git clone https://github.com/dbr/tvnamer
cd tvnamer
python setup.py install
An example of a tvnamer user configuration capturing the most useful features of the tool can be found below:
cat .tvnamer.json
{
"language": "en",
"search_all_languages": false,
"always_rename": false,
"batch": true,
"episode_separator": "-",
"episode_single": "%02d",
"filename_with_date_and_episode": "%(seriesname)s-e%(episode)s-%(episodename)s%(ext)s",
"filename_with_date_without_episode": "%(seriesname)s-e%(episode)s%(ext)s",
"filename_with_episode": "%(seriesname)s-s%(seasonno)02de%(episode)s-%(episodename)s%(ext)s",
"filename_with_episode_no_season": "%(seriesname)s-e%(episode)s-%(episodename)s%(ext)s",
"filename_without_episode": "%(seriesname)s-s%(seasonno)02de%(episode)s%(ext)s",
"filename_without_episode_no_season": "%(seriesname)s-e%(episode)s%(ext)s",
"lowercase_filename": false,
"move_files_confirmation": true,
"move_files_destination": "/media/nfs/ds414/video/serie/%(seriesname)s/%(seriesname)s-s%(seasonnumber)02d",
"move_files_enable": true,
"multiep_join_name_with": ", ",
"normalize_unicode_filenames": false,
"recursive": false,
"custom_filename_character_blacklist": ":<>?*",
"replace_invalid_characters_with": "_",
"move_files_fullpath_replacements": [
{"is_regex": false, "match": " ", "replacement": "_"},
{"is_regex": false, "match": "_-_", "replacement": "-"},
{"is_regex": false, "match": ":", "replacement": ""},
{"is_regex": false, "match": "Serie's_name", "replacement": "Series_Name"}
],
"output_filename_replacements": [
{"is_regex": false, "match": "?", "replacement": ""},
{"is_regex": false, "match": " ", "replacement": "_"},
{"is_regex": false, "match": "_-_", "replacement": "-"},
{"is_regex": false, "match": ":", "replacement": ""},
{"is_regex": false, "match": "Serie's_name", "replacement": "Series_Name"}
],
"input_filename_replacements": [
{"is_regex": true, "match": "Serie.US", "replacement": "Serie"},
{"is_regex": true, "match": "Serie([^:])", "replacement": "Serie_expanded_name\\1"}
]
}
If you want to update tvnamer in the future just do:
cd /agraver/git/tvnamer
git pull
python setup.py install
Transmission tweaks
Install transmission and command line related tools via:
apt install transmission transmission-daemon transmission-cli transmission-remote-cli python-transmissionrpc
Transmission daemon configuration files are located in /var/lib/transmission-daemon/info. Make sure to stop transmission via its web interface or with the command below before editing settings.json, so that your modifications are taken into account:
service transmission-daemon stop
vi /var/lib/transmission-daemon/info/settings.json
service transmission-daemon start
- learn to use transmission-remote script
- in order to execute some actions at the end of a download, a script can be triggered, e.g. for moving files around based on tracker name:
"script-torrent-done-enabled": true,
"script-torrent-done-filename": "/usr/local/bin/transmission-done.sh",
With /usr/local/bin/transmission-done.sh for instance being:
#!/bin/bash
# 'TR_TORRENT_NAME'
# 'TR_TORRENT_DIR'
# 'TR_TORRENT_ID'
# 'TR_APP_VERSION'
# 'TR_TORRENT_HASH'
# 'TR_TIME_LOCALTIME'
TV_SHOW_ID="hdtv"
MEDIA_DIR="/media/nfs/ds414"
DOWNLOAD_DIR="${MEDIA_DIR}/download"
TV_SHOW_DIR="${MEDIA_DIR}/download/serie"
DBGF=${DOWNLOAD_DIR}/log
istvshow=false
istvshowrpk=false
isprivate=false
istrackera=false
istrackerb=false
if [ -z "${TR_TORRENT_NAME}" -o -z "${TR_TORRENT_ID}" -o -z "${TR_TORRENT_DIR}" ]
then
TR_TORRENT_ID=$1
[ -z "$1" ] && exit 1
fi
tremote ()
{
/usr/bin/transmission-remote -n USER:PASSWORD $HOST -t ${TR_TORRENT_ID} $* # 2>&1 | tee -a $DBGF
}
if [ -z "${TR_TORRENT_NAME}" -o -z "${TR_TORRENT_DIR}" ]
then
TR_TORRENT_NAME=$( tremote -t $1 -i | grep Name: | tr -s ' ' | sed 's/^ *//' | cut -s -d " " -f 2 )
TR_TORRENT_DIR=$( tremote -t $1 -i | grep Location: | tr -s ' ' | sed 's/^ *//' | cut -s -d " " -f 2)
fi
echo `date +%Y-%m-%d\ %H:%M` processing id ${TR_TORRENT_ID} file ${TR_TORRENT_NAME} in dir ${TR_TORRENT_DIR} 2>&1 | tee -a $DBGF
( tremote -i | grep "Public torrent" | grep -qi No ) && isprivate=true
( tremote -it | grep -qi "tracker.a" ) && istrackera=true
( tremote -it | grep -qi "tracker.b" ) && istrackerb=true
# if flexget sets Location to serie then it must be a tvshow
( tremote -i | grep Location | grep -q $TV_SHOW_DIR ) && istvshow=true
if ( echo "${TR_TORRENT_NAME}" | grep -Eqi "${TV_SHOW_ID}" )
then
istvshow=true
( ( echo "${TR_TORRENT_NAME}" | grep -Eqi "REPACK" ) || ( echo "${TR_TORRENT_NAME}" | grep -Eqi "PROPER" ) ) && istvshowrpk=true || istvshowrpk=false
fi
if $isprivate
then
# private torrent
if $istrackera
then
echo "trackera"
mkdir -p "$DOWNLOAD_DIR/trackera"
tremote --move "$DOWNLOAD_DIR/trackera"
elif $istrackerb
then
echo "trackerb"
mkdir -p "$DOWNLOAD_DIR/trackerb"
tremote --move "$DOWNLOAD_DIR/trackerb"
fi
else
# public torrent
if $istvshow
then
echo "tvshow detected, move and remove ${TR_TORRENT_DIR}/${TR_TORRENT_NAME}" "${TV_SHOW_DIR}" 2>&1 | tee -a $DBGF
tremote --move "$DOWNLOAD_DIR/serie"
tremote -r
else
echo "other public"
tremote -S
mkdir -p "$DOWNLOAD_DIR/public"
tremote --move "$DOWNLOAD_DIR/public"
fi
fi
- periodically you can check whether a download has finished seeding (based on your seed ratio) and then decide to move the completed files somewhere and remove them from transmission. This script does this for you:
#!/bin/bash
tremote ()
{
transmission-remote -n user:password $*
}
for t in `tremote -l | grep Done | sed -e 's/^ *//' | sed 's/\*//' | cut -s -d " " -f1`
do
if ( tremote -t $t -it | grep -qi "tracker.a" );
then
echo processing $t
tremote -t $t --move "/media/nfs/ds414/files/trackera"
tremote -t $t -r
fi
done
- enabling the blocklist in transmission is a sensible thing to do; for this add the following options to /var/lib/transmission-daemon/info/settings.json:
"blocklist-enabled": true,
"blocklist-url": "http://list.iblocklist.com/?list=bt_level1&fileformat=p2p&archiveformat=gz",
You can periodically update the blocklist via a cron job: issue sudo crontab -e and add (no user field, since this is root's own crontab):
30 19 * * * /usr/bin/transmission-remote -n user:password --blocklist-update
then restart cron:
service cron restart
- to keep a fair use of the ADSL connection and allow others to have good traffic without too much latency, edit /var/lib/transmission-daemon/info/settings.json:
"download-queue-size": 5,
"peer-limit-per-torrent": 5,
"peer-limit-global": 30,
In order to have good permission rights, in the synology domain add a transmission group with gid 107 in /etc/group, matching the gid of debian-transmission in the debian domain. Add the users that should get access to the download directory to this group in the synology domain. On the debian side, fix the directory rights to allow writing for debian-transmission:
addgroup debian-transmission mediagroup
chown -R debian-transmission:mediagroup /media/nfs/ds414/download
chmod g+rwX -R /media/nfs/ds414/download
- the transmission logs suggest udp and utp buffer optimizations (visible after adding --logfile /media/nfs/ds414/download/transmission/transmission.log to OPTIONS in /etc/default/transmission-daemon and reading the logs); apply them to /etc/sysctl.conf:
echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf
echo "net.core.wmem_max = 1048576" >> /etc/sysctl.conf
sysctl -p
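You can read the limits back to verify they took effect (no root needed for reading):

```shell
# Current socket buffer maxima as seen by the kernel.
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
```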
Test with transmission-daemon in the foreground via transmission-daemon --foreground.
Make sure the ownership is correct: chown -R debian-transmission:debian-transmission /var/lib/transmission/* /etc/transmission-daemon/*
Make transmission visible from outside:
vi /etc/nginx/sites-available/default
location /transmission/ {
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/htpasswd;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_pass http://localhost:9091/transmission/; # local transmission rpc port
proxy_read_timeout 90;
}
Automatic rss based file downloading
flexget is a good tool, and easy to configure, to automate file downloads based on rss feed name matching.
It can be installed by:
apt install sqlite
pip install flexget
pip install transmission-rpc
A typical user configuration can be the following:
cat .flexget/config.yml
templates:
global:
transmission:
host: localhost
port: 9091
username: username
password: password
addpaused: no
email:
active: True
from: flexget@yourdomain.com
to: user@yourdomain.com
smtp_host: smtp.yourdomain.com
smtp_port: 25
smtp_login: false
tv:
quality: webrip|webdl|hdtv <720p
regexp:
reject:
- Blah Confidential
content_filter:
reject: '*.avi'
set:
path: /media/nfs/ds414/download/transmission/serie
skip_files:
- '*.nfo'
- '*.sfv'
- '*[sS]ample*'
- '*.txt'
include_subs: yes
series:
- Serie name one
- Serie name two
tasks:
stream1:
rss: http://streamsiteone.com/feed/
template: tv
priority: 10
stream2:
rss: http://streamsitetwo.com/?cat=9
template: tv
priority: 20
In order to check flexget configuration you can issue (when it fails have a look here):
flexget check
Launching the flexget task can be automated with a cron job: issue sudo crontab -e and add (no user field, as this is root's own crontab):
0 07 * * * /usr/local/bin/flexget --cron >/dev/null 2>&1
Relaunch cron service through:
service cron restart
If you want to update flexget in the future just do:
pip install --upgrade flexget transmission-rpc
#pip install --force-reinstall --ignore-installed flexget transmission-rpc
If you experience some issues with seen status or database you can reset it with the following command that should be used only as a last resort:
flexget database reset --sure
Backup your data with history: rsnapshot
rsnapshot is an rsync based backup system that I like: it is simple yet efficient.
apt install rsnapshot
My configuration is the following in /etc/rsnapshot.conf:
config_version 1.2
snapshot_root /media/nfs/backup/rsnapshot
cmd_cp /bin/cp
cmd_rm /bin/rm
#cmd_rsync /usr/syno/bin/rsync
cmd_rsync /usr/bin/rsync
cmd_ssh /opt/bin/ssh
cmd_du /usr/bin/du
interval daily 7
interval weekly 4
interval monthly 6
link_dest 1
verbose 2
loglevel 3
logfile /media/nfs/backup/rsnapshot.log
#rsync_long_args -av --progress --delete --numeric-ids --relative --delete-excluded --syno-acl
rsync_long_args -av --progress --delete --numeric-ids --relative --delete-excluded
lockfile /var/run/rsnapshot.pid
# little trick in .ssh/config to map syno to host localhost with port 2222.
WARNING: you need to enable backup services i.e. rsync on synology's web interface
backup root@syno:/ synosys/ exclude_file=/etc/incexcl-synosys
backup / debiansys/ exclude_file=/etc/incexcl-debiansys
backup / debiandata/ exclude_file=/etc/incexcl-debiandata
backup / synodata/ exclude_file=/etc/incexcl-synodata
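Before wiring rsnapshot into cron, it is worth validating the file (rsnapshot is picky: configuration fields must be separated by tabs, not spaces) and doing a dry run:

```shell
# Syntax-check /etc/rsnapshot.conf
rsnapshot configtest
# Show what a daily run would do without touching anything
rsnapshot -t daily
```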
In order to back up the synology system domain, I use an alias for syno in the .ssh/config file:
# for rsnapshot to access ds414:2222
host syno
port 2222
hostname ds414
You will also need to enable rsync (network backup) service on synology web interface through main menu->backup & replication->backup services specifying SSH encryption port as 2222.
You can notice that, to be safe, I am performing the backup to an external USB disk attached to the synology (located in /volumeUSB1/usbshare).
All the directories (include and exclude) are specified in the /etc/incexcl-debiansys file which has the following format:
+ /etc/
+ /root/
+ /usr/
+ /usr/local/
+ /usr/local/etc/
- /usr/local/*
- /usr/*
+ /var/
+ /var/www/
+ /var/spool/
+ /var/spool/cron/
- /var/spool/*
+ /var/lib/
+ /var/lib/mysql/
- /var/lib/*
- /var/*
- /*
Synology's system configuration file tree to be backed up is specified in /etc/incexcl-synosys:
+ /etc/
+ /root/
+ /usr/
+ /usr/syno/
+ /usr/syno/etc/
+ /usr/syno/apache/
- /usr/syno/*
- /usr/*
+ /opt/
+ /opt/etc/
+ /opt/share/
+ /opt/share/lib/
- /opt/share/*
+ /opt/lib/
+ /opt/lib/ipkg/
- /opt/lib/*
- /opt/*
- /*
- Install and configure a simple mail client (mutt) to receive notifications of failures.
Edit /root/.muttrc
to reflect your smtp server and credentials (here the domain is a google apps business one):
unset metoo
unset save_empty
my_hdr From: backup <user@yourdomain.com>
my_hdr X-Organization: YOURDOMAIN
set from="user@yourdomain.com"
set imap_user="user@yourdomain.com" imap_pass="yourPassword"
set smtp_url="smtp://user\@yourdomain.com@smtp.gmail.com:587/" smtp_pass="yourPassword"
set folder=imaps://user\@yourdomain.com@imap.gmail.com/
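With .muttrc in place, a one-off message confirms the smtp settings before relying on them for backup notifications (the address is a placeholder):

```shell
# Send a test mail non-interactively through the configured smtp_url
echo "test body" | mutt -s "mutt smtp test" user@yourdomain.com
```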
- Automate your backups by launching rsnapshot from the crontab: issue
sudo crontab -e
to add:
#minute hour mday month wday command
55 23 * * * /usr/bin/rsnapshot -v daily || mutt -s "bkp `date +\%Y\%m\%d` daily" user@yourdomain.com < /media/nfs/backup/rsnapshot.log
0 02 * * 0 /usr/bin/rsnapshot -v weekly || mutt -s "bkp `date +\%Y\%m\%d` weekly" user@yourdomain.com < /media/nfs/backup/rsnapshot.log
0 04 1 * * /usr/bin/rsnapshot -v monthly || mutt -s "bkp `date +\%Y\%m\%d` monthly" user@yourdomain.com < /media/nfs/backup/rsnapshot.log
Note: a per-user crontab has no user field, each entry must fit on one line, and % must be escaped as \% or cron treats it as a newline.
Restart cron:
service cron restart
Full manual backup
On synology, synology domain:
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/incexcl-imac marc@imac.courville.org:/ /volumeUSB2/usbshare/backup/imac/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-synodata / /volumeUSB2/usbshare/backup/synodata/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-synosys / /volumeUSB2/usbshare/backup/synosys/
rsync -av --progress --delete --numeric-ids --relative --exclude-from=/volume1/debian/etc/incexcl-debiansys / /volumeUSB2/usbshare/backup/debiansys/
rsync -av --progress --delete --numeric-ids --exclude-from=/volume1/incexcl-mediatiny /volume1/video/hd/ /volumeUSB2/usbshare/video/
for i in ds211-6T-ipkg hyperion-linux ggdrive mail
do
rsync -av --progress --delete --numeric-ids --relative /volume1/backup/$i/ /volumeUSB2/usbshare/backup/$i/
done
On imarc:
rsync -av --progress --delete --exclude-from=/Users/marc/dev/backup/incexcl-imarc / /Volumes/Backup/backup/mba/
To sync two disks (source and destination paths to be adapted):
rsync -av --progress --delete --numeric-ids /source/ /destination/
For maximum speed, following https://gist.github.com/KartikTalwar/4393116, the fastest setting on synology is to disable compression and use the chacha20 cipher:
rsync -aHXxv --numeric-ids --delete --progress --exclude-from=/volume1/incexcl-synomum -e "ssh -T -c chacha20-poly1305@openssh.com -o Compression=no -x" /volume1/ moulinvert:/volume1/
For local fast backups you can use buffer with tar: (cd / && tar cPpSslf -) | buffer -m 64m | (cd /mnt/toto/backup/ && tar xvPpSslf -)
Fast non-incremental backup: on synology, synology domain, install:
ipkg install pv netcat less gcc make zlib rsync tar wget buffer
wget http://zlib.net/pigz/pigz-2.3.3.tar.gz
tar zxvf pigz-2.3.3.tar.gz
cd pigz-2.3.3
sed -i -e "s/^CC=cc/CC=gcc/g" Makefile
make
cp -f pigz unpigz /opt/bin/
# non compressible source
#SOURCE: 192.168.0.2
dir=directory; tar cPpSslf - $dir | pv -s `du -sb $dir | cut -f 1` | nc -l -p 12121
#DESTINATION:
nc 192.168.0.2 12121 | tar xvPpSslf -
# compressible source
#SOURCE: 192.168.0.2
dir=directory; tar cPpSslf - $dir | pv -s `du -sb $dir | cut -f 1` | pigz | nc -q 10 -l -p 12121
#DESTINATION:
nc -w 10 192.168.0.2 12121 | pigz -d | tar xvPpSslf -
Backup your emails with offlineimap
Though there is a debian package for offlineimap, you are better off installing the latest git version since it is able to sync the gmail labels:
git clone git://github.com/OfflineIMAP/offlineimap.git
cd offlineimap
python ./setup.py install
cd ..
easy_install --upgrade offlineimap
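offlineimap reads its configuration from ~/.offlineimaprc; a minimal gmail-oriented sketch could look like the following (account names, paths and credentials are placeholders to adapt):

```ini
[general]
accounts = gmail

[Account gmail]
localrepository = gmail-local
remoterepository = gmail-remote

[Repository gmail-local]
type = Maildir
localfolders = ~/Mail/gmail

[Repository gmail-remote]
type = Gmail
remoteuser = user@yourdomain.com
remotepass = yourPassword
sslcacertfile = /etc/ssl/certs/ca-certificates.crt
```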
Now you can automate sync for an everyday scheduling:
crontab -e
00 22 * * * /usr/local/bin/offlineimap -o > /dev/null 2>&1
Mutt is the ultimate email client
I have been using mutt since 1995 and still use it (less extensively though). Please refer to the following good links in order to have it running smoothly with gmail: consolify your gmail with mutt and the homely mutt.
Here are the tools that I installed:
aptitude install mutt-patched urlview muttprint muttprint-manual w3m notmuch par
easy_install goobook
Files renaming
I use two tools that I find handy:
- perl-file-rename, which I find extremely useful and powerful since it handles regexps (it is installed by default on debian). One example of usage is renaming series in a directory whose files start with the episode number:
rename 's/^([0-9]*) - /serie-s01e$1-/g' *.mp4
rename 's/ /_/g' *
rename "s/l_homme/l\'homme/g" *
- convmv to get rid of non UTF-8 characters
apt install convmv
Here is an example of how to use convmv recursively on current directory to fix encoding:
convmv -f iso-8859-1 -t utf8 --notest -r .
- files with question marks really do upset samba shares it seems, and in order to remove these you can use the following renaming script (find -exec is used so that file names containing spaces are handled correctly):
#!/bin/bash
find . -name '*\?*' -exec rename 's/\?//g' {} +
Low power Ethernet based headless server
In order to optimize power consumption, let's disable Wi-Fi and Bluetooth:
echo "dtoverlay=disable-wifi" | sudo tee -a /boot/config.txt
echo "dtoverlay=disable-bt" | sudo tee -a /boot/config.txt
sudo systemctl disable hciuart
LEDs consume power too, so dark mode it is: add in /boot/config.txt
##turn off ethernet port LEDs
dtparam=eth_led0=4
dtparam=eth_led1=4
##turn off mainboard LEDs
dtoverlay=act-led
##disable ACT LED
dtparam=act_led_trigger=none
dtparam=act_led_activelow=off
##disable PWR LED
dtparam=pwr_led_trigger=none
dtparam=pwr_led_activelow=off
Power off HDMI via tvservice -o (tvservice -p to turn it on again)
Automount known exfat-formatted USB disks and make them available with samba
Add exfat support:
apt install exfat-fuse exfat-utils
Add a removable section in /etc/auto.master
vi /etc/auto.master
/media/removable /etc/auto.removable --timeout=2,sync,nodev,nosuid,iuid=1031
#EOF
To automount only a known disk, identify it first:
blkid /dev/sda1
/dev/sda1: UUID="A5DA-5739" TYPE="exfat" PTTYPE="dos" PARTUUID="5fc68fae-7d99-49d0-8a3c-2d56497223dd"
udevadm info --query all --path /sys/block/sda/sda1/ | grep UUID
E: ID_PART_TABLE_UUID=7bbc5ab5-8ae1-4405-9981-cc9d458bb09b
E: ID_FS_UUID=A5DA-5739
E: ID_FS_UUID_ENC=A5DA-5739
E: ID_PART_ENTRY_UUID=5fc68fae-7d99-49d0-8a3c-2d56497223dd
Now activate it in autofs:
vi /etc/auto.removable
nas -fstype=auto UUID=A5DA-5739
#EOF
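After editing the maps, restart autofs and trigger the mount simply by accessing the path (the nas name comes from /etc/auto.removable above):

```shell
systemctl restart autofs
# Accessing the directory triggers the automount; it is unmounted
# again after the configured timeout expires.
ls /media/removable/nas
mount | grep removable
```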
Install samba
apt install samba samba-common-bin
vi /etc/samba/smb.conf
## AUTHENTICATION SECTION ##
## HOMES SECTION ##
[nas]
comment= NAS
path = /media/removable/nas
valid users = media
force group = users
create mask = 0660
directory mask = 0771
read only = no
#EOF
/etc/init.d/smbd restart
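Samba keeps its own password database, so the media user declared in valid users most likely needs to be added to it; testparm is also handy to validate smb.conf:

```shell
# Validate the samba configuration
testparm -s
# Add the existing unix user "media" to samba's password database
# (prompts for the samba password)
smbpasswd -a media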
Make rpi easily discoverable via bonjour
For this use avahi:
apt install avahi-daemon
vi /etc/avahi/avahi-daemon.conf
allow-interfaces=eth0,wlan0
#EOF
service avahi-daemon restart
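To check that the rpi is now announced on the LAN, browse the mDNS services from the rpi itself, or ping the .local name from another machine (the hostname raspberrypi is an assumption here):

```shell
# List services announced on the network (avahi-browse is in avahi-utils)
apt install avahi-utils
avahi-browse -at
# From another machine on the LAN:
ping raspberrypi.local
```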
Usenet (newsbin) support: sabnzbd
wget http://www.rarlab.com/rar/unrarsrc-5.9.3.tar.gz
tar zxvf unrarsrc-5.9.3.tar.gz
cd unrar
make
install -v -m755 unrar /usr/local/bin
apt install sabnzbdplus
vi /etc/default/sabnzbdplus
USER=media
HOST=0.0.0.0
PORT=6080
#EOF
cd /media/nfs/ds414/download
chown -R media:mediagroup sabnzb
chmod -R ug+rwX sabnzb
cd sabnzb
mkdir tv movie audio software complete incomplete incoming
#http://192.168.0.3:6080
vi /etc/nginx/sites-available/default
location /sabnzbd {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_pass http://localhost:6080/sabnzbd;
proxy_read_timeout 90;
}
#EOF
apt install nzbget
sonarr for nzb TV shows
apt install apt-transport-https dirmngr
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
apt install mono-devel
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0xA236C58F409091A18ACA53CBEBFF6B99D9B78493
echo "deb http://apt.sonarr.tv/ master main" | sudo tee /etc/apt/sources.list.d/sonarr.list
apt install nzbdrone
chown -R media:mediagroup /opt/NzbDrone
vi /etc/systemd/system/sonarr.service
[Unit]
Description=Sonarr Daemon
After=syslog.target network-online.target
[Service]
User=media
Group=mediagroup
Type=simple
ExecStart=/usr/bin/mono --debug /opt/NzbDrone/NzbDrone.exe
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
#EOF
systemctl enable sonarr.service
systemctl start sonarr.service
systemctl status sonarr.service
#http://192.168.0.3:8989
vi /etc/nginx/sites-available/default
location /sonarr {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_pass http://localhost:8989;
proxy_read_timeout 90;
}
#EOF
In sonarr web interface do set in settings general url base to /sonarr
radarr for nzb movies
cd /opt/
apt install curl
curl -L -O $( curl -s https://api.github.com/repos/Radarr/Radarr/releases | grep linux.tar.gz | grep browser_download_url | head -1 | cut -d \" -f 4 )
tar -xvzf Radarr.develop.*.linux.tar.gz
chown -R media:mediagroup /opt/Radarr
vi /etc/systemd/system/radarr.service
[Unit]
Description=Radarr Daemon
After=syslog.target network.target
[Service]
User=media
Group=mediagroup
Type=simple
ExecStart=/usr/bin/mono --debug /opt/Radarr/Radarr.exe -nobrowser
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
#EOF
systemctl enable radarr.service
systemctl start radarr.service
systemctl status radarr.service
#http://192.168.0.3:7878
vi /etc/nginx/sites-available/default
location /radarr {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_pass http://localhost:7878;
proxy_read_timeout 90;
}
#EOF
In radarr web interface do set in settings general url base to /radarr
Acrarium backend
Acrarium is a crash-report backend for Android applications that use ACRA.
mysql -u root -p
create database acrarium;
CREATE USER acrarium IDENTIFIED BY 'PASSWORD';
GRANT ALL PRIVILEGES ON acrarium.* TO 'acrarium'@'localhost' IDENTIFIED BY 'PASSWORD';
cat /home/media/.acra/application.properties
spring.datasource.url=jdbc:mysql://localhost:3306/acrarium?useSSL=false&serverTimezone=UTC
spring.datasource.username=acrarium
spring.datasource.password=PASSWORD
spring.jpa.database-platform=org.hibernate.dialect.MySQL57Dialect
server.port=9090
#EOF
apt install default-jdk
wget https://github.com/F43nd1r/Acrarium/releases/download/v0.10.7/acrarium-0.10.7.jar
cp acrarium-0.10.7.jar /var/lib/acrarium
vi /etc/systemd/system/acrarium.service
[Unit]
Description=Acrarium Daemon
After=syslog.target network.target
[Service]
User=media
Group=mediagroup
Type=simple
ExecStart=/usr/bin/java -jar /var/lib/acrarium/acrarium-0.10.7.jar
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
#EOF
systemctl enable acrarium.service
systemctl start acrarium.service
systemctl status acrarium.service
#http://192.168.0.3:9090
vi /etc/nginx/sites-available/default
location /acrarium {
rewrite ^/acrarium/?(.*) /$1 break;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_pass http://localhost:9090;
proxy_read_timeout 90;
}
#EOF
BD (comics) streaming
YACReader seems to be a good solution:
cd unarr-deb
dpkg-buildpackage -b -rfakeroot -us -uc
dpkg -i ../libunarr_1.0.1-2_armhf.deb
dpkg -i ../libunarr-dev_1.0.1-2_armhf.deb
cd yacreader-deb
apt install qtmultimedia5-dev qtscript5-dev qtdeclarative5-dev libqt5multimedia5
dpkg-buildpackage -b -rfakeroot -us -uc
vi /etc/systemd/system/YACReaderLibraryServer.service
[Unit]
Description=YACReaderLibraryServer Daemon
After=syslog.target network.target
[Service]
User=media
Group=mediagroup
Type=simple
ExecStart=/usr/bin/YACReaderLibraryServer start
TimeoutStopSec=20
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
#EOF
systemctl enable YACReaderLibraryServer.service
systemctl start YACReaderLibraryServer.service
systemctl status YACReaderLibraryServer.service
cat /home/media/.local/share/YACReader/YACReaderLibrary/YACReaderLibrary.ini
In order to add or refresh the comics you need to:
su - media
YACReaderLibraryServer add-library BD /volume1/bd
YACReaderLibraryServer list-libraries /volume1/bd
YACReaderLibraryServer update-library /volume1/bd
docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi
# you need to logout/login
sudo apt install libffi-dev libssl-dev
sudo apt remove python-configparser
sudo pip3 install docker-compose
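A quick smoke test confirms that the daemon runs and that the pi user can talk to it after logging out and back in:

```shell
docker --version
docker-compose --version
# Pulls a tiny test image and runs it; prints a hello message on success
docker run --rm hello-world
```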
thelounge irc client
In order to get a web based irc client, thelounge is a good candidate:
curl -sL https://deb.nodesource.com/setup_15.x | sudo -E bash -
apt-get install -y nodejs
Install the deb package from https://github.com/thelounge/thelounge/releases:
wget https://github.com/thelounge/thelounge/releases/download/v4.2.0/thelounge_4.2.0_all.deb
dpkg -i ./thelounge_4.2.0_all.deb
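The deb package installs thelounge as a systemd service; a user still has to be created before logging into the web interface (which listens on port 9000 by default; "myuser" below is a placeholder):

```shell
# Create a web login (prompts for a password)
thelounge add myuser
systemctl status thelounge
# Web interface: http://192.168.0.3:9000
```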