Bart Simons

Sync folders and files on Linux with rsync and inotify

 •  Filed under folders, files, linux, sync, rsync, inotify

So: you've got two or more clients and/or servers, and they contain files that you want synced automatically, because doing it by hand wastes a lot of time. Well, I've got a solution for you that might put you on the right path as well. Rsync is great, but running it manually over and over takes up a lot of unnecessary time, right? That's where inotify comes in: it monitors your filesystem in real time, so your files can be synced between multiple machines with the power of rsync!

Self-made sync daemon in a working state

Setting up a test scenario

First, I needed a nice development environment, so I spun up three virtual servers, all running Ubuntu 16.04, my personal favourite. All three machines need to be set up with the following software packages:

  • OpenSSH server
  • Rsync
  • inotify-tools

Also noteworthy: these machines are not connected through a private network at all. All rsync traffic goes over SSH.

Who is this for?

I can see potential for workflow improvements in situations like these:

  • A development environment where constant manual file transfers take up a lot of time
  • Load balanced file storage clusters/servers
  • Backup/failover servers with the need for constant replication

Working on it

First things first: we need to get all the dependencies installed on the 3 servers with this one-line command:

apt update && apt -y install openssh-server rsync inotify-tools  

After that, let's create a specific folder that we want to sync; let's call it syncfiles:

mkdir /opt/syncfiles  

And for secure file transfers, we want public key authentication on the SSH connection that rsync uses. This is how to set it up:

ssh-keygen -t rsa -f ~/rsync-key -N ''

# Paste the output in your destination servers' ~/.ssh/authorized_keys file:
cat ~/rsync-key.pub

# Removing public key for security purposes..
rm ~/rsync-key.pub

# Remember to run these commands on every server separately! Then copy each server's public key output into the authorized_keys file on all of the other servers.
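
Before continuing, it can't hurt to verify that key-based login actually works; a quick sanity check from rsync-host01 (using the same example hostname as in the script below):

# Should print the remote hostname without asking for a password
ssh -i ~/rsync-key -o StrictHostKeyChecking=no root@rsync-host02 hostname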

Now that all the pre-configuration work is done, it's time to write a script that runs inotifywait in an infinite loop:

#!/bin/bash

# Meant to run on rsync-host01; change rsync-host02 to rsync-host01 to get the script that runs on rsync-host02.

while true; do  
  inotifywait -r -e modify,attrib,close_write,move,create,delete /opt/syncfiles
  rsync -avz -e "ssh -i /root/rsync-key -o StrictHostKeyChecking=no" /opt/syncfiles/ root@rsync-host02:/opt/syncfiles/
done  

I saved this script in the /opt directory as file-sync.sh.
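
Before turning it into a service, you can give the script a quick manual test run; a rough sketch, assuming you are on rsync-host01:

# Run the sync loop in the background for a moment
bash /opt/file-sync.sh &

# Create a test file; it should appear on rsync-host02 shortly after
touch /opt/syncfiles/test.txt

# Stop the background loop again once you are done testing
kill %1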

To finish things off, let's create the systemd service file so the script can be started, stopped, and restarted on demand or on specific events like a system boot.

Create a file called sync.service in the directory /etc/systemd/system/ and put the following contents in it:

[Unit]
Description = SyncService  
After = network.target

[Service]
PIDFile = /run/syncservice/syncservice.pid  
User = root  
Group = root  
WorkingDirectory = /opt  
ExecStartPre = /bin/mkdir /run/syncservice  
ExecStartPre = /bin/chown -R root:root /run/syncservice  
ExecStart = /bin/bash /opt/file-sync.sh  
ExecReload = /bin/kill -s HUP $MAINPID  
ExecStop = /bin/kill -s TERM $MAINPID  
ExecStopPost = /bin/rm -rf /run/syncservice  
PrivateTmp = true

[Install]
WantedBy = multi-user.target  

Set the right permissions on this service file and reload the systemd daemon:

chmod 755 /etc/systemd/system/sync.service  
systemctl daemon-reload  

You are all set! You can now use these commands to manage your self-made directory sync daemon:

# Start your service
systemctl start sync.service

# Obtain your service's status
systemctl status sync.service

# Stop your service
systemctl stop sync.service

# Restart your service
systemctl restart sync.service  
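
And because of the WantedBy = multi-user.target line in the [Install] section, you can also make the sync daemon come up automatically at boot:

# Start the service automatically at boot
systemctl enable sync.service

# Or turn that off again
systemctl disable sync.service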

Gunicorn as a SystemD service

 •  Filed under gunicorn, systemd, service

Beautiful, you have just finished your Python web application, framework or API. You have chosen Gunicorn as your WSGI server, a solid choice. Now you want to make your app manageable, so you want to integrate it with your init system. But the question is: how? This article contains all the information you need to make your Gunicorn-based app manageable with SystemD.

Creating a starting point

First, we need a sample application to run behind Gunicorn. Falcon is a great, lightweight Python framework to build a test application with. So, here's my example code file, called sample.py:

import falcon  
import json

class RootPage:  
    def on_get(self, req, resp):
        resp.body = "Hello, it works!"

app = falcon.API()  
app.add_route('/', RootPage())  
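
If you don't have the two packages installed yet, both Falcon and Gunicorn can be pulled in with pip (the exact command may differ depending on your Python setup):

pip install falcon gunicorn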

Make sure to place this file in a directory called /opt/sampleapp. That directory needs to be created first, along with a dedicated sampleapp user to run the application as:

adduser --shell=/bin/false --no-create-home --disabled-password sampleapp  
mkdir /opt/sampleapp  
chown -R sampleapp:sampleapp /opt/sampleapp  

This piece of code can be brought to life with the following command:

gunicorn sample:app -b 0.0.0.0:8000  

Your terminal output should look like this:

Gunicorn in action
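
You can verify the response from a second terminal as well; assuming the bind address from the command above:

curl http://127.0.0.1:8000/
# Hello, it works!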

Integration with SystemD

So now that we've got the demo working, let's make it manageable with SystemD! To accomplish that, we first need to know how to daemonize the Gunicorn process and all its subprocesses. Luckily, we can do all of that inside a SystemD service file. Create a file in /etc/systemd/system/ called sampleapp.service and put the following content in it:

[Unit]
Description = SampleApp  
After = network.target

[Service]
PermissionsStartOnly = true  
PIDFile = /run/sampleapp/sampleapp.pid  
User = sampleapp  
Group = sampleapp  
WorkingDirectory = /opt/sampleapp  
ExecStartPre = /bin/mkdir /run/sampleapp  
ExecStartPre = /bin/chown -R sampleapp:sampleapp /run/sampleapp  
ExecStart = /usr/bin/env gunicorn sample:app -b 0.0.0.0:8000 --pid /run/sampleapp/sampleapp.pid  
ExecReload = /bin/kill -s HUP $MAINPID  
ExecStop = /bin/kill -s TERM $MAINPID  
ExecStopPost = /bin/rm -rf /run/sampleapp  
PrivateTmp = true

[Install]
WantedBy = multi-user.target  
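
ExecStart uses /usr/bin/env to look gunicorn up on the PATH. If SystemD can't find it on your machine, check where gunicorn actually lives and point ExecStart at that absolute path instead:

which gunicorn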

Don't forget to apply the right permissions on the service file and reload the systemd daemon to make your changes effective:

chmod 755 /etc/systemd/system/sampleapp.service  
systemctl daemon-reload  

You can now manage your service with one of the following commands listed here:

# Start your service
systemctl start sampleapp.service

# Obtain your service's status
systemctl status sampleapp.service

# Stop your service
systemctl stop sampleapp.service

# Restart your service
systemctl restart sampleapp.service  

I got my sample app service started successfully! Here it is:
Sample service in working state

And the web service itself works as well:

Web service in working state

Awesome! So this is how Gunicorn can be integrated with and managed through SystemD. Good luck with your project 🙂

Unlock a Ghost blog account

 •  Filed under ghost, ghost.org, unlock, account

Bummer! You've just locked yourself out of your Ghost blog account, right? Well, it happened to me a while ago after I mistyped my password 5 times. Could happen to anyone, right? Here's a nifty little trick that may help you if you don't want to go through the process of resetting your password and/or if you want to keep your current password.

First things first: you need to have access to a shell on your server. I use SSH for that. Once you have shell access, stop all running Ghost processes. This is important, because you don't want to modify a database that's active and/or in use!
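
How you stop Ghost depends entirely on how it was installed; if it runs as a regular service called ghost, something along these lines should do it (adjust the service name to your own setup):

# Stop Ghost if it runs as a service (the name may differ on your setup)
sudo service ghost stop

# Double-check that no Ghost processes are left running
ps aux | grep -i ghost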

If you use SQLite as your database backend (it's the default, so if you don't know which database backend you have, this is probably it), you need some sort of SQLite client. You can install one via apt-get:

sudo apt-get install sqlite3  

Navigate to your ghost.db file; you can find it in the content/data folder inside your Ghost installation path. Then, execute this command to open a SQL shell:

sudo sqlite3 ghost.db  

Now, the only thing left to do is update the status of your account. Execute this inside the SQL shell:

update users set status = 'active' where slug = 'insertyourusernamehere';  
.exit
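
If you want to double-check the change before starting Ghost again, you can also query the users table straight from your shell:

sudo sqlite3 ghost.db "select slug, status from users;"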

Start your Ghost instance again, and voilà: you can log in again!

In case you are using any other SQL-compatible database to run your Ghost blog: just execute the query against the correct database and you will be able to log in as well.

IceWarp Installation Script

 •  Filed under icewarp, installation, deployment, script, automation

Are you currently searching for an easy way to install IceWarp Business Mail Server on Linux, preferably a Debian-based distribution? There's an easy way to automate that! I've written this awesome script in less than one hour:

#!/bin/bash

# IceWarp Download And Deployment Script
# Created by Bart Simons, 2016

# Dependencies:
#  - wget
#  - pv

echo "Downloading IceWarp..."  
wget http://www.icewarp.com/download/server/Ubuntu/Ubuntu16/icewarp11/IceWarpServer-11.4.5_UBUNTU16_x64.tar.gz -q --show-progress -O icewarp.tar.gz

echo "Creating working directory..."  
mkdir icewarp

echo "Extracting IceWarp..."  
pv icewarp.tar.gz | tar -xzf - --strip-components=1 -C icewarp

echo "Changing directory to working directory..."  
cd icewarp

echo "Executing installer..."  
./install.sh

echo "Removing installation files..."  
cd ..  
rm icewarp.tar.gz  
rm -rf icewarp

echo "Done!"  

This should work, good luck with your brand new IceWarp installation!

Live system DD backup, with encryption and compression!

 •  Filed under linux, live, backup, dd, openssl, encryption, gzip, compression

Imaging a live system disk, what a stupid idea, right? It actually is... but hey, I was just looking for an alternative way to make an off-site backup of my servers: dd reads my disk bit by bit and pipes the data to OpenSSL for encryption, OpenSSL pipes it on to gzip for compression, and gzip finally pipes everything to netcat.

That's a handful of words right there! It's relatively simple, actually. This is the command I ran on my server:

sudo dd if=/dev/vda conv=sync,noerror status=progress | openssl aes-192-cbc -salt -e | gzip -9 -c | nc -l 10.11.12.1 56002  

Netcat starts a TCP listener on IP address 10.11.12.1, port 56002, and waits for an incoming connection. After you run this long command you will be prompted to enter a password for your data, which then gets protected with AES-192 encryption; that is more than sufficient. Make sure you remember this password, because it is the ONLY key to your data!

Remote server transferring an image to the client

I used my local computer, which runs Linux, to stream the disk image to. The local machine connects to the remote server and the transfer starts as soon as the TCP connection has been initialised. You can even follow the throughput, like in the screenshot above! Pretty cool, huh?

The command to run on your client machine is:

nc 10.11.12.1 56002 | gunzip -c | openssl aes-192-cbc -salt -d > disk.img  

It took me around 15-20 minutes to copy a live 20GB VM over to my local computer. This server has 1 core and 512MB RAM so I'd say that the results are pretty acceptable.

And the good thing is that the Linux disk was actually bootable! Since the output is a raw disk image, I had to convert it over to a VMware .vmdk file and that file worked. I'm pretty amazed!
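
One way to do such a raw-to-VMDK conversion is with qemu-img from the qemu-utils package; a minimal sketch, assuming your image was saved as disk.img:

sudo apt -y install qemu-utils
qemu-img convert -f raw -O vmdk disk.img disk.vmdk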

Awesome, now let's hope that there's an equivalent alternative for Windows available 😜