

Sync folders and files on Linux with rsync and inotify

 •  Filed under folders, files, linux, sync, rsync, inotify

So: you've got two or more clients and/or servers, and they contain files that you want synced automatically, because that would save a lot of time. Well, I've got a solution for you, and with a little bit of innovative thinking it might bring you onto the right path as well. Rsync is a great tool, but having to run rsync manually takes a lot of unnecessary time, right? And that is what inotify is for: real-time monitoring of your filesystem, so that your files can be synced between multiple machines with the power of rsync!

Self-made sync daemon in a working state

Setting up a test scenario

First I needed a nice development environment, so I spun up 3 virtual servers, all running Ubuntu 16.04, my personal favourite. All 3 machines needed to be set up with the following software packages:

  • OpenSSH server
  • Rsync
  • inotify-tools (for inotifywait)

Also noteworthy: these machines are not connected through a private network at all. All rsync traffic goes over SSH.

Who is this for?

I could see some potential for workflow improvement on these situations:

  • A development environment, where constant manual file transfers take up too much time
  • Load balanced file storage clusters/servers
  • Backup/failover servers with the need for constant replication

Working on it

First things first: we need to get all the dependencies installed on the 3 servers with this one-line command:

apt update && apt -y install openssh-server rsync inotify-tools  

After that, let's create a specific folder that we want to sync. Let's call it syncfiles:

mkdir /opt/syncfiles  

And for secure file transfer, we want public/private key authentication on the SSH connection that rsync uses. This is how to configure it:

ssh-keygen -t rsa -f ~/rsync-key -N ''

# Paste the output in your destination servers' ~/.ssh/authorized_keys file:
cat ~/rsync-key.pub

# Removing public key for security purposes..
rm ~/rsync-key.pub

# Remember to execute this script on all servers separately! Then copy the output of the script into all of your servers' authorized_keys files.
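If you would rather not paste the key around by hand, ssh-copy-id can append it for you. Run it before the rm step above, while the .pub file still exists; this assumes password authentication is still enabled on the destination, and rsync-host02 stands for whichever server you are copying to:

# Append ~/rsync-key.pub to the destination's ~/.ssh/authorized_keys
ssh-copy-id -i ~/rsync-key.pub root@rsync-host02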

Now that all the pre-configuration work is done, it's about time to write a script that runs an infinite loop around inotifywait:

#!/bin/bash

# This version runs on rsync-host01. For the copy that runs on rsync-host02, change rsync-host02 to rsync-host01 below.

while true; do  
  inotifywait -r -e modify,attrib,close_write,move,create,delete /opt/syncfiles
  rsync -avz -e "ssh -i /root/rsync-key -o StrictHostKeyChecking=no" /opt/syncfiles/ root@rsync-host02:/opt/syncfiles/
done  

I saved this script in the /opt directory as file-sync.sh.
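Before wiring it into systemd, you can sanity-check the loop by running it in one terminal and touching a file in another (hello.txt is just an arbitrary test name):

# Terminal 1: run the watcher/sync loop in the foreground
bash /opt/file-sync.sh

# Terminal 2: create a test file, then check that it arrived on the other host
touch /opt/syncfiles/hello.txt
ssh -i /root/rsync-key root@rsync-host02 ls /opt/syncfiles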

To finish things off, let's create the systemd service file that can stop, start, and restart the script on demand or on specific events like a system bootup.

Create a file called sync.service in the directory /etc/systemd/system/ and put the following contents in it:

[Unit]
Description = SyncService  
After = network.target

[Service]
PIDFile = /run/syncservice/syncservice.pid  
User = root  
Group = root  
WorkingDirectory = /opt  
ExecStartPre = /bin/mkdir -p /run/syncservice
ExecStartPre = /bin/chown -R root:root /run/syncservice  
ExecStart = /bin/bash /opt/file-sync.sh  
ExecReload = /bin/kill -s HUP $MAINPID  
ExecStop = /bin/kill -s TERM $MAINPID  
ExecStopPost = /bin/rm -rf /run/syncservice  
PrivateTmp = true

[Install]
WantedBy = multi-user.target  

Chmod this service file and reload the systemd daemon:

chmod 755 /etc/systemd/system/sync.service  
systemctl daemon-reload  

You are all set! You can now use these commands to manage your self-made directory sync daemon:

# Start your service
systemctl start sync.service

# Obtain your services' status
systemctl status sync.service

# Stop your service
systemctl stop sync.service

# Restart your service
systemctl restart sync.service  
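And since the unit file has an [Install] section, one more command makes the service come up automatically at boot:

# Enable the service so systemd starts it on bootup
systemctl enable sync.service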

Live system DD backup, with encryption and compression!

 •  Filed under linux, live, backup, dd, openssl, encryption, gzip, compression

Imaging a live system disk, what a stupid idea, right? It actually is... but hey, I was just looking for an alternative solution for an off-site backup of my servers: dd reads my disk bit by bit and pipes the data to gzip, gzip pipes the compressed data on to OpenSSL, and OpenSSL finally pipes the encrypted result to netcat.

That's a handful of words right there! It's relatively simple, actually. This is the command I ran on my server:

sudo dd if=/dev/vda conv=sync,noerror status=progress | gzip -9 -c | openssl aes-192-cbc -salt -e | nc -l 10.11.12.1 56002

Netcat starts a TCP listener on the IP address 10.11.12.1, port 56002, and waits for an incoming connection. After you run this long command you will be prompted for a password; your data gets protected with AES-192 encryption, which is more than sufficient. Make sure you remember this password, because it is the ONLY key to your data!

Remote server transferring an image to the client

I used my local computer, which runs Linux, to stream the disk image to. The local machine connects to the remote server, and the transfer starts as soon as a TCP connection has been established with the server. You can even follow the throughput, like on the screenshot above! Pretty cool, huh?

The command to run on your client machine is:

nc 10.11.12.1 56002 | openssl aes-192-cbc -salt -d | gunzip -c > disk.img
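For a quick sanity check that the decrypted, decompressed image came through intact, you can inspect the raw file like a block device (fdisk happily reads partition tables from plain files):

# Print the partition table of the received raw image
fdisk -l disk.img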

It took me around 15-20 minutes to copy a live 20GB VM over to my local computer. This server has 1 core and 512MB RAM so I'd say that the results are pretty acceptable.

And the good thing is that the Linux disk was actually bootable! Since the output is a raw disk image, I had to convert it over to a VMware .vmdk file and that file worked. I'm pretty amazed!
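The post doesn't say which tool did the conversion; if you have qemu-img around, a sketch of the raw-to-.vmdk step would look like this:

# Convert the raw dd image into a VMware-compatible vmdk
qemu-img convert -f raw -O vmdk disk.img disk.vmdk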

Awesome, now let's hope that there's an equivalent alternative for Windows available 😜

Node.js Linux Deployment Script

 •  Filed under installation, node.js, nodejs, linux, deployment, script

Need an easy solution to deploy the latest version of Node.js on Linux? This Bash script that I made does exactly that, and nothing more. There might be more elegant ways to deploy Node.js, but I needed something lightweight that is still as functional as solutions like nvm and n. The only things this script depends on are bash, curl, sed, grep and tar. These tools come preinstalled on almost all Linux distributions.

#!/bin/bash

# Node.js deployment script (automatically fetches the latest version)

CPUARCH="x64"
LINK="https://nodejs.org/dist/latest/"
DESTINATIONDIR="/opt/nodejs"

# Scrape the directory listing for the Linux tarball matching our CPU architecture
DOWNLOAD=$(curl -s "$LINK" | grep -o '<a .*href=.*>' | sed -e 's/<a /\<a /g' | sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d' | grep 'linux' | grep "$CPUARCH" | grep '.tar.gz')
LINK="$LINK$DOWNLOAD"

echo "[*] Downloading the latest version of Node.js .."

curl -s "$LINK" -o "$DOWNLOAD"

echo "[*] Node.js has been downloaded .."
echo "[*] Creating folder in /opt and unpacking Node.js there .."

# Start from a clean destination directory, then unpack the tarball into it
[ -d "$DESTINATIONDIR" ] && rm -rf "$DESTINATIONDIR"
mkdir -p "$DESTINATIONDIR"
tar -xf "$DOWNLOAD" -C "$DESTINATIONDIR" --strip-components=1
rm "$DOWNLOAD"

echo "[*] Symlinking Node.js and npm into /usr/bin .."

# Remove any existing binaries or symlinks before linking the new ones
[ -f /usr/bin/node ] && rm /usr/bin/node
[ -f /usr/bin/nodejs ] && rm /usr/bin/nodejs
[ -f /usr/bin/npm ] && rm /usr/bin/npm
ln -s "$DESTINATIONDIR/bin/node" /usr/bin/node
ln -s "$DESTINATIONDIR/bin/node" /usr/bin/nodejs
ln -s "$DESTINATIONDIR/bin/npm" /usr/bin/npm

echo "[*] All done!"

I'd much rather have programmed something like this in Python, but this actually works pretty well! I'm happy with the results 🍻


Tip: you can change https://nodejs.org/dist/latest/ to https://nodejs.org/dist/latest-boron/ to change the distribution channel to the latest LTS release of Node.js 😉