Bart Simons

Bootstrap Tables to JSON, and vice versa.

 •  Filed under html, bootstrap, tables, json, jquery, javascript

This is just a quick post and/or announcement about a JavaScript library I have been working on:

github.com/bmsimons/bootstrap-jsontables

The main goal of this library is to make linking a Bootstrap/HTML5 table with JSON data a lot easier. Do you have an API available that returns data in JSON format? No problem! Bootstrap-jsontables takes care of that. Take a look at the examples in my GitHub repo if you want a quick visual impression of what this library can do for your project.


More features coming soon!

I have big plans to improve the search functionality of bootstrap-jsontables. The search functionality in its current form is just too simple, so it is going to be improved in the future.


Converting a table to JSON data

Say that we have this Bootstrap table:

<table id="appUsers" class="table">  
    <thead>
        <tr>
            <th>ID</th>
            <th>Name</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>1</td>
            <td>Bart</td>
        </tr>
        <tr>
            <td>2</td>
            <td>Nick</td>
        </tr>
    </tbody>
</table>  


Use this JavaScript/jQuery code to convert the table to JSON:

// Create a new JSONTable object on the #appUsers table.
var jsonTable = new JSONTable($("#appUsers"))

// Convert the table to JSON
var tableData = jsonTable.toJSON()

// The tableData object:
[{"ID":"1","Name":"Bart"}, {"ID":"2","Name":"Nick"}]



Keeping a log of all progress over here would be a bad idea, so please check out my GitHub for any future updates, changes and improvements:

https://github.com/bmsimons/bootstrap-jsontables

I hope this library helps you with your project. Feature requests and/or contributions are welcome; please let me and the community know in the form of a GitHub issue :)

WireGuard.io Getting Started Guide/Tutorial

 •  Filed under getting started, guide, tutorial, WireGuard, WireGuard.io

In today's networking world, virtual private networks are indispensable. With IT needs growing rapidly, it is essential to make the right choice about which VPN software you are going to use. IPSec tunnels are commonly deployed and proven to deliver good performance while remaining stable, but are there any other alternatives?

Yes there are. Here are some VPN solutions I have deployed in the past:

  • OpenVPN, both in peer-to-peer and remote access configurations
  • PPTP (with pptpd on Linux)
  • SoftEther (has its own VPN protocol over an SSL connection)

Recently - on a long journey through Google - I came across WireGuard. It claims to run its VPN networking code in kernel space for optimal performance, which sounds promising. I decided to dig deeper into WireGuard so I could write a getting-started and configuration guide.

My test environment

My test environment consists of two Linux servers in the cloud, directly connected to each other over a private network:

  • server-01: 10.129.29.151
  • server-02: 10.129.30.154

To benchmark network speeds I used iperf; this is the traffic speed test result I got over the private network, without a tunnel:

(Screenshot: iperf raw network speed test results)

Installing WireGuard

This step is pretty straightforward; just copy and paste these commands into your terminal:

add-apt-repository -y ppa:wireguard/wireguard  
apt update  
apt install -y wireguard-dkms wireguard-tools  

If you don't use Ubuntu on your servers, check out this page on the WireGuard website to find out how to install it on your Linux distribution of preference.

Initialisation of WireGuard's virtual interfaces

Configuring a simple peer-to-peer tunnel on WireGuard is not that complicated.

First of all, let's create the wg0 interface on both servers; this will be the virtual interface for the private network between them:

ip link add dev wg0 type wireguard  

Your virtual network also needs an IP address for each node, so the machines can communicate with each other over IP:

# For server-01:
ip address add dev wg0 192.168.2.1/24

# For server-02:
ip address add dev wg0 192.168.2.2/24  


Generating a configuration for each node

WireGuard uses key-based authentication for communication between nodes. This system consists of a private key and a public key for each node. You can generate these keys on each node with the following commands:

# For server-01:
wg genkey | tee privatekey01 | wg pubkey > publickey01

# For server-02:
wg genkey | tee privatekey02 | wg pubkey > publickey02  

Create a configuration file named wireguard.conf and store it somewhere safe, with the right Linux permissions applied to the file (chown/chmod), since it contains your private key. Here's what you need to put in this configuration file:

# On server-01:

[Interface]
ListenPort = 4820
PrivateKey = privatekey01's content goes here

[Peer]
Endpoint = ip:port of endpoint (10.129.30.154:4820)
PublicKey = publickey02's content goes here
AllowedIPs = 0.0.0.0/0

# On server-02:

[Interface]
ListenPort = 4820
PrivateKey = privatekey02's content goes here

[Peer]
Endpoint = ip:port of endpoint (10.129.29.151:4820)
PublicKey = publickey01's content goes here
AllowedIPs = 0.0.0.0/0

Note: AllowedIPs controls which source addresses are accepted from - and which destinations are routed to - the peer; 0.0.0.0/0 allows everything, so restrict it (for example to 192.168.2.0/24) if you only want tunnel traffic.

Link the configuration to the interface on all nodes:

wg setconf wg0 wireguard.conf  
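You can check that the key and peer were loaded correctly with wg show (part of wireguard-tools):

wg show wg0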

Bring the interface up on all nodes:

ip link set up dev wg0  

You are now connected. You can test connectivity by sending ICMP echo packets:
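For example, from server-01, using the tunnel addresses assigned above:

ping -c 4 192.168.2.2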

(Screenshot: WireGuard ICMP connectivity test)


Benchmarking performance

Run this command on the first node (server-01 in my case):

iperf -s  

Run this command on the second node (server-02 in my case):

iperf -c 192.168.2.1  

These are the results I got over the tunnel: pretty good for just a dual-core server. I'm sure there are tweaks to make WireGuard perform even better; we'll see...

Using Portainer.io to build and manage your Docker swarm

 •  Filed under portainer.io, build, manage, docker, swarm

Are you planning on deploying a Docker swarm anytime soon? Then think about how you want to manage your Docker compute hosts. Do you want to go the CLI way of doing things, or would you prefer a GUI-based frontend to manage your Docker swarm? Personally I'm a fan of doing everything over a CLI, but hey: GUI-managed solutions can be good if they are done right.

So, let's get straight into what Portainer.io can do for you and your (future) Docker environment: most of the things you would usually do through the Docker CLI are available in a sleek-looking graphical interface with Portainer.

Setting up a basic Docker swarm with two compute nodes

Imagine having three servers for your Docker setup:

  • A management server to manage your Docker swarm, while at the same time being equipped with Portainer.io
  • Two compute servers which both function as a member of the Docker swarm

These are the IP addresses of the three machines:

  • Docker Manager: 10.131.38.69
  • Docker Compute 01: 10.131.40.233
  • Docker Compute 02: 10.131.17.82

On the 'Docker Manager' machine, you execute the following command to initialise a new Docker swarm:

docker swarm init --advertise-addr 10.131.38.69  

This command also returns what you need to know to make compute nodes join your Docker swarm:

(Screenshot: docker swarm init output)
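On each compute node, the join command has this general shape; the actual token comes from the init output above, and 2377 is Docker's default swarm management port:

docker swarm join --token <token> 10.131.38.69:2377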

Once all your member servers have joined the swarm, you can install Portainer.io on your Docker management server with this command:

docker service create --name portainerio -p 9000:9000 --constraint 'node.role == manager' --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock portainer/portainer -H unix:///var/run/docker.sock  
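You can verify that the service was scheduled and is running with:

docker service ls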

You are now running a Portainer.io instance on port 9000, great! Open your web browser, navigate to the web interface, set a password, and log in with the password you just set.

(Screenshot: the Docker swarm in Portainer.io)

Boom! It works and shows all the worker machines inside the web panel. What you build and/or deploy from here is up to you. I hope this article was informative for you; enjoy the freedom of containers!

Ubuntu Cheat Sheet

 •  Filed under ubuntu, cheat sheet

In need of a handy cheat sheet filled with commands you can use in your Linux terminal? This page has got you covered, with lots of commands for lots of different use cases. Missing a command? Feel free to send me your ideas, questions and suggested commands in the comments.


Install a full LAMP stack with just one command
sudo apt install lamp-server^  

Notes: this stack contains PHP5 on Ubuntu 14.04 or lower, and PHP7 on Ubuntu 16.04 or higher. Not tested with non-LTS releases.


Install phpMyAdmin
sudo apt install phpmyadmin  


Install and configure a basic SMB/CIFS shared folder
sudo apt install samba  
sudo smbpasswd -a bart

sudo echo "[bart]" >> /etc/samba/smb.conf  
sudo echo "path = /home/bart" >> /etc/samba/smb.conf  
sudo echo "valid users = bart" >> /etc/samba/smb.conf  
sudo echo "read only = no" >> /etc/samba/smb.conf

sudo service smbd restart  


Install security updates only
sudo unattended-upgrades -d

Note: on Ubuntu the default unattended-upgrades configuration only applies updates from the security repository; -d adds verbose debug output.


Upgrade to the next LTS release
sudo do-release-upgrade  


Clean up the package cache and remove no-longer-needed packages
sudo apt autoremove && sudo apt clean && sudo apt autoclean  


List kernel version and Ubuntu release version
uname -a && lsb_release -a  


Edit and update GRUB settings
sudo nano /etc/default/grub
sudo update-grub


Get unique IPs accessing your site

For Apache:

cat /var/log/apache2/access.log | awk '{print $1}' | sort -u  

For NGINX:

cat /var/log/nginx/access.log | awk '{print $1}' | sort -u  


Use Ubuntu as a NAT router
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf  
sysctl -p  
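Note that forwarding alone does not translate addresses; you also need a masquerade rule. A minimal sketch, assuming eth0 is the internet-facing interface:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE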


Get bash history
cat ~/.bash_history  


Get SSH login attempts that exceed set limits
cat /var/log/auth.log | grep "attempts"  


Get successful local and remote login attempts
cat /var/log/auth.log | grep "systemd-logind"  


Check for open, listening ports
netstat -tulpn  
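On newer releases where netstat is no longer installed by default, ss gives the same view:

ss -tulpn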


Get RAM usage information
free -m  


Get processes currently running as root
ps aux | grep "^root"

Note: the ^ anchor matches root in the USER column only, instead of any line that merely contains the word root.


Stop and start interface
ifconfig eth0 down  
ifconfig eth0 up  

Note: replace eth0 with your desired network interface.


Manually set static IP on interface
ifconfig eth0 192.168.1.20 netmask 255.255.255.0 broadcast 192.168.1.255  
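Note: ifconfig is deprecated on modern releases; the iproute2 equivalents of the two entries above are:

ip link set eth0 down
ip link set eth0 up
ip addr add 192.168.1.20/24 broadcast 192.168.1.255 dev eth0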


Check for running systemd services
systemctl list-units --state=running --type=service  


Check for configuration errors
journalctl -xe


Get cron jobs

Hourly cron jobs:

ls /etc/cron.hourly  

Daily cron jobs:

ls /etc/cron.daily  

Weekly cron jobs:

ls /etc/cron.weekly  

Monthly cron jobs:

ls /etc/cron.monthly  

Other cron jobs:

ls /etc/cron.d  
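The current user's own cron jobs:

crontab -l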


Hardware/software info

List PCI devices:

lspci  

List block devices:

lsblk  

List USB devices:

lsusb  

List CPU information:

lscpu  

List general hardware info:

lshw  

List loaded kernel modules:

lsmod  


File/directory creation, parsing and IO

Create new file

touch new.file  

Get contents of file

cat new.file  

Overwrite a file

echo "Hello, dear Linux user!" > new.file  

Append to a file

echo "Hello, dear Linux user!" >> new.file  

Get lines containing a substring

cat new.file | grep "user"  


Permissions

Set ownership of a folder, recursively:

chown -R www-data:www-data /var/www/html  

Note: this sets the ownership of the /var/www/html folder to the default web server user and group.

Sync folders and files on Linux with rsync and inotify

 •  Filed under folders, files, linux, sync, rsync, inotify

So: you've got two or more clients and/or servers containing files that you want to be synced automatically, because that would save a lot of time. Well, I've got a solution that might put you on the right path. Rsync is a great tool, but having to run it manually takes up a lot of unnecessary time, right? That is where inotify comes in: real-time monitoring of your filesystem, so that your files can be synced between multiple machines with the power of rsync!

(Screenshot: the self-made sync daemon in a working state)

Setting up a test scenario

First I needed a nice development environment, so I spun up three virtual servers, all running Ubuntu 16.04 (my personal favourite). All three machines needed to be set up with the following software packages:

  • OpenSSH server
  • Rsync
  • inotify-tools

Also noteworthy: these machines are not connected through a private network, so all rsync traffic goes over SSH.

Who is this for?

I can see potential for workflow improvement in these situations:

  • A development environment, where constant manual file transfers take up a lot of time
  • Load balanced file storage clusters/servers
  • Backup/failover servers with the need for constant replication

Working on it

First things first: we need to get all the dependencies installed on all three servers with this one-line command:

apt update && apt -y install openssh-server rsync inotify-tools  

After that, let's create the specific folder that we want to sync; let's call it syncfiles:

mkdir /opt/syncfiles  

For secure file transfer, we want public key authentication on the SSH connection that rsync uses. This is how to configure it:

ssh-keygen -t rsa -f ~/rsync-key -N ''

# Paste the output in your destination servers' ~/.ssh/authorized_keys file:
cat ~/rsync-key.pub

# Removing public key for security purposes..
rm ~/rsync-key.pub

# Remember to run these commands on each server separately! Then copy each server's public key output into the authorized_keys file on all of the other servers.
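
Before setting up the sync script, you can quickly confirm that key-based login works; rsync-host02 here stands for the other server, as in the script below:

ssh -i ~/rsync-key root@rsync-host02 echo 'key login works'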

Now that all the pre-configuration work is done, it's time to write a script that runs inotifywait in an infinite loop:

#!/bin/bash

# Meant to run on rsync-host01; change rsync-host02 to rsync-host01 to get the version that runs on rsync-host02.

while true; do
  # Block until something changes inside /opt/syncfiles...
  inotifywait -r -e modify,attrib,close_write,move,create,delete /opt/syncfiles
  # ...then push the changes to the other host over SSH.
  rsync -avz -e "ssh -i /root/rsync-key -o StrictHostKeyChecking=no" /opt/syncfiles/ root@rsync-host02:/opt/syncfiles/
done

I saved this script in the /opt directory as file-sync.sh.

To finish things off, let's create a systemd service file that can stop, start, and restart the script on demand or on specific events like a system boot.

Create a file called sync.service in the directory /etc/systemd/system/ and put the following contents in it:

[Unit]
Description = SyncService  
After = network.target

[Service]
PIDFile = /run/syncservice/syncservice.pid  
User = root  
Group = root  
WorkingDirectory = /opt  
ExecStartPre = /bin/mkdir -p /run/syncservice
ExecStartPre = /bin/chown -R root:root /run/syncservice  
ExecStart = /bin/bash /opt/file-sync.sh  
ExecReload = /bin/kill -s HUP $MAINPID  
ExecStop = /bin/kill -s TERM $MAINPID  
ExecStopPost = /bin/rm -rf /run/syncservice  
PrivateTmp = true

[Install]
WantedBy = multi-user.target  

Chmod this service file and reload the systemd daemon:

chmod 755 /etc/systemd/system/sync.service  
systemctl daemon-reload  

You are all set! You can now use these commands to manage your self-made directory sync daemon:

# Start your service
systemctl start sync.service

# Obtain your service's status
systemctl status sync.service

# Stop your service
systemctl stop sync.service

# Restart your service
systemctl restart sync.service
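
Finally, to have the service start automatically at boot (this is what the [Install] section enables):

systemctl enable sync.service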