Bart Simons

Interactive website tracking with NodeJS - Proof of Concept

 •  Filed under node.js, node, interactive, website, tracking

Interactive website tracking is the future! Why aggregate static data when you can make your data dynamic? This mindset motivated me to build a small, interesting proof of concept to show off the capabilities of dynamic tracking.

The development stack

I decided to write my small app in JavaScript, for both the server and the client. Data transfer had to be modern and bandwidth-efficient, so I chose WebSockets over legacy AJAX polling.

I used this bootstrap form as an example:

    <form>
        <div class="form-group">
            <label for="exampleInputEmail1">Email address</label>
            <input type="email" class="form-control" id="exampleInputEmail1" placeholder="Email">
        </div>
        <div class="form-group">
            <label for="exampleInputPassword1">Password</label>
            <input type="password" class="form-control" id="exampleInputPassword1" placeholder="Password">
        </div>
        <div class="form-group">
            <label for="exampleInputFile">File input</label>
            <input type="file" id="exampleInputFile">
            <p class="help-block">Example block-level help text here.</p>
        </div>
        <div class="checkbox">
            <label><input type="checkbox"> Check me out</label>
        </div>
        <button type="submit" class="btn btn-default">Submit</button>
    </form>

And the page has this client-side code:

    new Fingerprint2().get(function(result, components) {
        // Use the browser fingerprint as an anonymous session identifier
        var sessionidentifier = result

        const socket = new WebSocket("ws://formstream:8080")

        var interval = 750

        var eventarray = []

        // Flush queued events to the server once per interval
        setInterval(function() {
            if (eventarray.length != 0) {
                socket.send(JSON.stringify(eventarray))
                eventarray = []
            }
        }, interval)

        $('form').on("click textInput", function(e) {
            var eventroot = {}
            eventroot['session'] = sessionidentifier
            eventroot['event'] = e.type
            eventroot['id'] = e.target.id
            eventroot['value'] = e.target.value
            eventroot['timestamp'] = Math.floor(Date.now() / 1000)

            eventarray.push(eventroot)
        })
    })

As shown in the code sample above, I used fingerprintjs2 to create a unique browser identifier. This identifier cannot be used as a personal identifier; rather, it is a unique anonymous identifier that matches your 'specific browser'. After a fingerprint has been generated, the script runs a continuous loop defined as an interval, whose length can be set to your desired value in a variable.

A UNIX epoch timestamp is generated for every event that takes place; linked to it is every bit of event information that matters:

  • The event itself
  • The DOM object ID
  • The object's value

All this information is stored inside a JSON object. Once an event takes place, the JSON object gets pushed to the eventarray array.
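To illustrate, a queued batch containing a single event might look like this (all values here are hypothetical):

```json
[
    {
        "session": "b2c55d3e7f0a",
        "event": "textInput",
        "id": "exampleInputEmail1",
        "value": "bart@example.com",
        "timestamp": 1500000000
    }
]
```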

Once the script reaches the end of an interval, the JSON objects stored in eventarray are stringified and pushed to the websocket server, but only if the array contains at least one object; this prevents the script from sending empty data on every interval.

The server application
var ws = require('nodejs-websocket')

var server = ws.createServer(function (conn) {
    console.log("NEW CONNECTION!")
    conn.on("text", function (str) {
        // Print the stringified event batch received from the client
        console.log(str)
    })
    conn.on("close", function (code, reason) {
        console.log("CONNECTION CLOSED.")
    })
}).listen(8080)

The server application is quite simple: it runs on NodeJS and only requires the nodejs-websocket dependency. No big deal, as it only functions as a console application that prints the data received from the client.

Server-side console output

Remember that this is just a proof of concept that only captures form events. In reality you could capture any possible event and store it in a server-side database. I hope this article was informative enough for you; stay tuned for more 👍🏽

Bootstrap Tables to JSON, and vice versa.

 •  Filed under html, bootstrap, tables, json, jquery, javascript

This is just a quick post and/or announcement about a JavaScript library I have been working on:

The main goal of this library is to make the link between a Bootstrap/HTML5 table and JSON a lot easier. Do you have an API available that returns data in JSON format? No problem! Bootstrap-jsontables takes care of that. Take a look at the examples on my GitHub repo if you need a quick visual glance of what this library could do for your project.

More features coming soon!

I have big plans to improve the search functionality of bootstrap-jsontables. In its current form it is just too simple, so it will be improved in the future.

Converting a table to JSON data

Say that we have this Bootstrap table:

<table id="appUsers" class="table">
  <thead>
    <tr><th>ID</th><th>Name</th></tr>
  </thead>
  <tbody>
    <tr><td>1</td><td>Bart</td></tr>
    <tr><td>2</td><td>Nick</td></tr>
  </tbody>
</table>

Use this JavaScript/jQuery code to convert the table to JSON:

// Create a new JSONTable object on the #appUsers table.
var jsonTable = new JSONTable($("#appUsers"))

// Convert the table to JSON
var tableData = jsonTable.toJSON()

// The resulting tableData array:
[{"ID":"1","Name":"Bart"}, {"ID":"2","Name":"Nick"}]

Keeping a log of all progress over here would be a bad choice, so please check out my GitHub for any future updates, changes and improvements:

I hope that this library helps you with your project. Your feature requests and/or contributions are welcome; please let me and the community know in the form of a GitHub issue :)

WireGuard Getting Started Guide/Tutorial

 •  Filed under getting started, guide, tutorial, WireGuard

In today's networking world, virtual private networks are indispensable. With IT needs growing exponentially in the current era, it is essential to make the right choice about which VPN software you are going to use. IPSec tunnels are commonly deployed and proven to deliver good performance while remaining stable, but are there any other alternatives?

Yes there are. Here are some VPN solutions I have deployed in the past:

  • OpenVPN, both in peer-to-peer and remote access configurations
  • PPTP (with pptpd on Linux)
  • SoftEther (Has its own VPN protocol over an SSL connection)

Recently, on a long journey through Google, I came across WireGuard. They claim to run their VPN networking code in kernel space for optimal performance, which sounds promising. I decided to dig deeper into WireGuard so I could write a guide/tutorial on the getting started and configuration process.

My test environment

My test environment consists of two Linux servers in the cloud, directly connected to each other over a private network:

  • server-01:
  • server-02:

For benchmarking networking speeds I used iperf, and this is the traffic speed test result I got over this private network:

iperf raw network speeds

Installing WireGuard

This step is pretty straightforward: just copy and paste these commands into your terminal:

add-apt-repository -y ppa:wireguard/wireguard  
apt update  
apt install -y wireguard-dkms wireguard-tools  

If you don't use Ubuntu on your servers, check out this page on the WireGuard website to find out how to install it on your Linux distribution of preference.

Initialisation of WireGuard's virtual interfaces

Configuring a simple peer-to-peer tunnel on WireGuard is not that complicated.

First of all, let's create the wg0 interface on both servers - this will be the virtual interface for your virtual private network between both servers:

ip link add dev wg0 type wireguard  

Your virtual network also needs an IP address on each node so that the machines can communicate with each other over IP:

# For server-01 (example tunnel addresses, adjust to your own network):
ip address add 10.0.0.1/24 dev wg0

# For server-02:
ip address add 10.0.0.2/24 dev wg0  

Generating a configuration for each node

WireGuard is a key-based VPN solution: communication between nodes relies on a private key and a public key for each node. You can generate these keys on each node with the following commands:

# For server-01:
wg genkey | tee privatekey01 | wg pubkey > publickey01

# For server-02
wg genkey | tee privatekey02 | wg pubkey > publickey02  

Create a configuration file named wireguard.conf and store it somewhere safe, with the right Linux permissions (chown/chmod) applied to it. Here's what you need to put in this configuration file:

# On server-01:

[Interface]
ListenPort = 4820  
PrivateKey = privatekey01's content goes here

[Peer]
PublicKey = publickey02's content goes here  
Endpoint = ip:port of server-02's endpoint  
AllowedIPs = server-02's tunnel address, e.g. 10.0.0.2/32  

# On server-02:

[Interface]
ListenPort = 4820  
PrivateKey = privatekey02's content goes here

[Peer]
PublicKey = publickey01's content goes here  
Endpoint = ip:port of server-01's endpoint  
AllowedIPs = server-01's tunnel address, e.g. 10.0.0.1/32  

Link the configuration to the interface on all nodes:

wg setconf wg0 wireguard.conf  

Bring the interface up on all nodes:

ip link set up dev wg0  
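As an aside: the wg-quick helper that ships with wireguard-tools can bundle the interface creation, addressing and key configuration into one step. Assuming hypothetical tunnel addresses in 10.0.0.0/24 (adjust everything to your own setup), an equivalent /etc/wireguard/wg0.conf for server-01 would look roughly like this:

```
[Interface]
Address = 10.0.0.1/24
ListenPort = 4820
PrivateKey = privatekey01's content goes here

[Peer]
PublicKey = publickey02's content goes here
Endpoint = ip:port of server-02's endpoint
AllowedIPs = 10.0.0.2/32
```

With that file in place, `wg-quick up wg0` brings the tunnel up and `wg-quick down wg0` tears it down.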

You are now connected. You can test connectivity by sending ICMP echo packets (ping) across the tunnel:

WireGuard ICMP connectivity test

Benchmarking performance

Run this command on the first node (server-01 in my case):

iperf -s  

Run this command on the second node (server-02 in my case), targeting the first node's tunnel address:

iperf -c <address of server-01>  

These are the results I got over the tunnel:

Pretty good results for just a dual-core server. I'm sure that there are possibilities/tweaks to make WireGuard perform even better, we'll see...

Using Portainer to build and manage your Docker swarm

 •  Filed under portainer, build, manage, docker, swarm

Are you planning on deploying a Docker swarm anytime soon? Think about how you want to manage your Docker compute hosts. Do you want to go the CLI way of doing things, or would you prefer a GUI-based frontend to manage your Docker swarm? Personally I'm a fan of doing everything over a CLI, but hey: GUI-managed solutions can be good if they are done right.

So, let's get straight into what Portainer can do for you and your (future) Docker environment: most of the things that you would usually do through the Docker CLI are available in a sleek-looking graphical interface with Portainer.

Setting up a basic Docker swarm with two compute nodes

Imagine having three servers for your Docker setup:

  • A management server to manage your Docker swarm, which also runs Portainer
  • Two compute servers which both function as a member of the Docker swarm

These are the IP addresses of the three machines:

  • Docker Manager:
  • Docker Compute 01:
  • Docker Compute 02:

On the 'Docker Manager' machine, you execute the following command to initialise a new Docker swarm:

docker swarm init --advertise-addr <manager IP>  

This command also returns what you need to know to make compute nodes join your Docker swarm:

Docker Swarm Init

Once all your member servers have joined your swarm, you can install Portainer on your Docker management server with this command:

docker service create --name portainerio -p 9000:9000 --constraint 'node.role == manager' --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock portainer/portainer -H unix:///var/run/docker.sock  

You are now running a Portainer instance on port 9000, great! Open your web browser, navigate to the web interface, set an admin password, and log in with it.

Docker swarm in Portainer

Boom! It works and shows you all the worker machines inside the web panel. Now it's all up to you what you are going to build and/or deploy. I hope this article was informative for you, enjoy the freedom of containers!

Ubuntu Cheat Sheet

 •  Filed under ubuntu, cheat sheet

In need of a handy cheat sheet filled with commands you can use in your Linux terminal? This page has got you covered with commands for lots of different use cases. Missing a command? Feel free to send me your ideas, questions and suggested commands in the comments.

Install a full LAMP stack with just one command
sudo apt install lamp-server^  

Notes: this stack contains PHP5 on Ubuntu 14.04 or lower, and PHP7 on Ubuntu 16.04 or higher. Not tested with non-LTS releases.

Install PhpMyAdmin
sudo apt install phpmyadmin  

Install and configure basic SMB/CIFS shared folder
sudo apt install samba  
sudo smbpasswd -a bart

echo "[bart]" | sudo tee -a /etc/samba/smb.conf  
echo "path = /home/bart" | sudo tee -a /etc/samba/smb.conf  
echo "valid users = bart" | sudo tee -a /etc/samba/smb.conf  
echo "read only = no" | sudo tee -a /etc/samba/smb.conf

sudo service smbd restart  
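The repeated echo lines can also be written as a single append using a here-document. The sketch below writes to a scratch path so you can try it safely; for real use, point tee (with sudo) at /etc/samba/smb.conf instead:

```shell
# Append a share definition in one command; scratch file used for demonstration
conf=/tmp/smb.conf.demo

tee -a "$conf" > /dev/null <<'EOF'
[bart]
path = /home/bart
valid users = bart
read only = no
EOF

# Show what was appended
cat "$conf"
```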

Install security updates only
sudo unattended-upgrades -d  

Upgrade to the next LTS release
sudo do-release-upgrade  

Cleanup the package database and no longer needed packages
sudo apt autoremove && sudo apt clean && sudo apt autoclean  

List kernel version and Ubuntu release version
uname -a && lsb_release -a  

Edit and update grub settings
sudo nano /etc/default/grub  

Get unique IPs accessing your site

For Apache:

cat /var/log/apache2/access.log | awk '{print $1}' | sort -u  


For Nginx:

cat /var/log/nginx/access.log | awk '{print $1}' | sort -u  
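Building on the same awk pipeline, you can also count requests per IP, busiest first. The sketch below generates a tiny sample log (hypothetical entries) so it can run anywhere; point awk at your real access.log in practice:

```shell
# Create a small sample access log to demonstrate with
cat > /tmp/sample_access.log <<'EOF'
10.0.0.1 - - [01/Jan/2020:00:00:00 +0000] "GET / HTTP/1.1" 200 123
10.0.0.2 - - [01/Jan/2020:00:00:01 +0000] "GET /about HTTP/1.1" 200 456
10.0.0.1 - - [01/Jan/2020:00:00:02 +0000] "GET /blog HTTP/1.1" 200 789
EOF

# Count requests per unique IP, most active first
awk '{print $1}' /tmp/sample_access.log | sort | uniq -c | sort -rn
```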

Use Ubuntu as a NAT router
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf  
sudo sysctl -p  
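Enabling IP forwarding alone only routes packets; to actually translate addresses for clients behind the router you also need a masquerade rule in the firewall. A sketch, assuming eth0 is the outbound (WAN) interface:

```
# Masquerade traffic leaving via eth0 (run as root; adjust the interface name)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```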

Get bash history
cat ~/.bash_history  

Get SSH login attempts that exceed set limits
cat /var/log/auth.log | grep "attempts"  

Get successful local and remote login attempts
cat /var/log/auth.log | grep "systemd-logind"  

Check for open, listening ports
netstat -tulpn  

Get RAM usage information
free -m  

Get processes currently running as root
ps aux | grep "root"  
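Note that the grep above also matches its own process line and any command containing the string "root". If pgrep (from procps) is available, a cleaner sketch is:

```shell
# List root-owned processes by PID and name, without grep matching itself
pgrep -l -u root || echo "no processes owned by root"
```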

Stop and start interface
ifconfig eth0 down  
ifconfig eth0 up  

Note: replace eth0 with your desired network interface.

Manually set static IP on interface
ifconfig eth0 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255  

Note: the address, netmask and broadcast values above are examples; substitute your own.

Check for running systemd services
systemctl list-units --state=running --type=service  

Check for configuration errors
journalctl -xe  

Get cron jobs

Hourly cron jobs:

ls /etc/cron.hourly  

Daily cron jobs:

ls /etc/cron.daily  

Weekly cron jobs:

ls /etc/cron.weekly  

Monthly cron jobs:

ls /etc/cron.monthly  

Other cron jobs:

ls /etc/cron.d  

Hardware/software info

List PCI devices:

lspci  

List block devices:

lsblk  

List USB devices:

lsusb  

List CPU info:

lscpu  

List general hardware info:

sudo lshw -short  

List loaded kernel modules:

lsmod  

File/directory creation, parsing and IO

Create new file

touch new.file  

Get contents of file

cat new.file  

Overwrite a file

echo "Hello, dear Linux user!" > new.file  

Append to a file

echo "Hello, dear Linux user!" >> new.file  

Get lines containing a substring

cat new.file | grep "user"  


Set ownership of a folder, recursively:

chown -R www-data:www-data /var/www/html  

Note: this sets the ownership of the /var/www/html folder to the default web server user and group.
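Related: when fixing up a web root, directories and files usually want different modes (755 for directories so they are traversable, 644 for plain files). A sketch using find, demonstrated on a scratch directory (hypothetical path) rather than /var/www/html:

```shell
# Build a scratch tree to demonstrate on
mkdir -p /tmp/demo_www/sub
touch /tmp/demo_www/index.html /tmp/demo_www/sub/page.html

# Directories need the execute bit to be traversable; plain files do not
find /tmp/demo_www -type d -exec chmod 755 {} +
find /tmp/demo_www -type f -exec chmod 644 {} +

# Verify the resulting modes
stat -c '%a %n' /tmp/demo_www /tmp/demo_www/index.html
```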