Bart Simons

tutorial

A 6 post collection


WireGuard.io Getting Started Guide/Tutorial

 •  Filed under getting started, guide, tutorial, WireGuard, WireGuard.io

In today's networking world, virtual private networks are indispensable. With IT needs growing rapidly, it is essential to make the right choice about which VPN software you are going to use. IPSec tunnels are commonly deployed and proven to deliver good performance while remaining stable, but are there any other alternatives?

Yes, there are. Here are some VPN solutions I have deployed in the past:

  • OpenVPN, both in peer-to-peer and remote access configurations
  • PPTP (with pptpd on Linux)
  • SoftEther (Has its own VPN protocol over an SSL connection)

Recently - on a long journey through Google - I came across WireGuard. It claims to run its VPN networking code in kernel space for optimal performance, which sounds promising. I decided to dig deeper into WireGuard so I could write a guide/tutorial on the getting-started and configuration process.

My test environment

My test environment consists of two Linux servers in the cloud, directly connected to each other over a private network:

  • server-01: 10.129.29.151
  • server-02: 10.129.30.154

For benchmarking network speeds I used iperf, and this is the traffic speed test result I got over this private network:

iperf raw network speeds

Installing WireGuard

This step is pretty straightforward: just copy and paste these commands into your terminal:

add-apt-repository -y ppa:wireguard/wireguard  
apt update  
apt install -y wireguard-dkms wireguard-tools  

If you don't use Ubuntu on your servers, check out this page on the WireGuard website to find out how to install it on your Linux distribution of preference.

Initialisation of WireGuard's virtual interfaces

Configuring a simple peer-to-peer tunnel on WireGuard is not that complicated.

First of all, let's create the wg0 interface on both servers - this will be the virtual interface for your virtual private network between both servers:

ip link add dev wg0 type wireguard  

Your virtual network also needs an IP address for each node so that the machines can communicate with each other over IP:

# For server-01:
ip address add dev wg0 192.168.2.1/24

# For server-02:
ip address add dev wg0 192.168.2.2/24  


Generating a configuration for each node

WireGuard uses key-based authentication for communication between nodes: each node has a private key and a corresponding public key. You can generate these keys on each node with the following commands:

# For server-01:
wg genkey | tee privatekey01 | wg pubkey > publickey01

# For server-02
wg genkey | tee privatekey02 | wg pubkey > publickey02  

Create a configuration file named wireguard.conf and store it somewhere safe, with the right Linux permissions applied (chown/chmod). Here's what you need to put in this configuration file:

# On server-01:

[Interface]
ListenPort = 4820  
PrivateKey = privatekey01's content goes here

[Peer]
Endpoint = ip:port of endpoint (10.129.30.154:4820)  
PublicKey = publickey02's content goes here  
AllowedIPs = 0.0.0.0/0  

# On server-02:

[Interface]
ListenPort = 4820  
PrivateKey = privatekey02's content goes here

[Peer]
Endpoint = ip:port of endpoint (10.129.29.151:4820)  
PublicKey = publickey01's content goes here  
AllowedIPs = 0.0.0.0/0  

Link the configuration to the interface on all nodes:

wg setconf wg0 wireguard.conf  

Bring the interface up on all nodes:

ip link set up dev wg0  
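As an aside: wireguard-tools also ships with wg-quick, which can create the interface, assign the address, and apply the peer configuration in one go from a single file. A sketch of the equivalent server-01 config, assuming the same keys and addresses as above (I narrowed AllowedIPs to the tunnel subnet here, which I'd recommend for a simple peer-to-peer setup so it doesn't try to route all traffic):

```
# /etc/wireguard/wg0.conf on server-01, used with: wg-quick up wg0
[Interface]
Address = 192.168.2.1/24
ListenPort = 4820
PrivateKey = privatekey01's content goes here

[Peer]
Endpoint = 10.129.30.154:4820
PublicKey = publickey02's content goes here
AllowedIPs = 192.168.2.0/24
```

Tearing the tunnel down again is then a matter of `wg-quick down wg0`.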

You are now connected. You can test connectivity by sending ICMP echo packets:

WireGuard ICMP connectivity test


Benchmarking performance

Run this command on the first node (server-01 in my case):

iperf -s  

Run this command on the second node (server-02 in my case):

iperf -c 192.168.2.1  

These are the results I got over the tunnel:

Pretty good results for just a dual-core server. I'm sure there are tweaks to make WireGuard perform even better; we'll see...

Scraping websites with LXML

 •  Filed under example, tutorial, scraping, websites, lxml

The internet is such a big place, and it is still growing, together with the ever-growing volume of data traffic. Often, only part of a page is what we actually need: links, paragraphs, and keywords are three examples of data we care about. LXML is a great library that makes parsing HTML documents from within Python pretty easy, so I decided to write a code example for those who are interested.

Scraping the Reddit front page as an example

Reddit's front page is easily parsable. In fact, it has a straightforward CSS structure that actually makes sense:

Each link to a post is contained inside a div tag with the thing class. Chromium - the internet browser in the screenshot above - actually supports searching by XPath from the developer console. Very neat; cheers to the developers who made this possible!

The same thing could be done programmatically, by using Python and LXML. Here's an example that should work:

#!/usr/bin/env python3

import lxml.html

from pycurl import Curl  
from io import BytesIO

userAgent = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0'  
redditMainPage = 'https://www.reddit.com/new/'

def fetchUri(uriToFetch):  
    buffer = BytesIO()
    c = Curl()
    c.setopt(c.URL, uriToFetch)
    c.setopt(c.WRITEDATA, buffer)
    c.setopt(c.USERAGENT, userAgent)
    c.perform()
    c.close()
    return buffer.getvalue().decode('iso-8859-1')

requestResult = fetchUri(redditMainPage)  
requestLxmlDocument = lxml.html.document_fromstring(requestResult)  
requestLxmlRoot = requestLxmlDocument.xpath("//div[contains(@class, 'thing')]//div[contains(@class, 'entry')]//p[contains(@class, 'title')]//a[contains(@class, 'title')]")

for rootObject in requestLxmlRoot:  
    print(str(rootObject.text_content())+"\n")

This code iterates over each Reddit post found on the page and prints its title, followed by a newline character. This snippet should work fine on both Python 2 and 3, with PycURL and LXML installed. Good luck experimenting with LXML!

Ghost.org API - Getting Started

 •  Filed under ghost.org, api, tutorial, basics, ghost, getting started

The Ghost blogging software platform makes great use of top-of-the-line technologies, and with that comes a great public API that you can use to integrate Ghost with your current software. It only takes about half an hour to learn the basics, and it could be a great new way for you to publish your content!

The demo project

To showcase the possibilities of Ghost, we need a little demo project to get started with examples. This project will cover the following API-related tasks:

  • Logging into Ghost from the command line;
  • Publishing a new page with automatically generated data.

Logging into Ghost from the command line

When you send an API request to a Ghost blog, it expects a specially crafted access token. An access token is used for authentication on the API side and can be generated by sending your username and password in an HTTP POST request to this location:

/api/v0.1/authentication/token

It expects the following body data:

grant_type=password&username=dummyusername&password=dummypassword&client_id=ghost-admin&client_secret=dummyclientsecret
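If the credentials check out, the endpoint responds with a JSON document along these lines (the field names are taken from the wrapper scripts later in this post; the values here are invented):

```
{
  "access_token":  "fEwkJhfew7j...",
  "expires_in":    3600,
  "refresh_token": "hN3bMahf...",
  "token_type":    "Bearer"
}
```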

The client secret of your Ghost blog can be extracted from the login page of your website. You can search through the DOM of the login page or use curl together with grep to fetch it for you:

Fetch Ghost Client Data
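To make the grep/cut approach concrete, here is a self-contained sketch run against a fabricated fragment of the login page (the meta tag names match what the wrapper scripts in this post look for; both values are made up):

```shell
#!/bin/bash
# Extract the client id/secret from sample Ghost login-page HTML.
# Splitting each matching line on double quotes, field 4 is the
# value of the content attribute.
html='<meta name="env-clientId" content="ghost-admin">
<meta name="env-clientSecret" content="c011e5711e55">'
clientdata=($(echo "$html" | grep '<meta name="env-client' | cut -d '"' -f4))
echo "client_id=${clientdata[0]}"       # client_id=ghost-admin
echo "client_secret=${clientdata[1]}"   # client_secret=c011e5711e55
```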

I also made a PowerShell wrapper to automate the API login process:

<#  
    .SYNOPSIS
        Logs into a Ghost blog.
    .DESCRIPTION
        Ghost blog login from PowerShell, for demonstration purposes.
    .LINK
        https://bartsimons.me
#>

Function Invoke-GhostLogin {  
    [CmdletBinding()]
        [OutputType(
            [PSCustomObject]
        )]

    Param (
        [Parameter(
            Mandatory = $true
        )]
        [String]
        $AdminURL,

        [Parameter(
            Mandatory = $true
        )]
        [String]
        $User,

        [Parameter(
            Mandatory = $true
        )]
        [String]
        $Password
    )

    $LoginPageContent = (Invoke-WebRequest $AdminURL).AllElements

    $ClientID =     ($LoginPageContent | where { $_.name -eq "env-clientId" } | Select-Object content).content
    $ClientSecret = ($LoginPageContent | where { $_.name -eq "env-clientSecret" } | Select-Object content).content

    If ($AdminURL[$AdminURL.length-1] -ne "/") {
        $AdminURL+="/"
    }

    $LoginAPIResponse = (Invoke-WebRequest ($AdminURL+"api/v0.1/authentication/token") -Method Post -Body "grant_type=password&username=$User&password=$Password&client_id=$ClientID&client_secret=$ClientSecret").Content | ConvertFrom-Json

    return New-Object PSObject -Property @{
        access_token  = $LoginAPIResponse.access_token
        expires_in    = $LoginAPIResponse.expires_in
        refresh_token = $LoginAPIResponse.refresh_token
        token_type    = $LoginAPIResponse.token_type
    }
}

And just for fun, I also made a wrapper for Bash if you need an automation solution for Linux and macOS:

#!/bin/bash

# Ghost Blog Login Script
# Usage:   ./ghostlogin.sh url username password
# Example: ./ghostlogin.sh https://ghost-example.com/ghost/ admin 123456

if [[ $# -eq 3 ]]; then

    url="$1"
    length_url_end=${#1}
    length_url_start=length_url_end-1
    if [[ ${1:length_url_start:length_url_end} != "/" ]]; then
        url="$url/"
    fi

    clientdata=($(curl -s $url | grep '<meta name="env-client' | cut -d '"' -f4))

    eval "curl -s ${url}api/v0.1/authentication/token --data 'grant_type=password&username=$2&password=$3&client_id=${clientdata[0]}&client_secret=${clientdata[1]}'"

fi  
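A small aside on the trailing-slash handling in these scripts: the substring arithmetic works, but bash parameter expansion does the same thing in one line. A minimal sketch:

```shell
#!/bin/bash
# Ensure a URL ends with exactly one trailing slash:
# ${url%/} strips one trailing slash if present, then we append one.
url="https://ghost-example.com/ghost"
url="${url%/}/"
echo "$url"   # https://ghost-example.com/ghost/

url="https://ghost-example.com/ghost/"
url="${url%/}/"
echo "$url"   # still https://ghost-example.com/ghost/
```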

Ghost Login Bash Script

And that wraps up the authentication part of this guide. All you need is the access_token to authenticate the API requests that require authentication. Now, the next step is to actually post something on your website using the API. Oddly enough, I was not able to find any information or examples about the POST and PUT request methods in the official Ghost documentation. I found out how to use them by inspecting the network traffic going to my testing/demo server. This is how it works:

  • You make a POST request with the necessary details/content, the draft gets created;
  • The response data contains the body data that's needed for the next request;
  • You make a PUT request with the body data from the previous response, modifying the fields you need.

Here's how to initially create a post via PowerShell:

<#  
    .SYNOPSIS
        Creates a new post on a Ghost blog.
    .DESCRIPTION
        Create a new post on a Ghost blog, for demonstration purposes.
    .LINK
        https://bartsimons.me
#>

Function New-GhostPost {  
    [CmdletBinding()]
        [OutputType(
            [String]
        )]

    Param (
        [Parameter (
            Mandatory = $true
        )]
        [String]
        $AccessToken,

        [Parameter (
            Mandatory = $true
        )]
        [String]
        $Title,

        [Parameter (
            Mandatory = $true
        )]
        [String]
        $AdminURL,

        [Parameter (
            Mandatory = $true
        )]
        [String]
        $Slug,

        [Parameter (
            Mandatory = $true
        )]
        [String]
        $Content,

        [Parameter (
            Mandatory = $true
        )]
        [Boolean]
        $Public
    )

    If ($AdminURL[$AdminURL.length-1] -ne "/") {
        $AdminURL+="/"
    }

    $NewPostDataTemplate = '{"posts":[{"title":"<%TITLE%>","slug":"<%SLUG%>","markdown":"<%CONTENT%>","image":null,"featured":false,"page":false,"status":"draft","language":"en_US","meta_title":null,"meta_description":null,"author":"1","publishedBy":null,"tags":[]}]}' -Replace "<%TITLE%>","$Title" -Replace "<%SLUG%>","$Slug" -Replace "<%CONTENT%>","$Content"
    $PageCreationResponse = (Invoke-WebRequest ($AdminURL+"api/v0.1/posts/?include=tags") -Method Post -Headers @{"Authorization"="Bearer $AccessToken";"Content-Type"="application/json; charset=UTF-8"} -Body $NewPostDataTemplate).Content

    If ($Public -eq $true) {
        $PostID = ($PageCreationResponse | ConvertFrom-Json).posts.id
        $PageCreationResponse = (Invoke-WebRequest ($AdminURL+"api/v0.1/posts/$PostID/?include=tags") -Method Put  -Headers @{"Authorization"="Bearer $AccessToken";"Content-Type"="application/json; charset=UTF-8"} -Body ($PageCreationResponse -Replace "draft","published")).Content
    }

    return $PageCreationResponse
}

You can do the same thing in Bash if, like me, you feel more at home with UNIX tools. The code on this one might be a bit messy, but it works. And it's not PowerShell. 😉

#!/bin/bash

# Ghost Blog Post Script
# Usage:   ./ghostmakepost.sh url accesstoken title slug content public
# Example: ./ghostmakepost.sh https://ghost-example.com/ghost/ "fEwkJhfew7j....fe31" "Just testing :)" "just-testing" "######It works!" true 

if [[ $# -eq 6 ]]; then  
    url="$1"
    length_url_end=${#1}
    length_url_start=length_url_end-1
    if [[ ${1:length_url_start:length_url_end} != "/" ]]; then
        url="$url/"
    fi

    accesstoken="$2"
    title="$3"
    slug="$4"
    content="$5"
    public="$6"

    conceptdata='{"posts":[{"title":"<%TITLE%>","slug":"<%SLUG%>","markdown":"<%CONTENT%>","image":null,"featured":false,"page":false,"status":"draft","language":"en_US","meta_title":null,"meta_description":null,"author":"1","publishedBy":null,"tags":[]}]}'
    conceptdata="${conceptdata/<%TITLE%>/$title}"
    conceptdata="${conceptdata/<%SLUG%>/$slug}"
    conceptdata="${conceptdata/<%CONTENT%>/$content}"
    conceptpostcommand="curl -s ${url}api/v0.1/posts/?include=tags -X POST -H 'Authorization: Bearer $accesstoken' -H 'Content-Type: application/json; charset=UTF-8' --data-binary '$conceptdata'"

    conceptpostresult=$(eval $conceptpostcommand)

    conceptpostid=($(echo $conceptpostresult | cut -d '"' -f5))

    length_conceptpostid=${#conceptpostid}
    length_conceptpostid_start=1
    length_conceptpostid_end=length_conceptpostid-2

    conceptpostid=${conceptpostid:length_conceptpostid_start:length_conceptpostid_end}

    if [[ $public == "true" ]]; then
        conceptpostresult="${conceptpostresult/\"status\":\"draft\"/\"status\":\"published\"}"
        publicpostcommand="curl -s ${url}api/v0.1/posts/$conceptpostid/?include=tags -X PUT -H 'Authorization: Bearer $accesstoken' -H 'Content-Type: application/json; charset=UTF-8' --data-binary '$conceptpostresult'"
        publicpostresult=$(eval $publicpostcommand)
        echo $publicpostresult
    else
        echo $conceptpostresult
    fi
fi  
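The draft-to-published switch in the script is plain bash string substitution; isolated, it looks like this (the JSON here is a trimmed-down stand-in for the real API response):

```shell
#!/bin/bash
# Flip the post status in the response body before sending the PUT request.
conceptpostresult='{"posts":[{"title":"Test","status":"draft"}]}'
conceptpostresult="${conceptpostresult/\"status\":\"draft\"/\"status\":\"published\"}"
echo "$conceptpostresult"   # {"posts":[{"title":"Test","status":"published"}]}
```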

This is my take on working with the Ghost public API from a command-line/scripting perspective. I hope you learnt something from my write-up on Ghost and its beautiful backend. If you haven't tried Ghost yet, go give it a try: it's open source, free, and built on Node.js 🙂

Raspberry Pi Kiosk Tutorial

 •  Filed under raspberry pi, kiosk, tutorial, chromium

The Raspberry Pi has proven itself to be a great computing device while keeping costs as low as possible. For less than $40 you get a great little device with a 64-bit ARM processor. This made me think: would it be a good idea to turn the Raspberry Pi 3 into a kiosk?

The software that you'll need

A list is hardly needed, because you only require three things: Raspbian Jessie Lite as the OS, the X11 display stack, and the Chromium web browser installed on the Pi.

Why Chromium

Chromium is the open source variant of the popular Google Chrome browser. It is a great fit for this project because of its --kiosk flag, which creates a full-screen browser window inside your empty X11 session.

Installing the X11 display stack

You can install the X11 display stack on your Raspberry Pi with the following command:

sudo apt install xserver-xorg xinit  

This installs the X11 server on your Pi together with the X11 server initialisation program. You also have to run this command:

sudo dpkg-reconfigure x11-common  

To select who's authorized to start the X11 server. Make sure it is set like this:

x11-common package reconfiguration

Installing Chromium

Just run sudo apt install chromium-browser to install Chromium on your Pi. It's literally that simple.

Writing the .xinitrc script

When you execute the xinit command on your Pi, you will see a terminal window instead of Chromium. To fix this, create a file in your home directory called ~/.xinitrc. This file contains everything that gets run when the X11 server starts up. So, edit this file with nano (or any other text editor) and make sure it looks like this:

.xinitrc file with Chromium
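For reference, my ~/.xinitrc boiled down to a single Chromium invocation; the exact flags and URL in the screenshot may differ, so treat this as an illustrative sketch (the URL is a placeholder):

```
#!/bin/sh
# Launched by xinit: start Chromium fullscreen in kiosk mode
exec chromium-browser --kiosk http://example.com
```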

Save it, and try running Chromium by invoking the xinit command on your Pi.

But... my window is not fullscreen!

It could be that your Pi shows black borders around the screen, which has to do with overscan. Open your /boot/config.txt file (with sudo) and add disable_overscan=1 like this:

Disable Overscan

Reboot your Pi and the problem should magically disappear! (at least, that's what happened to me...)

Starting X11 at boot

Raspbian comes with a built-in tool to configure your Pi: raspi-config. Run this command on your Pi and you will be greeted with a command-line dialog window. Go to option 3 and press enter:

raspi-config main menu option 3

And then go for option 2:

raspi-config boot options option 2

Press enter, go to finish and when the program asks for a reboot choose yes.

Once you have rebooted your Pi, you'll have to do one more thing to finish this project: finally starting X11 at boot! My method might be more sophisticated than needed, but hey: it worked for me. Edit the ~/.bashrc file in your favourite text editor and add the following to the bottom of the file:

# Start X11 only when this shell is on a local console, not an SSH session:
if [ -z "$SSH_CLIENT" ] && [ -z "$SSH_TTY" ]; then  
    xinit -- -nocursor
fi  

Save the file and reboot your Pi. Enjoy your beautiful self-made Raspberry Pi kiosk!

Load a webpage from a URL specified in a file

A Raspberry Pi with the default partition layout contains two partitions: the OS partition and the BOOT partition, which is formatted as FAT32 and mounted automatically at boot. Create a new file in /boot/ called website.txt and put your desired URL in it. Then, to make it work, simply replace the URL in your ~/.xinitrc file with $(cat /boot/website.txt). The nice thing is that you can view and modify the contents of this file from Windows, since the BOOT partition is formatted as FAT.
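With that in place, the Chromium line in ~/.xinitrc becomes something like this (again illustrative; the kiosk flag and the command substitution are the parts that matter):

```
exec chromium-browser --kiosk "$(cat /boot/website.txt)"
```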

NGINX RTMP Streaming Server Installation Guide

 •  Filed under tutorial, nginx, rtmp, streaming, server, installation, guide

Personally, I find NGINX the best choice as a web server, and this website runs on NGINX too. Today I found out about the NGINX RTMP module, originally developed by Roman Arutyunyan. This module extends NGINX with RTMP capabilities so you can use NGINX as a media streaming server! You can build it by compiling NGINX with the module, which can be done manually or by using this bash script:

#!/bin/bash

echo  
echo   NGINX RTMP SERVER INSTALLER V2  
echo   COPYLEFT BARTSIMONS.ME, 2016  
echo 

## CHECKING FOR ROOT ACCESS...
user=$(whoami)  
if [[ $user != "root" ]]; then  
    echo "You are not root. Please run this script as superuser!"
    exit
fi

## GLOBAL VARIABLES AND PACKAGE CACHE UPDATE
nginx_url="http://nginx.org/download/nginx-1.11.4.tar.gz"  
nginx_tar="nginx-1.11.4.tar.gz"  
nginx_fld="nginx-1.11.4"  
rtmp_url="https://github.com/arut/nginx-rtmp-module.git"

echo "Updating package cache..."  
apt -qqq update

## CONTINUE WHEN USER IS ROOT & INSTALL WGET IF NOT INSTALLED...
long_out_wget_check=$(dpkg-query --list | grep wget)  
short_out_wget_check=${long_out_wget_check:0:2}  
wget_installed=0

if [[ $short_out_wget_check == "ii" ]]; then  
    wget_installed=1
else  
    echo "Installing wget..."
    apt install -qqq -y wget
fi

## INSTALL BUILD-ESSENTIAL IF NOT INSTALLED...
long_out_be_check=$(dpkg-query --list | grep build-essential)  
short_out_be_check=${long_out_be_check:0:2}  
be_installed=0

if [[ $short_out_be_check == "ii" ]]; then  
    be_installed=1
else  
    echo "Installing build-essential..."
    apt install -qqq -y build-essential
fi

## INSTALL LIBPCRE3 DEV HEADERS IF NOT INSTALLED...
long_out_pcre_dev_check=$(dpkg-query --list | grep libpcre3-dev)  
short_out_pcre_dev_check=${long_out_pcre_dev_check:0:2}  
pcre_dev_installed=0

if [[ $short_out_pcre_dev_check == "ii" ]]; then  
    pcre_dev_installed=1
else  
    echo "Installing libpcre3 development headers..."
    apt install -qqq -y libpcre3-dev
fi

## INSTALL LIBPCRE IF NOT INSTALLED...
long_out_pcre_check=$(dpkg-query --list | grep libpcre3 | grep -v dev)  
short_out_pcre_check=${long_out_pcre_check:0:2}  
pcre_installed=0

if [[ $short_out_pcre_check == "ii" ]]; then  
    pcre_installed=1
else  
    echo "Installing libpcre3..."
    apt install -qqq -y libpcre3
fi

## INSTALL GIT IF NOT INSTALLED
long_out_git_check=$(dpkg-query --list | grep "git ")  
short_out_git_check=${long_out_git_check:0:2}  
git_installed=0

if [[ $short_out_git_check == "ii" ]]; then  
    git_installed=1
else  
    echo "Installing git..."
    apt install -qqq -y git
fi

## INSTALL LIBSSL DEV HEADERS IF NOT INSTALLED...
long_out_libssl_dev_check=$(dpkg-query --list | grep libssl-dev)  
short_out_libssl_dev_check=${long_out_libssl_dev_check:0:2}  
libssl_dev_installed=0

if [[ $short_out_libssl_dev_check == "ii" ]]; then  
    libssl_dev_installed=1
else  
    echo "Installing libssl-dev..."
    apt install -qqq -y libssl-dev
fi

## DOWNLOAD AND UNTAR NGINX SOURCE CODE
echo "Downloading nginx source code..."  
wget --quiet $nginx_url  
echo "Unpacking nginx source code..."  
tar -xzf $nginx_tar

## CLONE NGINX-RTMP-MODULE
echo "Cloning nginx RTMP module git repository..."  
git clone $rtmp_url

## CONFIGURE, COMPILE AND INSTALL!
cd $nginx_fld  
./configure --add-module=../nginx-rtmp-module
make  
make install

## CLEANUP TIME!
echo "Cleaning up left over folders & files..."  
rm -rf $nginx_fld  
rm -rf $nginx_tar  
rm -rf nginx-rtmp-module

if [[ $git_installed == 0 ]]; then  
    echo "git was not installed earlier. Uninstalling git"
    apt remove --purge -qqq git
fi

if [[ $pcre_dev_installed == 0 ]]; then  
    echo "libpcre3-dev was not installed earlier. Uninstalling libpcre3-dev..."
    apt remove --purge -qqq libpcre3-dev
fi

if [[ $be_installed == 0 ]]; then  
    echo "build-essential was not installed earlier. Uninstalling build-essential..."
    apt remove --purge -qqq build-essential
fi

if [[ $wget_installed == 0 ]]; then  
    echo "wget was not installed earlier. Uninstalling wget..."
    apt remove --purge -qqq wget
fi

if [[ $pcre_installed == 0 ]]; then  
    echo "libpcre3 was not installed earlier. Uninstalling libpcre..."
    apt remove --purge -qqq libpcre3
fi

if [[ $libssl_dev_installed == 0 ]]; then  
    echo "libssl-dev was not installed earlier. Uninstalling libssl-dev..."
    apt remove --purge -qqq libssl-dev
fi

echo " "  
echo "NGINX and the RTMP server module have been installed on your system!"  

Please note that this script was built for Debian-based operating systems. Compiling manually on other systems is not that difficult at all: you just need to include the module with a flag for the configure script.

Configuring NGINX

Once you've got the modified NGINX version installed on your server, it's time to edit the NGINX configuration so that NGINX will serve RTMP traffic.

The default configuration file location for NGINX is /usr/local/nginx/conf/nginx.conf
Add the following configuration to the end of this file:

rtmp {  
    server {
        listen 1935;
        chunk_size 8192;

        application stream {
            live on;
            record off;

            allow publish 127.0.0.1;
            deny publish all;
            allow play all;
        }
    }
}
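With this configuration, `allow publish 127.0.0.1` means streams can only be pushed from the server itself, while anyone may play them back. As an illustrative sketch (the stream key `test` is made up), the publishing and playback endpoints look like this:

```
# Publish from the server itself, e.g. with ffmpeg:
#   ffmpeg -re -i input.mp4 -c copy -f flv rtmp://127.0.0.1/stream/test
# Play from any client, e.g. VLC or ffplay:
#   rtmp://your-server-ip/stream/test
```

The application name in the URL (`stream`) has to match the application block in the configuration above.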

Now you are ready to go. You can start NGINX on your server:

/usr/local/nginx/sbin/nginx


You can stop nginx like this:

/usr/local/nginx/sbin/nginx -s stop


Thanks for reading and have fun streaming!