Automatically replacing/transforming input parameters in cucumber-js

Most implementations of Cucumber provide a mechanism for transforming literal text in the feature file into values or objects that your step definition code can use. This is known as step definition transforms or step argument transforms. Here’s how it works in cucumber-js.

Assume we have this scenario:

Scenario: Test
    When I print 'Welcome {myname}'
    And I print 'Today is {todays_date}'

And we have this step definition:

defineStep("I print {mystring}", async function (this: OurWorld, x: string) {
    // use x here
});

Notice the use of {mystring} in the Cucumber expression.

We can use defineParameterType() to automatically replace all placeholders:

defineParameterType({
    name: "mystring",
    regexp: /'([^']*)'/,
    transformer: function (s) {
        return s
            .replace('{todays_date}', new Date().toDateString())
            .replace('{myname}', 'Gerben')
    },
    useForSnippets: false
})

You can even use this for objects, like so:

defineParameterType({
    name: 'color',
    regexp: /red|blue|yellow/,
    transformer: s => new Color(s)
})

defineStep("I fill the canvas with the color {color}", async function (this: OurWorld, x: Color) {
    // x is an object of type Color
});

And then write in the feature file:

When I fill the canvas with the color red

How to dump the state of all variables in JMeter

To see the state of the variables and properties at a specific point in the test, you add a Debug Sampler. This sampler dumps the information as response data into whatever result listeners are configured.

If you need the information in your own code to make decisions, then you can use the following snippet of JSR223 code in a sampler or post-processor:

import java.util.Map;

// Log all JMeter variables, sorted by name
for (Map.Entry entry : vars.entrySet().sort { a, b -> a.key <=> b.key }) {
    log.info(entry.getKey() + "  :  " + entry.getValue().toString());
}

// Log all JMeter properties, sorted by name
for (Map.Entry entry : props.entrySet().sort { a, b -> a.key <=> b.key }) {
    log.info(entry.getKey() + "  :  " + entry.getValue().toString());
}

Migrating from Visual Studio load tests to JMeter

Microsoft recently announced:

Our cloud-based load testing service will continue to run through March 31st, 2020. Visual Studio 2019 will be the last version of Visual Studio with the web performance and load test capability. Visual Studio 2019 is also the last release for Test Controller and Test Agent

The time has come to find other technologies for load testing. JMeter is one of the alternatives, and in this article I show how the various concepts in Visual Studio map to it.

Visual Studio concept JMeter equivalent
Web requests Samplers -> HTTP Request
Headers of web requests Config -> HTTP Header Manager
Validation rules Assertions
Extraction rules Post Processors
Conditions / Decisions / Loops Logic Controllers -> If, Loop and While controllers
Transactions Logic Controllers -> Transaction Controller
Web Test Test Fragment
Call to Web Test Logic Controllers -> Module Controller
Context parameters User Defined Variables along with the syntax ${myvariable} wherever the value of the variable is needed
Data sources Config Element -> CSV Data Set Config
Virtual users, Load patterns and duration See the settings of the Thread Groups
Credentials Config Element -> HTTP Authorization Manager
Web Test Plugins Although it’s possible to write Java plugins, it’s probably easiest to add a JSR223 Sampler with a snippet of Groovy code inside a Test Fragment or Thread Group
Request plugins Same here, except use a JSR223 Pre- or Post Processor

Free SSL for machines in your private network

If you are running servers in your private network that need SSL, you can use LetsEncrypt and Certbot to automatically obtain and renew certificates for free. Even if your machines are not accessible from the internet.

What you need:

  • A static IP address in your internal network for the server
  • A domain you own. In this post I assume the server is accessed using a hostname under that domain
  • A nameserver for the domain that Certbot supports. Even if you purchased your domain at an unsupported provider, it’s usually free to switch to a supported nameserver. In this post I am using Cloudflare

How to set it all up:

  1. Log in to your Cloudflare account and create an A record for ‘myserver’ with the server’s internal IP address.
  2. Get a global API key from Cloudflare and remember it.
  3. Login to the private server.
  4. Create /root/.secrets/cloudflare.ini and put the following content into it:

    dns_cloudflare_email = "<mail address of your Cloudflare account>"
    dns_cloudflare_api_key = "<the api key you remembered earlier>"

  5. Ensure only root can read the directory and file

    sudo chmod 0700 /root/.secrets/
    sudo chmod 0400 /root/.secrets/cloudflare.ini

  6. Install Certbot and the plugins it needs to talk to Cloudflare. For my environment this boiled down to:

    sudo apt-get install certbot -t stretch-backports
    sudo apt-get install python3-certbot-dns-cloudflare -t stretch-backports

  7. Tell Certbot to obtain a free certificate for the hostname:

    sudo /usr/bin/certbot certonly \
        --dns-cloudflare \
        --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
        -d \
        --preferred-challenges dns-01

  8. Voila! You now have a certificate stored in /etc/letsencrypt/live/

Dealing with renewals:

  1. Certificates from LetsEncrypt have a short expiry time, so we need to renew them before they expire. We don’t want to have to think about doing this; we want it to be automatic. A simple crontab entry solves that.

    14 5    * * *   root    /usr/bin/certbot renew --quiet > /dev/null 2>&1

Doing something with the SSL Certificate:

  1. After Certbot has obtained or renewed a certificate it executes scripts located in /etc/letsencrypt/renewal-hooks/post/
    In my case I am running Ubiquiti’s UniFi controller software and use this script to deal with the renewal:

    # Backup previous keystore
    cp /var/lib/unifi/keystore /var/lib/unifi/keystore.backup.$(date +%F_%R)
    # Convert to PKCS12 format
    openssl pkcs12 -export \
        -inkey /etc/letsencrypt/live/${DOMAIN}/privkey.pem \
        -in /etc/letsencrypt/live/${DOMAIN}/fullchain.pem \
        -out /etc/letsencrypt/live/${DOMAIN}/fullchain.p12 \
        -name unifi \
        -password pass:unifi
    # Install certificate
    keytool -importkeystore \
        -deststorepass aircontrolenterprise \
        -destkeypass aircontrolenterprise \
        -destkeystore /var/lib/unifi/keystore \
        -srckeystore /etc/letsencrypt/live/${DOMAIN}/fullchain.p12 \
        -srcstoretype PKCS12 \
        -srcstorepass unifi \
        -alias unifi
    # Restart UniFi controller
    service unifi restart

A real world example of digital signature checking

In this post we will see exactly how to check that an SSL certificate hasn’t been tampered with.

We will use Google’s certificate as an example, and we’re manually going to check that its digital signature is valid. Other important steps, such as traversing the entire chain, are beyond the scope of this simple example. Certificates don’t remain valid forever, so today you will get different ones. For the sake of reproducibility, I’ve included the ones I used later on in this post.

When I browsed to Google, it returned 2 certificates to my browser:

  1. Its own certificate
  2. The certificate of the intermediate CA that signed Google’s certificate

We’re going to use the following approach to check the signature on Google’s certificate:

  1. Retrieve the digital signature included in Google’s certificate.

  2. Retrieve the intermediate CA’s public-key from the CA’s certificate.

  3. Decrypt the digital signature in Google’s certificate using the public-key from the intermediate CA. Now we have the hash value that the intermediate CA calculated at the time when it signed Google’s certificate.

  4. Calculate the hash value of Google’s certificate ourself

  5. Compare the two hash values. If they are the same, then Google’s certificate has not changed since it was signed and therefore we consider it to be valid
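The five steps above can be sketched in Python. This uses a deliberately tiny toy RSA key pair (not a real CA key) so the numbers stay readable; real certificates use 2048-bit keys, as shown later in this post.

```python
# Toy RSA key pair: n = 61 * 53 = 3233, public exponent e, private exponent d.
# Real CAs use 2048-bit keys; the math is identical.
n, e, d = 3233, 17, 2753

# The CA "signs" a hash value with its private exponent d...
hash_value = 123
signature = pow(hash_value, d, n)

# ...and anyone can "decrypt" the signature with the public key (n, e)
recovered_hash = pow(signature, e, n)

# If the recovered hash equals the hash we compute ourselves, the data is intact
print(recovered_hash == hash_value)  # True
```

This is exactly the comparison we perform manually on Google's certificate in the rest of this post.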

Retrieve the signature from Google’s certificate

Google’s certificate is listed further on in this post. It’s in the PEM format, which is just a base64-encoded representation of an X.509 certificate. I decoded it back into ‘plain old’ bytes, which gave me the ASN.1 DER-encoded version of the certificate. Using an ASN.1 viewer I can see that the entire X.509 file has the following structure.

SEQUENCE(3 elem)
    SEQUENCE(8 elem) <-- Google's part of the certificate. It contains 8 things, which I'm not showing here
    SEQUENCE(2 elem) <-- 2 elements that say which algorithm the intermediate CA used to sign Google's part of the certificate. It's SHA-1 with RSA encryption
    BIT STRING(2048 bit) <-- Intermediate CA's signature

So the last 2048 bits (256 bytes) contain the signature of the certificate. Below is the hex representation of those bytes:


By the way: if you're doing these steps too and using an ASN.1 viewer, you might have noticed that I skipped the first byte of the contents. That's because it's a BIT STRING, and the following quote from the ITU-T X.690 specification implies that the content starts with a byte that's not really part of the content:

The initial octet shall encode, as an unsigned binary integer with bit 1 as the least significant bit, the number of unused bits in the final subsequent octet. The number shall be in the range zero to seven.
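A quick way to see this rule in action is the following sketch; the four signature bytes are just the first bytes of the signature shown later in this post.

```python
# DER BIT STRING contents: the first octet encodes the number of
# unused bits in the final octet (per ITU-T X.690)
content = bytes.fromhex("00" + "348B7D64")  # 0x00 -> no unused bits, then signature bytes

unused_bits = content[0]   # 0 for an RSA signature (always a whole number of bytes)
signature = content[1:]    # the real payload starts at the second octet

print(unused_bits, signature.hex().upper())  # 0 348B7D64
```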

Retrieve the intermediate CA's public-key from the CA's certificate

The CA's public-key is stored somewhere in the middle of its certificate (not Google's certificate). Here I used the same trick of using an ASN.1 viewer to figure out which part of the ASN.1 contained the key.

The modulus is:


There is something odd about this modulus. I know that it's a 2048-bit / 256-byte key; however, I have 257 bytes. You might think that we're running into that BIT STRING thing again here, but that's not the case, as the ASN.1 tag specifies that the modulus element is an INTEGER. What's really going on is that the RSA modulus is a 2048-bit unsigned number, and it's serialized with an extra leading byte to indicate that it's non-negative.

The exponent is:

01 00 01
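In Python you can check that the leading zero byte changes nothing about the value, and that 01 00 01 is the familiar public exponent 65537. This sketch uses the first bytes of the modulus shown later in this post.

```python
# The leading 0x00 only marks the INTEGER as non-negative; it doesn't change the value
with_pad = int.from_bytes(bytes.fromhex("009C2A04"), byteorder="big")
without_pad = int.from_bytes(bytes.fromhex("9C2A04"), byteorder="big")
print(with_pad == without_pad)  # True

# The exponent bytes 01 00 01 decode to the common RSA public exponent
exponent = int.from_bytes(bytes.fromhex("010001"), byteorder="big")
print(exponent)  # 65537
```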

Decrypt the signature from Google’s certificate

We know the intermediate CA's public key and we know the bytes that contain the signature of the certificate. So now we can do an RSA decryption on those bytes and voila, we will have the hash that the intermediate CA calculated during the signing process.

I used the following snippet of Python to do this, but most languages should be able to do it:

#Decrypt the signature from the certificate using the intermediate CA's public RSA key
modulo    = 0x009C2A04775CD850913A06A382E0D85048BC893FF119701A88467EE08FC5F189CE21EE5AFE610DB7324489A0740B534F55A4CE826295EEEB595FC6E1058012C45E943FBC5B4838F453F724E6FB91E915C4CFF4530DF44AFC9F54DE7DBEA06B6F87C0D0501F28300340DA0873516C7FFF3A3CA737068EBD4B1104EB7D24DEE6F9FC3171FB94D560F32E4AAF42D2CBEAC46A1AB2CC53DD154B8B1FC819611FCD9DA83E632B8435696584C819C54622F85395BEE3804A10C62AECBA972011C739991004A0F0617A95258C4E5275E2B6ED08CA14FCCE226AB34ECF46039797037EC0B1DE7BAF4533CFBA3E71B7DEF42525C20D35899D9DFB0E1179891E37C5AF8E7269
exponent  = 0x010001
signature = 0x348B7D645A64085B1FF6D86DF35480F9D913EADB09210B7E7402B7779F730077C7C7926A7A953DCD814C35E30608C02586A220795F965AF0E97F3CE5C32E7234FD6259782E447BFF73F6319797CA8DB1EB8D0A58119FB0794EF83ACCD8E45895C91FDCA97BB82FB425811E8A4CF0D41594618A5663BF774AC9CE2DBB9798E6E5BB6C5CCEC68B80D93E8C6748394B3822DE437C4FB93BCF302723ACD4D9ECAC75FFA4993D559C12C2E17228AC917942B1666D9948C6C42FAD1B0EB8F78AB0B38A5B392F85E7BDBFE97FD7534269CBB8FE22B03EF305514668DCE491683B1DD6852DBEE9C21E9C9E955B41E7078ACB722B2555CECBDEAD60AEC4FDC1C9A9686BE8
IntermediateCAsHash = pow(signature, exponent, modulo)
# The decrypted block is as wide as the 2048-bit (256-byte) modulus
bytesOfHash = IntermediateCAsHash.to_bytes(256, byteorder='big', signed=False)
print ( "%s" % ''.join(format(x, '02X') for x in bytesOfHash ))

Running this code, gave me the following output ( I manually added line breaks, so remove them if you ever copy/paste this somewhere):


The 0000...1FFF...FF00 part is an RSA Encryption Block Type 1 from the PKCS#1 standard and isn't really part of the data that the intermediate CA wanted to encrypt. We can ignore it and focus on the 3021300906052B0E03021A05000414F8F3D8AACF7E27B2F66A2231C3240682A15ADFF6 part. This part is an ASN.1 DER encoded data-structure defined in RFC2313 as:

DigestInfo ::= SEQUENCE {
     digestAlgorithm DigestAlgorithmIdentifier,
     digest Digest }

DigestAlgorithmIdentifier ::= AlgorithmIdentifier

The AlgorithmIdentifier is defined in RFC 5280 as

AlgorithmIdentifier  ::=  SEQUENCE  {
    algorithm               OBJECT IDENTIFIER,
    parameters              ANY DEFINED BY algorithm OPTIONAL  }

So this means we should get:

SEQUENCE(2 elements)
    SEQUENCE(2 elements)
        OBJECT IDENTIFIER (the hash algorithm)
        NULL (see RFC2313)
    OCTET STRING (the digest)

And indeed when we use the ASN.1 decoder we get the following output:

SEQUENCE(2 elem)
    SEQUENCE(2 elem)
        OBJECT IDENTIFIER 1.3.14.3.2.26 (SHA-1)
        NULL
    OCTET STRING(20 byte) F8F3D8AACF7E27B2F66A2231C3240682A15ADFF6
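You don't need a full ASN.1 library to pull the digest out; a few lines of Python over the DigestInfo hex quoted above do it (a sketch):

```python
digest_info = bytes.fromhex(
    "3021300906052B0E03021A05000414"              # headers: SEQUENCEs, SHA-1 OID, NULL, OCTET STRING
    "F8F3D8AACF7E27B2F66A2231C3240682A15ADFF6")   # the 20-byte digest itself

# 30 21            : SEQUENCE, 0x21 = 33 content bytes
# 30 09            : inner SEQUENCE (the AlgorithmIdentifier)
# 06 05 2B0E03021A : OBJECT IDENTIFIER 1.3.14.3.2.26 (SHA-1)
# 05 00            : NULL (the algorithm parameters)
# 04 14            : OCTET STRING, 0x14 = 20 content bytes
assert digest_info[:2] == bytes.fromhex("3021")
assert digest_info[-22:-20] == bytes.fromhex("0414")

digest = digest_info[-20:]
print(digest.hex().upper())  # F8F3D8AACF7E27B2F66A2231C3240682A15ADFF6
```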

So, now we know that the hash value calculated by the intermediate CA is F8F3D8AACF7E27B2F66A2231C3240682A15ADFF6
Calculate the hash value of Google's certificate ourself

Now we are going to repeat the same hash calculation that the intermediate CA did a long time ago. We will:

  1. Extract the bytes that represent Google's part of the certificate. This may NOT include any of the bytes that hold the digital signature itself.
  2. Run a SHA1 hash calculation on it.

The following Python code does that, and when I run it, it prints:



So we conclude that Google's certificate has not been tampered with!

import base64
import hashlib

def showSha1HashOfCertificate(base64EncodedCert):

    # Before doing the base64 decoding, we need to remove the first and last lines
    certificateWithoutCommentLines = base64EncodedCert.replace("-----BEGIN CERTIFICATE-----", "").replace("-----END CERTIFICATE-----", "")
    bytesOfCertificate = base64.b64decode(certificateWithoutCommentLines)
    # The hash is calculated over the bytes that resulted from DER encoding the part that the X.509 specs
    # refer to as the 'tbsCertificate' field of the entire certificate.
    # Using the ASN.1 viewer I see that the tbsCertificate (the first member of the sequence) starts at offset 4 and its length is 4 + 1453 bytes
    bytesOftbsCertificatePart = bytesOfCertificate[4:1461]
    sha1Hasher = hashlib.sha1()
    sha1Hasher.update(bytesOftbsCertificatePart)
    ourHash = sha1Hasher.digest()
    print("%s" % ''.join(format(x, '02X') for x in ourHash))

googlesBashe64EncodedCert = """
... I removed a lot of the lines for brevity


The certificates

Below is the certificate for Google (it’s a big one!)


And here we have the certificate of the intermediate CA that signed the above certificate:


Zalenium a stable and scalable Selenium grid

I just want to give a well-deserved thumbs up to Zalando’s Zalenium. Their own description says it best:

Allows anyone to have a disposable, flexible, container based Selenium Grid infrastructure featuring video recording, live preview, basic auth & online/offline dashboards

Getting up and running really is only one docker pull and one docker run command away.

Accessing gpio pins inside a docker container on a raspberry pi

If your container needs access to the GPIO pins, then it must have access to the /dev/gpiomem device. From the command line you can do that like this:

$ docker run --device=/dev/gpiomem:/dev/gpiomem ...rest of command line...

Here’s how to do it with a docker-compose file:

version: "2"
services:
  myapp:                          # your service name here
    # ...image/build settings...
    devices:
      - /dev/gpiomem:/dev/gpiomem
    ports:
      # ...rest of the file...

Containerising the development environment

One of the nice things about docker is that we can use all kinds of software without cluttering up our local machine. I really like the ability to have the development environment running in a container. Here is an example where we:

  • Get a Node.js development environment with all required tools and packages
  • Allow remote debugging of the app in the container
  • See code changes immediately reflected inside the container

The dockerfile below gives us a container with all required tools and packages for a Node.js app. In this example we assume the ‘.’ directory contains the files needed to run the app.

FROM node:9

# Work in /code so the npm install below runs against the copied package.json
WORKDIR /code
RUN npm install -g nodemon

COPY package.json /code/package.json
RUN npm install && npm ls
RUN mv /code/node_modules /node_modules
COPY . /code

CMD ["npm", "start"]

That’s nice, but how does this provide remote debugging? And how do code changes propagate to a running container?

Two very normal aspects of docker achieve this. Firstly, docker-compose.yml overrules the CMD ["npm", "start"] statement to start nodemon with the --inspect= flag. That starts the app with the debugger listening on all of the machine’s IP addresses. We expose port 5858 to allow remote debuggers to connect to the app in the container.

Secondly, the compose file contains a volume mapping that overrules the /code folder in the container and points it to the directory on the local machine where you edit the code. Combined with the --watch flag nodemon sees any changes you make to the code and restarts the app in the container with the latest code changes.

Note: If you are running docker on Windows or the code is stored on some network share, then you must use the --legacy-watch flag instead of --watch

The docker-compose.yml file:

version: "2"
services:
  app:                            # service name assumed
    build: .
    command: nodemon --inspect= --watch
    volumes:
      - ./:/code
    ports:
      - "5858:5858"

Here’s a launch.json for Visual Studio Code to attach to the container.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach",
            "type": "node",
            "request": "attach",
            "port": 5858,
            "address": "localhost",
            "restart": true,
            "sourceMaps": false,
            "outDir": null,
            "localRoot": "${workspaceRoot}",
            "remoteRoot": "/code"
        }
    ]
}
Docker on Raspbian: cgroup not supported on this system

Are you running Docker on Raspbian and getting the error:

cgroups: memory cgroup not supported on this system

The best solution is to add cgroup_memory=1 to /boot/cmdline.txt and reboot. Note that all options in cmdline.txt must stay on its single line, and that sudo echo "..." >> /boot/cmdline.txt does not work, because the output redirection runs as your normal user rather than as root. Appending to the existing line with sed does work:

sudo sed -i '1s/$/ cgroup_memory=1/' /boot/cmdline.txt

Please note, for future releases of Raspbian you will need the following instead:

sudo sed -i '1s/$/ cgroup_enable=memory/' /boot/cmdline.txt

Alternatively, you can downgrade to an earlier docker version:

sudo apt-get install -y docker-ce=17.09.0~ce-0~raspbian --allow-downgrades