Tag Archives: docker

Zalenium, a stable and scalable Selenium grid

I just want to give a well-deserved thumbs up to Zalando’s Zalenium. Their own description says it best:

Allows anyone to have a disposable, flexible, container based Selenium Grid infrastructure featuring video recording, live preview, basic auth & online/offline dashboards

Getting up and running really is only one docker pull and one docker run command away.
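At the time of writing, those commands looked roughly like this (a sketch based on the Zalenium README — check there for the current incantation and image tags):

```shell
# Pull the browser image Zalenium spins up on demand, then Zalenium itself
docker pull elgalu/selenium
docker pull dosel/zalenium

# Zalenium needs the docker socket so it can start browser containers
docker run --rm -ti --name zalenium -p 4444:4444 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  dosel/zalenium start
```

The grid then answers on http://localhost:4444/wd/hub like any other Selenium hub.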

Accessing gpio pins inside a docker container on a raspberry pi

If your container needs access to the GPIO pins, then it must have access to the /dev/gpiomem device. From the command line you can do that like this:

$ docker run --device=/dev/gpiomem:/dev/gpiomem ...rest of commandline...

Here’s how to do it with a docker-compose file:

version: "2"

services:
  app:
    devices:
      - /dev/gpiomem:/dev/gpiomem
    ports:
      ...rest of the file...
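With the device mapped in, code in the container can drive the pins as usual. A minimal sketch, assuming the third-party rpio npm package (which talks to /dev/gpiomem directly) — this obviously only runs on real Pi hardware:

```javascript
// Assumes "npm install rpio" in the image; rpio maps /dev/gpiomem,
// which is exactly the device we passed through to the container.
var rpio = require('rpio');

rpio.open(12, rpio.OUTPUT, rpio.LOW); // physical pin 12 = GPIO18, start low
rpio.write(12, rpio.HIGH);            // drive the pin high
rpio.close(12);                       // release the pin when done
```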

Containerising the development environment

One of the nice things about docker is that we can use all kinds of software without cluttering up our local machine. I really like the ability to have the development environment running in a container. Here is an example where we:

  • Get a Node.js development environment with all required tools and packages
  • Allow remote debugging of the app in the container
  • See code changes immediately reflected inside the container

The Dockerfile below gives us a container with all the required tools and packages for a Node.js app. In this example we assume the ‘.’ directory contains the files needed to run the app.

FROM node:9

WORKDIR /code

# nodemon restarts the Node process whenever watched files change
RUN npm install -g nodemon

COPY package.json /code/package.json
RUN npm install && npm ls
# A volume mounted over /code would hide node_modules, so move it one level
# up; Node's module resolution walks up parent directories and still finds it.
RUN mv /code/node_modules /node_modules
COPY . /code

CMD ["npm", "start"]

That’s nice, but how does this provide remote debugging? And how do code changes propagate to a running container?

Two very normal aspects of docker achieve this. Firstly, docker-compose.yml overrules the CMD ["npm", "start"] statement to start nodemon with the --inspect=0.0.0.0:5858 flag. That starts the app with the debugger listening on all of the machine’s IP addresses. We expose port 5858 to allow remote debuggers to connect to the app in the container.

Secondly, the compose file contains a volume mapping that overrules the /code folder in the container and points it to the directory on the local machine where you edit the code. Combined with the --watch flag, nodemon sees any changes you make to the code and restarts the app in the container with the latest code changes.

Note: If you are running docker on Windows, or the code is stored on a network share, then you must use the --legacy-watch flag instead of --watch.
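For reference, the package.json this setup assumes could look something like the fragment below (server.js is a hypothetical entry point name, and express stands in for whatever dependencies your app actually uses):

```json
{
  "name": "app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}
```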

The docker-compose.yml file:

version: "2"

services:
  app:
    build: .
    command: nodemon --inspect=0.0.0.0:5858 --watch /code
    volumes:
      - ./:/code
    ports:
      - "5858:5858"

Here’s a launch.json for Visual Studio Code to attach to the container.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach",
            "type": "node",
            "request": "attach",
            "port": 5858,
            "address": "localhost",
            "restart": true,
            "sourceMaps": false,
            "outDir": null,
            "localRoot": "${workspaceRoot}",
            "remoteRoot": "/code"
        }
    ]
}

Docker on Raspbian: cgroup not supported on this system

Are you running Docker on Raspbian and getting the error:

cgroups: memory cgroup not supported on this system

The best solution is to add cgroup_memory=1 in /boot/cmdline.txt and reboot. Two pitfalls here: sudo echo "..." >> /boot/cmdline.txt does not work, because the redirection is performed by your own (unprivileged) shell rather than by sudo, and cmdline.txt must keep all kernel parameters on a single line, so appending a new line has no effect. Appending to the existing line with sed avoids both:

sudo sed -i '$ s/$/ cgroup_memory=1/' /boot/cmdline.txt

Please note, for future releases of Raspbian you will need the following instead:

sudo sed -i '$ s/$/ cgroup_enable=memory/' /boot/cmdline.txt

Alternatively, you can downgrade to an earlier docker version:

sudo apt-get install -y docker-ce=17.09.0~ce-0~raspbian --allow-downgrades