The last post in this series explored different ways of deploying .NET apps to a Raspberry Pi and touched on some of the pros and cons of using Docker for this. This post covers using GitHub Actions to build a Raspberry Pi Docker image and push it to Docker Hub when commits are pushed to the repository.

Getting Started

An easy way of starting an action is navigating to the Actions tab of a GitHub repo and choosing a workflow – it’ll even recommend ones based on existing repo contents. After that it’ll open the action YAML in an online editor and help set up a commit. Personally I chose to copy the YAML to VS Code and commit it myself. I also recommend this YAML extension, given the whitespace-sensitive pain the language brings. The actions are stored in the repo under /.github/workflows. To get an idea of the extensive capabilities of GitHub Actions, try awesome-actions. If you’re new to actions, taking a free GitHub Learning Lab course may be helpful.

Building and Revising Action Workflow

Initial, Direct GitHub Action

Before the first official Docker GitHub action was announced, I had started with the opspresso/action-docker action. Later my security paranoia about integrating with unknown third-party actions kicked in, so I switched to doing the steps more directly in the workflow, like below.

name: Cat Siren Push
on:
  push:
    paths:
    - 'siren/**'
    - '.github/workflows/push-siren.yml'
jobs:
  cat-siren:
    runs-on: ubuntu-latest
    steps:
    - name: Set Build Date
      run: echo "::set-env name=BUILD_DATE::$(date +'%Y-%m-%d %H:%M:%S')"
    - name: Set Build Number
      run: echo "::set-env name=BUILD_VER::1.0.$GITHUB_RUN_NUMBER"
    - name: Set Image Name
      run: echo "::set-env name=IMG::cat-siren"
    - name: Checkout
      uses: actions/checkout@v1
    - name: Docker login
      run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
    - name: Docker build
      working-directory: ./siren
      run: |
          docker build \
            --build-arg GIT_SHA=${{ github.sha }} \
            --build-arg GIT_REF=${{ github.ref }} \
            --build-arg BUILD_DATE="${{ env.BUILD_DATE }}" \
            --build-arg BUILD_VER=${{ env.BUILD_VER }} \
            -t ${{ env.IMG }} .
    - name: Docker tags
      run: |
          docker tag ${{ env.IMG }} ${{ secrets.DOCKER_USERNAME }}/${{ env.IMG }}:${{ env.BUILD_VER }}
          docker tag ${{ env.IMG }} ${{ secrets.DOCKER_USERNAME }}/${{ env.IMG }}:${{ github.sha }}
          docker tag ${{ env.IMG }} ${{ secrets.DOCKER_USERNAME }}/${{ env.IMG }}:latest
    - name: Docker push
      run: |
          docker push ${{ secrets.DOCKER_USERNAME }}/${{ env.IMG }}:${{ env.BUILD_VER }}
          docker push ${{ secrets.DOCKER_USERNAME }}/${{ env.IMG }}:${{ github.sha }}
          docker push ${{ secrets.DOCKER_USERNAME }}/${{ env.IMG }}:latest
    - name: Docker logout
      run: docker logout

The action workflow starts by indicating it should run on a push to the repo, though there are a number of other events that trigger workflows. My repo contains more than just the app being built here, so the workflow specifies a path filter to only run when something changes in the siren folder or when the workflow itself changes.
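
For example, workflows can also be triggered by pull requests or on a schedule; a generic illustration (not part of this project’s workflow):

on:
  pull_request:
    branches:
    - master
  schedule:
  # 06:00 UTC every Monday
  - cron: '0 6 * * 1'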

For some values like the version, build date, and image name, I started out by generating output in one step and then referencing that output in subsequent steps. I quickly switched to environment variables and set-env as that was less verbose and more flexible. Those are then referenced through the env context; for example, --build-arg BUILD_VER=${{ env.BUILD_VER }}. On a related note, the build version is generated with the help of one of the default environment variables GitHub provides, $GITHUB_RUN_NUMBER.
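
For comparison, the step-output approach looked roughly like the sketch below (the step id and output name here are arbitrary); every consumer has to spell out steps.<id>.outputs.<name>, which is part of what made it feel more verbose:

    - name: Set Build Number
      id: build_number
      run: echo "::set-output name=ver::1.0.$GITHUB_RUN_NUMBER"
    - name: Show Build Number
      run: echo "Building version ${{ steps.build_number.outputs.ver }}"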

After setting environment variables, the checkout action is used to pull the code. For the docker login step, the workflow’s secrets context is used to read Docker credentials from the GitHub repo’s /settings/secrets area. I was tempted to log in and push the Docker image to the GitHub Package Registry instead, but I didn’t have a compelling reason to target it over Docker Hub.

GitHub Actions version 2 added the ability to run multiple commands in one run step, which came in handy with the docker tag commands, for example.

Initial Dockerfile

I was able to slim down the Dockerfile later but the initial version that corresponded to the initial action follows.
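
As a rough illustration of the shape of that version (the project file name and the exact ARG/LABEL set are assumptions): a multi-stage build that publishes on the x64 SDK image and copies the output into the ARM32 ASP.NET Core runtime image.

# Build stage: restore and publish using the full SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY . .
RUN dotnet publish CatSiren.csproj -c Release -o /app/publish

# Runtime stage: ARM32 ASP.NET Core runtime for the Pi
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim-arm32v7
ARG GIT_SHA
ARG GIT_REF
ARG BUILD_DATE
ARG BUILD_VER
LABEL GIT_SHA="${GIT_SHA}"
LABEL GIT_REF="${GIT_REF}"
LABEL BUILD_DATE="${BUILD_DATE}"
LABEL BUILD_VER="${BUILD_VER}"
# Camera tooling lives in /opt/vc on the host and is bind mounted in at runtime
ENV PATH="${PATH}:/opt/vc/bin"
ENV LD_LIBRARY_PATH=/opt/vc/lib
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "CatSiren.dll"]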

Prior to deploying with Docker, -r linux-arm was used for a self-contained deployment; that was no longer needed with Docker and would only add to the size since the base aspnet layer contains the runtime.

Initially I started with the Unosquare camera module which required including /opt/vc/bin in PATH since it just shells to the raspistill command line tool. Later I switched to MMALSharp which needed /opt/vc/lib in LD_LIBRARY_PATH.

Official Docker Build and Push Action

Later, with the official Docker build and push action, the amount of YAML to maintain went down considerably without any loss of functionality.

name: Cat Siren Push
on:
  push:
    paths:
    - 'siren/**'
    - '.github/workflows/push-siren.yml'
jobs:
  cat-siren:
    runs-on: ubuntu-latest
    steps:
    - name: Set Build Number
      run: echo "::set-env name=BUILD_VER::1.0.$GITHUB_RUN_NUMBER"
    - name: Set Image Name
      run: echo "::set-env name=IMG::cat-siren"
    - name: Checkout
      uses: actions/checkout@v1
    - name: Docker build
      uses: docker/build-push-action@v1
      with:
        path: ./siren
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}
        repository: ${{ secrets.DOCKER_USERNAME }}/${{ env.IMG }}
        labels: BUILD_VER=${{ env.BUILD_VER }}
        add_git_labels: true
        tags: ${{ env.BUILD_VER }},latest
        tag_with_ref: false
        tag_with_sha: true
        push: true

This also meant I was able to drop 12 steps from the Dockerfile – the build-related ARG and LABEL instructions. Best I can tell from the GitHub Action logs, the Docker action adds those labels to a copy of my Dockerfile. Letting the action do the labeling had the added benefit that it follows the Open Container Initiative’s standard labels.
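
With add_git_labels enabled, the image picks up labels using the OCI annotation keys – likely something along these lines (placeholder values, not an exact capture):

org.opencontainers.image.created=<build date>
org.opencontainers.image.source=<repository URL>
org.opencontainers.image.revision=<git commit sha>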

Adding Debug Support

The last post touched on Docker debugging. To support that I added an additional cat-siren-debug job to the push workflow.

name: Cat Siren Push
on:
  push:
    paths:
    - 'siren/**'
    - '.github/workflows/push-siren.yml'
jobs:
  cat-siren:
    # Same steps as before, omitted for brevity...
  cat-siren-debug:
    runs-on: ubuntu-latest
    steps:
    - name: Set Build Number
      run: echo "::set-env name=BUILD_VER::1.0.$GITHUB_RUN_NUMBER"
    - name: Set Image Name
      run: echo "::set-env name=IMG::cat-siren"
    - name: Checkout
      uses: actions/checkout@v1
    - name: QEMU Prep
      working-directory: ./siren
      run: |
          pwd
          chmod u+x ./qemu.sh
          sudo ./qemu.sh
    - name: Docker build (Debug)
      uses: docker/build-push-action@v1
      with:
        path: ./siren
        dockerfile: ./siren/Dockerfile.debug
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}
        repository: ${{ secrets.DOCKER_USERNAME }}/${{ env.IMG }}
        build_args: QEMU=true
        labels: BUILD_VER=${{ env.BUILD_VER }}
        add_git_labels: true
        tags: latest-debug
        tag_with_ref: false
        tag_with_sha: false
        push: true

The steps are similar except for the debug-specific build inputs (Dockerfile, tags, build args) and a new QEMU Prep step. That was needed for building an ARM image on Ubuntu via the action – specifically for the Dockerfile step that installs the debugger. Without it the debug build job would fail with the error standard_init_linux.go:211: exec user process caused "exec format error". The qemu.sh script follows.

#!/bin/bash
# Install QEMU user-mode emulation and binfmt support on the runner
apt-get update && apt-get install -y --no-install-recommends qemu-user-static binfmt-support
update-binfmts --enable qemu-arm
update-binfmts --display qemu-arm
# Register qemu-arm-static as the interpreter for 32-bit ARM ELF binaries
mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register
# Copy the static QEMU binary next to the Dockerfile so it can be copied into the image
cp -v /usr/bin/qemu-arm-static .

QEMU emulation gave me grief and I don’t pretend to understand it as much as I’d like. I was half-tempted to just do the build on the Pi itself to avoid it, but that wouldn’t play well with CI/CD and GitHub Actions. While self-hosted runners for GitHub Actions are available, having the Pi always on and exposed to the interwebs isn’t secure or practical. Various resources on QEMU emulation were helpful to me while researching it.

Dockerfile.debug is also similar, except for the following (a rough sketch appears after this list):

  • The publish build is done in Debug configuration instead of Release.
  • The qemu-arm-static file copied by the QEMU Prep step to the Dockerfile location is then copied into the container at /usr/bin.
  • The .NET Core command line debugger is installed from https://aka.ms/getvsdbgsh.
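
A rough illustration of those differences, building on the earlier sketch – the project file name, vsdbg install location, and package list are assumptions, and the QEMU build arg the workflow passes is omitted here:

# Build stage: publish in Debug configuration so symbols are available
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY . .
RUN dotnet publish CatSiren.csproj -c Debug -o /app/publish

# Runtime stage: same ARM32 base as the release image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim-arm32v7
# qemu-arm-static was copied next to the Dockerfile by the QEMU Prep step;
# copying it into the image lets the RUN step below execute on the x64 runner
COPY qemu-arm-static /usr/bin/
# Install the .NET Core command line debugger (vsdbg)
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl unzip \
    && curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg \
    && rm -rf /var/lib/apt/lists/*
ENV PATH="${PATH}:/opt/vc/bin"
ENV LD_LIBRARY_PATH=/opt/vc/lib
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "CatSiren.dll"]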

Monitoring the Results

In my experience the action executed within seconds of the push to GitHub. One nice thing about the jobs within a workflow is that they execute in parallel, so both versions of the image can be built at the same time.

Afterwards it doesn’t hurt to verify the images made it to the registry.

Creating the Container on the Pi

A pull.sh helper script is copied to the SD card from sd-card-write.sh (see Scripting Raspberry Pi Image Builds). That’s then copied to the Pi on initial setup – setup.sh from Automating Raspberry Pi Setup. This script makes it easier to pull new image versions and recreate and start a container.

#!/bin/bash
name=cat-siren
username=thnk2wn
image="$username/$name:latest"
while [[ $# -ge 1 ]]; do
    i="$1"
    case $i in
        -d|--debug)
            image="$username/$name:latest-debug"
            shift
            ;;
        *)
            echo "Unrecognized option $1"
            exit 1
            ;;
    esac
    shift
done
imageIdBefore=$(docker images --format '{{.ID}}' $image)
echo "Pulling latest image - $image"
docker pull $image
if [ $? -ne 0 ]; then
    echo "Pull failed. Are you logged in $(whoami)?"
    if [[ ! -f "$HOME/.docker/config.json" ]]
    then
        docker login || exit 1
        docker pull $image
    else
        exit 1
    fi
fi
imageIdAfter=$(docker images --format '{{.ID}}' $image)
if [ "$imageIdBefore" = "$imageIdAfter" ]; then
  echo "Nothing to do; pull did not result in a new image"
  exit 1
fi
buildVer=$(docker inspect -f '{{ index .Config.Labels "BUILD_VER" }}' $image)
createdOn=$(docker inspect -f '{{ index .Config.Labels "org.opencontainers.image.created" }}' $image)
echo "New image => Id: $imageIdAfter, Build Version $buildVer, Created on: $createdOn"
echo "Stopping and removing existing container"
docker stop $name || true && docker rm $name || true
echo "Creating new container and starting"
docker run \
    --privileged \
    -e Siren__ResetMotionAfter=15 \
    -e Siren__CaptureDuration=10 \
    -e Siren__WarmupInterval=75 \
    -e Serilog__MinimumLevel=Information \
    -e Serilog__MinimumLevel__Override__MMALSharp=Information \
    --mount type=bind,source=/opt/vc/lib,target=/opt/vc/lib,readonly \
    --mount type=bind,source=/opt/vc/bin,target=/opt/vc/bin,readonly \
    -v /home/pi/motion-media:/var/lib/siren/media \
    -d \
    --name $name \
    $image
echo "Cleaning up old images"
docker image prune -f
echo "Running. Tailing logs; ctrl+c to stop"
docker logs -f $name

An optional -d script argument will pull and create a container from the debugger-enabled image instead. The image is then pulled; if that fails, the script assumes it’s due to not being logged in (i.e. first run), since this is a private repository.

The script grabs the image id before and after the pull and exits if it didn’t change. If it did change, the old container is stopped and removed. It also extracts build version and date labels just for output to help verify the version being run is the version expected.

Notes on docker run:

  • Permissions – running privileged is needed for hardware access (e.g. the motion sensor and camera).
  • Environment variables – settings that are commonly changed are passed in so image defaults can be overridden when needed.
  • Read-only bind mounts – These bind mounts for /opt/vc/lib and /opt/vc/bin were needed by the Pi camera libraries used in my app (specifically Broadcom’s MMAL library).
  • Media volume – establishes a writable host location for the app to write camera images and video to when motion is detected.

If the image is updated often, a solution that automatically updates Docker containers might be desirable. For a real-world IoT application, something like Azure IoT Edge is great for this. For my needs, with infrequent updates, the helper script is more than sufficient.

Diagnostics

Troubleshooting Runtime Issues

Originally, before I accounted for /opt/vc/lib and /opt/vc/bin, I received exceptions like the one below.

Unhandled exception. System.DllNotFoundException:
Unable to load shared library 'libbcm_host.so' or one of its dependencies.
In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable:
liblibbcm_host.so: cannot open shared object file: No such file or directory
at MMALSharp.Native.BcmHost.bcm_host_init()
at MMALSharp.MMALCamera..ctor()
at MMALSharp.MMALCamera.<>c.<.cctor>b__30_0()
at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)
at System.Lazy`1.CreateValue()
at System.Lazy`1.get_Value()
at MMALSharp.MMALCamera.get_Instance()

As the exception mentions, the LD_DEBUG environment variable can be helpful in troubleshooting. There are more refined debug levels, but I set -e LD_DEBUG=all temporarily in the docker run command of pull.sh as I wasn’t sure what I was looking for. That generated a considerable amount of log output, so I redirected the logs to a file with docker logs cat-siren > logs.txt 2>&1, copied it over to my Mac with scp -r pi@catsirenpi.local:/home/pi/logs.txt ~/pi-logs.txt, and browsed it in my editor of choice. These logs help determine what dependencies are being loaded and where they are being searched for. Sometimes, though, Stack Overflow or looking at another Dockerfile is faster.
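
Pulled together, that loop looked roughly like the following; the container name, hostname, and paths match this project’s setup, and LD_DEBUG=libs is a narrower alternative to all:

# On the Pi: add loader debugging to the docker run command in pull.sh
#   -e LD_DEBUG=all \

# Capture the (very noisy) container output to a file
docker logs cat-siren > logs.txt 2>&1

# From the Mac: copy the log over and browse it locally
scp pi@catsirenpi.local:/home/pi/logs.txt ~/pi-logs.txt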

Another helpful tool is ldd, which shows shared library dependencies – in my case, for example, with the camera and MMALSharp: ldd /opt/vc/lib/libmmal.so.

linux-vdso.so.1 (0xbed1e000)
/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so => /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so (0xb6f23000)
libmmal_vc_client.so => /opt/vc/lib/libmmal_vc_client.so (0xb6f08000)
libmmal_components.so => /opt/vc/lib/libmmal_components.so (0xb6eed000)
libvchiq_arm.so => /opt/vc/lib/libvchiq_arm.so (0xb6ed7000)
libvcsm.so => /opt/vc/lib/libvcsm.so (0xb6ebd000)
libmmal_core.so => /opt/vc/lib/libmmal_core.so (0xb6e9f000)
libmmal_util.so => /opt/vc/lib/libmmal_util.so (0xb6e7f000)
libcontainers.so => /opt/vc/lib/libcontainers.so (0xb6e5e000)
libvcos.so => /opt/vc/lib/libvcos.so (0xb6e45000)
libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xb6e1b000)
libdl.so.2 => /lib/arm-linux-gnueabihf/libdl.so.2 (0xb6e08000)
librt.so.1 => /lib/arm-linux-gnueabihf/librt.so.1 (0xb6df1000)
libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xb6ca3000)
/lib/ld-linux-armhf.so.3 (0xb6f4b000)

Exploring .NET Docker Images

There’s not a lot of information on the ASP.NET Core Runtime Docker Hub image, so sometimes it’s helpful to peek under the hood a bit, so to speak.

Browsing the dotnet-docker repo can be helpful to look at the Dockerfiles for .NET. In my case that’s 3.1/aspnet/buster-slim/arm32v7/Dockerfile.

Alternatively, running an interactive shell with the image…

docker run -it mcr.microsoft.com/dotnet/core/aspnet:3.1.2-buster-slim-arm32v7 sh
# Explore...
cd /usr/share/dotnet
ls -a
exit

… as well as inspecting the image…

docker image inspect mcr.microsoft.com/dotnet/core/aspnet:3.1.2-buster-slim-arm32v7
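
The full inspect output is long; a --format template pulls out individual fields, for example to confirm the image’s OS and architecture (illustrative – output varies by tag):

docker image inspect \
    --format '{{ .Os }}/{{ .Architecture }}' \
    mcr.microsoft.com/dotnet/core/aspnet:3.1.2-buster-slim-arm32v7
# linux/arm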

One tool I really like is dive, which helps explore and visualize each layer of a Docker image. After installing it (e.g. brew install dive), running dive thnk2wn/cat-siren:latest helps me verify the image and see if anything can be trimmed down.

Up Next

The past 5 posts in this series have focused on the project details, automating the Pi setup, and deploying and running .NET code. The upcoming posts will start diving into the Pi hardware and the .NET code for this project. Coming shortly are posts on using the infrared sensor and the camera. However, the Project GitHub repo is ahead of the posts if you’re inclined to jump ahead.