CI/CD for Amiberry using GitHub Actions (pt 3)

2022-06-16

In the first and second parts of this series, we saw how to set up a CI/CD solution for Amiberry using self-hosted runners. The workflow automatically builds on every new commit, and also creates and publishes a new release (including binaries and an auto-generated changelog) based on the git tag. So it generally works as intended, but there’s always room for improvement. What could we improve? Performance, of course!

The self-hosted approach works well, but since I’m using (mostly) Raspberry Pi devices to compile, it takes longer than I’d like to produce all the binaries. Additionally, each device has to produce more than one binary (e.g. Dispmanx and SDL2 versions), and it can’t do that in parallel. And the whole workflow isn’t finished until all of the jobs it contains are finished, so if I also wanted a new release, it would take more than 40 minutes in total to complete everything. Surely we can do better than that!

We can’t make the Raspberry Pi compile faster than its CPU allows, so we’ll have to use something else to do the job: cross compile on another (faster) platform. Let’s see what we’ll need to make that happen:

  • a Linux environment where the magic will happen. Something like Debian or Ubuntu would do just fine.
  • the environment will need a few things installed:
    • a cross compiler (the one for the architecture we’ll be compiling for, e.g. armhf for 32-bit and aarch64 for 64-bit)
    • the Amiberry dependencies, for the architecture we’ll be compiling for (e.g. libsdl2)
    • git, build essentials and autoconf, since we’ll need those as well
    • environment variables configured so that the cross compiler, rather than the distro’s native one, is used when compiling Amiberry


We could just use another self-hosted solution for this, but I wanted to take things a step further and use something more portable. So we’ll create a Docker image for each environment we need (currently ARM 32-bit, ARM 64-bit and Linux x86; I’ll keep the Mac Mini as a self-hosted runner for the macOS builds).

I created 3 separate Dockerfiles, one for each environment I wanted to use:
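The Dockerfiles themselves aren’t reproduced here, but as a rough sketch, the aarch64 one could look something like the following. The base image, package names and environment variables are my illustration of the requirements listed above (using Debian conventions), not necessarily what the published images use:

```dockerfile
# Hypothetical sketch of an aarch64 cross-compile environment.
FROM debian:bullseye

# Enable arm64 packages so target-architecture libraries can be installed
RUN dpkg --add-architecture arm64 && apt-get update && apt-get install -y \
    git build-essential autoconf \
    crossbuild-essential-arm64 \
    libsdl2-dev:arm64 libsdl2-ttf-dev:arm64 libsdl2-image-dev:arm64

# Point the build at the cross toolchain instead of the native compiler
ENV CC=aarch64-linux-gnu-gcc \
    CXX=aarch64-linux-gnu-g++

# The sources get mounted here at "docker run" time
WORKDIR /build
CMD ["/bin/bash"]
```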

First I tested that these images could compile Amiberry locally, without errors. Then I deployed them to DockerHub, so that GitHub Actions (and everyone else) can grab them from there:
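Deploying is just the usual build-and-push cycle; something along these lines (the Dockerfile filename here is illustrative, and you need to be logged in to DockerHub first):

```
# Build the image from its Dockerfile, then publish it
docker build -t midwan/amiberry-docker-aarch64:latest -f Dockerfile.aarch64 .
docker login                                        # authenticate once
docker push midwan/amiberry-docker-aarch64:latest   # publish to DockerHub
```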

Then I tested that I can still compile Amiberry using the deployed images from DockerHub, like so:

docker run --rm -it -v D:\Github\amiberry:/build midwan/amiberry-docker-aarch64:latest

The above example uses the aarch64 image, as you can see. I’m passing it one parameter: the local path where my Amiberry sources are checked out (in this example, D:\Github\amiberry), which gets mapped to the image’s /build directory. Running the above drops you into a bash shell, waiting for further commands. All I have to do is type the command to compile Amiberry for the platform of choice, and check the output:

make -j8 PLATFORM=rpi4-64-sdl2

After a few minutes, the compilation is finished and I have a binary ready. I copy the binary over to my Raspberry Pi running Manjaro Linux (64-bit), and test it – it works as expected, great! Now let’s add this to the workflow.

Adapting the workflow

We can now change a few things on the workflow, for each job:

  • Change the runs-on value, since we don’t want/need to run this on the self-hosted runners anymore. We can use ubuntu-latest instead.
  • We need a different step for the compilation, since we won’t be doing that on the self-hosted runner anymore. I needed an action that would let me specify a Docker image to use, and pass it some options to run inside it. I found addnab/docker-run-action, which worked perfectly for my needs.
  • The rest of the steps can remain as they were, since we don’t need to change anything else in the process. Just the compile step.
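Put together, the top of each job ends up looking roughly like this (the job name is illustrative; the checkout step comes from the earlier parts of this series):

```yaml
  build-armhf:
    runs-on: ubuntu-latest          # hosted runner instead of self-hosted
    steps:
      - uses: actions/checkout@v3   # check out the sources as before
      # ...followed by the new compile step using the Docker image...
```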

Considering the above, the compile step now becomes something like this:

    - name: Run the build process with Docker
      uses: addnab/docker-run-action@v3
      with:
        image: midwan/amiberry-docker-armhf:latest
        options: -v ${{ github.workspace }}:/build
        run: |
          make capsimg
          make -j8 PLATFORM=rpi4-sdl2

Obviously, the above changes slightly depending on the platform we’re compiling for (we’ll use the aarch64 image for 64-bit ARM targets, along with different PLATFORM=<value> options). One thing you may have noticed is the use of a special variable: ${{ github.workspace }} – this represents the directory where the sources were checked out in the previous step, and it’s exactly what we need to map to the /build directory of the Docker image, similar to what I did with my local test above.
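For example, the 64-bit ARM variant of that step only swaps in the aarch64 image and the PLATFORM value I used in the local test earlier:

```yaml
    - name: Run the build process with Docker
      uses: addnab/docker-run-action@v3
      with:
        image: midwan/amiberry-docker-aarch64:latest
        options: -v ${{ github.workspace }}:/build
        run: |
          make capsimg
          make -j8 PLATFORM=rpi4-64-sdl2
```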


I can’t use the Docker images to produce the Dispmanx binaries, since the Dispmanx-specific files are not in the image. That’s OK for me; I can still use the self-hosted runners to compile those. They only need to compile those now, so it shouldn’t take too long to finish.


With the new Docker approach, we get a few extra benefits. Not only is the compilation time significantly reduced (from 10-13 minutes down to 4-5 minutes per platform), but most of these jobs can also run in parallel, since one no longer has to wait for another to finish before starting.

Here’s a sample of the latest compilation done after a commit I made:

Compile time, using Docker

Now compare that to the time it took to produce binaries for the 5.2 release, done with the self-hosted runners only:

Compile time, using self-hosted runners only

The total time for a new release has now gone from 40m 56s down to 18m 30s, for all the included binaries. Not bad!
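As a quick sanity check of those numbers, converting both times to seconds and comparing them in shell arithmetic:

```shell
# Back-of-the-envelope check of the quoted speedup (times in seconds)
before=$((40 * 60 + 56))   # 40m 56s -> 2456s, self-hosted runners only
after=$((18 * 60 + 30))    # 18m 30s -> 1110s, Docker + self-hosted mix
echo "total release time reduced by $(( (before - after) * 100 / before ))%"
```

That works out to a 54% reduction in total release time.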