pi-gen and Packer - building custom Raspberry Pi OS images from scratch

16 Nov 2022

Over the weekend I decided to take on a project of upgrading my home Raspberry Pi setup. That had to start with figuring out how to build OS images for the Pis to run. So far, I’ve been flashing images built with pi-gen’s custom stages onto SD cards - a perfectly serviceable, but toil-heavy solution. However, seeing how I planned to add another Pi, running another stack, into the mix, I needed something more automated and flexible.

Since I needed to figure out some things along the way, I decided to do this quick write-up on the off-chance someone else needs to reproduce this setup.

(tl;dr! Show me the code!)

Requirements: why pi-gen?

When revisiting a project like this, it’s often worth reconsidering the technology choices made along the way. After all, in the space of a year entire start-ups could’ve been built, funded, and bankrupted around the very same thing I was trying to solve.

I wanted to build relatively thin (primarily running Docker containers as service hosts), but not prohibitively thin (so that I could still easily use one as a base for setting up a dev box) OS images that I could write to an SD card.

Both the lite version of Raspberry Pi OS and the ARM version of the Ubuntu Server image ship with a couple of packages I didn’t foresee myself needing, so these were out of the question. The other side of the spectrum - anything geared towards being a pure container hosting solution, or an IoT distribution - would be missing some tools I find handy. That pretty much ruled out any ready-made image I could find.

Building any other distribution from scratch would require that I get the hardware support for Raspberry Pi in place myself.

pi-gen it was, then.

Requirements: why Packer?

Packer is the industry standard for producing OS images, and I work with it almost daily. Additionally, the Packer ARM image plugin seems to be the preferred choice for folks building on top of the available Raspberry Pi OS image. Therefore, it was an easy pick.

Setting up pi-gen

Since I wanted to give the 64-bit build a try (I paid for 64 bits, and I’m gonna use all of them, damn it!), I started off from the arm64 branch of pi-gen:

git clone
cd pi-gen
git checkout arm64

Firstly, I wanted to set up pi-gen to generate the image in a fashion the Packer ARM image plugin could consume with minimal overhead. That made a fine start to my pi-gen config - the file that configures the Raspberry Pi OS build.

DEPLOY_COMPRESSION=none # disable output compression for the Packer ARM image plugin

Next, I needed to get my own stage2 going. stage2 of the pi-gen process is where the Raspberry Pi OS Lite image is produced. According to the documentation, it’s also where you’d start trimming if looking to build a more minimalistic Lite-like image. And so trim I did:

cp -R stage2 stage2-base
vim stage2-base/01-sys-tweaks/00-packages # removed unneeded packages
vim stage2-base/EXPORT_IMAGE              # removed the unwanted `-lite` image name suffix
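For reference, pi-gen’s NN-packages files are plain lists of package names to be installed during that sub-stage. A trimmed-down stage2-base/01-sys-tweaks/00-packages might end up as short as something like this (the selection here is purely illustrative, not what pi-gen actually ships):

```text
openssh-server
avahi-daemon
rsync
```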

I then pointed pi-gen at the new list of stages to build in config, and instructed it to name the output aptly:

STAGE_LIST="stage0 stage1 stage2-base"

The last thing config needed was some server-leaning configuration and a personal touch (with secrets left out):

WPA_COUNTRY=PL # a valid country code is now required by pi-gen

PUBKEY_SSH_FIRST_USER="$(curl && curl"
I was now able to run sudo ./ and have a shiny Raspberry Pi OS image dropped in the deploy directory.
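Put together, the assembled config ends up looking roughly like this - all values here are illustrative stand-ins, with secrets and URLs left out:

```shell
# Illustrative pi-gen config - real values and secrets will differ
IMG_NAME=homelab
DEPLOY_COMPRESSION=none
STAGE_LIST="stage0 stage1 stage2-base"
FIRST_USER_NAME=operator
WPA_COUNTRY=PL
```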

Setting up Packer

The next step was to set up my build pipelines in Packer. I decided to start with a fairly straightforward use-case: OctoPrint running in a Docker container, for my Prusa i3.

I set up my sources.pkr.hcl to accept paths and SHA sums from outside, leaving it to the impending build process to feed them into Packer:

source "arm-image" "prusa_i3" {
  iso_url           = var.source_iso_url
  iso_checksum      = var.source_iso_checksum
  image_type        = "raspberrypi"
  output_filename   = "${var.output_directory}/prusa_i3.img"
  target_image_size = var.target_image_size
  qemu_binary       = "qemu-aarch64-static"
}
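The variables referenced above need declaring somewhere in the Packer directory as well. A matching variables file might look roughly like this - the names come from the source block, but the defaults are my own guesses:

```hcl
variable "source_iso_url" {
  type = string
}

variable "source_iso_checksum" {
  type = string
}

variable "output_directory" {
  type    = string
  default = "../deploy"
}

variable "target_image_size" {
  type    = number
  default = 4294967296 # 4 GiB - adjust to taste
}

variable "operator_user_name" {
  type = string
}
```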

And finally, I could get into provisioning the machine with shell scripts in build.pkr.hcl:

build {
  sources = ["source.arm-image.prusa_i3"]

  name = "common"

  provisioner "shell" {
    script = "scripts/"
  }

  provisioner "shell" {
    script = "scripts/"
    environment_vars = [
      # ...
    ]
  }

  # ...
}

Tying it all together

The last thing I needed was a build process. I decided to write a quick and dirty Makefile that used config as the source of information:

IMG_DATE = $(shell date +%Y-%m-%d)
IMG_NAME = $(shell bash -c 'source ./config; echo $$IMG_NAME')

# pi-gen outputs builds in the `deploy` directory by default. We don't enforce
# this out of convenience, since it seems pretty unlikely to ever change.
PI_GEN_DEPLOY_DIR = deploy

FIRST_USER_NAME = $(shell bash -c 'source ./config; echo $$FIRST_USER_NAME')

PACKER_DIR = packer

# set PACKER, TEE, CUT, etc.

$(BASE_IMG_FILENAME):
	$(SUDO) -E ./

%.sha256: %
	$(SHA256) $< | $(SUDO_TEE) $@

images: $(BASE_IMG_FILENAME) $(BASE_IMG_FILENAME).sha256
	cd packer && \
		$(PACKER) build \
		-var source_iso_url=../$(BASE_IMG_FILENAME) \
		-var source_iso_checksum=$$($(CUT) -d' ' -f1 < ../$(BASE_IMG_FILENAME).sha256) \
		-var output_directory=../$(PI_GEN_DEPLOY_DIR) \
		-var operator_user_name=$(FIRST_USER_NAME)

clean:
	rm -rf $(PI_GEN_DEPLOY_DIR)/*.img

all: images

Running make images first builds the base image, then its checksum file, and finally runs Packer, feeding it relevant paths and information from the config file.
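The two shell tricks the Makefile leans on - sourcing config to pull out a variable, and cutting the first field out of a checksum file - can be sanity-checked outside of make. The file paths and values below are stand-ins for illustration:

```shell
# Stand-in for pi-gen's `config` file (values are illustrative)
cat > /tmp/config <<'EOF'
IMG_NAME=homelab
FIRST_USER_NAME=operator
EOF

# Same extraction the Makefile performs via $(shell bash -c '...')
IMG_NAME=$(bash -c 'source /tmp/config; echo $IMG_NAME')
echo "IMG_NAME=$IMG_NAME"

# Same field extraction as `$(CUT) -d' ' -f1 < file.sha256`
printf 'test\n' > /tmp/base.img
sha256sum /tmp/base.img > /tmp/base.img.sha256
CHECKSUM=$(cut -d' ' -f1 < /tmp/base.img.sha256)
echo "CHECKSUM=$CHECKSUM"
```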

Afterwards, the images produced by Packer can be flashed to an SD card as usual:

sudo dd if=deploy/prusa_i3.img of=/dev/sda bs=4MB


Building Raspberry Pi OS images this way has been a huge win for me. It allowed for quick iteration in a familiar environment, while opening doors for growth if needed - this setup is not far off from using Ansible as the provisioning tool.

If you would like to check out the setup in its entirety, it’s available in this GitHub repository. At present, it builds two images: the OctoPrint one, used as an example throughout this post, and a more complex Home Assistant one. More will come as I build out the infrastructure.