Running pip-tools in Docker
There are a number of great CLI tools that help us manage our packages, such as pip-tools or pipenv for Python and npm for Node.js. Among other useful functionality, they can snapshot (aka “pin” or “lock”) the exact versions and hashes of every package installed, generated from a high-level specification of the requirements.
Best practice is to save the “lock” file that stores these versions in a version control system (e.g. git) with the code, so that the runtime environment of the code is deterministically reproducible.
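For pip-tools, the high-level specification is a requirements.in file listing only your direct dependencies. As a minimal sketch (the package names here are just illustrative):

requirements.in:
# direct dependencies only; transitive dependencies are resolved by pip-compile
requests
fastapi

Running pip-compile against this file resolves the full dependency tree and writes a pinned requirements.txt:

pip-compile --output-file=requirements.txt requirements.in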
When running your application in Docker, this becomes non-trivial, because you cannot export files while building a Docker image (i.e. there is no inverse of the COPY or ADD instructions). The lock file instead has to be extracted from a container after the image is built, which can be a little complicated to work into a development or CI/CD workflow.
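The general pattern for getting a file out of an image is to build it, create (but not start) a container from the resulting image, and copy the file back to the host, e.g. (a sketch with placeholder names):

docker build -t some-tag .
id=$(docker create some-tag)
docker cp "$id":/path/in/image/requirements.txt requirements.txt
docker rm -v "$id"

This is essentially what the wrapper script further below automates.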
pip-tools in Docker
The following illustrates a minimal example setup for running pip-tools (pip-compile, pip-sync) within Docker, while writing the file back to the host system, where it can be managed in your version control system, e.g. committed with git.

Dockerfile:
# Base stage with pip and pip-tools installed
FROM python:3.12-slim-bookworm AS base
ENV PYTHONUNBUFFERED=1
WORKDIR /opt/requirements
RUN pip install --upgrade --no-cache-dir \
    pip \
    pip-tools
# Stage that pins (compiles) the requirements
FROM base AS compile_requirements
# Import settings (e.g. [tool.pip-tools])
COPY pyproject.toml .
# Import input files
COPY requirements.txt requirements.in ./
# Pin dependencies
ARG PIP_COMPILE_ARGS=''
ARG PIP_COMPILE_OUTPUT_FILE='requirements.txt'
ENV CUSTOM_COMPILE_COMMAND='./pip-compile-wrapper.sh'
RUN pip-compile ${PIP_COMPILE_ARGS} --output-file=${PIP_COMPILE_OUTPUT_FILE}
# Final application stage: install the pinned requirements
FROM base
WORKDIR /opt/app
COPY requirements.txt ./
RUN pip-sync requirements.txt --pip-args '--no-cache-dir --no-deps'
pyproject.toml:
[tool.pip-tools]
generate-hashes = true
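With generate-hashes enabled, each pin in the generated requirements.txt also carries the hashes of the package artifacts, roughly like this (an illustrative excerpt, not real values):

requests==2.31.0 \
    --hash=sha256:... \
    --hash=sha256:...

pip then verifies these hashes when installing from the file, e.g. during pip-sync in the final stage.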
pip-compile-wrapper.sh:
#!/bin/sh
set -eu

PIP_COMPILE_OUTPUT_FILE=${PIP_COMPILE_OUTPUT_FILE:-"requirements.txt"}

# Build only the stage that runs pip-compile, forwarding the build args from the environment
docker build -t compile-tag \
    --target compile_requirements \
    --build-arg PIP_COMPILE_ARGS \
    --build-arg PIP_COMPILE_OUTPUT_FILE \
    .

# Create (but don't start) a container, copy the pinned file back to the host, then clean up
id=$(docker create compile-tag)
docker cp "$id:/opt/requirements/${PIP_COMPILE_OUTPUT_FILE}" "${PIP_COMPILE_OUTPUT_FILE}"
docker rm -v "$id"
With this setup one can manage the contents of the requirements.txt file like this:
./pip-compile-wrapper.sh
# or to bump package versions
PIP_COMPILE_ARGS='--upgrade' ./pip-compile-wrapper.sh
# and then build your image with the updated requirements
docker build .
Alternative implementation
One can instead build the compile_requirements stage, specify pip-compile as the entrypoint, and mount the local directory in a docker run command, roughly as sketched below.
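A rough sketch of that alternative, reusing the compile-tag image and stage from above (the exact flags are illustrative):

docker build -t compile-tag --target compile_requirements .
docker run --rm \
    -v "$(pwd)":/opt/requirements \
    --entrypoint pip-compile \
    compile-tag --output-file=requirements.txt

Here the output file is written directly into the mounted host directory.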
However, this approach can lead to a number of challenges: the file ownership on the host can be different from that inside the container, and you can end up with the file being owned by another user (e.g. root) when it is written to the mounted location.
These ownership issues can be overcome, but using docker cp to copy the requirements.txt file out of a container handles the file ownership issue for us.