November 14, 2021
Historically, I’ve considered the game-playing AIs that I write a bit of a pain to share with others.
The AI engine itself is typically a native command-line program that implements something akin to the Universal Chess Interface. The program gets passed a serialized game state, and it returns the move it thinks strongest.
…to which the engine might respond:
b05 b09 trrgy e
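Because the engine is just a command-line program, any front end can drive it with a subprocess call. A minimal sketch in Python (the exact invocation is an assumption — whether the state arrives via an argument or stdin may differ in the real interface):

```python
import subprocess

def query_engine(engine_cmd, state):
    """Run the engine with a serialized game state and return its reply.

    Assumes the state is passed as a command-line argument and the chosen
    move is printed on stdout; the real protocol may differ.
    """
    result = subprocess.run(
        engine_cmd + [state],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

# e.g. query_engine(["./kelvandor"], "<serialized board>") might return
# a move string like the one shown above
```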
I’m a lover of the command line, but even with the additional information and aesthetically-dubious ASCII boards included in the debug information, I’ll admit a real user interface is needed to play a serious game.
So to play against one of my AIs, you need:
This isn’t a whole lot, but it’s not trivial either. It’s no problem for the AI testing team, since I’ll maintain a private deployment for us to use. However, it feels like a little much to ask of someone with a casual interest in playing a game against one of the AIs.
I could run a public deployment for each of my AIs, but I’m a little leery of taking on the administrative and security considerations that come with running Internet-exposed and compute-heavy C programs. It could definitely be done, and I might do it some time in the future, but I don’t have anything set up right now.
It took me way too long to realize this is the exact sort of situation Docker helps with. My veil of ignorance lifted, I decided to start with Kelvandor, the newest AI.
I wrote a Dockerfile that builds a container for the engine and API, and a Compose file that combines the engine/API container with an Nginx container that serves the UI. Together, this means you can bring up a functional environment with a single docker compose up command. Browse to localhost:8000 and start playing!
The Dockerfile is pretty simple:
FROM alpine
RUN apk update && \
    apk add g++ make python3 py3-pip && \
    pip install flask gunicorn
WORKDIR /opt/kelvandor
COPY ["src/", "."]
RUN ["make"]
ENTRYPOINT ["make", "httpapi"]
It starts with the Alpine Linux container, installs the dependencies, copies Kelvandor’s src/ directory to the container, and builds the kelvandor executable with make. When started, the container runs make httpapi, which is the Makefile target for running the API app with Gunicorn.
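The post doesn’t show the API itself, but a Flask app wrapping the engine can be quite small. A hedged sketch (the /move endpoint, JSON shape, and engine path are all my assumptions, not Kelvandor’s actual API):

```python
import subprocess

from flask import Flask, jsonify, request

# Assumed location of the compiled engine inside the container;
# the real path and calling convention may differ.
ENGINE_CMD = ["./kelvandor"]

app = Flask(__name__)

def best_move(state):
    """Ask the engine for its move, assuming state is passed as an argument."""
    result = subprocess.run(
        ENGINE_CMD + [state], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

@app.route("/move", methods=["POST"])
def move():
    # Hypothetical request shape: {"state": "<serialized board>"}
    state = request.get_json()["state"]
    return jsonify({"move": best_move(state)})
```

Gunicorn would then serve this with something like `gunicorn app:app --bind localhost:5000`.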
The Compose file isn’t much worse:
services:
  api:
    build: .
    ports:
      - "5000:5000"
  ui:
    image: nginx
    volumes:
      - ./html:/usr/share/nginx/html:ro
    ports:
      - "8000:80"
    depends_on:
      - api
It defines two containers, api and ui. The api container is built using the above Dockerfile. Port 5000 on the container is bound to port 5000 on the host, so the user’s browser can access the API when using the UI (localhost:5000 is what Gunicorn binds to in make httpapi, and is the default URL for the API in the UI).
The ui container uses the official Nginx image, and mounts Kelvandor’s html/ directory at the webroot in the container. Since we’re mounting and not copying, any changes to the UI in html/ will be immediately present in the container. Nginx’s port 80 is bound to port 8000 on the host, so the user can access the UI at localhost:8000.
Finally, the ui container declares, via depends_on, that it depends on the api container. This is optional here (everything would work without it), but it reflects the fact that the AI functionality in the UI won’t work without the API being available. The depends_on option does a few useful things, namely making sure that api is started before ui by docker compose up.
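One caveat worth knowing: depends_on by itself only orders container startup, not readiness. If stricter ordering mattered, Compose can gate on a healthcheck. A sketch (the probe command and its timing are assumptions, not part of Kelvandor’s actual setup):

```yaml
services:
  api:
    build: .
    healthcheck:
      # Hypothetical probe; assumes the API answers on port 5000
      test: ["CMD-SHELL", "wget -qO- http://localhost:5000/ || exit 1"]
      interval: 5s
  ui:
    depends_on:
      api:
        condition: service_healthy
```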
There are still improvements to be made, both to the Docker configuration and to the Kelvandor project as a whole. However, these AI projects have always been about iterative improvement, bringing technical advancements and lessons learned from the previous project to the next one. Docker is a useful tool, and I plan to remember it more in the future!