
Docker is cool

Writing the C++ code for the EVE launcher is but the first step in getting the launcher to run on EVE players' machines. The code is compiled, and the resulting executable and all its dependencies are staged in a folder. This folder is then used to create an installer, and is also packaged for download by the launcher's auto-update mechanism. We use TeamCity to run these builds of the launcher from source, triggered by check-ins.

Build agents

The machine used to build the launcher has to have the correct software set up - the correct version of Qt, the compiler and the installer framework. For Windows, this meant manually setting up a virtual machine by running the Visual Studio installer, followed by the Qt installer and so forth. The Mac was similar, except it wasn't a VM but a Mac Mini sitting on my desk.

The EVE launcher is a small project and a build takes about 6 minutes (including unit tests, compilation of the launcher and updater, the installer and upload to Amazon), so there really is no need for multiple build agents. It is still important to be able to recreate a build agent easily - computers can fail - so I made sure to document all the steps required to set up a build agent carefully on our internal wiki. For larger projects, where you might have many programmers checking code in to source control and builds might take a lot longer, you would likely want multiple build agents to keep up with the check-ins. In that case, a manual setup quickly becomes a big issue.

Automating the setup

With this in mind I wanted to try using Docker when setting up the build environment for a Linux version of the launcher. While CCP doesn't officially support Linux as a platform for EVE, we know that there are players out there running under Wine, and we do now offer Wine support through the launcher on the Mac. Getting the same setup to work on Linux didn't require that much effort, and I've been poking at that on the side.

A while ago I got the launcher running on my Ubuntu box, and lately I have been setting up the automated builds. With Docker, I could set up the build agent with a script - the Dockerfile. This file specifies the base image to use, essentially which Linux distribution to build on. The file then lists commands that are run to set up the image. In my case I have two RUN commands - one with a long list of packages to install via apt-get install, followed by a curl command to fetch the AWS command line tools.
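
As a rough illustration, the Dockerfile ends up looking something like this - the base image, the package list and the way the AWS tools are fetched are placeholders here, not the exact ones used for the launcher build:

    FROM ubuntu:16.04

    # Compiler toolchain and libraries needed for the build
    # (illustrative package list, not the real one)
    RUN apt-get update && apt-get install -y \
        build-essential \
        git \
        curl \
        unzip \
        python

    # Fetch the AWS command line tools for uploading build artifacts
    RUN curl -o /tmp/awscli-bundle.zip https://s3.amazonaws.com/aws-cli/awscli-bundle.zip && \
        unzip /tmp/awscli-bundle.zip -d /tmp && \
        /tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws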

Not quite

The only caveat is that the Qt installer does not seem to be able to do an unattended installation without showing any UI. It is possible to pass a script to the installer that automates the installation, but it still insists on opening the UI and this fails when building the Docker container. My workaround was to manually install Qt on my machine, make a tarball from the Qt folder and use the ADD command to inject that into the image.
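
In other words, the Qt part of the image comes from a pre-made tarball rather than from the installer - something along these lines, where the filename and target path are just examples (ADD extracts a local tarball into the image automatically):

    # Qt was installed manually and tarred up - the installer can't run headless
    ADD qt-5.9-linux.tar.gz /opt/Qt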

I set up a second Dockerfile for the environment to build Wine. In that case I could fully automate the setup as all the dependencies for Wine are available as packages that can be installed via apt-get install.
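
That second Dockerfile is little more than a base image plus the build dependencies. A minimal sketch, assuming the deb-src repositories are enabled so that apt-get build-dep can resolve the Wine dependencies:

    FROM ubuntu:16.04

    # Enable source repositories, then pull in everything needed to build Wine
    RUN sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list && \
        apt-get update && \
        apt-get build-dep -y wine && \
        apt-get install -y git flex bison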

Versioning

Dockerfiles are small text files so they can easily be versioned in the same source control system as the build scripts. This makes it easier to manage changing dependencies, and if you ever need to rebuild older versions you get a definition of the build environment for that older version.

The perfect pair

Using Docker with TeamCity also means that you can have generic build agents even if you are running all sorts of different jobs with different requirements. The build agent itself doesn't need to have exactly the right software installed, with the right versions of everything - it just needs to be able to run docker commands. The build job then uses docker run with the appropriate Docker image - one that has been built with the dependencies that particular job needs. This also means it is easy to have many agents running, even when the jobs have very specific requirements.
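
On the agent side this boils down to a couple of commands in the build step - something like the following, where the image name and build script are placeholders:

    # Build (or pull) the image defined by the Dockerfile in the repository
    docker build -t launcher-build .

    # Run the build inside the container, mounting the checkout from the agent
    docker run --rm -v $(pwd):/src -w /src launcher-build ./build_launcher.sh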

I feel I've just scratched the surface here - I've got a fairly simple setup that does the job I need it to do, but I'm sure I'll find an excuse to play around with this some more.
