Is it possible to make GitLab CI + Docker + systemd play nicely together?

A short note on how to run Docker with systemd inside a GitLab CI runner. It may be useful to someone; and if you have already solved a similar problem another way, it would be interesting to hear about it in the comments.

Foreword
GitLab Runner was deployed inside a Docker container. At some point the idea came up to assemble all the necessary infrastructure (for example, PostgreSQL and Tomcat) inside a single container, so that after the build stage the application could be installed there and the autotests run. The infrastructure container itself was built from a Debian image with systemd and worked fine on its own, but unexpected problems began once it was used inside the runner. For simplicity, let the job look like this:

    run-autotests:
      image: debian/systemd
      before_script:
        - cp backend.jar /opt/
        - cd /opt
      script:
        - java -jar autotests.jar

Everything looks fine, but at startup the job fails with an error saying that systemd is not running as the process with PID 1, or possibly with another error saying that systemd is not running at all.

So what could the problem be?

As it turned out from a recent issue in the GitLab tracker, I am not the only one who has run into this problem.
The problem is that GitLab Runner's Docker executor always overrides the container's CMD, i.e. it starts the container roughly like this:

    docker run ... <image> /bin/bash

And there is no way to override this CMD from GitLab; you can only override the entrypoint in the CI config, and dancing around with that leads nowhere.

All my Ansible roles were covered by Molecule tests, and those tests passed successfully inside the GitLab runner. Noticing that, I thought: why not start the systemd container from inside the already running runner? A hassle, of course, but the result mattered more to me than the difficulties. You could simply start the container with plain Docker commands, but that is clumsy and has no error handling, so you can get rather unpredictable results. Instead, I decided to write a small Python tool that simply starts the container, copies an archive with the required files into it, and executes a list of commands inside it.
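The core of the idea can be sketched with the Docker SDK for Python. This is a minimal sketch of the approach, not the actual code from the repository linked below; the image name, volume, command, and exec commands are taken from the example invocation further down:

    # Minimal sketch: start a systemd container, run commands inside it,
    # fail fast on errors, and clean up. Requires `pip install docker`.
    import docker

    client = docker.from_env()

    # systemd typically needs privileged mode and a read-only cgroup
    # mount in order to come up as PID 1 inside the container.
    container = client.containers.run(
        "dramaturg/docker-debian-systemd",
        command="/lib/systemd/systemd",
        volumes={"/sys/fs/cgroup": {"bind": "/sys/fs/cgroup", "mode": "ro"}},
        privileged=True,
        detach=True,
    )

    try:
        for cmd in ["touch /opt/example.log", "mkdir -p /opt/tmp"]:
            exit_code, output = container.exec_run(cmd)
            print(output.decode())
            if exit_code != 0:
                # Stop on the first failing command instead of silently
                # continuing with a broken environment.
                raise RuntimeError(f"command failed: {cmd}")
    finally:
        container.remove(force=True)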

→ The code is here: GitHub

You can run it like this:

    cd <path-with-code>
    pip install virtualenv
    virtualenv venv
    source venv/bin/activate
    pip install -r requirements.txt
    python main.py \
      --image dramaturg/docker-debian-systemd \
      [--network host] \
      [--volumes "/sys/fs/cgroup:/sys/fs/cgroup:ro" "<>"] \  # the cgroup volume is needed by systemd; more volumes can follow
      [--cmd "/lib/systemd/systemd"] \                       # the command the container is started with
      [--data-archive /opt/data.tar] \                       # an archive with files to copy in, *.tar or *.tar.gz
      [--data-unarchive-path /opt/data/logs] \               # the path inside the container where the archive is unpacked
      [--privileged] \                                       # systemd needs privileged mode, otherwise it will not start
      --exec-commands "touch /opt/example.log" "{bash} ls -la /opt" "mkdir -p /opt/tmp"  # commands executed inside the container

Arguments in [] are optional. The special {bash} macro is needed for commands that require a shell, for example ls -la; during execution it is replaced with /bin/bash -c "command".
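For illustration, the macro expansion could look something like this (a hypothetical sketch based on the description above; the real implementation in the repository may differ):

    # Hypothetical sketch of expanding the {bash} macro into a command list.
    def expand_command(cmd: str) -> list:
        prefix = "{bash} "
        if cmd.startswith(prefix):
            # Wrap the command in a shell so shell features keep working.
            return ["/bin/bash", "-c", cmd[len(prefix):]]
        return cmd.split()

    print(expand_command("{bash} ls -la /opt"))  # ['/bin/bash', '-c', 'ls -la /opt']
    print(expand_command("mkdir -p /opt/tmp"))   # ['mkdir', '-p', '/opt/tmp']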

This is my first program in Python, so don't judge it too harshly. There may well be problems in the code or at startup; I will try to fix them quickly. Here I only tried to explain the general idea behind this launch method. Share your own solutions if you have run into similar problems.

About the dramaturg/docker-debian-systemd image used
There are no complaints about the image itself, but at first an error kept appearing in the host machine's console saying that some files created by systemd already exist. The Nginx service had no such problem, but with PostgreSQL it showed up. The solution was to remove the line VOLUME ["/sys/fs/cgroup", "/run", "/run/lock", "/tmp"] from the Dockerfile, after which everything worked like clockwork.

Source: https://habr.com/ru/post/413375/

