The GitLab CI/CD toolchain has become a great companion for me over the past years. Since I started using Kubernetes more and more, I always wanted to utilize its power for the GitLab runners. Furthermore, I wanted to get rid of that pesky little badly secured GitLab instance and secure it with TLS (“https”), which became quite necessary later…
As always, there is not much to do once you have made it work and invested an hour or two. To spare you that time, let’s get to the details.
Make sure you have the proper certificates at hand: one for the GitLab nginx and (if needed) your local CA.
Create a namespace for your runners.
Insert the certificates as a secret into your Kubernetes cluster.
Modify the gitlab-runner YAML file to your needs.
Use the YAML file and Helm to install the runners in your Kubernetes cluster.
1. Security / Certificates
For obvious reasons, it is a good idea to secure the GitLab nginx with TLS. In my case I used TinyCA for local network signing, which can easily be obtained through the package manager of various distributions. But anything that results in a certificate that works for you will do.
2. Create the namespace
You can be creative here, but seriously: do not name it ‘gitlab’, because that can badly interfere with your routing inside Kubernetes!
Here let’s call it runners:
kubectl create namespace runners
3. Create a secret with your certificates
The runners need to be sure that your code repository is exactly what it claims to be; we don’t want a man-in-the-middle. So we create a secret with the certificates. This needs at least the certificate from your nginx, and in my case I even had to include the CA certificate (so the .crt file contained two BEGIN CERTIFICATE and two END CERTIFICATE blocks).
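Putting it together can look like this. All names below (hostname, file names, secret name) are assumptions for illustration, and the placeholder “certificates” only exist so the sketch runs anywhere; use your real files instead. Recent gitlab-runner Helm charts expect the secret key to be named after the GitLab hostname with a .crt suffix, so double-check against the chart version you use:

```shell
# For this sketch, create two placeholder "certificates"; in reality
# these are your nginx server certificate and your local CA certificate.
printf '%s\n' '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' > gitlab.example.com.crt
printf '%s\n' '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' > ca.crt

# Concatenate server and CA certificate, so the resulting .crt file
# contains two BEGIN/END CERTIFICATE blocks:
cat gitlab.example.com.crt ca.crt > gitlab.example.com.chained.crt

# Store it as a secret in the runners namespace (needs a live cluster):
if command -v kubectl >/dev/null; then
    kubectl create secret generic gitlab-runner-certs \
        --namespace runners \
        --from-file=gitlab.example.com.crt=gitlab.example.com.chained.crt
fi
```

The runner chart is then pointed at this secret from its values file (in current charts via a value like certsSecretName), which belongs to the YAML-modification step above.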
There isn’t much documentation on the great Poolboy worker pool factory. Here are some notes that will hopefully help to understand Poolboy a little bit better.
Notes on the usage
See https://github.com/devinus/poolboy for the original “documentation”. Many will know this, but somehow I didn’t: the workers idle in the pool. One takes them out of the pool and, after the work is done, puts them back in. They are not – as I initially thought – working *in* the pool.
(Besides, AFAIK checkout and checkin are the only things Poolboy uses to know whether a worker is busy or not. So you’d better make sure checkin is called at some point.)
% get the PID of a worker that is currently checked in to the pool (= idle)
1> Worker = poolboy:checkout(PoolName).
% now the worker is "checked out", i.e. out of the pool, doing work
% => do the actual work here
2> gen_server:call(Worker, Request).
% when you're done => check the worker back in for future use
% you can also do this from inside the worker itself, using self()
3> poolboy:checkin(PoolName, Worker).
Blocking and non-blocking pool worker calls / asynchronous calls
Using “call” like in the original usage example is a blocking call; no multicore fun & magic there… For non-blocking calls, consider using “cast”: http://erlang.org/doc/man/gen_server.html#cast-2 In some cases it might be best to let the worker check itself back in, using self() to obtain its own PID.
Blocking and non-blocking worker checkout
If you didn’t already, check out the source code of Poolboy. There you can find – among other things – the full spec of the checkout function. Interestingly enough, it has two more optional parameters: Block (default true) and Timeout (default 5000 ms).
1> Worker = poolboy:checkout(PoolName).
% blocks until a worker is free; gives up after the default 5 seconds
2> Worker = poolboy:checkout(PoolName, false).
% no waiting: either an idle worker is available right now, or you get 'full'
3> Worker = poolboy:checkout(PoolName, true, 10000).
% like the first one, but waits up to 10 seconds instead of 5
(At first my idea was to make a merge request for the documentation, but it is so straight to the point – for those who basically already know how to use it 😉 – that I didn’t want to ruin that.)
Since the second-to-latest kernel update of Debian stable, my screen flickered in normal Xfce4 whenever there was movement on the screen. I thought it was a bug because I had not changed anything, but the recent kernel update did not fix it. It took me some time, but if you experience the same you may want to take a look at the dynamic power profiles, which according to this
Normally, programs like apt-get are very resilient to power loss. If there is a sudden power loss, you can pretty much resume where you got interrupted. This is possible because apt-get frequently checks that the data that should be on the disk has really been written to it. Old-school nerds may know the tool “sync”, which is basically what apt-get does a lot.
This is very time-consuming, because file systems like ext4 and Btrfs are not used to being forced to write all the time. They want to keep data in buffers and write when they decide it’s time; that is part of the magic speed they can achieve. Eatmydata redirects the sync-like calls of the programs it wraps into the void: those programs think they are working safely, but actually they are not (and are a hell of a lot faster).
So it’s really fast, but you really shouldn’t lose power (or anything similar) while using eatmydata. But hey, my last power failure is years in the past, and what are the chances you’re using eatmydata at this very moment? I wouldn’t recommend it for cron jobs, though.
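To see what kind of call eatmydata neutralizes, here is a small Python sketch of the write-then-sync pattern apt-get relies on (the file name is made up for the example). Under eatmydata, the fsync would silently become a no-op:

```python
import os

# Write a file durably: flush user-space buffers, then force the
# kernel to push the data to disk before we consider it "written".
with open("example-package-list.txt", "w") as f:
    f.write("package-a installed\n")
    f.flush()                # user-space buffer -> kernel
    os.fsync(f.fileno())     # kernel buffer -> disk; eatmydata no-ops this

# Only after the fsync returns is the data power-loss safe; without it,
# the file system may keep the data in memory for as long as it likes.
```

Skipping thousands of such fsyncs is exactly where the speedup comes from, and also exactly why a power loss in the middle leaves you with files that were never really on disk.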
Update: Problems with Xbian on the Raspberry PI (rpi)
It looks like eatmydata causes problems on the Raspberry Pi with XBian installed: apt-get / dpkg exits with an error from time to time. One can always recover with
sudo dpkg --configure -a
But nevertheless, eatmydata is not recommended for the RPi.
I’ve written a small script to remote-control a computer via Wake-on-LAN (WOL) packets. This could come in quite handy once my Raspberry Pi finally arrives, but any HTPC owner could make use of it too. (You don’t want to run XBMC alongside your torrent client (and vice versa) all day long, do you?)
It basically executes predefined commands depending on the number of WOL packets received, without a timeout. Just arm your Android smartphone with a WOL app, send e.g. two WOL packets, and your box will execute whatever you told it to do. It needs root (because it depends on pcapy), so you’re strongly advised to use sudo wherever you can.
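For reference, a WOL packet is just a “magic packet”: 6 bytes of 0xFF followed by the target’s MAC address repeated 16 times, usually sent as a UDP broadcast. A small Python sketch of building and recognizing one (the MAC address is made up; my script itself uses pcapy to sniff the packets off the wire):

```python
def magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6x 0xFF, then 16 repetitions of the MAC."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + mac_bytes * 16

def is_magic_packet(data: bytes, mac: str) -> bool:
    """Check whether a sniffed payload is a WOL packet for this MAC."""
    return data == magic_packet(mac)

pkt = magic_packet("00:11:22:33:44:55")
print(len(pkt))  # -> 102 (6 + 16 * 6 bytes)

# Sending it would be a plain UDP broadcast (port 9 is customary):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# sock.sendto(pkt, ("255.255.255.255", 9))
```

Counting how often is_magic_packet fires for the box’s own MAC is all the script needs to map “two packets received” to a predefined command.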
AFAIK the Raspberry Pi is not capable of booting up on a WOL packet. Of course, it can receive and handle them nonetheless once running.