Over at Linuxserver we use Ansible for its convenient integration with Jinja, a templating engine for Python, along with its powerful framework for executing shell commands. With this we are able to automate quite a lot of work, like generating a uniform and recognizable README, or injecting files into the container (don’t worry, the files are committed and pushed to GitHub before the image is built) for some logic. In order for all of this to work, most of the metadata is filled into two YAML files in each repository, jenkins-vars.yml and readme-vars.yml, which are then presented to Ansible as variables for consumption in the templates.
Having all the metadata stored in a couple of files that can be programmatically read makes it relatively easy to expand what we are able to output, and to ensure changes propagate through all outputs.
Aside from the Dockerfiles, the mentioned var files, and most of the root/ folder, the rest of the repository is actually templated. The benefit of this approach is huge, as we often only need to update a file once, and the changes roll out to the repositories as they receive updates, like when we enabled and started promoting GHCR as the default registry for all our images with a single pull request.
If you are used to the (now paid) DockerHub feature of rebuilding images when the repository updates on GitHub, you might be accustomed to the idea that the readme on DockerHub always reflects the readme on GitHub. That is not the case, regardless of whether the repositories are linked; it is traditionally something you have to update by hand. However, with some creative thinking you can update it with code, which we do. This means that the readme on GitHub and DockerHub is always up to date and identical (as long as it’s not too long for DockerHub). There’s also a derivative of the readme published to the documentation site for each image, going more in-depth on sections of the readme. To up the inception scale, we also template the CI/CD pipeline, from small stuff like the greetings bot to the whole Jenkinsfile used to build, test and push the images.
As I touched on in the blog post announcing automated Unraid templates, creating templates for Unraid was a manual task, often depending on someone on the Linuxserver team who uses Unraid to actually create one. This meant it could take days or even weeks to push a template. As there is a decent number of tasks tied to launching an image, it might even be forgotten, so automating this step would be better for everyone involved.
As outlined earlier, the important building blocks are already present: a templating engine and repository-level metadata. Despite this, I had to create some new blocks.
This adventure started with getting reacquainted with XML, as that is how the Unraid templates are stored; remembering the specification will surely help avoid some future headaches.
I start by adding some helpful notes for any potential contributor wanting to help us maintain the template, pointing them to the correct file for changing the output. As this is done in the template, the full address for the readme-vars.yml file will point to the actual repository.
<?xml version="1.0"?><!-- DO NOT CHANGE THIS FILE MANUALLY, IT IS AUTOMATICALLY GENERATED --><!-- GENERATED FROM {{ project_github_asset }}/readme-vars.yml -->
There are a few things happening here, mostly normal substitution of variables. There is also some transforming done, as "" is not a valid value and literal booleans need to become their string counterparts, along with some logic to conditionally append a link. The rest of the template consists mostly of these types of substitutions and transformations. We will get to ca() later.
Unraid templates support multiple branches. When installing from a template with multiple branches defined using Community Applications, you will be prompted with a selection box listing all the branches defined in the template. To populate these fields, I iterate over the same variable that lists the branches in the readme; however, I have recently added some filtering here to avoid listing deprecated branches.
{# Set the Branches, if any Config items is overwritten. TODO: handle config items #}
{% if development_versions is defined and development_versions == "true" %}
{% for item in development_versions_items if not "deprecate" in item.desc.lower() %}
  <Branch>
    <Tag>{{ ca(item.tag) }}</Tag>
    <TagDescription>{{ ca(item.desc) }}</TagDescription>
{% if item.tag != "latest" %}
    <ReadMe>{{ project_github_repo_url }}{{ "/tree/" + item.tag + "#readme" }}</ReadMe>
    <GitHub>{{ project_github_repo_url }}{{ ("/tree/" + item.tag + "#application-setup") if app_setup_block_enabled is defined and app_setup_block_enabled }}</GitHub>
{% endif %}
{% if item.extra is defined %}
{#- Allow for branch-specific stuff #}
    {{ ca(item.extra) | indent(8) | trim }}
{% endif %}
  </Branch>
{% endfor %}
{% endif %}
{# Set the Branches, if any #}
This snippet is just a simple loop going over the development_versions_items list of dictionaries if development_versions exists. The following readme-vars.yml produced the above screenshot:
# development version
development_versions: true
development_versions_items:
  - { tag: "latest", desc: "Stable Radarr releases" }
  - { tag: "develop", desc: "Radarr releases from their develop branch" }
  - { tag: "nightly", desc: "Radarr releases from their nightly branch" }
  - { tag: "nightly-alpine", desc: "Radarr releases from their nightly branch using our Alpine baseimage" }
I took the opportunity to add a key called extra to the dictionary, as CA has the ability to have separate config variables per branch. Unfortunately, this is implemented in a way that makes it hard to use: the presence of any branch-specific items disregards all other Config tags specified in the Container tag. This means that a dictionary like { tag: "nightly", desc: "Radarr releases from their nightly branch", extra: { nightly_var: "Do monkeydance"} } would render all other configuration items (such as environment variables, bind mounts and port mappings) void if this branch was chosen. This is something I might have to account for at some point, by essentially generating the same values once per branch.
The next part I wanted to tackle was building the link for the WebUI. Here I had to be creative: while the information needed was present, it is not easily accessible, as it is stored in the format Groovy wants variables to be presented in a Jenkinsfile. The input would look like this for the SWAG image:
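A hypothetical excerpt of repo_vars from jenkins-vars.yml in that shape (the exact SWAG values may differ; CI, CI_WEB, CI_SSL, CI_PORT and CI_DOCKERENV are the keys used further down):

# Hypothetical repo_vars excerpt; the real SWAG values may differ
repo_vars:
  - CI = 'true'
  - CI_WEB = 'true'
  - CI_PORT='443'
  - CI_SSL = 'true'
  - CI_DOCKERENV = 'TEST_RUN=1'

Note that the items are plain strings of Groovy assignments, some padded with spaces around the equals sign and some not.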
Getting the value of CI_PORT is not as easy as it should be; using a getter on repo_vars does not work. Fortunately, we can use some clever replacements and splits on each item under repo_vars to build a new and usable variable.
{#- Create a real object from repo_vars -#}
{%- set better_vars = {} -%}
{%- for i in repo_vars -%}
{%- set i = (i | replace(' = ', '=', 1) | replace('=', '¯\_(ツ)_/¯', 1) | replace("'", "") | replace('"', "")).split('¯\_(ツ)_/¯') -%}
{%- set x = (better_vars.__setitem__(i[0], i[1])) -%}
{%- endfor -%}
The new variable this creates is called better_vars, which is a dictionary type. i and x are used as throwaway variables, as Jinja does not really have a good way to run straight-up code (and with good reason, I imagine). Since repo_vars is an array type, it serves as an iterator and saves me from even more bodging. The first order of business is to make the list uniform across both ways of placing the equals sign; once none of them have any padding with spaces, I can start replacing and splitting the rest of the line until it resembles a typical Python-like key-value pair.
In the first iteration of this code there was no shrug, but the carefully chosen example above highlights how the split would work against us: the macro would fail when it arrived at CI_DOCKERENV, as the value of that key is itself a key-value pair.
We are now working in Python land, so we can remove both double and single quotes. This could come back and bite us later, but as it stands right now it is not an issue. At this point CI_DOCKERENV='TEST_RUN=1' would be CI_DOCKERENV¯\_(ツ)_/¯TEST_RUN=1. This is not useful yet, but we just need to convert this string to a key-value pair, which is easily done by using ¯\_(ツ)_/¯ as the delimiter. Once we have this key-value pair, we can use the __setitem__ method of the built-in Python dictionary type.
After all that, there is now a variable that is easier to work with, simply by using a getter.
{# Set the WebUI link based on the link the CI runs against #}
{% if better_vars.get("CI_WEB") and better_vars.get("CI") == "true" %}
  <WebUI>{{ "https" if better_vars.get("CI_SSL") == "true" else "http" }}://[IP]:[PORT:{{ better_vars.get("CI_PORT") }}]</WebUI>
{% endif %}
This value is not supposed to hold a real URL, just the parts necessary for Unraid to build one. To do this, it needs to know which container port the application is running on, which we express using the syntax [PORT:80]. Now, if a user maps container port 80 to, say, host port 180, the Unraid WebUI button will point to the IP of the Unraid host with port 180.
When we told Squid (the guy running Community Applications) to switch us over to the new repo, we actually got blacklisted in CA because I forgot how the greater-than and less-than signs are treated both by XML and by CA (Community Applications) specifically. In CA they are blacklisted characters; simply having them in the user-facing parts of the template gets the whole template repository blacklisted. This prompted a new macro, one to filter out the illegal characters.
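A minimal sketch of such a macro (the word substituted for & is an assumption on my part):

{#- Sketch of the ca() macro: strip CA-blacklisted characters, spell out the ampersand, then escape for safety -#}
{%- macro ca(text) -%}
{{ text | string | replace('<', '') | replace('>', '') | replace('[', '') | replace(']', '') | replace('&', 'and') | e }}
{%- endmacro -%}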
This macro simply replaces <, >, [ and ] with nothing, while turning & into a word. For extra safety, I put the escape filter at the end. At the time of writing there is no supported syntax in CA to make a hyperlink from a word.
All “free text” input in the template goes through this filter to prevent another blacklisting.
Since the schema made for CA supports showing a changelog, we might as well use it. The metadata needed is already present in readme-vars.yml, so no real work is needed to get the data. As this is the internet, and people come from different places, the date format we use is of course incompatible with the one CA accepts, so I made a macro to convert mm.dd.yy to yyyy.mm.dd.
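A minimal sketch of that conversion; the macro name ca_date and the assumption that every year is 20xx are illustrative:

{#- Sketch of a date conversion macro: mm.dd.yy -> yyyy.mm.dd, assuming all years are 20xx -#}
{%- macro ca_date(date) -%}
{%- set mm, dd, yy = date.split('.') -%}
{{ "20" ~ yy }}.{{ mm }}.{{ dd }}
{%- endmacro -%}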
Next up is a macro that gets called when creating environment variables, to determine whether the variable should be masked.
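A sketch of how such a check could work; the list of keywords treated as sensitive is an assumption:

{#- Sketch of the mask() macro: mark variables that look like credentials so Unraid hides their value -#}
{%- macro mask(env_var) -%}
{%- set ns = namespace(masked=false) -%}
{%- for word in ["password", "secret", "token", "key"] -%}
{%- if word in env_var | lower -%}{%- set ns.masked = true -%}{%- endif -%}
{%- endfor -%}
{{ "true" if ns.masked else "false" }}
{%- endmacro -%}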
Along with an entry to list potential requirements, the changelog macro is used like this:
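Roughly along these lines; the Requires variable and the changelog key names are illustrative, and ca_date stands in for the date macro sketched above:

{#- Sketch of the requirements entry and the changelog loop; variable and key names are illustrative -#}
  <Requires>{{ ca(unraid_requirement) if unraid_requirement is defined }}</Requires>
  <Changes>
{% for entry in changelogs %}
    {{ ca_date(entry.date) }} - {{ ca(entry.desc) }}
{% endfor %}
  </Changes>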
Another thing you might want to do with your container is pass along some less common parameters, like memory or CPU limits. This is something I had to tackle with "code". As the metadata for this is closer to the real Compose way of writing it, implementing support for security options is also coming.
This logic defines a variable called ExtraParam, then massages different entries from the metadata into an array of strings, where each item in the array is a valid docker run argument.
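A sketch of what that massaging could look like; the metadata key and value names below are illustrative, not the actual ones from readme-vars.yml:

{#- Sketch of building the ExtraParam value; metadata key names here are illustrative -#}
{%- set ExtraParam = [] -%}
{%- if custom_params is defined -%}
{%- for item in custom_params -%}
{%- set x = ExtraParam.append("--%s=%s" | format(item.name, item.value)) -%}
{%- endfor -%}
{%- endif -%}
  <ExtraParams>{{ ExtraParam | join(' ') }}</ExtraParams>

Each item ends up as a single docker run flag, so joining the array on a space yields a string Unraid can append straight onto the run command.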
This article is not written in chronological order based on the development cycle; rather, it follows the structure of the end product. You can see this in the lack of macros in the rest of the template; if I ever have to do major revisions of this template, turning these sections into macros would be the first thing to do.
A good chunk of the applications we bundle use multiple ports for different purposes, which is why we have sections in our metadata for optional ports. Thankfully, Unraid has the ability to display whether a port is optional or not. The schema also exposes the protocol part of a port mapping, a value we also support in our metadata. Here you can see the ca macro in action; it is used to clean characters from our metadata.
There is a fair amount of logic to build the name and description of these ports. It uses the name defined in the metadata if present, automatically names the first port "WebUI", and otherwise falls back to the naming Unraid would use. For the description it uses the one defined in the metadata, or falls back to the value Unraid would have used. Mostly the same logic is used for the optional ports.
{# Set required ports, gets the name from the name atribute if present, or "WebUI" if it is the first port #}
{% if param_usage_include_ports | default(false) %}
{% for item in param_ports %}
{% set port, proto = item.internal_port.split('/') if "/" in item.internal_port else [item.internal_port, false] %}{#- Logic to get the protocol #}
  <Config Name="{{ ca(item.name) if item.name is defined else "WebUI" if loop.first else "Port: " + port }}" Target="{{ port }}" Default="{{ ca(item.external_port) }}" Mode="{{ proto if proto else "tcp" }}" Description="{{ ca(item.port_desc) if item.port_desc is defined else "Container Port: " + port }}" Type="Port" Display="always" Required="true" Mask="false"/>
{% endfor %}
{% endif %}
{#- Set required ports, gets the name from the name atribute if present, or "WebUI" if it is the first port #}
{#- Set optional ports #}
{% if opt_param_usage_include_ports | default(false) %}
{% for item in opt_param_ports %}
{% set port, proto = item.internal_port.split('/') if "/" in item.internal_port else [item.internal_port, false] %}{#- Logic to get the protocol #}
  <Config Name="{{ ca(item.name) if item.name is defined else "Port: " + port }}" Target="{{ port }}" Default="{{ ca(item.external_port) }}" Mode="{{ proto if proto else "tcp" }}" Description="{{ ca(item.port_desc) if item.port_desc is defined else "Container Port: " + port }}" Type="Port" Display="always" Required="false" Mask="false"/>
{% endfor %}
{% endif %}
{#- Set optional ports #}
The logic used for volumes is pretty much a copy-paste of the ports logic, but instead of looking for a "WebUI" port, it is trying to find a volume to call "Appdata". There is also a piece of extra logic to see if a bind volume is marked as read only.
{#- Set required volumes, gets the name from the name atribute if present, or "Appdata" if it is the /config location #}
{% if param_usage_include_vols | default(false) %}
{% for item in param_volumes %}
{% set path, mode = item.vol_path.split(':') if ":" in item.vol_path else [item.vol_path, false] %}{#- Logic to get the mode #}
  <Config Name="{{ ca(item.name) if item.name is defined else "Appdata" if path == "/config" else "Path: " + path }}" Target="{{ ca(path) }}" Default="{{ ca(item.vol_host_path) if item.default is defined and item.default is sameas true }}" Mode="{{ mode if mode else "rw" }}" Description="{{ ca(item.desc) if item.desc is defined else "Path: " + path }}" Type="Path" Display="{{ "advanced" if path == "/config" else "always" }}" Required="true" Mask="false"/>
{% endfor %}
{% endif %}
{#- Set required volumes, gets the name from the name atribute if present, or "Appdata" if it is the /config location #}
{#- Set optional volumes #}
{% if opt_param_usage_include_vols | default(false) %}
{% for item in opt_param_volumes %}
{% set path, mode = item.vol_path.split(':') if ":" in item.vol_path else [item.vol_path, false] %}{#- Logic to get the mode #}
  <Config Name="{{ ca(item.name) if item.name is defined else "Appdata" if path == "/config" else "Path: " + path }}" Target="{{ ca(path) }}" Default="{{ ca(item.vol_host_path) if item.default is defined and item.default is sameas true }}" Mode="{{ mode if mode else "rw" }}" Description="{{ ca(item.desc) if item.desc is defined else "Path: " + path }}" Type="Path" Display="always" Required="false" Mask="false"/>
{% endfor %}
{% endif %}
{#- Set optional volumes #}
The logic for variables is also based on the ports logic, but it filters away some variables we hardcode, or variables that Unraid automatically manages.
The IDs for PUID and PGID in Unraid follow an agreed-upon ID from the early days; the 99 user is 'nobody'.
{% set skip_envs = ["puid", "pgid", "tz", "umask"] %}{#- Drop envs that are either hardcoded, or automaticcly added by unraid #}
{#- Set required variables, gets the name from the name atribute #}
{% if param_usage_include_env | default(false) %}
{% for item in param_env_vars if not item.env_var | lower is in skip_envs %}
  <Config Name="{{ ca(item.name) if item.name is defined else item.env_var }}" Target="{{ item.env_var }}" Default="{{ item.env_options | join('|') if item.env_options is defined else ca(item.env_value) }}" Description="{{ ca(item.desc) if item.desc is defined else "Variable: " + path }}" Type="Variable" Display="always" Required="true" Mask="{{ mask(item.env_var) }}"/>
{% endfor %}
{% endif %}
{#- Set required variables, gets the name from the name atribute #}
{#- Set optional variables #}
{% if opt_param_usage_include_env | default(false) %}
{% for item in opt_param_env_vars if not item.env_var | lower is in skip_envs %}
  <Config Name="{{ ca(item.name) if item.name is defined else item.env_var }}" Target="{{ item.env_var }}" Default="{{ ca(item.env_value) }}" Description="{{ ca(item.desc) if item.desc is defined else "Variable: " + path }}" Type="Variable" Display="always" Required="false" Mask="{{ mask(item.env_var) }}"/>
{% endfor %}
{% endif %}
{#- Set optional variables #}
  <Config Name="PUID" Target="PUID" Default="99" Description="Container Variable: PUID" Type="Variable" Display="advanced" Required="true" Mask="false"/>
  <Config Name="PGID" Target="PGID" Default="100" Description="Container Variable: PGID" Type="Variable" Display="advanced" Required="true" Mask="false"/>
  <Config Name="UMASK" Target="UMASK" Default="022" Description="Container Variable: UMASK" Type="Variable" Display="advanced" Required="false" Mask="false"/>
{# Set required devices, gets the name from the name atribute #}
{% if param_device_map | default(false) %}
{% for item in param_devices %}
  <Config Name="{{ ca(item.name) if item.name is defined else item.device_path }}" Default="{{ item.device_path }}" Description="{{ ca(item.desc) if item.desc is defined else "Device: " + path }}" Type="Device" Display="always" Required="true" Mask="false"/>
{% endfor %}
{% endif %}
{#- Set required variables, gets the name from the name atribute #}
{#- Set optional devices #}
{% if opt_param_device_map | default(false) %}
{% for item in opt_param_devices %}
  <Config Name="{{ ca(item.name) if item.name is defined else item.device_path }}" Default="{{ item.device_path }}" Description="{{ ca(item.desc) if item.desc is defined else "Device: " + path }}" Type="Device" Display="always" Required="false" Mask="false"/>
{% endfor %}
{% endif %}
{#- Set optional devices #}
</Container>