
Advanced Installation of Traefik Enterprise Edition with One Control Node on Swarm with Compose Files

This installation guide is for experts who want to fine-tune their TraefikEE (Traefik Enterprise Edition) installation.

It covers how to install TraefikEE on a Docker Swarm cluster using Docker Compose files.

Swarm Knowledge

Assistance with configuring or setting up a Docker Swarm cluster is not included in this guide. If you need more information about Docker Swarm, start with the official Docker Swarm documentation.

Requirements

  • The traefikeectl tool installed
  • A Docker Swarm (swarm mode) cluster:
    • Version: >= 1.13 (minimum API version 1.25)
    • At least 1 manager node, and 1 worker node
  • Docker client
    • Version: >= 1.13 (minimum API version 1.25)
    • Configured to communicate with your Swarm cluster by correctly setting the --host flag or the DOCKER_HOST environment variable, along with the security options required by your setup. For more information, see the Docker documentation.

Ingress ports requirements

TraefikEE publishes multiple ports on your cluster ingress routing mesh to handle external traffic:

  • The HTTP and HTTPS ports (default: 80 and 443) on the data-node service
  • The control API port, used by traefikeectl to communicate with TraefikEE (default: 55055), and the dashboard port, where the dashboard is served (default: 8080), on the control-node service

Customizing these ports is useful if the standard ports are already in use, or to run multiple clusters in parallel.
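
For instance, before picking ports you can check which ingress ports are already published on the cluster. This is plain Docker tooling used here as a quick sanity check, not a TraefikEE-specific step:

# Show the ports already published by existing Swarm services.
docker service ls --format '{{.Name}}: {{.Ports}}'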

Download and Extract Compose Files

curl -sSL \
    https://s3.amazonaws.com/traefikee/examples/v1.0.0/swarm/traefikee-swarm-v1.0.0.tar.gz | tar xvz
./bootstrap-node.yml
./control-node.yml
./data-node-global.yml
./data-node-replicated.yml
./single-control-node.yml

Create the TraefikEE Network

Create the network that TraefikEE uses to communicate internally.

docker network create --driver=overlay traefikee-net
pmvxcxzucmcshro6tfpta7az2 # newly created network ID, differs per execution

Note

You can customize the network name, but make sure to use the same name consistently in the following commands.

Create the TraefikEE License Secret

Create the docker swarm secret containing your license key.

# With the TRAEFIKEE_LICENSE_KEY environment variable previously defined
echo -n ${TRAEFIKEE_LICENSE_KEY} | docker secret create traefikee-license -
g7akfclckt71e0sej85doj8x4 # newly created secret ID, differs per execution

Choose a Cluster Name

TraefikEE needs a common identifier called the cluster name (specified using the --clustername option) in order to recognize its resources. In the examples throughout this guide, the cluster name is clustername.

Choose Where to Run the Control Node

The TraefikEE control node maintains persistent state on a local volume. To make sure this state is always reused, Swarm must always schedule the control node on the same Swarm node.

To do that, we tag a node with two specific labels and define a placement constraint so that the control node is scheduled only on the node carrying those labels.

Install on a Swarm manager

The node where the control node is installed must be a Swarm manager.
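
To pick a suitable node, you can list the managers in your cluster; this is a standard Docker command, shown here only for convenience:

# List the Swarm manager nodes; note the ID of the one you want to use.
docker node ls --filter role=manager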

Label structure

The labels must be:

  • com.containous.traefikee.<CLUSTER_NAME>.singlecn=true
  • com.containous.traefikee.<CLUSTER_NAME>.installing=true

Here, <CLUSTER_NAME> is the same value as the --clustername option of the control-node service.

Once you have selected the node, run the following command:

docker node update <NODE_ID> \
    --label-add="com.containous.traefikee.clustername.singlecn=true" \
    --label-add="com.containous.traefikee.clustername.installing=true"
tf4wrrnyuksi0s8k1r8snx92e # ID of the updated node
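
To double-check that the labels were applied, you can inspect the node; again, this is plain Docker tooling rather than a TraefikEE-specific step:

# Print the labels set on the node; both TraefikEE labels should appear.
docker node inspect <NODE_ID> --format '{{ json .Spec.Labels }}'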

Create the Control Node

Installation behind a proxy

To install TraefikEE behind a proxy, you must define the HTTP_PROXY and HTTPS_PROXY environment variables for each TraefikEE container.

To do so, edit the compose files and add the following snippet to each of them:

services:
  control-node: # or bootstrap-node or data-node
    # [...]
    environment:
      HTTP_PROXY: "http://127.0.0.1:3129"
      HTTPS_PROXY: "http://127.0.0.1:3129"

  • Open the file ./single-control-node.yml with your favorite editor:
    • Replace the ${TRAEFIKEE_LICENSE_SECRET} variable with the name of the secret you just created
    • Replace the ${TRAEFIKEE_CLUSTER_NAME} variable with the desired cluster name
    • Replace the ${TRAEFIKEE_SINGLE_CN_LABEL} variable with com.containous.traefikee.clustername.singlecn (with clustername being the same value as ${TRAEFIKEE_CLUSTER_NAME})
    • Replace the ${TRAEFIKEE_SWARM_NETWORK} variable with traefikee-net (or the one you chose)
    • Replace the ${TRAEFIKEE_LOG_LEVEL} variable with the desired log level (among DEBUG, INFO, ERROR and WARN)
    • Replace the ${TRAEFIKEE_DASHBOARD_PORT} variable with the desired ingress port for the dashboard
    • Replace the ${TRAEFIKEE_CTLAPI_PORT} variable with the desired ingress port for the control API
  • Save the file.

Note

Instead of replacing environment variables in the file, you can export them in your shell.
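
For instance, a minimal sketch of exporting the variables instead of editing the file (the values are illustrative, and this assumes docker stack deploy substitutes them from your shell environment):

# Illustrative values; adjust to the secret, cluster name, network and ports you chose.
export TRAEFIKEE_LICENSE_SECRET=traefikee-license
export TRAEFIKEE_CLUSTER_NAME=clustername
export TRAEFIKEE_SINGLE_CN_LABEL=com.containous.traefikee.clustername.singlecn
export TRAEFIKEE_SWARM_NETWORK=traefikee-net
export TRAEFIKEE_LOG_LEVEL=INFO
export TRAEFIKEE_DASHBOARD_PORT=8080
export TRAEFIKEE_CTLAPI_PORT=55055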

Create the control node to initialize the cluster:

docker stack deploy --compose-file=./single-control-node.yml clustername
Creating service clustername_control-node

Validate that your control node is up and running:

docker service ls
ID                  NAME                     MODE                REPLICAS            IMAGE                              PORTS
ausnb79nsewp        clustername_control-node replicated          1/1                 store/containous/traefikee:v1.0.0  *:8080->8080/tcp, *:55055->55055/tcp

Make sure the control node is running on the node you chose:

docker service ps clustername_control-node
ID                  NAME                         IMAGE                               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
u0tvy5hlewgc        clustername_control-node.1   store/containous/traefikee:v1.0.0   63f12b68ebec        Running             Running 21 minutes ago

The NODE field should be the hostname of the node you selected to install the control node.

Connect traefikeectl to the New Cluster

Custom Control API Port

If you specified a different control API port than the default value (55055) when creating the control node, do not forget to specify the --swarm.ctlapiport option when running traefikeectl connect.
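
For instance (a sketch only; 55056 is an illustrative value, and the flag syntax follows the option name given above):

# Replace 55056 with the control API port you configured.
traefikeectl connect --swarm --swarm.ctlapiport=55056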

Configure traefikeectl to have access to the new cluster.

traefikeectl connect --swarm
Retrieving TraefikEE Control credentials...ok
Removing cluster credentials from platform...ok
Credentials saved in "$HOME/.config/clustername/traefikee", please make sure to keep them safe as they can never be retrieved again.
✔ Successfully gained access to the cluster. You can now use other traefikeectl commands.

One-time operation

When you run traefikeectl connect, your credentials are retrieved once; they cannot be retrieved again without re-installing the TraefikEE cluster. Remember to keep them safe!

Check your API access by listing the cluster nodes.

traefikeectl list-nodes
Name          Availability  Role          Leader
----          ------------  ----          ------
030cc2b6230a  ACTIVE        CONTROL NODE  YES

Create the Data Nodes

Two options are available for the data nodes: global mode, where Swarm schedules one data node on every node of the cluster, or replicated mode, where you choose a fixed number of data-node replicas.

Global Mode

  • Open the file ./data-node-global.yml with your favorite editor:
    • Replace the ${TRAEFIKEE_SWARM_NETWORK} variable with traefikee-net (or the one you chose)
    • Replace the ${TRAEFIKEE_DATA_NODE_JOIN_TOKEN} variable with the name of the secret containing the data node join token. In our case, clustername-data-node-join-token
    • Replace the ${TRAEFIKEE_PEER_ADDRESSES} variable with the address of the control node. In our case, clustername_control-node:4242
    • Replace the ${TRAEFIKEE_LOG_LEVEL} variable with INFO (or DEBUG if needed)
    • Replace the ${TRAEFIKEE_HTTP_PORT} variable with 80 (or any other port you want)
    • Replace the ${TRAEFIKEE_HTTPS_PORT} variable with 443 (or any other port you want)
  • Save the file.

Note

Instead of replacing environment variables in the file, you can export them in your shell.

Note

TraefikEE relies on the default container hostnames; they must not be overridden with the hostname option in the compose file.

docker stack deploy --compose-file=./data-node-global.yml clustername
Creating service clustername_data-node

Validate that your data node is up and running:

docker service ls
ID                  NAME                       MODE                REPLICAS            IMAGE                               PORTS
t37nf8xvpw3b        clustername_control-node   replicated          1/1                 store/containous/traefikee:v1.0.0   *:8080->8080/tcp, *:55055->55055/tcp
khcwbiffzocq        clustername_data-node      global              2/2                 store/containous/traefikee:v1.0.0   *:80->80/tcp, *:443->443/tcp
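
As with the control node, you can check where the data-node tasks were scheduled:

# List the tasks of the data-node service and the Swarm nodes they run on.
docker service ps clustername_data-node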

Replicated Mode

  • Open the file ./data-node-replicated.yml with your favorite editor:
    • Replace the ${TRAEFIKEE_SWARM_NETWORK} variable with traefikee-net (or the one you chose)
    • Replace the ${TRAEFIKEE_DATA_NODE_JOIN_TOKEN} variable with the name of the secret containing the data node join token. In our case, clustername-data-node-join-token
    • Replace the ${TRAEFIKEE_PEER_ADDRESSES} variable with the address of the control node. In our case, clustername_control-node:4242
    • Replace the ${TRAEFIKEE_DATA_NODE_REPLICAS_COUNT} variable with the amount of data nodes you want
    • Replace the ${TRAEFIKEE_LOG_LEVEL} variable with INFO (or DEBUG if needed)
    • Replace the ${TRAEFIKEE_HTTP_PORT} variable with 80 (or any other port you want)
    • Replace the ${TRAEFIKEE_HTTPS_PORT} variable with 443 (or any other port you want)
  • Save the file.

Note

Instead of replacing environment variables in the file, you can export them in your shell.

docker stack deploy --compose-file=./data-node-replicated.yml clustername
Creating service clustername_data-node

Validate that your data node is up and running:

docker service ls
ID                  NAME                       MODE                REPLICAS            IMAGE                               PORTS
t37nf8xvpw3b        clustername_control-node   replicated          1/1                 store/containous/traefikee:v1.0.0   *:8080->8080/tcp, *:55055->55055/tcp
c4e7eqjir9gk        clustername_data-node      replicated          1/1                 store/containous/traefikee:v1.0.0   *:80->80/tcp, *:443->443/tcp

Validate your Deployment

You can use traefikeectl list-nodes to see the nodes of your TraefikEE cluster.

traefikeectl list-nodes
Name          Availability  Role          Leader
----          ------------  ----          ------
e51a496c9ebc  ACTIVE        CONTROL NODE  YES
6222392b53dc  ACTIVE        DATA NODE
bfd19ebc1afa  ACTIVE        DATA NODE

Configure your TraefikEE Cluster

You can use traefikeectl deploy to configure your cluster.

traefikeectl deploy --docker.swarmmode

Backup your Installation

Don't forget to set up regular backups using the traefikeectl backup command. More information can be found in the backup and restore documentation.
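
For instance, a minimal sketch of a cron entry that runs a nightly backup; the schedule, binary path, and output handling are assumptions, so check the backup and restore documentation for the exact behavior and options of traefikeectl backup:

# Illustrative crontab entry: run a TraefikEE backup every night at 02:00.
# The path to traefikeectl and any required options depend on your setup.
0 2 * * * /usr/local/bin/traefikeectl backup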