Customizing the Docker Compose Files¶
Once the compose files are generated, they can be reviewed and, if necessary, customized.
The following aspects should be taken into account:
- Memory/CPU Reservations and Limits: Both controllers and proxies come with enforced resource limits. The defaults are intentionally generous to fit most use cases. They can be adjusted to better fit the target installation.
- Network Configuration and Exposed Ports: By default, the `proxies` service is configured to expose port `443` on the swarm ingress network. This setting has to be adjusted according to the target installation. If network performance matters, consider bypassing the routing mesh.
- Volumes: Local volumes are used by default in order to fit most installations. If a global volume driver is available on the target swarm cluster, it can be used instead. This would remove the placement constraints on the controllers, providing more flexibility. However, be careful with the I/O performance of these volumes, as it is critical for the stability of the cluster.
- Placement constraints: By default, proxy services are configured to be deployed on workers only. If the workload is allowed on manager nodes, or if only certain swarm nodes are exposed to the internet, then the placement constraint can be adjusted. However, unless a global volume is used for the controller volumes, please do not remove the placement constraints on the controller services.
- Labels: To expose the dashboard, labels on the `proxies` service are required. Please make sure not to remove them.
- Environment variables: To use Datadog or the ACME DNS challenge, for instance, environment variables should be configured on either the proxies or the controllers.
We do not currently provide a Swarm-compliant health check.
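Several of the adjustments above live under the corresponding service definition in the generated compose file. The following sketch shows where resource limits, exposed ports, and environment variables go; the memory/CPU values, published port, and `DD_AGENT_HOST` variable are illustrative, not generated defaults:

```yaml
services:
  proxies:
    environment:
      # Illustrative: point a Datadog client at a local agent.
      - DD_AGENT_HOST=datadog-agent
    ports:
      # Adjust the published port to the target installation.
      - target: 443
        published: 443
        protocol: tcp
    deploy:
      resources:
        # Illustrative values; tune them to the target installation.
        limits:
          cpus: "2"
          memory: 1G
        reservations:
          cpus: "0.5"
          memory: 256M
```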
Common Scenarios Requiring Customization¶
External Load Balancer¶
By default, the manifest files generated by `teectl setup gen` include a service definition with the default port mode for the proxies, which means they will use the Swarm routing mesh.
While the routing mesh can be convenient, it has some disadvantages, such as reduced performance and perceived network instability.
When using an external load balancer, it is possible to bypass the routing mesh and expose the proxy service ports directly on the host machines. To achieve this, the port mode must be changed in the compose file before deploying it to the cluster.
Since host mode allows only one container of a service to run on each Docker node, because of the bound port, it is also recommended to add a placement constraint and set the service deploy mode to global:
```yaml
services:
  proxies:
    # [...]
    deploy:
      mode: global
      placement:
        constraints:
          - node.role != manager
          - node.labels.traefikee-enabled == true
      # [...]
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    # [...]
```
With this configuration the load balancer can target the address of individual nodes running the proxy service.
Client IP address
When using host mode, the real client IP address is shown in request headers and TraefikEE logs, instead of the Docker internal overlay network address.
Port management in host mode
When using host mode, managing port conflicts is the cluster operator's responsibility.
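If another process on the nodes already binds a port the proxies need, one option is to change the published host port in the same `ports` section of the compose file. A minimal sketch, where `8443` is an arbitrary free host port chosen for illustration:

```yaml
services:
  proxies:
    ports:
      - target: 443
        published: 8443  # arbitrary free host port, avoiding a conflict on 443
        protocol: tcp
        mode: host
```

The external load balancer would then be pointed at port 8443 on each node instead of 443.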