tsurud 1.2.0 release notes
Welcome to tsurud 1.2.0!
Experimental support for multiple provisioners
This release of tsuru is the first in a long time to support multiple provisioners. The provisioners in tsuru are responsible for, among other things, scheduling containers on different nodes and moving containers in case of failures.
Our default provisioner implementation remains the same: it includes a battle-tested container scheduler and healer, and has been in production for over 3 years, managing thousands of containers every day.
However, the scenario has changed a lot since tsuru first started 3 years ago. Where the options for container orchestration/scheduling were once few and immature, they are now plentiful and (in some cases) stable. Because of this change, we thought it would be nice to experiment with integrating other container schedulers as tsuru provisioners. These experiments have the potential to motivate us to change the default provisioner used in tsuru and to remove a whole bunch of code from tsuru.
To allow a seamless experience, a provisioner attribute was added to pools. It can be set using tsuru pool-add --provisioner and tsuru pool-update --provisioner. This allows changing the provisioner of a single pool; you can also set the default provisioner in the config file.
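As an example, a minimal tsuru.conf fragment setting the default could look like the sketch below; the top-level provisioner key is assumed from tsuru's configuration layout, with docker being the traditional default value:

```yaml
# tsuru.conf (sketch): pools that do not set their own provisioner
# fall back to this default.
provisioner: docker
```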
Over the course of the next tsuru releases we intend to add experimental provisioner support for:
- Docker Swarm mode
- Mesos/Marathon
- Kubernetes
This release focused on adding support for the swarm provisioner. Please note that as much as we'd love feedback on the newly added provisioners, they should be considered highly experimental and may be removed from tsuru in the future. Because of that, we cannot recommend them for production environments just yet. That said, please do play with them and report any bugs found while using them.
IaaS integration with Docker Machine
Apart from container orchestration, one thing that sets tsuru apart is the ability to also orchestrate virtual machines. This is accomplished using tsuru-managed nodes. Previously we had support for only 3 IaaS providers: Amazon EC2, Digital Ocean and CloudStack.
Starting with this version, we added a new IaaS provider that uses Docker Machine as a backend. This means all drivers supported by Docker Machine, as well as community-supported drivers, can be used to add managed nodes to tsuru. This is huge and adds support for big names like Azure and Google Compute Engine, among others.
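As a sketch, a Docker Machine-backed IaaS could be declared in tsuru.conf roughly as below; the key names, the dm-ec2 IaaS name, and the amazonec2 driver choice are illustrative assumptions, so check the tsuru IaaS documentation for the authoritative schema:

```yaml
# Sketch of a tsuru.conf fragment (key names are assumptions, not the
# authoritative schema). Declares a custom IaaS backed by Docker Machine
# using the amazonec2 driver.
iaas:
  custom:
    dm-ec2:
      provider: dockermachine
      driver:
        name: amazonec2
        options:
          amazonec2-region: us-east-1
```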
Docker TLS support for provisioners
In this version we added support for orchestrating containers on Docker nodes configured with TLS. TLS is mandatory for nodes created using the newly introduced Docker Machine IaaS, and can also be configured for unmanaged nodes and for nodes provisioned with other IaaS providers. Both provisioners, native and swarm, support Docker with TLS.
HTTPS routing support for apps
In this version, we added support for configuring TLS certificates for applications. The certificate and private key are passed directly to the application router, which is responsible for TLS termination. Currently, the planB router is the only router that supports managing TLS certificates and HTTPS routing directly from tsuru.
Certificates should be configured for each app cname using tsuru certificate-set -a <app> -c <cname> cert.crt key.crt, and can be removed with tsuru certificate-unset -a <app> -c <cname>. tsuru certificate-list -a <app> may be used to list certificates bound to an app.
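As an illustration, the end-to-end flow might look like the session below; the app name myapp and the cname myapp.example.com are placeholders, and a self-signed certificate is only suitable for testing:

```shell
# Generate a self-signed certificate and key for the cname (testing only;
# use a CA-issued certificate in production).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=myapp.example.com" -keyout myapp.key -out myapp.crt

# Bind it to the app, list it, and (eventually) remove it:
# tsuru certificate-set -a myapp -c myapp.example.com myapp.crt myapp.key
# tsuru certificate-list -a myapp
# tsuru certificate-unset -a myapp -c myapp.example.com
```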
Other improvements
- gops can be used to extract information from the tsurud process. #1495
- Basic support for prometheus style metrics on
- Improved documentation on how to extract and process metrics from application containers. #1460
- Improved documentation on how to install and use tsuru-dashboard. #1444
- When a single IaaS is configured tsuru will use it as default. #1259
Bug fixes
- Due to a bug in tsuru, it was possible for duplicated entries to be added to the db.routers collection in MongoDB. This collection keeps track of swapped application routers when tsuru app-swap is used. To fix the duplicated entries, simply run tsurud migrate. The migration will try its best to fix the entries, but it might fail in some extreme corner cases. In case of failure it will print the offending entries, which will have to be fixed manually in MongoDB (i.e. by removing one of the duplicated entries).