* Tidy-up and refactoring of tasks
- `k3s_config_dir` derived from `k3s_config_file`, reused throughout the role
to allow for easy removal of "Rancher" references #73.
- `k3s_token_location` has moved to be in `k3s_config_dir`.
- Tasks for creating directories now looped to capture configuration from
  `k3s_server` and `k3s_agent` and ensure directories exist before k3s
  starts, see #75 and the sketch after this list.
- Server token collected directly from the token file, not the symlinked
  file (node-token).
- `k3s_runtime_config` defined in `vars/` for validation and overwritten in
tasks for control plane and workers.
- Removed unused references to GitHub API.
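A minimal sketch of the directory handling described above; the task name, the `k3s_server` key and the default path used here are illustrative assumptions, not the role's verbatim code:

```
---
# vars: derive the configuration directory from the configuration file path
k3s_config_dir: "{{ k3s_config_file | dirname }}"

---
# tasks: ensure directories from the configuration exist before k3s starts
- name: Ensure k3s directories exist
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - "{{ k3s_config_dir }}"
    - "{{ k3s_server['data-dir'] | default('/var/lib/rancher/k3s') }}"
  become: true
```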
* `set_fact` now uses FQCN (fully-qualified collection name)
* Re-pin `molecule<3.2`
* Command module now uses FQCN; see the example below
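For example, short module names are replaced with their fully-qualified collection names (the fact name and value here are illustrative):

```
# Before
- name: Set control node fact
  set_fact:
    k3s_control_node: true

# After (FQCN)
- name: Set control node fact
  ansible.builtin.set_fact:
    k3s_control_node: true
```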
* Added package checks for #72
* Reorder task files
- Docker tasks moved into a separate directory for ease of removal #67
- Bugfix: control plane on an alternate port didn't work.
- Validation tasks grouped
* Fix Fedora tests
* Add optional documentation links to validation steps #76
* Removed jmespath requirement
* Fix issue with data collection
* Release candidate
- Added option to skip validation checks #47 (illustrated in the sketch after this entry)
- Added SELinux support in containerd #48
- Added check for Etcd member count #46
- Moved token to a file #50
- Added Etcd snapshot configuration options #49
This release also fixes:
- #38: removed the `--disable-agent` option. Please use node taints instead.
- #39: clarified in README.md where jmespath should be installed.
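A minimal sketch of how these options might appear in host variables; `k3s_skip_validation` and the snapshot keys are assumptions based on the entries above and k3s's own `--etcd-snapshot-*` server flags:

```
k3s_skip_validation: true

k3s_server:
  # Etcd snapshot configuration, passed through to the k3s server
  etcd-snapshot-schedule-cron: "0 */12 * * *"
  etcd-snapshot-retention: 5
```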
```
FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to enable service k3s: Failed to enable unit: Access denied\n"}
```
The task never set `become: true`, so it failed due to insufficient permissions for the default user executing it.
Fixes #17
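A minimal sketch of the fix, assuming a typical service task (the task name is illustrative): privilege escalation is enabled so the unit can be managed.

```
- name: Ensure k3s service is enabled and started
  ansible.builtin.systemd:
    name: k3s
    enabled: true
    state: started
  become: true
```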
There appeared to be a race condition where starting all secondary
masters at once would cause the k3s service to fail on a number of
them. A retry has been added to the task to bring them all up until
they stop failing.
Fixes #16
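A sketch of the retry, with illustrative task and register names: the service task re-runs until it succeeds on each secondary master.

```
- name: Ensure k3s service is started on secondary masters
  ansible.builtin.systemd:
    name: k3s
    state: restarted
  become: true
  register: k3s_service_start
  until: k3s_service_start is succeeded
  retries: 3
  delay: 5
```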
This is because, without a CNI, nodes will never become Ready and the task
will fail. You need to deploy your choice of CNI manually (such as
Calico), then check the state of the cluster using `kubectl get nodes`. A
sketch of such a check follows.
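For illustration only, a hedged Ansible task (names, retry count and delay are assumptions) that polls `kubectl get nodes` until no node reports `NotReady`:

```
- name: Wait for all nodes to report Ready
  ansible.builtin.command:
    cmd: kubectl get nodes
  register: kubectl_nodes
  until: "'NotReady' not in kubectl_nodes.stdout"
  retries: 30
  delay: 10
  changed_when: false
```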