Otherwise, when completely uninstalling the etcd-enabled cluster, it fails with:
```
TASK [xanmanning.k3s : Check the conditions when embedded etcd is defined] ***************************************
fatal: [vm0]: FAILED! => {
"assertion": "(((k3s_controller_list | length) % 2) == 1)",
"changed": false,
"evaluated_to": false,
"msg": "Etcd should have a minimum of 3 defined members and the number of members should be odd. Please see notes about HA in README.md"
}
fatal: [vm1]: FAILED! => {
"assertion": "(((k3s_controller_list | length) % 2) == 1)",
"changed": false,
"evaluated_to": false,
"msg": "Etcd should have a minimum of 3 defined members and the number of members should be odd. Please see notes about HA in README.md"
}
fatal: [vm2]: FAILED! => {
"assertion": "(((k3s_controller_list | length) % 2) == 1)",
"changed": false,
"evaluated_to": false,
"msg": "Etcd should have a minimum of 3 defined members and the number of members should be odd. Please see notes about HA in README.md"
}
```
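The failing check is an Ansible assertion over the number of defined control-plane members; during a full uninstall the controller list likely ends up empty, and zero members is even, so the odd-count assertion fails even though the cluster is being torn down. A minimal sketch of what such a check looks like (task name and message taken from the output above; the role's actual variable construction may differ):
```
# Sketch of the member-count assertion, assuming k3s_controller_list is a
# list of hosts acting as control-plane members. The error message implies
# a companion minimum-of-3 check alongside the odd-count check.
- name: Check the conditions when embedded etcd is defined
  ansible.builtin.assert:
    that:
      - (k3s_controller_list | length) >= 3
      - (((k3s_controller_list | length) % 2) == 1)
    fail_msg: >-
      Etcd should have a minimum of 3 defined members and the number of
      members should be odd. Please see notes about HA in README.md
```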
If you run the role on an Ansible controller configured with that setting, it fails with:
```
fatal: [vm0]: FAILED! => {"msg": "Unexpected templating type error occurred on ({% for host in ansible_play_hosts_all %}\n{% filter string %}\n{% filter replace('\\n', ' ') %}\n{{ host }}\n@@@\n{{ hostvars[host].ansible_host | default(hostvars[host].ansible_fqdn) }}\n@@@\nC_{{ hostvars[host].k3s_control_node }}\n@@@\nP_{{ hostvars[host].k3s_primary_control_node | default(False) }}\n{% endfilter %}\n{% endfilter %}\n@@@ END:{{ host }}\n{% endfor %}): sequence item 4: expected str instance, bool found"}
```
- OS: Ubuntu 16.04 LTS (amd64)
- Ansible: 2.9.20
- Python: 2.7
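The `expected str instance, bool found` error comes from a boolean (`k3s_control_node` / `k3s_primary_control_node`) entering a Jinja2 filter chain that joins string fragments, which older Jinja2 on Python 2.7 refuses. A minimal sketch of the usual workaround, with hypothetical fact names, is to cast the booleans to strings explicitly:
```
# Hypothetical illustration: cast boolean hostvars to strings before they
# are concatenated inside a Jinja2 template, avoiding "expected str
# instance, bool found" on Python 2.7.
- name: Build node metadata lines
  ansible.builtin.set_fact:
    node_metadata: >-
      {% for host in ansible_play_hosts_all %}
      {{ host }}@@@C_{{ hostvars[host].k3s_control_node | string }}@@@P_{{ hostvars[host].k3s_primary_control_node | default(false) | string }}@@@ END:{{ host }}
      {% endfor %}
```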
```
FAILED! => {"msg": "The conditional check 'item in kubectl_get_nodes_result.stdout' failed. The error was: error while evaluating conditional (item in kubectl_get_nodes_result.stdout): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/home/rancher/.ansible/roles/xanmanning.k3s/tasks/teardown/drain-and-remove-nodes.yml': line 39, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Ensure uninstalled nodes are removed\n ^ here\n"}
```
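The conditional fails because the registered `kubectl_get_nodes_result` has no `stdout` on hosts where the registering task did not run (for example, when it was skipped). A defensive sketch, with a hypothetical loop variable, guards the lookup with a default:
```
# Sketch of a guarded conditional: default stdout to an empty string so the
# check cannot fail when the registering task was skipped. nodes_to_remove
# is a hypothetical list for illustration.
- name: Ensure uninstalled nodes are removed
  ansible.builtin.command:
    cmd: "kubectl delete node {{ item }}"
  loop: "{{ nodes_to_remove }}"
  when: item in (kubectl_get_nodes_result.stdout | default(''))
```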
- Added uninstall task to remove hard-linked files #88
- Fixed missing become for `systemd` operations tasks. #89
- Added `k3s_start_on_boot` to control `systemd.enabled`.
* Tidy up and refactoring of tasks
- `k3s_config_dir` derived from `k3s_config_file`, reused throughout the role
to allow for easy removal of "Rancher" references #73.
- `k3s_token_location` has moved to be in `k3s_config_dir`.
- Tasks for creating directories now looped to capture configuration from
  `k3s_server` and `k3s_agent` and ensure directories exist before k3s
  starts, see #75 (a sketch of the pattern follows these release notes).
- Server token collected directly from token file, not symlinked file
(node-token).
- `k3s_runtime_config` defined in `vars/` for validation and overwritten in
tasks for control plane and workers.
- Removed unused references to GitHub API.
* set_fact now uses FQCN
* re-pin molecule<3.2
* Command module now uses FQCN
* Added package checks for #72
* Reorder task files
- Docker tasks moved into a separate directory for ease of removal #67
- Bugfix: Control plane on alternate port didn't work.
- Validation tasks grouped
* Fix Fedora tests
* Add optional documentation links to validations steps #76
* Removed jmespath requirement
* Fix issue with data collection
* Release candidate
- Added option to skip validation checks #47
- Add SELinux support in containerd #48
- Added check for Etcd member count #46
- Moved token to a file #50
- Added Etcd snapshot configuration options #49
This release also fixes:
- #38: removed the `--disable-agent` option; please use node taints instead.
- #39: clarified in README.md where jmespath should be installed.
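For the looped directory-creation change noted above (#75), a minimal sketch of the pattern (variable names assumed; the role's actual config gathering differs):
```
# Assumed illustration: ensure every directory referenced by k3s_server and
# k3s_agent configuration exists before the k3s service starts.
- name: Ensure k3s configuration directories exist
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop: "{{ k3s_server_config_dirs + k3s_agent_config_dirs }}"
```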
```
FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to enable service k3s: Failed to enable unit: Access denied\n"}
```
The task never sets `become: true`, so it fails because the default user lacks the permissions needed to enable systemd units.
Fixes #17
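A minimal sketch of the fix (task shape assumed): escalate privileges on the systemd task so enabling the unit no longer hits `Access denied`:
```
# Assumed illustration of the missing-become fix for systemd operations.
- name: Ensure k3s service is enabled
  ansible.builtin.systemd:
    name: k3s
    enabled: true
  become: true
```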
There appeared to be a race condition where starting all the secondary
masters at once would cause the k3s service to fail on some of them. A
retry has been added to the task to keep attempting to bring them up
until they all start successfully.
Fixes #16
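A sketch of the retry pattern (retry and delay values assumed):
```
# Assumed illustration: retry starting k3s on secondary masters to ride out
# the race condition described above.
- name: Ensure k3s service is started
  ansible.builtin.systemd:
    name: k3s
    state: started
  register: k3s_service
  until: k3s_service is succeeded
  retries: 5
  delay: 10
  become: true
```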
This is because without a CNI, nodes will never be ready and the task
will fail. You need to deploy your choice of CNI manually (such as
Calico) then check the state of the cluster using `kubectl get nodes`.
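Once a CNI has been applied, readiness can be polled; a minimal sketch (retry counts assumed):
```
# Assumed illustration: poll `kubectl get nodes` until no node reports
# NotReady, after a CNI such as Calico has been deployed manually.
- name: Wait for all nodes to be Ready
  ansible.builtin.command:
    cmd: kubectl get nodes
  register: k3s_nodes
  until: "'NotReady' not in k3s_nodes.stdout"
  retries: 30
  delay: 10
  changed_when: false
```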
1. Now does not remove prerequisite packages, lvm2 was included in
these packages (not good when you use LVM2 for real).
2. Added a bit more idempotency to the shell scripts - only delete if
it exists.
3. Check that the process isn't running and binaries are gone.
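A sketch of point 2's idempotency (path assumed; the role's actual scripts differ): only remove an artifact when it is actually present, so repeated teardown runs do not error.
```
# Assumed illustration: `removes` makes Ansible skip the command entirely
# when the binary is already gone, so re-running the teardown stays clean.
- name: Remove k3s binary only if it exists
  ansible.builtin.command:
    cmd: rm -f /usr/local/bin/k3s
    removes: /usr/local/bin/k3s
```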
I attempted to install on arm64 and armhf. Both fail because the
[checksum filter](e07903a5cf/tasks/build/download-k3s.yml (L21))
finds the first line with "k3s". On the arm checksum files,
the first lines are for "k3s-airgap-images-arm64.tar" and "k3s-airgap-images-arm.tar"
so the wrong checksum is grabbed.
I attempted to fix this with a more specific filter:
`select('search', 'k3s'+k3s_arch_suffix)`.
This works for both arm architectures,
but fails for amd64 because the key is simply "k3s" and not "k3s-amd64".
The solution I settled on is not ideal for future proofing,
but works for now at least.
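One way to make the filter more future-proof (a sketch, not the role's actual implementation) is to anchor the search to the end of the line, assuming an empty architecture suffix on amd64; then `k3s` matches only the bare amd64 entry while `k3s-arm64` / `k3s-armhf` match their own lines, skipping the airgap image tarball entries:
```
# Sketch (assumed variable names): end-anchor the regex so only the exact
# binary name matches, then take the checksum column of the matching line.
- name: Determine expected k3s checksum
  ansible.builtin.set_fact:
    k3s_checksum: >-
      {{ (checksum_lines
          | select('search', 'k3s' ~ (k3s_arch_suffix | default('')) ~ '$')
          | first).split() | first }}
```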