ansible-role-k3s/tasks/validate/state/nodes.yml
Xan Manning e7c714424c
Tidy up and refactoring of tasks (#80)
* Tidy up and refactoring of tasks

  - `k3s_config_dir` derived from `k3s_config_file`, reused throughout the role
    to allow for easy removal of "Rancher" references #73.
  - `k3s_token_location` has moved to be in `k3s_config_dir`.
  - Tasks for creating directories now looped to capture configuration from
    `k3s_server` and `k3s_agent` and ensure directories exist before k3s
    starts, see #75 (a sketch of the pattern follows this list).
  - Server token collected directly from token file, not symlinked file
    (node-token).
  - `k3s_runtime_config` defined in `vars/` for validation and overwritten in
    tasks for control plane and workers.
  - Removed unused references to GitHub API.
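
  The derived paths and the directory loop are not part of the file shown below;
  purely as a sketch of the pattern, they might look like the following. The token
  file name, the data-dir fallback and the task wording are assumptions, while
  `k3s_config_file`, `k3s_config_dir`, `k3s_server`, `k3s_agent` and
  `k3s_runtime_config` come from the notes above.

  ```yaml
  # Sketch only (vars): derive paths once and merge server/agent options.
  k3s_config_dir: "{{ k3s_config_file | dirname }}"
  k3s_token_location: "{{ k3s_config_dir }}/cluster-token"  # file name is illustrative
  k3s_runtime_config: "{{ (k3s_server | default({})) | combine(k3s_agent | default({})) }}"
  ```

  ```yaml
  # Sketch only (tasks): ensure configured directories exist before k3s starts.
  - name: Ensure k3s directories exist
    ansible.builtin.file:
      path: "{{ item }}"
      state: directory
      mode: "0755"
    loop:
      - "{{ k3s_config_dir }}"
      - "{{ k3s_runtime_config['data-dir'] | default('/var/lib/rancher/k3s') }}"  # k3s default data dir
    become: true
  ```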

* set_fact now uses FQCN

* re-pin molecule<3.2

* Command module now uses FQCN

* Added package checks for #72
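
  The package checks themselves are not in the file shown below; as a rough
  illustration only, a pre-flight check along these lines could be used. The
  package name and messages are placeholders, not the role's actual logic.

  ```yaml
  # Sketch only: fail early when a package k3s depends on is missing.
  - name: Gather installed package facts
    ansible.builtin.package_facts:
      manager: auto

  - name: Check that required packages are installed
    ansible.builtin.assert:
      that:
        - "'iptables' in ansible_facts.packages"  # placeholder package name
      fail_msg: A package required by k3s is missing from this host.
      success_msg: Required packages are present.
  ```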

* Reorder task files

  - Docker tasks moved into a separate directory for ease of removal #67
  - Bugfix: Control plane on alternate port didn't work.
  - Validation tasks grouped

* Fix Fedora tests

* Add optional documentation links to validation steps #76
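
  A hedged sketch of the idea only; the variable name `documentation_link` and
  the asserted condition are illustrative, not the role's actual code.

  ```yaml
  # Sketch only: append an optional documentation link to a validation failure.
  - name: Check that k3s_release_version is defined
    ansible.builtin.assert:
      that:
        - k3s_release_version is defined  # illustrative condition
      fail_msg: >-
        k3s_release_version is not defined.
        {{ ('See ' ~ documentation_link ~ ' for more information.') if documentation_link is defined else '' }}
  ```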

* Removed jmespath requirement

* Fix issue with data collection

* Release candidate
2020-12-21 19:14:52 +00:00

---
# Poll the API server from a control node until every node reports Ready.
- name: Check that all nodes are ready
  ansible.builtin.command: "{{ k3s_install_dir }}/kubectl get nodes"
  changed_when: false
  # Fail immediately if the API server refuses connections or is unavailable.
  failed_when: kubectl_get_nodes_result.stdout.find("was refused") != -1 or
    kubectl_get_nodes_result.stdout.find("ServiceUnavailable") != -1
  register: kubectl_get_nodes_result
  # Retry until kubectl succeeds and no node is listed as NotReady.
  until: kubectl_get_nodes_result.rc == 0
    and kubectl_get_nodes_result.stdout.find("NotReady") == -1
  retries: 30
  delay: 20
  # Only run on control nodes, and skip when flannel is disabled in the runtime
  # config (without a CNI, nodes may never report Ready) or in check mode.
  when: k3s_control_node
    and (("disable" not in k3s_runtime_config)
      or ("disable" in k3s_runtime_config and "flannel" not in k3s_runtime_config.disable))
    and not ansible_check_mode
  become: "{{ k3s_become_for_kubectl | ternary(true, false, k3s_become_for_all) }}"