[DEPRECATION WARNING]: ANSIBLE_COLLECTIONS_PATHS option, does not fit var naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead. This feature will be removed from ansible-core in version 2.19. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles

PLAY [Test qdevice - minimal configuration] ************************************

TASK [Gathering Facts] *********************************************************
Thursday 25 July 2024  08:24:06 -0400 (0:00:00.009)       0:00:00.009 *********
[WARNING]: Platform linux on host managed_node1 is using the discovered Python interpreter at /usr/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html for more information.
ok: [managed_node1]

TASK [Set qnetd address] *******************************************************
Thursday 25 July 2024  08:24:08 -0400 (0:00:01.228)       0:00:01.237 *********
ok: [managed_node1] => {"ansible_facts": {"__test_qnetd_address": "localhost"}, "changed": false}

TASK [Run test] ****************************************************************
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.024)       0:00:01.261 *********
included: /var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/template_qdevice.yml for managed_node1

TASK [Set up test environment] *************************************************
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.024)       0:00:01.286 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters] ***
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.035)       0:00:01.322 *********
ok: [managed_node1] => {"ansible_facts": {"inventory_hostname": "localhost"}, "changed": false}

TASK [fedora.linux_system_roles.ha_cluster : Ensure facts used by tests] *******
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.024)       0:00:01.346 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "'distribution' not in ansible_facts", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.015)       0:00:01.361 *********
ok: [managed_node1] => {"changed": false, "stat": {"exists": false}}

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.455)       0:00:01.817 *********
ok: [managed_node1] => {"ansible_facts": {"__ha_cluster_is_ostree": false}, "changed": false}

TASK [fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories] ***
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.024)       0:00:01.841 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "ansible_distribution == 'RedHat'", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd] ***
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.015)       0:00:01.857 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "__ha_cluster_is_ostree | d(false)", "skip_reason": "Conditional result was False"}

TASK [Clean up test environment for qnetd] *************************************
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.023)       0:00:01.881 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed] ***
Thursday 25 July 2024  08:24:08 -0400 (0:00:00.036)       0:00:01.917 *********
ok: [managed_node1] => {"changed": false, "rc": 0, "results": []}
MSG: Nothing to do

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present] ***
Thursday 25 July 2024  08:24:09 -0400 (0:00:01.058)       0:00:02.975 *********
ok: [managed_node1] => {"changed": false, "path": "/etc/corosync/qnetd", "state": "absent"}

TASK [Set up test environment for qnetd] ***************************************
Thursday 25 July 2024  08:24:10 -0400 (0:00:00.467)       0:00:03.443 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Install qnetd packages] ***********
Thursday 25 July 2024  08:24:10 -0400 (0:00:00.039)       0:00:03.482 *********
changed: [managed_node1] => {"changed": true, "rc": 0, "results": ["Installed: corosync-qnetd-3.0.2-2.el9.x86_64"]}
lsrpackages: corosync-qnetd pcs

TASK [fedora.linux_system_roles.ha_cluster : Set up qnetd] *********************
Thursday 25 July 2024  08:24:12 -0400 (0:00:02.201)       0:00:05.684 *********
changed: [managed_node1] => {"changed": true, "cmd": ["pcs", "--start", "--", "qdevice", "setup", "model", "net"], "delta": "0:00:01.160196", "end": "2024-07-25 08:24:14.219018", "failed_when_result": false, "rc": 0, "start": "2024-07-25 08:24:13.058822"}
STDERR:
Quorum device 'net' initialized
Starting quorum device...
quorum device started

TASK [Back up qnetd] ***********************************************************
Thursday 25 July 2024  08:24:14 -0400 (0:00:01.606)       0:00:07.290 *********
included: /var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tasks/qnetd_backup_restore.yml for managed_node1

TASK [Create /etc/corosync/qnetd_backup directory] *****************************
Thursday 25 July 2024  08:24:14 -0400 (0:00:00.031)       0:00:07.322 *********
ok: [managed_node1] => {"changed": false, "gid": 0, "group": "root", "mode": "0700", "owner": "root", "path": "/etc/corosync/qnetd_backup", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 19, "state": "directory", "uid": 0}

TASK [Back up qnetd settings] **************************************************
Thursday 25 July 2024  08:24:14 -0400 (0:00:00.363)       0:00:07.685 *********
changed: [managed_node1] => {"changed": true, "cmd": ["cp", "--preserve=all", "--recursive", "/etc/corosync/qnetd", "/etc/corosync/qnetd_backup"], "delta": "0:00:00.010421", "end": "2024-07-25 08:24:14.989597", "rc": 0, "start": "2024-07-25 08:24:14.979176"}

TASK [Restore qnetd settings] **************************************************
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.365)       0:00:08.051 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "operation == \"restore\"", "skip_reason": "Conditional result was False"}

TASK [Start qnetd] *************************************************************
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.014)       0:00:08.066 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "operation == \"restore\"", "skip_reason": "Conditional result was False"}

TASK [Run HA Cluster role] *****************************************************
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.014)       0:00:08.080 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.068)       0:00:08.149 *********
included: /var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Ensure ansible_facts used by role] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.025)       0:00:08.175 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "__ha_cluster_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.024)       0:00:08.199 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "not __ha_cluster_is_ostree is defined", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.021)       0:00:08.220 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "not __ha_cluster_is_ostree is defined", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.020)       0:00:08.240 *********
ok: [managed_node1] => (item=RedHat.yml) => {"ansible_facts": {"__ha_cluster_cloud_agents_packages": [], "__ha_cluster_fence_agent_packages_default": "{{ ['fence-agents-all'] + (['fence-virt'] if ansible_architecture == 'x86_64' else []) }}", "__ha_cluster_fullstack_node_packages": ["corosync", "libknet1-plugins-all", "resource-agents", "pacemaker", "openssl"], "__ha_cluster_pcs_provider": "pcs-0.10", "__ha_cluster_qdevice_node_packages": ["corosync-qdevice", "bash", "coreutils", "curl", "grep", "nss-tools", "openssl", "sed"], "__ha_cluster_repos": [], "__ha_cluster_role_essential_packages": ["pcs", "corosync-qnetd"], "__ha_cluster_sbd_packages": ["sbd"], "__ha_cluster_services": ["corosync", "corosync-qdevice", "pacemaker"]}, "ansible_included_var_files": ["/var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/RedHat.yml"], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml"}
skipping: [managed_node1] => (item=CentOS.yml) => {"ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False"}
ok: [managed_node1] => (item=CentOS_9.yml) => {"ansible_facts": {"__ha_cluster_cloud_agents_packages": ["resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack"], "__ha_cluster_repos": [{"id": "highavailability", "name": "HighAvailability"}, {"id": "resilientstorage", "name": "ResilientStorage"}]}, "ansible_included_var_files": ["/var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_9.yml"], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml"}
ok: [managed_node1] => (item=CentOS_9.yml) => {"ansible_facts": {"__ha_cluster_cloud_agents_packages": ["resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack"], "__ha_cluster_repos": [{"id": "highavailability", "name": "HighAvailability"}, {"id": "resilientstorage", "name": "ResilientStorage"}]}, "ansible_included_var_files": ["/var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_9.yml"], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml"}

TASK [fedora.linux_system_roles.ha_cluster : Set Linux Pacemaker shell specific variables] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.050)       0:00:08.291 *********
ok: [managed_node1] => {"ansible_facts": {}, "ansible_included_var_files": ["/var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/shell_pcs.yml"], "changed": false}

TASK [fedora.linux_system_roles.ha_cluster : Enable package repositories] ******
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.019)       0:00:08.310 *********
included: /var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.027)       0:00:08.338 *********
ok: [managed_node1] => (item=RedHat.yml) => {"ansible_facts": {"__ha_cluster_enable_repo_tasks_file": "/var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/RedHat.yml"}, "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml"}
ok: [managed_node1] => (item=CentOS.yml) => {"ansible_facts": {"__ha_cluster_enable_repo_tasks_file": "/var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml"}, "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml"}
skipping: [managed_node1] => (item=CentOS_9.yml) => {"ansible_loop_var": "item", "changed": false, "false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file", "item": "CentOS_9.yml", "skip_reason": "Conditional result was False"}
skipping: [managed_node1] => (item=CentOS_9.yml) => {"ansible_loop_var": "item", "changed": false, "false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file", "item": "CentOS_9.yml", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.046)       0:00:08.385 *********
included: /var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : List active CentOS repositories] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.034)       0:00:08.419 *********
ok: [managed_node1] => {"changed": false, "cmd": ["dnf", "repolist"], "delta": "0:00:00.215836", "end": "2024-07-25 08:24:15.919369", "rc": 0, "start": "2024-07-25 08:24:15.703533"}
STDOUT:
repo id                                   repo name
appstream                                 CentOS Stream 9 - AppStream
baseos                                    CentOS Stream 9 - BaseOS
beaker-client                             Beaker Client - RedHatEnterpriseLinux9
beaker-harness                            Beaker harness
beakerlib-libraries                       Copr repo for beakerlib-libraries owned by bgoncalv
copr:copr.devel.redhat.com:lpol:qa-tools  Copr repo for qa-tools owned by lpol
epel-cisco-openh264                       Extra Packages for Enterprise Linux 9 openh264 (From Cisco) - x86_64
epel-next                                 Extra Packages for Enterprise Linux 9 - Next - x86_64
extras-common                             CentOS Stream 9 - Extras packages
highavailability                          CentOS Stream 9 - HighAvailability

TASK [fedora.linux_system_roles.ha_cluster : Enable CentOS repositories] *******
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.562)       0:00:08.982 *********
skipping: [managed_node1] => (item={'id': 'highavailability', 'name': 'HighAvailability'}) => {"ansible_loop_var": "item", "changed": false, "false_condition": "item.id not in __ha_cluster_repolist.stdout", "item": {"id": "highavailability", "name": "HighAvailability"}, "skip_reason": "Conditional result was False"}
skipping: [managed_node1] => (item={'id': 'resilientstorage', 'name': 'ResilientStorage'}) => {"ansible_loop_var": "item", "changed": false, "false_condition": "item.name != \"ResilientStorage\" or ha_cluster_enable_repos_resilient_storage", "item": {"id": "resilientstorage", "name": "ResilientStorage"}, "skip_reason": "Conditional result was False"}
skipping: [managed_node1] => {"changed": false}
MSG: All items skipped

TASK [fedora.linux_system_roles.ha_cluster : Install role essential packages] ***
Thursday 25 July 2024  08:24:15 -0400 (0:00:00.022)       0:00:09.004 *********
ok: [managed_node1] => {"changed": false, "rc": 0, "results": []}
MSG: Nothing to do
lsrpackages: corosync-qnetd pcs

TASK [fedora.linux_system_roles.ha_cluster : Check and prepare role variables] ***
Thursday 25 July 2024  08:24:16 -0400 (0:00:00.937)       0:00:09.942 *********
included: /var/ARTIFACTS/work-generaltrikmi59/plans/general/tree/tmp.iSTcmu54KQ/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Discover cluster node names] ******
Thursday 25 July 2024  08:24:16 -0400 (0:00:00.045)       0:00:09.988 *********
ok: [managed_node1] => {"ansible_facts": {"__ha_cluster_node_name": "localhost"}, "changed": false}

TASK [fedora.linux_system_roles.ha_cluster : Collect cluster node names] *******
Thursday 25 July 2024  08:24:17 -0400 (0:00:00.025)       0:00:10.013 *********
ok: [managed_node1] => {"ansible_facts": {"__ha_cluster_all_node_names": ["localhost"]}, "changed": false}

TASK [fedora.linux_system_roles.ha_cluster : Fail if ha_cluster_node_options contains unknown or duplicate nodes] ***
Thursday 25 July 2024  08:24:17 -0400 (0:00:00.029)       0:00:10.043 *********
skipping: [managed_node1] => {"changed": false, "false_condition": "(\n    __nodes_from_options != (__nodes_from_options | unique)\n) or (\n    __nodes_from_options | difference(__ha_cluster_all_node_names)\n)\n", "skip_reason": "Conditional result was False"}

TASK [fedora.linux_system_roles.ha_cluster : Extract node options] *************
Thursday 25 July 2024  08:24:17 -0400 (0:00:00.025)       0:00:10.069 *********
ok: [managed_node1] => {"ansible_facts": {"__ha_cluster_local_node": {}}, "changed": false}

TASK [fedora.linux_system_roles.ha_cluster : Fail if passwords are not specified] ***
Thursday 25 July 2024  08:24:17 -0400 (0:00:00.030)       0:00:10.099 *********
failed: [managed_node1] (item=ha_cluster_hacluster_password) => {"ansible_loop_var": "item", "changed": false, "item": "ha_cluster_hacluster_password"}
MSG: ha_cluster_hacluster_password must be specified

TASK [Clean up test environment for qnetd] *************************************
Thursday 25 July 2024  08:24:17 -0400 (0:00:00.032)       0:00:10.131 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed] ***
Thursday 25 July 2024  08:24:17 -0400 (0:00:00.065)       0:00:10.197 *********
changed: [managed_node1] => {"changed": true, "rc": 0, "results": ["Removed: corosync-qnetd-3.0.2-2.el9.x86_64"]}

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present] ***
Thursday 25 July 2024  08:24:18 -0400 (0:00:01.480)       0:00:11.677 *********
changed: [managed_node1] => {"changed": true, "path": "/etc/corosync/qnetd", "state": "absent"}

PLAY RECAP *********************************************************************
managed_node1              : ok=32   changed=5    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0

Thursday 25 July 2024  08:24:19 -0400 (0:00:00.377)       0:00:12.054 *********
===============================================================================
fedora.linux_system_roles.ha_cluster : Install qnetd packages ----------- 2.20s
fedora.linux_system_roles.ha_cluster : Set up qnetd --------------------- 1.61s
fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed --- 1.48s
Gathering Facts --------------------------------------------------------- 1.23s
fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed --- 1.06s
fedora.linux_system_roles.ha_cluster : Install role essential packages --- 0.94s
fedora.linux_system_roles.ha_cluster : List active CentOS repositories --- 0.56s
fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present --- 0.47s
fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 0.46s
fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present --- 0.38s
Back up qnetd settings -------------------------------------------------- 0.37s
Create /etc/corosync/qnetd_backup directory ----------------------------- 0.36s
Run HA Cluster role ----------------------------------------------------- 0.07s
Clean up test environment for qnetd ------------------------------------- 0.07s
fedora.linux_system_roles.ha_cluster : Set platform/version specific variables --- 0.05s
fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories --- 0.05s
fedora.linux_system_roles.ha_cluster : Check and prepare role variables --- 0.05s
Set up test environment for qnetd --------------------------------------- 0.04s
Clean up test environment for qnetd ------------------------------------- 0.04s
Set up test environment ------------------------------------------------- 0.04s
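The failure in this run comes from the task "Fail if passwords are not specified": the role refuses to configure anything until ha_cluster_hacluster_password is set. A minimal sketch of the variables the play would need to supply is below; the vault variable name `vault_ha_cluster_password` is a hypothetical placeholder, not something from this log, and in practice the value should live in Ansible Vault rather than in plain text.

```yaml
# Hypothetical vars for the failing play; the role validated exactly this
# variable before aborting.
ha_cluster_hacluster_password: "{{ vault_ha_cluster_password }}"  # store the real value in Ansible Vault
```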
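On the qnetd side, the test environment ran `pcs --start -- qdevice setup model net`, as shown in the "Set up qnetd" task. On the cluster side, the role expresses the corresponding qdevice client through its `ha_cluster_quorum` variable. A minimal sketch, assuming the qnetd host is reachable as `localhost` (matching `__test_qnetd_address` in this run) and following the schema documented in the role's README, which is the authoritative reference:

```yaml
# Sketch of a minimal quorum device client configuration for the ha_cluster
# role; host value mirrors the test's __test_qnetd_address fact.
ha_cluster_quorum:
  device:
    model: net
    model_options:
      - name: host
        value: localhost
```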
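The deprecation warning at the top of the log is resolved by switching to the singular variable name it suggests. A minimal sketch for a shell environment; the collections path shown is an assumption for illustration, not taken from this run:

```shell
# Drop the deprecated plural form (removed in ansible-core 2.19) and set the
# singular form the warning asks for; adjust the path to your collections dir.
unset ANSIBLE_COLLECTIONS_PATHS
export ANSIBLE_COLLECTIONS_PATH="$HOME/.ansible/collections"
echo "$ANSIBLE_COLLECTIONS_PATH"
```

The same rename applies to the `collections_path` setting in ansible.cfg.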