ansible-playbook [core 2.17.13]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-sHE
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Aug 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_skip_toolkit.yml ***********************************************
1 plays in /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml

PLAY [Verify if role configures a custom storage properly] *********************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:3
Monday 01 September 2025  10:52:01 -0400 (0:00:00.018)       0:00:00.018 ******
[WARNING]: Platform linux on host managed-node1 is using the discovered Python
interpreter at /usr/bin/python3.9, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed-node1]

TASK [Ensure test packages] ****************************************************
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:22
Monday 01 September 2025  10:52:03 -0400 (0:00:01.278)       0:00:01.297 ******
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}

MSG:

Nothing to do

TASK [Find unused disks in the system] *****************************************
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:31
Monday 01 September 2025  10:52:05 -0400 (0:00:02.313)       0:00:03.611 ******
ok: [managed-node1] => {
    "changed": false,
    "disks": [
        "sda",
        "sdb"
    ],
    "info": [
        "Line: NAME=\"/dev/sda\" TYPE=\"disk\" SIZE=\"10737418240\" FSTYPE=\"\" LOG_SEC=\"512\"",
        "Line: NAME=\"/dev/sdb\" TYPE=\"disk\" SIZE=\"10737418240\" FSTYPE=\"\" LOG_SEC=\"512\"",
        "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG_SEC=\"512\"",
        "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"xfs\" LOG_SEC=\"512\"",
        "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"xfs\" LOG_SEC=\"512\"",
        "filename [xvda1] is a partition",
        "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions"
    ]
}

TASK [Debug why there are no unused disks] *************************************
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:40
Monday 01 September 2025  10:52:06 -0400 (0:00:00.539)       0:00:04.150 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "'Unable to find unused disk' in unused_disks_return.disks",
    "skip_reason": "Conditional result was False"
}

TASK [Set unused_disks if necessary] *******************************************
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:49
Monday 01 September 2025  10:52:06 -0400 (0:00:00.014)       0:00:04.165 ******
ok: [managed-node1] => {
    "ansible_facts": {
        "unused_disks": [
            "sda",
            "sdb"
        ]
    },
    "changed": false
}

TASK [Exit playbook when there's not enough unused disks in the system] ********
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:54
Monday 01 September 2025  10:52:06 -0400 (0:00:00.016)       0:00:04.181 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "unused_disks | d([]) | length < disks_needed | d(1)",
    "skip_reason": "Conditional result was False"
}

TASK [Prepare storage] *********************************************************
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:59
Monday 01 September 2025  10:52:06 -0400 (0:00:00.023)       0:00:04.213 ******
included: fedora.linux_system_roles.storage for managed-node1

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Monday 01 September 2025  10:52:06 -0400 (0:00:00.023)       0:00:04.237 ******
included: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Monday 01 September 2025  10:52:06 -0400 (0:00:00.017)       0:00:04.255 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Monday 01 September 2025  10:52:06 -0400 (0:00:00.033)       0:00:04.289 ******
skipping: [managed-node1] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node1] => (item=CentOS_9.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "vdo",
            "kmod-kvdo",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_9.yml"
}
ok: [managed-node1] => (item=CentOS_9.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "vdo",
            "kmod-kvdo",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_9.yml"
}

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Monday 01 September 2025  10:52:06 -0400 (0:00:00.039)       0:00:04.328 ******
ok: [managed-node1] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Monday 01 September 2025  10:52:06 -0400 (0:00:00.513)       0:00:04.842 ******
ok: [managed-node1] => {
    "ansible_facts": {
        "__storage_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Monday 01 September 2025  10:52:06 -0400 (0:00:00.021)       0:00:04.863 ******
ok: [managed-node1] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Monday 01 September 2025  10:52:06 -0400 (0:00:00.014)       0:00:04.878 ******
ok: [managed-node1] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Monday 01 September 2025  10:52:06 -0400 (0:00:00.012)       0:00:04.891 ******
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
included: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Monday 01 September 2025  10:52:06 -0400 (0:00:00.040)       0:00:04.931 ******
fatal: [managed-node1]: FAILED! => {
    "changed": false,
    "failures": [],
    "rc": 1,
    "results": []
}

MSG:

Depsolve Error occurred:
 Problem: package kmod-kvdo-8.2.6.3-173.el9.x86_64 from baseos requires kernel-modules-uname-r >= 5.14.0-605.el9, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package kernel-modules-5.14.0-605.el9.x86_64 from baseos is filtered out by exclude filtering
  - package kernel-modules-5.14.0-611.el9.x86_64 from baseos is filtered out by exclude filtering

TASK [Remove both of the LVM logical volumes in 'foo' created above] ***********
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:93
Monday 01 September 2025  10:52:08 -0400 (0:00:01.570)       0:00:06.502 ******
included: fedora.linux_system_roles.storage for managed-node1

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Monday 01 September 2025  10:52:08 -0400 (0:00:00.024)       0:00:06.526 ******
included: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Monday 01 September 2025  10:52:08 -0400 (0:00:00.018)       0:00:06.544 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Monday 01 September 2025  10:52:08 -0400 (0:00:00.035)       0:00:06.580 ******
skipping: [managed-node1] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node1] => (item=CentOS_9.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "vdo",
            "kmod-kvdo",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_9.yml"
}
ok: [managed-node1] => (item=CentOS_9.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "vdo",
            "kmod-kvdo",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_9.yml"
}

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Monday 01 September 2025  10:52:08 -0400 (0:00:00.043)       0:00:06.624 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __storage_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Monday 01 September 2025  10:52:08 -0400 (0:00:00.019)       0:00:06.643 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __storage_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Monday 01 September 2025  10:52:08 -0400 (0:00:00.018)       0:00:06.661 ******
ok: [managed-node1] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Monday 01 September 2025  10:52:08 -0400 (0:00:00.016)       0:00:06.678 ******
ok: [managed-node1] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Monday 01 September 2025  10:52:08 -0400 (0:00:00.014)       0:00:06.692 ******
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
included: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Monday 01 September 2025  10:52:08 -0400 (0:00:00.036)       0:00:06.729 ******
fatal: [managed-node1]: FAILED! => {
    "changed": false,
    "failures": [],
    "rc": 1,
    "results": []
}

MSG:

Depsolve Error occurred:
 Problem: package kmod-kvdo-8.2.6.3-173.el9.x86_64 from baseos requires kernel-modules-uname-r >= 5.14.0-605.el9, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package kernel-modules-5.14.0-605.el9.x86_64 from baseos is filtered out by exclude filtering
  - package kernel-modules-5.14.0-611.el9.x86_64 from baseos is filtered out by exclude filtering

PLAY RECAP *********************************************************************
managed-node1              : ok=18   changed=0    unreachable=0    failed=2    skipped=6    rescued=0    ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
  {
    "ansible_version": "2.17.13",
    "end_time": "2025-09-01T14:52:08.425384+00:00Z",
    "host": "managed-node1",
    "message": "Depsolve Error occurred: \n Problem: package kmod-kvdo-8.2.6.3-173.el9.x86_64 from baseos requires kernel-modules-uname-r >= 5.14.0-605.el9, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kernel-modules-5.14.0-605.el9.x86_64 from baseos is filtered out by exclude filtering\n - package kernel-modules-5.14.0-611.el9.x86_64 from baseos is filtered out by exclude filtering",
    "rc": 1,
    "start_time": "2025-09-01T14:52:06.858876+00:00Z",
    "task_name": "Make sure blivet is available",
    "task_path": "/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2"
  },
  {
    "ansible_version": "2.17.13",
    "end_time": "2025-09-01T14:52:10.201693+00:00Z",
    "host": "managed-node1",
    "message": "Depsolve Error occurred: \n Problem: package kmod-kvdo-8.2.6.3-173.el9.x86_64 from baseos requires kernel-modules-uname-r >= 5.14.0-605.el9, but none of the providers can be installed\n - cannot install the best candidate for the job\n - package kernel-modules-5.14.0-605.el9.x86_64 from baseos is filtered out by exclude filtering\n - package kernel-modules-5.14.0-611.el9.x86_64 from baseos is filtered out by exclude filtering",
    "rc": 1,
    "start_time": "2025-09-01T14:52:08.656350+00:00Z",
    "task_name": "Make sure blivet is available",
    "task_path": "/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2"
  }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Monday 01 September 2025  10:52:10 -0400 (0:00:01.547)       0:00:08.276 ******
===============================================================================
Ensure test packages ---------------------------------------------------- 2.31s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:22
fedora.linux_system_roles.storage : Make sure blivet is available ------- 1.57s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
fedora.linux_system_roles.storage : Make sure blivet is available ------- 1.55s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Gathering Facts --------------------------------------------------------- 1.28s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:3
Find unused disks in the system ----------------------------------------- 0.54s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:31
fedora.linux_system_roles.storage : Check if system is ostree ----------- 0.51s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.04s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.04s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.04s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.04s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
fedora.linux_system_roles.storage : Ensure ansible_facts used by role --- 0.04s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
fedora.linux_system_roles.storage : Ensure ansible_facts used by role --- 0.03s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Exit playbook when there's not enough unused disks in the system -------- 0.03s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:54
Remove both of the LVM logical volumes in 'foo' created above ----------- 0.02s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:93
Prepare storage --------------------------------------------------------- 0.02s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:59
fedora.linux_system_roles.storage : Set flag to indicate system is ostree --- 0.02s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
fedora.linux_system_roles.storage : Check if system is ostree ----------- 0.02s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
fedora.linux_system_roles.storage : Set flag to indicate system is ostree --- 0.02s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.02s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.02s
/tmp/collections-sHE/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2

Sep 01 10:52:01 managed-node1 python3.9[52833]: ansible-ansible.legacy.systemd Invoked with name=waagent state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 01 10:52:01 managed-node1 sshd[53821]: Accepted publickey for root from 10.31.45.122 port 36266 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Sep 01 10:52:01 managed-node1 systemd-logind[607]: New session 16 of user root.
░░ Subject: A new session 16 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 16 has been created for the user root.
░░
░░ The leading process of the session is 53821.
Sep 01 10:52:01 managed-node1 systemd[1]: Started Session 16 of User root.
░░ Subject: A start job for unit session-16.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-16.scope has finished successfully.
░░
░░ The job identifier is 2346.
Sep 01 10:52:01 managed-node1 sshd[53821]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Sep 01 10:52:01 managed-node1 sshd[54081]: Received disconnect from 10.31.45.122 port 36266:11: disconnected by user
Sep 01 10:52:01 managed-node1 sshd[54081]: Disconnected from user root 10.31.45.122 port 36266
Sep 01 10:52:01 managed-node1 sshd[53821]: pam_unix(sshd:session): session closed for user root
Sep 01 10:52:01 managed-node1 systemd[1]: session-16.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-16.scope has successfully entered the 'dead' state.
Sep 01 10:52:01 managed-node1 systemd-logind[607]: Session 16 logged out. Waiting for processes to exit.
Sep 01 10:52:01 managed-node1 systemd-logind[607]: Removed session 16.
░░ Subject: Session 16 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 16 has been terminated.
Sep 01 10:52:01 managed-node1 sshd[54302]: Accepted publickey for root from 10.31.45.122 port 35784 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Sep 01 10:52:01 managed-node1 systemd-logind[607]: New session 17 of user root.
░░ Subject: A new session 17 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 17 has been created for the user root.
░░
░░ The leading process of the session is 54302.
Sep 01 10:52:01 managed-node1 systemd[1]: Started Session 17 of User root.
░░ Subject: A start job for unit session-17.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-17.scope has finished successfully.
░░
░░ The job identifier is 2415.
Sep 01 10:52:01 managed-node1 sshd[54302]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Sep 01 10:52:01 managed-node1 sshd[54502]: Received disconnect from 10.31.45.122 port 35784:11: disconnected by user
Sep 01 10:52:01 managed-node1 sshd[54502]: Disconnected from user root 10.31.45.122 port 35784
Sep 01 10:52:01 managed-node1 sshd[54302]: pam_unix(sshd:session): session closed for user root
Sep 01 10:52:01 managed-node1 systemd-logind[607]: Session 17 logged out. Waiting for processes to exit.
Sep 01 10:52:01 managed-node1 systemd[1]: session-17.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-17.scope has successfully entered the 'dead' state.
Sep 01 10:52:01 managed-node1 systemd-logind[607]: Removed session 17.
░░ Subject: Session 17 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 17 has been terminated.
Sep 01 10:52:02 managed-node1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit man-db-cache-update.service has successfully entered the 'dead' state.
Sep 01 10:52:02 managed-node1 systemd[1]: Finished man-db-cache-update.service.
░░ Subject: A start job for unit man-db-cache-update.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit man-db-cache-update.service has finished successfully.
░░
░░ The job identifier is 2281.
Sep 01 10:52:02 managed-node1 systemd[1]: man-db-cache-update.service: Consumed 10.631s CPU time.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit man-db-cache-update.service completed and consumed the indicated resources.
Sep 01 10:52:02 managed-node1 systemd[1]: run-re923fdfc2a984535a827aae59b2d49c2.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit run-re923fdfc2a984535a827aae59b2d49c2.service has successfully entered the 'dead' state.
Sep 01 10:52:02 managed-node1 python3.9[55871]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 01 10:52:03 managed-node1 python3.9[56072]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 01 10:52:04 managed-node1 chronyd[614]: Selected source 10.2.32.38
Sep 01 10:52:05 managed-node1 python3.9[56246]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=10 min_size=0 max_size=0 match_sector_size=False with_interface=None
Sep 01 10:52:06 managed-node1 python3.9[56423]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 01 10:52:07 managed-node1 python3.9[56596]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 01 10:52:09 managed-node1 python3.9[56770]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 01 10:52:10 managed-node1 sshd[56800]: Accepted publickey for root from 10.31.45.122 port 35796 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Sep 01 10:52:10 managed-node1 systemd-logind[607]: New session 18 of user root.
░░ Subject: A new session 18 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 18 has been created for the user root.
░░
░░ The leading process of the session is 56800.
Sep 01 10:52:10 managed-node1 systemd[1]: Started Session 18 of User root.
░░ Subject: A start job for unit session-18.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-18.scope has finished successfully.
░░
░░ The job identifier is 2484.
Sep 01 10:52:10 managed-node1 sshd[56800]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Sep 01 10:52:10 managed-node1 sshd[56803]: Received disconnect from 10.31.45.122 port 35796:11: disconnected by user
Sep 01 10:52:10 managed-node1 sshd[56803]: Disconnected from user root 10.31.45.122 port 35796
Sep 01 10:52:10 managed-node1 sshd[56800]: pam_unix(sshd:session): session closed for user root
Sep 01 10:52:10 managed-node1 systemd[1]: session-18.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-18.scope has successfully entered the 'dead' state.
Sep 01 10:52:10 managed-node1 systemd-logind[607]: Session 18 logged out. Waiting for processes to exit.
Sep 01 10:52:10 managed-node1 systemd-logind[607]: Removed session 18.
░░ Subject: Session 18 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 18 has been terminated.
Sep 01 10:52:10 managed-node1 sshd[56832]: Accepted publickey for root from 10.31.45.122 port 35810 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Sep 01 10:52:10 managed-node1 systemd-logind[607]: New session 19 of user root.
░░ Subject: A new session 19 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 19 has been created for the user root.
░░
░░ The leading process of the session is 56832.
Sep 01 10:52:10 managed-node1 systemd[1]: Started Session 19 of User root.
░░ Subject: A start job for unit session-19.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-19.scope has finished successfully.
░░
░░ The job identifier is 2553.
Sep 01 10:52:10 managed-node1 sshd[56832]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)