ansible-playbook 2.9.27
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.19 (main, May 16 2024, 11:40:09) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)]
No config file found; using defaults
[WARNING]: running playbook inside collection fedora.linux_system_roles
statically imported: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml
statically imported: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_test_data.yml
statically imported: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/check_candlepin.yml
statically imported: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml
statically imported: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.
PLAYBOOK: tests_proxy.yml ****************************************************** 1 plays in /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tests_proxy.yml PLAY [Basic proxy test] ******************************************************** META: ran handlers TASK [Get LSR_RHC_TEST_DATA environment variable] ****************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_test_data.yml:3 Saturday 14 February 2026 14:32:22 -0500 (0:00:00.022) 0:00:00.022 ***** ok: [managed-node1] => { "ansible_facts": { "lsr_rhc_test_data_file": "" }, "changed": false } TASK [Import test data] ******************************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_test_data.yml:12 Saturday 14 February 2026 14:32:22 -0500 (0:00:00.035) 0:00:00.058 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Get facts for external test data] **************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_test_data.yml:16 Saturday 14 February 2026 14:32:22 -0500 (0:00:00.031) 0:00:00.089 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Set local lsr_rhc_test_data] ********************************************* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_test_data.yml:24 Saturday 14 February 2026 14:32:22 -0500 (0:00:00.031) 0:00:00.120 ***** ok: [managed-node1] => { "ansible_facts": { "lsr_rhc_test_data": { "baseurl": "http://localhost:8080", "candlepin_host": "candlepin.local", "candlepin_insecure": false, "candlepin_port": 8443, "candlepin_prefix": "/candlepin", "env_nonworking": "Ceci n'est pas une environment", "envs_register": [ "Environment 2" ], "insights": false, "proxy_auth_hostname": "localhost", "proxy_auth_password": "proxypass", "proxy_auth_port": 3130, "proxy_auth_scheme": "https", "proxy_auth_username": "proxyuser", "proxy_noauth_hostname": "localhost", "proxy_noauth_port": 3128, "proxy_noauth_scheme": "https", "proxy_nonworking_hostname": "wrongproxy", "proxy_nonworking_password": "wrong-proxypassword", "proxy_nonworking_port": 4000, "proxy_nonworking_username": "wrong-proxyuser", "reg_activation_keys": [ "default_key" ], "reg_invalid_password": "invalid-password", "reg_invalid_username": "invalid-user", "reg_organization": "donaldduck", "reg_password": "password", "reg_username": "doc", "release": null, "repositories": [ { "name": "donaldy-content-label-7051", "state": "enabled" }, { "name": "content-label-32060", "state": "disabled" } ] } }, "ansible_included_var_files": [ "/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/../files/candlepin_data.yml" ], "changed": false } TASK [Check if system is ostree] *********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_test_data.yml:32 Saturday 14 February 2026 14:32:22 -0500 (0:00:00.034) 0:00:00.155 ***** ok: [managed-node1] => { "ansible_facts": { "discovered_interpreter_python": "/usr/libexec/platform-python" }, "changed": false, "stat": { "exists": false } } TASK [Set flag to indicate system is ostree] *********************************** task path: 
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_test_data.yml:37 Saturday 14 February 2026 14:32:22 -0500 (0:00:00.525) 0:00:00.681 ***** ok: [managed-node1] => { "ansible_facts": { "__rhc_is_ostree": false }, "changed": false } TASK [Get facts for external test data] **************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:9 Saturday 14 February 2026 14:32:22 -0500 (0:00:00.035) 0:00:00.716 ***** ok: [managed-node1] TASK [Set helper fact for Candlepin base URL] ********************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:17 Saturday 14 February 2026 14:32:23 -0500 (0:00:00.651) 0:00:01.367 ***** ok: [managed-node1] => { "ansible_facts": { "_cp_url": "https://candlepin.local:8443/candlepin" }, "changed": false } TASK [Set helper fact for Candlepin owner URL] ********************************* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:21 Saturday 14 February 2026 14:32:23 -0500 (0:00:00.034) 0:00:01.401 ***** ok: [managed-node1] => { "ansible_facts": { "_cp_url_owner": "https://candlepin.local:8443/candlepin/owners/donaldduck" }, "changed": false } TASK [Add candlepin hostname to /etc/hosts] ************************************ task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:25 Saturday 14 February 2026 14:32:23 -0500 (0:00:00.034) 0:00:01.436 ***** changed: [managed-node1] => { "backup": "", "changed": true } MSG: line added TASK [Install needed packages] ************************************************* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:31 Saturday 14 February 2026 14:32:24 -0500 (0:00:00.433) 0:00:01.869 ***** changed: [managed-node1] => { "changed": true, "rc": 0, "results": [ "Installed: podman-gvproxy-3:4.9.4-0.1.module_el8+971+3d3df00d.x86_64", "Installed: podman-3:4.9.4-0.1.module_el8+971+3d3df00d.x86_64", "Installed: libnet-1.1.6-15.el8.x86_64", "Installed: podman-catatonit-3:4.9.4-0.1.module_el8+971+3d3df00d.x86_64", "Installed: podman-plugins-3:4.9.4-0.1.module_el8+971+3d3df00d.x86_64", "Installed: containernetworking-plugins-1:1.4.0-2.module_el8+974+0c52b299.x86_64", "Installed: runc-1:1.1.12-1.module_el8+885+7da147f3.x86_64", "Installed: conmon-3:2.1.10-1.module_el8+804+f131391c.x86_64", "Installed: fuse-common-3.3.0-19.el8.x86_64", "Installed: shadow-utils-subid-2:4.6-22.el8.x86_64", "Installed: criu-3.18-4.module_el8+804+f131391c.x86_64", "Installed: container-selinux-2:2.229.0-2.module_el8+847+7863d4e6.noarch", "Installed: dnsmasq-2.79-33.el8.x86_64", "Installed: libslirp-4.4.0-1.module_el8+804+f131391c.x86_64", "Installed: protobuf-c-1.3.0-8.el8.x86_64", "Installed: slirp4netns-1.2.3-1.module_el8+951+32019cde.x86_64", "Installed: fuse3-libs-3.3.0-19.el8.x86_64", "Installed: fuse3-3.3.0-19.el8.x86_64", "Installed: containers-common-2:1-81.module_el8+968+fbb249c7.x86_64", "Installed: fuse-overlayfs-1.13-1.module_el8+804+f131391c.x86_64" ] } lsrpackages: podman TASK [Clean up Candlepin container] ******************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:38 Saturday 14 February 2026 14:33:14 -0500 (0:00:50.749) 0:00:52.618 ***** included: 
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/teardown_candlepin.yml for managed-node1 TASK [Check if the candlepin container exists] ********************************* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/teardown_candlepin.yml:6 Saturday 14 February 2026 14:33:14 -0500 (0:00:00.044) 0:00:52.662 ***** ok: [managed-node1] => { "changed": false, "cmd": [ "podman", "ps", "-a", "--filter", "name=candlepin" ], "delta": "0:00:01.575163", "end": "2026-02-14 14:33:16.877392", "rc": 0, "start": "2026-02-14 14:33:15.302229" } STDOUT: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES TASK [Ensure that Candlepin container doesn't exist] *************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/teardown_candlepin.yml:17 Saturday 14 February 2026 14:33:16 -0500 (0:00:02.032) 0:00:54.695 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Start Candlepin container] *********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:41 Saturday 14 February 2026 14:33:16 -0500 (0:00:00.034) 0:00:54.730 ***** ok: [managed-node1] => { "changed": false, "cmd": [ "podman", "run", "--rm", "--detach", "--hostname", "candlepin.local", "--name", "candlepin", "--publish", "8443:8443", "--publish", "8080:8080", "ghcr.io/candlepin/candlepin-unofficial" ], "delta": "0:00:15.650134", "end": "2026-02-14 14:33:32.925217", "rc": 0, "start": "2026-02-14 14:33:17.275083" } STDOUT: 948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334 STDERR: Trying to pull ghcr.io/candlepin/candlepin-unofficial:latest... 
Getting image source signatures Copying blob sha256:5baae3f93712d079b6030b8c02b29acecd6a7a6cdce52ab304b31425a048be6b Copying config sha256:6c8d0128d946443dc2cb0b755129351b01ff7b7c65670349e7d53b40a05309c5 Writing manifest to image destination TASK [Ensure directories exist] ************************************************ task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:62 Saturday 14 February 2026 14:33:33 -0500 (0:00:16.055) 0:01:10.785 ***** ok: [managed-node1] => (item=/etc/pki/product) => { "ansible_loop_var": "item", "changed": false, "gid": 0, "group": "root", "item": "/etc/pki/product", "mode": "0755", "owner": "root", "path": "/etc/pki/product", "secontext": "unconfined_u:object_r:cert_t:s0", "size": 6, "state": "directory", "uid": 0 } ok: [managed-node1] => (item=/etc/pki/product-default) => { "ansible_loop_var": "item", "changed": false, "gid": 0, "group": "root", "item": "/etc/pki/product-default", "mode": "0755", "owner": "root", "path": "/etc/pki/product-default", "secontext": "unconfined_u:object_r:cert_t:s0", "size": 6, "state": "directory", "uid": 0 } ok: [managed-node1] => (item=/etc/rhsm/ca) => { "ansible_loop_var": "item", "changed": false, "gid": 0, "group": "root", "item": "/etc/rhsm/ca", "mode": "0755", "owner": "root", "path": "/etc/rhsm/ca", "secontext": "system_u:object_r:rhsmcertd_config_t:s0", "size": 68, "state": "directory", "uid": 0 } TASK [Copy product certificates] *********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:72 Saturday 14 February 2026 14:33:36 -0500 (0:00:03.733) 0:01:14.519 ***** ok: [managed-node1] => (item=7050) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "podman", "cp", "candlepin:/home/candlepin/devel/candlepin/generated_certs/7050.pem", "/etc/pki/product-default/" ], "delta": "0:00:00.746202", "end": "2026-02-14 14:33:39.090927", "item": "7050", "rc": 0, "start": "2026-02-14 14:33:38.344725" } TASK [Copy Candlepin CA certificate for subscription-manager] ****************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:83 Saturday 14 February 2026 14:33:39 -0500 (0:00:02.618) 0:01:17.138 ***** ok: [managed-node1] => { "changed": false, "cmd": [ "podman", "cp", "candlepin:/etc/candlepin/certs/candlepin-ca.crt", "/etc/rhsm/ca/candlepin-ca.pem" ], "delta": "0:00:00.729062", "end": "2026-02-14 14:33:41.723500", "rc": 0, "start": "2026-02-14 14:33:40.994438" } TASK [Copy Candlepin CA certificate for system] ******************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:92 Saturday 14 February 2026 14:33:41 -0500 (0:00:02.502) 0:01:19.640 ***** ok: [managed-node1] => { "changed": false, "cmd": [ "podman", "cp", "candlepin:/etc/candlepin/certs/candlepin-ca.crt", "/etc/pki/ca-trust/source/anchors/candlepin-ca.pem" ], "delta": "0:00:00.655443", "end": "2026-02-14 14:33:43.596275", "rc": 0, "start": "2026-02-14 14:33:42.940832" } TASK [Update system certificates store] **************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:101 Saturday 14 February 2026 14:33:43 -0500 (0:00:01.919) 0:01:21.560 ***** ok: [managed-node1] => { "changed": false, "cmd": [ "update-ca-trust", "extract" ], "delta": "0:00:01.796844", "end": 
"2026-02-14 14:33:46.819314", "rc": 0, "start": "2026-02-14 14:33:45.022470" } TASK [Wait for started Candlepin] ********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:108 Saturday 14 February 2026 14:33:47 -0500 (0:00:03.260) 0:01:24.820 ***** ok: [managed-node1] => { "attempts": 1, "changed": false, "connection": "close", "content_type": "application/json", "cookies": {}, "cookies_string": "", "date": "Sat, 14 Feb 2026 19:33:59 GMT", "elapsed": 11, "redirected": true, "status": 200, "transfer_encoding": "chunked", "url": "https://candlepin.local:8443/candlepin/", "vary": "accept-encoding", "x_candlepin_request_uuid": "55a39787-85a5-42e8-9056-371f99e6a63a", "x_version": "4.7.3-1" } MSG: OK (unknown bytes) TASK [Install GPG key for RPM repositories] ************************************ task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:118 Saturday 14 February 2026 14:34:00 -0500 (0:00:12.970) 0:01:37.790 ***** changed: [managed-node1] => { "changed": true, "checksum_dest": null, "checksum_src": "e535dabdc941afb531fa9bb75b9a98d22bca8b81", "dest": "/etc/pki/rpm-gpg/RPM-GPG-KEY-candlepin", "elapsed": 0, "gid": 0, "group": "root", "md5sum": "eeaf1f5c1d5537f19a46506be9014ae6", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:cert_t:s0", "size": 1660, "src": "/root/.ansible/tmp/ansible-tmp-1771097640.0826979-10403-91040941885603/tmpfbev6qn2", "state": "file", "status_code": 200, "uid": 0, "url": "http://candlepin.local:8080/RPM-GPG-KEY-candlepin" } MSG: OK (1660 bytes) TASK [Add environments] ******************************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:127 Saturday 14 February 2026 14:34:00 -0500 (0:00:00.550) 0:01:38.340 ***** skipping: [managed-node1] => (item={'name': 'Environment 1', 'desc': 'The environment 1', 'id': 'envId1'}) => { "ansible_loop_var": "item", "changed": false, "item": { "desc": "The environment 1", "id": "envId1", "name": "Environment 1" }, "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item={'name': 'Environment 2', 'desc': 'The environment 2', 'id': 'envId2'}) => { "ansible_loop_var": "item", "changed": false, "item": { "desc": "The environment 2", "id": "envId2", "name": "Environment 2" }, "skip_reason": "Conditional result was False" } TASK [Check Candlepin works] *************************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/check_candlepin.yml:3 Saturday 14 February 2026 14:34:00 -0500 (0:00:00.042) 0:01:38.383 ***** ok: [managed-node1] => { "changed": false, "connection": "close", "content_type": "application/json", "cookies": {}, "cookies_string": "", "date": "Sat, 14 Feb 2026 19:34:01 GMT", "elapsed": 0, "redirected": true, "status": 200, "transfer_encoding": "chunked", "url": "https://candlepin.local:8443/candlepin/", "vary": "accept-encoding", "x_candlepin_request_uuid": "38e2f7d0-3fcb-49a2-9240-7a0bc1957236", "x_version": "4.7.3-1" } MSG: OK (unknown bytes) TASK [Install packages for squid] ********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:7 Saturday 14 February 2026 14:34:01 -0500 (0:00:00.509) 0:01:38.892 ***** changed: [managed-node1] => { "changed": 
true, "rc": 0, "results": [ "Installed: apr-util-bdb-1.6.1-9.el8.x86_64", "Installed: perl-Math-Complex-1.59-422.el8.noarch", "Installed: squid-7:4.15-10.module_el8+997+5764cec8.x86_64", "Installed: httpd-tools-2.4.37-64.module_el8+965+1ad5c49d.x86_64", "Installed: perl-Digest-SHA-1:6.02-1.el8.x86_64", "Installed: apr-util-openssl-1.6.1-9.el8.x86_64", "Installed: libtool-ltdl-2.4.6-25.el8.x86_64", "Installed: libecap-1.0.1-2.module_el8+660+c5a9a808.x86_64", "Installed: apr-1.6.3-12.el8.x86_64", "Installed: perl-DBI-1.641-4.module_el8+332+132e4365.x86_64", "Installed: apr-util-1.6.1-9.el8.x86_64", "Installed: perl-Math-BigInt-1:1.9998.11-7.el8.noarch" ] } lsrpackages: httpd-tools squid TASK [Check the status of the backup of configuration] ************************* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:15 Saturday 14 February 2026 14:34:07 -0500 (0:00:06.646) 0:01:45.539 ***** ok: [managed-node1] => { "changed": false, "stat": { "exists": false } } TASK [Backup the configuration] ************************************************ task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:20 Saturday 14 February 2026 14:34:08 -0500 (0:00:00.456) 0:01:45.995 ***** changed: [managed-node1] => { "changed": true, "checksum": "03416f7b93f3c21eedb46d4e75c2ccd76be402e4", "dest": "/etc/squid/squid.conf.BACKUP", "gid": 0, "group": "root", "md5sum": "d5d9b333b227e203ea890877e8587e84", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:squid_conf_t:s0", "size": 2482, "src": "/etc/squid/squid.conf", "state": "file", "uid": 0 } TASK [Copy the pristine configuration back] ************************************ task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:29 Saturday 14 February 2026 14:34:08 -0500 (0:00:00.525) 0:01:46.521 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Open the Candlepin port] ************************************************* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:38 Saturday 14 February 2026 14:34:08 -0500 (0:00:00.041) 0:01:46.563 ***** changed: [managed-node1] => { "backup": "", "changed": true } MSG: line added TASK [Set the shutdown lifetime] *********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:47 Saturday 14 February 2026 14:34:09 -0500 (0:00:00.364) 0:01:46.927 ***** changed: [managed-node1] => { "backup": "", "changed": true } MSG: line added TASK [Set the port] ************************************************************ task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:57 Saturday 14 February 2026 14:34:09 -0500 (0:00:00.401) 0:01:47.329 ***** ok: [managed-node1] => { "backup": "", "changed": false } TASK [Create the new passwd file] ********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:66 Saturday 14 February 2026 14:34:09 -0500 (0:00:00.362) 0:01:47.692 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Set the port] ************************************************************ task path: 
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:78 Saturday 14 February 2026 14:34:09 -0500 (0:00:00.034) 0:01:47.726 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Disable HTTP access allow] *********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:84 Saturday 14 February 2026 14:34:09 -0500 (0:00:00.032) 0:01:47.759 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Insert initial auth config] ********************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:90 Saturday 14 February 2026 14:34:10 -0500 (0:00:00.034) 0:01:47.793 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Add authenticated acl] *************************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:103 Saturday 14 February 2026 14:34:10 -0500 (0:00:00.033) 0:01:47.827 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Allow authenticated acl] ************************************************* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:111 Saturday 14 February 2026 14:34:10 -0500 (0:00:00.034) 0:01:47.862 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Restart squid] *********************************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:119 Saturday 14 February 2026 14:34:10 -0500 (0:00:00.033) 0:01:47.895 ***** changed: [managed-node1] => { "changed": true, "name": "squid", "state": "started", "status": { "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "network-online.target network.target nss-lookup.target systemd-journald.socket basic.target sysinit.target system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend 
cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Squid caching proxy", "DevicePolicy": "auto", "Documentation": "man:squid(8)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/squid (ignore_errors=no)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/usr/bin/kill ; argv[]=/usr/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/squid ; argv[]=/usr/sbin/squid --foreground $SQUID_OPTS -f ${SQUID_CONF} ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/usr/libexec/squid/cache_swap.sh ; argv[]=/usr/libexec/squid/cache_swap.sh ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/squid.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "squid.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "16384", "LimitNOFILESoft": "16384", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "squid.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": 
"0", "OnFailureJobMode": "replace", "PIDFile": "/run/squid.pid", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "UtmpMode": "init", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [Add SELinux policy for proxy ports] ************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tests_proxy.yml:26 Saturday 14 February 2026 14:34:11 -0500 (0:00:01.106) 0:01:49.001 ***** ERROR! the role 'fedora.linux_system_roles.selinux' was not found in fedora.linux_system_roles:ansible.legacy:/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc The error appears to be in '/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tests_proxy.yml': line 28, column 19, but may be elsewhere in the file depending on the exact syntax problem. 
The offending line appears to be: include_role: name: fedora.linux_system_roles.selinux ^ here TASK [Unregister] ************************************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tests_proxy.yml:353 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.031) 0:01:49.033 ***** TASK [fedora.linux_system_roles.rhc : Set ansible_facts required by role] ****** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/main.yml:3 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.051) 0:01:49.084 ***** included: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/set_vars.yml for managed-node1 TASK [fedora.linux_system_roles.rhc : Ensure ansible_facts used by role] ******* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/set_vars.yml:3 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.053) 0:01:49.138 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.rhc : Check if system is ostree] *************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/set_vars.yml:11 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.034) 0:01:49.173 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.rhc : Set flag to indicate system is ostree] *** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/set_vars.yml:16 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.031) 0:01:49.205 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.rhc : Check if insights-packages are installed] *** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/set_vars.yml:20 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.031) 0:01:49.236 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.rhc : Handle insights unregistration] ********** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/main.yml:6 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.031) 0:01:49.268 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.rhc : Handle system subscription] ************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/main.yml:15 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.030) 0:01:49.299 ***** included: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/subscription-manager.yml for managed-node1 TASK [fedora.linux_system_roles.rhc : Ensure required packages are installed] *** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/subscription-manager.yml:3 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.023) 0:01:49.322 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.rhc : Get subscription status] ***************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/subscription-manager.yml:10 Saturday 14 February 2026 14:34:11 -0500 
(0:00:00.031) 0:01:49.354 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.rhc : Call subscription-manager] *************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/subscription-manager.yml:23 Saturday 14 February 2026 14:34:11 -0500 (0:00:00.035) 0:01:49.389 ***** ok: [managed-node1] => { "changed": false } MSG: System already unregistered. TASK [fedora.linux_system_roles.rhc : Set or unset the release] **************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/subscription-manager.yml:49 Saturday 14 February 2026 14:34:12 -0500 (0:00:00.918) 0:01:50.308 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.rhc : Configure repositories] ****************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/subscription-manager.yml:58 Saturday 14 February 2026 14:34:12 -0500 (0:00:00.034) 0:01:50.343 ***** TASK [fedora.linux_system_roles.rhc : Handle insights registration] ************ task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/main.yml:18 Saturday 14 February 2026 14:34:12 -0500 (0:00:00.031) 0:01:50.374 ***** skipping: [managed-node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Clean up Candlepin container] ******************************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tests_proxy.yml:359 Saturday 14 February 2026 14:34:12 -0500 (0:00:00.032) 0:01:50.406 ***** included: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/teardown_candlepin.yml for managed-node1 TASK [Check if the candlepin container exists] ********************************* task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/teardown_candlepin.yml:6 Saturday 14 February 2026 14:34:12 -0500 (0:00:00.033) 0:01:50.440 ***** ok: [managed-node1] => { "changed": false, "cmd": [ "podman", "ps", "-a", "--filter", "name=candlepin" ], "delta": "0:00:00.039316", "end": "2026-02-14 14:34:13.016525", "rc": 0, "start": "2026-02-14 14:34:12.977209" } STDOUT: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 948ae7843d6e ghcr.io/candlepin/candlepin-unofficial:latest /sbin/init 41 seconds ago Up 41 seconds 0.0.0.0:8080->8080/tcp, 0.0.0.0:8443->8443/tcp candlepin TASK [Ensure that Candlepin container doesn't exist] *************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/teardown_candlepin.yml:17 Saturday 14 February 2026 14:34:13 -0500 (0:00:00.394) 0:01:50.835 ***** changed: [managed-node1] => { "changed": true, "cmd": [ "podman", "stop", "candlepin" ], "delta": "0:00:00.817846", "end": "2026-02-14 14:34:14.194335", "rc": 0, "start": "2026-02-14 14:34:13.376489" } STDOUT: candlepin TASK [Remove SELinux policy for proxy ports] *********************************** task path: /tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tests_proxy.yml:362 Saturday 14 February 2026 14:34:14 -0500 (0:00:01.184) 0:01:52.020 ***** ERROR! 
the role 'fedora.linux_system_roles.selinux' was not found in fedora.linux_system_roles:ansible.legacy:/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc

The error appears to be in '/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tests_proxy.yml': line 364, column 19, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

      include_role:
        name: fedora.linux_system_roles.selinux
        ^ here

PLAY RECAP *********************************************************************
managed-node1 : ok=33 changed=9 unreachable=0 failed=0 skipped=21 rescued=0 ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Saturday 14 February 2026 14:34:14 -0500 (0:00:00.034) 0:01:52.055 *****
===============================================================================
Install needed packages ------------------------------------------------ 50.75s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:31
Start Candlepin container ---------------------------------------------- 16.06s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:41
Wait for started Candlepin --------------------------------------------- 12.97s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:108
Install packages for squid ---------------------------------------------- 6.65s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:7
Ensure directories exist ------------------------------------------------ 3.73s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:62
Update system certificates store ---------------------------------------- 3.26s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:101
Copy product certificates ----------------------------------------------- 2.62s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:72
Copy Candlepin CA certificate for subscription-manager ------------------ 2.50s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:83
Check if the candlepin container exists --------------------------------- 2.03s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/teardown_candlepin.yml:6
Copy Candlepin CA certificate for system -------------------------------- 1.92s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:92
Ensure that Candlepin container doesn't exist --------------------------- 1.18s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/teardown_candlepin.yml:17
Restart squid ----------------------------------------------------------- 1.11s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:119
fedora.linux_system_roles.rhc : Call subscription-manager --------------- 0.92s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/roles/rhc/tasks/subscription-manager.yml:23
Get facts for external test data ---------------------------------------- 0.65s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:9
Install GPG key for RPM repositories ------------------------------------ 0.55s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:118
Backup the configuration ------------------------------------------------ 0.53s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:20
Check if system is ostree ----------------------------------------------- 0.53s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_test_data.yml:32
Check Candlepin works --------------------------------------------------- 0.51s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/check_candlepin.yml:3
Check the status of the backup of configuration ------------------------- 0.46s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_squid.yml:15
Add candlepin hostname to /etc/hosts ------------------------------------ 0.43s
/tmp/collections-9qD/ansible_collections/fedora/linux_system_roles/tests/rhc/tasks/setup_candlepin.yml:25

-- Logs begin at Sat 2026-02-14 14:26:19 EST, end at Sat 2026-02-14 14:34:14 EST. --
Feb 14 14:32:21 managed-node1 sshd[9050]: Accepted publickey for root from 10.31.11.228 port 33426 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Feb 14 14:32:21 managed-node1 systemd-logind[597]: New session 14 of user root.
-- Subject: A new session 14 has been created for user root
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 14 has been created for the user root.
--
-- The leading process of the session is 9050.
Feb 14 14:32:21 managed-node1 systemd[1]: Started Session 14 of user root.
-- Subject: Unit session-14.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit session-14.scope has finished starting up.
--
-- The start-up result is done.
Feb 14 14:32:21 managed-node1 sshd[9050]: pam_unix(sshd:session): session opened for user root by (uid=0)
Feb 14 14:32:21 managed-node1 sshd[9053]: Received disconnect from 10.31.11.228 port 33426:11: disconnected by user
Feb 14 14:32:21 managed-node1 sshd[9053]: Disconnected from user root 10.31.11.228 port 33426
Feb 14 14:32:21 managed-node1 sshd[9050]: pam_unix(sshd:session): session closed for user root
Feb 14 14:32:21 managed-node1 systemd[1]: session-14.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-14.scope has successfully entered the 'dead' state.
Feb 14 14:32:21 managed-node1 systemd-logind[597]: Session 14 logged out. Waiting for processes to exit.
Feb 14 14:32:21 managed-node1 systemd-logind[597]: Removed session 14.
-- Subject: Session 14 has been terminated
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 14 has been terminated.
Feb 14 14:32:22 managed-node1 sudo[9215]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glovwelziwbjjqaynwwlexoueoaaeeth ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097542.4450898-9384-194457019065713/AnsiballZ_stat.py' Feb 14 14:32:22 managed-node1 sudo[9215]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:32:22 managed-node1 platform-python[9218]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 14:32:22 managed-node1 sudo[9215]: pam_unix(sudo:session): session closed for user root Feb 14 14:32:23 managed-node1 sudo[9341]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdydhqcgruszarbntrfaeukwjohzkjxj ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097543.0055952-9398-257624247185832/AnsiballZ_setup.py' Feb 14 14:32:23 managed-node1 sudo[9341]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:32:23 managed-node1 platform-python[9344]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Feb 14 14:32:23 managed-node1 sudo[9341]: pam_unix(sudo:session): session closed for user root Feb 14 14:32:23 managed-node1 sudo[9471]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mazgwrpuoqwnvsmtsocnqkkblazdcvsm ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097543.7287664-9411-241550447434185/AnsiballZ_lineinfile.py' Feb 14 14:32:23 managed-node1 sudo[9471]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:32:24 managed-node1 platform-python[9474]: ansible-lineinfile Invoked with path=/etc/hosts line=127.0.0.1 candlepin.local regexp=.*candlepin.local state=present backrefs=False create=False backup=False firstmatch=False follow=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None src=None force=None content=NOT_LOGGING_PARAMETER remote_src=None delimiter=None directory_mode=None Feb 14 14:32:24 managed-node1 sudo[9471]: pam_unix(sudo:session): session closed for user root Feb 14 14:32:24 managed-node1 sudo[9597]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpqfxgumxfuxmirowshhaqwyrqyoazbd ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097544.1613567-9427-150929472820414/AnsiballZ_setup.py' Feb 14 14:32:24 managed-node1 sudo[9597]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:32:24 managed-node1 platform-python[9600]: ansible-setup Invoked with filter=ansible_pkg_mgr gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 14 14:32:24 managed-node1 sudo[9597]: pam_unix(sudo:session): session closed for user root Feb 14 14:32:24 managed-node1 sudo[9668]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnilflodghqxczzgeonmpolqxbjdchhk ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097544.1613567-9427-150929472820414/AnsiballZ_dnf.py' Feb 14 14:32:24 managed-node1 sudo[9668]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:32:25 managed-node1 platform-python[9671]: ansible-dnf Invoked with name=['podman'] state=present 
allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 14 14:32:30 managed-node1 dbus-daemon[598]: [system] Reloaded configuration Feb 14 14:32:30 managed-node1 setsebool[9703]: The virt_use_nfs policy boolean was changed to 1 by root Feb 14 14:32:30 managed-node1 setsebool[9703]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root Feb 14 14:32:47 managed-node1 kernel: SELinux: Converting 389 SID table entries... Feb 14 14:32:47 managed-node1 kernel: SELinux: policy capability network_peer_controls=1 Feb 14 14:32:47 managed-node1 kernel: SELinux: policy capability open_perms=1 Feb 14 14:32:47 managed-node1 kernel: SELinux: policy capability extended_socket_class=1 Feb 14 14:32:47 managed-node1 kernel: SELinux: policy capability always_check_network=0 Feb 14 14:32:47 managed-node1 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 14 14:32:47 managed-node1 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 14 14:32:47 managed-node1 dbus-daemon[598]: [system] Reloaded configuration Feb 14 14:32:47 managed-node1 kernel: fuse: init (API version 7.34) Feb 14 14:32:47 managed-node1 systemd[1]: Mounting FUSE Control File System... -- Subject: Unit sys-fs-fuse-connections.mount has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sys-fs-fuse-connections.mount has begun starting up. Feb 14 14:32:47 managed-node1 systemd[1]: Mounted FUSE Control File System. -- Subject: Unit sys-fs-fuse-connections.mount has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit sys-fs-fuse-connections.mount has finished starting up. -- -- The start-up result is done. Feb 14 14:32:48 managed-node1 dbus-daemon[598]: [system] Reloaded configuration Feb 14 14:32:48 managed-node1 dbus-daemon[598]: [system] Reloaded configuration Feb 14 14:33:13 managed-node1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-rccadeeb196134dfab5d0074f464c292a.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-rccadeeb196134dfab5d0074f464c292a.service has finished starting up. -- -- The start-up result is done. Feb 14 14:33:13 managed-node1 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Feb 14 14:33:13 managed-node1 systemd[1]: Reloading. Feb 14 14:33:14 managed-node1 sudo[9668]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:14 managed-node1 systemd[1]: man-db-cache-update.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Feb 14 14:33:14 managed-node1 systemd[1]: Started man-db-cache-update.service. 
-- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. Feb 14 14:33:14 managed-node1 systemd[1]: run-rccadeeb196134dfab5d0074f464c292a.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-rccadeeb196134dfab5d0074f464c292a.service has successfully entered the 'dead' state. Feb 14 14:33:15 managed-node1 sudo[12179]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clgcfwfpnnzruicdclgmrbfkfnjajfea ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097594.9554527-9827-280339108299070/AnsiballZ_command.py' Feb 14 14:33:15 managed-node1 sudo[12179]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:33:15 managed-node1 platform-python[12182]: ansible-command Invoked with argv=['podman', 'ps', '-a', '--filter', 'name=candlepin'] warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True _raw_params=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 14:33:16 managed-node1 kernel: evm: overlay not supported Feb 14 14:33:16 managed-node1 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 14:33:16 managed-node1 sudo[12179]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:17 managed-node1 sudo[12314]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spioggyxpjuldqibfwrkcwdohceeqihi ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097597.0263817-9863-25184053279218/AnsiballZ_command.py' Feb 14 14:33:17 managed-node1 sudo[12314]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:33:17 managed-node1 platform-python[12317]: ansible-command Invoked with argv=['podman', 'run', '--rm', '--detach', '--hostname', 'candlepin.local', '--name', 'candlepin', '--publish', '8443:8443', '--publish', '8080:8080', 'ghcr.io/candlepin/candlepin-unofficial'] warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True _raw_params=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 14:33:32 managed-node1 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck186469908-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-volatile\x2dcheck186469908-merged.mount has successfully entered the 'dead' state. Feb 14 14:33:32 managed-node1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1119] manager: (cni-podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/3) Feb 14 14:33:32 managed-node1 systemd-udevd[12349]: Using default interface naming scheme 'rhel-8.0'. Feb 14 14:33:32 managed-node1 systemd-udevd[12349]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 14 14:33:32 managed-node1 systemd-udevd[12349]: Could not generate persistent MAC address for cni-podman0: No such file or directory Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1316] manager: (veth6ea29213): new Veth device (/org/freedesktop/NetworkManager/Devices/4) Feb 14 14:33:32 managed-node1 systemd-udevd[12346]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Feb 14 14:33:32 managed-node1 systemd-udevd[12346]: Could not generate persistent MAC address for veth6ea29213: No such file or directory Feb 14 14:33:32 managed-node1 kernel: IPv6: ADDRCONF(NETDEV_UP): veth6ea29213: link is not ready Feb 14 14:33:32 managed-node1 kernel: cni-podman0: port 1(veth6ea29213) entered blocking state Feb 14 14:33:32 managed-node1 kernel: cni-podman0: port 1(veth6ea29213) entered disabled state Feb 14 14:33:32 managed-node1 kernel: device veth6ea29213 entered promiscuous mode Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1363] device (cni-podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1367] device (cni-podman0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1374] device (cni-podman0): Activation: starting connection 'cni-podman0' (05e9aaaf-0bfa-4686-8918-970b7c0bad6d) Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1375] device (cni-podman0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1377] device (cni-podman0): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1379] device (cni-podman0): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1381] device (cni-podman0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Feb 14 14:33:32 managed-node1 dbus-daemon[598]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=662 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Feb 14 14:33:32 managed-node1 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 14 14:33:32 managed-node1 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 14 14:33:32 managed-node1 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6ea29213: link becomes ready Feb 14 14:33:32 managed-node1 kernel: cni-podman0: port 1(veth6ea29213) entered blocking state Feb 14 14:33:32 managed-node1 kernel: cni-podman0: port 1(veth6ea29213) entered forwarding state Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1544] device (veth6ea29213): carrier: link connected Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.1547] device (cni-podman0): carrier: link connected Feb 14 14:33:32 managed-node1 systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has begun starting up. 
Feb 14 14:33:32 managed-node1 dbus-daemon[598]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Feb 14 14:33:32 managed-node1 systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.2009] device (cni-podman0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.2011] device (cni-podman0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Feb 14 14:33:32 managed-node1 NetworkManager[662]: [1771097612.2015] device (cni-podman0): Activation: successful, device activated. Feb 14 14:33:32 managed-node1 systemd[1]: Created slice machine.slice. -- Subject: Unit machine.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine.slice has finished starting up. -- -- The start-up result is done. Feb 14 14:33:32 managed-node1 systemd[1]: Started libpod-conmon-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope. -- Subject: Unit libpod-conmon-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope has finished starting up. -- -- The start-up result is done. Feb 14 14:33:32 managed-node1 systemd[1]: run-runc-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334-runc.7xtIT3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-runc-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334-runc.7xtIT3.mount has successfully entered the 'dead' state. Feb 14 14:33:32 managed-node1 systemd[1]: Started libcontainer container 948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334. -- Subject: Unit libpod-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope has finished starting up. -- -- The start-up result is done. 
Feb 14 14:33:32 managed-node1 sudo[12314]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:33 managed-node1 sudo[12727]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdeqkqgummxebkstjmmxqpboaxnqqnra ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097613.0781546-10028-256501757944949/AnsiballZ_file.py' Feb 14 14:33:33 managed-node1 sudo[12727]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:33:33 managed-node1 platform-python[12730]: ansible-file Invoked with path=/etc/pki/product state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Feb 14 14:33:33 managed-node1 sudo[12727]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:34 managed-node1 sudo[12854]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueayahonvtsfyjmbcehohyzoouzsgxsv ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097614.0289974-10028-61271393611639/AnsiballZ_file.py' Feb 14 14:33:34 managed-node1 sudo[12854]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:33:34 managed-node1 platform-python[12857]: ansible-file Invoked with path=/etc/pki/product-default state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Feb 14 14:33:34 managed-node1 sudo[12854]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:36 managed-node1 sudo[12980]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfvmskpxsshyzgbbpqubyxsfgtnkoxqq ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097615.2505708-10028-267819711732490/AnsiballZ_file.py' Feb 14 14:33:36 managed-node1 sudo[12980]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:33:36 managed-node1 platform-python[12983]: ansible-file Invoked with path=/etc/rhsm/ca state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Feb 14 14:33:36 managed-node1 sudo[12980]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:38 managed-node1 sudo[13106]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcvhchkfthzdzxpxrreqsbshscksjnsa ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097617.0742633-10072-177113008160347/AnsiballZ_command.py' Feb 14 14:33:38 managed-node1 sudo[13106]: pam_unix(sudo:session): session opened for user 
root by root(uid=0) Feb 14 14:33:38 managed-node1 platform-python[13109]: ansible-command Invoked with argv=['podman', 'cp', 'candlepin:/home/candlepin/devel/candlepin/generated_certs/7050.pem', '/etc/pki/product-default/'] warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True _raw_params=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 14:33:39 managed-node1 sudo[13106]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:40 managed-node1 sudo[13268]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hssgktbeoifjwzespenbthwwpecjwpso ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097619.6109102-10102-14401884493845/AnsiballZ_command.py' Feb 14 14:33:40 managed-node1 sudo[13268]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:33:40 managed-node1 platform-python[13271]: ansible-command Invoked with argv=['podman', 'cp', 'candlepin:/etc/candlepin/certs/candlepin-ca.crt', '/etc/rhsm/ca/candlepin-ca.pem'] warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True _raw_params=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 14:33:41 managed-node1 sudo[13268]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:42 managed-node1 systemd[1]: NetworkManager-dispatcher.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Feb 14 14:33:42 managed-node1 sudo[13431]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjzkckiygopcwlpbnohmohvswyapuddr ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097622.0531633-10150-196517667350964/AnsiballZ_command.py' Feb 14 14:33:42 managed-node1 sudo[13431]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:33:42 managed-node1 platform-python[13434]: ansible-command Invoked with argv=['podman', 'cp', 'candlepin:/etc/candlepin/certs/candlepin-ca.crt', '/etc/pki/ca-trust/source/anchors/candlepin-ca.pem'] warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True _raw_params=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 14:33:43 managed-node1 sudo[13431]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:44 managed-node1 sudo[13599]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmsciyzmkwtmzdjlkabbjprbrrdfdhxj ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097623.9677832-10214-125103148948064/AnsiballZ_command.py' Feb 14 14:33:44 managed-node1 sudo[13599]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:33:45 managed-node1 platform-python[13602]: ansible-command Invoked with argv=['update-ca-trust', 'extract'] warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True _raw_params=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 14:33:46 managed-node1 sudo[13599]: pam_unix(sudo:session): session closed for user root Feb 14 14:33:48 managed-node1 sudo[13732]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzcltgbrxfrxbrlaffrsvqozhamgtcyg ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097627.2674747-10257-186848056538679/AnsiballZ_uri.py' Feb 14 14:33:48 managed-node1 sudo[13732]: pam_unix(sudo:session): 
session opened for user root by root(uid=0) Feb 14 14:33:48 managed-node1 platform-python[13735]: ansible-uri Invoked with url=https://candlepin.local:8443/candlepin method=HEAD validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False body_format=raw return_content=False follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Feb 14 14:33:59 managed-node1 sudo[13732]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:00 managed-node1 sudo[13953]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-litawsjferwznkrdxotyorvbzmqwscwg ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097640.0826979-10403-91040941885603/AnsiballZ_get_url.py' Feb 14 14:34:00 managed-node1 sudo[13953]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:00 managed-node1 platform-python[13956]: ansible-get_url Invoked with url=http://candlepin.local:8080/RPM-GPG-KEY-candlepin dest=/etc/pki/rpm-gpg/RPM-GPG-KEY-candlepin mode=0644 force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False sha256sum= checksum= timeout=10 follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None backup=None headers=None tmp_dest=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None src=None content=NOT_LOGGING_PARAMETER remote_src=None regexp=None delimiter=None directory_mode=None Feb 14 14:34:00 managed-node1 sudo[13953]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:00 managed-node1 sudo[14079]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sykzfgifkhvgsijgmrjgdhrpnpgzhuae ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097640.6738286-10421-141236890441491/AnsiballZ_uri.py' Feb 14 14:34:00 managed-node1 sudo[14079]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:00 managed-node1 platform-python[14082]: ansible-uri Invoked with url=https://candlepin.local:8443/candlepin method=HEAD validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False body_format=raw return_content=False follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Feb 14 14:34:01 managed-node1 sudo[14079]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:01 managed-node1 sudo[14205]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffxmmttlqlbuceetqtyyrmhwivtynozl ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097641.1902025-10447-249710518215779/AnsiballZ_setup.py' Feb 14 14:34:01 managed-node1 
sudo[14205]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:01 managed-node1 platform-python[14208]: ansible-setup Invoked with filter=ansible_pkg_mgr gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 14 14:34:01 managed-node1 sudo[14205]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:01 managed-node1 sudo[14276]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuiobisfclhemdprtnzpbqbeeimwkyub ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097641.1902025-10447-249710518215779/AnsiballZ_dnf.py' Feb 14 14:34:01 managed-node1 sudo[14276]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:01 managed-node1 platform-python[14279]: ansible-dnf Invoked with name=['squid', 'httpd-tools'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Feb 14 14:34:05 managed-node1 groupadd[14306]: group added to /etc/group: name=squid, GID=23 Feb 14 14:34:05 managed-node1 groupadd[14306]: group added to /etc/gshadow: name=squid Feb 14 14:34:05 managed-node1 groupadd[14306]: new group: name=squid, GID=23 Feb 14 14:34:05 managed-node1 useradd[14313]: new user: name=squid, UID=23, GID=23, home=/var/spool/squid, shell=/sbin/nologin Feb 14 14:34:06 managed-node1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-r9a990bc864974b41a7d35e8e048cb823.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-r9a990bc864974b41a7d35e8e048cb823.service has finished starting up. -- -- The start-up result is done. Feb 14 14:34:06 managed-node1 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Feb 14 14:34:06 managed-node1 systemd[1]: Reloading. Feb 14 14:34:07 managed-node1 sudo[14276]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:07 managed-node1 sudo[16217]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifreknzzkszsbmvkrsrjwjpeiclxwwpy ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097647.8375335-10772-166310068266562/AnsiballZ_stat.py' Feb 14 14:34:07 managed-node1 sudo[16217]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:08 managed-node1 platform-python[16243]: ansible-stat Invoked with path=/etc/squid/squid.conf.BACKUP follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 14:34:08 managed-node1 sudo[16217]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:08 managed-node1 systemd[1]: man-db-cache-update.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Feb 14 14:34:08 managed-node1 systemd[1]: Started man-db-cache-update.service. 
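
Before the proxy scenarios run, the test prepares the host to talk to this Candlepin instance: the ansible-file, podman cp, update-ca-trust, uri and get_url invocations logged above create the certificate directories, copy the product certificate and the Candlepin CA out of the container, refresh the system trust store, confirm the HTTPS endpoint answers, and fetch the RPM GPG key. A minimal sketch of equivalent tasks, reconstructed from the logged module parameters (the loop and the task names are editorial simplifications, not the actual test source):

  - name: Create the certificate directories
    file:
      path: "{{ item }}"
      state: directory
      mode: '0755'
    loop:
      - /etc/pki/product
      - /etc/pki/product-default
      - /etc/rhsm/ca

  - name: Copy the generated product certificate out of the container
    command:
      argv: [podman, cp, 'candlepin:/home/candlepin/devel/candlepin/generated_certs/7050.pem', /etc/pki/product-default/]

  - name: Install the Candlepin CA for subscription-manager
    command:
      argv: [podman, cp, 'candlepin:/etc/candlepin/certs/candlepin-ca.crt', /etc/rhsm/ca/candlepin-ca.pem]

  - name: Add the Candlepin CA to the system trust anchors
    command:
      argv: [podman, cp, 'candlepin:/etc/candlepin/certs/candlepin-ca.crt', /etc/pki/ca-trust/source/anchors/candlepin-ca.pem]

  - name: Rebuild the CA trust store
    command:
      argv: [update-ca-trust, extract]

  - name: Check that Candlepin answers over HTTPS
    uri:
      url: https://candlepin.local:8443/candlepin
      method: HEAD
      validate_certs: false
      status_code: 200
      timeout: 30

  - name: Fetch the Candlepin RPM GPG key
    get_url:
      url: http://candlepin.local:8080/RPM-GPG-KEY-candlepin
      dest: /etc/pki/rpm-gpg/RPM-GPG-KEY-candlepin
      mode: '0644'

As in the logged invocation, the HEAD request is made with validate_certs=False, so it succeeds whether or not the copied CA is already trusted system-wide.
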
-- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. Feb 14 14:34:08 managed-node1 systemd[1]: run-r9a990bc864974b41a7d35e8e048cb823.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-r9a990bc864974b41a7d35e8e048cb823.service has successfully entered the 'dead' state. Feb 14 14:34:08 managed-node1 sudo[16831]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fewmbwpxgjifpfwcxstcvwrauwabvuqv ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097648.32146-10790-163271936400542/AnsiballZ_copy.py' Feb 14 14:34:08 managed-node1 sudo[16831]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:08 managed-node1 platform-python[16834]: ansible-copy Invoked with src=/etc/squid/squid.conf dest=/etc/squid/squid.conf.BACKUP remote_src=True mode=0644 backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Feb 14 14:34:08 managed-node1 sudo[16831]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:08 managed-node1 sudo[16959]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbttlpztgtjlybjhciyumktkogasjdao ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097648.8589103-10820-239867882790510/AnsiballZ_lineinfile.py' Feb 14 14:34:08 managed-node1 sudo[16959]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:09 managed-node1 platform-python[16962]: ansible-lineinfile Invoked with path=/etc/squid/squid.conf regexp=^acl SSL_ports port 8443 insertbefore=^acl Safe_ports firstmatch=True line=acl SSL_ports port 8443 # Candlepin state=present backrefs=False create=False backup=False follow=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None src=None force=None content=NOT_LOGGING_PARAMETER remote_src=None delimiter=None directory_mode=None Feb 14 14:34:09 managed-node1 sudo[16959]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:09 managed-node1 sudo[17085]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bllondmtpcecnhgmndiikefzyeassces ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097649.2363572-10830-238533717151339/AnsiballZ_lineinfile.py' Feb 14 14:34:09 managed-node1 sudo[17085]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:09 managed-node1 platform-python[17088]: ansible-lineinfile Invoked with path=/etc/squid/squid.conf regexp=^shutdown_lifetime line=shutdown_lifetime 5 seconds state=present backrefs=False create=False backup=False firstmatch=False follow=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None src=None force=None content=NOT_LOGGING_PARAMETER remote_src=None delimiter=None directory_mode=None Feb 14 14:34:09 managed-node1 sudo[17085]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:09 
managed-node1 sudo[17211]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itpojcgknfnyswvjkjsoesbdahvjcoke ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097649.62359-10850-275339734019398/AnsiballZ_lineinfile.py' Feb 14 14:34:09 managed-node1 sudo[17211]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:09 managed-node1 platform-python[17214]: ansible-lineinfile Invoked with path=/etc/squid/squid.conf regexp=^http_port line=http_port 3128 state=present backrefs=False create=False backup=False firstmatch=False follow=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None src=None force=None content=NOT_LOGGING_PARAMETER remote_src=None delimiter=None directory_mode=None Feb 14 14:34:09 managed-node1 sudo[17211]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:10 managed-node1 sudo[17337]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqmtlcszrtdkofyhkakgnrwitrvsmkhk ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097650.1864395-10871-211161618785643/AnsiballZ_setup.py' Feb 14 14:34:10 managed-node1 sudo[17337]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:10 managed-node1 platform-python[17340]: ansible-setup Invoked with gather_subset=['!all'] filter=ansible_service_mgr gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 14 14:34:10 managed-node1 sudo[17337]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:10 managed-node1 sudo[17408]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfcjacampwxmcxgtpggvcqzbdfncsowb ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097650.1864395-10871-211161618785643/AnsiballZ_systemd.py' Feb 14 14:34:10 managed-node1 sudo[17408]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:11 managed-node1 platform-python[17411]: ansible-systemd Invoked with name=squid state=restarted daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None scope=None Feb 14 14:34:11 managed-node1 systemd[1]: Starting Squid caching proxy... -- Subject: Unit squid.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit squid.service has begun starting up. Feb 14 14:34:11 managed-node1 squid[17424]: Squid Parent: will start 1 kids Feb 14 14:34:11 managed-node1 squid[17424]: Squid Parent: (squid-1) process 17426 started Feb 14 14:34:11 managed-node1 systemd[1]: Started Squid caching proxy. -- Subject: Unit squid.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit squid.service has finished starting up. -- -- The start-up result is done. 
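
The squid portion of the setup is also visible above: dnf installs squid and httpd-tools, the stock /etc/squid/squid.conf is backed up, three lineinfile edits allow port 8443 as an SSL port, shorten shutdown_lifetime to 5 seconds, and set http_port to 3128, and the service is restarted. Roughly equivalent tasks, again reconstructed from the logged parameters (the task names, the register variable and the when guard are assumptions, not the actual test source):

  - name: Install squid and httpd-tools
    dnf:
      name:
        - squid
        - httpd-tools
      state: present

  - name: Check whether squid.conf was already backed up
    stat:
      path: /etc/squid/squid.conf.BACKUP
    register: squid_conf_backup                         # variable name is an assumption

  - name: Back up the stock squid.conf
    copy:
      src: /etc/squid/squid.conf
      dest: /etc/squid/squid.conf.BACKUP
      remote_src: true
      mode: '0644'
    when: not squid_conf_backup.stat.exists             # guard is an assumption

  - name: Allow CONNECT to the Candlepin HTTPS port
    lineinfile:
      path: /etc/squid/squid.conf
      regexp: '^acl SSL_ports port 8443'
      insertbefore: '^acl Safe_ports'
      firstmatch: true
      line: 'acl SSL_ports port 8443 # Candlepin'

  - name: Shorten the shutdown grace period
    lineinfile:
      path: /etc/squid/squid.conf
      regexp: '^shutdown_lifetime'
      line: 'shutdown_lifetime 5 seconds'

  - name: Listen on port 3128 for plain proxying
    lineinfile:
      path: /etc/squid/squid.conf
      regexp: '^http_port'
      line: 'http_port 3128'

  - name: Restart squid
    systemd:
      name: squid
      state: restarted

Lowering shutdown_lifetime keeps later restarts of squid during the test from waiting out the default 30-second client grace period.
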
Feb 14 14:34:11 managed-node1 sudo[17408]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:11 managed-node1 sudo[17549]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cymtfunwuuoawfwczpihvmthsjgblwll ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097651.7251751-10921-271920149433735/AnsiballZ_redhat_subscription.py' Feb 14 14:34:11 managed-node1 sudo[17549]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:12 managed-node1 platform-python[17552]: ansible-community.general.redhat_subscription Invoked with state=absent force_register=False pool_ids=[] username=None password=NOT_LOGGING_PARAMETER token=NOT_LOGGING_PARAMETER server_hostname=None server_insecure=None server_prefix=None server_port=None rhsm_baseurl=None rhsm_repo_ca_cert=None auto_attach=None activationkey=NOT_LOGGING_PARAMETER org_id=None environment=None consumer_type=None consumer_name=None consumer_id=None server_proxy_hostname=None server_proxy_scheme=None server_proxy_port=None server_proxy_user=None server_proxy_password=NOT_LOGGING_PARAMETER release=None syspurpose=None Feb 14 14:34:12 managed-node1 sudo[17549]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:12 managed-node1 sudo[17677]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkfcpdecrenpbcxnnzibhhnrbgaynxvy ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097652.7314227-10966-260969087771789/AnsiballZ_command.py' Feb 14 14:34:12 managed-node1 sudo[17677]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:12 managed-node1 platform-python[17680]: ansible-command Invoked with argv=['podman', 'ps', '-a', '--filter', 'name=candlepin'] warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True _raw_params=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 14:34:13 managed-node1 sudo[17677]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:13 managed-node1 sudo[17810]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eznufydzwtqfvcmqvhmlxxxtcnvwyxng ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1771097653.1275537-10979-52806368448915/AnsiballZ_command.py' Feb 14 14:34:13 managed-node1 sudo[17810]: pam_unix(sudo:session): session opened for user root by root(uid=0) Feb 14 14:34:13 managed-node1 platform-python[17813]: ansible-command Invoked with argv=['podman', 'stop', 'candlepin'] warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True _raw_params=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 14:34:14 managed-node1 systemd[1]: libpod-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope has successfully entered the 'dead' state. Feb 14 14:34:14 managed-node1 systemd[1]: libpod-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope: Consumed 47.763s CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope completed and consumed the indicated resources. 
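
The redhat_subscription and podman invocations logged above then check the container, unregister the system, and stop Candlepin. A minimal reconstruction of those two steps (names illustrative):

  - name: Make sure the system is not registered
    community.general.redhat_subscription:
      state: absent

  - name: Stop the Candlepin container
    command:
      argv: [podman, stop, candlepin]
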
Feb 14 14:34:14 managed-node1 kernel: cni-podman0: port 1(veth6ea29213) entered disabled state Feb 14 14:34:14 managed-node1 kernel: device veth6ea29213 left promiscuous mode Feb 14 14:34:14 managed-node1 kernel: cni-podman0: port 1(veth6ea29213) entered disabled state Feb 14 14:34:14 managed-node1 systemd[1]: run-netns-netns\x2d6a0e01e1\x2d587d\x2d23c4\x2d37d9\x2dd1bf4328dfa9.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d6a0e01e1\x2d587d\x2d23c4\x2d37d9\x2dd1bf4328dfa9.mount has successfully entered the 'dead' state. Feb 14 14:34:14 managed-node1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334-userdata-shm.mount has successfully entered the 'dead' state. Feb 14 14:34:14 managed-node1 systemd[1]: var-lib-containers-storage-overlay-5a31804ccb728f5b5233660052ddc4cb9902131857a2b6677f707fa4329e0a6f-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-5a31804ccb728f5b5233660052ddc4cb9902131857a2b6677f707fa4329e0a6f-merged.mount has successfully entered the 'dead' state. Feb 14 14:34:14 managed-node1 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 14:34:14 managed-node1 sudo[17810]: pam_unix(sudo:session): session closed for user root Feb 14 14:34:14 managed-node1 systemd[1]: libpod-conmon-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-948ae7843d6e56b32d415c4c2b12befe6b959f8355e7610a10a6ebeed560b334.scope has successfully entered the 'dead' state. Feb 14 14:34:14 managed-node1 sshd[17964]: Accepted publickey for root from 10.31.11.228 port 45308 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Feb 14 14:34:14 managed-node1 systemd[1]: Started Session 15 of user root. -- Subject: Unit session-15.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-15.scope has finished starting up. -- -- The start-up result is done. Feb 14 14:34:14 managed-node1 systemd-logind[597]: New session 15 of user root. -- Subject: A new session 15 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 15 has been created for the user root. -- -- The leading process of the session is 17964. 
Feb 14 14:34:14 managed-node1 sshd[17964]: pam_unix(sshd:session): session opened for user root by (uid=0) Feb 14 14:34:14 managed-node1 sshd[17967]: Received disconnect from 10.31.11.228 port 45308:11: disconnected by user Feb 14 14:34:14 managed-node1 sshd[17967]: Disconnected from user root 10.31.11.228 port 45308 Feb 14 14:34:14 managed-node1 sshd[17964]: pam_unix(sshd:session): session closed for user root Feb 14 14:34:14 managed-node1 systemd[1]: session-15.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-15.scope has successfully entered the 'dead' state. Feb 14 14:34:14 managed-node1 systemd-logind[597]: Session 15 logged out. Waiting for processes to exit. Feb 14 14:34:14 managed-node1 systemd-logind[597]: Removed session 15. -- Subject: Session 15 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 15 has been terminated. Feb 14 14:34:14 managed-node1 sshd[17988]: Accepted publickey for root from 10.31.11.228 port 45314 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Feb 14 14:34:14 managed-node1 systemd[1]: Started Session 16 of user root. -- Subject: Unit session-16.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-16.scope has finished starting up. -- -- The start-up result is done. Feb 14 14:34:14 managed-node1 systemd-logind[597]: New session 16 of user root. -- Subject: A new session 16 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 16 has been created for the user root. -- -- The leading process of the session is 17988. Feb 14 14:34:14 managed-node1 sshd[17988]: pam_unix(sshd:session): session opened for user root by (uid=0)