[DEPRECATION WARNING]: ANSIBLE_COLLECTIONS_PATHS option, does not fit var
naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead. This
feature will be removed from ansible-core in version 2.19. Deprecation warnings
can be disabled by setting deprecation_warnings=False in ansible.cfg.
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles

PLAY [Test change fs] **********************************************************

TASK [Gathering Facts] *********************************************************
Thursday 25 July 2024  06:56:34 -0400 (0:00:00.021)       0:00:00.021 *********
[WARNING]: Platform linux on host managed_node1 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed_node1]

TASK [Run the role] ************************************************************
Thursday 25 July 2024  06:56:35 -0400 (0:00:01.227)       0:00:01.248 *********
included: fedora.linux_system_roles.storage for managed_node1

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
Thursday 25 July 2024  06:56:35 -0400 (0:00:00.026)       0:00:01.275 *********
included: /var/ARTIFACTS/work-generalqnk78t_o/plans/general/tree/tmp.ehjejR9Ex1/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed_node1

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
Thursday 25 July 2024  06:56:35 -0400 (0:00:00.021)       0:00:01.296 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
Thursday 25 July 2024  06:56:35 -0400 (0:00:00.024)       0:00:01.321 *********
skipping: [managed_node1] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed_node1] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed_node1] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "xfsprogs",
            "stratisd",
            "stratis-cli"
        ]
    },
    "ansible_included_var_files": [
        "/var/ARTIFACTS/work-generalqnk78t_o/plans/general/tree/tmp.ehjejR9Ex1/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}
ok: [managed_node1] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "xfsprogs",
            "stratisd",
            "stratis-cli"
        ]
    },
    "ansible_included_var_files": [
        "/var/ARTIFACTS/work-generalqnk78t_o/plans/general/tree/tmp.ehjejR9Ex1/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
Thursday 25 July 2024  06:56:35 -0400 (0:00:00.046)       0:00:01.367 *********
ok: [managed_node1] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
Thursday 25 July 2024  06:56:36 -0400 (0:00:00.489)       0:00:01.857 *********
ok: [managed_node1] => {
    "ansible_facts": {
        "__storage_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
Thursday 25 July 2024  06:56:36 -0400 (0:00:00.023)       0:00:01.881 *********
ok: [managed_node1] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
Thursday 25 July 2024  06:56:36 -0400 (0:00:00.016)       0:00:01.897 *********
ok: [managed_node1] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
Thursday 25 July 2024  06:56:36 -0400 (0:00:00.015)       0:00:01.913 *********
included: /var/ARTIFACTS/work-generalqnk78t_o/plans/general/tree/tmp.ehjejR9Ex1/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed_node1

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
Thursday 25 July 2024  06:56:36 -0400 (0:00:00.047)       0:00:01.961 *********
ok: [managed_node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}

MSG:

Nothing to do
lsrpackages: libblockdev-crypto libblockdev-dm libblockdev-lvm libblockdev-mdraid libblockdev-swap python3-blivet stratis-cli stratisd xfsprogs

TASK [fedora.linux_system_roles.storage : Show storage_pools] ******************
Thursday 25 July 2024  06:56:37 -0400 (0:00:00.831)       0:00:02.792 *********
ok: [managed_node1] => {
    "storage_pools": "VARIABLE IS NOT DEFINED!: 'storage_pools' is undefined"
}

TASK [fedora.linux_system_roles.storage : Show storage_volumes] ****************
Thursday 25 July 2024  06:56:37 -0400 (0:00:00.018)       0:00:02.811 *********
ok: [managed_node1] => {
    "storage_volumes": "VARIABLE IS NOT DEFINED!: 'storage_volumes' is undefined"
}

TASK [fedora.linux_system_roles.storage : Get required packages] ***************
Thursday 25 July 2024  06:56:37 -0400 (0:00:00.018)       0:00:02.830 *********
ok: [managed_node1] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] ***
Thursday 25 July 2024  06:56:38 -0400 (0:00:00.696)       0:00:03.526 *********
included: /var/ARTIFACTS/work-generalqnk78t_o/plans/general/tree/tmp.ehjejR9Ex1/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed_node1

TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] ***
Thursday 25 July 2024  06:56:38 -0400 (0:00:00.032)       0:00:03.558 *********
skipping: [managed_node1] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] ***
Thursday 25 July 2024  06:56:38 -0400 (0:00:00.017)       0:00:03.576 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "install_copr | d(false) | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************
Thursday 25 July 2024  06:56:38 -0400 (0:00:00.018)       0:00:03.594 *********
skipping: [managed_node1] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] ***
Thursday 25 July 2024  06:56:38 -0400 (0:00:00.016)       0:00:03.611 *********
ok: [managed_node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}

MSG:

Nothing to do
lsrpackages: kpartx

TASK [fedora.linux_system_roles.storage : Get service facts] *******************
Thursday 25 July 2024  06:56:38 -0400 (0:00:00.722)       0:00:04.334 *********
ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped",
"status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": 
"cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fsidd.service": { "name": 
"fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kdump.service": { "name": "kdump.service", 
"source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "failed" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, 
"mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_multipath.service": { "name": "modprobe@dm_multipath.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": 
"nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "inactive", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, 
"sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", 
"state": "stopped", "status": "enabled" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { 
"name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", 
"source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, 
"systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": 
"disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK 
[fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** Thursday 25 July 2024 06:56:41 -0400 (0:00:02.188) 0:00:06.523 ********* ok: [managed_node1] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** Thursday 25 July 2024 06:56:41 -0400 (0:00:00.043) 0:00:06.566 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** Thursday 25 July 2024 06:56:41 -0400 (0:00:00.016) 0:00:06.583 ********* ok: [managed_node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** Thursday 25 July 2024 06:56:41 -0400 (0:00:00.582) 0:00:07.166 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "storage_udevadm_trigger | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** Thursday 25 July 2024 06:56:41 -0400 (0:00:00.024) 0:00:07.190 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1721904798.6675503, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "219ae35841c7f1248c4b202baddaaec1663c74ad", "ctime": 1721812898.2355514, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 2097283, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1721812898.2355514, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, 
"size": 1344, "uid": 0, "version": "944747262", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.410) 0:00:07.600 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "blivet_output is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.022) 0:00:07.623 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.018) 0:00:07.641 ********* ok: [managed_node1] => { "blivet_output": { "actions": [], "changed": false, "crypts": [], "failed": false, "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.022) 0:00:07.664 ********* ok: [managed_node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.022) 0:00:07.686 ********* ok: [managed_node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.022) 0:00:07.708 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.025) 0:00:07.733 ********* skipping: 
[managed_node1] => { "changed": false, "false_condition": "blivet_output['mounts']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.018) 0:00:07.751 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.027) 0:00:07.779 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.026) 0:00:07.805 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "blivet_output['mounts']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.018) 0:00:07.823 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1721904946.3584592, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1721811859.821, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 2097284, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1721811609.074, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "4148334151", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : 
Manage /etc/crypttab to account for changes we just made] *** Thursday 25 July 2024 06:56:42 -0400 (0:00:00.408) 0:00:08.232 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ Thursday 25 July 2024 06:56:42 -0400 (0:00:00.016) 0:00:08.249 ********* ok: [managed_node1] TASK [Mark tasks to be skipped] ************************************************ Thursday 25 July 2024 06:56:43 -0400 (0:00:00.937) 0:00:09.186 ********* ok: [managed_node1] => { "ansible_facts": { "storage_skip_checks": [ "blivet_available", "packages_installed", "service_facts" ] }, "changed": false } TASK [Get unused disks] ******************************************************** Thursday 25 July 2024 06:56:43 -0400 (0:00:00.027) 0:00:09.213 ********* included: /var/ARTIFACTS/work-generalqnk78t_o/plans/general/tree/tmp.ehjejR9Ex1/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml for managed_node1 TASK [Ensure test packages] **************************************************** Thursday 25 July 2024 06:56:43 -0400 (0:00:00.029) 0:00:09.242 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: util-linux-core TASK [Find unused disks in the system] ***************************************** Thursday 25 July 2024 06:56:44 -0400 (0:00:00.731) 0:00:09.973 ********* ok: [managed_node1] => { "changed": false, "disks": "Unable to find unused disk", "info": [ "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda2\" 
TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "filename [xvda2] is a partition", "filename [xvda1] is a partition", "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions" ] } TASK [Debug why there are no unused disks] ************************************* Thursday 25 July 2024 06:56:45 -0400 (0:00:00.500) 0:00:10.474 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -x\nexec 1>&2\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC\njournalctl -ex\n", "delta": "0:00:00.034809", "end": "2024-07-25 06:56:45.491132", "rc": 0, "start": "2024-07-25 06:56:45.456323" } STDERR: + exec + lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda1" TYPE="part" SIZE="1048576" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda2" TYPE="part" SIZE="268433341952" FSTYPE="xfs" LOG-SEC="512" + journalctl -ex Jul 25 06:52:55 localhost systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 25 06:52:55 localhost systemd[1]: systemd-pcrmachine.service - TPM PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionSecurity=measured-uki). Jul 25 06:52:55 localhost systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 25 06:52:55 localhost systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 25 06:52:55 localhost systemd[1]: systemd-tpm2-setup-early.service - Early TPM SRK Setup was skipped because of an unmet condition check (ConditionSecurity=measured-uki). Jul 25 06:52:55 localhost systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 25 06:52:55 localhost systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 25 06:52:55 localhost systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 25 06:52:55 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 25 06:52:55 localhost systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 25 06:52:55 localhost systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 25 06:52:55 localhost systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 25 06:52:55 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 25 06:52:55 localhost systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 25 06:52:55 localhost systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 25 06:52:55 localhost systemd-journald[476]: Collecting audit messages is disabled. Jul 25 06:52:55 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 25 06:52:55 localhost kernel: device-mapper: uevent: version 1.0.3 Jul 25 06:52:55 localhost kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 25 06:52:55 localhost systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 25 06:52:55 localhost systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 25 06:52:55 localhost systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 25 06:52:55 localhost systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 25 06:52:55 localhost systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 25 06:52:55 localhost systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 25 06:52:55 localhost kernel: loop: module loaded Jul 25 06:52:55 localhost systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jul 25 06:52:55 localhost systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 25 06:52:55 localhost systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 25 06:52:55 localhost systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 25 06:52:55 localhost systemd[1]: systemd-hwdb-update.service - Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc). Jul 25 06:52:55 localhost systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 25 06:52:55 localhost kernel: fuse: init (API version 7.40) Jul 25 06:52:55 localhost systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 25 06:52:55 localhost systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 25 06:52:55 localhost systemd[1]: systemd-tpm2-setup.service - TPM SRK Setup was skipped because of an unmet condition check (ConditionSecurity=measured-uki). Jul 25 06:52:55 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 25 06:52:55 localhost systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 25 06:52:55 localhost systemd-journald[476]: Journal started ░░ Subject: The journal has been started ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The system journal process has started up, opened the journal ░░ files for writing and is now ready to process requests. Jul 25 06:52:55 localhost systemd-journald[476]: Runtime Journal (/run/log/journal/ec2eebc0ef7d0ce96c9cd04a1a6e6bfd) is 8M, max 70.5M, 62.5M free. ░░ Subject: Disk space used by the journal ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ Runtime Journal (/run/log/journal/ec2eebc0ef7d0ce96c9cd04a1a6e6bfd) is currently using 8M. 
░░ Maximum allowed usage is set to 70.5M. ░░ Leaving at least 35.2M free (of currently available 689.8M of disk space). ░░ Enforced usage limit is thus 70.5M, of which 62.5M are still available. ░░ ░░ The limits controlling how much disk space is used by the journal may ░░ be configured with SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=, ░░ RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize= settings in ░░ /etc/systemd/journald.conf. See journald.conf(5) for details. Jul 25 06:52:54 localhost systemd[1]: Queued start job for default target multi-user.target. Jul 25 06:52:55 localhost systemd[1]: Started systemd-journald.service - Journal Service. Jul 25 06:52:54 localhost systemd[1]: systemd-journald.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-journald.service has successfully entered the 'dead' state. Jul 25 06:52:55 localhost systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. ░░ Subject: A start job for unit systemd-network-generator.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-network-generator.service has finished successfully. ░░ ░░ The job identifier is 168. Jul 25 06:52:55 localhost systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. ░░ Subject: A start job for unit systemd-random-seed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-random-seed.service has finished successfully. ░░ ░░ The job identifier is 183. Jul 25 06:52:55 localhost systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
░░ Subject: A start job for unit systemd-journal-flush.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-journal-flush.service has begun execution. ░░ ░░ The job identifier is 139. Jul 25 06:52:55 localhost systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... ░░ Subject: A start job for unit sys-fs-fuse-connections.mount has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sys-fs-fuse-connections.mount has begun execution. ░░ ░░ The job identifier is 157. Jul 25 06:52:55 localhost systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. ░░ Subject: A start job for unit sys-fs-fuse-connections.mount has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sys-fs-fuse-connections.mount has finished successfully. ░░ ░░ The job identifier is 157. Jul 25 06:52:55 localhost systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. ░░ Subject: A start job for unit systemd-sysctl.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-sysctl.service has finished successfully. ░░ ░░ The job identifier is 130. Jul 25 06:52:55 localhost systemd-journald[476]: Runtime Journal (/run/log/journal/ec2eebc0ef7d0ce96c9cd04a1a6e6bfd) is 8M, max 70.5M, 62.5M free. ░░ Subject: Disk space used by the journal ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ Runtime Journal (/run/log/journal/ec2eebc0ef7d0ce96c9cd04a1a6e6bfd) is currently using 8M. ░░ Maximum allowed usage is set to 70.5M. ░░ Leaving at least 35.2M free (of currently available 689.8M of disk space). ░░ Enforced usage limit is thus 70.5M, of which 62.5M are still available. 
░░ ░░ The limits controlling how much disk space is used by the journal may ░░ be configured with SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=, ░░ RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize= settings in ░░ /etc/systemd/journald.conf. See journald.conf(5) for details. Jul 25 06:52:55 localhost systemd-journald[476]: Received client request to flush runtime journal. Jul 25 06:52:55 localhost systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. ░░ Subject: A start job for unit systemd-journal-flush.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-journal-flush.service has finished successfully. ░░ ░░ The job identifier is 139. Jul 25 06:52:55 localhost systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. ░░ Subject: A start job for unit systemd-udev-load-credentials.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-udev-load-credentials.service has finished successfully. ░░ ░░ The job identifier is 134. Jul 25 06:52:56 localhost systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. ░░ Subject: A start job for unit systemd-udev-trigger.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-udev-trigger.service has finished successfully. ░░ ░░ The job identifier is 190. Jul 25 06:52:57 localhost systemd[1]: Finished lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. ░░ Subject: A start job for unit lvm2-monitor.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit lvm2-monitor.service has finished successfully. ░░ ░░ The job identifier is 164. 
Jul 25 06:52:58 localhost systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. ░░ Subject: A start job for unit systemd-tmpfiles-setup-dev-early.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup-dev-early.service has finished successfully. ░░ ░░ The job identifier is 197. Jul 25 06:52:58 localhost systemd[1]: systemd-sysusers.service - Create System Users was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit systemd-sysusers.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-sysusers.service has finished successfully. ░░ ░░ The job identifier is 153. Jul 25 06:52:58 localhost systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... ░░ Subject: A start job for unit systemd-tmpfiles-setup-dev.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup-dev.service has begun execution. ░░ ░░ The job identifier is 152. Jul 25 06:52:58 localhost systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. ░░ Subject: A start job for unit systemd-tmpfiles-setup-dev.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup-dev.service has finished successfully. ░░ ░░ The job identifier is 152. Jul 25 06:52:58 localhost systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. ░░ Subject: A start job for unit local-fs-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit local-fs-pre.target has finished successfully. 
░░ ░░ The job identifier is 151. Jul 25 06:52:58 localhost systemd[1]: Reached target local-fs.target - Local File Systems. ░░ Subject: A start job for unit local-fs.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit local-fs.target has finished successfully. ░░ ░░ The job identifier is 147. Jul 25 06:52:58 localhost systemd[1]: Listening on systemd-bootctl.socket - Boot Entries Service Socket. ░░ Subject: A start job for unit systemd-bootctl.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-bootctl.socket has finished successfully. ░░ ░░ The job identifier is 215. Jul 25 06:52:58 localhost systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. ░░ Subject: A start job for unit systemd-sysext.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-sysext.socket has finished successfully. ░░ ░░ The job identifier is 216. Jul 25 06:52:58 localhost systemd[1]: ldconfig.service - Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit ldconfig.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit ldconfig.service has finished successfully. ░░ ░░ The job identifier is 160. Jul 25 06:52:58 localhost systemd[1]: selinux-autorelabel-mark.service - Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux). ░░ Subject: A start job for unit selinux-autorelabel-mark.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit selinux-autorelabel-mark.service has finished successfully. ░░ ░░ The job identifier is 166. 
Jul 25 06:52:58 localhost systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
░░ Subject: A start job for unit systemd-binfmt.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-binfmt.service has finished successfully. ░░ ░░ The job identifier is 192.
Jul 25 06:52:58 localhost systemd[1]: systemd-boot-random-seed.service - Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
░░ Subject: A start job for unit systemd-boot-random-seed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-boot-random-seed.service has finished successfully. ░░ ░░ The job identifier is 143.
Jul 25 06:52:58 localhost systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
░░ Subject: A start job for unit systemd-confext.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-confext.service has finished successfully. ░░ ░░ The job identifier is 172.
Jul 25 06:52:58 localhost systemd[1]: systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/ was skipped because no trigger condition checks were met.
░░ Subject: A start job for unit systemd-sysext.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-sysext.service has finished successfully. ░░ ░░ The job identifier is 191.
Jul 25 06:52:58 localhost systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
░░ Subject: A start job for unit systemd-tmpfiles-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup.service has begun execution. ░░ ░░ The job identifier is 177.
Jul 25 06:52:58 localhost systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
░░ Subject: A start job for unit systemd-udevd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-udevd.service has begun execution. ░░ ░░ The job identifier is 133.
Jul 25 06:52:58 localhost systemd-udevd[518]: Using default interface naming scheme 'rhel-10.0'.
Jul 25 06:52:58 localhost systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
░░ Subject: A start job for unit systemd-tmpfiles-setup.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup.service has finished successfully. ░░ ░░ The job identifier is 177.
Jul 25 06:52:58 localhost systemd[1]: Mounting var-lib-nfs-rpc_pipefs.mount - RPC Pipe File System...
░░ Subject: A start job for unit var-lib-nfs-rpc_pipefs.mount has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit var-lib-nfs-rpc_pipefs.mount has begun execution. ░░ ░░ The job identifier is 244.
Jul 25 06:52:58 localhost systemd[1]: Starting audit-rules.service - Load Audit Rules...
░░ Subject: A start job for unit audit-rules.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit audit-rules.service has begun execution. ░░ ░░ The job identifier is 231.
Jul 25 06:52:58 localhost systemd[1]: systemd-firstboot.service - First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
░░ Subject: A start job for unit systemd-firstboot.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-firstboot.service has finished successfully. ░░ ░░ The job identifier is 187.
Jul 25 06:52:58 localhost systemd[1]: first-boot-complete.target - First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
░░ Subject: A start job for unit first-boot-complete.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit first-boot-complete.target has finished successfully. ░░ ░░ The job identifier is 184.
Jul 25 06:52:58 localhost systemd[1]: systemd-journal-catalog-update.service - Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
░░ Subject: A start job for unit systemd-journal-catalog-update.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-journal-catalog-update.service has finished successfully. ░░ ░░ The job identifier is 180.
Jul 25 06:52:58 localhost systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
░░ Subject: A start job for unit systemd-machine-id-commit.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-machine-id-commit.service has begun execution. ░░ ░░ The job identifier is 145.
Jul 25 06:52:58 localhost systemd[1]: systemd-update-done.service - Update is Completed was skipped because no trigger condition checks were met.
░░ Subject: A start job for unit systemd-update-done.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-done.service has finished successfully. ░░ ░░ The job identifier is 178.
Jul 25 06:52:58 localhost systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit etc-machine\x2did.mount has successfully entered the 'dead' state.
Jul 25 06:52:58 localhost systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
░░ Subject: A start job for unit systemd-machine-id-commit.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-machine-id-commit.service has finished successfully. ░░ ░░ The job identifier is 145.
Jul 25 06:52:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Jul 25 06:52:58 localhost kernel: RPC: Registered udp transport module.
Jul 25 06:52:58 localhost kernel: RPC: Registered tcp transport module.
Jul 25 06:52:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jul 25 06:52:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jul 25 06:52:58 localhost systemd[1]: Mounted var-lib-nfs-rpc_pipefs.mount - RPC Pipe File System.
░░ Subject: A start job for unit var-lib-nfs-rpc_pipefs.mount has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit var-lib-nfs-rpc_pipefs.mount has finished successfully. ░░ ░░ The job identifier is 244.
Jul 25 06:52:58 localhost systemd[1]: Reached target rpc_pipefs.target.
░░ Subject: A start job for unit rpc_pipefs.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc_pipefs.target has finished successfully. ░░ ░░ The job identifier is 243.
Jul 25 06:52:59 localhost systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
░░ Subject: A start job for unit systemd-udevd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-udevd.service has finished successfully. ░░ ░░ The job identifier is 133.
Jul 25 06:52:59 localhost systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
░░ Subject: A start job for unit modprobe@configfs.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit modprobe@configfs.service has begun execution. ░░ ░░ The job identifier is 287.
Jul 25 06:52:59 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit modprobe@configfs.service has successfully entered the 'dead' state.
Jul 25 06:52:59 localhost systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
░░ Subject: A start job for unit modprobe@configfs.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit modprobe@configfs.service has finished successfully. ░░ ░░ The job identifier is 287.
Jul 25 06:52:59 localhost systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
░░ Subject: A start job for unit dev-ttyS0.device has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dev-ttyS0.device has finished successfully. ░░ ░░ The job identifier is 261.
Jul 25 06:52:59 localhost 55-scsi-sg3_id.rules[563]: WARNING: SCSI device xvda has no device ID, consider changing .SCSI_ID_SERIAL_SRC in 00-scsi-sg3_config.rules
Jul 25 06:52:59 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input5
Jul 25 06:52:59 localhost kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 655360 ms ovfl timer
Jul 25 06:52:59 localhost (udev-worker)[540]: Network interface NamePolicy= disabled on kernel command line.
Jul 25 06:52:59 localhost augenrules[522]: /sbin/augenrules: No change
Jul 25 06:52:59 localhost kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jul 25 06:52:59 localhost systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
░░ Subject: A start job for unit systemd-vconsole-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has begun execution. ░░ ░░ The job identifier is 295.
Jul 25 06:52:59 localhost augenrules[578]: No rules
Jul 25 06:52:59 localhost augenrules[578]: enabled 0
Jul 25 06:52:59 localhost augenrules[578]: failure 1
Jul 25 06:52:59 localhost augenrules[578]: pid 0
Jul 25 06:52:59 localhost augenrules[578]: rate_limit 0
Jul 25 06:52:59 localhost augenrules[578]: backlog_limit 8192
Jul 25 06:52:59 localhost augenrules[578]: lost 0
Jul 25 06:52:59 localhost augenrules[578]: backlog 0
Jul 25 06:52:59 localhost augenrules[578]: backlog_wait_time 60000
Jul 25 06:52:59 localhost augenrules[578]: backlog_wait_time_actual 0
Jul 25 06:52:59 localhost augenrules[578]: enabled 0
Jul 25 06:52:59 localhost augenrules[578]: failure 1
Jul 25 06:52:59 localhost augenrules[578]: pid 0
Jul 25 06:52:59 localhost augenrules[578]: rate_limit 0
Jul 25 06:52:59 localhost augenrules[578]: backlog_limit 8192
Jul 25 06:52:59 localhost augenrules[578]: lost 0
Jul 25 06:52:59 localhost augenrules[578]: backlog 0
Jul 25 06:52:59 localhost augenrules[578]: backlog_wait_time 60000
Jul 25 06:52:59 localhost augenrules[578]: backlog_wait_time_actual 0
Jul 25 06:52:59 localhost augenrules[578]: enabled 0
Jul 25 06:52:59 localhost augenrules[578]: failure 1
Jul 25 06:52:59 localhost augenrules[578]: pid 0
Jul 25 06:52:59 localhost augenrules[578]: rate_limit 0
Jul 25 06:52:59 localhost augenrules[578]: backlog_limit 8192
Jul 25 06:52:59 localhost augenrules[578]: lost 0
Jul 25 06:52:59 localhost augenrules[578]: backlog 0
Jul 25 06:52:59 localhost augenrules[578]: backlog_wait_time 60000
Jul 25 06:52:59 localhost augenrules[578]: backlog_wait_time_actual 0
Jul 25 06:52:59 localhost systemd[1]: audit-rules.service: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit audit-rules.service has successfully entered the 'dead' state.
Jul 25 06:52:59 localhost systemd[1]: Finished audit-rules.service - Load Audit Rules.
░░ Subject: A start job for unit audit-rules.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit audit-rules.service has finished successfully. ░░ ░░ The job identifier is 231.
Jul 25 06:52:59 localhost systemd[1]: Starting auditd.service - Security Audit Logging Service...
░░ Subject: A start job for unit auditd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit auditd.service has begun execution. ░░ ░░ The job identifier is 230.
Jul 25 06:53:00 localhost kernel: cirrus 0000:00:02.0: vgaarb: deactivate vga console
Jul 25 06:53:00 localhost kernel: Console: switching to colour dummy device 80x25
Jul 25 06:53:00 localhost kernel: [drm] Initialized cirrus 2.0.0 2019 for 0000:00:02.0 on minor 0
Jul 25 06:53:00 localhost kernel: fbcon: cirrusdrmfb (fb0) is primary device
Jul 25 06:53:00 localhost kernel: Console: switching to colour frame buffer device 128x48
Jul 25 06:53:00 localhost kernel: cirrus 0000:00:02.0: [drm] fb0: cirrusdrmfb frame buffer device
Jul 25 06:53:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-vconsole-setup.service has successfully entered the 'dead' state.
Jul 25 06:53:00 localhost systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
░░ Subject: A stop job for unit systemd-vconsole-setup.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit systemd-vconsole-setup.service has finished. ░░ ░░ The job identifier is 295 and the job result is done.
Jul 25 06:53:00 localhost systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-credentials-systemd\x2dvconsole\x2dsetup.service.mount has successfully entered the 'dead' state.
Jul 25 06:53:00 localhost systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
░░ Subject: A start job for unit systemd-vconsole-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has begun execution. ░░ ░░ The job identifier is 295.
Jul 25 06:53:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-vconsole-setup.service has successfully entered the 'dead' state.
Jul 25 06:53:00 localhost systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
░░ Subject: A stop job for unit systemd-vconsole-setup.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit systemd-vconsole-setup.service has finished. ░░ ░░ The job identifier is 295 and the job result is done.
Jul 25 06:53:00 localhost systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
░░ Subject: A start job for unit systemd-vconsole-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has begun execution. ░░ ░░ The job identifier is 295.
Jul 25 06:53:00 localhost systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
░░ Subject: A start job for unit systemd-vconsole-setup.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has finished successfully. ░░ ░░ The job identifier is 295.
Jul 25 06:53:00 localhost auditd[599]: No plugins found, not dispatching events
Jul 25 06:53:00 localhost auditd[599]: Init complete, auditd 4.0 listening for events (startup state enable)
Jul 25 06:53:00 localhost systemd[1]: Started auditd.service - Security Audit Logging Service.
░░ Subject: A start job for unit auditd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit auditd.service has finished successfully. ░░ ░░ The job identifier is 230.
Jul 25 06:53:00 localhost systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
░░ Subject: A start job for unit systemd-update-utmp.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp.service has begun execution. ░░ ░░ The job identifier is 229.
Jul 25 06:53:01 localhost systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
░░ Subject: A start job for unit systemd-update-utmp.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp.service has finished successfully. ░░ ░░ The job identifier is 229.
Jul 25 06:53:01 localhost systemd[1]: Reached target sysinit.target - System Initialization.
░░ Subject: A start job for unit sysinit.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sysinit.target has finished successfully. ░░ ░░ The job identifier is 120.
Jul 25 06:53:01 localhost systemd[1]: Started dnf-makecache.timer - dnf makecache --timer.
░░ Subject: A start job for unit dnf-makecache.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dnf-makecache.timer has finished successfully. ░░ ░░ The job identifier is 202.
Jul 25 06:53:01 localhost systemd[1]: Started fstrim.timer - Discard unused filesystem blocks once a week.
░░ Subject: A start job for unit fstrim.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit fstrim.timer has finished successfully. ░░ ░░ The job identifier is 200.
Jul 25 06:53:01 localhost systemd[1]: Started logrotate.timer - Daily rotation of log files.
░░ Subject: A start job for unit logrotate.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit logrotate.timer has finished successfully. ░░ ░░ The job identifier is 199.
Jul 25 06:53:01 localhost systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
░░ Subject: A start job for unit systemd-tmpfiles-clean.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-clean.timer has finished successfully. ░░ ░░ The job identifier is 201.
Jul 25 06:53:01 localhost systemd[1]: Started unbound-anchor.timer - daily update of the root trust anchor for DNSSEC.
░░ Subject: A start job for unit unbound-anchor.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit unbound-anchor.timer has finished successfully. ░░ ░░ The job identifier is 209.
Jul 25 06:53:01 localhost systemd[1]: Reached target timers.target - Timer Units.
░░ Subject: A start job for unit timers.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit timers.target has finished successfully. ░░ ░░ The job identifier is 198.
Jul 25 06:53:01 localhost systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
░░ Subject: A start job for unit dbus.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dbus.socket has finished successfully. ░░ ░░ The job identifier is 207.
Jul 25 06:53:01 localhost systemd[1]: Listening on pcscd.socket - PC/SC Smart Card Daemon Activation Socket.
░░ Subject: A start job for unit pcscd.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit pcscd.socket has finished successfully. ░░ ░░ The job identifier is 220.
Jul 25 06:53:01 localhost systemd[1]: Listening on sssd-kcm.socket - SSSD Kerberos Cache Manager responder socket.
░░ Subject: A start job for unit sssd-kcm.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sssd-kcm.socket has finished successfully. ░░ ░░ The job identifier is 221.
Jul 25 06:53:01 localhost systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
░░ Subject: A start job for unit systemd-hostnamed.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.socket has finished successfully. ░░ ░░ The job identifier is 212.
Jul 25 06:53:01 localhost systemd[1]: Reached target sockets.target - Socket Units.
░░ Subject: A start job for unit sockets.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sockets.target has finished successfully. ░░ ░░ The job identifier is 211.
Jul 25 06:53:01 localhost systemd[1]: Starting dbus-broker.service - D-Bus System Message Bus...
░░ Subject: A start job for unit dbus-broker.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dbus-broker.service has begun execution. ░░ ░░ The job identifier is 206.
Jul 25 06:53:01 localhost systemd[1]: systemd-pcrphase-sysinit.service - TPM PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionSecurity=measured-uki).
░░ Subject: A start job for unit systemd-pcrphase-sysinit.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-pcrphase-sysinit.service has finished successfully. ░░ ░░ The job identifier is 162.
Jul 25 06:53:01 localhost systemd[1]: Started dbus-broker.service - D-Bus System Message Bus.
░░ Subject: A start job for unit dbus-broker.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dbus-broker.service has finished successfully. ░░ ░░ The job identifier is 206.
Jul 25 06:53:01 localhost systemd[1]: Reached target basic.target - Basic System.
░░ Subject: A start job for unit basic.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit basic.target has finished successfully. ░░ ░░ The job identifier is 119.
Jul 25 06:53:01 localhost dbus-broker-launch[605]: Ready
Jul 25 06:53:01 localhost systemd[1]: Starting chronyd.service - NTP client/server...
░░ Subject: A start job for unit chronyd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit chronyd.service has begun execution. ░░ ░░ The job identifier is 255.
Jul 25 06:53:01 localhost systemd[1]: Starting cloud-init-local.service - Initial cloud-init job (pre-networking)...
░░ Subject: A start job for unit cloud-init-local.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init-local.service has begun execution. ░░ ░░ The job identifier is 272.
Jul 25 06:53:01 localhost systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
░░ Subject: A start job for unit dracut-shutdown.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dracut-shutdown.service has begun execution. ░░ ░░ The job identifier is 146.
Jul 25 06:53:01 localhost systemd[1]: Started irqbalance.service - irqbalance daemon.
░░ Subject: A start job for unit irqbalance.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit irqbalance.service has finished successfully. ░░ ░░ The job identifier is 227.
Jul 25 06:53:01 localhost systemd[1]: Started rngd.service - Hardware RNG Entropy Gatherer Daemon.
░░ Subject: A start job for unit rngd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rngd.service has finished successfully. ░░ ░░ The job identifier is 237.
Jul 25 06:53:01 localhost (qbalance)[610]: irqbalance.service: Referenced but unset environment variable evaluates to an empty string: IRQBALANCE_ARGS
Jul 25 06:53:01 localhost systemd[1]: Starting rsyslog.service - System Logging Service...
░░ Subject: A start job for unit rsyslog.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rsyslog.service has begun execution. ░░ ░░ The job identifier is 247.
Jul 25 06:53:01 localhost systemd[1]: ssh-host-keys-migration.service - Update OpenSSH host key permissions was skipped because of an unmet condition check (ConditionPathExists=!/var/lib/.ssh-host-keys-migration).
░░ Subject: A start job for unit ssh-host-keys-migration.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit ssh-host-keys-migration.service has finished successfully. ░░ ░░ The job identifier is 254.
Jul 25 06:53:01 localhost systemd[1]: sshd-keygen@ecdsa.service - OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
░░ Subject: A start job for unit sshd-keygen@ecdsa.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@ecdsa.service has finished successfully. ░░ ░░ The job identifier is 253.
Jul 25 06:53:01 localhost systemd[1]: sshd-keygen@ed25519.service - OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
░░ Subject: A start job for unit sshd-keygen@ed25519.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@ed25519.service has finished successfully. ░░ ░░ The job identifier is 252.
Jul 25 06:53:01 localhost systemd[1]: sshd-keygen@rsa.service - OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
░░ Subject: A start job for unit sshd-keygen@rsa.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@rsa.service has finished successfully. ░░ ░░ The job identifier is 250.
Jul 25 06:53:01 localhost systemd[1]: Reached target sshd-keygen.target.
░░ Subject: A start job for unit sshd-keygen.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen.target has finished successfully. ░░ ░░ The job identifier is 249.
Jul 25 06:53:01 localhost systemd[1]: sssd.service - System Security Services Daemon was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit sssd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sssd.service has finished successfully. ░░ ░░ The job identifier is 225. Jul 25 06:53:01 localhost systemd[1]: Reached target nss-user-lookup.target - User and Group Name Lookups. ░░ Subject: A start job for unit nss-user-lookup.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit nss-user-lookup.target has finished successfully. ░░ ░░ The job identifier is 226. Jul 25 06:53:01 localhost systemd[1]: Starting systemd-logind.service - User Login Management... ░░ Subject: A start job for unit systemd-logind.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-logind.service has begun execution. ░░ ░░ The job identifier is 233. Jul 25 06:53:02 localhost systemd[1]: Starting unbound-anchor.service - update of the root trust anchor for DNSSEC validation in unbound... ░░ Subject: A start job for unit unbound-anchor.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit unbound-anchor.service has begun execution. ░░ ░░ The job identifier is 383. Jul 25 06:53:02 localhost systemd-logind[614]: New seat seat0. ░░ Subject: A new seat seat0 is now available ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new seat seat0 has been configured and is now available. Jul 25 06:53:02 localhost systemd[1]: Starting logrotate.service - Rotate log files... 
░░ Subject: A start job for unit logrotate.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit logrotate.service has begun execution. ░░ ░░ The job identifier is 304. Jul 25 06:53:02 localhost systemd-logind[614]: Watching system buttons on /dev/input/event0 (Power Button) Jul 25 06:53:02 localhost systemd-logind[614]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 25 06:53:02 localhost systemd-logind[614]: Watching system buttons on /dev/input/event2 (AT Translated Set 2 keyboard) Jul 25 06:53:02 localhost systemd[1]: Started systemd-logind.service - User Login Management. ░░ Subject: A start job for unit systemd-logind.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-logind.service has finished successfully. ░░ ░░ The job identifier is 233. Jul 25 06:53:02 localhost systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. ░░ Subject: A start job for unit dracut-shutdown.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dracut-shutdown.service has finished successfully. ░░ ░░ The job identifier is 146. Jul 25 06:53:02 localhost rsyslogd[613]: imjournal: filecreatemode is not set, using default 0644 [v8.2312.0-2.el10 try https://www.rsyslog.com/e/2186 ] Jul 25 06:53:02 localhost rsyslogd[613]: [origin software="rsyslogd" swVersion="8.2312.0-2.el10" x-pid="613" x-info="https://www.rsyslog.com"] start Jul 25 06:53:02 localhost systemd[1]: Started rsyslog.service - System Logging Service. ░░ Subject: A start job for unit rsyslog.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rsyslog.service has finished successfully. ░░ ░░ The job identifier is 247. 
Jul 25 06:53:02 localhost rsyslogd[613]: imjournal: journal files changed, reloading... [v8.2312.0-2.el10 try https://www.rsyslog.com/e/0 ] Jul 25 06:53:02 localhost logrotate[616]: error: skipping "/var/log/sssd/*.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation. Jul 25 06:53:02 localhost systemd[1]: logrotate.service: Main process exited, code=exited, status=1/FAILURE ░░ Subject: Unit process exited ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ An ExecStart= process belonging to unit logrotate.service has exited. ░░ ░░ The process' exit code is 'exited' and its exit status is 1. Jul 25 06:53:02 localhost systemd[1]: logrotate.service: Failed with result 'exit-code'. ░░ Subject: Unit failed ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit logrotate.service has entered the 'failed' state with result 'exit-code'. Jul 25 06:53:02 localhost systemd[1]: Failed to start logrotate.service - Rotate log files. ░░ Subject: A start job for unit logrotate.service has failed ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit logrotate.service has finished with a failure. ░░ ░░ The job identifier is 304 and the job result is failed. 
Jul 25 06:53:03 localhost rngd[611]: Disabling 7: PKCS11 Entropy generator (pkcs11)
Jul 25 06:53:03 localhost rngd[611]: Disabling 5: NIST Network Entropy Beacon (nist)
Jul 25 06:53:03 localhost rngd[611]: Disabling 9: Qrypt quantum entropy beacon (qrypt)
Jul 25 06:53:03 localhost rngd[611]: Disabling 10: Named pipe entropy input (namedpipe)
Jul 25 06:53:03 localhost rngd[611]: Initializing available sources
Jul 25 06:53:03 localhost rngd[611]: [hwrng ]: Initialization Failed
Jul 25 06:53:03 localhost rngd[611]: [rdrand]: Enabling RDRAND rng support
Jul 25 06:53:03 localhost rngd[611]: [rdrand]: Initialized
Jul 25 06:53:03 localhost rngd[611]: [jitter]: JITTER timeout set to 5 sec
Jul 25 06:53:03 localhost rngd[611]: [jitter]: Initializing AES buffer
Jul 25 06:53:03 localhost systemd[1]: unbound-anchor.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit unbound-anchor.service has successfully entered the 'dead' state.
Jul 25 06:53:03 localhost systemd[1]: Finished unbound-anchor.service - update of the root trust anchor for DNSSEC validation in unbound. ░░ Subject: A start job for unit unbound-anchor.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit unbound-anchor.service has finished successfully. ░░ ░░ The job identifier is 383.
Jul 25 06:53:03 localhost chronyd[642]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Jul 25 06:53:03 localhost chronyd[642]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift
Jul 25 06:53:03 localhost chronyd[642]: Using right/UTC timezone to obtain leap second data
Jul 25 06:53:03 localhost chronyd[642]: Loaded seccomp filter (level 2)
Jul 25 06:53:03 localhost systemd[1]: Started chronyd.service - NTP client/server. ░░ Subject: A start job for unit chronyd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit chronyd.service has finished successfully. ░░ ░░ The job identifier is 255.
Jul 25 06:53:08 localhost rngd[611]: [jitter]: Unable to obtain AES key, disabling JITTER source
Jul 25 06:53:08 localhost rngd[611]: [jitter]: Initialization Failed
Jul 25 06:53:08 localhost rngd[611]: Process privileges have been dropped to 2:2
Jul 25 06:53:11 localhost cloud-init[647]: Cloud-init v. 24.1.4-13.el10 running 'init-local' at Thu, 25 Jul 2024 10:53:11 +0000. Up 44.21 seconds.
Jul 25 06:53:12 localhost dhcpcd[649]: dhcpcd-10.0.6 starting
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 0 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 0 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 48 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 48 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 49 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 49 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 50 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 50 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 51 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 51 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 52 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 52 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 53 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 53 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 54 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 54 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 55 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 55 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 56 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 56 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 57 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 57 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 58 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 58 affinity is now unmanaged
Jul 25 06:53:12 localhost irqbalance[610]: Cannot change IRQ 59 affinity: Input/output error
Jul 25 06:53:12 localhost irqbalance[610]: IRQ 59 affinity is now unmanaged
Jul 25 06:53:12 localhost kernel: 8021q: 802.1Q VLAN Support v1.8
Jul 25 06:53:13 localhost systemd[1]: Listening on systemd-rfkill.socket - Load/Save RF Kill Switch Status /dev/rfkill Watch. ░░ Subject: A start job for unit systemd-rfkill.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-rfkill.socket has finished successfully. ░░ ░░ The job identifier is 464.
Jul 25 06:53:13 localhost kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jul 25 06:53:13 localhost kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jul 25 06:53:13 localhost kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jul 25 06:53:13 localhost dhcpcd[652]: DUID 00:01:00:01:2e:34:eb:19:12:7b:b2:51:da:5b
Jul 25 06:53:13 localhost dhcpcd[652]: eth0: IAID b2:51:da:5b
Jul 25 06:53:13 localhost kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jul 25 06:53:13 localhost kernel: cfg80211: failed to load regulatory.db
Jul 25 06:53:14 localhost dhcpcd[652]: eth0: soliciting a DHCP lease
Jul 25 06:53:14 localhost dhcpcd[652]: eth0: offered 10.31.9.229 from 10.31.8.1
Jul 25 06:53:14 localhost dhcpcd[652]: eth0: leased 10.31.9.229 for 3600 seconds
Jul 25 06:53:14 localhost dhcpcd[652]: eth0: adding route to 10.31.8.0/22
Jul 25 06:53:14 localhost dhcpcd[652]: eth0: adding default route via 10.31.8.1
Jul 25 06:53:14 localhost dhcpcd[652]: control command: /usr/sbin/dhcpcd --dumplease --ipv4only eth0
Jul 25 06:53:14 localhost systemd[1]: Starting systemd-hostnamed.service - Hostname Service... ░░ Subject: A start job for unit systemd-hostnamed.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has begun execution. ░░ ░░ The job identifier is 473.
Jul 25 06:53:14 localhost systemd[1]: Started systemd-hostnamed.service - Hostname Service. ░░ Subject: A start job for unit systemd-hostnamed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has finished successfully. ░░ ░░ The job identifier is 473.
Jul 25 06:53:14 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-hostnamed[671]: Hostname set to (static)
Jul 25 06:53:14 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished cloud-init-local.service - Initial cloud-init job (pre-networking). ░░ Subject: A start job for unit cloud-init-local.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init-local.service has finished successfully. ░░ ░░ The job identifier is 272.
Jul 25 06:53:14 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target network-pre.target - Preparation for Network. ░░ Subject: A start job for unit network-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network-pre.target has finished successfully. ░░ ░░ The job identifier is 169.
Jul 25 06:53:14 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager.service - Network Manager... ░░ Subject: A start job for unit NetworkManager.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager.service has begun execution. ░░ ░░ The job identifier is 205.
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.7077] NetworkManager (version 1.48.4-1.el10.1) is starting... (boot:a525fa73-6735-43ac-90ac-1b2bf1aefdf6)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.7080] Read config: /etc/NetworkManager/NetworkManager.conf (etc: 30-cloud-init-ip6-addr-gen-mode.conf)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9319] manager[0x5622b23f6720]: monitoring kernel firmware directory '/lib/firmware'.
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9358] hostname: hostname: using hostnamed
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9358] hostname: static hostname changed from (none) to "ip-10-31-9-229.us-east-1.aws.redhat.com"
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9363] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9367] manager[0x5622b23f6720]: rfkill: Wi-Fi hardware radio set enabled
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9368] manager[0x5622b23f6720]: rfkill: WWAN hardware radio set enabled
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9427] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9428] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9429] manager: Networking is enabled by state file
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9444] settings: Loaded settings plugin: keyfile (internal)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9490] dhcp: init: Using DHCP client 'internal'
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9493] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9518] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 552.
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9568] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9574] device (lo): Activation: starting connection 'lo' (d1ce6b43-dae8-4185-9eea-3529d410c3d7)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9583] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9587] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9617] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager.service - Network Manager. ░░ Subject: A start job for unit NetworkManager.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager.service has finished successfully. ░░ ░░ The job identifier is 205.
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9626] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9628] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9629] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9631] device (eth0): carrier: link connected
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9633] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9644] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9650] policy: auto-activating connection 'cloud-init eth0' (1dd9a779-d327-56e1-8454-c65e2556c12c)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9655] device (eth0): Activation: starting connection 'cloud-init eth0' (1dd9a779-d327-56e1-8454-c65e2556c12c)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9656] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9661] manager: NetworkManager state is now CONNECTING
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9662] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9673] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9680] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target network.target - Network. ░░ Subject: A start job for unit network.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network.target has finished successfully. ░░ ░░ The job identifier is 208.
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904795.9734] dhcp4 (eth0): state changed new lease, address=10.31.9.229, acd pending
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-wait-online.service - Network Manager Wait Online... ░░ Subject: A start job for unit NetworkManager-wait-online.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-wait-online.service has begun execution. ░░ ░░ The job identifier is 204.
Jul 25 06:53:15 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting gssproxy.service - GSSAPI Proxy Daemon... ░░ Subject: A start job for unit gssproxy.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit gssproxy.service has begun execution. ░░ ░░ The job identifier is 245.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.1286] dhcp4 (eth0): state changed new lease, address=10.31.9.229
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.1291] policy: set 'cloud-init eth0' (eth0) as default for IPv4 routing and DNS
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.1893] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 552.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2714] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2721] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2732] device (lo): Activation: successful, device activated.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2744] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2746] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2750] manager: NetworkManager state is now CONNECTED_SITE
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2753] device (eth0): Activation: successful, device activated.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2760] manager: NetworkManager state is now CONNECTED_GLOBAL
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com NetworkManager[678]: [1721904796.2763] manager: startup complete
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished NetworkManager-wait-online.service - Network Manager Wait Online. ░░ Subject: A start job for unit NetworkManager-wait-online.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-wait-online.service has finished successfully. ░░ ░░ The job identifier is 204.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting cloud-init.service - Initial cloud-init job (metadata service crawler)... ░░ Subject: A start job for unit cloud-init.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.service has begun execution. ░░ ░░ The job identifier is 271.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com chronyd[642]: Added source 10.11.160.238
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com chronyd[642]: Added source 10.18.100.10
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com chronyd[642]: Added source 10.2.32.37
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com chronyd[642]: Added source 10.2.32.38
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started gssproxy.service - GSSAPI Proxy Daemon. ░░ Subject: A start job for unit gssproxy.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit gssproxy.service has finished successfully. ░░ ░░ The job identifier is 245.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: rpc-gssd.service - RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). ░░ Subject: A start job for unit rpc-gssd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-gssd.service has finished successfully. ░░ ░░ The job identifier is 242.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target nfs-client.target - NFS client services. ░░ Subject: A start job for unit nfs-client.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit nfs-client.target has finished successfully. ░░ ░░ The job identifier is 239.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. ░░ Subject: A start job for unit remote-fs-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-fs-pre.target has finished successfully. ░░ ░░ The job identifier is 246.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. ░░ Subject: A start job for unit remote-cryptsetup.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-cryptsetup.target has finished successfully. ░░ ░░ The job identifier is 266.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target remote-fs.target - Remote File Systems. ░░ Subject: A start job for unit remote-fs.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-fs.target has finished successfully. ░░ ░░ The job identifier is 238.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: systemd-pcrphase.service - TPM PCR Barrier (User) was skipped because of an unmet condition check (ConditionSecurity=measured-uki). ░░ Subject: A start job for unit systemd-pcrphase.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-pcrphase.service has finished successfully. ░░ ░░ The job identifier is 167.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Cloud-init v. 24.1.4-13.el10 running 'init' at Thu, 25 Jul 2024 10:53:16 +0000. Up 49.08 seconds.
Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | eth0 | True | 10.31.9.229 | 255.255.252.0 | global | 12:7b:b2:51:da:5b | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | eth0 | True | fe80::107b:b2ff:fe51:da5b/64 | . | link | 12:7b:b2:51:da:5b | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | lo | True | ::1/128 | . | host | . 
| Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: ++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | 0 | 0.0.0.0 | 10.31.8.1 | 0.0.0.0 | eth0 | UG | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | 1 | 10.31.8.0 | 0.0.0.0 | 255.255.252.0 | eth0 | U | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | Route | Destination | Gateway | Interface | Flags | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | 0 | fe80::/64 | :: | eth0 | U | Jul 25 06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: | 2 | multicast | :: | eth0 | U | Jul 25 
06:53:16 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 25 06:53:17 ip-10-31-9-229.us-east-1.aws.redhat.com 55-scsi-sg3_id.rules[831]: WARNING: SCSI device xvda has no device ID, consider changing .SCSI_ID_SERIAL_SRC in 00-scsi-sg3_config.rules Jul 25 06:53:17 ip-10-31-9-229.us-east-1.aws.redhat.com 55-scsi-sg3_id.rules[834]: WARNING: SCSI device xvda has no device ID, consider changing .SCSI_ID_SERIAL_SRC in 00-scsi-sg3_config.rules Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Generating public/private rsa key pair. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: The key fingerprint is: Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: SHA256:MFKhYmBVn0paESc7UYvNtpszoAAn/sdYsxPz1b1A87o root@ip-10-31-9-229.us-east-1.aws.redhat.com Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: The key's randomart image is: Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: +---[RSA 3072]----+ Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: |.....*=+ | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: |.. o@ o | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: |o + o*oB o | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: |o+ .+.+o. o + | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | o . B .S. o o | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | o = B + o . | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | + = * . . 
| Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . . o . | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | E | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: +----[SHA256]-----+ Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Generating public/private ecdsa key pair. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: The key fingerprint is: Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: SHA256:jwL0FA9izA0Wz27VHh9ScGhJMg1RsUGHQDk0rmoleCA root@ip-10-31-9-229.us-east-1.aws.redhat.com Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: The key's randomart image is: Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: +---[ECDSA 256]---+ Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | o*+ooXXB*+ | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | oo+.=o=** | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: |E . . + +o= . | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . + + o . + . | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . + * S . . | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . * o | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | o . . . | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . . 
| Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: +----[SHA256]-----+ Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Generating public/private ed25519 key pair. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: The key fingerprint is: Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: SHA256:tauVEMwFNI9hevpYUzSJ0c2aXvfz6b5AJidDRhE+wdk root@ip-10-31-9-229.us-east-1.aws.redhat.com Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: The key's randomart image is: Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: +--[ED25519 256]--+ Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | .B==O= | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | =.B=++E | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . * +B | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | o +=.o . | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . S..= = . | Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | + o.oB ..| Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . . + . +| Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | o ...| Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: | . 
o+.| Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[765]: +----[SHA256]-----+ Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished cloud-init.service - Initial cloud-init job (metadata service crawler). ░░ Subject: A start job for unit cloud-init.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.service has finished successfully. ░░ ░░ The job identifier is 271. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target cloud-config.target - Cloud-config availability. ░░ Subject: A start job for unit cloud-config.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.target has finished successfully. ░░ ░░ The job identifier is 270. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target network-online.target - Network is Online. ░░ Subject: A start job for unit network-online.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network-online.target has finished successfully. ░░ ░░ The job identifier is 203. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting cloud-config.service - Apply the settings specified in cloud-config... ░░ Subject: A start job for unit cloud-config.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.service has begun execution. ░░ ░░ The job identifier is 269. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting kdump.service - Crash recovery kernel arming... ░░ Subject: A start job for unit kdump.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit kdump.service has begun execution. 
░░ ░░ The job identifier is 232. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting restraintd.service - The restraint harness.... ░░ Subject: A start job for unit restraintd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit restraintd.service has begun execution. ░░ ░░ The job identifier is 236. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting rpc-statd-notify.service - Notify NFS peers of a restart... ░░ Subject: A start job for unit rpc-statd-notify.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-statd-notify.service has begun execution. ░░ ░░ The job identifier is 240. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... ░░ Subject: A start job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 248. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com sm-notify[850]: Version 2.6.4 starting Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started rpc-statd-notify.service - Notify NFS peers of a restart. ░░ Subject: A start job for unit rpc-statd-notify.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-statd-notify.service has finished successfully. ░░ ░░ The job identifier is 240. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com (sshd)[851]: sshd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started restraintd.service - The restraint harness.. 
░░ Subject: A start job for unit restraintd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit restraintd.service has finished successfully. ░░ ░░ The job identifier is 236. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[851]: Server listening on 0.0.0.0 port 22. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. ░░ Subject: A start job for unit sshd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has finished successfully. ░░ ░░ The job identifier is 248. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[851]: Server listening on :: port 22. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[895]: Cloud-init v. 24.1.4-13.el10 running 'modules:config' at Thu, 25 Jul 2024 10:53:19 +0000. Up 52.12 seconds. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[851]: Received signal 15; terminating. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Stopping sshd.service - OpenSSH server daemon... ░░ Subject: A stop job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 658. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: sshd.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit sshd.service has successfully entered the 'dead' state. Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Stopped sshd.service - OpenSSH server daemon. 
░░ Subject: A stop job for unit sshd.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A stop job for unit sshd.service has finished.
░░ 
░░ The job identifier is 658 and the job result is done.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Stopped target sshd-keygen.target.
░░ Subject: A stop job for unit sshd-keygen.target has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A stop job for unit sshd-keygen.target has finished.
░░ 
░░ The job identifier is 743 and the job result is done.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Stopping sshd-keygen.target...
░░ Subject: A stop job for unit sshd-keygen.target has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A stop job for unit sshd-keygen.target has begun execution.
░░ 
░░ The job identifier is 743.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: ssh-host-keys-migration.service - Update OpenSSH host key permissions was skipped because of an unmet condition check (ConditionPathExists=!/var/lib/.ssh-host-keys-migration).
░░ Subject: A start job for unit ssh-host-keys-migration.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit ssh-host-keys-migration.service has finished successfully.
░░ 
░░ The job identifier is 742.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: sshd-keygen@ecdsa.service - OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
░░ Subject: A start job for unit sshd-keygen@ecdsa.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit sshd-keygen@ecdsa.service has finished successfully.
░░ 
░░ The job identifier is 741.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: sshd-keygen@ed25519.service - OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
░░ Subject: A start job for unit sshd-keygen@ed25519.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit sshd-keygen@ed25519.service has finished successfully.
░░ 
░░ The job identifier is 740.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: sshd-keygen@rsa.service - OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
░░ Subject: A start job for unit sshd-keygen@rsa.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit sshd-keygen@rsa.service has finished successfully.
░░ 
░░ The job identifier is 738.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target sshd-keygen.target.
░░ Subject: A start job for unit sshd-keygen.target has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit sshd-keygen.target has finished successfully.
░░ 
░░ The job identifier is 743.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon...
░░ Subject: A start job for unit sshd.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit sshd.service has begun execution.
░░ 
░░ The job identifier is 658.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com (sshd)[899]: sshd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[899]: Server listening on 0.0.0.0 port 22.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[899]: Server listening on :: port 22.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon.
░░ Subject: A start job for unit sshd.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit sshd.service has finished successfully.
░░ 
░░ The job identifier is 658.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com restraintd[858]: Listening on http://localhost:8081
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished cloud-config.service - Apply the settings specified in cloud-config.
░░ Subject: A start job for unit cloud-config.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit cloud-config.service has finished successfully.
░░ 
░░ The job identifier is 269.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting cloud-final.service - Execute cloud user/final scripts...
░░ Subject: A start job for unit cloud-final.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit cloud-final.service has begun execution.
░░ 
░░ The job identifier is 268.
Jul 25 06:53:19 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
░░ Subject: A start job for unit systemd-user-sessions.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit systemd-user-sessions.service has begun execution.
░░ 
░░ The job identifier is 223.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
░░ Subject: A start job for unit systemd-user-sessions.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit systemd-user-sessions.service has finished successfully.
░░ 
░░ The job identifier is 223.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started crond.service - Command Scheduler.
░░ Subject: A start job for unit crond.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit crond.service has finished successfully.
░░ 
░░ The job identifier is 224.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started getty@tty1.service - Getty on tty1.
░░ Subject: A start job for unit getty@tty1.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit getty@tty1.service has finished successfully.
░░ 
░░ The job identifier is 264.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com crond[905]: (CRON) STARTUP (1.7.0)
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com crond[905]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com crond[905]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 20% if used.)
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com crond[905]: (CRON) INFO (running with inotify support)
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
░░ Subject: A start job for unit serial-getty@ttyS0.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit serial-getty@ttyS0.service has finished successfully.
░░ 
░░ The job identifier is 259.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target getty.target - Login Prompts.
░░ Subject: A start job for unit getty.target has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit getty.target has finished successfully.
░░ 
░░ The job identifier is 258.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target multi-user.target - Multi-User System.
░░ Subject: A start job for unit multi-user.target has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit multi-user.target has finished successfully.
░░ 
░░ The job identifier is 118.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP...
░░ Subject: A start job for unit systemd-update-utmp-runlevel.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit systemd-update-utmp-runlevel.service has begun execution.
░░ 
░░ The job identifier is 228.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ The unit systemd-update-utmp-runlevel.service has successfully entered the 'dead' state.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP.
░░ Subject: A start job for unit systemd-update-utmp-runlevel.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit systemd-update-utmp-runlevel.service has finished successfully.
░░ 
░░ The job identifier is 228.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[966]: Cloud-init v. 24.1.4-13.el10 running 'modules:final' at Thu, 25 Jul 2024 10:53:20 +0000. Up 52.80 seconds.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com kdumpctl[854]: kdump: Detected change(s) in the following file(s): /etc/fstab
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[1020]: #############################################################
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[1024]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[1032]: 256 SHA256:jwL0FA9izA0Wz27VHh9ScGhJMg1RsUGHQDk0rmoleCA root@ip-10-31-9-229.us-east-1.aws.redhat.com (ECDSA)
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[1040]: 256 SHA256:tauVEMwFNI9hevpYUzSJ0c2aXvfz6b5AJidDRhE+wdk root@ip-10-31-9-229.us-east-1.aws.redhat.com (ED25519)
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[1047]: 3072 SHA256:MFKhYmBVn0paESc7UYvNtpszoAAn/sdYsxPz1b1A87o root@ip-10-31-9-229.us-east-1.aws.redhat.com (RSA)
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[1048]: -----END SSH HOST KEY FINGERPRINTS-----
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[1051]: #############################################################
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com cloud-init[966]: Cloud-init v. 24.1.4-13.el10 finished at Thu, 25 Jul 2024 10:53:20 +0000. Datasource DataSourceEc2Local. Up 53.02 seconds
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished cloud-final.service - Execute cloud user/final scripts.
░░ Subject: A start job for unit cloud-final.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit cloud-final.service has finished successfully.
░░ 
░░ The job identifier is 268.
Jul 25 06:53:20 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target cloud-init.target - Cloud-init target.
░░ Subject: A start job for unit cloud-init.target has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit cloud-init.target has finished successfully.
░░ 
░░ The job identifier is 267.
Jul 25 06:53:22 ip-10-31-9-229.us-east-1.aws.redhat.com chronyd[642]: Selected source 10.2.32.38
Jul 25 06:53:22 ip-10-31-9-229.us-east-1.aws.redhat.com chronyd[642]: System clock TAI offset set to 37 seconds
Jul 25 06:53:25 ip-10-31-9-229.us-east-1.aws.redhat.com kernel: block xvda: the capability attribute has been deprecated.
Jul 25 06:53:26 ip-10-31-9-229.us-east-1.aws.redhat.com kdumpctl[854]: kdump: Rebuilding /boot/initramfs-6.10.0-15.el10.x86_64kdump.img
Jul 25 06:53:26 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state.
Jul 25 06:53:27 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1395]: dracut-101-2.el10
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics --aggressive-strip -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/0ffc5577-5083-4366-b534-51618c34eaf7 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-6.10.0-15.el10.x86_64kdump.img 6.10.0-15.el10.x86_64
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-pcrphase' will not be installed, because command '/usr/lib/systemd/systemd-pcrphase' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'busybox' will not be installed, because command 'busybox' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'connman' will not be installed, because command 'connmand' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jul 25 06:53:28 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-pcrphase' will not be installed, because command '/usr/lib/systemd/systemd-pcrphase' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'busybox' will not be installed, because command 'busybox' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'connman' will not be installed, because command 'connmand' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jul 25 06:53:29 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: systemd ***
Jul 25 06:53:30 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: systemd-initrd ***
Jul 25 06:53:30 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: rngd ***
Jul 25 06:53:30 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: i18n ***
Jul 25 06:53:31 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: drm ***
Jul 25 06:53:31 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: prefixdevname ***
Jul 25 06:53:31 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: kernel-modules ***
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: kernel-modules-extra ***
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: fstab-sys ***
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: rootfs-block ***
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: terminfo ***
Jul 25 06:53:32 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: udev-rules ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: dracut-systemd ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: usrmount ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: base ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: fs-lib ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: kdumpbase ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: memstrack ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: shutdown ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including module: squash ***
Jul 25 06:53:33 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Including modules done ***
Jul 25 06:53:34 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Installing kernel module dependencies ***
Jul 25 06:53:34 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Installing kernel module dependencies done ***
Jul 25 06:53:34 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Resolving executable dependencies ***
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Resolving executable dependencies done ***
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Hardlinking files ***
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Mode: real
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Method: sha256
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Files: 440
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Linked: 1 files
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Compared: 0 xattrs
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Compared: 11 files
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Saved: 56.55 KiB
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Duration: 0.009902 seconds
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Hardlinking files done ***
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Generating early-microcode cpio image ***
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Constructing GenuineIntel.bin ***
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Store current command line parameters ***
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: Stored kernel commandline:
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: No dracut internal kernel commandline stored in the initramfs
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Install squash loader ***
Jul 25 06:53:36 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Stripping files ***
Jul 25 06:53:37 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Stripping files done ***
Jul 25 06:53:37 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Squashing the files inside the initramfs ***
Jul 25 06:53:43 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Squashing the files inside the initramfs done ***
Jul 25 06:53:43 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Creating image file '/boot/initramfs-6.10.0-15.el10.x86_64kdump.img' ***
Jul 25 06:53:43 ip-10-31-9-229.us-east-1.aws.redhat.com dracut[1398]: *** Creating initramfs image file '/boot/initramfs-6.10.0-15.el10.x86_64kdump.img' done ***
Jul 25 06:53:44 ip-10-31-9-229.us-east-1.aws.redhat.com kernel: PKCS7: Message signed outside of X.509 validity window
Jul 25 06:53:44 ip-10-31-9-229.us-east-1.aws.redhat.com kdumpctl[854]: kdump: kexec: loaded kdump kernel
Jul 25 06:53:44 ip-10-31-9-229.us-east-1.aws.redhat.com kdumpctl[854]: kdump: Starting kdump: [OK]
Jul 25 06:53:44 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished kdump.service - Crash recovery kernel arming.
░░ Subject: A start job for unit kdump.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit kdump.service has finished successfully.
░░ 
░░ The job identifier is 232.
Jul 25 06:53:44 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Startup finished in 1.405s (kernel) + 10.745s (initrd) + 1min 4.504s (userspace) = 1min 16.655s.
░░ Subject: System start-up is now complete
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ All system services necessary queued for starting at boot have been
░░ started. Note that this does not mean that the machine is now idle as services
░░ might still be busy with completing start-up.
░░ 
░░ Kernel start-up required 1405967 microseconds.
░░ 
░░ Initrd start-up required 10745595 microseconds.
░░ 
░░ Userspace start-up required 64504342 microseconds.
Jul 25 06:53:45 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3668]: Accepted publickey for root from 10.30.32.10 port 49914 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3668]: pam_systemd(sshd:session): New sd-bus connection (system-bus-pam-systemd-3668) opened.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Created slice user-0.slice - User Slice of UID 0.
░░ Subject: A start job for unit user-0.slice has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit user-0.slice has finished successfully.
░░ 
░░ The job identifier is 831.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
░░ Subject: A start job for unit user-runtime-dir@0.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit user-runtime-dir@0.service has begun execution.
░░ 
░░ The job identifier is 752.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: New session 1 of user root.
░░ Subject: A new session 1 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░ 
░░ A new session with the ID 1 has been created for the user root.
░░ 
░░ The leading process of the session is 3668.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
░░ Subject: A start job for unit user-runtime-dir@0.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit user-runtime-dir@0.service has finished successfully.
░░ 
░░ The job identifier is 752.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting user@0.service - User Manager for UID 0...
░░ Subject: A start job for unit user@0.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit user@0.service has begun execution.
░░ 
░░ The job identifier is 833.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: New session 2 of user root.
░░ Subject: A new session 2 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░ 
░░ A new session with the ID 2 has been created for the user root.
░░ 
░░ The leading process of the session is 3673.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com (systemd)[3673]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Queued start job for default target default.target.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Created slice app.slice - User Application Slice.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit UNIT has finished successfully.
░░ 
░░ The job identifier is 4.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit UNIT has finished successfully.
░░ 
░░ The job identifier is 9.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ 
░░ A start job for unit UNIT has finished successfully.
░░ 
░░ The job identifier is 8.
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Reached target paths.target - Paths.
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 12. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Reached target timers.target - Timers. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 7. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Starting dbus.socket - D-Bus User Message Bus Socket... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 11. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 3. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 3. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Listening on dbus.socket - D-Bus User Message Bus Socket. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 11. 
Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Reached target sockets.target - Sockets. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 10. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Reached target basic.target - Basic System. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 2. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Reached target default.target - Main User Target. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 1. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[3673]: Startup finished in 142ms. ░░ Subject: User manager start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The user manager instance for user 0 has been started. All services queued ░░ for starting have been started. Note that other services might still be starting ░░ up or be started at any later time. ░░ ░░ Startup of the manager took 142517 microseconds. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started user@0.service - User Manager for UID 0. ░░ Subject: A start job for unit user@0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@0.service has finished successfully. ░░ ░░ The job identifier is 833. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started session-1.scope - Session 1 of User root. 
░░ Subject: A start job for unit session-1.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-1.scope has finished successfully. ░░ ░░ The job identifier is 915. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3668]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3684]: Received disconnect from 10.30.32.10 port 49914:11: disconnected by user Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3684]: Disconnected from user root 10.30.32.10 port 49914 Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3668]: pam_systemd(sshd:session): New sd-bus connection (system-bus-pam-systemd-3668) opened. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3668]: pam_unix(sshd:session): session closed for user root Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: session-1.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-1.scope has successfully entered the 'dead' state. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: Session 1 logged out. Waiting for processes to exit. Jul 25 06:54:06 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: Removed session 1. ░░ Subject: Session 1 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 1 has been terminated. 
Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3717]: Accepted publickey for root from 10.31.11.228 port 50796 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3716]: Accepted publickey for root from 10.31.11.228 port 50794 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3716]: pam_systemd(sshd:session): New sd-bus connection (system-bus-pam-systemd-3716) opened. Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3717]: pam_systemd(sshd:session): New sd-bus connection (system-bus-pam-systemd-3717) opened. Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: New session 3 of user root. ░░ Subject: A new session 3 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 3 has been created for the user root. ░░ ░░ The leading process of the session is 3716. Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started session-3.scope - Session 3 of User root. ░░ Subject: A start job for unit session-3.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-3.scope has finished successfully. ░░ ░░ The job identifier is 998. Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: New session 4 of user root. ░░ Subject: A new session 4 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 4 has been created for the user root. ░░ ░░ The leading process of the session is 3717. 
Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3716]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started session-4.scope - Session 4 of User root. ░░ Subject: A start job for unit session-4.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-4.scope has finished successfully. ░░ ░░ The job identifier is 1081. Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3717]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3723]: Received disconnect from 10.31.11.228 port 50796:11: disconnected by user Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3723]: Disconnected from user root 10.31.11.228 port 50796 Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3717]: pam_systemd(sshd:session): New sd-bus connection (system-bus-pam-systemd-3717) opened. Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[3717]: pam_unix(sshd:session): session closed for user root Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: session-4.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-4.scope has successfully entered the 'dead' state. Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: Session 4 logged out. Waiting for processes to exit. Jul 25 06:54:09 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: Removed session 4. ░░ Subject: Session 4 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 4 has been terminated. 
Jul 25 06:54:27 ip-10-31-9-229.us-east-1.aws.redhat.com chronyd[642]: Selected source 159.203.82.102 (2.centos.pool.ntp.org) Jul 25 06:55:16 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[4231]: Accepted publickey for root from 10.31.43.50 port 34534 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 25 06:55:16 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[4231]: pam_systemd(sshd:session): New sd-bus connection (system-bus-pam-systemd-4231) opened. Jul 25 06:55:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd-logind[614]: New session 5 of user root. ░░ Subject: A new session 5 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 5 has been created for the user root. ░░ ░░ The leading process of the session is 4231. Jul 25 06:55:16 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started session-5.scope - Session 5 of User root. ░░ Subject: A start job for unit session-5.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-5.scope has finished successfully. ░░ ░░ The job identifier is 1164. Jul 25 06:55:16 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[4231]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:18 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4366]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxjcnezdigklskctqcaxjgojowcqrfrc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904917.557091-7114-98622295191706/AnsiballZ_setup.py' Jul 25 06:55:18 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4366]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-4366) opened. 
Jul 25 06:55:18 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4366]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:18 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[4369]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 25 06:55:19 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4366]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:19 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4511]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grmmaoejmeckujfhcfoaeczhavdspikw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904919.6963942-7136-147609845816314/AnsiballZ_stat.py' Jul 25 06:55:19 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4511]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-4511) opened. Jul 25 06:55:19 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4511]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:20 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[4514]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 25 06:55:20 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4511]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:20 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4627]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfcpufwqoeyfxektenflyvpdxugqwsn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904920.2942069-7148-4918794236227/AnsiballZ_dnf.py' Jul 25 06:55:20 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4627]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-4627) opened. 
Jul 25 06:55:20 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4627]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:20 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[4630]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 25 06:55:29 ip-10-31-9-229.us-east-1.aws.redhat.com groupadd[4649]: group added to /etc/group: name=clevis, GID=993 Jul 25 06:55:30 ip-10-31-9-229.us-east-1.aws.redhat.com groupadd[4649]: group added to /etc/gshadow: name=clevis Jul 25 06:55:30 ip-10-31-9-229.us-east-1.aws.redhat.com groupadd[4649]: new group: name=clevis, GID=993 Jul 25 06:55:30 ip-10-31-9-229.us-east-1.aws.redhat.com useradd[4654]: new user: name=clevis, UID=993, GID=993, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none Jul 25 06:55:30 ip-10-31-9-229.us-east-1.aws.redhat.com usermod[4662]: add 'clevis' to group 'tss' Jul 25 06:55:30 ip-10-31-9-229.us-east-1.aws.redhat.com usermod[4662]: add 'clevis' to shadow group 'tss' Jul 25 06:55:31 ip-10-31-9-229.us-east-1.aws.redhat.com dbus-broker-launch[605]: Noticed file-system modification, trigger reload. 
░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 25 06:55:31 ip-10-31-9-229.us-east-1.aws.redhat.com dbus-broker-launch[605]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 25 06:55:33 ip-10-31-9-229.us-east-1.aws.redhat.com dbus-broker-launch[605]: Noticed file-system modification, trigger reload. 
░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 25 06:55:33 ip-10-31-9-229.us-east-1.aws.redhat.com dbus-broker-launch[605]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[899]: Received signal 15; terminating. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Stopping sshd.service - OpenSSH server daemon... 
░░ Subject: A stop job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 1251. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: sshd.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit sshd.service has successfully entered the 'dead' state. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Stopped sshd.service - OpenSSH server daemon. ░░ Subject: A stop job for unit sshd.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has finished. ░░ ░░ The job identifier is 1251 and the job result is done. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Stopped target sshd-keygen.target. ░░ Subject: A stop job for unit sshd-keygen.target has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd-keygen.target has finished. ░░ ░░ The job identifier is 1336 and the job result is done. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Stopping sshd-keygen.target... ░░ Subject: A stop job for unit sshd-keygen.target has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd-keygen.target has begun execution. ░░ ░░ The job identifier is 1336. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: ssh-host-keys-migration.service - Update OpenSSH host key permissions was skipped because of an unmet condition check (ConditionPathExists=!/var/lib/.ssh-host-keys-migration). 
░░ Subject: A start job for unit ssh-host-keys-migration.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit ssh-host-keys-migration.service has finished successfully. ░░ ░░ The job identifier is 1335. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: sshd-keygen@ecdsa.service - OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@ecdsa.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@ecdsa.service has finished successfully. ░░ ░░ The job identifier is 1334. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: sshd-keygen@ed25519.service - OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@ed25519.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@ed25519.service has finished successfully. ░░ ░░ The job identifier is 1333. Jul 25 06:55:34 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: sshd-keygen@rsa.service - OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@rsa.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@rsa.service has finished successfully. ░░ ░░ The job identifier is 1331. 
Jul 25 06:55:35 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reached target sshd-keygen.target. ░░ Subject: A start job for unit sshd-keygen.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen.target has finished successfully. ░░ ░░ The job identifier is 1336. Jul 25 06:55:35 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... ░░ Subject: A start job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 1251. Jul 25 06:55:35 ip-10-31-9-229.us-east-1.aws.redhat.com (sshd)[4685]: sshd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 25 06:55:35 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[4685]: Server listening on 0.0.0.0 port 22. Jul 25 06:55:35 ip-10-31-9-229.us-east-1.aws.redhat.com sshd[4685]: Server listening on :: port 22. Jul 25 06:55:35 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. ░░ Subject: A start job for unit sshd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has finished successfully. ░░ ░░ The job identifier is 1251. Jul 25 06:55:36 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Started run-r97ba583791f54677b6add5a0a092002c.service - /usr/bin/systemctl start man-db-cache-update. ░░ Subject: A start job for unit run-r97ba583791f54677b6add5a0a092002c.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-r97ba583791f54677b6add5a0a092002c.service has finished successfully. ░░ ░░ The job identifier is 1338. 
Jul 25 06:55:37 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reload requested from client PID 4698 ('systemctl') (unit session-5.scope)... Jul 25 06:55:37 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reloading... Jul 25 06:55:37 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Reloading finished in 206 ms. Jul 25 06:55:37 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Starting man-db-cache-update.service... ░░ Subject: A start job for unit man-db-cache-update.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has begun execution. ░░ ░░ The job identifier is 1417. Jul 25 06:55:37 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Queuing reload/restart jobs for marked units… Jul 25 06:55:39 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4627]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:39 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4860]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-httlwopbcmlhsvlsmkziombmhjxcnmrk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904939.3856192-7172-41900155957637/AnsiballZ_blivet.py' Jul 25 06:55:39 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4860]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-4860) opened. 
Jul 25 06:55:39 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4860]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:40 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[4863]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True safe_mode=True diskvolume_mkfs_option_map={} Jul 25 06:55:40 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4860]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:40 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4978]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szttldvnhmvjsbsdycejyfrerukhjbsx ; /usr/bin/python3.12 
/root/.ansible/tmp/ansible-tmp-1721904940.4873655-7184-89061438411290/AnsiballZ_dnf.py' Jul 25 06:55:40 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4978]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-4978) opened. Jul 25 06:55:40 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4978]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:40 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[4981]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 25 06:55:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[4978]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5095]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgtcaimxqttrpuyuuobharqghjtvotps ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904941.2299838-7192-262485248271495/AnsiballZ_service_facts.py' Jul 25 06:55:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5095]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-5095) opened. Jul 25 06:55:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5095]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:41 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[5098]: ansible-service_facts Invoked Jul 25 06:55:43 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: man-db-cache-update.service: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit man-db-cache-update.service has successfully entered the 'dead' state. Jul 25 06:55:43 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: Finished man-db-cache-update.service. ░░ Subject: A start job for unit man-db-cache-update.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has finished successfully. ░░ ░░ The job identifier is 1417. Jul 25 06:55:43 ip-10-31-9-229.us-east-1.aws.redhat.com systemd[1]: run-r97ba583791f54677b6add5a0a092002c.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-r97ba583791f54677b6add5a0a092002c.service has successfully entered the 'dead' state. Jul 25 06:55:43 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5095]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5322]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsvmixogxmcwdyeopyajywwhihgojwaa ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904943.8743367-7207-31082368693836/AnsiballZ_blivet.py' Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5322]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-5322) opened. 
Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5322]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[5325]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False packages_only=False diskvolume_mkfs_option_map={} Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5322]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5440]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvwukvpwmduxdddealkrruczcthlvskj ; /usr/bin/python3.12 
/root/.ansible/tmp/ansible-tmp-1721904944.4482112-7216-71238379581997/AnsiballZ_stat.py' Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5440]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-5440) opened. Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5440]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:44 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[5443]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 25 06:55:45 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5440]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5558]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzkripqukyiocnxtpwrpwznidqkhfjmj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904946.070428-7234-118023591224132/AnsiballZ_stat.py' Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5558]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-5558) opened. 
Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5558]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[5561]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5558]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5676]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgmmpmxydmyjakuinrwnhoixsesjawjg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904946.4846294-7243-120739163220669/AnsiballZ_setup.py' Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5676]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-5676) opened. Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5676]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:46 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[5679]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 25 06:55:47 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5676]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:47 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5821]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ustndhngmdgetllzncaioxpbaqytqyhr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904947.5066764-7253-110593997064492/AnsiballZ_dnf.py' Jul 25 06:55:47 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5821]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-5821) opened. 
Jul 25 06:55:47 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5821]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:47 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[5824]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5821]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5938]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwertwjbtrqbcxkfqfcrvpqsvvupigi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904948.2232866-7261-80378747554401/AnsiballZ_find_unused_disk.py' Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5938]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-5938) opened. 
Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5938]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[5941]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[5938]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6056]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypnbjnlgurkcsqsgpdxqpzoexkmnsofc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904948.720928-7269-276255611496474/AnsiballZ_command.py' Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6056]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-6056) opened. Jul 25 06:55:48 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6056]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:49 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[6059]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 25 06:55:49 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6056]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:53 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[6211]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 25 06:55:54 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6353]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaglabeygdcmmvaezgchqqhofmpuylqf ; /usr/bin/python3.12 
/root/.ansible/tmp/ansible-tmp-1721904953.9583473-7392-80916244676141/AnsiballZ_setup.py' Jul 25 06:55:54 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6353]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-6353) opened. Jul 25 06:55:54 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6353]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:54 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[6356]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 25 06:55:54 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6353]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:55 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6498]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivcfezkweblrzsixchvvpprhzdponjjx ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904955.0058804-7404-151701732847748/AnsiballZ_stat.py' Jul 25 06:55:55 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6498]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-6498) opened. 
Jul 25 06:55:55 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6498]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:55 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[6501]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 25 06:55:55 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6498]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:55 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6614]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnxebgkuxemykmcnxvffkbdqwcixzdwd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904955.5991886-7416-261875920287087/AnsiballZ_dnf.py' Jul 25 06:55:55 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6614]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-6614) opened. Jul 25 06:55:55 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6614]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:56 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[6617]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 25 06:55:56 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6614]: pam_unix(sudo:session): session closed for user root Jul 25 
06:55:56 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6731]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeefcmdjmyxbskhlpaaozsivyfhjchfp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904956.470607-7426-157984021158448/AnsiballZ_blivet.py' Jul 25 06:55:56 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6731]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-6731) opened. Jul 25 06:55:56 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6731]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:57 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[6734]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} 
packages_only=True safe_mode=True diskvolume_mkfs_option_map={} Jul 25 06:55:57 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6731]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:57 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6849]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mepiomyvtwkvansdcdgwvhvalncmsnnq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904957.275257-7438-181935504366740/AnsiballZ_dnf.py' Jul 25 06:55:57 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6849]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-6849) opened. Jul 25 06:55:57 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6849]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:57 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[6852]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 25 06:55:57 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6849]: pam_unix(sudo:session): session closed for user root Jul 25 06:55:58 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6966]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxjsqnopzrsltcxbydnvuqvajzjmaxt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904957.9769099-7446-4243739453410/AnsiballZ_service_facts.py' Jul 25 06:55:58 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6966]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-6966) 
opened. Jul 25 06:55:58 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6966]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:55:58 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[6969]: ansible-service_facts Invoked Jul 25 06:55:59 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[6966]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:00 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7189]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzgzmgfibmmfsvuevnuaklaovuvxamda ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904960.18041-7456-36587570289944/AnsiballZ_blivet.py' Jul 25 06:56:00 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7189]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-7189) opened. Jul 25 06:56:00 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7189]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:00 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[7192]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 
'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False packages_only=False diskvolume_mkfs_option_map={} Jul 25 06:56:00 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7189]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:00 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7307]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiprzgxqggbxvctsxjsicwwtgqpgoznu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904960.7730956-7465-90198925706776/AnsiballZ_stat.py' Jul 25 06:56:00 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7307]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-7307) opened. Jul 25 06:56:00 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7307]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:01 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[7310]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 25 06:56:01 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7307]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:01 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7425]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faqfrccnqylprkroihkstyetinlremqk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904961.4030695-7483-244199421376517/AnsiballZ_stat.py' Jul 25 06:56:01 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7425]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-7425) opened. 
Jul 25 06:56:01 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7425]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:01 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[7428]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 25 06:56:01 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7425]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:02 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7543]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guyldvipdhsrblhkmvrslmursmgwnzip ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904962.0309644-7492-78599753224840/AnsiballZ_setup.py' Jul 25 06:56:02 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7543]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-7543) opened. Jul 25 06:56:02 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7543]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:02 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[7546]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 25 06:56:02 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7543]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:03 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7688]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fckvyvsjucxlsjxxhhpwxatuhtjijnxb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904963.0461204-7502-224224330865034/AnsiballZ_dnf.py' Jul 25 06:56:03 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7688]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-7688) opened. 
Jul 25 06:56:03 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7688]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:03 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[7691]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 25 06:56:03 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7688]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:03 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7805]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kywebwaksxqrgcciupytgehfcgvooqrz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904963.7569385-7510-231520017536926/AnsiballZ_find_unused_disk.py' Jul 25 06:56:03 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7805]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-7805) opened. 
Jul 25 06:56:03 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7805]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:04 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[7808]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 25 06:56:04 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7805]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:04 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7923]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djyjtdaxvabvajqvqeajlhhlnquthyty ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904964.2549717-7518-265496646467328/AnsiballZ_command.py' Jul 25 06:56:04 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7923]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-7923) opened. Jul 25 06:56:04 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7923]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:04 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[7926]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 25 06:56:04 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[7923]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:07 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8078]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obntlimprildzgimkbddfbjzfgijpvct ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904966.926705-7578-145040962948059/AnsiballZ_setup.py' Jul 25 06:56:07 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8078]: pam_systemd(sudo:session): New sd-bus connection 
(system-bus-pam-systemd-8078) opened. Jul 25 06:56:07 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8078]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:07 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[8081]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 25 06:56:08 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8078]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:08 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8223]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jthwzcfbsniwrcagrgwslhkiiruipuwt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904968.2671268-7592-227118270476448/AnsiballZ_stat.py' Jul 25 06:56:08 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8223]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-8223) opened. Jul 25 06:56:08 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8223]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:08 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[8226]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 25 06:56:08 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8223]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:09 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8339]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtbbifpfavzwmzdexlmqpserrtwarrnh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904968.8651743-7604-279392983828944/AnsiballZ_dnf.py' Jul 25 06:56:09 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8339]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-8339) opened. 
Jul 25 06:56:09 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8339]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:09 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[8342]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 25 06:56:09 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8339]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:09 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8456]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urdesegsujneqekztrthignpjbiiqmin ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904969.734862-7614-22571768464606/AnsiballZ_blivet.py' Jul 25 06:56:09 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8456]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-8456) opened. 
Jul 25 06:56:09 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8456]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:10 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[8459]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True safe_mode=True diskvolume_mkfs_option_map={} Jul 25 06:56:10 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8456]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:10 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8574]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stuzsrrputpirqhoafwqlvrofffwjkhg ; /usr/bin/python3.12 
/root/.ansible/tmp/ansible-tmp-1721904970.5159967-7626-77285724534623/AnsiballZ_dnf.py' Jul 25 06:56:10 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8574]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-8574) opened. Jul 25 06:56:10 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8574]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:10 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[8577]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 25 06:56:11 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8574]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:11 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8691]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apsfjtsysyezjggvcrtxyptqwmkwbnnp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904971.2179017-7634-116744117112465/AnsiballZ_service_facts.py' Jul 25 06:56:11 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8691]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-8691) opened. 
Jul 25 06:56:11 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8691]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:11 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[8694]: ansible-service_facts Invoked Jul 25 06:56:13 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8691]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:13 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8914]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxryhlmrrzhcdqahauoufmzjidvooknm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904973.3998342-7644-264288711186545/AnsiballZ_blivet.py' Jul 25 06:56:13 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8914]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-8914) opened. Jul 25 06:56:13 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8914]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:13 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[8917]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': 
None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True packages_only=False diskvolume_mkfs_option_map={} Jul 25 06:56:13 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[8914]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9032]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikoauznlouthdlwnhcwedtmdfdasfkjn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904973.9757073-7653-48680622619529/AnsiballZ_stat.py' Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9032]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-9032) opened. Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9032]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[9035]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9032]: pam_unix(sudo:session): session closed for user root Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9150]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yozucsfjtcnjrmrhajutxvdhbarcasnw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904974.5744295-7671-150967508829763/AnsiballZ_stat.py' Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9150]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-9150) opened. 
Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9150]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[9153]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 25 06:56:14 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9150]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:15 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9268]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flxlhpuswhjjavlylaqdrqhsprwswbel ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904974.9800038-7680-8183563708408/AnsiballZ_setup.py'
Jul 25 06:56:15 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9268]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-9268) opened.
Jul 25 06:56:15 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9268]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:15 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[9271]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 25 06:56:15 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9268]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:16 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9413]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksryyzipeynuoijiudirzhrcdiizjsba ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904975.967062-7690-12457329161500/AnsiballZ_dnf.py'
Jul 25 06:56:16 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9413]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-9413) opened.
Jul 25 06:56:16 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9413]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:16 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[9416]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 25 06:56:16 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9413]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:16 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9530]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oilifgrrpexljitaclsoqdlsrdgkotou ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904976.6863837-7698-51561607190367/AnsiballZ_find_unused_disk.py'
Jul 25 06:56:16 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9530]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-9530) opened.
Jul 25 06:56:16 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9530]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:17 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[9533]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None
Jul 25 06:56:17 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9530]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:17 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9648]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltwxjhdwwgqfbkwmvwdczmsrzdlqhtwj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904977.1816823-7706-19950669694270/AnsiballZ_command.py'
Jul 25 06:56:17 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9648]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-9648) opened.
Jul 25 06:56:17 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9648]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:17 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[9651]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 25 06:56:17 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9648]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:21 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[9803]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 25 06:56:22 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9945]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pamdtaimpujimepxhltgyhfadfdzmkej ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904982.1616387-7829-230434708432339/AnsiballZ_setup.py'
Jul 25 06:56:22 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9945]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-9945) opened.
Jul 25 06:56:22 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9945]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:22 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[9948]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 25 06:56:22 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[9945]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:23 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10090]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyrrabehklkjawhqqrykfoirixeorfas ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904983.2031803-7841-53976496728552/AnsiballZ_stat.py'
Jul 25 06:56:23 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10090]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-10090) opened.
Jul 25 06:56:23 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10090]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:23 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[10093]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 25 06:56:23 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10090]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:24 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10206]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcgifiducmrzhydikwgynhgqvrugtmac ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904983.800221-7853-165259837256897/AnsiballZ_dnf.py'
Jul 25 06:56:24 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10206]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-10206) opened.
Jul 25 06:56:24 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10206]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:24 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[10209]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 25 06:56:24 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10206]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:24 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10323]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-metdkrdkigmfzfjtgipzcyogamzunnxw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904984.6713254-7863-189961974966567/AnsiballZ_blivet.py'
Jul 25 06:56:24 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10323]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-10323) opened.
Jul 25 06:56:24 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10323]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:25 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[10326]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True safe_mode=True diskvolume_mkfs_option_map={}
Jul 25 06:56:25 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10323]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:25 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10441]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vntpyjjmelduldkfrmdbjtgxdmwneodr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904985.4589057-7875-203673279970801/AnsiballZ_dnf.py'
Jul 25 06:56:25 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10441]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-10441) opened.
Jul 25 06:56:25 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10441]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:25 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[10444]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 25 06:56:26 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10441]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:26 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10558]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgarowsgrzdagfssuwlqeanvaqemaudk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904986.1639888-7883-126991786763308/AnsiballZ_service_facts.py'
Jul 25 06:56:26 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10558]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-10558) opened.
Jul 25 06:56:26 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10558]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:26 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[10561]: ansible-service_facts Invoked
Jul 25 06:56:28 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10558]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:28 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10781]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozmomwhynkqubrphvhvwotpxsuqmjbhc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904988.33273-7893-137058354750724/AnsiballZ_blivet.py'
Jul 25 06:56:28 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10781]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-10781) opened.
Jul 25 06:56:28 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10781]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:28 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[10784]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True packages_only=False diskvolume_mkfs_option_map={}
Jul 25 06:56:28 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10781]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10899]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-easyzqrctxechpsynbyaegvhumbchyex ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904988.9230664-7902-78733425882366/AnsiballZ_stat.py'
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10899]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-10899) opened.
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10899]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[10902]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[10899]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11017]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpbclmwumxwjwiuqoovokczjrmieahet ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904989.5526123-7920-74453662109593/AnsiballZ_stat.py'
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11017]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-11017) opened.
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11017]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[11020]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 25 06:56:29 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11017]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:30 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11135]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjrpesnavbifpsxvurpyzazwewtdhvsk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904989.975084-7929-213367593655450/AnsiballZ_setup.py'
Jul 25 06:56:30 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11135]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-11135) opened.
Jul 25 06:56:30 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11135]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:30 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[11138]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 25 06:56:30 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11135]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:31 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11280]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxsfuwqgzemdtbzpiluxsvpfvnalidjn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904990.9893708-7939-215718427493956/AnsiballZ_dnf.py'
Jul 25 06:56:31 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11280]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-11280) opened.
Jul 25 06:56:31 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11280]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:31 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[11283]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 25 06:56:31 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11280]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:31 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11397]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfkxfureomcnldadqgdpefllvshumrqx ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904991.7174492-7947-185095585713156/AnsiballZ_find_unused_disk.py'
Jul 25 06:56:31 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11397]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-11397) opened.
Jul 25 06:56:31 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11397]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:32 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[11400]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False
Jul 25 06:56:32 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11397]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:32 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11515]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxjxymdvpugbflqejeamnogrtlhxmgqk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904992.2034142-7955-223046165168213/AnsiballZ_command.py'
Jul 25 06:56:32 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11515]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-11515) opened.
Jul 25 06:56:32 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11515]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:32 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[11518]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 25 06:56:32 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11515]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:35 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11670]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wovtxhxgeowtnksqfzuttczjbnojfmmn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904994.6162784-8015-48813350219395/AnsiballZ_setup.py'
Jul 25 06:56:35 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11670]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-11670) opened.
Jul 25 06:56:35 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11670]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:35 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[11673]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 25 06:56:35 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11670]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:36 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11815]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eztuahwarvzikajcuhrhogmrxfafcmgv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904995.9691782-8029-146752137207667/AnsiballZ_stat.py'
Jul 25 06:56:36 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11815]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-11815) opened.
Jul 25 06:56:36 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11815]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:36 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[11818]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 25 06:56:36 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11815]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:36 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11931]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxbrtueksnthahyitkwszlzokybftykw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904996.5698547-8041-192194636459557/AnsiballZ_dnf.py'
Jul 25 06:56:36 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11931]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-11931) opened.
Jul 25 06:56:36 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11931]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:37 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[11934]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 25 06:56:37 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[11931]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:37 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12048]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvwdvezjbytdpxfhcxzkhcpnvebglrai ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904997.4403846-8051-129870791039064/AnsiballZ_blivet.py'
Jul 25 06:56:37 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12048]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-12048) opened.
Jul 25 06:56:37 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12048]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:38 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[12051]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True safe_mode=True diskvolume_mkfs_option_map={}
Jul 25 06:56:38 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12048]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:38 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12166]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgsxrwpbhimontamhcuvyulppgtcptnj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904998.2169964-8063-48458003290058/AnsiballZ_dnf.py'
Jul 25 06:56:38 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12166]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-12166) opened.
Jul 25 06:56:38 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12166]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:38 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[12169]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 25 06:56:38 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12166]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:39 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12283]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sadzkumzugbkndrfygtdfsysisiqntss ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721904998.934474-8071-136633708290484/AnsiballZ_service_facts.py'
Jul 25 06:56:39 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12283]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-12283) opened.
Jul 25 06:56:39 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12283]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:39 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[12286]: ansible-service_facts Invoked
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12283]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12506]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buqhvjkrvasprruzrbognycpupyrmglb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721905001.1895382-8081-41903413966963/AnsiballZ_blivet.py'
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12506]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-12506) opened.
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12506]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[12509]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False packages_only=False diskvolume_mkfs_option_map={}
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12506]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12624]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pywjghgbqvhlyudhjrvseutwvzxdrkjr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721905001.785703-8090-13183681104698/AnsiballZ_stat.py'
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12624]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-12624) opened.
Jul 25 06:56:41 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12624]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[12627]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12624]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12742]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keskssujshxfijybbwbdtcmqfdrqcgzb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721905002.4198344-8108-212161255083470/AnsiballZ_stat.py'
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12742]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-12742) opened.
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12742]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[12745]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12742]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12860]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joowjwrcawkexttzkozuglddkcamoryh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721905002.8466237-8117-38402145558921/AnsiballZ_setup.py'
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12860]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-12860) opened.
Jul 25 06:56:42 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12860]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:43 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[12863]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 25 06:56:43 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[12860]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:43 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13005]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waiqhvvhquykfydzekkgyxbvaefnpxvt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721905003.8505478-8127-87718175178560/AnsiballZ_dnf.py'
Jul 25 06:56:43 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13005]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-13005) opened.
Jul 25 06:56:43 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13005]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:44 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[13008]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 25 06:56:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13005]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13122]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yormgjhimbnhejotrtcviowgcykrtzsc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721905004.5779188-8135-90196839735033/AnsiballZ_find_unused_disk.py'
Jul 25 06:56:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13122]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-13122) opened.
Jul 25 06:56:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13122]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:44 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[13125]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None
Jul 25 06:56:44 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13122]: pam_unix(sudo:session): session closed for user root
Jul 25 06:56:45 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13240]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlrcvhkzdqarokremyeqsrvirtcriblt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1721905005.0701895-8143-97460420248963/AnsiballZ_command.py'
Jul 25 06:56:45 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13240]: pam_systemd(sudo:session): New sd-bus connection (system-bus-pam-systemd-13240) opened.
Jul 25 06:56:45 ip-10-31-9-229.us-east-1.aws.redhat.com sudo[13240]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 25 06:56:45 ip-10-31-9-229.us-east-1.aws.redhat.com python3.12[13243]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None

TASK [Set unused_disks if necessary] *******************************************
Thursday 25 July 2024  06:56:45 -0400 (0:00:00.552)       0:00:11.026 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "'Unable to find unused disk' not in unused_disks_return.disks",
    "skip_reason": "Conditional result was False"
}

TASK [Exit playbook when there's not enough unused disks in the system] ********
Thursday 25 July 2024  06:56:45 -0400 (0:00:00.017)       0:00:11.044 *********
fatal: [managed_node1]: FAILED! => {
    "changed": false
}

MSG:

Unable to find enough unused disks. Exiting playbook.

PLAY RECAP *********************************************************************
managed_node1              : ok=29   changed=0    unreachable=0    failed=1    skipped=15   rescued=0    ignored=0

Thursday 25 July 2024  06:56:45 -0400 (0:00:00.020)       0:00:11.065 *********
===============================================================================
fedora.linux_system_roles.storage : Get service facts ------------------- 2.19s
Gathering Facts --------------------------------------------------------- 1.23s
fedora.linux_system_roles.storage : Update facts ------------------------ 0.94s
fedora.linux_system_roles.storage : Make sure blivet is available ------- 0.83s
Ensure test packages ---------------------------------------------------- 0.73s
fedora.linux_system_roles.storage : Make sure required packages are installed --- 0.72s
fedora.linux_system_roles.storage : Get required packages --------------- 0.70s
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 0.58s
Debug why there are no unused disks ------------------------------------- 0.55s
Find unused disks in the system ----------------------------------------- 0.50s
fedora.linux_system_roles.storage : Check if system is ostree ----------- 0.49s
fedora.linux_system_roles.storage : Check if /etc/fstab is present ------ 0.41s
fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file --- 0.41s
fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.05s
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.05s
fedora.linux_system_roles.storage : Set storage_cryptsetup_services ----- 0.04s
fedora.linux_system_roles.storage : Enable copr repositories if needed --- 0.03s
Get unused disks -------------------------------------------------------- 0.03s
fedora.linux_system_roles.storage : Set up new/current mounts ----------- 0.03s
Mark tasks to be skipped ------------------------------------------------ 0.03s
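The run fails because the `find_unused_disk` probe (invoked above with `min_size=5g max_return=1`) found no eligible disk, and the debug task then dumped `lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC`. As a rough sketch of what such a probe has to do, not the collection's actual module logic, the snippet below parses lsblk's `--pairs` output and keeps whole disks with no filesystem, no partitions, and at least 5 GiB. The `SAMPLE` device list, the helper names, and the parent-disk heuristic (stripping trailing digits, which would mis-handle e.g. nvme naming) are all illustrative assumptions.

```python
import re

# Hypothetical sample of `lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC`
# output; the real test host printed its own device list. Values are illustrative.
SAMPLE = """\
NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512"
NAME="/dev/xvda1" TYPE="part" SIZE="268434407424" FSTYPE="xfs" LOG-SEC="512"
NAME="/dev/xvdb" TYPE="disk" SIZE="10737418240" FSTYPE="" LOG-SEC="512"
"""

PAIR_RE = re.compile(r'(\S+)="([^"]*)"')

def parse_lsblk_pairs(text):
    """Turn each KEY="value" line of --pairs output into a dict."""
    return [dict(PAIR_RE.findall(line)) for line in text.splitlines() if line.strip()]

def unused_disks(devices, min_size=5 * 2**30, max_return=1):
    """Whole disks with no filesystem, no partitions, and at least min_size bytes."""
    partitions = {d["NAME"] for d in devices if d["TYPE"] == "part"}
    # Crude heuristic: a partition's parent disk is its name minus trailing digits.
    parents = {p.rstrip("0123456789") for p in partitions}
    found = [
        d["NAME"]
        for d in devices
        if d["TYPE"] == "disk"
        and not d["FSTYPE"]          # no filesystem signature on the whole disk
        and d["NAME"] not in parents  # no partitions carved out of it
        and int(d["SIZE"]) >= min_size
    ]
    return found[:max_return]

# /dev/xvda is excluded (it has a partition); /dev/xvdb is a free 10 GiB disk.
print(unused_disks(parse_lsblk_pairs(SAMPLE)))
```

On the failing host, every disk would have been excluded by checks like these, leaving the list empty and triggering the "Unable to find enough unused disks" exit.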