[DEPRECATION WARNING]: ANSIBLE_COLLECTIONS_PATHS option, does not fit var naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead. This feature will be removed from ansible-core in version 2.19. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-playbook [core 2.17.2]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/tmp.UJtmxR54QA
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.4 (main, Jul 17 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3.12)
  jinja version = 3.1.4
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
redirecting (type: callback) ansible.builtin.debug to ansible.posix.debug
redirecting (type: callback) ansible.builtin.debug to ansible.posix.debug
redirecting (type: callback) ansible.builtin.profile_tasks to ansible.posix.profile_tasks
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_quadlet_demo.yml ***********************************************
1 plays in /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml

PLAY [Deploy the quadlet demo app] *********************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:3
Saturday 27 July 2024 12:37:21 -0400 (0:00:00.009) 0:00:00.009 *********
[WARNING]: Platform linux on host managed_node1 is using the discovered Python interpreter at /usr/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html for more information.
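The deprecation notice above carries its own remedy. Since this run found no config file ("No config file found; using defaults"), the ansible.cfg below is only a sketch of those two settings, not something the run used: deprecation_warnings=False is quoted from the warning itself, collections_path is the config-file key behind the singular ANSIBLE_COLLECTIONS_PATH environment variable, and the path value is copied from the "ansible collection location" line above.

    # ansible.cfg -- illustrative sketch; this run had no config file
    [defaults]
    # non-deprecated, singular collections search path
    # (equivalent to exporting ANSIBLE_COLLECTIONS_PATH)
    collections_path = /tmp/tmp.UJtmxR54QA
    # silence deprecation warnings, as the message itself suggests
    deprecation_warnings = False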
ok: [managed_node1] TASK [Generate certificates] *************************************************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:33 Saturday 27 July 2024 12:37:22 -0400 (0:00:01.129) 0:00:01.139 ********* included: fedora.linux_system_roles.certificate for managed_node1 TASK [fedora.linux_system_roles.certificate : Set version specific variables] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:2 Saturday 27 July 2024 12:37:22 -0400 (0:00:00.044) 0:00:01.184 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml for managed_node1 TASK [fedora.linux_system_roles.certificate : Ensure ansible_facts used by role] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:2 Saturday 27 July 2024 12:37:22 -0400 (0:00:00.022) 0:00:01.206 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__certificate_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.certificate : Check if system is ostree] ******* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:10 Saturday 27 July 2024 12:37:22 -0400 (0:00:00.024) 0:00:01.231 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.certificate : Set flag to indicate system is ostree] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:15 Saturday 27 July 2024 12:37:23 -0400 (0:00:00.448) 0:00:01.679 ********* ok: [managed_node1] => { "ansible_facts": { "__certificate_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.certificate : Set platform/version specific variables] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:19 Saturday 27 July 2024 12:37:23 -0400 (0:00:00.027) 0:00:01.707 ********* skipping: [managed_node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed_node1] => (item=CentOS_9.yml) => { "ansible_facts": { "__certificate_certmonger_packages": [ "certmonger", "python3-packaging" ] }, "ansible_included_var_files": [ "/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } ok: [managed_node1] => (item=CentOS_9.yml) => { "ansible_facts": { "__certificate_certmonger_packages": [ "certmonger", "python3-packaging" ] }, "ansible_included_var_files": [ "/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } TASK [fedora.linux_system_roles.certificate : Ensure certificate role dependencies are installed] *** task path: 
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:5 Saturday 27 July 2024 12:37:23 -0400 (0:00:00.045) 0:00:01.753 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: python3-cryptography python3-dbus python3-pyasn1 TASK [fedora.linux_system_roles.certificate : Ensure provider packages are installed] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:23 Saturday 27 July 2024 12:37:24 -0400 (0:00:01.083) 0:00:02.837 ********* ok: [managed_node1] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: certmonger python3-packaging TASK [fedora.linux_system_roles.certificate : Ensure pre-scripts hooks directory exists] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:35 Saturday 27 July 2024 12:37:25 -0400 (0:00:00.943) 0:00:03.780 ********* ok: [managed_node1] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": false, "gid": 0, "group": "root", "mode": "0700", "owner": "root", "path": "/etc/certmonger//pre-scripts", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.certificate : Ensure post-scripts hooks directory exists] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:61 Saturday 27 July 2024 12:37:25 -0400 (0:00:00.495) 0:00:04.276 ********* ok: [managed_node1] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": false, "gid": 0, "group": "root", "mode": "0700", "owner": "root", "path": "/etc/certmonger//post-scripts", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.certificate : Ensure provider service is running] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:90 Saturday 27 July 2024 12:37:26 -0400 (0:00:00.384) 0:00:04.660 ********* ok: [managed_node1] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": false, "enabled": true, "name": "certmonger", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:certmonger_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:29:34 EDT", "ActiveEnterTimestampMonotonic": "307474561", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target sysinit.target syslog.target network.target dbus-broker.service systemd-journald.socket dbus.socket system.slice", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:29:34 EDT", "AssertTimestampMonotonic": "307459291", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedorahosted.certmonger", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "587103000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", 
"CanFreeze": "yes", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:29:34 EDT", "ConditionTimestampMonotonic": "307459287", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/certmonger.service", "ControlGroupId": "4016", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Certificate monitoring and PKI enrollment", "DevicePolicy": "auto", "DynamicUser": "no", "EnvironmentFiles": "/etc/sysconfig/certmonger (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "8639", "ExecMainStartTimestamp": "Sat 2024-07-27 12:29:34 EDT", "ExecMainStartTimestampMonotonic": "307465190", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/certmonger ; argv[]=/usr/sbin/certmonger -S -p /run/certmonger.pid -n $OPTS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/certmonger ; argv[]=/usr/sbin/certmonger -S -p /run/certmonger.pid -n $OPTS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/certmonger.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "certmonger.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:29:34 EDT", "InactiveExitTimestampMonotonic": "307465563", "InvocationID": "f658088dd6a7414dbff8e2e3cc164c10", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", 
"LimitNOFILESoft": "1024", "LimitNPROC": "13964", "LimitNPROCSoft": "13964", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13964", "LimitSIGPENDINGSoft": "13964", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "8639", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "3919872", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "certmonger.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "PIDFile": "/run/certmonger.pid", "PartOf": "dbus-broker.service", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target system.slice dbus.socket", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2024-07-27 12:37:06 EDT", "StateChangeTimestampMonotonic": "758987859", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "1", "TasksMax": "22342", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": 
"multi-user.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.certificate : Ensure certificate requests] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:101 Saturday 27 July 2024 12:37:26 -0400 (0:00:00.755) 0:00:05.416 ********* changed: [managed_node1] => (item={'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}) => { "ansible_loop_var": "item", "changed": true, "item": { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } } MSG: Certificate requested (new). TASK [fedora.linux_system_roles.certificate : Slurp the contents of the files] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:152 Saturday 27 July 2024 12:37:27 -0400 (0:00:00.858) 0:00:06.275 ********* ok: [managed_node1] => (item=['cert', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnakNDQW1xZ0F3SUJBZ0lRYSswTDhYRmVSNTZ6cEVBQzVFbTY5VEFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVNBd0hnWURWUVFEREJkTWIyTmhiQ0JUYVdkdWFXNW5JRUYxZEdodmNtbDBlVEVzTUNvR0ExVUVBd3dqTm1KbApaREJpWmpFdE56RTFaVFEzT1dVdFlqTmhORFF3TURJdFpUUTBPV0poWmpNd0hoY05NalF3TnpJM01UWXpOekkzCldoY05NalV3TnpJM01UWXlPVE0xV2pBVU1SSXdFQVlEVlFRREV3bHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ1AxbGNwQ2JiSEVDQWtaRGVCSm94YUp1Q2VPaml3cTRkWQpnMERuSmlmNml0Ny9yMGdITDBuWU9sd3dVSjNLUGJBY2p4eC9tR0JDZi91SENNbmZ4Y3NIckVEQlJYU1Q2aE9PCmY1ZUR0QzZvYWVJTFo2elBBcUJudFlhNDFQalNzUnlLV1ZteTJxamY3UkN5RWE3b1krNEE0bG5RanZtSElGc3gKcDQxSnlGYmZoeUFSQkFFYlk5ai9kUHR3TzIrNUlxaHpORy96SW9NNjZVVjJaT3lhcE54ZnBVRUwrMlgyaHh6ZgoraExydWNNNks5M251b0V1aWJpazJTNlpwVUpGbTMrenlDM0J5dXg1QUR1SWw2NEdNOHdkeURicFNIV1RZWk91CmpvdTUvaDlGSC8xc21JUjNLNk5YTFFZb2ZneVQzVVNTOWh0RldHZ255TnRvVGdhd2dRTlBBZ01CQUFHamdaTXcKZ1pBd0N3WURWUjBQQkFRREFnV2dNQlFHQTFVZEVRUU5NQXVDQ1d4dlkyRnNhRzl6ZERBZEJnTlZIU1VFRmpBVQpCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVUrdkN2Cko0RUNOQ2hVcWtQbEs2R1lkNytjdlZFd0h3WURWUjBqQkJnd0ZvQVU3SHBmbjJSVlFuRkJTOXQ3dFVhc3E2TU4KblBzd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFESGpzbTVSUHB2NXZRbzZuS2xoTEdnVndZajZITy9HUk96YwpTdFZIWmR1aUhaT3k5VmllUDNjTnA0RTNCYitxT2pMZ1ZWUUJOM2Y1b2xLS0pzMnZ0MHg4b3NyaklzRjl2a2IyCmhoZ2wvdjRldG9MVWh3L2hsQUM1NDBKWGllQVdIQ3RaVjB0NHgybGVHKzhEaER3R3BKR1czVEN1WXdONjFuTEUKL1ljU09YTFovbXJSU2FsYVA0cmljWDE4cFZCTDhNb2pFZk5PQ2FxWHp1ZGhtUlEyeTFWMlFvZG52ZGpmWUtyNApiNWJRWDRjRGR5RFA3OWtvR3NLVEtBR1BVRU9vNFh6cG9XSEN2dVNhNVdBcUIrY3pOOEtyYXROYVFkS3h1S2ttCldjbEhBQk40UWxxdnpkTHI4a3VnWmVPNUEvVUI4QlNaME5OZldpcVpIZWt1WEU4RmZ5VT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": [ "cert", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/certs/quadlet_demo.crt" } ok: [managed_node1] => (item=['key', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktZd2dnU2lBZ0VBQW9JQkFRQ1AxbGNwQ2JiSEVDQWsKWkRlQkpveGFKdUNlT2ppd3E0ZFlnMERuSmlmNml0Ny9yMGdITDBuWU9sd3dVSjNLUGJBY2p4eC9tR0JDZi91SApDTW5meGNzSHJFREJSWFNUNmhPT2Y1ZUR0QzZvYWVJTFo2elBBcUJudFlhNDFQalNzUnlLV1ZteTJxamY3UkN5CkVhN29ZKzRBNGxuUWp2bUhJRnN4cDQxSnlGYmZoeUFSQkFFYlk5ai9kUHR3TzIrNUlxaHpORy96SW9NNjZVVjIKWk95YXBOeGZwVUVMKzJYMmh4emYraExydWNNNks5M251b0V1aWJpazJTNlpwVUpGbTMrenlDM0J5dXg1QUR1SQpsNjRHTTh3ZHlEYnBTSFdUWVpPdWpvdTUvaDlGSC8xc21JUjNLNk5YTFFZb2ZneVQzVVNTOWh0RldHZ255TnRvClRnYXdnUU5QQWdNQkFBRUNnZ0VBQjllQ3R2ME5sdnE5RmQ3VTI5aXpMRE1VYWV2NHJVRWN4dW1ZLzBFb1N0T0oKWnQweWl4bTc1K3IrbmErM2VwQTBhc291bnBJN0pVVCsvSlN4Rk44ZXU1TmJrRzN0OWlaNzNIZitWYkxjSEZoeApkM2UyaHk2NGVnNytnbE15SWFLQVVOYWdPazBMaWZITnlEZUlscWprRCs1SFFVU2FQQkM2aUJGL2RYZFRyU29HCjIzVFdHTzdOMXArdG1HRVpkZktYUzgreGpGdDVsTlpXSGkycVRlZWtCaXk1bU1hSG85ck9PTEdVRDJkdUJZeWEKSzdVVUE5Ym9nYWt3QmgyQWk2VU9iWVkwQXVYeTNuV3VNRG9lQVZQem5YSW9uams2S21kd0ZDQmFmRnUzSkpXNQpKdHBzZ3FteTRZOUNkaktMK2RybktuSXRyQVE3aVVIWHdkcTFPT01nb1FLQmdRREN5RGFiTlQzT0VZRk1FY09ECnRGYzRBcXFCckhpeERoU01UNWRxY1M1NHQrTjI0VUdNOHFROXBXaHVBSW10SktXMEg4MmpzM2ZFdXdXaElTSngKZjVFOWphdDl0L2pvZmkzbEdlUW12MFp0R0RXbExHeE9SeDVOdzFOUVZFU3k2N0lid1JJZndtOVFZM1lLWndzcwo5M1JaUWthZUIvN0U3dHZJaHZGdWM1S1E4UUtCZ1FDOUN6VGo4ZnBJS0lYNXZHdmxmczd1dWg5ZEhOR3BkTmZ6ClhXb1k4TytOUzltWDU4d1dtdlgvSmZMUk10OXpnK1BCbFBjaGV5V3o5bENzWnJSWlV6ZFI4UXdHM25uVlBGY3kKK1Jxdmd2akVjcERxc3FBVmpqVXlDQ1Y2aWxLQUIvbDR2RC9kZjhXK0hwOUVLTlk4eS80N3JnVFFhTmo5TFlHKwpTUUR6WjJUWVB3S0JnQlpkY00rdmRGSkY5a2Vxdm4xUDVyZmFyb05ITDNCOUFtVDkrby9SUVJuQlc0L1oyM0g0CnBsMGhzZlQzZ05kdG5zMG8zYTdQTzVCT1BSNDhOTDA5ZllySXlva2I1NnVpV1ZpMStWbHRtd09KeFJjYkc3QjkKUUFDRHpmd3FRTjBlYlF2OHhqejVVVmkwb3VnYzNzNmg5eTBNakJrM0o4eXE5SGQ0N3gzVWpWNnhBb0dBYnFMbgpZbHhVd1BST2JIa1VvR3hWaVN1T2ZYMEhTMmhobGtGZWZaaE1hbUl3eDVGV1JRaU4zYlNFNW1BaW5FVmMvd2RDCmx1cXVoeU1wMWF3Sjhwa2NNQzJsZjBPbkE5L2JuSzVqS3NLNCtxZWVIbTFKK1RPUCtHY0NJRFJoMGlKWW80dHcKeVI0bFNYNDhjYlNBcFhZeHBSWFVKWENuUll6amVNemE3SmpmVVJzQ2dZQmpmTjNXMkc3Q3NWRHlsYy9EanJrNwo1ZmtLOWs1VlJkeG9NeldMSFZ0NU1rVmpsLzludzV6cStnVWhFUHhkZzBNbE9ab2QzcGdKWlkxcVN6N1ExNFBRCjU2SW92Rk1pMng2a1V5dHQrSlJRVGlXU3NOZWRsVld2ZGdlSkdGc3NYbHdoN041ZTl3YzIwR1owSU1RSEtXZ0gKVHRqcEhuWjhWdXE0SXhud1ZEeThzdz09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K", "encoding": "base64", "item": [ "key", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/private/quadlet_demo.key" } ok: [managed_node1] => (item=['ca', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnakNDQW1xZ0F3SUJBZ0lRYSswTDhYRmVSNTZ6cEVBQzVFbTY5VEFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVNBd0hnWURWUVFEREJkTWIyTmhiQ0JUYVdkdWFXNW5JRUYxZEdodmNtbDBlVEVzTUNvR0ExVUVBd3dqTm1KbApaREJpWmpFdE56RTFaVFEzT1dVdFlqTmhORFF3TURJdFpUUTBPV0poWmpNd0hoY05NalF3TnpJM01UWXpOekkzCldoY05NalV3TnpJM01UWXlPVE0xV2pBVU1SSXdFQVlEVlFRREV3bHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ1AxbGNwQ2JiSEVDQWtaRGVCSm94YUp1Q2VPaml3cTRkWQpnMERuSmlmNml0Ny9yMGdITDBuWU9sd3dVSjNLUGJBY2p4eC9tR0JDZi91SENNbmZ4Y3NIckVEQlJYU1Q2aE9PCmY1ZUR0QzZvYWVJTFo2elBBcUJudFlhNDFQalNzUnlLV1ZteTJxamY3UkN5RWE3b1krNEE0bG5RanZtSElGc3gKcDQxSnlGYmZoeUFSQkFFYlk5ai9kUHR3TzIrNUlxaHpORy96SW9NNjZVVjJaT3lhcE54ZnBVRUwrMlgyaHh6ZgoraExydWNNNks5M251b0V1aWJpazJTNlpwVUpGbTMrenlDM0J5dXg1QUR1SWw2NEdNOHdkeURicFNIV1RZWk91CmpvdTUvaDlGSC8xc21JUjNLNk5YTFFZb2ZneVQzVVNTOWh0RldHZ255TnRvVGdhd2dRTlBBZ01CQUFHamdaTXcKZ1pBd0N3WURWUjBQQkFRREFnV2dNQlFHQTFVZEVRUU5NQXVDQ1d4dlkyRnNhRzl6ZERBZEJnTlZIU1VFRmpBVQpCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVUrdkN2Cko0RUNOQ2hVcWtQbEs2R1lkNytjdlZFd0h3WURWUjBqQkJnd0ZvQVU3SHBmbjJSVlFuRkJTOXQ3dFVhc3E2TU4KblBzd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFESGpzbTVSUHB2NXZRbzZuS2xoTEdnVndZajZITy9HUk96YwpTdFZIWmR1aUhaT3k5VmllUDNjTnA0RTNCYitxT2pMZ1ZWUUJOM2Y1b2xLS0pzMnZ0MHg4b3NyaklzRjl2a2IyCmhoZ2wvdjRldG9MVWh3L2hsQUM1NDBKWGllQVdIQ3RaVjB0NHgybGVHKzhEaER3R3BKR1czVEN1WXdONjFuTEUKL1ljU09YTFovbXJSU2FsYVA0cmljWDE4cFZCTDhNb2pFZk5PQ2FxWHp1ZGhtUlEyeTFWMlFvZG52ZGpmWUtyNApiNWJRWDRjRGR5RFA3OWtvR3NLVEtBR1BVRU9vNFh6cG9XSEN2dVNhNVdBcUIrY3pOOEtyYXROYVFkS3h1S2ttCldjbEhBQk40UWxxdnpkTHI4a3VnWmVPNUEvVUI4QlNaME5OZldpcVpIZWt1WEU4RmZ5VT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": [ "ca", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/certs/quadlet_demo.crt" } TASK [fedora.linux_system_roles.certificate : Create return data] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:160 Saturday 27 July 2024 12:37:28 -0400 (0:00:01.138) 0:00:07.413 ********* ok: [managed_node1] => { "ansible_facts": { "certificate_test_certs": { "quadlet_demo": { "ca": "/etc/pki/tls/certs/quadlet_demo.crt", "ca_content": "-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQa+0L8XFeR56zpEAC5Em69TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNmJl\nZDBiZjEtNzE1ZTQ3OWUtYjNhNDQwMDItZTQ0OWJhZjMwHhcNMjQwNzI3MTYzNzI3\nWhcNMjUwNzI3MTYyOTM1WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCP1lcpCbbHECAkZDeBJoxaJuCeOjiwq4dY\ng0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uHCMnfxcsHrEDBRXST6hOO\nf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCyEa7oY+4A4lnQjvmHIFsx\np41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2ZOyapNxfpUEL+2X2hxzf\n+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuIl64GM8wdyDbpSHWTYZOu\njou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNtoTgawgQNPAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU+vCv\nJ4ECNChUqkPlK6GYd7+cvVEwHwYDVR0jBBgwFoAU7Hpfn2RVQnFBS9t7tUasq6MN\nnPswDQYJKoZIhvcNAQELBQADggEBADHjsm5RPpv5vQo6nKlhLGgVwYj6HO/GROzc\nStVHZduiHZOy9VieP3cNp4E3Bb+qOjLgVVQBN3f5olKKJs2vt0x8osrjIsF9vkb2\nhhgl/v4etoLUhw/hlAC540JXieAWHCtZV0t4x2leG+8DhDwGpJGW3TCuYwN61nLE\n/YcSOXLZ/mrRSalaP4ricX18pVBL8MojEfNOCaqXzudhmRQ2y1V2QodnvdjfYKr4\nb5bQX4cDdyDP79koGsKTKAGPUEOo4XzpoWHCvuSa5WAqB+czN8KratNaQdKxuKkm\nWclHABN4QlqvzdLr8kugZeO5A/UB8BSZ0NNfWiqZHekuXE8FfyU=\n-----END CERTIFICATE-----\n", "cert": "/etc/pki/tls/certs/quadlet_demo.crt", "cert_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQa+0L8XFeR56zpEAC5Em69TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNmJl\nZDBiZjEtNzE1ZTQ3OWUtYjNhNDQwMDItZTQ0OWJhZjMwHhcNMjQwNzI3MTYzNzI3\nWhcNMjUwNzI3MTYyOTM1WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCP1lcpCbbHECAkZDeBJoxaJuCeOjiwq4dY\ng0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uHCMnfxcsHrEDBRXST6hOO\nf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCyEa7oY+4A4lnQjvmHIFsx\np41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2ZOyapNxfpUEL+2X2hxzf\n+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuIl64GM8wdyDbpSHWTYZOu\njou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNtoTgawgQNPAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU+vCv\nJ4ECNChUqkPlK6GYd7+cvVEwHwYDVR0jBBgwFoAU7Hpfn2RVQnFBS9t7tUasq6MN\nnPswDQYJKoZIhvcNAQELBQADggEBADHjsm5RPpv5vQo6nKlhLGgVwYj6HO/GROzc\nStVHZduiHZOy9VieP3cNp4E3Bb+qOjLgVVQBN3f5olKKJs2vt0x8osrjIsF9vkb2\nhhgl/v4etoLUhw/hlAC540JXieAWHCtZV0t4x2leG+8DhDwGpJGW3TCuYwN61nLE\n/YcSOXLZ/mrRSalaP4ricX18pVBL8MojEfNOCaqXzudhmRQ2y1V2QodnvdjfYKr4\nb5bQX4cDdyDP79koGsKTKAGPUEOo4XzpoWHCvuSa5WAqB+czN8KratNaQdKxuKkm\nWclHABN4QlqvzdLr8kugZeO5A/UB8BSZ0NNfWiqZHekuXE8FfyU=\n-----END CERTIFICATE-----\n", "key": "/etc/pki/tls/private/quadlet_demo.key", "key_content": "-----BEGIN PRIVATE 
KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCP1lcpCbbHECAk\nZDeBJoxaJuCeOjiwq4dYg0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uH\nCMnfxcsHrEDBRXST6hOOf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCy\nEa7oY+4A4lnQjvmHIFsxp41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2\nZOyapNxfpUEL+2X2hxzf+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuI\nl64GM8wdyDbpSHWTYZOujou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNto\nTgawgQNPAgMBAAECggEAB9eCtv0Nlvq9Fd7U29izLDMUaev4rUEcxumY/0EoStOJ\nZt0yixm75+r+na+3epA0asounpI7JUT+/JSxFN8eu5NbkG3t9iZ73Hf+VbLcHFhx\nd3e2hy64eg7+glMyIaKAUNagOk0LifHNyDeIlqjkD+5HQUSaPBC6iBF/dXdTrSoG\n23TWGO7N1p+tmGEZdfKXS8+xjFt5lNZWHi2qTeekBiy5mMaHo9rOOLGUD2duBYya\nK7UUA9bogakwBh2Ai6UObYY0AuXy3nWuMDoeAVPznXIonjk6KmdwFCBafFu3JJW5\nJtpsgqmy4Y9CdjKL+drnKnItrAQ7iUHXwdq1OOMgoQKBgQDCyDabNT3OEYFMEcOD\ntFc4AqqBrHixDhSMT5dqcS54t+N24UGM8qQ9pWhuAImtJKW0H82js3fEuwWhISJx\nf5E9jat9t/jofi3lGeQmv0ZtGDWlLGxORx5Nw1NQVESy67IbwRIfwm9QY3YKZwss\n93RZQkaeB/7E7tvIhvFuc5KQ8QKBgQC9CzTj8fpIKIX5vGvlfs7uuh9dHNGpdNfz\nXWoY8O+NS9mX58wWmvX/JfLRMt9zg+PBlPcheyWz9lCsZrRZUzdR8QwG3nnVPFcy\n+RqvgvjEcpDqsqAVjjUyCCV6ilKAB/l4vD/df8W+Hp9EKNY8y/47rgTQaNj9LYG+\nSQDzZ2TYPwKBgBZdcM+vdFJF9keqvn1P5rfaroNHL3B9AmT9+o/RQRnBW4/Z23H4\npl0hsfT3gNdtns0o3a7PO5BOPR48NL09fYrIyokb56uiWVi1+VltmwOJxRcbG7B9\nQACDzfwqQN0ebQv8xjz5UVi0ougc3s6h9y0MjBk3J8yq9Hd47x3UjV6xAoGAbqLn\nYlxUwPRObHkUoGxViSuOfX0HS2hhlkFefZhMamIwx5FWRQiN3bSE5mAinEVc/wdC\nluquhyMp1awJ8pkcMC2lf0OnA9/bnK5jKsK4+qeeHm1J+TOP+GcCIDRh0iJYo4tw\nyR4lSX48cbSApXYxpRXUJXCnRYzjeMza7JjfURsCgYBjfN3W2G7CsVDylc/Djrk7\n5fkK9k5VRdxoMzWLHVt5MkVjl/9nw5zq+gUhEPxdg0MlOZod3pgJZY1qSz7Q14PQ\n56IovFMi2x6kUytt+JRQTiWSsNedlVWvdgeJGFssXlwh7N5e9wc20GZ0IMQHKWgH\nTtjpHnZ8Vuq4IxnwVDy8sw==\n-----END PRIVATE KEY-----\n" } } }, "changed": false } TASK [fedora.linux_system_roles.certificate : Stop tracking certificates] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:176 Saturday 27 July 2024 12:37:28 -0400 (0:00:00.032) 0:00:07.445 ********* ok: [managed_node1] => (item={'cert': '/etc/pki/tls/certs/quadlet_demo.crt', 'cert_content': '-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQa+0L8XFeR56zpEAC5Em69TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNmJl\nZDBiZjEtNzE1ZTQ3OWUtYjNhNDQwMDItZTQ0OWJhZjMwHhcNMjQwNzI3MTYzNzI3\nWhcNMjUwNzI3MTYyOTM1WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCP1lcpCbbHECAkZDeBJoxaJuCeOjiwq4dY\ng0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uHCMnfxcsHrEDBRXST6hOO\nf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCyEa7oY+4A4lnQjvmHIFsx\np41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2ZOyapNxfpUEL+2X2hxzf\n+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuIl64GM8wdyDbpSHWTYZOu\njou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNtoTgawgQNPAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU+vCv\nJ4ECNChUqkPlK6GYd7+cvVEwHwYDVR0jBBgwFoAU7Hpfn2RVQnFBS9t7tUasq6MN\nnPswDQYJKoZIhvcNAQELBQADggEBADHjsm5RPpv5vQo6nKlhLGgVwYj6HO/GROzc\nStVHZduiHZOy9VieP3cNp4E3Bb+qOjLgVVQBN3f5olKKJs2vt0x8osrjIsF9vkb2\nhhgl/v4etoLUhw/hlAC540JXieAWHCtZV0t4x2leG+8DhDwGpJGW3TCuYwN61nLE\n/YcSOXLZ/mrRSalaP4ricX18pVBL8MojEfNOCaqXzudhmRQ2y1V2QodnvdjfYKr4\nb5bQX4cDdyDP79koGsKTKAGPUEOo4XzpoWHCvuSa5WAqB+czN8KratNaQdKxuKkm\nWclHABN4QlqvzdLr8kugZeO5A/UB8BSZ0NNfWiqZHekuXE8FfyU=\n-----END CERTIFICATE-----\n', 'key': '/etc/pki/tls/private/quadlet_demo.key', 'key_content': '-----BEGIN PRIVATE 
KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCP1lcpCbbHECAk\nZDeBJoxaJuCeOjiwq4dYg0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uH\nCMnfxcsHrEDBRXST6hOOf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCy\nEa7oY+4A4lnQjvmHIFsxp41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2\nZOyapNxfpUEL+2X2hxzf+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuI\nl64GM8wdyDbpSHWTYZOujou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNto\nTgawgQNPAgMBAAECggEAB9eCtv0Nlvq9Fd7U29izLDMUaev4rUEcxumY/0EoStOJ\nZt0yixm75+r+na+3epA0asounpI7JUT+/JSxFN8eu5NbkG3t9iZ73Hf+VbLcHFhx\nd3e2hy64eg7+glMyIaKAUNagOk0LifHNyDeIlqjkD+5HQUSaPBC6iBF/dXdTrSoG\n23TWGO7N1p+tmGEZdfKXS8+xjFt5lNZWHi2qTeekBiy5mMaHo9rOOLGUD2duBYya\nK7UUA9bogakwBh2Ai6UObYY0AuXy3nWuMDoeAVPznXIonjk6KmdwFCBafFu3JJW5\nJtpsgqmy4Y9CdjKL+drnKnItrAQ7iUHXwdq1OOMgoQKBgQDCyDabNT3OEYFMEcOD\ntFc4AqqBrHixDhSMT5dqcS54t+N24UGM8qQ9pWhuAImtJKW0H82js3fEuwWhISJx\nf5E9jat9t/jofi3lGeQmv0ZtGDWlLGxORx5Nw1NQVESy67IbwRIfwm9QY3YKZwss\n93RZQkaeB/7E7tvIhvFuc5KQ8QKBgQC9CzTj8fpIKIX5vGvlfs7uuh9dHNGpdNfz\nXWoY8O+NS9mX58wWmvX/JfLRMt9zg+PBlPcheyWz9lCsZrRZUzdR8QwG3nnVPFcy\n+RqvgvjEcpDqsqAVjjUyCCV6ilKAB/l4vD/df8W+Hp9EKNY8y/47rgTQaNj9LYG+\nSQDzZ2TYPwKBgBZdcM+vdFJF9keqvn1P5rfaroNHL3B9AmT9+o/RQRnBW4/Z23H4\npl0hsfT3gNdtns0o3a7PO5BOPR48NL09fYrIyokb56uiWVi1+VltmwOJxRcbG7B9\nQACDzfwqQN0ebQv8xjz5UVi0ougc3s6h9y0MjBk3J8yq9Hd47x3UjV6xAoGAbqLn\nYlxUwPRObHkUoGxViSuOfX0HS2hhlkFefZhMamIwx5FWRQiN3bSE5mAinEVc/wdC\nluquhyMp1awJ8pkcMC2lf0OnA9/bnK5jKsK4+qeeHm1J+TOP+GcCIDRh0iJYo4tw\nyR4lSX48cbSApXYxpRXUJXCnRYzjeMza7JjfURsCgYBjfN3W2G7CsVDylc/Djrk7\n5fkK9k5VRdxoMzWLHVt5MkVjl/9nw5zq+gUhEPxdg0MlOZod3pgJZY1qSz7Q14PQ\n56IovFMi2x6kUytt+JRQTiWSsNedlVWvdgeJGFssXlwh7N5e9wc20GZ0IMQHKWgH\nTtjpHnZ8Vuq4IxnwVDy8sw==\n-----END PRIVATE KEY-----\n', 'ca': '/etc/pki/tls/certs/quadlet_demo.crt', 'ca_content': '-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQa+0L8XFeR56zpEAC5Em69TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNmJl\nZDBiZjEtNzE1ZTQ3OWUtYjNhNDQwMDItZTQ0OWJhZjMwHhcNMjQwNzI3MTYzNzI3\nWhcNMjUwNzI3MTYyOTM1WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCP1lcpCbbHECAkZDeBJoxaJuCeOjiwq4dY\ng0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uHCMnfxcsHrEDBRXST6hOO\nf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCyEa7oY+4A4lnQjvmHIFsx\np41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2ZOyapNxfpUEL+2X2hxzf\n+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuIl64GM8wdyDbpSHWTYZOu\njou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNtoTgawgQNPAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU+vCv\nJ4ECNChUqkPlK6GYd7+cvVEwHwYDVR0jBBgwFoAU7Hpfn2RVQnFBS9t7tUasq6MN\nnPswDQYJKoZIhvcNAQELBQADggEBADHjsm5RPpv5vQo6nKlhLGgVwYj6HO/GROzc\nStVHZduiHZOy9VieP3cNp4E3Bb+qOjLgVVQBN3f5olKKJs2vt0x8osrjIsF9vkb2\nhhgl/v4etoLUhw/hlAC540JXieAWHCtZV0t4x2leG+8DhDwGpJGW3TCuYwN61nLE\n/YcSOXLZ/mrRSalaP4ricX18pVBL8MojEfNOCaqXzudhmRQ2y1V2QodnvdjfYKr4\nb5bQX4cDdyDP79koGsKTKAGPUEOo4XzpoWHCvuSa5WAqB+czN8KratNaQdKxuKkm\nWclHABN4QlqvzdLr8kugZeO5A/UB8BSZ0NNfWiqZHekuXE8FfyU=\n-----END CERTIFICATE-----\n'}) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "getcert", "stop-tracking", "-f", "/etc/pki/tls/certs/quadlet_demo.crt" ], "delta": "0:00:00.032809", "end": "2024-07-27 12:37:29.245982", "item": { "ca": "/etc/pki/tls/certs/quadlet_demo.crt", "ca_content": "-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQa+0L8XFeR56zpEAC5Em69TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNmJl\nZDBiZjEtNzE1ZTQ3OWUtYjNhNDQwMDItZTQ0OWJhZjMwHhcNMjQwNzI3MTYzNzI3\nWhcNMjUwNzI3MTYyOTM1WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCP1lcpCbbHECAkZDeBJoxaJuCeOjiwq4dY\ng0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uHCMnfxcsHrEDBRXST6hOO\nf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCyEa7oY+4A4lnQjvmHIFsx\np41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2ZOyapNxfpUEL+2X2hxzf\n+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuIl64GM8wdyDbpSHWTYZOu\njou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNtoTgawgQNPAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU+vCv\nJ4ECNChUqkPlK6GYd7+cvVEwHwYDVR0jBBgwFoAU7Hpfn2RVQnFBS9t7tUasq6MN\nnPswDQYJKoZIhvcNAQELBQADggEBADHjsm5RPpv5vQo6nKlhLGgVwYj6HO/GROzc\nStVHZduiHZOy9VieP3cNp4E3Bb+qOjLgVVQBN3f5olKKJs2vt0x8osrjIsF9vkb2\nhhgl/v4etoLUhw/hlAC540JXieAWHCtZV0t4x2leG+8DhDwGpJGW3TCuYwN61nLE\n/YcSOXLZ/mrRSalaP4ricX18pVBL8MojEfNOCaqXzudhmRQ2y1V2QodnvdjfYKr4\nb5bQX4cDdyDP79koGsKTKAGPUEOo4XzpoWHCvuSa5WAqB+czN8KratNaQdKxuKkm\nWclHABN4QlqvzdLr8kugZeO5A/UB8BSZ0NNfWiqZHekuXE8FfyU=\n-----END CERTIFICATE-----\n", "cert": "/etc/pki/tls/certs/quadlet_demo.crt", "cert_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQa+0L8XFeR56zpEAC5Em69TANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNmJl\nZDBiZjEtNzE1ZTQ3OWUtYjNhNDQwMDItZTQ0OWJhZjMwHhcNMjQwNzI3MTYzNzI3\nWhcNMjUwNzI3MTYyOTM1WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCP1lcpCbbHECAkZDeBJoxaJuCeOjiwq4dY\ng0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uHCMnfxcsHrEDBRXST6hOO\nf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCyEa7oY+4A4lnQjvmHIFsx\np41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2ZOyapNxfpUEL+2X2hxzf\n+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuIl64GM8wdyDbpSHWTYZOu\njou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNtoTgawgQNPAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU+vCv\nJ4ECNChUqkPlK6GYd7+cvVEwHwYDVR0jBBgwFoAU7Hpfn2RVQnFBS9t7tUasq6MN\nnPswDQYJKoZIhvcNAQELBQADggEBADHjsm5RPpv5vQo6nKlhLGgVwYj6HO/GROzc\nStVHZduiHZOy9VieP3cNp4E3Bb+qOjLgVVQBN3f5olKKJs2vt0x8osrjIsF9vkb2\nhhgl/v4etoLUhw/hlAC540JXieAWHCtZV0t4x2leG+8DhDwGpJGW3TCuYwN61nLE\n/YcSOXLZ/mrRSalaP4ricX18pVBL8MojEfNOCaqXzudhmRQ2y1V2QodnvdjfYKr4\nb5bQX4cDdyDP79koGsKTKAGPUEOo4XzpoWHCvuSa5WAqB+czN8KratNaQdKxuKkm\nWclHABN4QlqvzdLr8kugZeO5A/UB8BSZ0NNfWiqZHekuXE8FfyU=\n-----END CERTIFICATE-----\n", "key": "/etc/pki/tls/private/quadlet_demo.key", "key_content": "-----BEGIN PRIVATE 
KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCP1lcpCbbHECAk\nZDeBJoxaJuCeOjiwq4dYg0DnJif6it7/r0gHL0nYOlwwUJ3KPbAcjxx/mGBCf/uH\nCMnfxcsHrEDBRXST6hOOf5eDtC6oaeILZ6zPAqBntYa41PjSsRyKWVmy2qjf7RCy\nEa7oY+4A4lnQjvmHIFsxp41JyFbfhyARBAEbY9j/dPtwO2+5IqhzNG/zIoM66UV2\nZOyapNxfpUEL+2X2hxzf+hLrucM6K93nuoEuibik2S6ZpUJFm3+zyC3Byux5ADuI\nl64GM8wdyDbpSHWTYZOujou5/h9FH/1smIR3K6NXLQYofgyT3USS9htFWGgnyNto\nTgawgQNPAgMBAAECggEAB9eCtv0Nlvq9Fd7U29izLDMUaev4rUEcxumY/0EoStOJ\nZt0yixm75+r+na+3epA0asounpI7JUT+/JSxFN8eu5NbkG3t9iZ73Hf+VbLcHFhx\nd3e2hy64eg7+glMyIaKAUNagOk0LifHNyDeIlqjkD+5HQUSaPBC6iBF/dXdTrSoG\n23TWGO7N1p+tmGEZdfKXS8+xjFt5lNZWHi2qTeekBiy5mMaHo9rOOLGUD2duBYya\nK7UUA9bogakwBh2Ai6UObYY0AuXy3nWuMDoeAVPznXIonjk6KmdwFCBafFu3JJW5\nJtpsgqmy4Y9CdjKL+drnKnItrAQ7iUHXwdq1OOMgoQKBgQDCyDabNT3OEYFMEcOD\ntFc4AqqBrHixDhSMT5dqcS54t+N24UGM8qQ9pWhuAImtJKW0H82js3fEuwWhISJx\nf5E9jat9t/jofi3lGeQmv0ZtGDWlLGxORx5Nw1NQVESy67IbwRIfwm9QY3YKZwss\n93RZQkaeB/7E7tvIhvFuc5KQ8QKBgQC9CzTj8fpIKIX5vGvlfs7uuh9dHNGpdNfz\nXWoY8O+NS9mX58wWmvX/JfLRMt9zg+PBlPcheyWz9lCsZrRZUzdR8QwG3nnVPFcy\n+RqvgvjEcpDqsqAVjjUyCCV6ilKAB/l4vD/df8W+Hp9EKNY8y/47rgTQaNj9LYG+\nSQDzZ2TYPwKBgBZdcM+vdFJF9keqvn1P5rfaroNHL3B9AmT9+o/RQRnBW4/Z23H4\npl0hsfT3gNdtns0o3a7PO5BOPR48NL09fYrIyokb56uiWVi1+VltmwOJxRcbG7B9\nQACDzfwqQN0ebQv8xjz5UVi0ougc3s6h9y0MjBk3J8yq9Hd47x3UjV6xAoGAbqLn\nYlxUwPRObHkUoGxViSuOfX0HS2hhlkFefZhMamIwx5FWRQiN3bSE5mAinEVc/wdC\nluquhyMp1awJ8pkcMC2lf0OnA9/bnK5jKsK4+qeeHm1J+TOP+GcCIDRh0iJYo4tw\nyR4lSX48cbSApXYxpRXUJXCnRYzjeMza7JjfURsCgYBjfN3W2G7CsVDylc/Djrk7\n5fkK9k5VRdxoMzWLHVt5MkVjl/9nw5zq+gUhEPxdg0MlOZod3pgJZY1qSz7Q14PQ\n56IovFMi2x6kUytt+JRQTiWSsNedlVWvdgeJGFssXlwh7N5e9wc20GZ0IMQHKWgH\nTtjpHnZ8Vuq4IxnwVDy8sw==\n-----END PRIVATE KEY-----\n" }, "rc": 0, "start": "2024-07-27 12:37:29.213173" } STDOUT: Request "20240727163727" removed. 
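The tasks above cover the full certificate lifecycle this test needs: the certificate role asks certmonger for a self-signed certificate for "localhost", slurps the resulting cert, key, and CA files into facts, and then stops certmonger from tracking the request (getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt), since only the generated files matter from here on. Below is a minimal sketch of the request that tests_quadlet_demo.yml:33 presumably makes; the certificate_requests values are copied verbatim from the loop item logged above, while the surrounding play and task structure is an assumption.

    # sketch of the "Generate certificates" step; request values come from
    # the logged loop item, the play/task wrapper is assumed
    - hosts: managed_node1
      tasks:
        - name: Generate certificates
          ansible.builtin.include_role:
            name: fedora.linux_system_roles.certificate
          vars:
            certificate_requests:
              - name: quadlet_demo   # -> /etc/pki/tls/certs/quadlet_demo.crt
                dns:                 #    and /etc/pki/tls/private/quadlet_demo.key
                  - localhost
                ca: self-sign        # certmonger's local self-signing CA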
TASK [fedora.linux_system_roles.certificate : Remove files] ******************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:181 Saturday 27 July 2024 12:37:29 -0400 (0:00:00.488) 0:00:07.934 ********* changed: [managed_node1] => (item=/etc/pki/tls/certs/quadlet_demo.crt) => { "ansible_loop_var": "item", "changed": true, "item": "/etc/pki/tls/certs/quadlet_demo.crt", "path": "/etc/pki/tls/certs/quadlet_demo.crt", "state": "absent" } changed: [managed_node1] => (item=/etc/pki/tls/private/quadlet_demo.key) => { "ansible_loop_var": "item", "changed": true, "item": "/etc/pki/tls/private/quadlet_demo.key", "path": "/etc/pki/tls/private/quadlet_demo.key", "state": "absent" } ok: [managed_node1] => (item=/etc/pki/tls/certs/quadlet_demo.crt) => { "ansible_loop_var": "item", "changed": false, "item": "/etc/pki/tls/certs/quadlet_demo.crt", "path": "/etc/pki/tls/certs/quadlet_demo.crt", "state": "absent" } TASK [Run the role] ************************************************************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:44 Saturday 27 July 2024 12:37:30 -0400 (0:00:01.074) 0:00:09.008 ********* included: fedora.linux_system_roles.podman for managed_node1 TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.069) 0:00:09.077 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.028) 0:00:09.106 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.024) 0:00:09.130 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.349) 0:00:09.480 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:20 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.024) 0:00:09.504 ********* ok: [managed_node1] => (item=RedHat.yml) => { "ansible_facts": { "__podman_packages": [ "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" } skipping: [managed_node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": 
false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=CentOS_9.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS_9.yml", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=CentOS_9.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS_9.yml", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 27 July 2024 12:37:30 -0400 (0:00:00.039) 0:00:09.544 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 27 July 2024 12:37:32 -0400 (0:00:01.131) 0:00:10.675 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.036) 0:00:10.711 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages))", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:22 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.037) 0:00:10.749 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.028741", "end": "2024-07-27 12:37:32.441500", "rc": 0, "start": "2024-07-27 12:37:32.412759" } STDOUT: podman version 5.1.2 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.393) 0:00:11.142 ********* ok: [managed_node1] => { "ansible_facts": { "podman_version": "5.1.2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:32 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.037) 0:00:11.180 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:39 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.034) 0:00:11.214 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:49 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.044) 0:00:11.259 ********* META: end_host conditional evaluated to False, continuing execution for managed_node1 skipping: [managed_node1] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed_node1" } MSG: end_host conditional evaluated to false, continuing execution for managed_node1 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.045) 0:00:11.305 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:37:32 -0400 (0:00:00.066) 0:00:11.371 ********* ok: [managed_node1] => { "ansible_facts": { "getent_passwd": { "root": [ "x", "0", "0", "root", "/root", "/bin/bash" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:37:33 -0400 (0:00:00.512) 0:00:11.884 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:37:33 -0400 (0:00:00.035) 0:00:11.920 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:37:33 -0400 (0:00:00.043) 0:00:11.963 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:37:33 -0400 (0:00:00.371) 0:00:12.335 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:37:33 -0400 (0:00:00.040) 0:00:12.376 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097764.7689178, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "fc3588658699d4eb56c9a9cdf6678e83337ea7ee", "ctime": 1722097725.4462144, "dev": 51713, "device_type": 0, "executable": true, 
"exists": true, "gid": 0, "gr_name": "root", "inode": 6544612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719992328.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15680, "uid": 0, "version": "648322932", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.369) 0:00:12.746 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.029) 0:00:12.776 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.029) 0:00:12.805 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.028) 0:00:12.834 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.028) 0:00:12.863 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.028) 0:00:12.891 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.027) 0:00:12.919 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not 
__podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.028) 0:00:12.948 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:62 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.027) 0:00:12.976 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:71 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.058) 0:00:13.034 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.061) 0:00:13.096 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.030) 0:00:13.127 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:74 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.029) 0:00:13.156 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.087) 0:00:13.244 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.031) 0:00:13.276 ********* skipping: 
[managed_node1] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:77 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.030) 0:00:13.306 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:5 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.063) 0:00:13.370 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:13 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.029) 0:00:13.400 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.030) 0:00:13.430 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:6 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.065) 0:00:13.496 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:14 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.030) 0:00:13.526 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:19 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.030) 0:00:13.556 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:25 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.032) 0:00:13.589 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] 
************************************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:86 Saturday 27 July 2024 12:37:34 -0400 (0:00:00.030) 0:00:13.619 ********* included: fedora.linux_system_roles.firewall for managed_node1 TASK [fedora.linux_system_roles.firewall : Setup firewalld] ******************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:2 Saturday 27 July 2024 12:37:35 -0400 (0:00:00.114) 0:00:13.734 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml for managed_node1 TASK [fedora.linux_system_roles.firewall : Ensure ansible_facts used by role] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:2 Saturday 27 July 2024 12:37:35 -0400 (0:00:00.057) 0:00:13.791 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if system is ostree] ********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:10 Saturday 27 July 2024 12:37:35 -0400 (0:00:00.062) 0:00:13.854 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.firewall : Set flag to indicate system is ostree] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:15 Saturday 27 July 2024 12:37:35 -0400 (0:00:00.372) 0:00:14.227 ********* ok: [managed_node1] => { "ansible_facts": { "__firewall_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.firewall : Check if transactional-update exists in /sbin] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:22 Saturday 27 July 2024 12:37:35 -0400 (0:00:00.038) 0:00:14.265 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.firewall : Set flag if transactional-update exists] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:27 Saturday 27 July 2024 12:37:36 -0400 (0:00:00.370) 0:00:14.636 ********* ok: [managed_node1] => { "ansible_facts": { "__firewall_is_transactional": false }, "changed": false } TASK [fedora.linux_system_roles.firewall : Install firewalld] ****************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:31 Saturday 27 July 2024 12:37:36 -0400 (0:00:00.041) 0:00:14.677 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: firewalld TASK [fedora.linux_system_roles.firewall : Notify user that reboot is needed to apply changes] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:43 Saturday 27 July 2024 12:37:36 -0400 (0:00:00.923) 0:00:15.601 ********* skipping: [managed_node1] => { "false_condition": "__firewall_is_transactional | d(false)" } TASK [fedora.linux_system_roles.firewall : Reboot transactional update systems] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:48 
Saturday 27 July 2024 12:37:36 -0400 (0:00:00.031) 0:00:15.633 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Fail if reboot is needed and not set] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:53 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.031) 0:00:15.665 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Collect service facts] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:5 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.030) 0:00:15.695 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Attempt to stop and disable conflicting services] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:9 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.029) 0:00:15.725 ********* skipping: [managed_node1] => (item=nftables) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "nftables", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=iptables) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "iptables", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=ufw) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "ufw", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.firewall : Unmask firewalld service] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:22 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.036) 0:00:15.761 ********* ok: [managed_node1] => { "changed": false, "name": "firewalld", "status": { "AccessSELinuxContext": "system_u:object_r:firewalld_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ActiveEnterTimestampMonotonic": "336405581", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "polkit.service dbus-broker.service dbus.socket sysinit.target system.slice basic.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:30:03 EDT", "AssertTimestampMonotonic": "335732159", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "1059354000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": 
"cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ConditionTimestampMonotonic": "335732155", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target ip6tables.service ebtables.service ipset.service nftables.service iptables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlGroupId": "4302", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "\"man:firewalld(1)\"", "DynamicUser": "no", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "13034", "ExecMainStartTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ExecMainStartTimestampMonotonic": "335744183", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:30:03 EDT", "InactiveExitTimestampMonotonic": "335744735", "InvocationID": "65026572be3e4e69abbcad284fe4fa9d", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", 
"LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13964", "LimitNPROCSoft": "13964", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13964", "LimitSIGPENDINGSoft": "13964", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "13034", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "33079296", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "dbus.socket sysinit.target system.slice", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2024-07-27 12:37:06 EDT", "StateChangeTimestampMonotonic": "758987730", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22342", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 
30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Enable and start firewalld service] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:28 Saturday 27 July 2024 12:37:37 -0400 (0:00:00.525) 0:00:16.287 ********* ok: [managed_node1] => { "changed": false, "enabled": true, "name": "firewalld", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:firewalld_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ActiveEnterTimestampMonotonic": "336405581", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "polkit.service dbus-broker.service dbus.socket sysinit.target system.slice basic.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:30:03 EDT", "AssertTimestampMonotonic": "335732159", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "1059354000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ConditionTimestampMonotonic": "335732155", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target ip6tables.service ebtables.service ipset.service nftables.service iptables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlGroupId": "4302", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "\"man:firewalld(1)\"", "DynamicUser": "no", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "13034", "ExecMainStartTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ExecMainStartTimestampMonotonic": "335744183", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; 
code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:30:03 EDT", "InactiveExitTimestampMonotonic": "335744735", "InvocationID": "65026572be3e4e69abbcad284fe4fa9d", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13964", "LimitNPROCSoft": "13964", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13964", "LimitSIGPENDINGSoft": "13964", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "13034", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "33079296", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", 
"PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "dbus.socket sysinit.target system.slice", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2024-07-27 12:37:06 EDT", "StateChangeTimestampMonotonic": "758987730", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22342", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Check if previous replaced is defined] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:34 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.526) 0:00:16.814 ********* ok: [managed_node1] => { "ansible_facts": { "__firewall_previous_replaced": false, "__firewall_python_cmd": "/usr/bin/python3.9", "__firewall_report_changed": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Get config files, checksums before and remove] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:43 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.041) 0:00:16.856 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Tell firewall module it is able to report changed] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:55 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.029) 0:00:16.886 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": 
"Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Configure firewall] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71 Saturday 27 July 2024 12:37:38 -0400 (0:00:00.028) 0:00:16.914 ********* changed: [managed_node1] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "__firewall_changed": true, "ansible_loop_var": "item", "changed": true, "item": { "port": "8000/tcp", "state": "enabled" } } changed: [managed_node1] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "__firewall_changed": true, "ansible_loop_var": "item", "changed": true, "item": { "port": "9000/tcp", "state": "enabled" } } TASK [fedora.linux_system_roles.firewall : Gather firewall config information] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:120 Saturday 27 July 2024 12:37:39 -0400 (0:00:01.197) 0:00:18.112 ********* skipping: [managed_node1] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall | length == 1", "item": { "port": "8000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall | length == 1", "item": { "port": "9000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } skipping: [managed_node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:130 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.055) 0:00:18.168 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall | length == 1", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Gather firewall config if no arguments] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:139 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.035) 0:00:18.203 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall == None or firewall | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:144 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.032) 0:00:18.235 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall == None or firewall | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Get config files, checksums after] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:153 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.059) 0:00:18.295 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Calculate what has changed] ********* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:163 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.031) 0:00:18.326 ********* 
skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Show diffs] ************************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:169 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.028) 0:00:18.355 ********* skipping: [managed_node1] => { "false_condition": "__firewall_previous_replaced | bool" } TASK [Manage selinux for specified ports] ************************************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:93 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.046) 0:00:18.401 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:100 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.031) 0:00:18.433 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:104 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.030) 0:00:18.463 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:113 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.027) 0:00:18.491 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:122 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.026) 0:00:18.518 ********* fatal: [managed_node1]: FAILED! => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result" } TASK [Dump journal] ************************************************************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:124 Saturday 27 July 2024 12:37:39 -0400 (0:00:00.029) 0:00:18.547 ********* fatal: [managed_node1]: FAILED! 
=> { "changed": false, "cmd": [ "journalctl", "-ex" ], "delta": "0:00:00.040257", "end": "2024-07-27 12:37:40.247807", "failed_when_result": true, "rc": 0, "start": "2024-07-27 12:37:40.207550" } STDOUT: Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth0 entered promiscuous mode Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered forwarding state Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started /usr/libexec/podman/aardvark-dns --config /run/user/3001/containers/networks/aardvark-dns -p 53 run. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 101. Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started libcrun container. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 105. Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started libcrun container. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 110. Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com podman[25631]: Pod: Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com podman[25631]: b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066 Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com podman[25631]: Container: Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com podman[25631]: 7bf805548a794144af77c1d1181503babd6ce6686ca4ba623f4458466f8055f2 Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started A template for running K8s workloads via podman-kube-play. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 74. 
Jul 27 12:32:19 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[25626]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:32:20 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[25906]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:32:20 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26014]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:32:21 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26122]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:22 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26231]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:23 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26339]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:23 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26446]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:24 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:32:24 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:32:24 ip-10-31-12-229.us-east-1.aws.redhat.com podman[26570]: 2024-07-27 12:32:24.825463549 -0400 EDT m=+0.460810648 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:32:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26691]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:25 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
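
The ansible-ansible.legacy.command entry just above shows how the role derives a systemd unit name from a kube file path before managing it. A standalone sketch of that step (the task name and register variable are hypothetical):

    - name: Compute the podman-kube unit name for httpd2.yml
      ansible.builtin.command:
        argv:
          - systemd-escape
          - --template
          - podman-kube@.service
          - /etc/containers/ansible-kubernetes.d/httpd2.yml
      register: __kube_unit_name  # hypothetical; the escaped unit name lands in stdout
      changed_when: false

systemd-escape rewrites the path into a valid unit instance name, so the role can later enable and start the corresponding podman-kube@<escaped-path>.service template instance.
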
Jul 27 12:32:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26798]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26905]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:32:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[26990]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1722097945.7948165-8817-249596236438368/.source.yml _original_basename=.jq7_mvzx follow=False checksum=b3561c8f986bfa70cd5476459ea6f2110d65a0a2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27097]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:32:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Created slice cgroup machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice. ░░ Subject: A start job for unit machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice has finished successfully. ░░ ░░ The job identifier is 1609. 
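
The podman_play invocation logged just above reduces to a short task. A sketch using only the parameters the journal shows as set (all others were left at their defaults):

    - name: Start the httpd2 pod from its kube file
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/httpd2.yml
        state: started
        executable: podman
        debug: true
        log_level: debug

With debug enabled, the module logs the underlying command it runs, which appears further down as /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml, together with the resulting pod and container IDs.
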
Jul 27 12:32:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:26.95605532 -0400 EDT m=+0.088958376 container create afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 (image=localhost/podman-pause:5.1.2-1720678294, name=c904233a5e6a-infra, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, io.buildah.version=1.36.0) Jul 27 12:32:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:26.963183181 -0400 EDT m=+0.096086222 pod create c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799 (image=, name=httpd2) Jul 27 12:32:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:26.966734432 -0400 EDT m=+0.099637650 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:32:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:26.997651581 -0400 EDT m=+0.130554625 container create 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, io.containers.autoupdate=registry, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, app=test) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0223] manager: (podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth0 entered promiscuous mode Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0363] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/4) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-udevd[27116]: Network interface NamePolicy= disabled on kernel command line. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered forwarding state Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0460] device (veth0): carrier: link connected Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0462] device (podman1): carrier: link connected Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-udevd[27117]: Network interface NamePolicy= disabled on kernel command line. 
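
From the create events just above (pod httpd2 with infra container c904233a5e6a-infra, and container httpd2-httpd2 from quay.io/libpod/testimage:20210610 carrying the label app=test), the shape of the kube file can be inferred. A hedged reconstruction of what /etc/containers/ansible-kubernetes.d/httpd2.yml plausibly contains; anything beyond the names, image, and label shown in the journal is an assumption:

    apiVersion: v1
    kind: Pod
    metadata:
      name: httpd2          # pod name from the "pod create" event
      labels:
        app: test           # label carried on the container create event
    spec:
      containers:
        - name: httpd2      # podman kube play names containers <pod>-<container>,
          image: quay.io/libpod/testimage:20210610   # hence "httpd2-httpd2" above
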
Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0705] device (podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0716] device (podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0724] device (podman1): Activation: starting connection 'podman1' (5babca86-8d14-45b2-87bb-27d56fe6a6a3) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0728] device (podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0731] device (podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0733] device (podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.0737] device (podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Starting Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 1614. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 1614. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.1167] device (podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.1171] device (podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097947.1178] device (podman1): Activation: successful, device activated. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started /usr/libexec/podman/aardvark-dns --config /run/containers/networks/aardvark-dns -p 53 run. ░░ Subject: A start job for unit run-rf635b93922984e84a00f3e304e18977c.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-rf635b93922984e84a00f3e304e18977c.scope has finished successfully. ░░ ░░ The job identifier is 1678. 
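
At this point the podman1 bridge is up, NetworkManager has assumed it as an externally managed device, and aardvark-dns has been started to serve name resolution for the pod network (its listen address and forwarder appear in the entries that follow). A hypothetical verification task for the same state, assuming the network name that the debug output further down reports as podman-default-kube-network:

    - name: Inspect the default kube network (verification only)
      ansible.builtin.command:
        argv:
          - podman
          - network
          - inspect
          - podman-default-kube-network
      register: __net_inspect   # hypothetical variable name
      changed_when: false
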
Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com aardvark-dns[27209]: starting aardvark on a child with pid 27210 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com aardvark-dns[27210]: Successfully parsed config Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com aardvark-dns[27210]: Listen v4 ip {"podman-default-kube-network": [10.89.0.1]} Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com aardvark-dns[27210]: Listen v6 ip {} Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com aardvark-dns[27210]: Will Forward dns requests to udp://1.1.1.1:53 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com aardvark-dns[27210]: Starting listen on udp 10.89.0.1:53 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libpod-conmon-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope. ░░ Subject: A start job for unit libpod-conmon-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-conmon-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope has finished successfully. ░░ ░░ The job identifier is 1682. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27217]: conmon afcc88467a690a167268 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach} Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27217]: conmon afcc88467a690a167268 : terminal_ctrl_fd: 12 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27217]: conmon afcc88467a690a167268 : winsz read side: 16, winsz write side: 17 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.fGaMyc.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.fGaMyc.mount has successfully entered the 'dead' state. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. ░░ Subject: A start job for unit libpod-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope has finished successfully. ░░ ░░ The job identifier is 1687. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27217]: conmon afcc88467a690a167268 : container PID: 27219 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:27.3182544 -0400 EDT m=+0.451157635 container init afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 (image=localhost/podman-pause:5.1.2-1720678294, name=c904233a5e6a-infra, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, io.buildah.version=1.36.0) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:27.323054071 -0400 EDT m=+0.455957309 container start afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 (image=localhost/podman-pause:5.1.2-1720678294, name=c904233a5e6a-infra, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, io.buildah.version=1.36.0) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope. 
░░ Subject: A start job for unit libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has finished successfully. ░░ ░░ The job identifier is 1692. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27222]: conmon 3bb4f888f7fc283d2f9a : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/11/attach} Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27222]: conmon 3bb4f888f7fc283d2f9a : terminal_ctrl_fd: 11 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27222]: conmon 3bb4f888f7fc283d2f9a : winsz read side: 15, winsz write side: 16 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. ░░ Subject: A start job for unit libpod-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has finished successfully. ░░ ░░ The job identifier is 1697. Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27222]: conmon 3bb4f888f7fc283d2f9a : container PID: 27224 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:27.383518314 -0400 EDT m=+0.516421687 container init 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:27.38757565 -0400 EDT m=+0.520479049 container start 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27104]: 2024-07-27 12:32:27.394443653 -0400 EDT m=+0.527346715 pod start c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799 (image=, name=httpd2) Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27097]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27097]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799 Container: 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27097]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2024-07-27T12:32:26-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2024-07-27T12:32:26-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" 
time="2024-07-27T12:32:26-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2024-07-27T12:32:26-04:00" level=info msg="Using sqlite as database backend" time="2024-07-27T12:32:26-04:00" level=debug msg="Using graph driver overlay" time="2024-07-27T12:32:26-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Using run root /run/containers/storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2024-07-27T12:32:26-04:00" level=debug msg="Using tmp dir /run/libpod" time="2024-07-27T12:32:26-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2024-07-27T12:32:26-04:00" level=debug msg="Using transient store: false" time="2024-07-27T12:32:26-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:32:26-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:32:26-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2024-07-27T12:32:26-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2024-07-27T12:32:26-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2024-07-27T12:32:26-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2024-07-27T12:32:26-04:00" level=debug msg="Initializing event backend journald" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2024-07-27T12:32:26-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2024-07-27T12:32:26-04:00" level=info msg="Setting parallel job count to 7" time="2024-07-27T12:32:26-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: 
&{podman-default-kube-network 7f20b6d410693ba0d53098bbbd51b58cf4a1522ca416836f34fe03bcda7fa5b0 bridge podman1 2024-07-27 12:30:23.859968267 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2024-07-27T12:32:26-04:00" level=debug msg="Successfully loaded 2 networks" time="2024-07-27T12:32:26-04:00" level=debug msg="Looking up image \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:32:26-04:00" level=debug msg="Trying \"localhost/podman-pause:5.1.2-1720678294\" ..." time="2024-07-27T12:32:26-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720678294\" as \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720678294\" as \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3)" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Pod using bridge network mode" time="2024-07-27T12:32:26-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice for parent machine.slice and name libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799" time="2024-07-27T12:32:26-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice" time="2024-07-27T12:32:26-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice" time="2024-07-27T12:32:26-04:00" level=debug msg="Looking up image \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:32:26-04:00" level=debug msg="Trying \"localhost/podman-pause:5.1.2-1720678294\" ..." 
time="2024-07-27T12:32:26-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720678294\" as \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720678294\" as \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3)" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Inspecting image cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Inspecting image cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3" time="2024-07-27T12:32:26-04:00" level=debug msg="Inspecting image cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3" time="2024-07-27T12:32:26-04:00" level=debug msg="using systemd mode: false" time="2024-07-27T12:32:26-04:00" level=debug msg="setting container name c904233a5e6a-infra" time="2024-07-27T12:32:26-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Allocated lock 1 for container afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2024-07-27T12:32:26-04:00" level=debug msg="Check for idmapped mounts support " time="2024-07-27T12:32:26-04:00" level=debug msg="Created container \"afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Container \"afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2\" has work directory \"/var/lib/containers/storage/overlay-containers/afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2/userdata\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Container \"afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2\" has run directory \"/run/containers/storage/overlay-containers/afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2/userdata\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:32:26-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2024-07-27T12:32:26-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2024-07-27T12:32:26-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:32:26-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2024-07-27T12:32:26-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:32:26-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2024-07-27T12:32:26-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:32:26-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2024-07-27T12:32:26-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2024-07-27T12:32:26-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2024-07-27T12:32:26-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2024-07-27T12:32:26-04:00" level=debug msg="using systemd mode: false" time="2024-07-27T12:32:26-04:00" level=debug msg="adding container to pod httpd2" time="2024-07-27T12:32:26-04:00" level=debug msg="setting container name httpd2-httpd2" 
time="2024-07-27T12:32:26-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2024-07-27T12:32:26-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2024-07-27T12:32:26-04:00" level=debug msg="Adding mount /proc" time="2024-07-27T12:32:26-04:00" level=debug msg="Adding mount /dev" time="2024-07-27T12:32:26-04:00" level=debug msg="Adding mount /dev/pts" time="2024-07-27T12:32:26-04:00" level=debug msg="Adding mount /dev/mqueue" time="2024-07-27T12:32:26-04:00" level=debug msg="Adding mount /sys" time="2024-07-27T12:32:26-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2024-07-27T12:32:26-04:00" level=debug msg="Allocated lock 2 for container 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc" time="2024-07-27T12:32:26-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Created container \"3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Container \"3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc\" has work directory \"/var/lib/containers/storage/overlay-containers/3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc/userdata\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Container \"3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc\" has run directory \"/run/containers/storage/overlay-containers/3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc/userdata\"" time="2024-07-27T12:32:26-04:00" level=debug msg="Strongconnecting node afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2" time="2024-07-27T12:32:26-04:00" level=debug msg="Pushed afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 onto stack" time="2024-07-27T12:32:26-04:00" level=debug msg="Finishing node afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2. Popped afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 off stack" time="2024-07-27T12:32:26-04:00" level=debug msg="Strongconnecting node 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc" time="2024-07-27T12:32:26-04:00" level=debug msg="Pushed 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc onto stack" time="2024-07-27T12:32:26-04:00" level=debug msg="Finishing node 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc. 
Popped 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc off stack"
time="2024-07-27T12:32:26-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/DVYPS57NAT2PVRUITZZJSWK23Y,upperdir=/var/lib/containers/storage/overlay/83cb103a9de7ad353fd0f2688ab5da4f2595a5970f0f38ac37e366fb0c220a1b/diff,workdir=/var/lib/containers/storage/overlay/83cb103a9de7ad353fd0f2688ab5da4f2595a5970f0f38ac37e366fb0c220a1b/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c459,c881\""
time="2024-07-27T12:32:27-04:00" level=debug msg="Mounted container \"afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2\" at \"/var/lib/containers/storage/overlay/83cb103a9de7ad353fd0f2688ab5da4f2595a5970f0f38ac37e366fb0c220a1b/merged\""
time="2024-07-27T12:32:27-04:00" level=debug msg="Created root filesystem for container afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 at /var/lib/containers/storage/overlay/83cb103a9de7ad353fd0f2688ab5da4f2595a5970f0f38ac37e366fb0c220a1b/merged"
time="2024-07-27T12:32:27-04:00" level=debug msg="Made network namespace at /run/netns/netns-3c95832c-65e9-4497-a783-ab962ad96ef4 for container afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2"
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network podman-default-kube-network
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.2/24]
[DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/podman1/rp_filter to 2
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/arp_notify to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/rp_filter to 2
[INFO netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-4B9D9135B29BA created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_INPUT created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT created on table nat and chain NETAVARK-4B9D9135B29BA
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-4B9D9135B29BA
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-4B9D9135B29BA created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -p udp -s 10.89.0.0/24 --dport 53 -j ACCEPT created on table filter and chain NETAVARK_INPUT
[DEBUG netavark::firewall::varktables::helpers] rule -m conntrack --ctstate INVALID -j DROP created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::firewalld] Adding firewalld rules for network 10.89.0.0/24
[DEBUG netavark::firewall::firewalld] Adding subnet 10.89.0.0/24 to zone trusted as source
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.conf.podman1.route_localnet to 1
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-DN-4B9D9135B29BA created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT created on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK --set-xmark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 10.89.0.0/24 -p tcp --dport 15002 created on table nat and chain NETAVARK-DN-4B9D9135B29BA
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-SETMARK -s 127.0.0.1 -p tcp --dport 15002 created on table nat and chain NETAVARK-DN-4B9D9135B29BA
[DEBUG netavark::firewall::varktables::helpers] rule -j DNAT -p tcp --to-destination 10.89.0.2:80 --destination-port 15002 created on table nat and chain NETAVARK-DN-4B9D9135B29BA
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-DN-4B9D9135B29BA -p tcp --dport 15002 -m comment --comment 'dnat name: podman-default-kube-network id: afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2' created on table nat and chain NETAVARK-HOSTPORT-DNAT
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain OUTPUT
[DEBUG netavark::dns::aardvark] Spawning aardvark server
[DEBUG netavark::dns::aardvark] start aardvark-dns: ["systemd-run", "-q", "--scope", "/usr/libexec/podman/aardvark-dns", "--config", "/run/containers/networks/aardvark-dns", "-p", "53", "run"]
[DEBUG netavark::commands::setup] {
    "podman-default-kube-network": StatusBlock {
        dns_search_domains: Some(
            [
                "dns.podman",
            ],
        ),
        dns_server_ips: Some(
            [
                10.89.0.1,
            ],
        ),
        interfaces: Some(
            {
                "eth0": NetInterface {
                    mac_address: "26:3b:f3:ac:71:fd",
                    subnets: Some(
                        [
                            NetAddress {
                                gateway: Some(
                                    10.89.0.1,
                                ),
                                ipnet: 10.89.0.2/24,
                            },
                        ],
                    ),
                },
            },
        ),
    },
}
[DEBUG netavark::commands::setup] "Setup complete" time="2024-07-27T12:32:27-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2024-07-27T12:32:27-04:00" level=debug msg="Setting Cgroups for container afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 to machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice:libpod:afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2" time="2024-07-27T12:32:27-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2024-07-27T12:32:27-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/lib/containers/storage/overlay/83cb103a9de7ad353fd0f2688ab5da4f2595a5970f0f38ac37e366fb0c220a1b/merged\"" time="2024-07-27T12:32:27-04:00" level=debug msg="Created OCI spec for container afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 at /var/lib/containers/storage/overlay-containers/afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2/userdata/config.json" time="2024-07-27T12:32:27-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice for parent machine.slice and name libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799" time="2024-07-27T12:32:27-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice" time="2024-07-27T12:32:27-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice" time="2024-07-27T12:32:27-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2024-07-27T12:32:27-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 -u afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2/userdata -p /run/containers/storage/overlay-containers/afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2/userdata/pidfile -n c904233a5e6a-infra --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg 
--events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2]" time="2024-07-27T12:32:27-04:00" level=info msg="Running conmon under slice machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice and unitName libpod-conmon-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope" time="2024-07-27T12:32:27-04:00" level=debug msg="Received: 27219" time="2024-07-27T12:32:27-04:00" level=info msg="Got Conmon PID as 27217" time="2024-07-27T12:32:27-04:00" level=debug msg="Created container afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 in OCI runtime" time="2024-07-27T12:32:27-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2024-07-27T12:32:27-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2024-07-27T12:32:27-04:00" level=debug msg="Starting container afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 with command [/catatonit -P]" time="2024-07-27T12:32:27-04:00" level=debug msg="Started container afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2" time="2024-07-27T12:32:27-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/3WDTKE7TBSQ2L4AUK7DIBLPBKR,upperdir=/var/lib/containers/storage/overlay/37385b1296d2430128fb86fd05380b1a877dc672ada5c2c1f396e9da2d20e84b/diff,workdir=/var/lib/containers/storage/overlay/37385b1296d2430128fb86fd05380b1a877dc672ada5c2c1f396e9da2d20e84b/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c459,c881\"" time="2024-07-27T12:32:27-04:00" level=debug msg="Mounted container \"3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc\" at \"/var/lib/containers/storage/overlay/37385b1296d2430128fb86fd05380b1a877dc672ada5c2c1f396e9da2d20e84b/merged\"" time="2024-07-27T12:32:27-04:00" level=debug msg="Created root filesystem for container 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc at /var/lib/containers/storage/overlay/37385b1296d2430128fb86fd05380b1a877dc672ada5c2c1f396e9da2d20e84b/merged" time="2024-07-27T12:32:27-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2024-07-27T12:32:27-04:00" level=debug msg="Setting Cgroups for container 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc to machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice:libpod:3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc" time="2024-07-27T12:32:27-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2024-07-27T12:32:27-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2024-07-27T12:32:27-04:00" level=debug msg="Created OCI spec for container 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc at /var/lib/containers/storage/overlay-containers/3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc/userdata/config.json" time="2024-07-27T12:32:27-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice for parent machine.slice and name libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799" time="2024-07-27T12:32:27-04:00" level=debug msg="Created cgroup 
machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice" time="2024-07-27T12:32:27-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice" time="2024-07-27T12:32:27-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2024-07-27T12:32:27-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc -u 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc/userdata -p /run/containers/storage/overlay-containers/3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc]" time="2024-07-27T12:32:27-04:00" level=info msg="Running conmon under slice machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice and unitName libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope" time="2024-07-27T12:32:27-04:00" level=debug msg="Received: 27224" time="2024-07-27T12:32:27-04:00" level=info msg="Got Conmon PID as 27222" time="2024-07-27T12:32:27-04:00" level=debug msg="Created container 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc in OCI runtime" time="2024-07-27T12:32:27-04:00" level=debug msg="Starting container 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc with command [/bin/busybox-extras httpd -f -p 80]" time="2024-07-27T12:32:27-04:00" level=debug msg="Started container 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc" time="2024-07-27T12:32:27-04:00" level=debug msg="Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2024-07-27T12:32:27-04:00" level=debug msg="Shutting down engines" Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27097]: 
ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27332]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Jul 27 12:32:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:32:28 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[27350]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 27 12:32:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27472]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None Jul 27 12:32:28 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:32:28 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[27492]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27614]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Created slice Slice /system/podman-kube. ░░ Subject: A start job for unit system-podman\x2dkube.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit system-podman\x2dkube.slice has finished successfully. ░░ ░░ The job identifier is 1703. Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Starting A template for running K8s workloads via podman-kube-play... ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution. ░░ ░░ The job identifier is 1702. Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:29.62383584 -0400 EDT m=+0.033616663 pod stop c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799 (image=, name=httpd2) Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope has successfully entered the 'dead' state. 
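The three ansible-systemd invocations above (daemon_reload=True, then enabled=True, then state=started) map onto plain systemctl operations against the podman-kube@ template unit; the instance name is just the systemd-escaped path of the kube YAML. A minimal shell sketch of the same sequence, assuming the httpd2.yml path from this run:

# Escape the kube YAML path into a template instance name
# (yields -etc-containers-ansible\x2dkubernetes.d-httpd2.yml)
instance=$(systemd-escape /etc/containers/ansible-kubernetes.d/httpd2.yml)
systemctl daemon-reload
systemctl enable "podman-kube@${instance}.service"
systemctl start "podman-kube@${instance}.service"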
Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27217]: conmon afcc88467a690a167268 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice/libpod-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope/container/memory.events Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:29.65822663 -0400 EDT m=+0.068007814 container died afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 (image=localhost/podman-pause:5.1.2-1720678294, name=c904233a5e6a-infra, io.buildah.version=1.36.0) Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com aardvark-dns[27210]: Received SIGHUP will refresh servers: 1 Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com aardvark-dns[27210]: No configuration found stopping the sever Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: run-rf635b93922984e84a00f3e304e18977c.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-rf635b93922984e84a00f3e304e18977c.scope has successfully entered the 'dead' state. Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth0 left promiscuous mode Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2)" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=info msg="Using sqlite as database backend" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using graph driver overlay" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using run root /run/containers/storage" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using 
tmp dir /run/libpod" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using transient store: false" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Cached value indicated that overlay is supported" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Cached value indicated that overlay is supported" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Cached value indicated that metacopy is being used" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Initializing event backend journald" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: 
time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=info msg="Setting parallel job count to 7" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097949.7231] device (podman1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: run-netns-netns\x2d3c95832c\x2d65e9\x2d4497\x2da783\x2dab962ad96ef4.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2d3c95832c\x2d65e9\x2d4497\x2da783\x2dab962ad96ef4.mount has successfully entered the 'dead' state. Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.Cf1YTY.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.Cf1YTY.mount has successfully entered the 'dead' state. Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:29.944053013 -0400 EDT m=+0.353833701 container cleanup afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 (image=localhost/podman-pause:5.1.2-1720678294, name=c904233a5e6a-infra, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, io.buildah.version=1.36.0) Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2)" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27629]: time="2024-07-27T12:32:29-04:00" level=debug msg="Shutting down engines" Jul 27 12:32:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-conmon-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2.scope has successfully entered the 'dead' state. 
Jul 27 12:32:30 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-83cb103a9de7ad353fd0f2688ab5da4f2595a5970f0f38ac37e366fb0c220a1b-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-83cb103a9de7ad353fd0f2688ab5da4f2595a5970f0f38ac37e366fb0c220a1b-merged.mount has successfully entered the 'dead' state. Jul 27 12:32:30 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2-userdata-shm.mount has successfully entered the 'dead' state. Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: time="2024-07-27T12:32:39-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd2-httpd2 in 10 seconds, resorting to SIGKILL" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has successfully entered the 'dead' state. Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27222]: conmon 3bb4f888f7fc283d2f9a : container 27224 exited with status 137 Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27222]: conmon 3bb4f888f7fc283d2f9a : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice/libpod-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope/container/memory.events Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.653213967 -0400 EDT m=+10.062994886 container died 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.Phcwvg.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.Phcwvg.mount has successfully entered the 'dead' state. 
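The SIGKILL escalation above is expected with this image: /bin/busybox-extras httpd runs as PID 1 and installs no SIGTERM handler, and PID 1 ignores signals it has no handler for, so podman falls back to SIGKILL after its default 10-second stop timeout (hence exit status 137, i.e. 128+9). The timeout is tunable per container; a sketch, assuming the container were still running:

# Give the container only 3 seconds before SIGKILL
podman stop --time 3 httpd2-httpd2
# or set the grace period at creation time
podman create --stop-timeout 3 quay.io/libpod/testimage:20210610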
Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc)" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-37385b1296d2430128fb86fd05380b1a877dc672ada5c2c1f396e9da2d20e84b-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-37385b1296d2430128fb86fd05380b1a877dc672ada5c2c1f396e9da2d20e84b-merged.mount has successfully entered the 'dead' state. Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=info msg="Using sqlite as database backend" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using graph driver overlay" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using run root /run/containers/storage" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using tmp dir /run/libpod" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using transient store: false" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Cached value indicated that overlay is supported" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Cached value indicated that overlay is supported" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Cached value indicated that metacopy is 
being used" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Initializing event backend journald" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=info msg="Setting parallel job count to 7" Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.708577608 
-0400 EDT m=+10.118358452 container cleanup 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopping libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope... ░░ Subject: A stop job for unit libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has begun execution. ░░ ░░ The job identifier is 1773. Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=info msg="Received shutdown signal \"terminated\", terminating!" PID=27676 Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com /usr/bin/podman[27676]: time="2024-07-27T12:32:39-04:00" level=info msg="Invoking shutdown handler \"libpod\"" PID=27676 Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has successfully entered the 'dead' state. Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopped libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope. ░░ Subject: A stop job for unit libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit libpod-conmon-3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc.scope has finished. ░░ ░░ The job identifier is 1773 and the job result is done. Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Removed slice cgroup machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice. ░░ Subject: A stop job for unit machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice has finished. ░░ ░░ The job identifier is 1772 and the job result is done. 
Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.791403868 -0400 EDT m=+10.201184564 container remove 3bb4f888f7fc283d2f9a25d9d6091e2fa0698bff0816aef3fb9278b2b03bf9bc (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.826950245 -0400 EDT m=+10.236731093 container remove afcc88467a690a167268abd4061385e7596b316ffb69c8501ef0b7ad2f4754e2 (image=localhost/podman-pause:5.1.2-1720678294, name=c904233a5e6a-infra, pod_id=c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799, io.buildah.version=1.36.0) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice: Failed to open /run/systemd/transient/machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice: No such file or directory Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.837001605 -0400 EDT m=+10.246782294 pod remove c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799 (image=, name=httpd2) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: Pods stopped: Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799 Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: Pods removed: Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: Error: removing pod c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799 cgroup: removing pod c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799 cgroup: Unit machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice not loaded. Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: Secrets removed: Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: Error: %!s() Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: Volumes removed: Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.865904349 -0400 EDT m=+10.275685452 container create 63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666 (image=localhost/podman-pause:5.1.2-1720678294, name=ec7e5af3b69e-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Created slice cgroup machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice. ░░ Subject: A start job for unit machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice has finished successfully. ░░ ░░ The job identifier is 1774. 
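The pod-removal errors a few lines above are a teardown race, visible in the log order itself: systemd had already removed the machine-libpod_pod_c904233a5e6a...slice when podman tried to remove the pod cgroup, so the unit was no longer loaded. A sketch of how to confirm that state from the host:

# A unit whose transient file is gone reports LoadState=not-found
systemctl show -p LoadState machine-libpod_pod_c904233a5e6a411a5b0cd113119363dcc08244961471679127e26fc7f5668799.slice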
Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.919440365 -0400 EDT m=+10.329221053 container create c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7 (image=localhost/podman-pause:5.1.2-1720678294, name=0333edc697be-infra, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, io.buildah.version=1.36.0) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.931626856 -0400 EDT m=+10.341407529 pod create 0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9 (image=, name=httpd2) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.96445956 -0400 EDT m=+10.374240537 container create 6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.964950787 -0400 EDT m=+10.374731789 container restart 63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666 (image=localhost/podman-pause:5.1.2-1720678294, name=ec7e5af3b69e-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:32:39 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:39.937430155 -0400 EDT m=+10.347211342 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. ░░ Subject: A start job for unit libpod-63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666.scope has finished successfully. ░░ ░░ The job identifier is 1778. 
Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:40.038794318 -0400 EDT m=+10.448575223 container init 63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666 (image=localhost/podman-pause:5.1.2-1720678294, name=ec7e5af3b69e-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:40.042678877 -0400 EDT m=+10.452459761 container start 63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666 (image=localhost/podman-pause:5.1.2-1720678294, name=ec7e5af3b69e-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.0594] manager: (podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/5) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth0 entered promiscuous mode Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered blocking state Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered forwarding state Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.0931] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/6) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.0941] device (veth0): carrier: link connected Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.0944] device (podman1): carrier: link connected Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-udevd[27694]: Network interface NamePolicy= disabled on kernel command line. Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-udevd[27693]: Network interface NamePolicy= disabled on kernel command line. 
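The burst of kernel and NetworkManager messages above is the podman1 bridge and a veth pair being created for the pod's network namespace; note that NetworkManager only assumes the devices (sys-iface-state: 'external') rather than managing them. Assuming this is podman's default rootful network, whose bridge is conventionally named podman1, the state could have been checked at test time with commands along these lines:

    ip -d link show podman1
    nmcli device status
    podman network inspect podman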
Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1160] device (podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1176] device (podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1192] device (podman1): Activation: starting connection 'podman1' (63cadcf3-8c0d-471e-b3d8-39c1ef709808) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1194] device (podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1204] device (podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1211] device (podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1218] device (podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Starting Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 1783. Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 1783. Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1635] device (podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1638] device (podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097960.1644] device (podman1): Activation: successful, device activated. Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started /usr/libexec/podman/aardvark-dns --config /run/containers/networks/aardvark-dns -p 53 run. ░░ Subject: A start job for unit run-rc5239bbd4bb64814889097c0b095abdd.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-rc5239bbd4bb64814889097c0b095abdd.scope has finished successfully. ░░ ░░ The job identifier is 1847. Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. 
░░ Subject: A start job for unit libpod-c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7.scope has finished successfully. ░░ ░░ The job identifier is 1851. Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:40.287177045 -0400 EDT m=+10.696957973 container init c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7 (image=localhost/podman-pause:5.1.2-1720678294, name=0333edc697be-infra, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, io.buildah.version=1.36.0, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:40.291170114 -0400 EDT m=+10.700950916 container start c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7 (image=localhost/podman-pause:5.1.2-1720678294, name=0333edc697be-infra, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, io.buildah.version=1.36.0, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. ░░ Subject: A start job for unit libpod-6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3.scope has finished successfully. ░░ ░░ The job identifier is 1856. Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:40.338536993 -0400 EDT m=+10.748318145 container init 6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:40.342523818 -0400 EDT m=+10.752304787 container start 6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 2024-07-27 12:32:40.348684184 -0400 EDT m=+10.758464877 pod start 0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9 (image=, name=httpd2) Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started A template for running K8s workloads via podman-kube-play. 
░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished successfully. ░░ ░░ The job identifier is 1702. Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: Pod: Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9 Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: Container: Jul 27 12:32:40 ip-10-31-12-229.us-east-1.aws.redhat.com podman[27618]: 6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3 Jul 27 12:32:41 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[27892]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:32:41 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28000]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28109]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:32:43 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28217]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:44 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28324]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:45 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28448]: 2024-07-27 12:32:45.247024401 -0400 EDT m=+0.535824759 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:32:45 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28569]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:32:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28676]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28783]: 
ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:32:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28868]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1722097966.1948674-8956-160637284820936/.source.yml _original_basename=.rck8z22o follow=False checksum=357a4dee3ead538c9b0b23b7f6bad0dfb461c402 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[28975]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Created slice cgroup machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice. ░░ Subject: A start job for unit machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice has finished successfully. ░░ ░░ The job identifier is 1861. 
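The containers.podman.podman_play invocation logged above (state=started, kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml, executable=podman) corresponds roughly to running podman's kube play by hand; a sketch of the equivalent CLI call, leaving aside whatever extra flags the module adds:

    podman kube play /etc/containers/ansible-kubernetes.d/httpd3.yml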
Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.307705136 -0400 EDT m=+0.074336331 container create a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5 (image=localhost/podman-pause:5.1.2-1720678294, name=d06e81f27e10-infra, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, io.buildah.version=1.36.0) Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.314087377 -0400 EDT m=+0.080718475 pod create d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f (image=, name=httpd3) Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.341693521 -0400 EDT m=+0.108324696 container create 91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z) Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.316825714 -0400 EDT m=+0.083456951 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered blocking state Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth1 entered promiscuous mode Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered blocking state Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered forwarding state Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097967.3723] manager: (veth1): new Veth device (/org/freedesktop/NetworkManager/Devices/7) Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097967.3796] device (veth1): carrier: link connected Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-udevd[28995]: Network interface NamePolicy= disabled on kernel command line. Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libpod-conmon-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope. ░░ Subject: A start job for unit libpod-conmon-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-conmon-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope has finished successfully. ░░ ░░ The job identifier is 1866. Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. 
░░ Subject: A start job for unit libpod-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope has finished successfully. ░░ ░░ The job identifier is 1871. Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.525380472 -0400 EDT m=+0.292011755 container init a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5 (image=localhost/podman-pause:5.1.2-1720678294, name=d06e81f27e10-infra, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, io.buildah.version=1.36.0) Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.532760241 -0400 EDT m=+0.299391458 container start a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5 (image=localhost/podman-pause:5.1.2-1720678294, name=d06e81f27e10-infra, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, io.buildah.version=1.36.0) Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope. ░░ Subject: A start job for unit libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has finished successfully. ░░ ░░ The job identifier is 1876. Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. ░░ Subject: A start job for unit libpod-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has finished successfully. ░░ ░░ The job identifier is 1881. 
Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.586556373 -0400 EDT m=+0.353187760 container init 91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.590512214 -0400 EDT m=+0.357143418 container start 91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:32:47 ip-10-31-12-229.us-east-1.aws.redhat.com podman[28982]: 2024-07-27 12:32:47.596842145 -0400 EDT m=+0.363473248 pod start d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f (image=, name=httpd3) Jul 27 12:32:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[29173]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Jul 27 12:32:48 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:32:48 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[29191]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 27 12:32:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[29313]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None Jul 27 12:32:48 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[29334]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[29455]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Starting A template for running K8s workloads via podman-kube-play... ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution. ░░ ░░ The job identifier is 1886. Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:49.739768845 -0400 EDT m=+0.030946202 pod stop d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f (image=, name=httpd3) Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.T6fkwb.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.T6fkwb.mount has successfully entered the 'dead' state. 
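The unit name used in the Reloading/enable/start entries above comes from the systemd-escape call logged earlier: the kube file path is escaped ('/' becomes '-', '-' becomes \x2d) and substituted into the template's instance slot. Reproducing it:

    systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml

prints

    podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service

which matches the unit name in the journal entries above.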
Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope has successfully entered the 'dead' state. Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:49.766815764 -0400 EDT m=+0.057993143 container died a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5 (image=localhost/podman-pause:5.1.2-1720678294, name=d06e81f27e10-infra, io.buildah.version=1.36.0) Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.v3a1LS.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.v3a1LS.mount has successfully entered the 'dead' state. Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth1 left promiscuous mode Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:49.881245334 -0400 EDT m=+0.172422449 container cleanup a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5 (image=localhost/podman-pause:5.1.2-1720678294, name=d06e81f27e10-infra, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, io.buildah.version=1.36.0) Jul 27 12:32:49 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-conmon-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5.scope has successfully entered the 'dead' state. Jul 27 12:32:50 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 27 12:32:50 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: run-netns-netns\x2d0a0f767b\x2d3815\x2d181e\x2d50b8\x2dcc9f05394d40.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2d0a0f767b\x2d3815\x2d181e\x2d50b8\x2dcc9f05394d40.mount has successfully entered the 'dead' state. Jul 27 12:32:50 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-e6ca1f02269c4e27e744017cc37b453bc0c2a10423bb94b4c52428e1176258e3-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-e6ca1f02269c4e27e744017cc37b453bc0c2a10423bb94b4c52428e1176258e3-merged.mount has successfully entered the 'dead' state. Jul 27 12:32:50 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5-userdata-shm.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5-userdata-shm.mount has successfully entered the 'dead' state. Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: time="2024-07-27T12:32:59-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL" Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.HCBiMc.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.HCBiMc.mount has successfully entered the 'dead' state. Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has successfully entered the 'dead' state. Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:59.807739519 -0400 EDT m=+10.098916831 container died 91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:59.858840768 -0400 EDT m=+10.150017997 container cleanup 91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage) Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopping libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope... ░░ Subject: A stop job for unit libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has begun execution. ░░ ░░ The job identifier is 1957. Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has successfully entered the 'dead' state. Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopped libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope. 
░░ Subject: A stop job for unit libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit libpod-conmon-91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089.scope has finished. ░░ ░░ The job identifier is 1957 and the job result is done. Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Removed slice cgroup machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice. ░░ Subject: A stop job for unit machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice has finished. ░░ ░░ The job identifier is 1956 and the job result is done. Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:59.867628064 -0400 EDT m=+10.158805172 pod stop d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f (image=, name=httpd3) Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice: Failed to open /run/systemd/transient/machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice: No such file or directory Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: time="2024-07-27T12:32:59-04:00" level=error msg="Checking if infra needs to be stopped: removing pod d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f cgroup: Unit machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice not loaded." 
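The "StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL" warning above reflects podman's default 10-second stop timeout; the test image's httpd apparently ignores SIGTERM, so the container is killed instead. A sketch of the manual equivalent of what the service start performs here:

    podman pod stop -t 10 httpd3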
Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:59.905798629 -0400 EDT m=+10.196976060 container remove 91cbca44d77745a073ff57356579198f89820a0789289cf02353011f3e8eb089 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage) Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:59.935227514 -0400 EDT m=+10.226404645 container remove a499fdb33559053866c9b1eddc1b8872c990b2284d827de46d398bcb15acdad5 (image=localhost/podman-pause:5.1.2-1720678294, name=d06e81f27e10-infra, pod_id=d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f, io.buildah.version=1.36.0) Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice: Failed to open /run/systemd/transient/machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice: No such file or directory Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:32:59.94422623 -0400 EDT m=+10.235403338 pod remove d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f (image=, name=httpd3) Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: Pods stopped: Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: Pods removed: Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: Error: removing pod d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f cgroup: removing pod d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f cgroup: Unit machine-libpod_pod_d06e81f27e109d8befb434a48b23264b083f955e3817a99c7737a5ad5d5d344f.slice not loaded. Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: Secrets removed: Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: Error: %!s() Jul 27 12:32:59 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: Volumes removed: Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.050504933 -0400 EDT m=+10.341682044 container create 32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d (image=localhost/podman-pause:5.1.2-1720678294, name=df543e1580ae-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Created slice cgroup machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice. ░░ Subject: A start job for unit machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice has finished successfully. ░░ ░░ The job identifier is 1958. 
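The "Error: %!s()" line appears to be Go format-verb residue from podman printing an empty error value in its teardown summary; like the cgroup "not loaded" message, it does not stop the replay. A sketch of a debugging session that pulls just the error lines for this unit out of the journal:

    journalctl --no-pager -u 'podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service' | grep 'Error:'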
Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.091305941 -0400 EDT m=+10.382483053 container create 2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7 (image=localhost/podman-pause:5.1.2-1720678294, name=23fad04f45a5-infra, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.098505007 -0400 EDT m=+10.389682221 pod create 23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7 (image=, name=httpd3) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.101362242 -0400 EDT m=+10.392539534 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.128704213 -0400 EDT m=+10.419881321 container create c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, created_by=test/system/build-testimage, io.buildah.version=1.21.0, app=test, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, created_at=2021-06-10T18:55:36Z) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.129863671 -0400 EDT m=+10.421041006 container restart 32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d (image=localhost/podman-pause:5.1.2-1720678294, name=df543e1580ae-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. ░░ Subject: A start job for unit libpod-32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d.scope has finished successfully. ░░ ░░ The job identifier is 1962. 
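The create events above carry a PODMAN_SYSTEMD_UNIT label tying each container to the systemd unit that manages it (podman-auto-update also relies on this label). Assuming podman ps's label filter syntax, the containers belonging to this unit could be listed with:

    podman ps -a --filter label=PODMAN_SYSTEMD_UNIT='podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service'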
Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.175233893 -0400 EDT m=+10.466411435 container init 32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d (image=localhost/podman-pause:5.1.2-1720678294, name=df543e1580ae-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.178995069 -0400 EDT m=+10.470172324 container start 32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d (image=localhost/podman-pause:5.1.2-1720678294, name=df543e1580ae-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered blocking state Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth1 entered promiscuous mode Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097980.2002] manager: (veth1): new Veth device (/org/freedesktop/NetworkManager/Devices/8) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered blocking state Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered forwarding state Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722097980.2042] device (veth1): carrier: link connected Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-udevd[29515]: Network interface NamePolicy= disabled on kernel command line. Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. ░░ Subject: A start job for unit libpod-2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7.scope has finished successfully. ░░ ░░ The job identifier is 1967. Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.345067781 -0400 EDT m=+10.636245127 container init 2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7 (image=localhost/podman-pause:5.1.2-1720678294, name=23fad04f45a5-infra, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.348790804 -0400 EDT m=+10.639968227 container start 2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7 (image=localhost/podman-pause:5.1.2-1720678294, name=23fad04f45a5-infra, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started libcrun container. 
░░ Subject: A start job for unit libpod-c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016.scope has finished successfully. ░░ ░░ The job identifier is 1972. Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.39985027 -0400 EDT m=+10.691027650 container init c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.40372514 -0400 EDT m=+10.694902385 container start c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 2024-07-27 12:33:00.410518376 -0400 EDT m=+10.701695620 pod start 23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7 (image=, name=httpd3) Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: Pod: Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: 23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7 Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: Container: Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com podman[29459]: c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016 Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started A template for running K8s workloads via podman-kube-play. ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished successfully. ░░ ░░ The job identifier is 1886. Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.f54lvk.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.f54lvk.mount has successfully entered the 'dead' state. Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-536d27d99f9004a8cd5fc963cec31b69f39bd13eec1488b17190cf4450215db3-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-536d27d99f9004a8cd5fc963cec31b69f39bd13eec1488b17190cf4450215db3-merged.mount has successfully entered the 'dead' state. 
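With the httpd3 service replayed and started, the entries that follow are the test's verification pass: a podman pod inspect of each pod's state, systemctl list-units greps for the active podman-kube@ units, and HTTP fetches of index.txt on the published ports (15001 and 15002 in the uri calls logged below). Collected from the Invoked-with dumps below, the checks reduce to commands like these, with curl standing in for the ansible uri module calls:

    podman pod inspect httpd3 --format '{{.State}}'
    systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active '
    curl http://localhost:15001/index.txt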
Jul 27 12:33:00 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[29692]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpxhgnorkqazskrgcapmcpcedjywvlcw ; /usr/bin/python3.9 /var/tmp/ansible-tmp-1722097980.8589134-9008-141686773631681/AnsiballZ_command.py' Jul 27 12:33:01 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[29692]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:01 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[29694]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:01 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-29702.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 115. Jul 27 12:33:01 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[29692]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:01 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[29819]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:01 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[29934]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:02 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[30049]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwpfcxnepqymoriudyyvuebgiuiabxgn ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722097982.1569126-9030-98283515000115/AnsiballZ_command.py' Jul 27 12:33:02 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[30049]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:02 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[30051]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:02 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[30049]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:02 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[30161]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:03 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[30271]: ansible-ansible.legacy.command 
Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:03 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[30381]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[30489]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[30597]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_2r2po57x_podman/httpd1-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[30705]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_2r2po57x_podman/httpd2-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:05 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[30813]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_2r2po57x_podman/httpd3-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:07 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31028]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:08 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31141]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:33:08 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31249]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:10 
ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31358]: ansible-ansible.legacy.dnf Invoked with name=['firewalld'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:33:12 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31466]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None Jul 27 12:33:12 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31575]: ansible-ansible.legacy.systemd Invoked with name=firewalld state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Jul 27 12:33:13 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31684]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Jul 27 12:33:14 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31791]: ansible-ansible.legacy.dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:33:15 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[31899]: ansible-ansible.legacy.dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 27 12:33:17 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32007]: ansible-setup Invoked with filter=['ansible_selinux'] gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Jul 27 12:33:18 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32153]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Jul 27 12:33:19 
ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32260]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Jul 27 12:33:24 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32367]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Jul 27 12:33:24 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32475]: ansible-getent Invoked with database=group key=3001 fail_key=False service=None split=None Jul 27 12:33:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32583]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32692]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32800]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[32908]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33016]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Jul 27 12:33:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33123]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33230]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:28 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[33337]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrtddghvizboghotwabztwswimnwqmcq ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098008.80704-9383-81213612032775/AnsiballZ_podman_image.py' Jul 27 12:33:28 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[33337]: pam_unix(sudo:session): session opened for user 
podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-33340.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 119. Jul 27 12:33:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-33347.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 123. Jul 27 12:33:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-33354.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 127. Jul 27 12:33:29 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-33361.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 131. Jul 27 12:33:29 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[33337]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33475]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33584]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33691]: ansible-ansible.legacy.stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33745]: ansible-ansible.legacy.file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=.q3o678z2 recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[33852]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnczxunrardrqfcssdbcsxqkyliifgld ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098011.3888984-9419-159959416239066/AnsiballZ_podman_play.py' Jul 27 12:33:31 
ip-10-31-12-229.us-east-1.aws.redhat.com sudo[33852]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33854]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-33861.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 135. Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Created slice cgroup user-libpod_pod_b0265840b5794cd2683857fe98c60ce0418994a3cfd679c18930ffb4543990cc.slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 139. Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33854]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33854]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33854]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2024-07-27T12:33:31-04:00" level=info msg="/bin/podman filtering at log level debug" time="2024-07-27T12:33:31-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2024-07-27T12:33:31-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2024-07-27T12:33:31-04:00" level=info msg="Using sqlite as database backend" time="2024-07-27T12:33:31-04:00" level=debug msg="systemd-logind: Unknown object '/'." 
time="2024-07-27T12:33:31-04:00" level=debug msg="Using graph driver overlay" time="2024-07-27T12:33:31-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2024-07-27T12:33:31-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2024-07-27T12:33:31-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2024-07-27T12:33:31-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2024-07-27T12:33:31-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2024-07-27T12:33:31-04:00" level=debug msg="Using transient store: false" time="2024-07-27T12:33:31-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2024-07-27T12:33:31-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:33:31-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:33:31-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2024-07-27T12:33:31-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2024-07-27T12:33:31-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2024-07-27T12:33:31-04:00" level=debug msg="Initializing event backend file" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2024-07-27T12:33:31-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2024-07-27T12:33:31-04:00" level=info msg="Setting parallel job count to 7" time="2024-07-27T12:33:31-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network 71b1a60f73e1e868b6ea683735758103546553888726eb48f91ae1a6ca2d9c5b bridge podman1 2024-07-27 12:32:05.92116415 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2024-07-27T12:33:31-04:00" level=debug 
msg="Successfully loaded 2 networks" time="2024-07-27T12:33:31-04:00" level=debug msg="Looking up image \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage" time="2024-07-27T12:33:31-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:33:31-04:00" level=debug msg="Trying \"localhost/podman-pause:5.1.2-1720678294\" ..." time="2024-07-27T12:33:31-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@0e807b318beb575f6f204f06ccab43013f496f1efa9354d0b186f422b369f7b0\"" time="2024-07-27T12:33:31-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720678294\" as \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage" time="2024-07-27T12:33:31-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720678294\" as \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@0e807b318beb575f6f204f06ccab43013f496f1efa9354d0b186f422b369f7b0)" time="2024-07-27T12:33:31-04:00" level=debug msg="exporting opaque data as blob \"sha256:0e807b318beb575f6f204f06ccab43013f496f1efa9354d0b186f422b369f7b0\"" time="2024-07-27T12:33:31-04:00" level=debug msg="Pod using bridge network mode" time="2024-07-27T12:33:31-04:00" level=debug msg="Created cgroup path user.slice/user-libpod_pod_b0265840b5794cd2683857fe98c60ce0418994a3cfd679c18930ffb4543990cc.slice for parent user.slice and name libpod_pod_b0265840b5794cd2683857fe98c60ce0418994a3cfd679c18930ffb4543990cc" time="2024-07-27T12:33:31-04:00" level=debug msg="Created cgroup user.slice/user-libpod_pod_b0265840b5794cd2683857fe98c60ce0418994a3cfd679c18930ffb4543990cc.slice" time="2024-07-27T12:33:31-04:00" level=debug msg="Got pod cgroup as user.slice/user-3001.slice/user@3001.service/user.slice/user-libpod_pod_b0265840b5794cd2683857fe98c60ce0418994a3cfd679c18930ffb4543990cc.slice" Error: adding pod to state: name "httpd1" is in use: pod already exists time="2024-07-27T12:33:31-04:00" level=debug msg="Shutting down engines" Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33854]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Jul 27 12:33:31 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[33852]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:32 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[33975]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:33:33 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[34083]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:33:33 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[34191]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:34 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[34300]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:35 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[34408]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd2 state=directory owner=root 
group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:35 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[34515]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:36 ip-10-31-12-229.us-east-1.aws.redhat.com podman[34637]: 2024-07-27 12:33:36.834126392 -0400 EDT m=+0.436502131 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:33:37 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[34759]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:37 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[34868]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[34975]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:33:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35029]: ansible-ansible.legacy.file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=.6xjksi44 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35136]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:33:38 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Created slice cgroup machine-libpod_pod_fb72e13a1e887ac47b69f42bbe6428bf552038e72bbaf45c119262c9551f6743.slice. 
░░ Subject: A start job for unit machine-libpod_pod_fb72e13a1e887ac47b69f42bbe6428bf552038e72bbaf45c119262c9551f6743.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_fb72e13a1e887ac47b69f42bbe6428bf552038e72bbaf45c119262c9551f6743.slice has finished successfully. ░░ ░░ The job identifier is 1977. Jul 27 12:33:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35136]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Jul 27 12:33:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35136]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Jul 27 12:33:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35136]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2024-07-27T12:33:38-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2024-07-27T12:33:38-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2024-07-27T12:33:38-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2024-07-27T12:33:38-04:00" level=info msg="Using sqlite as database backend" time="2024-07-27T12:33:38-04:00" level=debug msg="Using graph driver overlay" time="2024-07-27T12:33:38-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2024-07-27T12:33:38-04:00" level=debug msg="Using run root /run/containers/storage" time="2024-07-27T12:33:38-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2024-07-27T12:33:38-04:00" level=debug msg="Using tmp dir /run/libpod" time="2024-07-27T12:33:38-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2024-07-27T12:33:38-04:00" level=debug msg="Using transient store: false" time="2024-07-27T12:33:38-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2024-07-27T12:33:38-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:33:38-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2024-07-27T12:33:38-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2024-07-27T12:33:38-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2024-07-27T12:33:38-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2024-07-27T12:33:38-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2024-07-27T12:33:38-04:00" level=debug msg="Initializing event backend journald" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found 
for OCI runtime crun-vm: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2024-07-27T12:33:38-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2024-07-27T12:33:38-04:00" level=info msg="Setting parallel job count to 7" time="2024-07-27T12:33:38-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network 7f20b6d410693ba0d53098bbbd51b58cf4a1522ca416836f34fe03bcda7fa5b0 bridge podman1 2024-07-27 12:30:23.859968267 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2024-07-27T12:33:38-04:00" level=debug msg="Successfully loaded 2 networks" time="2024-07-27T12:33:38-04:00" level=debug msg="Looking up image \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage" time="2024-07-27T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2024-07-27T12:33:38-04:00" level=debug msg="Trying \"localhost/podman-pause:5.1.2-1720678294\" ..." 
time="2024-07-27T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3\"" time="2024-07-27T12:33:38-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720678294\" as \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage" time="2024-07-27T12:33:38-04:00" level=debug msg="Found image \"localhost/podman-pause:5.1.2-1720678294\" as \"localhost/podman-pause:5.1.2-1720678294\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3)" time="2024-07-27T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3\"" time="2024-07-27T12:33:38-04:00" level=debug msg="Pod using bridge network mode" time="2024-07-27T12:33:38-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_fb72e13a1e887ac47b69f42bbe6428bf552038e72bbaf45c119262c9551f6743.slice for parent machine.slice and name libpod_pod_fb72e13a1e887ac47b69f42bbe6428bf552038e72bbaf45c119262c9551f6743" time="2024-07-27T12:33:38-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_fb72e13a1e887ac47b69f42bbe6428bf552038e72bbaf45c119262c9551f6743.slice" time="2024-07-27T12:33:38-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_fb72e13a1e887ac47b69f42bbe6428bf552038e72bbaf45c119262c9551f6743.slice" Error: adding pod to state: name "httpd2" is in use: pod already exists time="2024-07-27T12:33:38-04:00" level=debug msg="Shutting down engines" Jul 27 12:33:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35136]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Jul 27 12:33:39 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35256]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:33:40 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35364]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:41 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35473]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35581]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35688]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
mode=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:43 ip-10-31-12-229.us-east-1.aws.redhat.com podman[35810]: 2024-07-27 12:33:43.815789139 -0400 EDT m=+0.602209546 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Jul 27 12:33:44 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[35931]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:44 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36040]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:45 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36147]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:33:45 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36201]: ansible-ansible.legacy.file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=.x2cp5_lq recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:45 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36308]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:33:45 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Created slice cgroup machine-libpod_pod_63e1553facac05c8c1d67c45f694e1e49e011ff7e2b1577b1506ccc0a9b051eb.slice. ░░ Subject: A start job for unit machine-libpod_pod_63e1553facac05c8c1d67c45f694e1e49e011ff7e2b1577b1506ccc0a9b051eb.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_63e1553facac05c8c1d67c45f694e1e49e011ff7e2b1577b1506ccc0a9b051eb.slice has finished successfully. ░░ ░░ The job identifier is 1981. 
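The systemd-escape calls recorded above are how the role maps each kube file onto the podman-kube@.service template: "/" becomes "-" and a literal "-" becomes "\x2d". A minimal sketch using the httpd2 path, whose escaped unit name also appears verbatim later in this log; the systemctl | grep verifications that follow match exactly these derived names:

    # Derive the template instance name for a kube play file
    # (input path and result are the ones recorded in this log):
    systemd-escape --template podman-kube@.service \
        /etc/containers/ansible-kubernetes.d/httpd2.yml
    # -> podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service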
Jul 27 12:33:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[36428]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocoigutfqtvekmbpaoupozcpjgvlcsui ; /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098026.4230695-9672-73775375525050/AnsiballZ_command.py' Jul 27 12:33:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[36428]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36430]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:46 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-36439.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 143. Jul 27 12:33:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[36428]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36553]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36668]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:47 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[36782]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knavlbtsmmdlbqkgmkzkieaydyhyxqrp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098027.6707408-9694-113719573933069/AnsiballZ_command.py' Jul 27 12:33:47 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[36782]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36784]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:47 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[36782]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:33:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[36894]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37004]: ansible-ansible.legacy.command 
Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37114]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37222]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:50 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37330]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:33:52 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37545]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:53 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37658]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:33:53 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37766]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:55 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37875]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Jul 27 12:33:56 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[37983]: ansible-getent Invoked with database=group 
key=3001 fail_key=False service=None split=None Jul 27 12:33:56 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38091]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:57 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38200]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:57 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38308]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:58 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38416]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38524]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[38633]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sotixveccphabexqyjardnewfxenmlpx ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098039.336067-9891-274610954778057/AnsiballZ_systemd.py' Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[38633]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38635]: ansible-systemd Invoked with name=podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Reloading. Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopping A template for running K8s workloads via podman-kube-play... ░░ Subject: A stop job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has begun execution. ░░ ░░ The job identifier is 147. 
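Because httpd1 runs rootless, this stop has to reach podman_basic_user's own systemd user manager, which is why every become'd call in this log exports XDG_RUNTIME_DIR=/run/user/3001. Roughly the equivalent shell, with the escaped unit name taken from the ansible-systemd entry above:

    # Stop the rootless pod's unit in the user manager (a sketch of what the
    # ansible-systemd task does; the user's runtime dir must be set):
    sudo -u podman_basic_user XDG_RUNTIME_DIR=/run/user/3001 \
        systemctl --user stop \
        'podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service'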
Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[25790]: conmon fc10fd8b20d610a9a93d : Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-3001.slice/user@3001.service/user.slice/user-libpod_pod_b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066.slice/libpod-fc10fd8b20d610a9a93d43da2f4652d1ccb56dba94dbcedc3c7f62c4cce8d377.scope/container/memory.events Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth0 left promiscuous mode Jul 27 12:33:59 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com podman[38650]: time="2024-07-27T12:34:10-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd1-httpd1 in 10 seconds, resorting to SIGKILL" Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[25796]: conmon 7bf805548a794144af77 : Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-3001.slice/user@3001.service/user.slice/user-libpod_pod_b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066.slice/libpod-7bf805548a794144af77c1d1181503babd6ce6686ca4ba623f4458466f8055f2.scope/container/memory.events Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Removed slice cgroup user-libpod_pod_b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066.slice. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 148 and the job result is done. Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: user-libpod_pod_b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066.slice: Failed to open /run/user/3001/systemd/transient/user-libpod_pod_b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066.slice: No such file or directory Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com podman[38650]: Pods stopped: Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com podman[38650]: b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066 Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com podman[38650]: Pods removed: Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com podman[38650]: Error: removing pod b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066 cgroup: removing pod b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066 cgroup: Unit user-libpod_pod_b1f83afa77f626592adeaf4b11b8c8fde3fe3cacc77ae51e7c555024910fb066.slice not loaded. Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com podman[38650]: Secrets removed: Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com podman[38650]: Error: %!s() Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com podman[38650]: Volumes removed: Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped A template for running K8s workloads via podman-kube-play. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 147 and the job result is done. 
Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[38633]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38829]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[38938]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fauxkumjbwoyzuqxlbyaheerehrjmjeb ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098050.7810347-9907-54776336517899/AnsiballZ_podman_play.py' Jul 27 12:34:10 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[38938]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38940]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38940]: ansible-containers.podman.podman_play version: 5.1.2, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-38947.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 149. 
Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38940]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38940]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38940]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[38940]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[38938]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[39062]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[39169]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krohkscrqdrhlnzgzwagbauhcbnuobcu ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098051.6949046-9923-82802512226359/AnsiballZ_command.py' Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[39169]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:34:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[39171]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:12 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-39172.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 153. 
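The kube play --down above exits with rc: 0, and the role then prunes dangling images as the same rootless user (podman image prune -f). A hypothetical spot check, not part of the role, that the pod really is gone:

    # podman pod exists exits 0 when the pod is present, non-zero otherwise
    # (illustrative only; pod name taken from the log):
    sudo -u podman_basic_user XDG_RUNTIME_DIR=/run/user/3001 \
        podman pod exists httpd1 || echo "pod httpd1 removed"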
Jul 27 12:34:12 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[39169]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:34:12 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[39285]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:34:13 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[39393]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:34:13 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[39501]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:15 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[39610]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:15 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[39718]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Jul 27 12:34:15 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:34:15 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[39738]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... ░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution. ░░ ░░ The job identifier is 1986. Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:16.091284639 -0400 EDT m=+0.032399316 pod stop 0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9 (image=, name=httpd2) Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.tvu2SP.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.tvu2SP.mount has successfully entered the 'dead' state. Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7.scope has successfully entered the 'dead' state. Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:16.116459469 -0400 EDT m=+0.057574291 container died c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7 (image=localhost/podman-pause:5.1.2-1720678294, name=0333edc697be-infra, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, io.buildah.version=1.36.0) Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.f0Q8Ap.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.f0Q8Ap.mount has successfully entered the 'dead' state. Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth0 left promiscuous mode Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 1(veth0) entered disabled state Jul 27 12:34:16 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:16.236475926 -0400 EDT m=+0.177590469 container cleanup c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7 (image=localhost/podman-pause:5.1.2-1720678294, name=0333edc697be-infra, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, io.buildah.version=1.36.0, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:34:17 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-d356ea4dbe9387ef997467eb4a93cba3486e1a2e2fdd32f2474ea26ed0d3341d-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-d356ea4dbe9387ef997467eb4a93cba3486e1a2e2fdd32f2474ea26ed0d3341d-merged.mount has successfully entered the 'dead' state. Jul 27 12:34:17 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: run-netns-netns\x2d9bd34587\x2d91a6\x2d138f\x2d1648\x2d5739cb15f2b2.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2d9bd34587\x2d91a6\x2d138f\x2d1648\x2d5739cb15f2b2.mount has successfully entered the 'dead' state. Jul 27 12:34:17 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7-userdata-shm.mount has successfully entered the 'dead' state. Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: time="2024-07-27T12:34:26-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd2-httpd2 in 10 seconds, resorting to SIGKILL" Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.rzi7mD.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.rzi7mD.mount has successfully entered the 'dead' state. Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3.scope has successfully entered the 'dead' state. 
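As with httpd1 earlier, the test image's httpd does not react to SIGTERM, so podman escalates to SIGKILL once the default 10-second stop timeout expires (the warning at 12:34:26 above). The grace period can be shortened per invocation; an illustrative command, not something this role runs:

    # Hypothetical: allow 2 seconds instead of the default 10 before SIGKILL
    podman stop --time 2 httpd2-httpd2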
Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27782]: conmon 6e1f5612be551253f2fe : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice/libpod-6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3.scope/container/memory.events Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:26.149508411 -0400 EDT m=+10.090623235 container died 6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.qMW270.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.qMW270.mount has successfully entered the 'dead' state. Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:26.197462822 -0400 EDT m=+10.138577488 container cleanup 6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Removed slice cgroup machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice. ░░ Subject: A stop job for unit machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice has finished. ░░ ░░ The job identifier is 1988 and the job result is done. 
Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:26.272941688 -0400 EDT m=+10.214056348 container remove 6e1f5612be551253f2fe79871a7d9548668434308bbd846a6277503e79c2dcb3 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:26.304687182 -0400 EDT m=+10.245801734 container remove c6e2144414a77982f865350c330feb65ebeafe6b100d99dcba676c3ae3e6dcc7 (image=localhost/podman-pause:5.1.2-1720678294, name=0333edc697be-infra, pod_id=0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9, io.buildah.version=1.36.0, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice: Failed to open /run/systemd/transient/machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice: No such file or directory Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:26.314550824 -0400 EDT m=+10.255665364 pod remove 0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9 (image=, name=httpd2) Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:26.318845055 -0400 EDT m=+10.259959855 container kill 63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666 (image=localhost/podman-pause:5.1.2-1720678294, name=ec7e5af3b69e-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666.scope has successfully entered the 'dead' state. 
Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[27686]: conmon 63d4d0a11ecde0e4852d : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666.scope/container/memory.events Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:26.325802449 -0400 EDT m=+10.266917244 container died 63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666 (image=localhost/podman-pause:5.1.2-1720678294, name=ec7e5af3b69e-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 2024-07-27 12:34:26.401509886 -0400 EDT m=+10.342624436 container remove 63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666 (image=localhost/podman-pause:5.1.2-1720678294, name=ec7e5af3b69e-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: Pods stopped: Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: 0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9 Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: Pods removed: Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: Error: removing pod 0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9 cgroup: removing pod 0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9 cgroup: Unit machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice not loaded. Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: Secrets removed: Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: Error: %!s() Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com podman[39756]: Volumes removed: Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state. Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. ░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished. ░░ ░░ The job identifier is 1986 and the job result is done. Jul 27 12:34:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[39921]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-19805a002ba0855c535e4527c9f64000317aa4762aeed0d8dc1432b83c3e954d-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-19805a002ba0855c535e4527c9f64000317aa4762aeed0d8dc1432b83c3e954d-merged.mount has successfully entered the 'dead' state. 
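Two errors in the httpd2 teardown above are worth decoding. "Unit machine-libpod_pod_0333edc697be4d98d762999054d7c0ff5cfb31a70f64421cc7923e6327db66a9.slice not loaded" only means systemd had already removed the pod's transient cgroup slice (see the "Removed slice" entry earlier) before podman tried to delete it, so the pod is in fact gone; "Error: %!s()" looks like a Go format-verb artifact from printing an empty error value. The stop itself is driven by an Ansible systemd task; judging from the module arguments recorded later in this journal, the equivalent task is roughly the following sketch (unit name copied from the log):

    - name: Stop and disable the kube-play unit for httpd2
      ansible.builtin.systemd:
        name: 'podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service'
        scope: system
        state: stopped
        enabled: false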
Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-320f6585d76d041e1a84ae5e0fd228a66df066999c96402d5380545d4543e3f7-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-320f6585d76d041e1a84ae5e0fd228a66df066999c96402d5380545d4543e3f7-merged.mount has successfully entered the 'dead' state. Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-63d4d0a11ecde0e4852d7ed91937adec3edcbbc368dee21956dd1ffd89e38666-userdata-shm.mount has successfully entered the 'dead' state. Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40030]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40030]: ansible-containers.podman.podman_play version: 5.1.2, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40030]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40030]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40030]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40030]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Jul 27 12:34:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40150]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40257]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:29 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40371]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:34:29 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40479]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True 
get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40588]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40696]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[40716]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... ░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution. ░░ ░░ The job identifier is 1989. Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:31.806109207 -0400 EDT m=+0.032479857 pod stop 23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7 (image=, name=httpd3) Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.2sNu3U.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.2sNu3U.mount has successfully entered the 'dead' state. Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7.scope has successfully entered the 'dead' state. Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:31.831291215 -0400 EDT m=+0.057661969 container died 2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7 (image=localhost/podman-pause:5.1.2-1720678294, name=23fad04f45a5-infra, io.buildah.version=1.36.0, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.sKTUiE.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.sKTUiE.mount has successfully entered the 'dead' state. Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: run-rc5239bbd4bb64814889097c0b095abdd.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-rc5239bbd4bb64814889097c0b095abdd.scope has successfully entered the 'dead' state. 
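The systemd-escape call above is how the role maps a kube YAML path onto a template instance: "/" becomes "-" and a literal "-" becomes "\x2d", so /etc/containers/ansible-kubernetes.d/httpd3.yml yields podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, exactly the unit being stopped next. As an Ansible task this step is approximately (the register variable name is illustrative):

    - name: Derive the templated unit name for a kube file
      ansible.builtin.command: >-
        systemd-escape --template podman-kube@.service
        /etc/containers/ansible-kubernetes.d/httpd3.yml
      register: __kube_service_name
      changed_when: false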
Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: device veth1 left promiscuous mode Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com kernel: podman1: port 2(veth1) entered disabled state Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com NetworkManager[614]: [1722098071.8942] device (podman1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Starting Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 1991. Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Started Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 1991. Jul 27 12:34:31 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: run-netns-netns\x2d7258b6c7\x2d1195\x2dd974\x2d4d65\x2d0bbed5ea90aa.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2d7258b6c7\x2d1195\x2dd974\x2d4d65\x2d0bbed5ea90aa.mount has successfully entered the 'dead' state. Jul 27 12:34:32 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:32.030705553 -0400 EDT m=+0.257076065 container cleanup 2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7 (image=localhost/podman-pause:5.1.2-1720678294, name=23fad04f45a5-infra, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, io.buildah.version=1.36.0) Jul 27 12:34:32 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-03dd982b3a8196841e05e6e56966423c1695a8c3a7112695f21554710e2b184f-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-03dd982b3a8196841e05e6e56966423c1695a8c3a7112695f21554710e2b184f-merged.mount has successfully entered the 'dead' state. Jul 27 12:34:32 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7-userdata-shm.mount has successfully entered the 'dead' state. Jul 27 12:34:41 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: time="2024-07-27T12:34:41-04:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL" Jul 27 12:34:41 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.yc1Tdt.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.yc1Tdt.mount has successfully entered the 'dead' state. Jul 27 12:34:41 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016.scope has successfully entered the 'dead' state. Jul 27 12:34:41 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:41.870762818 -0400 EDT m=+10.097133605 container died c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:34:41 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:41.922609186 -0400 EDT m=+10.148979698 container cleanup c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0) Jul 27 12:34:41 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Removed slice cgroup machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice. ░░ Subject: A stop job for unit machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice has finished. ░░ ░░ The job identifier is 2055 and the job result is done. Jul 27 12:34:41 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. 
Jul 27 12:34:41 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:41.95622688 -0400 EDT m=+10.182597481 container remove c03c9747a3de31bde17b5fc07ff3e52768ce8c38d6ea62a0d1efc0335a30a016 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0) Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:42.010337631 -0400 EDT m=+10.236708265 container remove 2b27832b0d5b8477ee720890ce36e9d6a47ba00f9de9cea525112f09ae499aa7 (image=localhost/podman-pause:5.1.2-1720678294, name=23fad04f45a5-infra, pod_id=23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7, io.buildah.version=1.36.0, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice: Failed to open /run/systemd/transient/machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice: No such file or directory Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:42.020069709 -0400 EDT m=+10.246440225 pod remove 23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7 (image=, name=httpd3) Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:42.024530379 -0400 EDT m=+10.250901034 container kill 32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d (image=localhost/podman-pause:5.1.2-1720678294, name=df543e1580ae-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com conmon[29507]: conmon 32527d92ff1496aec930 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d.scope/container/memory.events Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: libpod-32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d.scope has successfully entered the 'dead' state. 
Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:42.034084282 -0400 EDT m=+10.260455047 container died 32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d (image=localhost/podman-pause:5.1.2-1720678294, name=df543e1580ae-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 2024-07-27 12:34:42.114386618 -0400 EDT m=+10.340757233 container remove 32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d (image=localhost/podman-pause:5.1.2-1720678294, name=df543e1580ae-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: Pods stopped: Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: 23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7 Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: Pods removed: Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: Error: removing pod 23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7 cgroup: removing pod 23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7 cgroup: Unit machine-libpod_pod_23fad04f45a51991a380a328a6a2f0ffaa2b75ae917fb0b9592e446e71bdf6d7.slice not loaded. Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: Secrets removed: Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: Error: %!s() Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com podman[40734]: Volumes removed: Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state. Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. ░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished. ░░ ░░ The job identifier is 1989 and the job result is done. Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[40926]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: tmp-crun.aTwSo9.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit tmp-crun.aTwSo9.mount has successfully entered the 'dead' state. Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-a465fb4025cf473524c2ad165868699f7bd67e9d2eb9a070d215e04c315085e9-merged.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-a465fb4025cf473524c2ad165868699f7bd67e9d2eb9a070d215e04c315085e9-merged.mount has successfully entered the 'dead' state. Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-815b9adb3f6bd9cf8fedcdf3dbb240a941b852d237bd1509e321d3cffdad3fd4-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-815b9adb3f6bd9cf8fedcdf3dbb240a941b852d237bd1509e321d3cffdad3fd4-merged.mount has successfully entered the 'dead' state. Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-32527d92ff1496aec930848e159d7db10ac3d35b3467bd4fc6fd1bb370de549d-userdata-shm.mount has successfully entered the 'dead' state. Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:34:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41035]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_options=None Jul 27 12:34:43 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41035]: ansible-containers.podman.podman_play version: 5.1.2, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml Jul 27 12:34:43 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Jul 27 12:34:43 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41155]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:34:43 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41262]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:43 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:34:44 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41376]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Jul 27 12:34:45 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41484]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:45 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41593]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uygnptkbgppkiebownaqzkxfcetfhfut ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098085.4120166-10146-26933035321417/AnsiballZ_podman_container_info.py' Jul 27 12:34:45 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41593]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:34:45 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41595]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Jul 27 12:34:45 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-41596.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 157. 
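The httpd3 cleanup recorded just above (podman_play with state=absent, which the module resolves to "podman kube play --down" as the PODMAN-PLAY-KUBE line for httpd2 showed, then removal of the kube file and "podman image prune -f") corresponds to roughly these tasks, with module names and arguments taken from the Invoked entries:

    - name: Tear down the httpd3 pod from its kube file
      containers.podman.podman_play:
        executable: podman
        kube_file: /etc/containers/ansible-kubernetes.d/httpd3.yml
        state: absent

    - name: Remove the kube file itself
      ansible.builtin.file:
        path: /etc/containers/ansible-kubernetes.d/httpd3.yml
        state: absent

    - name: Prune dangling images
      ansible.builtin.command: podman image prune -f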
Jul 27 12:34:45 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41593]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41710]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thnnloyspczaowblkxcgmqsayxmetozl ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098085.9641516-10154-146661791574890/AnsiballZ_command.py' Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41710]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41712]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-41713.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 161. Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41710]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41826]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uukzxcwqbcekriwwnrdnwdjxwpxsuono ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098086.4144602-10162-69763212571512/AnsiballZ_command.py' Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41826]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41828]: ansible-ansible.legacy.command Invoked with _raw_params=podman secret ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Started podman-41829.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 165. Jul 27 12:34:46 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[41826]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[41942]: ansible-ansible.legacy.command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopping User Manager for UID 3001... ░░ Subject: A stop job for unit user@3001.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user@3001.service has begun execution. ░░ ░░ The job identifier is 2056. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Activating special unit Exit the Session... 
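The sudo entries above show the pattern used for every rootless check: become podman_basic_user and export XDG_RUNTIME_DIR so the podman client finds that user's runtime directory and socket rather than root's. Expressed as an Ansible task (UID and command taken from the log):

    - name: List rootless secrets as podman_basic_user
      ansible.builtin.command: podman secret ls -n -q
      become: true
      become_user: podman_basic_user
      environment:
        XDG_RUNTIME_DIR: /run/user/3001
      changed_when: false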
Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopping podman-pause-f6326b37.scope... ░░ Subject: A stop job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has begun execution. ░░ ░░ The job identifier is 182. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Removed slice Slice /app/podman-kube. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 177 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Removed slice cgroup user-libpod_pod_b0265840b5794cd2683857fe98c60ce0418994a3cfd679c18930ffb4543990cc.slice. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 181 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped target Main User Target. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 173 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped target Basic System. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 172 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped target Paths. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 179 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped target Sockets. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 187 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped target Timers. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 176 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped Mark boot as successful after the user session has run 2 minutes. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 184 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped Daily Cleanup of User's Temporary Directories. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 186 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com dbus-broker[24566]: Dispatched 2321 messages @ 4(±14)μs / message. 
░░ Subject: Dispatched 2321 messages ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ This message is printed by dbus-broker when shutting down. It includes metric ░░ information collected during the runtime of dbus-broker. ░░ ░░ The message lists the number of dispatched messages ░░ (in this case 2321) as well as the mean time to ░░ handling a single message. The time measurements exclude the time spent on ░░ writing to and reading from the kernel. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopping D-Bus User Message Bus... ░░ Subject: A stop job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has begun execution. ░░ ░░ The job identifier is 175. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped Create User's Volatile Files and Directories. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 185 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped D-Bus User Message Bus. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 175 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Stopped podman-pause-f6326b37.scope. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 182 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Removed slice Slice /user. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 180 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: user.slice: Consumed 1.492s CPU time. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit UNIT completed and consumed the indicated resources. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Closed D-Bus User Message Bus Socket. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 178 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Removed slice User Application Slice. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 188 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Reached target Shutdown. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 171. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Finished Exit the Session. 
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 170. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[24213]: Reached target Exit the Session. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 169. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: user@3001.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user@3001.service has successfully entered the 'dead' state. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopped User Manager for UID 3001. ░░ Subject: A stop job for unit user@3001.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user@3001.service has finished. ░░ ░░ The job identifier is 2056 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: user@3001.service: Consumed 3.081s CPU time. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user@3001.service completed and consumed the indicated resources. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopping User Runtime Directory /run/user/3001... ░░ Subject: A stop job for unit user-runtime-dir@3001.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-runtime-dir@3001.service has begun execution. ░░ ░░ The job identifier is 2057. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: run-user-3001.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-user-3001.mount has successfully entered the 'dead' state. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: user-runtime-dir@3001.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user-runtime-dir@3001.service has successfully entered the 'dead' state. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Stopped User Runtime Directory /run/user/3001. ░░ Subject: A stop job for unit user-runtime-dir@3001.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-runtime-dir@3001.service has finished. ░░ ░░ The job identifier is 2057 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Removed slice User Slice of UID 3001. ░░ Subject: A stop job for unit user-3001.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-3001.slice has finished. ░░ ░░ The job identifier is 2059 and the job result is done. Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: user-3001.slice: Consumed 3.106s CPU time. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user-3001.slice completed and consumed the indicated resources. 
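This whole cascade, stopping the user manager (user@3001.service), its podman pause scope, and the /run/user/3001 runtime directory, was triggered by the "loginctl disable-linger podman_basic_user" command a few entries back: without an active session or lingering, systemd tears the per-user service tree down. The "removes" guard in the recorded module arguments makes the task a no-op once linger is already off; a minimal sketch:

    - name: Cancel lingering for the rootless test user
      ansible.builtin.command: loginctl disable-linger podman_basic_user
      args:
        removes: /var/lib/systemd/linger/podman_basic_user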
Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[42051]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xibbghgvwdleaszwjivpcpoxtfdsyfvt ; /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098087.3883593-10180-130411933652340/AnsiballZ_command.py' Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[42051]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[42053]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:47 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[42051]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:34:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[42167]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd2 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:48 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:34:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[42281]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd3 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:48 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
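"podman pod exists" exits 0 when the named pod is present and non-zero when it is not, so the three probes above let the test assert that httpd1, httpd2, and httpd3 are really gone. Presumably the test inverts the failure condition along these lines (the failed_when logic is an assumption about intent, not something visible in the log):

    - name: Assert the httpd2 pod has been removed
      ansible.builtin.command: podman pod exists httpd2
      register: __pod_check
      failed_when: __pod_check.rc == 0
      changed_when: false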
Jul 27 12:34:48 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[42395]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinqziaphdwtohqdprqjehgnrmkwjgxu ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.9 /var/tmp/ansible-tmp-1722098088.6110847-10202-102724840693194/AnsiballZ_command.py' Jul 27 12:34:48 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[42395]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Jul 27 12:34:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[42397]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:48 ip-10-31-12-229.us-east-1.aws.redhat.com sudo[42395]: pam_unix(sudo:session): session closed for user podman_basic_user Jul 27 12:34:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[42507]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[42617]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[42727]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:52 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[42941]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:52 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43055]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:34:53 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43163]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:34:53 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43271]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:34:56 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43380]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Jul 27 12:34:56 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43488]: ansible-getent Invoked with database=group key=3001 fail_key=False service=None split=None Jul 27 12:34:57 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43596]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True 
checksum_algorithm=sha1 Jul 27 12:34:57 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43705]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:58 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43813]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:59 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[43921]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:34:59 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44029]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:00 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44136]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:00 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44243]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:01 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44350]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:35:02 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44458]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:02 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44566]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:03 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44675]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44783]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Jul 27 12:35:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44892]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 
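The systemctl list-units probes recorded around 12:34:48-49 rely on "set -euo pipefail": a grep with no match then fails the whole pipeline, so matching a "podman-kube@...yml.service loaded active" row cleanly distinguishes a still-running unit from a stopped one. A sketch of one probe, again assuming the test treats a match (unit still active) as the failure case:

    - name: Check that the httpd2 kube unit is no longer loaded and active
      ansible.builtin.shell: |
        set -euo pipefail
        systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active '
      register: __unit_probe
      failed_when: __unit_probe.rc == 0
      changed_when: false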
Jul 27 12:35:05 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[44999]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:06 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45106]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:06 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45214]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:07 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45323]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:08 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45431]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Jul 27 12:35:09 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45540]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:09 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45647]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:10 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45754]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Jul 27 12:35:10 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45862]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[45969]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[46076]: ansible-file Invoked with path=/tmp/lsr_2r2po57x_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Jul 27 12:35:14 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[46218]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 27 12:35:15 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[46351]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:17 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[46565]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:17 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[46678]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:35:18 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[46786]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:18 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[46894]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:20 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47003]: ansible-tempfile Invoked with state=directory prefix=lsr_podman_config_ suffix= path=None Jul 27 12:35:20 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47110]: ansible-ansible.legacy.command Invoked with _raw_params=tar --ignore-failed-read -c -P -v -p -f /tmp/lsr_podman_config_8qpnebdr/backup.tar /etc/containers/containers.conf.d/50-systemroles.conf /etc/containers/registries.conf.d/50-systemroles.conf /etc/containers/storage.conf /etc/containers/policy.json _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:21 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47218]: ansible-user Invoked with name=user1 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on ip-10-31-12-229.us-east-1.aws.redhat.com update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Jul 27 12:35:21 ip-10-31-12-229.us-east-1.aws.redhat.com useradd[47220]: new group: name=user1, GID=3002 Jul 27 12:35:21 ip-10-31-12-229.us-east-1.aws.redhat.com useradd[47220]: new user: name=user1, UID=3002, GID=3002, home=/home/user1, shell=/bin/bash, from=/dev/pts/0 Jul 27 12:35:21 ip-10-31-12-229.us-east-1.aws.redhat.com rsyslogd[783]: imjournal: journal files changed, reloading... 
[v8.2310.0-4.el9 try https://www.rsyslog.com/e/0 ] Jul 27 12:35:23 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47441]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:23 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47554]: ansible-getent Invoked with database=passwd key=user1 fail_key=False service=None split=None Jul 27 12:35:24 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47662]: ansible-getent Invoked with database=group key=3002 fail_key=False service=None split=None Jul 27 12:35:24 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47770]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:24 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47879]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[47987]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48095]: ansible-file Invoked with path=/home/user1/.config/containers/containers.conf.d state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48202]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48287]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098126.232166-10902-165566967604309/.source.conf dest=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf owner=user1 mode=0644 follow=False _original_basename=toml.j2 checksum=94370d6e765779f1c58daf02f667b8f0b74d91f6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48394]: ansible-file Invoked with path=/home/user1/.config/containers/registries.conf.d state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48501]: ansible-ansible.legacy.stat Invoked with 
path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48586]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098127.4847517-10926-205661664857682/.source.conf dest=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf owner=user1 mode=0644 follow=False _original_basename=toml.j2 checksum=dfb9cd7094a81b3d1bb06512cc9b49a09c75639b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48693]: ansible-file Invoked with path=/home/user1/.config/containers state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48800]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/storage.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:29 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48885]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098128.6270175-10950-253187412711286/.source.conf dest=/home/user1/.config/containers/storage.conf owner=user1 mode=0644 follow=False _original_basename=toml.j2 checksum=d08574b6a1df63dbe1c939ff0bcc7c0b61d03044 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:29 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[48992]: ansible-file Invoked with path=/home/user1/.config/containers state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[49099]: ansible-stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[49206]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[49291]: ansible-ansible.legacy.copy Invoked with dest=/home/user1/.config/containers/policy.json owner=user1 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1722098130.2069187-10983-118981900026467/.source.json _original_basename=.0jw1ef5a follow=False checksum=6746c079ad563b735fc39f73d4876654b80b0a0d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None 
directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[49398]: ansible-stat Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[49507]: ansible-stat Invoked with path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:32 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[49616]: ansible-stat Invoked with path=/home/user1/.config/containers/storage.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:32 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[49725]: ansible-stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:34 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[49941]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:34 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50054]: ansible-getent Invoked with database=group key=3002 fail_key=False service=None split=None Jul 27 12:35:35 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50162]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:35 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50271]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:36 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50379]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:36 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50487]: ansible-file Invoked with path=/home/user1/.config/containers/containers.conf.d state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:37 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50594]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:37 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50648]: ansible-ansible.legacy.file Invoked with owner=user1 mode=0644 dest=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf _original_basename=toml.j2 recurse=False state=file 
path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:37 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50755]: ansible-file Invoked with path=/home/user1/.config/containers/registries.conf.d state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50862]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[50916]: ansible-ansible.legacy.file Invoked with owner=user1 mode=0644 dest=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf _original_basename=toml.j2 recurse=False state=file path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:39 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51023]: ansible-file Invoked with path=/home/user1/.config/containers state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:39 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51130]: ansible-ansible.legacy.stat Invoked with path=/home/user1/.config/containers/storage.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:39 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51184]: ansible-ansible.legacy.file Invoked with owner=user1 mode=0644 dest=/home/user1/.config/containers/storage.conf _original_basename=toml.j2 recurse=False state=file path=/home/user1/.config/containers/storage.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:40 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51291]: ansible-file Invoked with path=/home/user1/.config/containers state=directory owner=user1 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:40 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51398]: ansible-stat 
Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:41 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51507]: ansible-slurp Invoked with path=/home/user1/.config/containers/policy.json src=/home/user1/.config/containers/policy.json Jul 27 12:35:41 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51614]: ansible-stat Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51723]: ansible-stat Invoked with path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51832]: ansible-stat Invoked with path=/home/user1/.config/containers/storage.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[51941]: ansible-stat Invoked with path=/home/user1/.config/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:44 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[52157]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:45 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[52270]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:35:45 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[52378]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[52486]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[52595]: ansible-file Invoked with path=/etc/containers/containers.conf.d state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[52702]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[52787]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098146.98505-11311-157702852656223/.source.conf dest=/etc/containers/containers.conf.d/50-systemroles.conf owner=root mode=0644 follow=False _original_basename=toml.j2 checksum=94370d6e765779f1c58daf02f667b8f0b74d91f6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 
27 12:35:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[52894]: ansible-file Invoked with path=/etc/containers/registries.conf.d state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53001]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:48 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53086]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098148.153968-11335-155670780427611/.source.conf dest=/etc/containers/registries.conf.d/50-systemroles.conf owner=root mode=0644 follow=False _original_basename=toml.j2 checksum=dfb9cd7094a81b3d1bb06512cc9b49a09c75639b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53193]: ansible-file Invoked with path=/etc/containers state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53300]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/storage.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53385]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098149.3348846-11359-104175979065363/.source.conf dest=/etc/containers/storage.conf owner=root mode=0644 follow=False _original_basename=toml.j2 checksum=d08574b6a1df63dbe1c939ff0bcc7c0b61d03044 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:50 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53492]: ansible-file Invoked with path=/etc/containers state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:50 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53599]: ansible-stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:51 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53708]: ansible-slurp Invoked with path=/etc/containers/policy.json src=/etc/containers/policy.json Jul 27 12:35:51 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53815]: 
ansible-ansible.legacy.stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:51 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[53902]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/policy.json owner=root mode=0644 src=/root/.ansible/tmp/ansible-tmp-1722098151.2868085-11399-254438282606790/.source.json _original_basename=.mk9o5nx4 follow=False checksum=6746c079ad563b735fc39f73d4876654b80b0a0d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:52 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54009]: ansible-stat Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:52 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54118]: ansible-stat Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:53 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54227]: ansible-stat Invoked with path=/etc/containers/storage.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:53 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54336]: ansible-stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:55 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54552]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:35:56 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54665]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:35:56 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54773]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:35:57 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54882]: ansible-file Invoked with path=/etc/containers/containers.conf.d state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:57 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[54989]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:58 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55043]: ansible-ansible.legacy.file Invoked with owner=root mode=0644 dest=/etc/containers/containers.conf.d/50-systemroles.conf _original_basename=toml.j2 recurse=False state=file path=/etc/containers/containers.conf.d/50-systemroles.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:58 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55150]: ansible-file Invoked with path=/etc/containers/registries.conf.d state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:58 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55257]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:35:59 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55311]: ansible-ansible.legacy.file Invoked with owner=root mode=0644 dest=/etc/containers/registries.conf.d/50-systemroles.conf _original_basename=toml.j2 recurse=False state=file path=/etc/containers/registries.conf.d/50-systemroles.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:59 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55418]: ansible-file Invoked with path=/etc/containers state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:35:59 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55525]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/storage.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:36:00 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55579]: ansible-ansible.legacy.file Invoked with owner=root mode=0644 dest=/etc/containers/storage.conf _original_basename=toml.j2 recurse=False state=file path=/etc/containers/storage.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:00 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55686]: ansible-file Invoked with path=/etc/containers state=directory owner=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:01 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55793]: ansible-stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:01 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[55902]: ansible-slurp Invoked with path=/etc/containers/policy.json src=/etc/containers/policy.json Jul 27 12:36:02 
ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56009]: ansible-stat Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:02 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56118]: ansible-stat Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:02 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56227]: ansible-stat Invoked with path=/etc/containers/storage.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:03 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56336]: ansible-stat Invoked with path=/etc/containers/policy.json follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:03 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56445]: ansible-slurp Invoked with path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf src=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf Jul 27 12:36:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56552]: ansible-slurp Invoked with path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf src=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf Jul 27 12:36:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56659]: ansible-slurp Invoked with path=/home/user1/.config/containers/storage.conf src=/home/user1/.config/containers/storage.conf Jul 27 12:36:05 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56766]: ansible-slurp Invoked with path=/etc/containers/containers.conf.d/50-systemroles.conf src=/etc/containers/containers.conf.d/50-systemroles.conf Jul 27 12:36:05 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56873]: ansible-slurp Invoked with path=/etc/containers/registries.conf.d/50-systemroles.conf src=/etc/containers/registries.conf.d/50-systemroles.conf Jul 27 12:36:06 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[56980]: ansible-slurp Invoked with path=/etc/containers/storage.conf src=/etc/containers/storage.conf Jul 27 12:36:06 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57087]: ansible-file Invoked with state=absent path=/etc/containers/containers.conf.d/50-systemroles.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:06 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57194]: ansible-file Invoked with state=absent path=/etc/containers/registries.conf.d/50-systemroles.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:07 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57301]: ansible-file Invoked with state=absent path=/etc/containers/storage.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:07 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57408]: ansible-file Invoked with state=absent path=/etc/containers/policy.json recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:07 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57515]: ansible-file Invoked with state=absent path=/home/user1/.config/containers/containers.conf.d/50-systemroles.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:08 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57622]: ansible-file Invoked with state=absent path=/home/user1/.config/containers/registries.conf.d/50-systemroles.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:08 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57729]: ansible-file Invoked with state=absent path=/home/user1/.config/containers/storage.conf recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:08 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57836]: ansible-file Invoked with state=absent path=/home/user1/.config/containers/policy.json recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:09 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[57943]: ansible-ansible.legacy.command Invoked with _raw_params=tar xfvpP /tmp/lsr_podman_config_8qpnebdr/backup.tar _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:09 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[58051]: ansible-file Invoked with state=absent path=/tmp/lsr_podman_config_8qpnebdr recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:12 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[58193]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 
'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 27 12:36:12 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[58302]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:14 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[58516]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:15 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[58629]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:36:15 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[58737]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:16 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[58845]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:19 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[58989]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 27 12:36:22 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[59122]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:24 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[59336]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:24 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[59449]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:36:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[59557]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[59665]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[59809]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 27 12:36:31 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[59942]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:33 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[60156]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:33 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[60269]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Jul 27 12:36:34 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[60377]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:34 
ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[60485]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:36 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[60594]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:37 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[60702]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[60811]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[60918]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:36:39 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[61003]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098198.5436368-12426-233458295195767/.source.container dest=/etc/containers/systemd/nopull.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=670d64fc68a9768edb20cad26df2acc703542d85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:41 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[61217]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:41 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[61330]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:42 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[61438]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:44 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[61547]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:44 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[61655]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:46 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1835587247-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-metacopy\x2dcheck1835587247-merged.mount has successfully entered the 'dead' state. Jul 27 12:36:46 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:36:46 ip-10-31-12-229.us-east-1.aws.redhat.com podman[61780]: 2024-07-27 12:36:46.49069517 -0400 EDT m=+0.134779378 image pull-error this_is_a_bogus_image:latest short-name resolution enforced but cannot prompt without a TTY Jul 27 12:36:46 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[61893]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62000]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Jul 27 12:36:47 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62085]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1722098207.0700624-12590-149017880719930/.source.container dest=/etc/containers/systemd/bogus.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=1d087e679d135214e8ac9ccaf33b2222916efb7f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:49 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62299]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:36:50 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62412]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:50 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62520]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:52 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62629]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:36:53 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62737]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:54 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62846]: ansible-systemd Invoked with name=nopull.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None Jul 27 12:36:54 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[62954]: ansible-stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:36:56 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[63170]: ansible-file Invoked with path=/etc/containers/systemd/nopull.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:36:56 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[63277]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Jul 27 12:36:56 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:36:56 ip-10-31-12-229.us-east-1.aws.redhat.com quadlet-generator[63283]: Warning: bogus.container specifies the image "this_is_a_bogus_image" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details. Jul 27 12:36:56 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[63295]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 27 12:36:57 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:36:59 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[63638]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:37:00 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[63751]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:37:00 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[63859]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:37:02 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[63968]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None Jul 27 12:37:03 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[64076]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:37:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[64185]: ansible-systemd Invoked with name=bogus.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None Jul 27 12:37:04 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:37:04 ip-10-31-12-229.us-east-1.aws.redhat.com quadlet-generator[64193]: Warning: bogus.container specifies the image "this_is_a_bogus_image" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details. Jul 27 12:37:04 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[64205]: /etc/rc.d/rc.local is not marked executable, skipping. 
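The Warning from quadlet-generator above is emitted on every daemon-reload: the generator translates each /etc/containers/systemd/*.container file into a transient service unit and flags image names that are not fully qualified. The journal does not reproduce the test's .container payloads, so the unit below is a hypothetical minimal quadlet file, with the commands to inspect what the generator produced (the image name is illustrative):

    # Hypothetical minimal quadlet unit; the real nopull.container and
    # bogus.container contents are not shown in the journal:
    cat > /etc/containers/systemd/demo.container <<'EOF'
    [Unit]
    Description=Demo quadlet container

    [Container]
    # Fully qualified (registry/namespace/name:tag), so quadlet-generator
    # does not emit the short-name warning seen above:
    Image=quay.io/libpod/testimage:20210610

    [Install]
    WantedBy=default.target
    EOF

    systemctl daemon-reload                    # reruns the quadlet generator
    ls /run/systemd/generator/demo.service     # the generated transient unit
    systemctl cat demo.service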
Jul 27 12:37:04 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[64327]: ansible-stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:37:05 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[64543]: ansible-file Invoked with path=/etc/containers/systemd/bogus.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Jul 27 12:37:06 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[64650]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Jul 27 12:37:06 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Reloading. Jul 27 12:37:06 ip-10-31-12-229.us-east-1.aws.redhat.com systemd-rc-local-generator[64668]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 27 12:37:06 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Starting dnf makecache... ░░ Subject: A start job for unit dnf-makecache.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dnf-makecache.service has begun execution. ░░ ░░ The job identifier is 2060. Jul 27 12:37:06 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: Failed determining last makecache time. Jul 27 12:37:06 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Jul 27 12:37:06 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: Beaker Client - RedHatEnterpriseLinux9 7.7 kB/s | 1.5 kB 00:00 Jul 27 12:37:06 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: Beaker harness 71 kB/s | 1.3 kB 00:00 Jul 27 12:37:07 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: Copr repo for beakerlib-libraries owned by bgon 33 kB/s | 1.8 kB 00:00 Jul 27 12:37:07 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: CentOS Stream 9 - BaseOS 45 kB/s | 5.8 kB 00:00 Jul 27 12:37:07 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: CentOS Stream 9 - AppStream 44 kB/s | 5.9 kB 00:00 Jul 27 12:37:07 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: CentOS Stream 9 - Extras packages 56 kB/s | 6.3 kB 00:00 Jul 27 12:37:07 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: Extra Packages for Enterprise Linux 9 openh264 22 kB/s | 993 B 00:00 Jul 27 12:37:07 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: Extra Packages for Enterprise Linux 9 - Next - 90 kB/s | 27 kB 00:00 Jul 27 12:37:08 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: Copr repo for qa-tools owned by lpol 27 kB/s | 1.8 kB 00:00 Jul 27 12:37:08 ip-10-31-12-229.us-east-1.aws.redhat.com dnf[64684]: Metadata cache created. Jul 27 12:37:08 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: dnf-makecache.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit dnf-makecache.service has successfully entered the 'dead' state. 
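Interleaved with the service and package work above, the role repeatedly verifies that a candidate rootless user has subordinate ID ranges: it stats /usr/bin/getsubids and then runs getsubids and getsubids -g for the user, as the entries for user_quadlet_basic below show. A sketch of that check from a shell, with illustrative range values:

    # Is the shadow-utils subid tooling present?
    test -x /usr/bin/getsubids && echo "getsubids available"

    # Subordinate UID and GID ranges for the user; output format is
    # "INDEX: NAME START COUNT" (the values shown are illustrative):
    getsubids user_quadlet_basic        # 0: user_quadlet_basic 165536 65536
    getsubids -g user_quadlet_basic     # 0: user_quadlet_basic 165536 65536

    # useradd populated these ranges when it created the account:
    grep user_quadlet_basic /etc/subuid /etc/subgid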
Jul 27 12:37:08 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: Finished dnf makecache. ░░ Subject: A start job for unit dnf-makecache.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dnf-makecache.service has finished successfully. ░░ ░░ The job identifier is 2060. Jul 27 12:37:08 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[64914]: ansible-user Invoked with name=user_quadlet_basic uid=1111 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on ip-10-31-12-229.us-east-1.aws.redhat.com update_password=always group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Jul 27 12:37:08 ip-10-31-12-229.us-east-1.aws.redhat.com useradd[64916]: new group: name=user_quadlet_basic, GID=1111 Jul 27 12:37:08 ip-10-31-12-229.us-east-1.aws.redhat.com useradd[64916]: new user: name=user_quadlet_basic, UID=1111, GID=1111, home=/home/user_quadlet_basic, shell=/bin/bash, from=/dev/pts/0 Jul 27 12:37:10 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[65136]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:37:10 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[65249]: ansible-getent Invoked with database=passwd key=user_quadlet_basic fail_key=False service=None split=None Jul 27 12:37:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[65357]: ansible-getent Invoked with database=group key=1111 fail_key=False service=None split=None Jul 27 12:37:11 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[65465]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 27 12:37:12 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[65574]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:37:12 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[65682]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:37:13 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[65790]: ansible-ansible.legacy.command Invoked with _raw_params=set -x
set -o pipefail
exec 1>&2
#podman volume rm --all
#podman network prune -f
podman volume ls
podman network ls
podman secret ls
podman container ls
podman pod ls
podman images
systemctl list-units | grep quadlet
_uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 27 12:37:13 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]:
Jul 27 12:37:13 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:37:13 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:37:14 ip-10-31-12-229.us-east-1.aws.redhat.com systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Jul 27 12:37:15 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66049]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:37:16 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66162]: ansible-getent Invoked with database=group key=1111 fail_key=False service=None split=None
Jul 27 12:37:16 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66270]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:37:17 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66379]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:37:17 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66487]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:37:19 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66595]: ansible-ansible.legacy.command Invoked with _raw_params=journalctl -ex _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:37:22 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66738]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 27 12:37:22 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66871]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:37:23 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[66978]: ansible-ansible.legacy.dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 27 12:37:24 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67086]: ansible-ansible.legacy.dnf Invoked with name=['certmonger', 'python3-packaging'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 27 12:37:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67194]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:37:25 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67301]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:37:26 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67408]: ansible-ansible.legacy.systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jul 27 12:37:27 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67517]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=# # Ansible managed # # system_role:certificate provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None
Jul 27 12:37:27 ip-10-31-12-229.us-east-1.aws.redhat.com certmonger[8639]: 2024-07-27 12:37:27 [8639] Wrote to /var/lib/certmonger/requests/20240727163727
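The certificate_request call above is the certificate system role driving certmonger. A hedged sketch of role input that matches the logged parameters (certificate_requests is the role's documented entry point; all other fields are left at their defaults here):

- name: Generate certificates
  ansible.builtin.include_role:
    name: fedora.linux_system_roles.certificate
  vars:
    certificate_requests:
      - name: quadlet_demo
        dns: ['localhost']
        ca: self-sign

certmonger then self-signs the request and saves the result under /etc/pki/tls, which is where the slurp and cleanup calls below pick it up.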
Jul 27 12:37:27 ip-10-31-12-229.us-east-1.aws.redhat.com certmonger[67532]: Certificate in file "/etc/pki/tls/certs/quadlet_demo.crt" issued by CA and saved.
Jul 27 12:37:27 ip-10-31-12-229.us-east-1.aws.redhat.com certmonger[8639]: 2024-07-27 12:37:27 [8639] Wrote to /var/lib/certmonger/requests/20240727163727
Jul 27 12:37:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67639]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt
Jul 27 12:37:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67746]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key
Jul 27 12:37:28 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67853]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt
Jul 27 12:37:29 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[67960]: ansible-ansible.legacy.command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 27 12:37:29 ip-10-31-12-229.us-east-1.aws.redhat.com certmonger[8639]: 2024-07-27 12:37:29 [8639] Wrote to /var/lib/certmonger/requests/20240727163727
Jul 27 12:37:29 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[68068]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:37:29 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[68175]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:37:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[68282]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jul 27 12:37:30 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[68389]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:37:32 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[68603]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
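After reading the certificate and key back with slurp, the test tears the certificate down: it stops certmonger's tracking and deletes both files. A sketch of equivalent tasks, using only the command and paths shown in the log (task names are illustrative):

- name: Stop tracking the test certificate      # getcert at 12:37:29
  ansible.builtin.command: getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt

- name: Remove the certificate and key          # ansible-file at 12:37:29-12:37:30
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop:
    - /etc/pki/tls/certs/quadlet_demo.crt
    - /etc/pki/tls/private/quadlet_demo.key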
Jul 27 12:37:33 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[68716]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Jul 27 12:37:33 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[68824]: ansible-getent Invoked with database=group key=0 fail_key=False service=None split=None
Jul 27 12:37:34 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[68932]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:37:35 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[69041]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:37:35 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[69148]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 27 12:37:36 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[69255]: ansible-ansible.legacy.dnf Invoked with name=['firewalld'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 27 12:37:37 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[69363]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None
Jul 27 12:37:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[69472]: ansible-ansible.legacy.systemd Invoked with name=firewalld state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jul 27 12:37:38 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[69581]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Jul 27 12:37:39 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[69688]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Jul 27 12:37:40 ip-10-31-12-229.us-east-1.aws.redhat.com python3.9[69795]: ansible-ansible.legacy.command Invoked with _raw_params=journalctl -ex _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
TASK [Check]
******************************************************************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:130 Saturday 27 July 2024 12:37:40 -0400 (0:00:00.444) 0:00:18.992 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "-a" ], "delta": "0:00:00.035967", "end": "2024-07-27 12:37:40.689290", "rc": 0, "start": "2024-07-27 12:37:40.653323" } STDOUT: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES TASK [Check pods] ************************************************************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:134 Saturday 27 July 2024 12:37:40 -0400 (0:00:00.395) 0:00:19.388 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "pod", "ps", "--ctr-ids", "--ctr-names", "--ctr-status" ], "delta": "0:00:00.037267", "end": "2024-07-27 12:37:41.088001", "failed_when_result": false, "rc": 0, "start": "2024-07-27 12:37:41.050734" } STDOUT: POD ID NAME STATUS CREATED INFRA ID IDS NAMES STATUS TASK [Check systemd] *********************************************************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:139 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.398) 0:00:19.786 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euo pipefail; systemctl list-units --all | grep quadlet", "delta": "0:00:00.013305", "end": "2024-07-27 12:37:41.462772", "failed_when_result": false, "rc": 1, "start": "2024-07-27 12:37:41.449467" } MSG: non-zero return code TASK [LS] ********************************************************************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:147 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.373) 0:00:20.160 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ls", "-alrtF", "/etc/systemd/system" ], "delta": "0:00:00.003849", "end": "2024-07-27 12:37:41.828840", "failed_when_result": false, "rc": 0, "start": "2024-07-27 12:37:41.824991" } STDOUT: total 8 drwxr-xr-x. 2 root root 32 Jul 24 05:01 getty.target.wants/ lrwxrwxrwx. 1 root root 37 Jul 24 05:01 ctrl-alt-del.target -> /usr/lib/systemd/system/reboot.target lrwxrwxrwx. 1 root root 43 Jul 24 05:01 dbus.service -> /usr/lib/systemd/system/dbus-broker.service drwxr-xr-x. 4 root root 166 Jul 24 05:01 ../ drwxr-xr-x. 2 root root 152 Jul 24 05:01 sysinit.target.wants/ drwxr-xr-x. 2 root root 48 Jul 24 05:01 network-online.target.wants/ lrwxrwxrwx. 1 root root 57 Jul 24 05:01 dbus-org.freedesktop.nm-dispatcher.service -> /usr/lib/systemd/system/NetworkManager-dispatcher.service drwxr-xr-x. 2 root root 56 Jul 24 05:01 timers.target.wants/ drwxr-xr-x. 2 root root 38 Jul 24 05:01 dev-virtio\x2dports-org.qemu.guest_agent.0.device.wants/ drwxr-xr-x. 2 root root 31 Jul 24 05:01 basic.target.wants/ lrwxrwxrwx. 1 root root 41 Jul 24 05:03 default.target -> /usr/lib/systemd/system/multi-user.target drwxr-xr-x. 2 root root 70 Jul 24 05:17 sockets.target.wants/ drwxr-xr-x. 2 root root 31 Jul 24 05:17 remote-fs.target.wants/ drwxr-xr-x. 2 root root 59 Jul 24 05:18 sshd-keygen@.service.d/ drwxr-xr-x. 2 root root 119 Jul 24 05:18 cloud-init.target.wants/ drwxr-xr-x. 2 root root 4096 Jul 27 12:30 multi-user.target.wants/ lrwxrwxrwx. 1 root root 41 Jul 27 12:30 dbus-org.fedoraproject.FirewallD1.service -> /usr/lib/systemd/system/firewalld.service drwxr-xr-x. 
13 root root 4096 Jul 27 12:34 ./ TASK [Cleanup] ***************************************************************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:154 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.370) 0:00:20.530 ********* included: fedora.linux_system_roles.podman for managed_node1 TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 27 July 2024 12:37:41 -0400 (0:00:00.081) 0:00:20.612 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 27 July 2024 12:37:42 -0400 (0:00:00.086) 0:00:20.699 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 27 July 2024 12:37:42 -0400 (0:00:00.038) 0:00:20.737 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 27 July 2024 12:37:42 -0400 (0:00:00.031) 0:00:20.769 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:20 Saturday 27 July 2024 12:37:42 -0400 (0:00:00.030) 0:00:20.799 ********* ok: [managed_node1] => (item=RedHat.yml) => { "ansible_facts": { "__podman_packages": [ "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" } skipping: [managed_node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=CentOS_9.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS_9.yml", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=CentOS_9.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS_9.yml", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 27 July 2024 12:37:42 -0400 (0:00:00.067) 0:00:20.867 
********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.923) 0:00:21.791 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.034) 0:00:21.825 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages))", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:22 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.034) 0:00:21.859 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.028013", "end": "2024-07-27 12:37:43.550430", "rc": 0, "start": "2024-07-27 12:37:43.522417" } STDOUT: podman version 5.1.2 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.391) 0:00:22.251 ********* ok: [managed_node1] => { "ansible_facts": { "podman_version": "5.1.2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:32 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.036) 0:00:22.288 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:39 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.031) 0:00:22.319 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:49 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.037) 0:00:22.356 ********* META: end_host conditional evaluated to False, continuing execution for managed_node1 skipping: [managed_node1] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed_node1" } MSG: end_host conditional evaluated to false, continuing execution for managed_node1 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 27 July 2024 12:37:43 -0400 
(0:00:00.042) 0:00:22.399 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.068) 0:00:22.467 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.035) 0:00:22.502 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.063) 0:00:22.565 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:37:43 -0400 (0:00:00.044) 0:00:22.609 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.373) 0:00:22.983 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.041) 0:00:23.025 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097764.7689178, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "fc3588658699d4eb56c9a9cdf6678e83337ea7ee", "ctime": 1722097725.4462144, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6544612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719992328.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15680, "uid": 0, "version": "648322932", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: 
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.370) 0:00:23.395 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.032) 0:00:23.428 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.030) 0:00:23.458 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.031) 0:00:23.489 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.029) 0:00:23.518 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.029) 0:00:23.548 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.029) 0:00:23.577 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:37:44 -0400 (0:00:00.029) 0:00:23.607 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: 
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:62 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.028) 0:00:23.636 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:71 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.062) 0:00:23.698 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.064) 0:00:23.763 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.031) 0:00:23.794 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:74 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.030) 0:00:23.825 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.065) 0:00:23.890 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.060) 0:00:23.950 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:77 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.033) 0:00:23.984 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent 
dir exists] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:5 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.066) 0:00:24.050 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:13 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.030) 0:00:24.081 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.031) 0:00:24.112 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:6 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.069) 0:00:24.182 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:14 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.032) 0:00:24.214 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:19 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.030) 0:00:24.244 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:25 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.031) 0:00:24.276 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] ************************************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:86 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.030) 0:00:24.307 ********* included: fedora.linux_system_roles.firewall for managed_node1 TASK [fedora.linux_system_roles.firewall : Setup firewalld] ******************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:2 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.118) 0:00:24.426 ********* included: 
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml for managed_node1 TASK [fedora.linux_system_roles.firewall : Ensure ansible_facts used by role] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:2 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.058) 0:00:24.484 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if system is ostree] ********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:10 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.037) 0:00:24.522 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __firewall_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Set flag to indicate system is ostree] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:15 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.031) 0:00:24.553 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __firewall_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if transactional-update exists in /sbin] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:22 Saturday 27 July 2024 12:37:45 -0400 (0:00:00.062) 0:00:24.615 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __firewall_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Set flag if transactional-update exists] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:27 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.032) 0:00:24.648 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __firewall_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Install firewalld] ****************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:31 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.031) 0:00:24.679 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: firewalld TASK [fedora.linux_system_roles.firewall : Notify user that reboot is needed to apply changes] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:43 Saturday 27 July 2024 12:37:46 -0400 (0:00:00.926) 0:00:25.606 ********* skipping: [managed_node1] => { "false_condition": "__firewall_is_transactional | d(false)" } TASK [fedora.linux_system_roles.firewall : Reboot transactional update systems] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:48 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.037) 0:00:25.643 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : 
Fail if reboot is needed and not set] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:53 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.033) 0:00:25.677 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Collect service facts] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:5 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.033) 0:00:25.710 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Attempt to stop and disable conflicting services] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:9 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.031) 0:00:25.741 ********* skipping: [managed_node1] => (item=nftables) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "nftables", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=iptables) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "iptables", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=ufw) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall_disable_conflicting_services | bool", "item": "ufw", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.firewall : Unmask firewalld service] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:22 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.037) 0:00:25.779 ********* ok: [managed_node1] => { "changed": false, "name": "firewalld", "status": { "AccessSELinuxContext": "system_u:object_r:firewalld_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ActiveEnterTimestampMonotonic": "336405581", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "polkit.service dbus-broker.service dbus.socket sysinit.target system.slice basic.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:30:03 EDT", "AssertTimestampMonotonic": "335732159", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "1274519000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot 
cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ConditionTimestampMonotonic": "335732155", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target ip6tables.service ebtables.service ipset.service nftables.service iptables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlGroupId": "4302", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "\"man:firewalld(1)\"", "DynamicUser": "no", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "13034", "ExecMainStartTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ExecMainStartTimestampMonotonic": "335744183", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:30:03 EDT", "InactiveExitTimestampMonotonic": "335744735", "InvocationID": "65026572be3e4e69abbcad284fe4fa9d", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", 
"LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13964", "LimitNPROCSoft": "13964", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13964", "LimitSIGPENDINGSoft": "13964", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "13034", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "33083392", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "dbus.socket sysinit.target system.slice", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2024-07-27 12:37:06 EDT", "StateChangeTimestampMonotonic": "758987730", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22342", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": 
"multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Enable and start firewalld service] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:28 Saturday 27 July 2024 12:37:47 -0400 (0:00:00.527) 0:00:26.306 ********* ok: [managed_node1] => { "changed": false, "enabled": true, "name": "firewalld", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:firewalld_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ActiveEnterTimestampMonotonic": "336405581", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "polkit.service dbus-broker.service dbus.socket sysinit.target system.slice basic.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2024-07-27 12:30:03 EDT", "AssertTimestampMonotonic": "335732159", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "1274519000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ConditionTimestampMonotonic": "335732155", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target ip6tables.service ebtables.service ipset.service nftables.service iptables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlGroupId": "4302", "ControlPID": "0", "CoredumpFilter": "0x33", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "\"man:firewalld(1)\"", "DynamicUser": "no", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "13034", "ExecMainStartTimestamp": "Sat 2024-07-27 12:30:03 EDT", "ExecMainStartTimestampMonotonic": "335744183", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecReloadEx": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid 
$FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FinalKillSignal": "9", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "18446744073709551615", "IOReadOperations": "18446744073709551615", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "18446744073709551615", "IOWriteOperations": "18446744073709551615", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2024-07-27 12:30:03 EDT", "InactiveExitTimestampMonotonic": "335744735", "InvocationID": "65026572be3e4e69abbcad284fe4fa9d", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13964", "LimitNPROCSoft": "13964", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13964", "LimitSIGPENDINGSoft": "13964", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "13034", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "infinity", "MemoryCurrent": "33083392", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": 
"default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "dbus.socket sysinit.target system.slice", "Restart": "no", "RestartKillSignal": "15", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2024-07-27 12:37:06 EDT", "StateChangeTimestampMonotonic": "758987730", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22342", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Check if previous replaced is defined] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:34 Saturday 27 July 2024 12:37:48 -0400 (0:00:00.530) 0:00:26.837 ********* ok: [managed_node1] => { "ansible_facts": { "__firewall_previous_replaced": false, "__firewall_python_cmd": "/usr/bin/python3.9", "__firewall_report_changed": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Get config files, checksums before and remove] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:43 Saturday 27 July 2024 12:37:48 -0400 (0:00:00.048) 0:00:26.886 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Tell firewall module it is able to report changed] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:55 Saturday 27 July 2024 12:37:48 -0400 (0:00:00.032) 0:00:26.918 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Configure firewall] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71 Saturday 27 July 2024 12:37:48 -0400 
TASK [fedora.linux_system_roles.firewall : Configure firewall] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71
Saturday 27 July 2024 12:37:48 -0400 (0:00:00.029) 0:00:26.948 *********
ok: [managed_node1] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "__firewall_changed": false, "ansible_loop_var": "item", "changed": false, "item": { "port": "8000/tcp", "state": "enabled" } }
ok: [managed_node1] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "__firewall_changed": false, "ansible_loop_var": "item", "changed": false, "item": { "port": "9000/tcp", "state": "enabled" } }

TASK [fedora.linux_system_roles.firewall : Gather firewall config information] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:120
Saturday 27 July 2024 12:37:49 -0400 (0:00:01.041) 0:00:27.989 *********
skipping: [managed_node1] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall | length == 1", "item": { "port": "8000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "firewall | length == 1", "item": { "port": "9000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" }
skipping: [managed_node1] => { "changed": false }
MSG: All items skipped

TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] *******
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:130
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.049) 0:00:28.039 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "firewall | length == 1", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.firewall : Gather firewall config if no arguments] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:139
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.034) 0:00:28.073 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "firewall == None or firewall | length == 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] *******
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:144
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.031) 0:00:28.105 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "firewall == None or firewall | length == 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.firewall : Get config files, checksums after] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:153
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.032) 0:00:28.138 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.firewall : Calculate what has changed] *********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:163
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.029) 0:00:28.168 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__firewall_previous_replaced | bool", "skip_reason": "Conditional result was False" }
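The two loop results above map one-to-one onto the firewall role's firewall variable (the skip conditions test its length). A minimal sketch of an invocation that would produce them; the include_role form is an assumption, the test playbook may apply the role differently:

    - name: Open the demo ports
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.firewall
      vars:
        firewall:
          - port: 8000/tcp
            state: enabled
          - port: 9000/tcp
            state: enabled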
TASK [fedora.linux_system_roles.firewall : Show diffs] *************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:169
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.061) 0:00:28.229 *********
skipping: [managed_node1] => { "false_condition": "__firewall_previous_replaced | bool" }

TASK [Manage selinux for specified ports] **************************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:93
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.048) 0:00:28.278 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:100
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.031) 0:00:28.310 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] *******
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:104
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.031) 0:00:28.341 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle credential files - present] ****
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:113
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.028) 0:00:28.370 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle secrets] ***********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:122
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.027) 0:00:28.397 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed_node1 => (item=(censored due to no_log))
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed_node1 => (item=(censored due to no_log))
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed_node1 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Set variables part 1] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.102) 0:00:28.500 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_user": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set variables part 2] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.034) 0:00:28.534 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }
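The secret payloads themselves are hidden by no_log, but the input driving the three handle_secret.yml passes is the role's podman_secrets list, whose entries follow the shape of the containers.podman.podman_secret module options. A minimal sketch with purely illustrative names and values (the real names and data are censored in this run):

    podman_secrets:
      - name: example-secret                  # illustrative; real name is hidden by no_log
        data: "{{ example_secret_value }}"    # illustrative placeholder value
        state: present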
TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:13
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.039) 0:00:28.574 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:37:49 -0400 (0:00:00.051) 0:00:28.625 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.029) 0:00:28.655 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.030) 0:00:28.685 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:18
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.031) 0:00:28.716 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:34
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.030) 0:00:28.747 *********
[WARNING]: Using a variable for a task's 'args' is unsafe in some situations (see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Set variables part 1] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.514) 0:00:29.261 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_user": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set variables part 2] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.035) 0:00:29.297 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:13
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.070) 0:00:29.367 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.052) 0:00:29.419 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.029) 0:00:29.449 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.030) 0:00:29.480 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:18
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.029) 0:00:29.509 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:34
Saturday 27 July 2024 12:37:50 -0400 (0:00:00.029) 0:00:29.539 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Set variables part 1] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.421) 0:00:29.961 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_user": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set variables part 2] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.036) 0:00:29.997 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:13
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.040) 0:00:30.037 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1
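Every linger task in these passes skips because __podman_rootless is false. For a rootless user, the role would enable lingering so the user's systemd instance, and with it the quadlet services, survives logout. A sketch of the equivalent operation, assuming the usual loginctl mechanism; the literal task in manage_linger.yml may differ:

    - name: Enable linger if needed
      ansible.builtin.command: loginctl enable-linger {{ __podman_user }}
      args:
        # loginctl drops a marker file per lingering user, which makes the task idempotent
        creates: /var/lib/systemd/linger/{{ __podman_user }}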
TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.051) 0:00:30.089 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.029) 0:00:30.119 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.030) 0:00:30.150 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:18
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.030) 0:00:30.180 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:34
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.030) 0:00:30.210 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] *****
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129
Saturday 27 July 2024 12:37:51 -0400 (0:00:00.419) 0:00:30.630 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:136
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.028) 0:00:30.659 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log))
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log))
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log))
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log))
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log))
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed_node1 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.204) 0:00:30.863 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "quadlet-demo.kube", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Install]\nWantedBy=default.target\n\n[Unit]\nRequires=quadlet-demo-mysql.service\nAfter=quadlet-demo-mysql.service\n\n[Kube]\n# Point to the yaml file in the same directory\nYaml=quadlet-demo.yml\n# Use the quadlet-demo network\nNetwork=quadlet-demo.network\n# Publish the envoy proxy data port\nPublishPort=8000:8080\n# Publish the envoy proxy admin port\nPublishPort=9000:9901\n# Use the envoy proxy config map in the same directory\nConfigMap=envoy-proxy-configmap.yml", "__podman_quadlet_template_src": "" }, "changed": false }
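With its escape sequences expanded, the __podman_quadlet_str fact above is this quadlet unit (quadlet-demo.kube):

    [Install]
    WantedBy=default.target

    [Unit]
    Requires=quadlet-demo-mysql.service
    After=quadlet-demo-mysql.service

    [Kube]
    # Point to the yaml file in the same directory
    Yaml=quadlet-demo.yml
    # Use the quadlet-demo network
    Network=quadlet-demo.network
    # Publish the envoy proxy data port
    PublishPort=8000:8080
    # Publish the envoy proxy admin port
    PublishPort=9000:9901
    # Use the envoy proxy config map in the same directory
    ConfigMap=envoy-proxy-configmap.yml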
TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.047) 0:00:30.911 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.040) 0:00:30.952 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_file_src", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.034) 0:00:30.986 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo", "__podman_quadlet_type": "kube", "__podman_rootless": false }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.051) 0:00:31.037 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.065) 0:00:31.103 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.036) 0:00:31.139 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.036) 0:00:31.175 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get group information] ****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.044) 0:00:31.220 *********
ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set group name] ***********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35
Saturday 27 July 2024 12:37:52 -0400 (0:00:00.372) 0:00:31.592 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.042) 0:00:31.635 *********
ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097764.7689178, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "fc3588658699d4eb56c9a9cdf6678e83337ea7ee", "ctime": 1722097725.4462144, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6544612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719992328.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15680, "uid": 0, "version": "648322932", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.368) 0:00:32.004 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check group with getsubids] ***********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.031) 0:00:32.035 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.030) 0:00:32.066 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.031) 0:00:32.097 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.030) 0:00:32.128 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.031) 0:00:32.159 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.030) 0:00:32.189 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] *****
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.065) 0:00:32.255 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.031) 0:00:32.286 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": [ "quadlet-demo.yml" ], "__podman_service_name": "quadlet-demo.service", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.058) 0:00:32.344 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false }
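The part 3 and part 4 facts above reflect quadlet's naming convention: a <name>.kube file placed in the system quadlet directory is translated by the podman systemd generator into a <name>.service unit, which is what the role then manages:

    /etc/containers/systemd/quadlet-demo.kube  ->  quadlet-demo.service  (systemd scope: system, per __podman_systemd_scope)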
TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.034) 0:00:32.379 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.030) 0:00:32.410 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo.kube", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.080) 0:00:32.490 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.039) 0:00:32.530 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4
Saturday 27 July 2024 12:37:53 -0400 (0:00:00.082) 0:00:32.612 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stop and disable service] *************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.030) 0:00:32.642 *********
ok: [managed_node1] => { "changed": false, "failed_when_result": false }
MSG: Could not find the requested service quadlet-demo.service: host

TASK [fedora.linux_system_roles.podman : See if quadlet file exists] ***********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.521) 0:00:33.164 *********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.podman : Parse quadlet file] *******************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.366) 0:00:33.530 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Remove quadlet file] ******************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42
Saturday 27 July 2024 12:37:54 -0400 (0:00:00.035) 0:00:33.566 *********
ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo.kube", "state": "absent" }

TASK [fedora.linux_system_roles.podman : Refresh systemd] **********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.380) 0:00:33.946 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove managed resource] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.037) 0:00:33.984 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Remove volumes] ***********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.033) 0:00:34.018 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.048) 0:00:34.066 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false }

TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.032) 0:00:34.099 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.124554", "end": "2024-07-27 12:37:55.892421", "rc": 0, "start": "2024-07-27 12:37:55.767867" }
STDOUT: cdd133fcb08165701150c9b626930a3f00e504cebf4548d2cf419252395ad3a3 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127
Saturday 27 July 2024 12:37:55 -0400 (0:00:00.495) 0:00:34.595 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.101) 0:00:34.696 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }
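The cleanup sequence above (remove the .kube file, reload systemd only when a file was actually removed, then prune unused images) can be sketched as ordinary tasks; this is a sketch of the same steps under the names shown in this run, not the role's literal source:

    - name: Remove quadlet file
      ansible.builtin.file:
        path: /etc/containers/systemd/quadlet-demo.kube
        state: absent
      register: __podman_file_removed

    - name: Refresh systemd
      ansible.builtin.systemd:
        daemon_reload: true
      when: __podman_file_removed is changed

    - name: Prune images no longer in use
      ansible.builtin.command: podman image prune --all -f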
TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.033) 0:00:34.729 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.030) 0:00:34.759 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - images] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.031) 0:00:34.790 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.035212", "end": "2024-07-27 12:37:56.498955", "rc": 0, "start": "2024-07-27 12:37:56.463743" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.406) 0:00:35.197 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.036400", "end": "2024-07-27 12:37:56.908102", "rc": 0, "start": "2024-07-27 12:37:56.871702" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155
Saturday 27 July 2024 12:37:56 -0400 (0:00:00.414) 0:00:35.612 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.035189", "end": "2024-07-27 12:37:57.332321", "rc": 0, "start": "2024-07-27 12:37:57.297132" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.425) 0:00:36.037 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.033208", "end": "2024-07-27 12:37:57.752252", "rc": 0, "start": "2024-07-27 12:37:57.719044" }
STDOUT: podman podman-default-kube-network

TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173
Saturday 27 July 2024 12:37:57 -0400 (0:00:00.415) 0:00:36.452 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : For testing and debugging - services] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
Saturday 27 July 2024 12:37:58 -0400 (0:00:00.418) 0:00:36.870 *********
ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name":
"NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": 
"systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": 
"initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": 
"ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": 
"serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "snapd.seeded.service": { "name": "snapd.seeded.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": 
"systemd-boot-update.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": 
"inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles.service": { "name": "systemd-tmpfiles.service", "source": "systemd", "state": "stopped", "status": "not-found" }, 
"systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "yppasswdd.service": { "name": "yppasswdd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypserv.service": { "name": "ypserv.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypxfrd.service": { "name": "ypxfrd.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:38:00 -0400 (0:00:02.081) 0:00:38.952 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.032) 0:00:38.985 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: wp-pv-claim\n labels:\n app: wordpress\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 20Gi\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: quadlet-demo\nspec:\n containers:\n - name: wordpress\n image: quay.io/linux-system-roles/wordpress:4.8-apache\n env:\n - name: WORDPRESS_DB_HOST\n value: quadlet-demo-mysql\n - 
name: WORDPRESS_DB_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mysql-root-password-kube\n key: password\n volumeMounts:\n - name: wordpress-persistent-storage\n mountPath: /var/www/html\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n - name: envoy\n image: quay.io/linux-system-roles/envoyproxy:v1.25.0\n volumeMounts:\n - name: config-volume\n mountPath: /etc/envoy\n - name: certificates\n mountPath: /etc/envoy-certificates\n env:\n - name: ENVOY_UID\n value: \"0\"\n resources:\n requests:\n memory: \"64Mi\"\n cpu: \"250m\"\n limits:\n memory: \"128Mi\"\n cpu: \"500m\"\n volumes:\n - name: config-volume\n configMap:\n name: envoy-proxy-config\n - name: certificates\n secret:\n secretName: envoy-certificates\n - name: wordpress-persistent-storage\n persistentVolumeClaim:\n claimName: wp-pv-claim\n - name: www # not used - for testing hostpath\n hostPath:\n path: /tmp/httpd3\n - name: create # not used - for testing hostpath\n hostPath:\n path: /tmp/httpd3-create\n", "__podman_quadlet_template_src": "quadlet-demo.yml.j2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.094) 0:00:39.079 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.044) 0:00:39.124 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_str", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.036) 0:00:39.160 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo", "__podman_quadlet_type": "yml", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.052) 0:00:39.212 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.069) 0:00:39.282 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: 
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.120) 0:00:39.403 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.040) 0:00:39.444 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:38:00 -0400 (0:00:00.048) 0:00:39.492 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.381) 0:00:39.873 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.048) 0:00:39.922 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097764.7689178, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "fc3588658699d4eb56c9a9cdf6678e83337ea7ee", "ctime": 1722097725.4462144, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6544612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719992328.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15680, "uid": 0, "version": "648322932", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.382) 0:00:40.304 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.034) 0:00:40.339 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }
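
NOTE: all of the getsubids checks here are skipped because __podman_user is "root"; subordinate ID ranges only matter for rootless Podman. On a host where the role manages a non-root user, the equivalent manual check would look roughly like the sketch below ("poduser" is a hypothetical account name; getsubids is the shadow-utils helper the role stats at /usr/bin/getsubids):

    getsubids poduser       # subordinate UIDs, prints e.g. "0: poduser 100000 65536"
    getsubids -g poduser    # the same listing for subordinate GIDs

When getsubids is absent, the role instead reads /etc/subuid and /etc/subgid directly, which is what the "Get subuid file" / "Get subgid file" tasks below would do.
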
TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.034) 0:00:40.374 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.033) 0:00:40.407 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.033) 0:00:40.440 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.033) 0:00:40.474 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.032) 0:00:40.507 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.033) 0:00:40.540 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.033) 0:00:40.573 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:38:01 -0400 (0:00:00.058) 0:00:40.632 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false }
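
NOTE: __podman_quadlet_path resolves to /etc/containers/systemd because this run is rootful (__podman_rootless is false); for a rootless user the quadlet sources would live under ~/.config/containers/systemd instead, per podman-systemd.unit(5). To preview the systemd units Quadlet would generate from the files in that directory, something like the following should work (the generator path can vary by distribution):

    /usr/lib/systemd/system-generators/podman-system-generator --dryrun
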
TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.037) 0:00:40.669 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.069) 0:00:40.738 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo.yml", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.082) 0:00:40.821 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.043) 0:00:40.865 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.086) 0:00:40.951 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.038) 0:00:40.986 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_service_name | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.371) 0:00:41.025 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.034) 0:00:41.397 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" }
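
NOTE: the tasks that follow are the file-removal half of "Cleanup quadlets": __podman_state is "absent", so the role deletes the unit source and reloads systemd only when a file was actually removed. Done by hand, the same cleanup would amount to roughly:

    rm -f /etc/containers/systemd/quadlet-demo.yml
    systemctl daemon-reload    # only needed when a file really went away

Here the stat above already reported "exists": false, so the removal comes back ok rather than changed, and the "Refresh systemd" task is skipped.
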
TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42 Saturday 27 July 2024 12:38:02 -0400 (0:00:00.034) 0:00:41.431 ********* ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo.yml", "state": "absent" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.377) 0:00:41.809 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.035) 0:00:41.846 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.048) 0:00:41.881 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.036) 0:00:41.930 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.036) 0:00:41.966 ********* changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.036380", "end": "2024-07-27 12:38:03.677104", "rc": 0, "start": "2024-07-27 12:38:03.640724" }
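
NOTE: the prune recorded above is the role's post-cleanup housekeeping; run directly it would be:

    podman image prune --all -f    # remove every image not referenced by a container; -f skips the confirmation prompt

--all widens the prune from merely dangling images to all unused ones, which is consistent with the empty "podman images -n" listing a few tasks later.
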
TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.414) 0:00:42.381 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.062) 0:00:42.443 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.072) 0:00:42.516 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.036) 0:00:42.552 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 27 July 2024 12:38:03 -0400 (0:00:00.034) 0:00:42.586 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.035537", "end": "2024-07-27 12:38:04.297404", "rc": 0, "start": "2024-07-27 12:38:04.261867" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146 Saturday 27 July 2024 12:38:04 -0400 (0:00:00.414) 0:00:43.000 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.034484", "end": "2024-07-27 12:38:04.712223", "rc": 0, "start": "2024-07-27 12:38:04.677739" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155 Saturday 27 July 2024 12:38:04 -0400 (0:00:00.413) 0:00:43.417 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.034448", "end": "2024-07-27 12:38:05.130250", "rc": 0, "start": "2024-07-27 12:38:05.095802" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164 Saturday 27 July 2024 12:38:05 -0400 (0:00:00.414) 0:00:43.831 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.033820", "end": "2024-07-27 12:38:05.540386", "rc": 0, "start": "2024-07-27 12:38:05.506566" } STDOUT: podman podman-default-kube-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173 Saturday 27 July 2024 12:38:05 -0400 (0:00:00.418) 0:00:44.246 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183 Saturday 27 July 2024 12:38:06 -0400 (0:00:00.418) 0:00:44.665 ********* ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": 
"NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", 
"state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": 
"not-found" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", 
"state": "running", "status": "active" }, "snapd.seeded.service": { "name": "snapd.seeded.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", 
"status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": 
"systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles.service": { "name": "systemd-tmpfiles.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "yppasswdd.service": { "name": "yppasswdd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypserv.service": { "name": "ypserv.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypxfrd.service": { "name": "ypxfrd.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:38:07 -0400 (0:00:01.767) 0:00:46.433 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 27 July 2024 12:38:07 -0400 (0:00:00.033) 0:00:46.466 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "envoy-proxy-configmap.yml", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: envoy-proxy-config\ndata:\n envoy.yaml: |\n admin:\n address:\n socket_address:\n address: 0.0.0.0\n port_value: 9901\n\n static_resources:\n listeners:\n - name: listener_0\n address:\n socket_address:\n address: 0.0.0.0\n port_value: 8080\n filter_chains:\n - filters:\n - name: envoy.filters.network.http_connection_manager\n typed_config:\n \"@type\": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n stat_prefix: ingress_http\n codec_type: AUTO\n route_config:\n name: local_route\n 
virtual_hosts:\n - name: local_service\n domains: [\"*\"]\n routes:\n - match:\n prefix: \"/\"\n route:\n cluster: backend\n http_filters:\n - name: envoy.filters.http.router\n typed_config:\n \"@type\": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n transport_socket:\n name: envoy.transport_sockets.tls\n typed_config:\n \"@type\": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n common_tls_context:\n tls_certificates:\n - certificate_chain:\n filename: /etc/envoy-certificates/certificate.pem\n private_key:\n filename: /etc/envoy-certificates/certificate.key\n clusters:\n - name: backend\n connect_timeout: 5s\n type: STATIC\n dns_refresh_rate: 1800s\n lb_policy: ROUND_ROBIN\n load_assignment:\n cluster_name: backend\n endpoints:\n - lb_endpoints:\n - endpoint:\n address:\n socket_address:\n address: 127.0.0.1\n port_value: 80", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 27 July 2024 12:38:07 -0400 (0:00:00.048) 0:00:46.515 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 27 July 2024 12:38:07 -0400 (0:00:00.044) 0:00:46.559 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_file_src", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 27 July 2024 12:38:07 -0400 (0:00:00.036) 0:00:46.595 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "envoy-proxy-configmap", "__podman_quadlet_type": "yml", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 27 July 2024 12:38:08 -0400 (0:00:00.051) 0:00:46.647 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:38:08 -0400 (0:00:00.107) 0:00:46.755 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:38:08 -0400 (0:00:00.040) 0:00:46.795 ********* skipping: [managed_node1] => { "changed": false, 
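[NOTE]: the "\n"-escaped __podman_quadlet_str above is hard to read. Unescaped, it is the envoy-proxy-configmap.yml payload: a Kubernetes ConfigMap carrying an Envoy bootstrap config. A condensed reconstruction follows (indentation is approximate, since the escaped string does not preserve it; elided parts are marked with comments):

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: envoy-proxy-config
    data:
      envoy.yaml: |
        admin:
          address:
            socket_address: { address: 0.0.0.0, port_value: 9901 }
        static_resources:
          listeners:
            - name: listener_0
              address:
                socket_address: { address: 0.0.0.0, port_value: 8080 }
              filter_chains:
                - filters:
                    - name: envoy.filters.network.http_connection_manager
                      # ... routes "/" on any domain to cluster "backend";
                      # TLS via /etc/envoy-certificates/certificate.pem and .key ...
          clusters:
            - name: backend
              connect_timeout: 5s
              type: STATIC
              lb_policy: ROUND_ROBIN
              # ... single static endpoint: 127.0.0.1:80 ...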
"false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:38:08 -0400 (0:00:00.040) 0:00:46.835 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:38:08 -0400 (0:00:00.049) 0:00:46.885 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:38:08 -0400 (0:00:00.374) 0:00:47.259 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:38:08 -0400 (0:00:00.047) 0:00:47.306 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097764.7689178, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "fc3588658699d4eb56c9a9cdf6678e83337ea7ee", "ctime": 1722097725.4462144, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6544612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719992328.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15680, "uid": 0, "version": "648322932", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.380) 0:00:47.687 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.033) 0:00:47.720 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.033) 0:00:47.754 ********* skipping: 
[managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.033) 0:00:47.788 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.032) 0:00:47.821 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.033) 0:00:47.855 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.033) 0:00:47.889 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.032) 0:00:47.921 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.033) 0:00:47.955 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.060) 0:00:48.015 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:38:09 -0400 (0:00:00.037) 
TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77
Saturday 27 July 2024 12:38:09 -0400 (0:00:00.037) 0:00:48.052 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87
Saturday 27 July 2024 12:38:09 -0400 (0:00:00.070) 0:00:48.123 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/envoy-proxy-configmap.yml", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103
Saturday 27 July 2024 12:38:09 -0400 (0:00:00.083) 0:00:48.206 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110
Saturday 27 July 2024 12:38:09 -0400 (0:00:00.043) 0:00:48.250 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4
Saturday 27 July 2024 12:38:09 -0400 (0:00:00.085) 0:00:48.335 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stop and disable service] *************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
Saturday 27 July 2024 12:38:09 -0400 (0:00:00.033) 0:00:48.369 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_service_name | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : See if quadlet file exists] ***********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33
Saturday 27 July 2024 12:38:09 -0400 (0:00:00.037) 0:00:48.406 *********
ok: [managed_node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.podman : Parse quadlet file] *******************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38
Saturday 27 July 2024 12:38:10 -0400 (0:00:00.364) 0:00:48.771 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove quadlet file] ******************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42
Saturday 27 July 2024 12:38:10 -0400 (0:00:00.033) 0:00:48.805 *********
ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/envoy-proxy-configmap.yml", "state": "absent" }

TASK [fedora.linux_system_roles.podman : Refresh systemd] **********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48
Saturday 27 July 2024 12:38:10 -0400 (0:00:00.375) 0:00:49.181 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Remove managed resource] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58
Saturday 27 July 2024 12:38:10 -0400 (0:00:00.037) 0:00:49.218 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Remove volumes] ***********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95
Saturday 27 July 2024 12:38:10 -0400 (0:00:00.035) 0:00:49.253 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112
Saturday 27 July 2024 12:38:10 -0400 (0:00:00.047) 0:00:49.301 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false }

TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116
Saturday 27 July 2024 12:38:10 -0400 (0:00:00.035) 0:00:49.337 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.033566", "end": "2024-07-27 12:38:11.044238", "rc": 0, "start": "2024-07-27 12:38:11.010672" }
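[NOTE]: the prune task above shells out to podman rather than using a module. The equivalent manual invocation is below; "--all" removes every image not referenced by at least one container (not only dangling layers), and "-f" skips the confirmation prompt:

    podman image prune --all -f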
TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127
Saturday 27 July 2024 12:38:11 -0400 (0:00:00.411) 0:00:49.749 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:38:11 -0400 (0:00:00.066) 0:00:49.815 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:38:11 -0400 (0:00:00.070) 0:00:49.886 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:38:11 -0400 (0:00:00.035) 0:00:49.921 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - images] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137
Saturday 27 July 2024 12:38:11 -0400 (0:00:00.033) 0:00:49.954 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.033538", "end": "2024-07-27 12:38:11.665259", "rc": 0, "start": "2024-07-27 12:38:11.631721" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146
Saturday 27 July 2024 12:38:11 -0400 (0:00:00.414) 0:00:50.369 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.035106", "end": "2024-07-27 12:38:12.081697", "rc": 0, "start": "2024-07-27 12:38:12.046591" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155
Saturday 27 July 2024 12:38:12 -0400 (0:00:00.419) 0:00:50.789 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.035058", "end": "2024-07-27 12:38:12.503757", "rc": 0, "start": "2024-07-27 12:38:12.468699" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164
Saturday 27 July 2024 12:38:12 -0400 (0:00:00.419) 0:00:51.208 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.034714", "end": "2024-07-27 12:38:12.925901", "rc": 0, "start": "2024-07-27 12:38:12.891187" }
STDOUT:
podman
podman-default-kube-network

TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173
Saturday 27 July 2024 12:38:12 -0400 (0:00:00.425) 0:00:51.634 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : For testing and debugging - services] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
Saturday 27 July 2024 12:38:13 -0400 (0:00:00.421) 0:00:52.055 *********
ok: [managed_node1] => { "ansible_facts": { "services": { ...identical to the earlier "For testing and debugging - services" listing above... } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Create and update quadlets] ***********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114
Saturday 27 July 2024 12:38:15 -0400 (0:00:01.790) 0:00:53.845 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 27 July 2024 12:38:15 -0400 (0:00:00.036) 0:00:53.882 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Install]\nWantedBy=default.target\n\n[Container]\nImage=quay.io/linux-system-roles/mysql:5.6\nContainerName=quadlet-demo-mysql\nVolume=quadlet-demo-mysql.volume:/var/lib/mysql\nVolume=/tmp/quadlet_demo:/var/lib/quadlet_demo:Z\nNetwork=quadlet-demo.network\nSecret=mysql-root-password-container,type=env,target=MYSQL_ROOT_PASSWORD\nHealthCmd=/bin/true\nHealthOnFailure=kill\n", "__podman_quadlet_template_src": "quadlet-demo-mysql.container.j2" }, "changed": false }
true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 27 July 2024 12:38:15 -0400 (0:00:00.045) 0:00:54.026 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_str", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 27 July 2024 12:38:15 -0400 (0:00:00.037) 0:00:54.063 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo-mysql", "__podman_quadlet_type": "container", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 27 July 2024 12:38:15 -0400 (0:00:00.053) 0:00:54.117 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 July 2024 12:38:15 -0400 (0:00:00.109) 0:00:54.226 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:38:15 -0400 (0:00:00.040) 0:00:54.267 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:38:15 -0400 (0:00:00.040) 0:00:54.307 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:38:15 -0400 (0:00:00.050) 0:00:54.357 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.375) 0:00:54.733 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: 
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.047) 0:00:54.780 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097764.7689178, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "fc3588658699d4eb56c9a9cdf6678e83337ea7ee", "ctime": 1722097725.4462144, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6544612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719992328.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15680, "uid": 0, "version": "648322932", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.377) 0:00:55.158 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.034) 0:00:55.192 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.034) 0:00:55.227 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.034) 0:00:55.261 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.033) 0:00:55.295 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.034) 0:00:55.329 ********* skipping: [managed_node1] => { "changed": false, 
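[NOTE]: because /usr/bin/getsubids exists (stat result above), the /etc/subuid and /etc/subgid file-reading fallback tasks are skipped, and because __podman_user is root the getsubids checks are skipped as well. For a rootless user the role would verify subordinate ID ranges; a sketch with a hypothetical user (the getsubids output format shown is approximate):

    $ grep someuser /etc/subuid        # entries have the form user:first_id:count
    someuser:100000:65536
    $ getsubids someuser               # preferred when shadow-utils provides it
    0: someuser 100000 65536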
"false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.033) 0:00:55.363 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.032) 0:00:55.395 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.033) 0:00:55.429 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-demo-mysql.service", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.059) 0:00:55.489 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.036) 0:00:55.525 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87 Saturday 27 July 2024 12:38:16 -0400 (0:00:00.070) 0:00:55.596 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_images": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo-mysql.container", "__podman_volumes": [ "/tmp/quadlet_demo" ] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103 Saturday 27 July 2024 12:38:17 -0400 (0:00:00.084) 0:00:55.680 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: 
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110 Saturday 27 July 2024 12:38:17 -0400 (0:00:00.042) 0:00:55.723 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 27 July 2024 12:38:17 -0400 (0:00:00.085) 0:00:55.809 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 27 July 2024 12:38:17 -0400 (0:00:00.033) 0:00:55.842 ********* ok: [managed_node1] => { "changed": false, "failed_when_result": false } MSG: Could not find the requested service quadlet-demo-mysql.service: host TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33 Saturday 27 July 2024 12:38:17 -0400 (0:00:00.525) 0:00:56.368 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38 Saturday 27 July 2024 12:38:18 -0400 (0:00:00.370) 0:00:56.739 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42 Saturday 27 July 2024 12:38:18 -0400 (0:00:00.033) 0:00:56.773 ********* ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo-mysql.container", "state": "absent" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48 Saturday 27 July 2024 12:38:18 -0400 (0:00:00.371) 0:00:57.145 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58 Saturday 27 July 2024 12:38:18 -0400 (0:00:00.037) 0:00:57.182 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95 Saturday 27 July 2024 12:38:18 -0400 (0:00:00.035) 0:00:57.218 ********* skipping: [managed_node1] => { "censored": "the 
output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112 Saturday 27 July 2024 12:38:18 -0400 (0:00:00.048) 0:00:57.266 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 27 July 2024 12:38:18 -0400 (0:00:00.035) 0:00:57.302 ********* changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.036568", "end": "2024-07-27 12:38:19.010300", "rc": 0, "start": "2024-07-27 12:38:18.973732" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127 Saturday 27 July 2024 12:38:19 -0400 (0:00:00.408) 0:00:57.711 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 27 July 2024 12:38:19 -0400 (0:00:00.098) 0:00:57.810 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:38:19 -0400 (0:00:00.034) 0:00:57.845 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:38:19 -0400 (0:00:00.034) 0:00:57.880 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 27 July 2024 12:38:19 -0400 (0:00:00.034) 0:00:57.915 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.035342", "end": "2024-07-27 12:38:19.622098", "rc": 0, "start": "2024-07-27 12:38:19.586756" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146 Saturday 27 July 2024 12:38:19 -0400 (0:00:00.408) 0:00:58.323 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.035240", "end": "2024-07-27 12:38:20.032646", 
"rc": 0, "start": "2024-07-27 12:38:19.997406" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155 Saturday 27 July 2024 12:38:20 -0400 (0:00:00.413) 0:00:58.737 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.035573", "end": "2024-07-27 12:38:20.449588", "rc": 0, "start": "2024-07-27 12:38:20.414015" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164 Saturday 27 July 2024 12:38:20 -0400 (0:00:00.415) 0:00:59.152 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.034653", "end": "2024-07-27 12:38:20.860383", "rc": 0, "start": "2024-07-27 12:38:20.825730" } STDOUT: podman podman-default-kube-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173 Saturday 27 July 2024 12:38:20 -0400 (0:00:00.410) 0:00:59.563 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183 Saturday 27 July 2024 12:38:21 -0400 (0:00:00.422) 0:00:59.985 ********* ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": 
"inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": 
"dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, 
"modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", 
"status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "snapd.seeded.service": { "name": "snapd.seeded.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, 
"sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": 
"systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles.service": { "name": "systemd-tmpfiles.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": 
"unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "yppasswdd.service": { "name": "yppasswdd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypserv.service": { "name": "ypserv.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypxfrd.service": { "name": "ypxfrd.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:38:23 -0400 (0:00:01.780) 0:01:01.765 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.034) 0:01:01.800 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "quadlet-demo-mysql.volume", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Volume]", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.048) 0:01:01.848 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.044) 0:01:01.893 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_file_src", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.036) 0:01:01.929 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo-mysql", "__podman_quadlet_type": "volume", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.051) 0:01:01.980 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2 Saturday 27 
July 2024 12:38:23 -0400 (0:00:00.107) 0:01:02.088 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.040) 0:01:02.128 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.040) 0:01:02.169 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get group information] **************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.049) 0:01:02.219 ********* ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Set group name] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35 Saturday 27 July 2024 12:38:23 -0400 (0:00:00.378) 0:01:02.598 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.048) 0:01:02.646 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097764.7689178, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "fc3588658699d4eb56c9a9cdf6678e83337ea7ee", "ctime": 1722097725.4462144, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6544612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719992328.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15680, "uid": 0, "version": "648322932", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.375) 0:01:03.021 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check group with getsubids] *********** task path: 
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.034) 0:01:03.056 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.034) 0:01:03.091 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.034) 0:01:03.125 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:79 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.032) 0:01:03.158 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.033) 0:01:03.191 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.033) 0:01:03.225 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.032) 0:01:03.258 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.034) 0:01:03.292 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-demo-mysql-volume.service", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": 
false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.059) 0:01:03.352 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.036) 0:01:03.388 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.071) 0:01:03.460 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo-mysql.volume", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.084) 0:01:03.545 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110 Saturday 27 July 2024 12:38:24 -0400 (0:00:00.042) 0:01:03.587 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 27 July 2024 12:38:25 -0400 (0:00:00.085) 0:01:03.673 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 27 July 2024 12:38:25 -0400 (0:00:00.034) 0:01:03.708 ********* ok: [managed_node1] => { "changed": false, "failed_when_result": false } MSG: Could not find the requested service quadlet-demo-mysql-volume.service: host TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33 Saturday 27 July 2024 12:38:25 -0400 (0:00:00.528) 0:01:04.236 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38 
Saturday 27 July 2024 12:38:25 -0400 (0:00:00.374) 0:01:04.611 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42 Saturday 27 July 2024 12:38:26 -0400 (0:00:00.038) 0:01:04.649 ********* ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo-mysql.volume", "state": "absent" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48 Saturday 27 July 2024 12:38:26 -0400 (0:00:00.382) 0:01:05.032 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58 Saturday 27 July 2024 12:38:26 -0400 (0:00:00.041) 0:01:05.074 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95 Saturday 27 July 2024 12:38:26 -0400 (0:00:00.038) 0:01:05.112 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112 Saturday 27 July 2024 12:38:26 -0400 (0:00:00.049) 0:01:05.161 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 27 July 2024 12:38:26 -0400 (0:00:00.036) 0:01:05.198 ********* changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.032205", "end": "2024-07-27 12:38:26.902077", "rc": 0, "start": "2024-07-27 12:38:26.869872" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127 Saturday 27 July 2024 12:38:26 -0400 (0:00:00.412) 0:01:05.610 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 27 July 2024 12:38:27 -0400 (0:00:00.109) 0:01:05.720 ********* skipping: [managed_node1] => { "changed": false, "false_condition": 
"__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 27 July 2024 12:38:27 -0400 (0:00:00.035) 0:01:05.755 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 27 July 2024 12:38:27 -0400 (0:00:00.035) 0:01:05.790 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 27 July 2024 12:38:27 -0400 (0:00:00.034) 0:01:05.825 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.034208", "end": "2024-07-27 12:38:27.535212", "rc": 0, "start": "2024-07-27 12:38:27.501004" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146 Saturday 27 July 2024 12:38:27 -0400 (0:00:00.412) 0:01:06.237 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.034954", "end": "2024-07-27 12:38:27.947002", "rc": 0, "start": "2024-07-27 12:38:27.912048" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155 Saturday 27 July 2024 12:38:28 -0400 (0:00:00.412) 0:01:06.650 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.036220", "end": "2024-07-27 12:38:28.362285", "rc": 0, "start": "2024-07-27 12:38:28.326065" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164 Saturday 27 July 2024 12:38:28 -0400 (0:00:00.419) 0:01:07.069 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.034534", "end": "2024-07-27 12:38:28.786303", "rc": 0, "start": "2024-07-27 12:38:28.751769" } STDOUT: podman podman-default-kube-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173 Saturday 27 July 2024 12:38:28 -0400 (0:00:00.420) 0:01:07.490 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183 Saturday 27 July 2024 12:38:29 
-0400 (0:00:00.416) 0:01:07.907 ********* ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": 
"dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": 
"initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nm-priv-helper.service": 
{ "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": 
"selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "snapd.seeded.service": { "name": "snapd.seeded.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": 
"systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": 
"stopped", "status": "not-found" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"systemd-tmpfiles.service": { "name": "systemd-tmpfiles.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "yppasswdd.service": { "name": "yppasswdd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypserv.service": { "name": "ypserv.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypxfrd.service": { "name": "ypxfrd.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:38:31 -0400 (0:00:01.764) 0:01:09.671 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 27 July 2024 12:38:31 -0400 (0:00:00.034) 0:01:09.705 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_file_src": "quadlet-demo.network", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Network]\nSubnet=192.168.30.0/24\nGateway=192.168.30.1\nLabel=app=wordpress", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: 
TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.049) 0:01:09.755 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.043) 0:01:09.799 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_quadlet_file_src", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.036) 0:01:09.835 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-demo", "__podman_quadlet_type": "network", "__podman_rootless": false }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.053) 0:01:09.888 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:2
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.109) 0:01:09.998 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:9
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.041) 0:01:10.039 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:16
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.038) 0:01:10.078 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get group information] ****************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:28
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.049) 0:01:10.128 *********
ok: [managed_node1] => { "ansible_facts": { "getent_group": { "root": [ "x", "0", "" ] } }, "changed": false }
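[NOTE]: The user/group tasks above resolve __podman_user ("root") to a primary group via getent facts. Done by hand with the getent module, the lookup would look roughly like this (a sketch, not the role's exact task):

    - name: Look up the group record for the podman user, as the role does
      ansible.builtin.getent:
        database: group
        key: root
      # populates ansible_facts.getent_group, e.g. {"root": ["x", "0", ""]} as seen above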
TASK [fedora.linux_system_roles.podman : Set group name] ***********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:35
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.375) 0:01:10.503 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_group_name": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 27 July 2024 12:38:31 -0400 (0:00:00.048) 0:01:10.551 *********
ok: [managed_node1] => { "changed": false, "stat": { "atime": 1722097764.7689178, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "fc3588658699d4eb56c9a9cdf6678e83337ea7ee", "ctime": 1722097725.4462144, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6544612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1719992328.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15680, "uid": 0, "version": "648322932", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check user with getsubids] ************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 27 July 2024 12:38:32 -0400 (0:00:00.377) 0:01:10.928 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check group with getsubids] ***********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 27 July 2024 12:38:32 -0400 (0:00:00.036) 0:01:10.964 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 27 July 2024 12:38:32 -0400 (0:00:00.034) 0:01:10.999 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:74
Saturday 27 July 2024 12:38:32 -0400 (0:00:00.033) 0:01:11.033 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }
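[NOTE]: The getsubids checks above are skipped because this run is rootful (__podman_user is "root"). For a rootless user the role verifies subordinate ID ranges instead; a hand-rolled equivalent, assuming shadow-utils provides the getsubids binary found above, might look like this ("poduser" is a placeholder, not from this log):

    - name: Confirm subordinate UID ranges exist for a rootless podman user
      ansible.builtin.command:
        cmd: getsubids poduser
      register: subids_result
      changed_when: false
      # getsubids exits non-zero when the user has no subuid range configured
      failed_when: subids_result.rc != 0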
"skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:84 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.033) 0:01:11.100 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:94 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.034) 0:01:11.134 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if group not in subgid file] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:101 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.033) 0:01:11.168 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:62 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.032) 0:01:11.201 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-demo-network.service", "__podman_systemd_scope": "system", "__podman_user_home_dir": "/root", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.061) 0:01:11.263 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:77 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.077) 0:01:11.340 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:87 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.034) 0:01:11.374 ********* ok: [managed_node1] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-demo.network", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:103 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.083) 0:01:11.457 ********* ok: [managed_node1] => { 
"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:110 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.043) 0:01:11.501 ********* included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed_node1 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.084) 0:01:11.586 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 27 July 2024 12:38:32 -0400 (0:00:00.034) 0:01:11.620 ********* ok: [managed_node1] => { "changed": false, "failed_when_result": false } MSG: Could not find the requested service quadlet-demo-network.service: host TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:33 Saturday 27 July 2024 12:38:33 -0400 (0:00:00.529) 0:01:12.150 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:38 Saturday 27 July 2024 12:38:33 -0400 (0:00:00.372) 0:01:12.522 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_quadlet_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:42 Saturday 27 July 2024 12:38:33 -0400 (0:00:00.034) 0:01:12.557 ********* ok: [managed_node1] => { "changed": false, "path": "/etc/containers/systemd/quadlet-demo.network", "state": "absent" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:48 Saturday 27 July 2024 12:38:34 -0400 (0:00:00.371) 0:01:12.929 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_file_removed is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:58 Saturday 27 July 2024 12:38:34 -0400 (0:00:00.036) 0:01:12.966 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: 
TASK [fedora.linux_system_roles.podman : Remove volumes] ***********************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:95
Saturday 27 July 2024 12:38:34 -0400 (0:00:00.036) 0:01:13.002 *********
skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:112
Saturday 27 July 2024 12:38:34 -0400 (0:00:00.047) 0:01:13.049 *********
ok: [managed_node1] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false }

TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ********
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116
Saturday 27 July 2024 12:38:34 -0400 (0:00:00.035) 0:01:13.085 *********
changed: [managed_node1] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.033672", "end": "2024-07-27 12:38:34.790694", "rc": 0, "start": "2024-07-27 12:38:34.757022" }

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:127
Saturday 27 July 2024 12:38:34 -0400 (0:00:00.411) 0:01:13.496 *********
included: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed_node1

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 27 July 2024 12:38:34 -0400 (0:00:00.106) 0:01:13.603 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 27 July 2024 12:38:35 -0400 (0:00:00.035) 0:01:13.639 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 27 July 2024 12:38:35 -0400 (0:00:00.033) 0:01:13.672 *********
skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - images] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137
Saturday 27 July 2024 12:38:35 -0400 (0:00:00.035) 0:01:13.707 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.037466", "end": "2024-07-27 12:38:35.415188", "rc": 0, "start": "2024-07-27 12:38:35.377722" }
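[NOTE]: The "For testing and debugging" tasks above and below simply list whatever survived the cleanup. Collapsed into one looped task they would look roughly like this (the loop form is illustrative; the role runs them as separate tasks):

    - name: Post-cleanup listings of podman images, volumes, containers, networks
      ansible.builtin.command:
        cmd: "{{ item }}"
      loop:
        - podman images -n
        - podman volume ls -n
        - podman ps --noheading
        - podman network ls -n -q
      # listing commands never change state
      changed_when: false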
TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:146
Saturday 27 July 2024 12:38:35 -0400 (0:00:00.410) 0:01:14.118 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.035310", "end": "2024-07-27 12:38:35.830780", "rc": 0, "start": "2024-07-27 12:38:35.795470" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:155
Saturday 27 July 2024 12:38:35 -0400 (0:00:00.417) 0:01:14.535 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.035266", "end": "2024-07-27 12:38:36.249893", "rc": 0, "start": "2024-07-27 12:38:36.214627" }

TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:164
Saturday 27 July 2024 12:38:36 -0400 (0:00:00.422) 0:01:14.958 *********
ok: [managed_node1] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.036054", "end": "2024-07-27 12:38:36.678640", "rc": 0, "start": "2024-07-27 12:38:36.642586" }

STDOUT:

podman
podman-default-kube-network

TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:173
Saturday 27 July 2024 12:38:36 -0400 (0:00:00.428) 0:01:15.386 *********
ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : For testing and debugging - services] ***
task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
Saturday 27 July 2024 12:38:37 -0400 (0:00:00.420) 0:01:15.807 *********
ok: [managed_node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "certmonger.service": { "name": "certmonger.service", "source": "systemd", "state": "running", "status": "enabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" },
"cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": 
"static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": 
"man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, 
"polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "snapd.seeded.service": { "name": "snapd.seeded.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { 
"name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": 
"inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, 
"systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles.service": { "name": "systemd-tmpfiles.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@.service": { 
"name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "yppasswdd.service": { "name": "yppasswdd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypserv.service": { "name": "ypserv.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypxfrd.service": { "name": "ypxfrd.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:114 Saturday 27 July 2024 12:38:38 -0400 (0:00:01.788) 0:01:17.596 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Cancel linger] ************************ task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:143 Saturday 27 July 2024 12:38:38 -0400 (0:00:00.037) 0:01:17.634 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Handle credential files - absent] ***** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:149 Saturday 27 July 2024 12:38:39 -0400 (0:00:00.033) 0:01:17.667 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ******** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:158 Saturday 27 July 2024 12:38:39 -0400 (0:00:00.031) 0:01:17.698 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [Ensure no resources] ***************************************************** task path: /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:170 Saturday 27 July 2024 12:38:39 -0400 (0:00:00.051) 0:01:17.750 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed PLAY RECAP ********************************************************************* managed_node1 : ok=245 changed=9 unreachable=0 failed=1 skipped=236 rescued=1 ignored=0 Saturday 27 July 2024 12:38:39 -0400 (0:00:00.035) 0:01:17.786 ********* =============================================================================== fedora.linux_system_roles.podman : For testing and debugging - services --- 2.08s /tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183 fedora.linux_system_roles.podman : For testing and debugging - services --- 1.79s 
===============================================================================
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.08s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 1.79s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 1.79s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 1.78s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 1.77s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.podman : For testing and debugging - services --- 1.76s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:183
fedora.linux_system_roles.firewall : Configure firewall ----------------- 1.20s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71
fedora.linux_system_roles.certificate : Slurp the contents of the files --- 1.14s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:152
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.13s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
Gathering Facts --------------------------------------------------------- 1.13s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:3
fedora.linux_system_roles.certificate : Ensure certificate role dependencies are installed --- 1.08s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:5
fedora.linux_system_roles.certificate : Remove files -------------------- 1.07s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:181
fedora.linux_system_roles.firewall : Configure firewall ----------------- 1.04s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:71
fedora.linux_system_roles.certificate : Ensure provider packages are installed --- 0.94s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:23
fedora.linux_system_roles.firewall : Install firewalld ------------------ 0.93s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:31
fedora.linux_system_roles.podman : Gather the package facts ------------- 0.92s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.firewall : Install firewalld ------------------ 0.92s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:31
fedora.linux_system_roles.certificate : Ensure certificate requests ----- 0.86s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:101
fedora.linux_system_roles.certificate : Ensure provider service is running --- 0.76s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:90
fedora.linux_system_roles.firewall : Enable and start firewalld service --- 0.53s
/tmp/tmp.UJtmxR54QA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:28
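The six slowest entries in the summary are repeated runs of the same "For testing and debugging - services" task, which gathers the per-unit service listing dumped earlier in this log. A minimal sketch of that kind of task, assuming ansible.builtin.service_facts followed by a debug print; the role's actual task body is not shown in this log:

- name: For testing and debugging - services
  ansible.builtin.service_facts:        # populates ansible_facts.services,
                                        # keyed by unit name

- name: Show the gathered services      # each unit appears with name, source,
  ansible.builtin.debug:                # state, and status, as in the listing
    var: ansible_facts.services         # above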