
Conversation

@ben-grande
Contributor

If the disposable was preloaded, the in-progress feature check would return False; now it checks whether the qube is a preloaded disposable that completed preloading but is not running, to signal an improper shutdown.

When calling cleanup on a qube that is not running, "_bare_cleanup()" was skipped by "cleanup()"; now this is handled.

Something triggers "domain-remove-from-disk" on qubesd start for unnamed disposables; I didn't find the origin, but it is now handled by removing the preloaded disposable from the list.

Fixes: QubesOS/qubes-issues#10326
For: QubesOS/qubes-issues#1512
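To illustrate the first and third points, here is a minimal sketch, not the merged patch: the feature names ("preload-dispvm-completed", "preload-dispvm") and the helper names are assumptions used only for illustration.

# Illustrative sketch only; feature names and helpers are assumptions,
# not the actual code in qubes/vm/dispvm.py.

def looks_improperly_shut_down(dispvm) -> bool:
    """A preloaded disposable that finished preloading but is no longer
    running points at an improper shutdown (e.g. a dirty qubesd restart)."""
    completed = bool(dispvm.features.get("preload-dispvm-completed", False))
    return completed and not dispvm.is_running()

def drop_from_preload_list(appvm, dispvm) -> None:
    """Remove a stale preloaded disposable from its template's preload list."""
    preload = appvm.features.get("preload-dispvm", "").split()
    if dispvm.name in preload:
        preload.remove(dispvm.name)
        appvm.features["preload-dispvm"] = " ".join(preload)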


How can I simulate stopping qubesd in integration tests? I tried a unit test firing domain-load, but I couldn't get the magic right.

@codecov

codecov bot commented Oct 16, 2025

Codecov Report

❌ Patch coverage is 50.00000% with 8 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.38%. Comparing base (fce8bad) to head (3e842d7).
⚠️ Report is 13 commits behind head on main.

Files with missing lines Patch % Lines
qubes/vm/dispvm.py 50.00% 8 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #742      +/-   ##
==========================================
- Coverage   70.40%   70.38%   -0.02%     
==========================================
  Files          61       61              
  Lines       13682    13691       +9     
==========================================
+ Hits         9633     9637       +4     
- Misses       4049     4054       +5     
Flag Coverage Δ
unittests 70.38% <50.00%> (-0.02%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.


@qubesos-bot

qubesos-bot commented Oct 22, 2025

OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025102219-4.3&flavor=pull-requests

Test run included the following:

New failures, excluding unstable

Compared to: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025081011-4.3&flavor=update

  • system_tests_gui_tools

    • qubesmanager_vmsettings: unnamed test (unknown)
    • qubesmanager_vmsettings: Failed (test died)
      # Test died: no candidate needle with tag(s) 'vm-settings-devices-s...
  • system_tests_guivm_vnc_gui_interactive

    • gui_filecopy: unnamed test (unknown)
    • gui_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'files-test-file' matc...
  • system_tests_qwt_win10@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...
  • system_tests_qwt_win10_seamless@hw13

    • windows_clipboard_and_filecopy: unnamed test (unknown)
    • windows_clipboard_and_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'windows-Edge-address-...
  • system_tests_qwt_win11@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/iDVvW-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_gui_tools@hw7

    • qubesmanager_vmsettings: unnamed test (unknown)
    • qubesmanager_vmsettings: Failed (test died)
      # Test died: no candidate needle with tag(s) 'vm-settings-devices-s...

Failed tests

12 failures
  • system_tests_gui_tools

    • qubesmanager_vmsettings: unnamed test (unknown)
    • qubesmanager_vmsettings: Failed (test died)
      # Test died: no candidate needle with tag(s) 'vm-settings-devices-s...
  • system_tests_extra

    • TC_00_QVCTest_whonix-workstation-17: test_010_screenshare (failure)
      AssertionError: 1 != 0 : Timeout waiting for /dev/video0 in test-in...
  • system_tests_guivm_vnc_gui_interactive

    • gui_filecopy: unnamed test (unknown)
    • gui_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'files-test-file' matc...
  • system_tests_qwt_win10@hw13

    • windows_install: Failed (test died)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...
  • system_tests_qwt_win10_seamless@hw13

    • windows_clipboard_and_filecopy: unnamed test (unknown)
    • windows_clipboard_and_filecopy: Failed (test died)
      # Test died: no candidate needle with tag(s) 'windows-Edge-address-...
  • system_tests_qwt_win11@hw13

    • windows_install: wait_serial (wait serial expected)
      # wait_serial expected: qr/iDVvW-\d+-/...

    • windows_install: Failed (test died + timed out)
      # Test died: command 'script -e -c 'bash -x /usr/bin/qvm-create-win...

  • system_tests_gui_tools@hw7

    • qubesmanager_vmsettings: unnamed test (unknown)
    • qubesmanager_vmsettings: Failed (test died)
      # Test died: no candidate needle with tag(s) 'vm-settings-devices-s...

Fixed failures

Compared to: https://openqa.qubes-os.org/tests/149225#dependencies

84 fixed
  • system_tests_kde_gui_interactive

    • gui_keyboard_layout: wait_serial (wait serial expected)
      # wait_serial expected: "echo -e '[Layout]\nLayoutList=us,de' | sud...

    • gui_keyboard_layout: Failed (test died)
      # Test died: command 'test "$(cd ~user;ls e1*)" = "$(qvm-run -p wor...

  • system_tests_audio

    • system_tests: Fail (unknown)
      Tests qubes.tests.integ.audio failed (exit code 1), details reporte...

    • system_tests: Failed (test died)
      # Test died: Some tests failed at qubesos/tests/system_tests.pm lin...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_223_audio_play_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 120 seco...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_224_audio_rec_muted_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 120 seco...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_225_audio_rec_unmuted_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 120 seco...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_252_audio_playback_audiovm_switch_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 120 seco...

  • system_tests_dispvm_perf@hw7

  • system_tests_guivm_gpu_gui_interactive@hw13

    • guivm_startup: wait_serial (wait serial expected)
      # wait_serial expected: qr/lEcbc-\d+-/...

    • guivm_startup: Failed (test died + timed out)
      # Test died: command '! qvm-check sys-whonix || time qvm-start sys-...

  • system_tests_basic_vm_qrexec_gui_ext4

    • system_tests: Fail (unknown)
      Tests qubes.tests.integ.vm_qrexec_gui failed (exit code 1), details...

    • system_tests: Failed (test died)
      # Test died: Some tests failed at qubesos/tests/system_tests.pm lin...

    • TC_20_NonAudio_whonix-gateway-17-pool: test_012_qubes_desktop_run (error + cleanup)
      raise TimeoutError from exc_val... TimeoutError

  • system_tests_audio@hw1

    • system_tests: Fail (unknown)
      Tests qubes.tests.integ.audio failed (exit code 1), details reporte...

    • system_tests: Failed (test died)
      # Test died: Some tests failed at qubesos/tests/system_tests.pm lin...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_223_audio_play_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 60 secon...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_224_audio_rec_muted_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 60 secon...

    • TC_20_AudioVM_Pulse_whonix-workstation-17: test_252_audio_playback_audiovm_switch_hvm (error)
      qubes.exc.QubesVMError: Cannot connect to qrexec agent for 60 secon...

  • system_tests_dispvm

    • system_tests: Fail (unknown)
      Tests qubes.tests.integ.dispvm failed (exit code 1), details report...

    • TC_20_DispVM_debian-13-xfce: test_012_preload_low_mem (failure)
      ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^... AssertionError: 1 != 0

    • TC_20_DispVM_debian-13-xfce: test_013_preload_gui (error)
      raise KeyError(key)... KeyError: 'disp3723'

    • TC_20_DispVM_debian-13-xfce: test_014_preload_nogui (error + cleanup)
      raise TimeoutError from exc_val... TimeoutError

    • TC_20_DispVM_debian-13-xfce: test_015_preload_race_more (error + cleanup)
      raise KeyError(key)... KeyError: 'disp1187'

    • TC_20_DispVM_debian-13-xfce: test_016_preload_race_less (failure + cleanup)
      ^^^^^^^^^^^^^^^^^^^^^^... AssertionError

    • TC_20_DispVM_debian-13-xfce: test_017_preload_autostart (error)
      raise KeyError(key)... KeyError: 'disp7317'

    • TC_20_DispVM_debian-13-xfce: test_018_preload_global (error)
      raise KeyError(key)... KeyError: 'disp8572'

    • TC_20_DispVM_debian-13-xfce: test_019_preload_refresh (error)
      raise KeyError(key)... KeyError: 'disp6425'

    • TC_20_DispVM_fedora-42-xfce: test_012_preload_low_mem (failure)
      ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^... AssertionError: 1 != 0

    • TC_20_DispVM_whonix-workstation-17: test_012_preload_low_mem (failure)
      ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^... AssertionError: 1 != 0

Unstable tests

Performance Tests

Performance degradation:

17 performance degradations
  • fedora-42-xfce_exec-data-duplex: 74.80 🔻 ( previous job: 67.92, degradation: 110.14%)
  • whonix-workstation-17_exec: 8.59 🔻 ( previous job: 7.57, degradation: 113.52%)
  • dom0_root_seq1m_q8t1_read 3:read_bandwidth_kb: 177215.00 🔻 ( previous job: 497426.00, degradation: 35.63%)
  • dom0_root_seq1m_q8t1_write 3:write_bandwidth_kb: 188190.00 🔻 ( previous job: 265260.00, degradation: 70.95%)
  • dom0_root_seq1m_q1t1_read 3:read_bandwidth_kb: 120482.00 🔻 ( previous job: 431512.00, degradation: 27.92%)
  • dom0_root_seq1m_q1t1_write 3:write_bandwidth_kb: 74418.00 🔻 ( previous job: 196254.00, degradation: 37.92%)
  • dom0_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 5803.00 🔻 ( previous job: 23940.00, degradation: 24.24%)
  • fedora-42-xfce_root_seq1m_q8t1_write 3:write_bandwidth_kb: 99799.00 🔻 ( previous job: 140215.00, degradation: 71.18%)
  • fedora-42-xfce_root_seq1m_q1t1_write 3:write_bandwidth_kb: 37850.00 🔻 ( previous job: 47575.00, degradation: 79.56%)
  • fedora-42-xfce_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 1399.00 🔻 ( previous job: 3020.00, degradation: 46.32%)
  • fedora-42-xfce_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 558.00 🔻 ( previous job: 1368.00, degradation: 40.79%)
  • fedora-42-xfce_private_seq1m_q1t1_write 3:write_bandwidth_kb: 48450.00 🔻 ( previous job: 79539.00, degradation: 60.91%)
  • fedora-42-xfce_private_rnd4k_q32t1_write 3:write_bandwidth_kb: 1529.00 🔻 ( previous job: 3765.00, degradation: 40.61%)
  • fedora-42-xfce_private_rnd4k_q1t1_write 3:write_bandwidth_kb: 340.00 🔻 ( previous job: 1251.00, degradation: 27.18%)
  • fedora-42-xfce_volatile_seq1m_q8t1_write 3:write_bandwidth_kb: 86646.00 🔻 ( previous job: 157382.00, degradation: 55.05%)
  • fedora-42-xfce_volatile_rnd4k_q32t1_write 3:write_bandwidth_kb: 3635.00 🔻 ( previous job: 4098.00, degradation: 88.70%)
  • fedora-42-xfce_volatile_rnd4k_q1t1_write 3:write_bandwidth_kb: 1438.00 🔻 ( previous job: 2384.00, degradation: 60.32%)

Remaining performance tests:

163 tests
  • debian-13-xfce_vm-dispvm (mean:6.69): 80.28
  • debian-13-xfce_vm-dispvm-gui (mean:7.675): 92.10
  • debian-13-xfce_vm-dispvm-concurrent (mean:3.181): 38.18
  • debian-13-xfce_vm-dispvm-gui-concurrent (mean:3.958): 47.49
  • debian-13-xfce_dom0-dispvm (mean:7.074): 84.89
  • debian-13-xfce_dom0-dispvm-gui (mean:8.196): 98.35
  • debian-13-xfce_dom0-dispvm-concurrent (mean:3.225): 38.69
  • debian-13-xfce_dom0-dispvm-gui-concurrent (mean:4.077): 48.93
  • debian-13-xfce_vm-dispvm-preload (mean:2.776): 33.31
  • debian-13-xfce_vm-dispvm-preload-gui (mean:4.061): 48.73
  • debian-13-xfce_vm-dispvm-preload-concurrent (mean:2.631): 31.58
  • debian-13-xfce_vm-dispvm-preload-gui-concurrent (mean:3.44): 41.28
  • debian-13-xfce_dom0-dispvm-preload (mean:3.502): 42.02
  • debian-13-xfce_dom0-dispvm-preload-gui (mean:10.669): 128.03
  • debian-13-xfce_dom0-dispvm-preload-concurrent (mean:3.159): 37.90
  • debian-13-xfce_dom0-dispvm-preload-gui-concurrent (mean:3.804): 45.65
  • debian-13-xfce_dom0-dispvm-api (mean:7.148): 85.78
  • debian-13-xfce_dom0-dispvm-gui-api (mean:8.267): 99.20
  • debian-13-xfce_dom0-dispvm-concurrent-api (mean:3.455): 41.46
  • debian-13-xfce_dom0-dispvm-gui-concurrent-api (mean:4.106): 49.28
  • debian-13-xfce_dom0-dispvm-preload-less-less-api (mean:3.827): 45.93
  • debian-13-xfce_dom0-dispvm-preload-less-api (mean:3.882): 46.58
  • debian-13-xfce_dom0-dispvm-preload-api (mean:3.482): 41.78
  • debian-13-xfce_dom0-dispvm-preload-more-api (mean:3.415): 40.98
  • debian-13-xfce_dom0-dispvm-preload-more-more-api (mean:3.749): 44.99
  • debian-13-xfce_dom0-dispvm-preload-gui-api (mean:4.484): 53.81
  • debian-13-xfce_dom0-dispvm-preload-concurrent-api (mean:3.11): 37.33
  • debian-13-xfce_dom0-dispvm-preload-gui-concurrent-api (mean:3.873): 46.47
  • debian-13-xfce_vm-vm (mean:0.039): 0.47
  • debian-13-xfce_vm-vm-gui (mean:0.031): 0.37
  • debian-13-xfce_vm-vm-concurrent (mean:0.019): 0.23
  • debian-13-xfce_vm-vm-gui-concurrent (mean:0.02): 0.23
  • debian-13-xfce_dom0-vm-api (mean:0.042): 0.51
  • debian-13-xfce_dom0-vm-gui-api (mean:0.052): 0.63
  • debian-13-xfce_dom0-vm-concurrent-api (mean:0.024): 0.29
  • debian-13-xfce_dom0-vm-gui-concurrent-api (mean:0.029): 0.34
  • fedora-42-xfce_vm-dispvm (mean:7.203): 86.43
  • fedora-42-xfce_vm-dispvm-gui (mean:8.162): 97.94
  • fedora-42-xfce_vm-dispvm-concurrent (mean:3.589): 43.06
  • fedora-42-xfce_vm-dispvm-gui-concurrent (mean:4.287): 51.44
  • fedora-42-xfce_dom0-dispvm (mean:7.685): 92.22
  • fedora-42-xfce_dom0-dispvm-gui (mean:8.868): 106.42
  • fedora-42-xfce_dom0-dispvm-concurrent (mean:3.908): 46.90
  • fedora-42-xfce_dom0-dispvm-gui-concurrent (mean:4.365): 52.38
  • fedora-42-xfce_vm-dispvm-preload (mean:3.246): 38.95
  • fedora-42-xfce_vm-dispvm-preload-gui (mean:6.702): 80.42
  • fedora-42-xfce_vm-dispvm-preload-concurrent (mean:2.953): 35.44
  • fedora-42-xfce_vm-dispvm-preload-gui-concurrent (mean:3.821): 45.85
  • fedora-42-xfce_dom0-dispvm-preload (mean:3.88): 46.55
  • fedora-42-xfce_dom0-dispvm-preload-gui (mean:4.923): 59.07
  • fedora-42-xfce_dom0-dispvm-preload-concurrent (mean:3.477): 41.73
  • fedora-42-xfce_dom0-dispvm-preload-gui-concurrent (mean:4.074): 48.88
  • fedora-42-xfce_dom0-dispvm-api (mean:7.801): 93.61
  • fedora-42-xfce_dom0-dispvm-gui-api (mean:9.029): 108.34
  • fedora-42-xfce_dom0-dispvm-concurrent-api (mean:3.771): 45.25
  • fedora-42-xfce_dom0-dispvm-gui-concurrent-api (mean:4.464): 53.56
  • fedora-42-xfce_dom0-dispvm-preload-less-less-api (mean:4.292): 51.50
  • fedora-42-xfce_dom0-dispvm-preload-less-api (mean:4.247): 50.96
  • fedora-42-xfce_dom0-dispvm-preload-api (mean:4.119): 49.43
  • fedora-42-xfce_dom0-dispvm-preload-more-api (mean:3.896): 46.75
  • fedora-42-xfce_dom0-dispvm-preload-more-more-api (mean:4.037): 48.44
  • fedora-42-xfce_dom0-dispvm-preload-gui-api (mean:5.069): 60.83
  • fedora-42-xfce_dom0-dispvm-preload-concurrent-api (mean:3.419): 41.03
  • fedora-42-xfce_dom0-dispvm-preload-gui-concurrent-api (mean:4.36): 52.32
  • fedora-42-xfce_vm-vm (mean:0.033): 0.39
  • fedora-42-xfce_vm-vm-gui (mean:0.025): 0.30
  • fedora-42-xfce_vm-vm-concurrent (mean:0.024): 0.29
  • fedora-42-xfce_vm-vm-gui-concurrent (mean:0.02): 0.24
  • fedora-42-xfce_dom0-vm-api (mean:0.043): 0.52
  • fedora-42-xfce_dom0-vm-gui-api (mean:0.045): 0.54
  • fedora-42-xfce_dom0-vm-concurrent-api (mean:0.028): 0.34
  • fedora-42-xfce_dom0-vm-gui-concurrent-api (mean:0.03): 0.36
  • whonix-workstation-17_vm-dispvm (mean:7.797): 93.57
  • whonix-workstation-17_vm-dispvm-gui (mean:8.998): 107.97
  • whonix-workstation-17_vm-dispvm-concurrent (mean:4.547): 54.56
  • whonix-workstation-17_vm-dispvm-gui-concurrent (mean:4.902): 58.83
  • whonix-workstation-17_dom0-dispvm (mean:8.357): 100.28
  • whonix-workstation-17_dom0-dispvm-gui (mean:9.374): 112.49
  • whonix-workstation-17_dom0-dispvm-concurrent (mean:4.38): 52.56
  • whonix-workstation-17_dom0-dispvm-gui-concurrent (mean:5.32): 63.84
  • whonix-workstation-17_vm-dispvm-preload (mean:3.393): 40.72
  • whonix-workstation-17_vm-dispvm-preload-gui (mean:4.762): 57.15
  • whonix-workstation-17_vm-dispvm-preload-concurrent (mean:3.339): 40.07
  • whonix-workstation-17_vm-dispvm-preload-gui-concurrent (mean:4.216): 50.59
  • whonix-workstation-17_dom0-dispvm-preload (mean:4.447): 53.36
  • whonix-workstation-17_dom0-dispvm-preload-gui (mean:5.444): 65.33
  • whonix-workstation-17_dom0-dispvm-preload-concurrent (mean:3.746): 44.96
  • whonix-workstation-17_dom0-dispvm-preload-gui-concurrent (mean:4.483): 53.80
  • whonix-workstation-17_dom0-dispvm-api (mean:8.51): 102.12
  • whonix-workstation-17_dom0-dispvm-gui-api (mean:9.737): 116.85
  • whonix-workstation-17_dom0-dispvm-concurrent-api (mean:4.062): 48.74
  • whonix-workstation-17_dom0-dispvm-gui-concurrent-api (mean:4.597): 55.17
  • whonix-workstation-17_dom0-dispvm-preload-less-less-api (mean:4.57): 54.84
  • whonix-workstation-17_dom0-dispvm-preload-less-api (mean:5.064): 60.77
  • whonix-workstation-17_dom0-dispvm-preload-api (mean:4.264): 51.16
  • whonix-workstation-17_dom0-dispvm-preload-more-api (mean:4.408): 52.89
  • whonix-workstation-17_dom0-dispvm-preload-more-more-api (mean:4.262): 51.15
  • whonix-workstation-17_dom0-dispvm-preload-gui-api (mean:5.348): 64.17
  • whonix-workstation-17_dom0-dispvm-preload-concurrent-api (mean:3.733): 44.79
  • whonix-workstation-17_dom0-dispvm-preload-gui-concurrent-api (mean:4.5): 54.00
  • whonix-workstation-17_vm-vm (mean:0.024): 0.29
  • whonix-workstation-17_vm-vm-gui (mean:0.048): 0.58
  • whonix-workstation-17_vm-vm-concurrent (mean:0.015): 0.18
  • whonix-workstation-17_vm-vm-gui-concurrent (mean:0.03): 0.37
  • whonix-workstation-17_dom0-vm-api (mean:0.037): 0.45
  • whonix-workstation-17_dom0-vm-gui-api (mean:0.039): 0.47
  • whonix-workstation-17_dom0-vm-concurrent-api (mean:0.031): 0.37
  • whonix-workstation-17_dom0-vm-gui-concurrent-api (mean:0.025): 0.31
  • debian-13-xfce_exec: 8.04 🟢 ( previous job: 8.36, improvement: 96.18%)
  • debian-13-xfce_exec-root: 27.04 🟢 ( previous job: 27.36, improvement: 98.82%)
  • debian-13-xfce_socket: 8.08 🟢 ( previous job: 8.57, improvement: 94.21%)
  • debian-13-xfce_socket-root: 8.71 🔻 ( previous job: 8.26, degradation: 105.53%)
  • debian-13-xfce_exec-data-simplex: 67.47 🟢 ( previous job: 72.43, improvement: 93.15%)
  • debian-13-xfce_exec-data-duplex: 67.40 🟢 ( previous job: 76.65, improvement: 87.93%)
  • debian-13-xfce_exec-data-duplex-root: 80.77 🟢 ( previous job: 91.79, improvement: 88.00%)
  • debian-13-xfce_socket-data-duplex: 131.73 🟢 ( previous job: 133.45, improvement: 98.71%)
  • fedora-42-xfce_exec: 9.16 🔻 ( previous job: 9.06, degradation: 101.16%)
  • fedora-42-xfce_exec-root: 59.71 🔻 ( previous job: 58.19, degradation: 102.62%)
  • fedora-42-xfce_socket: 8.33 🟢 ( previous job: 8.48, improvement: 98.22%)
  • fedora-42-xfce_socket-root: 8.01 🟢 ( previous job: 8.18, improvement: 97.88%)
  • fedora-42-xfce_exec-data-simplex: 68.24 🟢 ( previous job: 78.48, improvement: 86.94%)
  • fedora-42-xfce_exec-data-duplex-root: 104.92 🔻 ( previous job: 96.36, degradation: 108.88%)
  • fedora-42-xfce_socket-data-duplex: 143.26 🔻 ( previous job: 142.58, degradation: 100.48%)
  • whonix-gateway-17_exec: 7.48 🟢 ( previous job: 8.12, improvement: 92.19%)
  • whonix-gateway-17_exec-root: 39.06 🟢 ( previous job: 41.05, improvement: 95.15%)
  • whonix-gateway-17_socket: 8.02 🟢 ( previous job: 8.52, improvement: 94.03%)
  • whonix-gateway-17_socket-root: 7.13 🟢 ( previous job: 8.12, improvement: 87.84%)
  • whonix-gateway-17_exec-data-simplex: 69.19 🟢 ( previous job: 83.60, improvement: 82.77%)
  • whonix-gateway-17_exec-data-duplex: 73.35 🔻 ( previous job: 68.38, degradation: 107.26%)
  • whonix-gateway-17_exec-data-duplex-root: 89.69 🟢 ( previous job: 99.37, improvement: 90.25%)
  • whonix-gateway-17_socket-data-duplex: 150.59 🟢 ( previous job: 167.12, improvement: 90.11%)
  • whonix-workstation-17_exec-root: 54.57 🟢 ( previous job: 56.76, improvement: 96.15%)
  • whonix-workstation-17_socket: 8.72 🔻 ( previous job: 8.59, degradation: 101.56%)
  • whonix-workstation-17_socket-root: 8.78 🟢 ( previous job: 8.89, improvement: 98.79%)
  • whonix-workstation-17_exec-data-simplex: 72.49 🔻 ( previous job: 66.80, degradation: 108.51%)
  • whonix-workstation-17_exec-data-duplex: 72.97 🟢 ( previous job: 74.50, improvement: 97.94%)
  • whonix-workstation-17_exec-data-duplex-root: 92.74 🟢 ( previous job: 102.34, improvement: 90.62%)
  • whonix-workstation-17_socket-data-duplex: 146.49 🟢 ( previous job: 147.97, improvement: 99.00%)
  • dom0_root_rnd4k_q32t1_write 3:write_bandwidth_kb: 6286.00 🟢 ( previous job: 2446.00, improvement: 256.99%)
  • dom0_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 11909.00 🟢 ( previous job: 5874.00, improvement: 202.74%)
  • dom0_root_rnd4k_q1t1_write 3:write_bandwidth_kb: 1062.00 🟢 ( previous job: 29.00, improvement: 3662.07%)
  • dom0_varlibqubes_seq1m_q8t1_read 3:read_bandwidth_kb: 284939.00 🔻 ( previous job: 292489.00, degradation: 97.42%)
  • dom0_varlibqubes_seq1m_q8t1_write 3:write_bandwidth_kb: 107727.00 🔻 ( previous job: 110817.00, degradation: 97.21%)
  • dom0_varlibqubes_seq1m_q1t1_read 3:read_bandwidth_kb: 418760.00 🟢 ( previous job: 137802.00, improvement: 303.89%)
  • dom0_varlibqubes_seq1m_q1t1_write 3:write_bandwidth_kb: 198582.00 🟢 ( previous job: 121719.00, improvement: 163.15%)
  • dom0_varlibqubes_rnd4k_q32t1_read 3:read_bandwidth_kb: 106661.00 🟢 ( previous job: 103932.00, improvement: 102.63%)
  • dom0_varlibqubes_rnd4k_q32t1_write 3:write_bandwidth_kb: 6531.00 🟢 ( previous job: 6356.00, improvement: 102.75%)
  • dom0_varlibqubes_rnd4k_q1t1_read 3:read_bandwidth_kb: 7570.00 🔻 ( previous job: 7695.00, degradation: 98.38%)
  • dom0_varlibqubes_rnd4k_q1t1_write 3:write_bandwidth_kb: 4089.00 🟢 ( previous job: 3925.00, improvement: 104.18%)
  • fedora-42-xfce_root_seq1m_q8t1_read 3:read_bandwidth_kb: 403608.00 🟢 ( previous job: 366891.00, improvement: 110.01%)
  • fedora-42-xfce_root_seq1m_q1t1_read 3:read_bandwidth_kb: 308404.00 🟢 ( previous job: 299764.00, improvement: 102.88%)
  • fedora-42-xfce_root_rnd4k_q32t1_read 3:read_bandwidth_kb: 87506.00 🟢 ( previous job: 86001.00, improvement: 101.75%)
  • fedora-42-xfce_root_rnd4k_q1t1_read 3:read_bandwidth_kb: 8721.00 🔻 ( previous job: 9042.00, degradation: 96.45%)
  • fedora-42-xfce_private_seq1m_q8t1_read 3:read_bandwidth_kb: 367019.00 🔻 ( previous job: 387500.00, degradation: 94.71%)
  • fedora-42-xfce_private_seq1m_q8t1_write 3:write_bandwidth_kb: 129084.00 🔻 ( previous job: 136640.00, degradation: 94.47%)
  • fedora-42-xfce_private_seq1m_q1t1_read 3:read_bandwidth_kb: 320469.00 🔻 ( previous job: 325139.00, degradation: 98.56%)
  • fedora-42-xfce_private_rnd4k_q32t1_read 3:read_bandwidth_kb: 97952.00 🟢 ( previous job: 87396.00, improvement: 112.08%)
  • fedora-42-xfce_private_rnd4k_q1t1_read 3:read_bandwidth_kb: 8383.00 🔻 ( previous job: 8992.00, degradation: 93.23%)
  • fedora-42-xfce_volatile_seq1m_q8t1_read 3:read_bandwidth_kb: 359717.00 🔻 ( previous job: 383531.00, degradation: 93.79%)
  • fedora-42-xfce_volatile_seq1m_q1t1_read 3:read_bandwidth_kb: 297721.00 🟢 ( previous job: 293225.00, improvement: 101.53%)
  • fedora-42-xfce_volatile_seq1m_q1t1_write 3:write_bandwidth_kb: 89150.00 🟢 ( previous job: 64217.00, improvement: 138.83%)
  • fedora-42-xfce_volatile_rnd4k_q32t1_read 3:read_bandwidth_kb: 88228.00 🟢 ( previous job: 87141.00, improvement: 101.25%)
  • fedora-42-xfce_volatile_rnd4k_q1t1_read 3:read_bandwidth_kb: 8967.00 🟢 ( previous job: 8804.00, improvement: 101.85%)

Comment on lines 652 to 654
:param bool force: Force cleanup even if enabled, else it might \
be handled by ``domain-shutdown``.
Member


Leftover from an earlier version? This function doesn't have a force parameter.

Contributor Author


Yes, the earlier PR revision had remnants... It was dealt with by the running check.

If the disposable was preloaded, the in-progress feature check would
return False; now it checks whether the qube is a preloaded disposable
that completed preloading but is not running, to signal an improper
shutdown.

When calling cleanup on a qube that is not running, "_bare_cleanup()"
was skipped by "cleanup()"; now this is handled.

Something triggers "domain-remove-from-disk" on qubesd start for
unnamed disposables; I didn't find the origin, but it is now handled by
removing the preloaded disposable from the list.

Fixes: QubesOS/qubes-issues#10326
For: QubesOS/qubes-issues#1512
@ben-grande ben-grande force-pushed the preload-dirty-shutdown branch from 5b6d695 to 3e842d7 on October 23, 2025 08:02
@marmarek marmarek merged commit a5d4bf1 into QubesOS:main Oct 23, 2025
3 of 6 checks passed
running = False
# Full cleanup will be done automatically if event 'domain-shutdown' is
# triggered and "auto_cleanup=True".
if not running or not self.auto_cleanup:
Contributor Author


Some strange issue appears when using "not running" here.

The "not running" statement was intended to deal with a halted qube (a remnant from a previous boot). I guess it is better to accept a force parameter rather than defaulting to the running state.
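A rough sketch of the force parameter idea; the method names, attributes and flow follow this discussion only, not necessarily the merged code.

# Sketch only: names and flow are assumptions based on this thread.
async def cleanup(self, force: bool = False) -> None:
    """Remove the disposable.

    :param bool force: run the bare cleanup even when the qube is not
        running (e.g. a halted remnant from a previous boot), instead of
        assuming the 'domain-shutdown' handler already took care of it.
    """
    if self.is_running():
        await self.kill()  # 'domain-shutdown' fires and may clean up for us
    if force or not self.auto_cleanup:
        await self._bare_cleanup()  # remove from disk and from the collection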

Contributor Author


But that still doesn't explain the issue... was cleanup called twice?

Contributor Author


A disposable kept around after the test has this in /var/log/qubes/qrexec.dispX.log:

2025-10-31 16:27:02.254 qrexec-daemon[2150537]: qrexec-daemon.c:898:parse_policy_response: qrexec-policy-daemon didn't return any data
WARNING:root:warning: !compat-4.0 directive in file /etc/qubes/policy.d/35-compat.policy line 16 is transitional and will be deprecated
Traceback (most recent call last):
  File "/usr/bin/qrexec-policy-exec", line 5, in <module>
    sys.exit(main())
             ~~~~^^
  File "/usr/lib/python3.13/site-packages/qrexec/tools/qrexec_policy_exec.py", line 331, in main
    result = get_result(args)
  File "/usr/lib/python3.13/site-packages/qrexec/tools/qrexec_policy_exec.py", line 275, in get_result
    result_str = asyncio.run(
        handle_request(
    ...<7 lines>...
        )
    )
  File "/usr/lib64/python3.13/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ~~~~~~~~~~^^^^^^
  File "/usr/lib64/python3.13/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
  File "/usr/lib64/python3.13/asyncio/base_events.py", line 725, in run_until_complete
    return future.result()
           ~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/qrexec/tools/qrexec_policy_exec.py", line 363, in handle_request
    system_info = utils.get_system_info()
  File "/usr/lib/python3.13/site-packages/qrexec/utils.py", line 164, in get_system_info
    system_info = qubesd_call("dom0", "internal.GetSystemInfo")
  File "/usr/lib/python3.13/site-packages/qrexec/utils.py", line 98, in qubesd_call
    client_socket.connect(socket_path)
    ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory
2025-10-31 16:27:02.370 qrexec-daemon[2150537]: qrexec-daemon.c:1155:connect_daemon_socket: qrexec-policy-exec failed: 1

Member


The messages in the qrexec log are kind of expected, as qubesd is stopped between tests, so qrexec policy can't be evaluated at that time. But the unusual part is that the dispvm shouldn't even be running at that time - all the preloaded disposables should be cleaned up by then. Maybe something triggers preloading during the test cleanup stage, and it remains running?

Contributor Author


I think I understand now. The tests are not deleting everything on tearDown; some qubes are removed in qubes/tests/__init__.py with kill(). The correct way to deal with a disposable is to use cleanup() to make sure everything is handled.
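For example, a tearDown could do something along these lines. This is only a sketch: the dispid/auto_cleanup checks and the loop helper are assumptions about the test base class, not the actual test code.

def tearDown(self):
    # Explicitly clean up disposables the test created instead of leaving
    # them to the generic kill() path in qubes/tests/__init__.py.
    for vm in list(self.app.domains):
        if getattr(vm, "dispid", None) is not None and vm.auto_cleanup:
            # cleanup() kills the qube if needed and awaits the bare
            # cleanup, so nothing is left half-removed between tests.
            self.loop.run_until_complete(vm.cleanup())
    super().tearDown()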

Member


I see what you mean - even though test cleanup does remove those VMs later (see _remove_vm_qubes called from remove_vms), dispvms also do that in the domain-shutdown handler, so a dispvm basically gets removed twice.

But something is still missing here: kill will remove the disposable (via the domain-shutdown handler, which is finalized before kill returns), so it will be fully cleaned up at this stage. _remove_vm_qubes will hit a bunch of errors during cleanup since the qube doesn't exist anymore (they are handled with except: pass), but the "logical volume in use" error looks like the VM is still running at this point, not like it was already removed. Similarly, domain-unpaused looks like a new preloaded qube got started in the meantime.

Contributor Author


I explained in detail in the commit how I understood the issue.

ben-grande added a commit to ben-grande/qubes-core-admin that referenced this pull request Nov 3, 2025
If it is not done in the test, it will be handled by
"qubes/tests/__init__.py", which will attempt to kill the domain. If the
preloaded disposable was still starting, exceptions will be handled by
also attempting to kill the domain. Both paths trigger
"_bare_cleanup()", sometimes indirectly via "cleanup()" or
"_auto_cleanup()", but if "_bare_cleanup()" happens on the other call,
not the one that called kill, it will not be awaited and will attempt
to remove the domain from disk while it is still running (not
completely killed).

For: QubesOS#742
For: QubesOS/qubes-issues#1512
ben-grande added a commit to ben-grande/qubes-core-admin that referenced this pull request Nov 3, 2025
If it is not done in the test, it will be handled by
"qubes/tests/__init__.py", which will attempt to kill the domain. If the
preloaded disposable was still starting, exceptions will be handled by
also attempting to kill the domain. Both paths trigger
"_bare_cleanup()", sometimes indirectly via "cleanup()" or
"_auto_cleanup()", but if "_bare_cleanup()" happens on the other call,
not the one that called kill, it will not be awaited and will attempt
to remove the domain from disk while it is still running (not
completely killed).

For: QubesOS#742
For: QubesOS/qubes-issues#1512
Fixes: QubesOS/qubes-issues#10369
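One possible way to avoid the double, un-awaited "_bare_cleanup()" described above is to funnel every caller through a single shared task. This is only a sketch under that assumption, not the actual fix, and the fragment omits the rest of the class.

import asyncio

class DispVM:  # illustrative fragment only
    _bare_cleanup_task = None

    async def _bare_cleanup_once(self):
        # Whichever path gets here first (kill handler, cleanup(),
        # _auto_cleanup()) creates the task; every other caller awaits the
        # same task, so the on-disk removal cannot race a kill still in
        # progress.
        if self._bare_cleanup_task is None:
            self._bare_cleanup_task = asyncio.ensure_future(self._bare_cleanup())
        await self._bare_cleanup_task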
ben-grande added a commit to ben-grande/openqa-tests-qubesos that referenced this pull request Nov 3, 2025
If a result had a fixed test alongside an upload failure, the upload
failure was skipped.

For: QubesOS/qubes-core-admin#742
ben-grande added a commit to ben-grande/qubes-core-admin that referenced this pull request Nov 4, 2025
Relying simply on the domain not being running resulted in storage
errors from attempting to remove it while still in use (domain still
running). I didn't identify the cause, which is unfortunate, as it now
requires an extra parameter.

For: QubesOS#742
For: QubesOS/qubes-issues#1512
Fixes: QubesOS/qubes-issues#10369

Development

Successfully merging this pull request may close these issues.

Dirty shutdown keeps preloaded disposables on list even after qubesd restart

3 participants