Remove internal qubes from being target of ask #198
Conversation
Force-pushed from 67b3ebe to fda9b2a
@deeplow do you have any opinion on this regarding your usage of "internal" qubes? With this change it's still possible to make qrexec calls to them if the policy says "allow", but they can no longer be the target of "ask".
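To make the behaviour concrete, here is a hypothetical policy fragment (the source qube "work" and the internal qube "sd-log" are made up for illustration, not taken from any real setup):

```
qubes.Filecopy  *  work  sd-log   allow   # still works: explicit allow to the internal qube
qubes.Filecopy  *  work  @anyvm   ask     # sd-log is no longer offered as a target in the ask dialog
```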
Conceptually it makes sense, since the user is not supposed to be aware of them. But I'll double check and get back to you.
I have confirmed that it's OK for internal qubes to no longer be the target of ask policies, as far as the SecureDrop Workstation's use case is concerned. Thanks!
Ok, thanks :) @ben-grande, now only black complains.
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main     #198      +/-   ##
==========================================
- Coverage   79.02%   78.93%   -0.10%
==========================================
  Files          55       55
  Lines       10502    10508       +6
==========================================
- Hits         8299     8294       -5
- Misses       2203     2214      +11

☔ View full report in Codecov by Sentry.
Force-pushed from 5bf24d5 to a85fac4
Force-pushed from a85fac4 to 95403b0
Resolved merge conflicts.
Force-pushed from 95403b0 to c9dc8a0
Force-pushed from c9dc8a0 to c717c92
OpenQA test summary

Complete test suite and dependencies: https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025052321-4.3&flavor=pull-requests

New failures, excluding unstable (compared to https://openqa.qubes-os.org/tests/overview?distri=qubesos&version=4.3&build=2025031804-4.3&flavor=update): 18 failed tests
Fixed failures (compared to https://openqa.qubes-os.org/tests/132953#dependencies): 14 fixed
Unstable tests
Performance tests: 15 performance degradations, 41 remaining performance tests
Something is not being cleaned up, and then every dispvm test after it breaks because of that. Locally, I only encountered the first issue (which doesn't break the following tests); I haven't seen the selectors issue locally, will debug.
Yes, some object leaked. We have quite strict checking for that between tests. Outside of tests it's easy to miss: usually it will "just" result in a memory leak that accumulates over time, so you notice it only after days or weeks of uptime. The test VM (intentionally) has just 8GB of memory, so issues like this are noticed earlier, even if the leak check missed it. And indeed, it looks like it happened here too.

See also the terminal log from the test run: https://openqa.qubes-os.org/tests/139759/file/system_tests-tests-qubes.tests.integ.dispvm.log

It looks like preloading remains running beyond the test end, so you may need to extend ... Quite possibly you will need to save a reference to tasks scheduled in the background (for the cases of ensure_future instead of await), at least to wait for them in the tearDown method.
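A minimal sketch of that idea, assuming a plain unittest.IsolatedAsyncioTestCase rather than the actual qubes test base class (the _schedule helper and _background_tasks attribute are made up for illustration):

```python
import asyncio
import unittest


class PreloadTestCase(unittest.IsolatedAsyncioTestCase):
    """Keep references to background tasks so tearDown can wait for them."""

    def setUp(self):
        super().setUp()
        self._background_tasks = []

    def _schedule(self, coro):
        # Call this from within an async test. Instead of a bare
        # ensure_future(), remember the task so it cannot silently
        # outlive the test.
        task = asyncio.ensure_future(coro)
        self._background_tasks.append(task)
        return task

    async def asyncTearDown(self):
        # Cancel anything still running and wait for every task to finish
        # before the between-test leak check runs.
        for task in self._background_tasks:
            if not task.done():
                task.cancel()
        await asyncio.gather(*self._background_tasks, return_exceptions=True)
        await super().asyncTearDown()
```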
Makes sense. Preloading by setting the feature is a future, so I will make sure to wait until every domain that was preloaded is no longer in the VM collection.
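Roughly what I have in mind, as a sketch only (wait_preloaded_gone and the app.domains membership check are illustrative, not the real test-suite API):

```python
import asyncio


async def wait_preloaded_gone(app, preloaded_names, timeout=60.0):
    """Poll until none of the given preloaded dispvm names remain in app.domains."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while any(name in app.domains for name in preloaded_names):
        if loop.time() > deadline:
            leftover = [n for n in preloaded_names if n in app.domains]
            raise TimeoutError(f"preloaded dispvms still present: {leftover}")
        await asyncio.sleep(1)
```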
And now I see I can download the test logs from https://openqa.qubes-os.org/tests/139759/#downloads, that is useful.
Just note that although this is an exception and clutters the log, it happens because the ... I could use an async lock, but that would make preloading slower (think of 1 qube being used and then 4 qubes being used: waiting for the 1 to become preloaded before starting to preload the other 4 seems inefficient), or I could handle the exception with a ... What do you think?
Maybe change that from an exception to a warning log message (and a simple return)?
Also possible, and I am taking this route then.
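A rough illustration of that route (the start_preload function and the pending set are hypothetical names, not the actual code):

```python
import logging

log = logging.getLogger(__name__)


def start_preload(pending, name):
    """Begin preloading a dispvm unless one with this name is already pending."""
    if name in pending:
        # Instead of raising (and cluttering the test log with a traceback),
        # emit a warning and simply return, as suggested above.
        log.warning("preload of %s already in progress, skipping", name)
        return
    pending.add(name)
    # ... the actual preload would be scheduled from here ...
```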
For: QubesOS/qubes-issues#1512