Timing out is frequently one of the worst outcomes for a test: a large number of tests timing out inevitably wastes a lot of machine time, since the cause of a timeout often occurs quickly and the machine then sits idle waiting for the timer.
We frequently see this in crashtests and reftests that use `test-wait`, with patterns such as:
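A minimal sketch of the failure mode, modelled outside a browser so it can be run anywhere: `documentElement` here is a plain object standing in for `document.documentElement`, and `unsupportedCallbackAPI` is a stand-in for any callback-based API the browser under test may not implement.

```javascript
// Sketch of the timing-out pattern (hypothetical page script).
// In a real test page, <html class="test-wait"> makes the harness wait
// until the class is removed; we model the element with a plain object.
const documentElement = { className: 'test-wait' };

try {
  // The pattern as commonly written: the class is only removed inside
  // the callback, so if the API does not exist this line throws and the
  // class is never cleared.
  unsupportedCallbackAPI(() => {
    documentElement.className = '';
  });
} catch (e) {
  // ReferenceError: unsupportedCallbackAPI is not defined.
  // In a real page the exception goes unhandled; the harness keeps
  // waiting for the class to disappear and eventually times out.
}

console.log(documentElement.className); // "test-wait": the harness waits until timeout
```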
If `unsupportedCallbackAPI` is unsupported, the `test-wait` class will never get removed, and the test will simply time out.
What's unclear is whether we want to encourage people to write tests such as:
```js
if ("unsupportedCallbackAPI" in window) {
  unsupportedCallbackAPI(() => document.documentElement.className = '');
} else {
  document.documentElement.className = '';
}
```
…and then trust that the initial state of the page doesn't match the reference. But in crashtests this effectively doesn't run the actual test, though it does indeed avoid crashing. Maybe it is okay to just skip the actual test code in such circumstances?
One possible move would be to make uncaught exceptions/unhandled promise rejections set the status to ERROR—like we do for testharness.js tests.
That would definitely be RFC territory, and probably would require a proof-of-concept implementation. It would also be a… more surprising result for a crash/reftest.
FWIW, the largely manual solution we currently have is to mark directories as `implementation-status: backlog` in metadata, and then adjust the behaviour for such tests (e.g. by only running them in some CI configurations).
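As a sketch, that metadata lives in a `__dir__.ini` file in the metadata mirror of the affected test directory (the path shown here is hypothetical):

```ini
# meta/css/some-feature/__dir__.ini  (path is illustrative)
implementation-status: backlog
```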
> One possible move would be to make uncaught exceptions/unhandled promise rejections set the status to ERROR—like we do for testharness.js tests.
That seems plausible for normal reftests, but not really for crashtests; an unhandled exception might be part of what causes the crash.
(Inspired by web-platform-tests/wpt#41576 among others, for the record)