test: port integration tests from pytest-operator to Jubilant #326
Conversation
tests/integration/conftest.py
Outdated
- @fixture(scope="module", name="app_config")
+ @pytest.fixture(scope="session", name="app_config")
Many of these should really be session-scoped, as they don't change per test.
Indeed
Our integration tests run module by module (to parallelize them), so this doesn't have much impact.
You're right, it shouldn't have any real impact -- it's just a better way to signal intent.
- await model.wait_for_idle()
- await saml_app.set_config(  # type: ignore[attr-defined]
+ def all_units_idle(status: jubilant.Status) -> bool:
I've opened canonical/jubilant#91 to track whether we should add a `jubilant.all_units_idle` helper to do this. I'm not convinced it's worth it, though -- there are likely better ways of checking what we want here by looking at app/unit status instead.
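For reference, a minimal sketch of what such a helper could look like, assuming Jubilant's Status exposes each unit's agent status as `unit.juju_status.current` (as current Jubilant versions do):

def all_units_idle(status: jubilant.Status) -> bool:
    """Return True once every unit's agent reports idle."""
    return all(
        unit.juju_status.current == "idle"
        for app in status.apps.values()
        for unit in app.units.values()
    )

# usage: juju.wait(all_units_idle)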
test_discourse_srv_status_ok()

@pytest.mark.skip(reason="Frequent timeouts")
I've removed this test rather than porting it to Jubilant, given that it wasn't running anyway. But maybe you guys can port this one if you need it. :-)
I think we should port it too, otherwise it'll disappear from the code base, and we might forget about it
@@ -54,3 +54,6 @@ check_untyped_defs = true

[tool.pylint.SIMILARITIES]
min-similarity-lines=10
disable = [
    "W0621",  # Redefining name %r from outer scope (line %s)
This is rather annoying for conftest.py when defining pytest fixtures, for example a fixture named `juju` and then using `juju` as a local variable in another fixture (which is common). So I've turned it off.
Stopped at the `test_discourse_srv_status_ok` test; we'll resume on Monday.
def discourse_address_fixture(app: types.App, juju: jubilant.Juju):
    """Get discourse web address."""
    status = juju.status()
    unit_ip = status.apps[app.name].units[app.name + "/0"].address
Sometimes juju status returns IPv6 addresses; is that handled by Jubilant?
Yes, this field is just a `str`, so both IPv4 and IPv6 addresses will come through fine.
I wonder which address should be used:

# st = juju.status()
In [32]: st.apps["minio"].address
Out[32]: '10.152.183.203'

In [33]: st.apps["minio"].units["minio/2"].address
Out[33]: '10.1.0.89'

In my case, there's only one unit, minio/2. Both addresses work for me.

Which also makes me wonder if we're guaranteed to get unit id 0, or if it's better to do something like list(status.apps[...].units.values())[0]?
return action.results

keep_models = cast(bool, request.config.getoption("--keep-models"))
with jubilant.temp_model(keep=keep_models) as juju:
    juju.wait_timeout = 10 * 60
What is it used for?
`Juju.wait_timeout` is the default timeout for wait() (in seconds) if that method's timeout parameter is not specified. In Jubilant the default `wait_timeout` is 3 minutes. This changes it to 10 minutes, which is the same as the pytest-operator default wait_for_idle timeout, just to keep things consistent with that.
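As a sketch of the behavior (`jubilant.temp_model`, `juju.wait`, and `jubilant.all_active` are real Jubilant APIs; the scenario is illustrative):

with jubilant.temp_model() as juju:
    juju.wait_timeout = 10 * 60  # default timeout for juju.wait(), in seconds

    juju.wait(jubilant.all_active)                   # uses the 10-minute default
    juju.wait(jubilant.all_active, timeout=30 * 60)  # an explicit timeout overrides it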
@pytest.fixture(autouse=True)
def abort_on_fail(request: pytest.FixtureRequest):
Is it necessary? And should we play with pytest behavior from conftest?
Yeah, it's kind of necessary for easier migration from pytest-operator's `abort_on_fail`. I don't actually love this either, but it's a way to get other tests in a module to fail if previous ones have failed.
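A rough sketch of how such an autouse fixture can work; this is an illustration rather than the exact code in this PR (the module-level dict for bookkeeping is an assumption):

import pytest

_failed_modules: dict[str, bool] = {}  # assumed cross-test bookkeeping

@pytest.fixture(autouse=True)
def abort_on_fail(request: pytest.FixtureRequest):
    """Fail the remaining tests in a module once one of them has failed."""
    module = request.module.__name__
    if _failed_modules.get(module):
        pytest.fail(f"aborting: a previous test in {module} failed")
    failed_before = request.session.testsfailed
    yield
    if request.session.testsfailed > failed_before:
        _failed_modules[module] = True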
- stdout = action.results.get("stdout")
- stderr = action.results.get("stderr")
- assert code == 0, f"{cmd} failed ({code}): {stderr or stdout}"
+ juju.exec(cmd, unit=app.name + "/0", wait=60)
We were previously logging stdout or stderr, which won't be the case now (on exec failure), if I understood Jubilant's behavior correctly.
Previously we were logging stdout or stderr in the case of the failed `assert code == 0`. Jubilant's `exec` raises an exception automatically. However, you're right -- the error message will be less than helpful. I've opened canonical/jubilant#97 to track this.
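Until that lands, a caller could recover the output manually; a sketch, assuming `jubilant.TaskError` exposes the failed task via a `task` attribute carrying `stdout`/`stderr`, and that a `logger` is set up in the test module:

try:
    juju.exec(cmd, unit=app.name + "/0", wait=60)
except jubilant.TaskError as exc:
    # assumption: the exception carries the Task and its captured output
    logger.error("exec failed: stdout=%r stderr=%r", exc.task.stdout, exc.task.stderr)
    raise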
  {
-     "s3_enabled": "true",
+     "s3_enabled": True,
There's a hidden side effect in action here:
https://github.com/canonical/jubilant/blob/695a500f7545f234acdc7088c9579b4c8659797b/jubilant/_juju.py#L870-L871
I'd rather not do any manipulation of the input.
This is by design, and I believe it's actually an improvement. For `boolean` config types (which `s3_enabled` is), you provide a proper Python `bool`, which Jubilant then formats as "true" or "false", which is what Juju expects. So it's "using proper Python types" rather than "manipulation of the input".
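Concretely, with the option from the diff above (Jubilant's `Juju.config` accepts a dict of values):

juju.config(app.name, {"s3_enabled": True})  # Jubilant sends s3_enabled=true to Juju
# raw CLI equivalent: juju config <app> s3_enabled=true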
@@ -38,8 +38,8 @@ async def test_saml_login(  # pylint: disable=too-many-locals,too-many-arguments
      password = "test-discourse-k8s-password"  # nosecue
      saml_helper.register_user(username=username, email=email, password=password)

-     action_result = await run_action(app.name, "create-user", email=email)
-     assert "user" in action_result
+     task = juju.run(app.name + "/0", "create-user", {"email": email})
Suggested change:

- task = juju.run(app.name + "/0", "create-user", {"email": email})
+ task = juju.run(app.name + "/0", "create-user", params={"email": email})

Here and in the other test file. I could kind of guess the intent behind the third argument, but explicit is better than implicit, isn't it?
Re the unit vs app IP: sometimes the app IP address is different, so I'm sticking with the unit address, like the original tests here.

Re "/0": yes, unit IDs are guaranteed by Juju to be incrementing integers starting at 0, so this is common.

Re `params=`: in Jubilant I haven't made this a keyword-only argument, 1) because I think it's obvious for `run`, and 2) because it mirrors the CLI, which takes params as `key=val` optional positional arguments rather than `--param key=val` or similar.
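To illustrate the CLI parallel described above (names taken from the diff):

task = juju.run(app.name + "/0", "create-user", {"email": email})
# mirrors the CLI form:
#   juju run <unit> create-user email=<email>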
- yield application
+ yield types.App(app_name)
If the new type encapsulates a single name, why not use a string?

@pytest.fixture(scope="module")
def app_name(...) -> str:
    ...

Or if the idea is that there may be more fields... maybe return juju.status().apps[app_name]?
I wondered about that too. However, I think it's better as a proper `App` type: an `app_name` fixture to me implies it's very lightweight and just going to return the application name, but this fixture is heavy and deploys a bunch of stuff. So the name `app` and a special `types.App` type seem to indicate that intent better.
    test_discourse_srv_status_ok()
    juju.remove_relation(app.name, "postgresql-k8s:database")
    juju.wait(
        lambda status: status.apps[app.name].is_waiting and srv_status_raises_connection_error()
Given that this condition is used twice, what do you feel about something like:

juju.wait(not_ready)

def not_ready(status: jubilant.Status) -> bool:
    if not status.apps[app_name].is_waiting:
        return False
    try:
        ...
    except (...):
        return True
Personally I think that's less clear than what I have, but happy for the discourse-k8s team to chime in.
@@ -26,90 +23,87 @@ async def test_db_migration(model: Model, ops_test: OpsTest, pytestconfig: Confi
      with a patch but this patch only works for Discourse v3.2.0 and we might
      need to create a new patch for the new version of Discourse.
      """
-     postgres_app = await model.deploy(
+     juju.deploy(
          "postgresql-k8s",
We should put this string in a variable to avoid repeating it everywhere in the test
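For example (the constant name here is hypothetical, not from the PR):

POSTGRESQL_APP = "postgresql-k8s"  # hypothetical module-level constant

juju.deploy(POSTGRESQL_APP, channel="14/stable", trust=True)  # channel/trust illustrative
juju.remove_relation(app.name, f"{POSTGRESQL_APP}:database")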
- await break_action.wait()
- assert break_action.status == "failed"
+ with pytest.raises(jubilant.TaskError):
+     juju.run(app.name + "/0", "create-user", {"email": email})
It might be interesting to be able to distinguish between `cancelled`, `error`, and `failed` when running a task. Here it relies on `raise_on_failure`, which just checks if the task is not completed.
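A sketch of how the distinction could be drawn today, assuming `jubilant.TaskError` exposes the failed `Task` and its `status` string:

try:
    juju.run(app.name + "/0", "create-user", {"email": email})
except jubilant.TaskError as exc:
    # assumption: exc.task.status distinguishes "failed", "cancelled", "error"
    if exc.task.status != "failed":
        raise  # only a genuine action failure is expected here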
You didn't ask for it :-), but this charm has the privilege of being one of the first charms to get its integration tests ported from pytest-operator (and python-libjuju) to Jubilant. We chose this charm because it isn't trivial, but it also isn't huge, and it has a decent number of semi-complex integration tests.

It's relatively straightforward stuff, and simpler in most cases. The tests seem to run in a similar time, or a bit faster: 46 minutes on Jubilant vs 51 minutes on pytest-operator.
A few notes:

- I've added a `juju` fixture to create a temporary Juju model and a `charm_file` fixture to find the charm file, and I've updated `app_fixture` to use Jubilant and return a lightweight `types.App` object. Also the `abort_on_fail` fixture, which is part of pytest-operator.

Let me know if you have any questions or want to discuss!
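For reference, here is the `juju` fixture reassembled from the conftest snippets quoted in the review above; the fixture name and module scope are assumptions beyond those snippets:

from typing import cast

import jubilant
import pytest

@pytest.fixture(scope="module")
def juju(request: pytest.FixtureRequest):
    """Provide a Jubilant Juju instance backed by a temporary model."""
    keep_models = cast(bool, request.config.getoption("--keep-models"))
    with jubilant.temp_model(keep=keep_models) as juju:
        juju.wait_timeout = 10 * 60  # match pytest-operator's wait_for_idle default
        yield juju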