
Conversation

@tblakex01 (Owner) commented Sep 18, 2025

Summary

  • add moto-backed unit coverage for the password notification Lambda and expand password reset/secret rotation CLI regression tests
  • introduce CLI end-to-end checks, Terraform validation, and stricter pytest coverage thresholds
  • update tooling requirements with typing and security extras plus repository-wide mypy configuration

Testing

  • black .
  • flake8 scripts/ lambda/ tests/ --max-line-length=120 --ignore=E501,W503,E203
  • mypy --config-file mypy.ini scripts/ lambda/
  • bandit -r scripts/ lambda/
  • pytest
  • pytest tests/integration --no-cov

https://chatgpt.com/codex/tasks/task_e_68cc1b444a34833294619cb0df8e9442

Summary by Sourcery

Expand and strengthen testing coverage and validation infrastructure, and update tooling requirements and static typing configuration.

Enhancements:

  • Enforce stricter pytest coverage thresholds and extend coverage measurement to lambda directory

Tests:

  • Add moto-backed unit tests for the password notification Lambda, secret key rotation S3 script, and CLI password reset main workflow
  • Introduce integration tests for CLI end-to-end flows (key rotation and user listing) and Terraform configuration validation
  • Add a regression test to ensure malformed CSV rows are safely skipped by the compliance report parser
  • Update pytest configuration to include lambda code in coverage and enforce an 85% coverage threshold

Chores:

  • Introduce a repository-wide mypy configuration and update tooling requirements (mypy, typing, and security extras)

Summary by CodeRabbit

  • Tests

    • Added end-to-end CLI tests and moto-backed integration tests for IAM key rotation, password reset, and password notification workflows.
    • Introduced Terraform configuration validation test.
    • Added regression test ensuring malformed credential report rows are ignored.
    • Expanded coverage for S3 credential rotation and error handling.
  • Chores

    • Added static type checking configuration; relaxed missing-import checks.
    • Enforced minimum 85% test coverage via pytest configuration.
    • Introduced security linter dependency (Bandit) for scripts.
    • Added typing stubs for dateutil in test dependencies.


sourcery-ai bot commented Sep 18, 2025

Reviewer's Guide

This PR broadens test coverage by introducing moto-backed unit tests for the password notification Lambda, expanding CLI regression tests for password reset and secret rotation, adding end-to-end CLI integration tests, validating Terraform configs in CI, enforcing stricter pytest coverage thresholds, and updating tooling dependencies along with a global mypy configuration.

File-Level Changes

Change Details Files
Add moto-backed unit tests for password notification Lambda
  • introduce pytest fixture to load the Lambda module with moto-backed IAM and SES clients
  • generate fake credential reports and simulate SES send_email calls
  • verify email notifications are sent only for expiring passwords and skipped for missing email tags
tests/test_password_notification_lambda.py
Expand password reset CLI regression tests
  • cover main() success path and multiple failure scenarios via mocked AWS clients
  • assert correct exit codes on missing profiles, invalid connection, missing users, policy violations, and unexpected errors
tests/test_password_reset.py
Expand secret key rotation CLI regression tests
  • add moto-backed test for rotate_and_store_credentials function
  • simulate script invocation with fake boto3 module and verify IAM calls
  • assert error propagation on client errors with proper logging
tests/test_secret_key_rotation_s3.py
Introduce CLI end-to-end integration tests
  • invoke key rotation CLI in integration mode and validate JSON output
  • execute password reset CLI against moto IAM and verify user listing
tests/integration/test_cli_end_to_end.py
Add Terraform configuration validation tests
  • copy Terraform IAM module to temp directory and run terraform init and validate
  • skip tests when Terraform CLI is unavailable or providers cannot be fetched
tests/test_terraform_validation.py
Enforce stricter pytest coverage thresholds
  • update pytest.ini to include lambda coverage in reports
  • set coverage-fail-under to 85%
pytest.ini
Update tooling requirements and add mypy config
  • pin moto, pytest, coverage and add typing/security extras in requirements
  • add repository-wide mypy.ini to ignore missing imports
scripts/requirements.txt
tests/requirements.txt
mypy.ini
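For reference, the coverage settings described above could take roughly this shape in pytest.ini (a sketch: the 85% fail-under threshold is stated in this PR, while the exact `--cov` targets and remaining addopts are assumptions):

```ini
[pytest]
testpaths = tests
addopts =
    --cov=scripts
    --cov=lambda
    --cov-report=term-missing
    --cov-fail-under=85
```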

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


coderabbitai bot commented Sep 18, 2025

Walkthrough

Adds type-checker and test configs, expands test requirements, and introduces multiple new tests: CLI end-to-end, password reset workflow, password notification Lambda (Moto), secret key rotation to S3 (Moto and CLI), Terraform validation, and a compliance-report regression. No production code changes.

Changes

Cohort / File(s) Summary
Type-checking config
mypy.ini
New mypy config with ignore_missing_imports enabled.
Test runner and coverage
pytest.ini
Adds coverage for lambda, sets --cov-fail-under=85, formatting tweak on addopts.
Tooling requirements
scripts/requirements.txt, tests/requirements.txt
Adds Bandit to scripts requirements; adds types-python-dateutil to tests; coverage line effectively unchanged.
CLI integration tests
tests/integration/test_cli_end_to_end.py
New end-to-end tests for key rotation and password-reset CLIs using mocks/Moto, run via runpy and stdout capture.
Compliance report regression test
tests/test_compliance_report.py
Adds test ensuring parse_credential_report skips incomplete CSV rows.
Password notification Lambda tests
tests/test_password_notification_lambda.py
New Moto-backed SES/IAM tests for Lambda notifications, including users without email tags.
Password reset main-flow tests
tests/test_password_reset.py
Adds comprehensive tests for interactive main flow: success, missing profile, invalid connection, missing user, policy violation, unexpected error.
Secret key rotation to S3 tests
tests/test_secret_key_rotation_s3.py
Adds Moto integration, CLI execution with fake boto3, and error propagation tests; import scaffolding expanded.
Terraform validation
tests/test_terraform_validation.py
New test that runs terraform init/validate in a temp workspace; skips on missing Terraform or provider errors.
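The malformed-row regression above can be illustrated with a minimal, self-contained sketch; `parse_credential_report` here is a hypothetical stand-in with the same contract as the project's helper (skip incomplete CSV rows instead of raising), not its actual implementation:

```python
import csv
import io


def parse_credential_report(report_bytes: bytes) -> list[dict]:
    """Parse a credential-report CSV, silently dropping malformed rows.

    Hypothetical stand-in mirroring the regression described above: rows
    whose column count differs from the header are skipped rather than
    causing an IndexError/KeyError downstream.
    """
    reader = csv.reader(io.StringIO(report_bytes.decode("utf-8")))
    header = next(reader)
    parsed = []
    for row in reader:
        if len(row) != len(header):
            continue  # malformed row: drop it silently
        parsed.append(dict(zip(header, row)))
    return parsed


report = (
    b"user,arn,password_last_changed\n"
    b"alice,arn:aws:iam::123456789012:user/alice,2024-01-01\n"
    b"broken-row\n"
)
assert [r["user"] for r in parse_credential_report(report)] == ["alice"]
```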

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant CLI as Key Rotation CLI
  participant boto3 as boto3 (IAM/S3)
  participant AWS as AWS (Moto/Real)

  User->>CLI: Invoke --list --json
  CLI->>boto3: client('iam').list_access_keys
  boto3->>AWS: ListAccessKeys
  AWS-->>boto3: Keys
  boto3-->>CLI: Keys
  CLI-->>User: Print JSON payload
sequenceDiagram
  autonumber
  participant Event as Event Trigger
  participant Lambda as Password Notification Lambda
  participant IAM as boto3.client('iam')
  participant SES as boto3.client('ses')

  Event->>Lambda: lambda_handler(event, context)
  Lambda->>IAM: get_credential_report
  IAM-->>Lambda: CSV credential data
  Lambda->>Lambda: Parse, filter expiring users with email
  Lambda->>SES: send_email(to=user.email)
  SES-->>Lambda: MessageId
  Lambda-->>Event: "Password expiry notifications sent."
sequenceDiagram
  autonumber
  actor User
  participant CLI as Password Reset CLI
  participant IAM as boto3.client('iam')

  User->>CLI: Run main flow
  CLI->>IAM: GetLoginProfile
  alt Exists
    CLI->>CLI: Prompt for old password
    CLI->>CLI: Generate new password
    CLI->>IAM: ChangePassword(old, new)
    alt Success
      CLI-->>User: Print new password, exit 0
    else Error
      CLI-->>User: Log error, exit 1
    end
  else NoSuchEntity
    CLI-->>User: Exit 1
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • bspeagle

Poem

I thump my paws on turf so green,
New tests hop in—what a scene! 🐇
Moto clouds and JSON streams,
Terraform checks our dream-machine.
With Bandit watching, neat and tight,
Our burrow’s code sleeps safe at night. ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 26.83%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve it.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title "Expand moto-backed coverage and infrastructure validation" accurately captures the primary intent of the changeset: it expands moto-backed test coverage (Lambda and CLI/rotation tests) and adds infrastructure validation (Terraform), as shown in the PR summary. It is concise and specific enough for a reviewer scanning history, without needing to enumerate every tooling or config tweak included in the PR.
✨ Finishing touches

  • 📝 Generate Docstrings

🧪 Generate unit tests

  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch codex/evaluate-and-enhance-test-coverage

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • I’d recommend unifying the test suite to a single framework (pytest or unittest) rather than mixing TestCase classes and pytest fixtures, which will simplify configuration and improve readability.
  • The CLI tests rely on runpy and patching sys.argv/boto3, which can be brittle—consider refactoring your scripts to expose core functions or a testable entry point so you can invoke them directly without runpy hacks.
  • The Terraform validation test runs real terraform commands in CI, which can slow down or fail tests—consider isolating it behind a dedicated marker or mocking providers to keep the main test suite fast and reliable.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- I’d recommend unifying the test suite to a single framework (pytest or unittest) rather than mixing TestCase classes and pytest fixtures, which will simplify configuration and improve readability.
- The CLI tests rely on runpy and patching sys.argv/boto3, which can be brittle—consider refactoring your scripts to expose core functions or a testable entry point so you can invoke them directly without runpy hacks.
- The Terraform validation test runs real terraform commands in CI, which can slow down or fail tests—consider isolating it behind a dedicated marker or mocking providers to keep the main test suite fast and reliable.

## Individual Comments

### Comment 1
<location> `tests/test_password_notification_lambda.py:69-78` </location>
<code_context>
+def test_lambda_sends_notifications_for_expiring_passwords(
</code_context>

<issue_to_address>
**suggestion (testing):** Consider adding a test for SES send_email failures.

Please add a test case where SES send_email raises an exception to verify the Lambda handles errors correctly and does not misreport success.

Suggested implementation:

```python
def test_lambda_sends_notifications_for_expiring_passwords(
    password_notification_module,
):
    """Verify that users beyond the warning threshold receive an email."""

    module = password_notification_module
    iam_client = module.iam_client

    iam_client.create_user(UserName="alice")
    iam_client.tag_user(
        UserName="alice", Tags=[{"Key": "email", "Value": "alice@example.com"}]
    )

def test_lambda_handles_ses_send_email_failure(password_notification_module, mocker):
    """Verify that the Lambda handles SES send_email failures gracefully."""

    module = password_notification_module
    iam_client = module.iam_client
    ses_client = module.ses_client

    iam_client.create_user(UserName="bob")
    iam_client.tag_user(
        UserName="bob", Tags=[{"Key": "email", "Value": "bob@example.com"}]
    )

    # Simulate password expiration for bob
    # (You may need to set up the password last changed date here as in other tests)

    # Mock SES send_email to raise an exception
    mocker.patch.object(ses_client, "send_email", side_effect=Exception("SES failure"))

    # Run the Lambda handler
    try:
        module.lambda_handler({}, {})
    except Exception:
        pytest.fail("Lambda should handle SES send_email exceptions gracefully")

    # Optionally, check logs or other error handling mechanisms if available

```

- You may need to adjust the password expiration setup for "bob" to match your test conventions.
- If your Lambda logs errors or sets a status, you can add assertions to check those behaviors.
- Ensure `pytest` and `mocker` are available in your test environment.
</issue_to_address>

### Comment 2
<location> `tests/test_password_reset.py:401-402` </location>
<code_context>
+    @patch(
+        "aws_iam_self_service_password_reset.validate_aws_connection", return_value=True
+    )
+    def test_main_handles_unexpected_exception(
+        self,
+        mock_validate,
+        mock_get_user,
+        mock_boto_client,
+        mock_passwordgen,
+        mock_getpass,
+        mock_exit,
+        mock_file,
+    ):
+        iam_client = Mock()
+        mock_boto_client.return_value = iam_client
+        iam_client.get_login_profile.return_value = {}
+        iam_client.change_password.side_effect = RuntimeError("boom")
+
+        with patch("aws_iam_self_service_password_reset.rprint"):
+            with self.assertRaises(SystemExit):
+                pwd_reset.main()
+
+        mock_exit.assert_called_with(1)
+        mock_file().write.assert_called()
+
+
</code_context>

<issue_to_address>
**suggestion (testing):** Consider asserting the content written to the file in error cases.

Please add assertions to verify the actual content written to the file during error handling, such as the error message or traceback, to confirm correct error reporting.

```suggestion
        mock_exit.assert_called_with(1)
        mock_file().write.assert_called()
        # Assert the error message or traceback is written to the file
        written_content = "".join(
            call.args[0] for call in mock_file().write.call_args_list
        )
        self.assertIn("boom", written_content)
```
</issue_to_address>

### Comment 3
<location> `tests/test_secret_key_rotation_s3.py:143-152` </location>
<code_context>
+    def test_cli_propagates_client_errors(self):
</code_context>

<issue_to_address>
**suggestion (testing):** Consider testing for other error codes and exception types.

Adding tests for other ClientError codes and non-ClientError exceptions will help verify error handling and logging are robust.
</issue_to_address>
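One way to act on this suggestion is to exercise several error codes in one parametrized pass. The sketch below uses a stand-in `FakeClientError` instead of botocore's real `ClientError` so it stays self-contained, and `rotate` is a hypothetical wrapper mirroring the re-raise behaviour the tests describe — neither name comes from the repository:

```python
from unittest.mock import Mock


class FakeClientError(Exception):
    """Minimal stand-in for botocore.exceptions.ClientError."""

    def __init__(self, code: str):
        super().__init__(code)
        self.response = {"Error": {"Code": code}}


def rotate(iam_client):
    """Hypothetical wrapper: re-raises client errors after (elided) logging."""
    try:
        return iam_client.create_access_key(UserName="svc")
    except FakeClientError:
        raise


# Exercise several error codes the way a parametrized test would.
for code in ("AccessDenied", "LimitExceeded", "NoSuchEntity"):
    client = Mock()
    client.create_access_key.side_effect = FakeClientError(code)
    try:
        rotate(client)
    except FakeClientError as err:
        # The original error code must survive propagation unchanged.
        assert err.response["Error"]["Code"] == code
```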

### Comment 4
<location> `tests/test_terraform_validation.py:45-46` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>

### Comment 5
<location> `tests/test_terraform_validation.py:54-55` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>

### Comment 6
<location> `tests/test_terraform_validation.py:58-59` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>
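A common way to resolve these no-conditionals-in-tests findings is to hoist the availability check into a module-level skipif and move the returncode checks into an asserting helper. This is a sketch of that refactor under those assumptions, not the repository's actual test file:

```python
import shutil
import subprocess

import pytest

# Module-level skip replaces the in-test `if shutil.which(...) is None` branch.
pytestmark = pytest.mark.skipif(
    shutil.which("terraform") is None,
    reason="Terraform CLI is not installed",
)


def _run_terraform(args, cwd):
    """Run terraform and assert success, keeping conditionals out of tests."""
    result = subprocess.run(
        ["terraform", *args],
        cwd=cwd,
        capture_output=True,
        text=True,
        timeout=60,
    )
    assert result.returncode == 0, f"terraform {args[0]} failed: {result.stderr}"
    return result


def test_terraform_validate(tmp_path):
    # The test body is now a straight-line sequence of helper calls.
    _run_terraform(["init", "-backend=false"], tmp_path)
    _run_terraform(["validate"], tmp_path)
```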

### Comment 7
<location> `tests/test_password_notification_lambda.py:53` </location>
<code_context>
def _credential_report_for(users: list[tuple[str, datetime]]):
    """Generate a credential report CSV for the provided users."""

    rows = [
        "user,arn,user_creation_time,password_enabled,password_last_used,password_last_changed",
    ]
    for username, changed_at in users:
        rows.append(
            ",".join(
                [
                    username,
                    f"arn:aws:iam::123456789012:user/{username}",
                    "2023-01-01T00:00:00+00:00",
                    "true",
                    "N/A",
                    changed_at.astimezone(timezone.utc).isoformat(),
                ]
            )
        )
    return "\n".join(rows).encode("utf-8")

</code_context>

<issue_to_address>
**issue (code-quality):** Replace a for append loop with list extend ([`for-append-to-extend`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/for-append-to-extend/))
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines +45 to +46:

    if shutil.which("terraform") is None:
        pytest.skip("Terraform CLI is not installed")


issue (code-quality): Avoid conditionals in tests. (no-conditionals-in-tests)


Comment on lines +54 to +55:

    if init_result.returncode != 0:
        pytest.fail(f"terraform init failed: {init_result.stderr}")


issue (code-quality): Avoid conditionals in tests. (no-conditionals-in-tests)


Comment on lines +58 to +59:

    if validate_result.returncode != 0:
        pytest.fail(f"terraform validate failed: {validate_result.stderr}")


issue (code-quality): Avoid conditionals in tests. (no-conditionals-in-tests)


    rows = [
        "user,arn,user_creation_time,password_enabled,password_last_used,password_last_changed",
    ]
    for username, changed_at in users:


issue (code-quality): Replace a for append loop with list extend (for-append-to-extend)

@tblakex01 (Owner, Author)

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Chef's kiss.

About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you open a pull request for review, mark a draft as ready, or comment "@codex review". If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex fix this CI failure" or "@codex address that feedback".

@tblakex01 tblakex01 merged commit 94ab934 into main Sep 18, 2025
4 of 6 checks passed
@tblakex01 tblakex01 deleted the codex/evaluate-and-enhance-test-coverage branch September 18, 2025 22:44

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (15)
mypy.ini (1)

1-2: Consider adding stricter type checking options

While ignore_missing_imports = True helps with third-party libraries lacking type stubs, consider adding more strict options to improve type safety gradually:

 [mypy]
 ignore_missing_imports = True
+python_version = 3.8
+warn_return_any = True
+warn_unused_configs = True
+disallow_untyped_defs = False

This would provide better type checking while still allowing incremental adoption.

tests/test_terraform_validation.py (2)

1-2: Remove shebang from non-executable test file

The shebang line is present but the file is not executable. Either make the file executable or remove the shebang.

-#!/usr/bin/env python3
 """Lightweight infrastructure validation executed during the test suite."""

15-26: Consider adding timeout protection for subprocess calls

The subprocess.run call could hang indefinitely if Terraform encounters an interactive prompt or network issue. Consider adding a timeout.

     result = subprocess.run(
         ["terraform", *args],
         cwd=work_dir,
         check=False,
         capture_output=True,
         text=True,
         env=env,
+        timeout=60,  # 60 second timeout for Terraform operations
     )
tests/test_password_notification_lambda.py (2)

1-2: Remove shebang from non-executable test file

The shebang line is present but the file is not executable.

-#!/usr/bin/env python3
 """Tests for the password notification Lambda using moto-backed AWS services."""

47-47: Improve type hint for better clarity

The type hint could be more specific about the expected user data structure.

-def _credential_report_for(users: list[tuple[str, datetime]]):
+def _credential_report_for(users: list[tuple[str, datetime]]) -> bytes:
tests/test_password_reset.py (1)

289-310: Clean up unused mock arguments

Several mock arguments are injected but not used in the test methods. While they're needed for patching, consider using _ prefix to indicate they're intentionally unused.

     def test_main_successful_reset(
         self,
-        mock_validate,
-        mock_get_user,
+        _mock_validate,
+        _mock_get_user,
         mock_boto_client,
-        mock_passwordgen,
-        mock_getpass,
+        _mock_passwordgen,
+        _mock_getpass,
         mock_display,
         mock_log,
     ):

Apply similar changes to other test methods with unused mock arguments.

tests/integration/test_cli_end_to_end.py (5)

1-1: Remove non-executable shebang or make the file executable.

Ruff flags EXE001; tests don’t need a shebang.

Apply:

-#!/usr/bin/env python3

41-47: Prefer pytest’s capsys over manual stdout redirection.

Simpler and less brittle.

Example:

-    stdout = io.StringIO()
-    ...
-    with patch.object(sys, "argv", argv):
-        with patch("boto3.client", return_value=_FakeIAM()):
-            with redirect_stdout(stdout):
-                runpy.run_path(script, run_name="__main__")
-
-    payload = json.loads(stdout.getvalue())
+    with patch.object(sys, "argv", argv), patch("boto3.client", return_value=_FakeIAM()):
+        runpy.run_path(script, run_name="__main__")
+    captured = capsys.readouterr()
+    payload = json.loads(captured.out)

49-50: Guard JSON parsing from incidental stdout noise.

If the CLI logs to stdout, json.loads will fail. Consider emitting JSON only (logs to stderr) when --json is set, or parse the last JSON-looking line in the test.
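One way to implement the "parse the last JSON-looking line" option is a small helper that scans captured stdout from the end; `last_json_payload` is a hypothetical name for illustration, not an existing utility in the repository:

```python
import json


def last_json_payload(stdout_text: str):
    """Return the parsed payload from the last stdout line that is valid JSON.

    Tolerates log lines interleaved with the CLI's JSON output by scanning
    from the end and skipping anything json.loads rejects.
    """
    for line in reversed(stdout_text.splitlines()):
        line = line.strip()
        if not line:
            continue
        try:
            return json.loads(line)
        except json.JSONDecodeError:
            continue
    raise AssertionError("no JSON line found in CLI output")


noisy = 'INFO rotating keys\n{"rotated": ["AKIA123"], "status": "ok"}\n'
assert last_json_payload(noisy)["status"] == "ok"
```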


59-63: Avoid env leakage; patch env in a scoped context.

setdefault can leave globals mutated across tests.

-        os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")
-        os.environ.setdefault("AWS_ACCESS_KEY_ID", "testing")
-        os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "testing")
+        with patch.dict(
+            os.environ,
+            {
+                "AWS_DEFAULT_REGION": "us-east-1",
+                "AWS_ACCESS_KEY_ID": "testing",
+                "AWS_SECRET_ACCESS_KEY": "testing",
+            },
+            clear=False,
+        ):
+            iam = boto3.client("iam", region_name="us-east-1")
+            iam.create_user(UserName="cli-user-one")
+            iam.create_user(UserName="cli-user-two")
+            stdout = io.StringIO()
+            argv = [script.name, "list-users"]
+            with patch.object(sys, "argv", argv), redirect_stdout(stdout):
+                runpy.run_path(script, run_name="__main__")
+        output = stdout.getvalue().splitlines()

75-77: Make the header assertion resilient to extra output.

Don’t assume it’s the first line.

-    assert "Found 2 IAM users:" in output[0]
+    assert any("Found 2 IAM users:" in line for line in output)
tests/test_secret_key_rotation_s3.py (4)

1-1: Remove non-executable shebang or chmod +x.

Keep tests shebang-free to satisfy EXE001.

-#!/usr/bin/env python3

64-67: Avoid shared module state; restore cached clients after test.

Resetting rotate._iam_client/_s3_client without restoring them afterwards risks cross-test flakiness.

-    def test_rotate_and_store_with_moto(self):
-        rotate._iam_client = None
-        rotate._s3_client = None
+    def test_rotate_and_store_with_moto(self):
+        prev_iam, prev_s3 = rotate._iam_client, rotate._s3_client
+        rotate._iam_client = None
+        rotate._s3_client = None
         try:
-            with mock_aws():
+            with patch.dict(
+                os.environ,
+                {
+                    "AWS_DEFAULT_REGION": "us-east-1",
+                    "AWS_ACCESS_KEY_ID": "testing",
+                    "AWS_SECRET_ACCESS_KEY": "testing",
+                },
+                clear=False,
+            ), mock_aws():
                 iam = boto3.client("iam", region_name="us-east-1")
                 s3 = boto3.client("s3", region_name="us-east-1")
                 ...
-            self.assertTrue(url.startswith("https://"))
+            self.assertTrue(url.startswith("https://"))
+        finally:
+            rotate._iam_client, rotate._s3_client = prev_iam, prev_s3

150-158: Silence Ruff ARG002 without renaming the parameters.

The dummies never read UserName, but the name has to stay: the code under test calls these methods boto3-style with keyword arguments (UserName=...), so renaming to _username would raise TypeError. Suppress the lint in place instead.

         class _ErroringIAM:
-            def list_access_keys(self, UserName):
+            def list_access_keys(self, UserName):  # noqa: ARG002
                 return {"AccessKeyMetadata": []}
 
-            def create_access_key(self, UserName):
+            def create_access_key(self, UserName):  # noqa: ARG002
                 raise ClientError(
                     {"Error": {"Code": "AccessDenied", "Message": "denied"}},
                     "CreateAccessKey",
                 )

181-186: Assert on stable signal (“AccessDenied”) rather than log copy.

Log text “AWS error” is brittle; assert the code appears in logs.

-        self.assertTrue(any("AWS error" in message for message in logs.output))
+        self.assertTrue(any("AccessDenied" in message for message in logs.output))
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bacc427 and 577359f.

📒 Files selected for processing (10)
  • mypy.ini (1 hunks)
  • pytest.ini (1 hunks)
  • scripts/requirements.txt (1 hunks)
  • tests/integration/test_cli_end_to_end.py (1 hunks)
  • tests/requirements.txt (1 hunks)
  • tests/test_compliance_report.py (1 hunks)
  • tests/test_password_notification_lambda.py (1 hunks)
  • tests/test_password_reset.py (2 hunks)
  • tests/test_secret_key_rotation_s3.py (2 hunks)
  • tests/test_terraform_validation.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Always use Flake8 for linting the application. E501 errors can be ignored up to 120 lines.
Always use Black for formatting the application.
Always use Pyright or mypy for type checking the application.

Files:

  • tests/test_compliance_report.py
  • tests/integration/test_cli_end_to_end.py
  • tests/test_password_notification_lambda.py
  • tests/test_secret_key_rotation_s3.py
  • tests/test_terraform_validation.py
  • tests/test_password_reset.py
🧬 Code graph analysis (4)
tests/test_compliance_report.py (1)
scripts/aws_iam_compliance_report.py (1)
  • parse_credential_report (118-132)
tests/integration/test_cli_end_to_end.py (1)
tests/test_secret_key_rotation_s3.py (3)
  • _FakeIAM (89-108)
  • list_access_keys (93-95)
  • list_access_keys (151-152)
tests/test_secret_key_rotation_s3.py (2)
scripts/aws_iam_secret_key_rotation_s3.py (2)
  • rotate_and_store_credentials (78-116)
  • generate_presigned_url (68-75)
tests/integration/test_cli_end_to_end.py (2)
  • _FakeIAM (29-39)
  • list_access_keys (30-39)
tests/test_password_reset.py (1)
scripts/aws_iam_self_service_password_reset.py (1)
  • main (192-302)
🪛 Ruff (0.12.2)
tests/integration/test_cli_end_to_end.py

1-1: Shebang is present but file is not executable

(EXE001)

tests/test_password_notification_lambda.py

1-1: Shebang is present but file is not executable

(EXE001)

tests/test_secret_key_rotation_s3.py

151-151: Unused method argument: UserName

(ARG002)


154-154: Unused method argument: UserName

(ARG002)

tests/test_terraform_validation.py

1-1: Shebang is present but file is not executable

(EXE001)


19-19: subprocess call: check for execution of untrusted input

(S603)


20-20: Starting a process with a partial executable path

(S607)

tests/test_password_reset.py

291-291: Unused method argument: mock_validate

(ARG002)


292-292: Unused method argument: mock_get_user

(ARG002)


294-294: Unused method argument: mock_passwordgen

(ARG002)


295-295: Unused method argument: mock_getpass

(ARG002)


322-322: Unused method argument: mock_validate

(ARG002)


322-322: Unused method argument: mock_get_user

(ARG002)


344-344: Unused method argument: mock_validate

(ARG002)


357-357: Unused method argument: mock_validate

(ARG002)


357-357: Unused method argument: mock_get_user

(ARG002)


379-379: Unused method argument: mock_validate

(ARG002)


380-380: Unused method argument: mock_get_user

(ARG002)


382-382: Unused method argument: mock_passwordgen

(ARG002)


383-383: Unused method argument: mock_getpass

(ARG002)


418-418: Unused method argument: mock_validate

(ARG002)


419-419: Unused method argument: mock_get_user

(ARG002)


421-421: Unused method argument: mock_passwordgen

(ARG002)


422-422: Unused method argument: mock_getpass

(ARG002)

🔇 Additional comments (12)
tests/requirements.txt (1)

6-7: LGTM! Type stubs addition aligns with mypy configuration

The addition of types-python-dateutil>=2.8.19 provides type hints for the python-dateutil library, which complements the new mypy configuration. The coverage line appears unchanged despite the diff formatting.

scripts/requirements.txt (1)

15-16: LGTM! Security scanning with Bandit added

The addition of bandit>=1.7.0 for security analysis is a valuable addition to the development toolchain. The Black entry remains unchanged despite the diff formatting.

pytest.ini (2)

15-15: Consider the coverage threshold implications

Setting --cov-fail-under=85 enforces high test coverage, which is excellent. However, ensure the team is prepared for builds to fail if coverage drops below this threshold during development.


11-11: Resolved — lambda/ directory present

--cov=lambda is valid; repository contains a top-level lambda/ directory.

tests/test_compliance_report.py (1)

203-211: Good defensive programming for malformed CSV data

This regression test ensures the code gracefully handles incomplete CSV rows without crashing. This is important for robustness when dealing with potentially malformed credential reports.
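The skip-don't-crash behaviour being locked in can be sketched like this (column count and names are assumed for illustration, not the script's real schema):

```python
import csv
import io


def parse_rows(report_bytes: bytes, expected_cols: int = 3):
    """Yield credential-report rows, silently skipping truncated ones."""
    reader = csv.reader(io.StringIO(report_bytes.decode("utf-8")))
    next(reader, None)  # drop the header row
    for row in reader:
        if len(row) < expected_cols:
            continue  # malformed/short row: skip instead of raising IndexError
        yield row
```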

tests/test_terraform_validation.py (1)

45-46: Consider using shutil.which consistently

Good use of shutil.which to check for Terraform. This is the Pythonic way to check for executable availability.
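A reusable sketch of the pattern — resolve the binary up front and bail out when it is absent (the helper name and tool invocation are illustrative):

```python
import shutil
import subprocess
from typing import Optional


def run_tool(name: str, *args: str) -> Optional[subprocess.CompletedProcess]:
    """Run a CLI tool by its absolute path, or return None if not installed.

    Resolving via shutil.which also sidesteps Ruff S607 (partial executable
    path), since subprocess then receives a fully qualified binary path.
    """
    path = shutil.which(name)
    if path is None:
        return None
    return subprocess.run([path, *args], capture_output=True, text=True, check=False)
```

In a test, `result = run_tool("terraform", "validate")` followed by `pytest.skip(...)` when `result is None` keeps environments without Terraform green.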

tests/test_password_notification_lambda.py (1)

94-96: Well-structured mocking approach

Good use of nested MonkeyPatch contexts to cleanly manage multiple patches. The approach of capturing calls while still invoking the original function is a solid testing pattern.
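The capture-and-delegate pattern in miniature, using a stdlib target rather than the Lambda's real collaborators:

```python
import math
from unittest.mock import patch

calls: list[float] = []
original_sqrt = math.sqrt


def recording_sqrt(value: float) -> float:
    """Record the argument, then delegate to the real implementation."""
    calls.append(value)
    return original_sqrt(value)


with patch.object(math, "sqrt", recording_sqrt):
    result = math.sqrt(9.0)

# The wrapped call both executed for real and left an audit trail.
assert result == 3.0
assert calls == [9.0]
```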

tests/test_password_reset.py (2)

269-437: Comprehensive test coverage for main workflow

Excellent addition of the TestMainFlow class with thorough coverage of success and error scenarios. The tests properly mock all external dependencies and verify expected behavior.


365-402: Good error handling test coverage

The test properly verifies that policy violations are logged to the audit file. This ensures compliance and security requirements are met.

tests/test_secret_key_rotation_s3.py (2)

26-63: LGTM: solid unit test coverage of happy path with precise assertions.

Mocks capture all critical calls; assertions verify S3 hardening and IAM deactivation.


143-186: No change required — CLI logs an ERROR-level message.
The script calls _logger.error("AWS error: %s", exc) in the ClientError handler (scripts/aws_iam_secret_key_rotation_s3.py), so assertLogs(level="ERROR") and the test's check for "AWS error" are correct.

tests/integration/test_cli_end_to_end.py (1)

45-47: Resolved — CLI uses boto3.client so the test patch is effective.
scripts/aws_iam_self_service_key_rotation.py calls boto3.client("iam") at line 42, so patch("boto3.client", ...) will intercept the call.
