Conversation

@sujalgawas
Contributor

Summary

This PR introduces a new unit test file (test_lt_memory.py) to provide dedicated test coverage for the LongTermMemory class (mesa_llm/memory/lt_memory.py).

Motivation

I noticed that while LongTermMemory functionality is partially exercised by the test_STLT_memory.py integration test, the class had no dedicated unit test file of its own.

Adding a separate unit test file is valuable for:

Isolating Failures: If a bug is introduced in LongTermMemory, these unit tests will fail directly, making it faster to pinpoint the problem.

Thoroughness: It allows for testing the class's internal methods (like _update_long_term_memory) and specific edge cases in isolation.

Safer Refactoring: It provides a safety net, allowing the LongTermMemory class to be modified or optimized in the future with confidence that its core contract is being met.

This PR addresses this testing gap and improves overall code coverage.

Implementation

Created a new test file: tests/memory/test_lt_memory.py.

Followed the project's existing testing patterns by using pytest for the test class and fixtures (e.g., mock_agent).

Utilized unittest.mock (specifically patch and mock fixtures) to isolate the LongTermMemory class from its dependencies (like the LLM and rich.console).

Tests cover the following key functionalities (a sketch of the mocking approach appears after this list):

test_memory_initialization: Ensures the class initializes correctly with the expected attributes.

test_update_long_term_memory: Verifies the private update method by mocking the LLM's generate call. It asserts that the prompt sent to the LLM is structured correctly and that the long_term_memory attribute is updated with the mock response.

test_process_step: Tests the process_step logic for both pre_step=True (buffer creation) and pre_step=False (committing the summary to long-term memory), again mocking the LLM call.

test_format_long_term: A simple test to ensure the format_long_term method returns the memory string as expected.
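
To make the mocking approach concrete, here is a minimal sketch of what the `_update_long_term_memory` test could look like. This is an illustration, not the actual test file: the `LongTermMemory` constructor signature, the `memory.llm` attribute path, and the `mock_agent` fixture body are assumptions; only `_update_long_term_memory`, `long_term_memory`, and the LLM's `generate` call are named in this PR.

```python
from unittest.mock import MagicMock

import pytest

from mesa_llm.memory.lt_memory import LongTermMemory


@pytest.fixture
def mock_agent():
    # Placeholder agent; the real fixture's attributes are not shown in this PR.
    return MagicMock()


def test_update_long_term_memory(mock_agent):
    # Assumed constructor signature -- the actual one may differ.
    memory = LongTermMemory(agent=mock_agent)

    # Replace the LLM dependency so no real model call is made.
    # The `llm` attribute name is an assumption for this sketch.
    memory.llm = MagicMock()
    memory.llm.generate.return_value = "mock summary of recent events"

    memory._update_long_term_memory()

    # The LLM should have been prompted exactly once, and the attribute
    # should now hold the mocked response.
    memory.llm.generate.assert_called_once()
    assert memory.long_term_memory == "mock summary of recent events"
```

The key design point is that `generate` is replaced with a mock, so the test asserts on the prompt/response contract without ever calling a real model.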

Usage Examples

This PR adds development tests, not a new user-facing feature. The tests can be run by executing pytest from the root of the repository.
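
For example (the file path comes from the Implementation section above):

```shell
# Run the whole suite from the repository root
pytest

# Or run only the new LongTermMemory tests
pytest tests/memory/test_lt_memory.py
```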

@coderabbitai

coderabbitai bot commented Oct 31, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

@codecov

codecov bot commented Oct 31, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 86.27%. Comparing base (55be38c) to head (381eb26).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #27      +/-   ##
==========================================
+ Coverage   86.17%   86.27%   +0.09%     
==========================================
  Files          18       19       +1     
  Lines        1273     1311      +38     
==========================================
+ Hits         1097     1131      +34     
- Misses        176      180       +4     

@colinfrisch colinfrisch added the tests Add or improve tests for the repo label Oct 31, 2025
@colinfrisch colinfrisch self-requested a review October 31, 2025 18:17
@colinfrisch
Collaborator

Lots of good things, I'll review it in detail asap.
On a different note, have you tried mesa-llm to build models yourself? If so, do you have any feedback? It's a recent extension of Mesa, so we're trying to figure out what works for users and what doesn't :)

@sujalgawas
Contributor Author

I haven’t tried it yet, but I was thinking about recreating the Sugarscape Constant Growback Model from Mesa’s main project (in the Examples directory) to better understand how simulations work and how they can be integrated with mesa-llm. I’m still fairly new to open source, but the idea of using AI agents to create simulations and run tests seemed fascinating, so I started contributing.

If I do end up building sugarscape_g1mt, do you think it would make a good example? And would it make sense to open a PR to add it under mesa-llm/examples?

@colinfrisch
Collaborator

> If I do end up building sugarscape_g1mt, do you think it would make a good example? And would it make sense to open a PR to add it under mesa-llm/examples?

Definitely, it's a very interesting model! If you need anything, don't hesitate to ask :)

Collaborator

@colinfrisch colinfrisch left a comment

Code looks good, thanks for your work!

@colinfrisch colinfrisch merged commit 89801c8 into projectmesa:main Nov 1, 2025
14 checks passed