This document provides an overview of the testing strategy and instructions for running different types of tests for the Cognitive Mesh Convener Backend API.
- Test Types
- Prerequisites
- Running Tests
- Test Descriptions
- Performance Testing
- Concurrency Testing
- Rate Limiting Tests
- Test Data
- Continuous Integration
- Best Practices
- Troubleshooting
## Test Types

- Unit Tests: Test individual components in isolation
- Integration Tests: Test interactions between components
- Performance Tests: Test API performance under load
- Concurrency Tests: Test handling of concurrent requests
- Rate Limiting Tests: Test API rate limiting functionality
## Prerequisites

- Node.js 16+
- npm or yarn
- k6 (for performance testing)
- Access to a running instance of the API
## Running Tests

Install dependencies:
npm install
Run all unit and integration tests:
npm test
Run all performance tests:
npm run test:performance
Run specific test scenarios:
# Large payload tests
k6 run tests/performance/large-payloads.test.js
# Concurrency tests
k6 run tests/performance/concurrency.test.js
# Rate limiting tests
k6 run tests/performance/rate-limiting.test.js
Configure tests using environment variables:
export BASE_URL=http://localhost:3000/v1
export AUTH_TOKEN=your-auth-token
export RATE_LIMIT=100  # requests per minute

## Test Descriptions

### Unit Tests

- Location: __tests__/unit/
- Purpose: Test individual functions and components in isolation
- Coverage:
- Input validation
- Business logic
- Error handling
- Utility functions
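As an illustration of the input-validation coverage, a unit-testable validator might look like the following sketch. The function name, field names, and rules here are hypothetical assumptions, not taken from the actual codebase:

```javascript
// Hypothetical request validator of the kind covered by the unit tests.
// The field names and validation rules are illustrative assumptions.
function validateSessionRequest(body) {
  if (!body || typeof body.topic !== 'string' || body.topic.trim() === '') {
    return { valid: false, error: 'topic is required' };
  }
  if (body.maxParticipants !== undefined &&
      (!Number.isInteger(body.maxParticipants) || body.maxParticipants < 1)) {
    return { valid: false, error: 'maxParticipants must be a positive integer' };
  }
  return { valid: true };
}
```

A pure function like this is easy to test in isolation: no network, no database, just inputs and return values.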
### Integration Tests

- Location: __tests__/integration/
- Purpose: Test API endpoints and their interactions
- Coverage:
- API endpoints
- Database interactions
- External service integrations
- Authentication and authorization
## Performance Testing

### Large Payload Tests

Tests the API's ability to handle requests with large payloads.
Scenarios:
- Small payloads (10KB)
- Medium payloads (100KB)
- Large payloads (1MB+)
Metrics Tracked:
- Response times
- Success rates
- Error rates
- Memory usage
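The payload tiers above can be produced with a small helper like the sketch below. The envelope shape (`kind`, `data` fields) is an illustrative assumption, not the real API schema:

```javascript
// Build a JSON payload of roughly the requested size by padding a filler field.
// The field names are illustrative assumptions, not the actual API schema.
function makePayload(targetBytes) {
  return JSON.stringify({ kind: 'load-test', data: 'x'.repeat(targetBytes) });
}

const small = makePayload(10 * 1024);      // ~10KB
const medium = makePayload(100 * 1024);    // ~100KB
const large = makePayload(1024 * 1024);    // ~1MB
```

Generating payloads by size rather than hard-coding fixtures keeps the three tiers consistent and easy to adjust.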
## Concurrency Testing

Tests how the API handles multiple simultaneous requests.
Scenarios:
- Low concurrency (10 concurrent users)
- Medium concurrency (50 concurrent users)
- High concurrency (100+ concurrent users)
Metrics Tracked:
- Requests per second
- Error rates
- Response time percentiles
- System resource usage
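In k6, the concurrency tiers above map naturally onto staged virtual-user ramps. A sketch of the `options` block (the durations and threshold value are assumptions to tune against your own baseline):

```javascript
// k6 options ramping through the three concurrency tiers.
// Durations and the p(95) threshold are illustrative assumptions.
export const options = {
  stages: [
    { duration: '1m', target: 10 },   // low concurrency
    { duration: '2m', target: 50 },   // medium concurrency
    { duration: '2m', target: 100 },  // high concurrency
    { duration: '1m', target: 0 },    // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // fail the run if p95 exceeds 500ms
  },
};
```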
## Rate Limiting Tests

Tests the API's rate limiting functionality.
Scenarios:
- Requests within rate limit
- Requests exceeding rate limit
- Rate limit reset behavior
- Multiple clients with different rate limits
Metrics Tracked:
- Rate limit headers
- 429 responses
- Retry-after headers
- Successful vs. rate-limited requests
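A test helper for interpreting these signals might look like the following. The header names (`Retry-After`, `X-RateLimit-Remaining`) follow common HTTP conventions; whether this API uses exactly these names is an assumption:

```javascript
// Classify a response as rate-limited or not using conventional headers.
// Header names are common conventions, assumed rather than confirmed for this API.
function classifyRateLimit(status, headers) {
  if (status === 429) {
    return {
      limited: true,
      retryAfterSeconds: Number(headers['retry-after'] || 0),
    };
  }
  return {
    limited: false,
    remaining: Number(headers['x-ratelimit-remaining'] ?? -1),
  };
}
```

A helper like this lets assertions about 429 responses and retry hints live in one place across all rate-limiting scenarios.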
## Test Data

Test data is generated dynamically for most tests. For specific test cases, you can find fixed test data in the test-data/ directory.
## Continuous Integration

Tests run automatically on pull requests and on merges to the main branch. The CI pipeline includes:
- Linting
- Unit tests
- Integration tests
- Performance tests (on schedule)
- Security scanning
## Best Practices

- Isolate Tests: Each test should be independent and not rely on state left behind by other tests.
- Clean Up: Always clean up test data after tests complete.
- Use Mocks: Mock external services to make tests more reliable and faster.
- Test Edge Cases: Include tests for edge cases and error conditions.
- Performance Baselines: Establish performance baselines and monitor for regressions.
- Documentation: Keep test documentation up to date.
- Security: Include security tests in your test suite.
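As an example of the "Use Mocks" practice above, a hand-rolled stub for an external service can record interactions without performing real I/O. The service and method names here are hypothetical:

```javascript
// Minimal hand-rolled stub standing in for an external notification service.
// Records every call so tests can assert on interactions without real I/O.
// The service name and method signature are illustrative assumptions.
function createNotificationServiceStub() {
  const calls = [];
  return {
    calls,
    async send(recipient, message) {
      calls.push({ recipient, message }); // recorded synchronously
      return { delivered: true };         // canned success response
    },
  };
}
```

Injecting a stub like this in place of the real client makes tests deterministic and fast, and lets them assert exactly what the code under test sent.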
## Troubleshooting

- Tests Failing:
  - Check the test logs for specific error messages
  - Verify that all services are running
  - Check for network connectivity issues
- Performance Test Failures:
  - Check server resource usage
  - Verify network latency
  - Check for external service dependencies
- Rate Limiting Issues:
  - Verify rate limit headers in responses
  - Check for proper 429 responses
  - Verify rate limit reset behavior
## Getting Help

If you encounter issues not covered in this guide, please:
- Check the project's issue tracker
- Review the API documentation
- Contact the development team
## License

This testing documentation is part of the Cognitive Mesh project and is licensed under the MIT License.