Testing Guide: A Comprehensive Explanation for Developers¶
This guide explains the testing framework used in this project, covering the what, why, and how of our testing approach. It's designed to help developers understand our testing strategy and how to effectively use and extend our test suite.
Table of Contents¶
- What Are Tests?
- Why Do We Test?
- Test Structure in This Project
- Test Configuration
- Test Types
- How Tests Work in This Project
- Fixtures (Test Setup)
- Mocking
- Test Cases
- Assertions
- Key Testing Patterns
- End-to-End Testing
- Security Testing
- Performance Testing
- Test Coverage
- How to Run Tests
- Best Practices
- Test-Driven Development
- Common Testing Scenarios
What Are Tests?¶
Tests are pieces of code that verify your application works correctly. They check that your code does what it's supposed to do by running it with specific inputs and checking the outputs match what you expect.
In this project, we use pytest, a popular Python testing framework that makes it easy to write and run tests.
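For example, a minimal pytest test (a standalone illustration, not code from this project) calls a function with known inputs and asserts on the result:

```python
# Minimal, hypothetical example of a pytest test.
# pytest collects any function whose name starts with "test_" and runs it.

def add(a: int, b: int) -> int:
    return a + b


def test_add():
    # If this assertion is false, pytest reports the test as failed.
    assert add(2, 3) == 5
```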
Why Do We Test?¶
- Catch bugs early: Tests help you find problems before they reach production.
- Ensure quality: Tests verify that your code meets requirements.
- Enable safe changes: Tests give you confidence to modify code without breaking existing functionality.
- Document behavior: Tests show how your code is expected to work.
- Support collaboration: Tests help team members understand what code does and how to use it.
Test Structure in This Project¶
Our project has a well-organized testing structure with different types of tests:
Test Configuration¶
The conftest.py file sets up the testing environment. It includes:
- Fixtures: Reusable components for tests (like database connections, test users, etc.)
- Database setup: Creates a test database for testing
- Client setup: Creates test API clients
Important fixtures include:

- test_db: Creates and tears down the test database
- db: Provides a database session that gets rolled back after each test
- client: Creates a FastAPI test client
- async_client: Creates an async client for async testing
```python
# Example from conftest.py
@pytest.fixture(scope="session")
def test_db():
    # Create test database
    if database_exists(settings.DATABASE_URL):
        drop_database(settings.DATABASE_URL)
    create_database(settings.DATABASE_URL)
    Base.metadata.create_all(bind=engine)
    yield
    drop_database(settings.DATABASE_URL)


@pytest.fixture
def client(db):
    def override_get_db():
        try:
            yield db
        finally:
            db.close()

    app.dependency_overrides[get_db] = override_get_db
    with TestClient(app) as c:
        yield c
```
Test Types¶
Unit Tests (tests/services/*)¶
These test individual services in isolation. For example:
- test_claude_service.py: Tests the Claude AI service
- test_openai_service.py: Tests the OpenAI service
- test_weather_service.py: Tests the weather service
Why: Unit tests ensure each component works correctly on its own before being integrated.
```python
# Example from test_claude_service.py
@pytest.mark.asyncio
async def test_travel_recommendations():
    service = ClaudeService()
    recommendations = await service.get_travel_recommendations(
        budget=2000,
        duration=7,
        departure_city="Sydney",
        preferences=["beach", "culture"]
    )
    assert len(recommendations) > 0
    for rec in recommendations:
        assert "destination" in rec
        assert "total_cost" in rec
        assert rec["total_cost"] <= 2000
```
API Tests (tests/api/*)¶
These test your API endpoints directly by making HTTP requests and checking responses:
- test_recommendations.py: Tests recommendation endpoints
- test_rate_limiting.py: Tests rate limiting functionality
Why: API tests verify that your endpoints handle requests correctly, return proper responses, and include appropriate error handling.
```python
# Example from test_recommendations.py
@pytest.mark.asyncio
async def test_get_recommendations(client, mock_app_state):
    """Test getting recommendations"""
    # Setup request data
    params = {
        "budget": 1000,
        "duration": 7,
        "departure_city": "New York",
        "preferences": ["culture", "food"],
    }

    # Make request
    response = client.get("/recommendations", params=params)

    # Assertions
    assert response.status_code == 200
    data = response.json()
    assert len(data) > 0
    assert data[0]["destination"] == "Paris"
    assert data[0]["country"] == "France"
    assert data[0]["total_cost"] == 1000
```
Integration Tests (tests/integration/*)¶
These test how multiple components work together:
- test_recommendations_flow.py: Tests the entire recommendation flow, from request to database saving
- test_ai_services.py: Tests AI services working together
Why: Integration tests ensure that different parts of your system interact correctly.
```python
# Example from test_recommendations_flow.py
@pytest.mark.asyncio
async def test_end_to_end_recommendation_flow(client, auth_headers, test_user, db, mock_travel_service):
    """
    Test the end-to-end flow:
    1. Get travel recommendations
    2. Save a trip based on recommendations
    3. Retrieve the saved trip
    4. Update the saved trip
    5. Delete the saved trip
    """
    # Step 1: Get travel recommendations
    recommendation_request = {
        "budget": 1500,
        "duration": 7,
        "departure_city": "New York",
        "max_travel_time": 10,
        "preferences": ["culture", "food"],
        "max_results": 5
    }

    response = client.post(
        "/api/v1/recommendations",
        headers=auth_headers,
        json=recommendation_request
    )

    # Assertions for recommendations
    assert response.status_code == 200
    # ... more assertions and steps ...
```
Performance Tests (tests/performance/*)¶
These test your application's speed and scalability:
- test_api_performance.py: Tests API response times
- test_load_performance.py: Tests how the system handles load
Why: Performance tests help catch performance regressions and ensure your application meets speed requirements.
```python
# Example from test_api_performance.py
@pytest.mark.asyncio
async def test_recommendations_response_time(self, auth_headers):
    """Test response time for recommendations endpoint"""
    # ... test setup ...

    # Measure response time
    start_time = time.time()
    response = await ac.post(
        "/api/v1/recommendations",
        headers=auth_headers,
        json=recommendation_request
    )
    end_time = time.time()

    # Assertions
    assert response.status_code == 200

    # Check response time (should be under 500ms with mocked service)
    response_time = (end_time - start_time) * 1000  # Convert to ms
    assert response_time < 500, f"Response time too slow: {response_time}ms"
```
Security Tests (tests/security/*)¶
These test your application's security measures:
- test_api_security.py: Tests authentication, authorization, and input validation
Why: Security tests help prevent vulnerabilities and ensure your application properly protects data and resources.
```python
# Example from test_api_security.py
@pytest.mark.asyncio
async def test_invalid_token(self):
    """Test using an invalid authentication token"""
    # Create an invalid token
    invalid_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJpbnZhbGlkQGV4YW1wbGUuY29tIiwiZXhwIjoxNjE2MTc2MDAwfQ.invalid_signature"
    headers = {"Authorization": f"Bearer {invalid_token}"}

    async with AsyncClient(app=app, base_url="http://test") as ac:
        response = await ac.get("/api/v1/users/me", headers=headers)

    # Should return 401 Unauthorized
    assert response.status_code == 401
    assert "Could not validate credentials" in response.json()["detail"]
```
How Tests Work in This Project¶
Fixtures (Test Setup)¶
Fixtures are special functions that set up test prerequisites. For example:
```python
@pytest.fixture
def test_user(db):
    """Create a test user and return it"""
    user = User(
        email="integration_test@example.com",
        name="Integration Test User",
        hashed_password=User.get_password_hash("password123"),
        email_verified=True
    )
    db.add(user)
    db.commit()
    db.refresh(user)
    return user
```
This fixture creates a test user that can be used in multiple tests.
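Any test can use it simply by naming the fixture as a parameter, and pytest injects it automatically. A minimal sketch (the assertions mirror the fixture above, but this exact test is not part of the suite):

```python
# Hypothetical usage: pytest sees the "test_user" parameter, runs the fixture
# above, and passes its return value into the test.
def test_new_user_is_verified(test_user):
    assert test_user.email == "integration_test@example.com"
    assert test_user.email_verified is True
```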
Mocking¶
Mocking replaces real external services with fake ones for testing. For example:
```python
@pytest.fixture
def mock_travel_service():
    """Mock the travel service to return predefined recommendations"""
    with patch('app.services.travel_services.TravelService') as mock_service:
        service_instance = mock_service.return_value
        service_instance.get_travel_recommendations.return_value = [
            {
                "destination": "Paris",
                "country": "France",
                "total_cost": 1200,
                # ...
            }
        ]
        yield service_instance
```
This creates a fake travel service that returns predefined data instead of making real API calls.
Test Cases¶
Test cases are functions that check specific behaviors:
```python
@pytest.mark.asyncio
async def test_get_recommendations(client, mock_app_state):
    """Test getting recommendations"""
    # Setup request data
    params = {
        "budget": 1000,
        "duration": 7,
        "departure_city": "New York",
        "preferences": ["culture", "food"],
    }

    # Make request
    response = client.get("/recommendations", params=params)

    # Assertions
    assert response.status_code == 200
    data = response.json()
    assert len(data) > 0
    assert data[0]["destination"] == "Paris"
    assert data[0]["country"] == "France"
    assert data[0]["total_cost"] == 1000
```
This test verifies that the recommendations endpoint returns the expected data.
Assertions¶
Assertions verify that your code behaves as expected:
```python
assert response.status_code == 200    # Checks status code
assert "destination" in data[0]       # Checks structure
assert data[0]["total_cost"] <= 2000  # Checks business logic
```
If any assertion fails, the test fails, indicating a problem.
Key Testing Patterns¶
End-to-End Testing¶
Our integration tests (like test_end_to_end_recommendation_flow) test entire user flows:
- Get recommendations
- Save a trip
- Retrieve the saved trip
- Update the trip
- Delete the trip
This ensures the entire application works together as expected.
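As a rough sketch, the later steps of such a flow typically look like the following. The /api/v1/trips routes and payload fields here are illustrative assumptions; see test_recommendations_flow.py for the actual endpoints.

```python
# Hedged sketch of steps 2-5; the /api/v1/trips routes and fields are
# assumptions for illustration, not confirmed project endpoints.
def save_and_manage_trip(client, auth_headers, recommendation):
    # Step 2: Save a trip based on a recommendation
    response = client.post("/api/v1/trips", headers=auth_headers, json=recommendation)
    assert response.status_code == 200
    trip_id = response.json()["id"]

    # Step 3: Retrieve the saved trip
    response = client.get(f"/api/v1/trips/{trip_id}", headers=auth_headers)
    assert response.status_code == 200

    # Step 4: Update the saved trip
    response = client.put(
        f"/api/v1/trips/{trip_id}", headers=auth_headers, json={"duration": 10}
    )
    assert response.status_code == 200

    # Step 5: Delete the saved trip
    response = client.delete(f"/api/v1/trips/{trip_id}", headers=auth_headers)
    assert response.status_code in (200, 204)
```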
Security Testing¶
Our security tests cover the following areas; one such check is sketched after this list:
- Authentication (missing, invalid, expired tokens)
- Authorization (admin-only endpoints, resource access)
- Input validation (protecting against invalid inputs)
- Protection against common attacks (SQL injection, XSS)
- Password strength requirements
- Rate limiting
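A minimal sketch of one such check, mirroring the AsyncClient pattern used in the suite's other async tests (this exact test is illustrative):

```python
# Hedged sketch: a protected endpoint should reject a request with no token.
@pytest.mark.asyncio
async def test_missing_token():
    async with AsyncClient(app=app, base_url="http://test") as ac:
        # No Authorization header at all
        response = await ac.get("/api/v1/users/me")
    assert response.status_code == 401
```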
Performance Testing¶
Our performance tests verify the following; a small concurrency sketch follows the list:
- Response times for individual endpoints
- Performance under concurrent load
- Database query performance
- Caching effectiveness
- Response time distribution
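A minimal concurrency sketch, assuming the same AsyncClient setup as the other async tests; the endpoint, request count, and time budget are illustrative assumptions:

```python
import asyncio
import time

# Hedged sketch: fire several requests concurrently and check they all
# succeed within a rough time budget. The numbers are illustrative only.
@pytest.mark.asyncio
async def test_concurrent_requests(auth_headers):
    async with AsyncClient(app=app, base_url="http://test") as ac:
        start = time.time()
        responses = await asyncio.gather(
            *[ac.get("/api/v1/users/me", headers=auth_headers) for _ in range(20)]
        )
        elapsed_ms = (time.time() - start) * 1000
    assert all(r.status_code == 200 for r in responses)
    assert elapsed_ms < 2000, f"Concurrent requests too slow: {elapsed_ms}ms"
```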
Test Coverage¶
Our tests cover different levels:
- Unit: Individual services and functions
- Integration: Component interactions
- API: Endpoint behavior
- End-to-End: Full user flows
- Performance: Response times and scaling
- Security: Protection measures
How to Run Tests¶
You can run tests using the following commands:
```bash
# Install test dependencies
pip install -r tests/requirements-test.txt

# Run all tests
pytest

# Run specific test files or directories
pytest tests/api/
pytest tests/test_recommendations.py

# Run with verbose output
pytest -v

# Run tests matching a keyword
pytest -k "recommendations"

# Run tests with coverage report
pytest --cov=app

# Run tests in parallel
pytest -n auto
```
Best Practices¶
Our tests follow these best practices:
- Isolation: Each test runs in isolation with its own setup and teardown
- Clear Purpose: Tests have descriptive names and docstrings
- Fixtures: Common setup code is reused via fixtures
- Mocking: External services are mocked for reliable testing
- Specific Assertions: Tests make specific assertions about behavior
- Clean Teardown: Resources are properly cleaned up after tests
Test-Driven Development¶
We encourage a test-driven development (TDD) approach:
- Write a failing test for new functionality
- Implement the functionality to make the test pass
- Refactor the code while keeping tests passing
TDD helps ensure that all code is covered by tests and that tests accurately reflect requirements.
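As a toy illustration of the cycle (the function and its behaviour are invented for this example):

```python
# Step 1 (red): write the test first; it fails until trip_nights exists.
def test_trip_nights():
    assert trip_nights(duration_days=7) == 6


# Step 2 (green): implement just enough to make the test pass.
def trip_nights(duration_days: int) -> int:
    return duration_days - 1

# Step 3 (refactor): clean up the implementation while the test keeps passing.
```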
Common Testing Scenarios¶
Testing New Features¶
When implementing a new feature:
- Write tests that define the expected behavior
- Implement the feature until all tests pass
- Refactor as needed, ensuring tests continue to pass
Testing Bug Fixes¶
When fixing a bug (a small sketch follows this list):
- Write a test that reproduces the bug (it should fail)
- Fix the bug so the test passes
- Ensure all other tests still pass
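A toy sketch of a regression test; the helper and the bug are invented purely to show the shape of the workflow:

```python
# Hypothetical regression test: written first to reproduce a bug where numeric
# strings were rejected, then kept in the suite to guard the fix.
def parse_budget(value) -> int:
    # The fix: accept numeric strings as well as ints.
    return int(value)


def test_parse_budget_accepts_numeric_strings():
    assert parse_budget("2000") == 2000
```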
Testing API Endpoints¶
When testing API endpoints (a small sketch follows this list):
- Test happy paths (successful requests)
- Test with invalid inputs
- Test authentication and authorization
- Test error handling
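For instance, an invalid-input check might look like this; the negative-budget rule and the 422 response are assumptions about the desired validation behaviour:

```python
# Hedged sketch: invalid input should be rejected with a validation error
# (FastAPI returns 422 for request-body validation failures).
def test_recommendations_reject_negative_budget(client, auth_headers):
    response = client.post(
        "/api/v1/recommendations",
        headers=auth_headers,
        json={"budget": -100, "duration": 7, "departure_city": "New York"},
    )
    assert response.status_code == 422
```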
Testing External Services¶
When testing code that uses external services (a small sketch follows this list):
- Use mocks to simulate the service
- Test with different response scenarios (success, error, timeout)
- Consider integration tests with real services in specific environments
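An error-scenario sketch, reusing the patch target from the mocking example above; the TimeoutError and the expected 503 response are assumptions for illustration:

```python
from unittest.mock import patch

# Hedged sketch: make the mocked travel service raise, then verify the API
# handles the failure gracefully. The 503 status is an assumed behaviour.
@pytest.fixture
def failing_travel_service():
    with patch('app.services.travel_services.TravelService') as mock_service:
        instance = mock_service.return_value
        instance.get_travel_recommendations.side_effect = TimeoutError("upstream timeout")
        yield instance


def test_recommendations_when_service_times_out(client, auth_headers, failing_travel_service):
    response = client.post(
        "/api/v1/recommendations",
        headers=auth_headers,
        json={"budget": 1000, "duration": 7, "departure_city": "New York"},
    )
    assert response.status_code == 503
```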
This comprehensive testing approach helps ensure our application is reliable, secure, and performs well. When you modify code, our tests will catch regressions, allowing you to develop with confidence.