Python Testing & Debugging Cheat Sheet
Quick Reference Card
| Operation | Syntax | Notes |
|---|---|---|
| Test function naming | def test_something(): | Must start with test_ |
| Assert equal | assert a == b | Simple assertion |
| Run pytest | pytest | Run all tests |
| Mock object | Mock() | Create mock |
| Patch function | @patch('module.func') | Replace function |
| Fixture | @pytest.fixture | Setup/teardown |
| Breakpoint | breakpoint() | Start debugger (Python 3.7+) |
| Debug command | python -m pdb script.py | Debug script |
Note: For Python versions older than 3.7, use import pdb; pdb.set_trace() to start the debugger.
Table of Contents
- Testing Basics
- Testing Frameworks (pytest preferred)
- pytest Framework
- Mocking and Patching
- Test Coverage
- Debugging with pdb
- Debugging Techniques
- Best Practices
Testing Basics
Why Test?
# Testing helps:
# - Catch bugs early
# - Document expected behavior
# - Enable refactoring with confidence
# - Improve code design
# Example: Function to test
def add(a, b):
"""Add two numbers"""
return a + b
# Manual testing (bad)
print(add(2, 3)) # 5
print(add(-1, 1)) # 0
# Automated testing (good)
def test_add():
assert add(2, 3) == 5
assert add(-1, 1) == 0
assert add(0, 0) == 0
Types of Tests
# Unit tests - Test individual functions/methods
def test_calculate_discount():
assert calculate_discount(100, 10) == 90
# Integration tests - Test components together
def test_user_registration_flow():
user = register_user('alice@example.com')
assert user.email == 'alice@example.com'
assert user in database.users
# End-to-end tests - Test entire system
def test_complete_purchase_flow():
# Login -> Browse -> Add to cart -> Checkout
pass
Test Structure (AAA Pattern)
def test_user_creation():
# Arrange - Setup test data
username = 'alice'
email = 'alice@example.com'
# Act - Execute the function
user = create_user(username, email)
# Assert - Verify results
assert user.username == username
assert user.email == email
assert user.is_active is True
Testing Frameworks (pytest preferred)
The unittest library is part of the standard library and is fully supported, but pytest is recommended for most projects because of its simpler syntax, richer fixture system, powerful parametrization, and broad plugin ecosystem. You can run pytest alongside existing unittest tests during migration.
Learning Objectives
- Understand why pytest is preferred in modern Python projects.
- Learn how to write fixtures for setup/teardown and how to parametrize tests.
- Practice converting unittest.TestCase tests to pytest style incrementally.
Quick conversion example — unittest to pytest:
# unittest style
import unittest
class TestMath(unittest.TestCase):
def test_add(self):
self.assertEqual(add(2, 3), 5)
# pytest style (preferred)
def test_add():
assert add(2, 3) == 5
Why prefer pytest?
- Use plain assert statements, with no boilerplate TestCase classes or self in tests.
- Fixtures provide flexible setup/teardown with scoped lifetimes and composition.
- Parametrization reduces repetitive tests and improves coverage of edge cases.
- Excellent plugin ecosystem (pytest-cov, pytest-mock, pytest-asyncio, pytest-xdist).
Migration tips:
- Run pytest as-is; it will discover both pytest and unittest tests.
- Convert unittest.TestCase classes incrementally to simple functions and pytest fixtures.
- Replace setUp/tearDown with fixtures that use yield for teardown, as in the sketch below.
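A minimal sketch of that setUp/tearDown conversion, assuming a hypothetical Resource class with open(), close(), and an is_open flag:
# unittest style: lifecycle methods on a TestCase
import unittest

class TestResource(unittest.TestCase):
    def setUp(self):
        self.resource = Resource()   # hypothetical object that needs cleanup
        self.resource.open()

    def tearDown(self):
        self.resource.close()

    def test_is_open(self):
        self.assertTrue(self.resource.is_open)

# pytest style: the same lifecycle as a fixture with yield
import pytest

@pytest.fixture
def resource():
    r = Resource()
    r.open()
    yield r       # test runs here
    r.close()     # teardown runs after the test finishes

def test_is_open(resource):
    assert resource.is_open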
pytest Framework
Basic pytest Test
# test_math.py
def add(a, b):
return a + b
def test_add():
"""Simple pytest test"""
assert add(2, 3) == 5
assert add(-1, 1) == 0
assert add(0, 0) == 0
# Run with: pytest test_math.py
pytest Assertions
def test_assertions():
# Simple assertions
assert 1 == 1
assert 'hello' in 'hello world'
assert [1, 2, 3] == [1, 2, 3]
# With custom messages
x = 5
assert x > 0, f'x should be positive, got {x}'
def test_exceptions():
import pytest
# Test that exception is raised
with pytest.raises(ValueError):
int('not a number')
# Test exception message
with pytest.raises(ValueError, match='invalid literal'):
int('not a number')
# Capture exception for inspection
with pytest.raises(ZeroDivisionError) as exc_info:
1 / 0
assert 'division by zero' in str(exc_info.value)
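Floating-point equality is a common source of brittle assertions; pytest.approx (part of pytest itself) compares within a tolerance. A short sketch:
import pytest

def test_float_comparison():
    # direct == comparison would fail because of binary rounding
    assert 0.1 + 0.2 == pytest.approx(0.3)
    # tolerance can be widened with rel= (relative) or abs= (absolute)
    assert 2.0 == pytest.approx(2.1, abs=0.2)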
pytest Fixtures
import pytest
@pytest.fixture
def sample_data():
"""Fixture provides test data"""
return [1, 2, 3, 4, 5]
def test_sum(sample_data):
"""Test uses fixture"""
assert sum(sample_data) == 15
def test_len(sample_data):
"""Another test using same fixture"""
assert len(sample_data) == 5
# Fixture with setup and teardown
@pytest.fixture
def database():
"""Setup and teardown database"""
db = Database()
db.connect()
yield db # Test runs here
db.disconnect() # Cleanup
def test_query(database):
result = database.query('SELECT * FROM users')
assert len(result) > 0
Fixture Scopes
import pytest
@pytest.fixture(scope='function') # Default - runs for each test
def function_fixture():
return 'function'
@pytest.fixture(scope='class') # Once per test class
def class_fixture():
return 'class'
@pytest.fixture(scope='module') # Once per module
def module_fixture():
return 'module'
@pytest.fixture(scope='session') # Once per test session
def session_fixture():
return 'session'
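Fixtures shared by several test modules conventionally live in a conftest.py file, where pytest discovers them automatically for the whole directory. A minimal sketch, assuming a hypothetical app_config fixture:
# conftest.py (test files below this directory can use these fixtures without importing them)
import pytest

@pytest.fixture(scope='session')
def app_config():
    return {'debug': True, 'db_url': 'sqlite://'}

# test_settings.py
def test_debug_enabled(app_config):
    assert app_config['debug'] is True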
Parametrized Tests
import pytest
@pytest.mark.parametrize('input,expected', [
(2, 4),
(3, 9),
(4, 16),
(5, 25),
])
def test_square(input, expected):
"""Test runs for each parameter set"""
assert input ** 2 == expected
@pytest.mark.parametrize('a,b,expected', [
(2, 3, 5),
(0, 0, 0),
(-1, 1, 0),
(10, -5, 5),
])
def test_add(a, b, expected):
assert a + b == expected
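Individual parameter sets can also carry readable ids or marks via pytest.param; a brief sketch:
import pytest

@pytest.mark.parametrize('a,b,expected', [
    pytest.param(2, 3, 5, id='positive'),
    pytest.param(-1, -1, -2, id='negative'),
    pytest.param(1, 1, 3, id='known-bad',
                 marks=pytest.mark.xfail(reason='deliberately wrong')),
])
def test_add_labelled(a, b, expected):
    assert a + b == expected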
pytest Marks
import sys
import time

import pytest
@pytest.mark.slow
def test_slow_operation():
"""Mark test as slow"""
time.sleep(5)
assert True
@pytest.mark.integration
def test_database_integration():
"""Mark test as integration test"""
pass
@pytest.mark.skip(reason='Not implemented')
def test_future_feature():
pass
@pytest.mark.skipif(sys.version_info < (3, 8), reason='Requires Python 3.8+')
def test_new_feature():
pass
@pytest.mark.xfail(reason='Known bug')
def test_known_bug():
"""Test expected to fail"""
assert False
# Run specific marks:
# pytest -m slow
# pytest -m "not slow"
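Recent pytest versions warn about unregistered marks, so custom marks like slow and integration are usually declared in project configuration; one way to do that:
# pytest.ini (or the [tool.pytest.ini_options] table in pyproject.toml)
[pytest]
markers =
    slow: marks tests as slow
    integration: marks tests that talk to external services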
Mocking and Patching
Why Mock?
# Mock external dependencies to:
# - Test in isolation
# - Avoid slow operations (API calls, database queries)
# - Simulate error conditions
# - Make tests deterministic
# Example: Function with external dependency
import requests
def get_user_data(user_id):
"""Fetch user from API"""
response = requests.get(f'https://api.example.com/users/{user_id}')
return response.json()
# Problem: Test requires network and external API
# Solution: Mock the API call
Mock Basics
from unittest.mock import Mock
# Create mock object
mock_obj = Mock()
# Mock has any attribute/method you access
mock_obj.some_method() # Works
mock_obj.some_attribute # Works
# Configure return value
mock_obj.some_method.return_value = 42
result = mock_obj.some_method()
assert result == 42
# Check if method was called
mock_obj.some_method()
assert mock_obj.some_method.called
assert mock_obj.some_method.call_count == 1
# Check call arguments
mock_obj.some_method(1, 2, key='value')
mock_obj.some_method.assert_called_with(1, 2, key='value')
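Because a bare Mock accepts any attribute, a typo in a test passes silently; passing spec= restricts the mock to the real object's interface. A small sketch, assuming a hypothetical PaymentGateway class:
from unittest.mock import Mock

class PaymentGateway:
    def charge(self, amount):
        ...

gateway = Mock(spec=PaymentGateway)
gateway.charge(100)      # allowed: charge exists on the spec
# gateway.chargee(100)   # AttributeError: misspelled names are caught immediately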
Patching Functions
from unittest.mock import Mock, patch
import requests
def get_user_name(user_id):
"""Get username from API"""
response = requests.get(f'https://api.example.com/users/{user_id}')
return response.json()['name']
# Patch with decorator
@patch('requests.get')
def test_get_user_name(mock_get):
"""Test with mocked API call"""
# Configure mock response
mock_response = Mock()
mock_response.json.return_value = {'name': 'Alice'}
mock_get.return_value = mock_response
# Call function
name = get_user_name(123)
# Assertions
assert name == 'Alice'
mock_get.assert_called_once_with('https://api.example.com/users/123')
# Patch with context manager
def test_get_user_name_context():
with patch('requests.get') as mock_get:
mock_response = Mock()
mock_response.json.return_value = {'name': 'Bob'}
mock_get.return_value = mock_response
name = get_user_name(456)
assert name == 'Bob'
Patching Methods
from unittest.mock import patch
class Database:
def get_user(self, user_id):
# Expensive database query
return {'id': user_id, 'name': 'Alice'}
def get_user_email(user_id):
db = Database()
user = db.get_user(user_id)
return f"{user['name'].lower()}@example.com"
@patch.object(Database, 'get_user')
def test_get_user_email(mock_get_user):
"""Mock database method"""
mock_get_user.return_value = {'id': 1, 'name': 'Bob'}
email = get_user_email(1)
assert email == 'bob@example.com'
mock_get_user.assert_called_once_with(1)
Mock Side Effects
from unittest.mock import Mock
# Return different values on successive calls
mock = Mock()
mock.side_effect = [1, 2, 3]
assert mock() == 1
assert mock() == 2
assert mock() == 3
# Raise exceptions
mock = Mock()
mock.side_effect = ValueError('Invalid input')
try:
mock()
except ValueError as e:
assert str(e) == 'Invalid input'
# Use function for complex behavior
def custom_behavior(arg):
if arg < 0:
raise ValueError('Negative not allowed')
return arg * 2
mock = Mock()
mock.side_effect = custom_behavior
assert mock(5) == 10
pytest Mocking
import pytest
from unittest.mock import Mock, patch
def test_with_mocker(mocker):
"""pytest-mock provides mocker fixture"""
# Patch with mocker
mock_get = mocker.patch('requests.get')
mock_response = Mock()
mock_response.json.return_value = {'name': 'Alice'}
mock_get.return_value = mock_response
# Test your code
result = get_user_name(123)
assert result == 'Alice'
Test Coverage
Measuring Coverage
# Install coverage tool
# pip install coverage
# Run tests with coverage
# coverage run -m pytest
# Generate report
# coverage report
# Generate HTML report
# coverage html
# open htmlcov/index.html
# Example output:
# Name Stmts Miss Cover
# -----------------------------------------
# myapp.py 20 4 80%
# myapp_test.py 15 0 100%
# -----------------------------------------
# TOTAL 35 4 89%
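If the pytest-cov plugin mentioned earlier is installed, the same measurement can be done in a single command; a typical invocation, assuming the code lives in src/:
# pip install pytest-cov
# pytest --cov=src --cov-report=term-missing   # show uncovered line numbers in the terminal
# pytest --cov=src --cov-report=html           # write an HTML report to htmlcov/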
Coverage Configuration
# .coveragerc file
[run]
source = src
omit =
*/tests/*
*/venv/*
*/__pycache__/*
[report]
precision = 2
exclude_lines =
pragma: no cover
def __repr__
raise NotImplementedError
if __name__ == .__main__.:
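coverage.py can also read the same settings from pyproject.toml, which avoids an extra dotfile (on older Python versions this needs the coverage[toml] extra); an equivalent sketch:
# pyproject.toml
[tool.coverage.run]
source = ["src"]
omit = ["*/tests/*", "*/venv/*", "*/__pycache__/*"]

[tool.coverage.report]
precision = 2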
Debugging with pdb
Starting Debugger
# Method 1: Set breakpoint in code
import pdb
def buggy_function(x):
result = x * 2
pdb.set_trace() # Execution pauses here
return result + 10
# Method 2: Run script with debugger
# python -m pdb script.py
# Method 3: Python 3.7+ built-in
def another_function(x):
result = x * 2
breakpoint() # Built-in debugger
return result + 10
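The built-in breakpoint() call respects the PYTHONBREAKPOINT environment variable, which makes it easy to disable or swap debuggers without editing code:
# PYTHONBREAKPOINT=0 python script.py               # ignore every breakpoint() call
# PYTHONBREAKPOINT=ipdb.set_trace python script.py  # route to another debugger, if installed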
pdb Commands
# Common pdb commands:
# h (help) - Show help
# l (list) - Show code around current line
# n (next) - Execute next line
# s (step) - Step into function
# c (continue) - Continue execution
# r (return) - Continue until function returns
# p variable - Print variable value
# pp variable - Pretty-print variable
# a (args) - Show function arguments
# w (where) - Show stack trace
# u (up) - Move up stack frame
# d (down) - Move down stack frame
# q (quit) - Quit debugger
# Example debug session:
def calculate(x, y):
breakpoint()
result = x + y
result = result * 2
return result
# Run and interact:
calculate(5, 3)
# > (Pdb) p x
# 5
# > (Pdb) p y
# 3
# > (Pdb) n
# > (Pdb) p result
# 8
# > (Pdb) c
# 16
Conditional Breakpoints
import pdb
def process_items(items):
for i, item in enumerate(items):
# Only break on specific condition
if item < 0:
pdb.set_trace()
process(item)
# Or use pdb's conditional breakpoint
# (Pdb) b 10, item < 0
Post-Mortem Debugging
import pdb
def buggy_code():
x = 10
y = 0
return x / y # Will crash
try:
buggy_code()
except Exception:
pdb.post_mortem() # Debug at point of exception
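In an interactive session, pdb.pm() re-opens the most recent traceback, which is handy when an exception has already escaped; a short sketch assuming a hypothetical buggy_module:
# >>> import buggy_module
# >>> buggy_module.buggy_code()    # raises ZeroDivisionError
# >>> import pdb; pdb.pm()         # drop into the debugger at the point of that exception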
Debugging Techniques
Print Debugging
# Simple but effective
def calculate_discount(price, percent):
print(f'DEBUG: price={price}, percent={percent}') # Debug print
discount = price * (percent / 100)
print(f'DEBUG: discount={discount}')
final_price = price - discount
print(f'DEBUG: final_price={final_price}')
return final_price
Logging for Debugging
import logging
# Configure logging
logging.basicConfig(
level=logging.DEBUG,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
def process_data(data):
logging.debug(f'Processing data: {data}')
try:
result = complex_operation(data)
logging.debug(f'Operation successful: {result}')
return result
except Exception as e:
logging.error(f'Operation failed: {e}', exc_info=True)
raise
Assertions for Debugging
def calculate_average(numbers):
"""Calculate average with debug assertions"""
assert isinstance(numbers, list), 'numbers must be a list'
assert len(numbers) > 0, 'numbers cannot be empty'
total = sum(numbers)
assert isinstance(total, (int, float)), 'sum should be numeric'
average = total / len(numbers)
assert average >= min(numbers), 'average should be >= minimum'
assert average <= max(numbers), 'average should be <= maximum'
return average
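Note that Python strips assert statements when run with the -O flag, so assertions are only a debugging aid and real validation needs explicit checks; a quick illustration:
# python -O script.py   # all assert statements are skipped under optimization
def withdraw(balance, amount):
    assert amount > 0                  # debug-time sanity check only
    if amount > balance:               # real validation must raise explicitly
        raise ValueError('insufficient funds')
    return balance - amount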
Debugging with repr
class User:
def __init__(self, name, email):
self.name = name
self.email = email
def __repr__(self):
"""Helpful debug representation"""
return f'User(name={self.name!r}, email={self.email!r})'
# When debugging
user = User('Alice', 'alice@example.com')
print(user) # User(name='Alice', email='alice@example.com')
Timing Code
import time
def debug_timing(func):
"""Decorator to time function execution"""
def wrapper(*args, **kwargs):
start = time.time()
result = func(*args, **kwargs)
end = time.time()
print(f'{func.__name__} took {end - start:.4f} seconds')
return result
return wrapper
@debug_timing
def slow_function():
time.sleep(1)
return 'done'
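For short durations, time.perf_counter() gives higher resolution than time.time(), and the standard-library timeit module averages over many runs; a brief sketch:
import time
import timeit

start = time.perf_counter()
sorted(range(100_000))
print(f'one run: {time.perf_counter() - start:.6f} seconds')

# timeit repeats the statement to smooth out noise
total = timeit.timeit('sorted(range(1000))', number=10_000)
print(f'average: {total / 10_000:.8f} seconds per call')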
Best Practices
Write Testable Code
# Bad: Hard to test (hidden dependencies)
def process_user():
db = Database() # Hidden dependency
user = db.get_user(1)
return user.name
# Good: Easy to test (dependency injection)
def process_user(database):
user = database.get_user(1)
return user.name
# Can easily test with mock database
def test_process_user():
    mock_db = Mock()
    mock_user = Mock()
    mock_user.name = 'Alice'  # set after creation; Mock(name=...) names the mock itself, not a .name attribute
    mock_db.get_user.return_value = mock_user
    result = process_user(mock_db)
    assert result == 'Alice'
Test One Thing Per Test
# Bad: Testing multiple things
def test_user_operations():
user = create_user('alice')
assert user.name == 'alice'
user.update_email('alice@example.com')
assert user.email == 'alice@example.com'
user.delete()
assert user.is_deleted
# Good: Separate tests
def test_user_creation():
user = create_user('alice')
assert user.name == 'alice'
def test_user_email_update():
user = create_user('alice')
user.update_email('alice@example.com')
assert user.email == 'alice@example.com'
def test_user_deletion():
user = create_user('alice')
user.delete()
assert user.is_deleted
Use Descriptive Test Names
# Bad
def test_user():
pass
def test_1():
pass
# Good
def test_create_user_with_valid_email():
pass
def test_create_user_raises_error_for_invalid_email():
pass
def test_user_can_update_password():
pass
Don’t Test Implementation Details
# Bad: Testing implementation
def test_sort_uses_quicksort():
# Don't test internal algorithm
pass
# Good: Test behavior
def test_sort_returns_ordered_list():
result = sort([3, 1, 2])
assert result == [1, 2, 3]
Keep Tests Fast
# Use mocks to avoid slow operations
@patch('requests.get')
def test_api_call(mock_get):
mock_get.return_value = Mock(status_code=200)
# Fast test - no real API call
result = fetch_data()
assert result.status_code == 200
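A large suite can also be sped up by running tests in parallel with the pytest-xdist plugin listed earlier:
# pip install pytest-xdist
# pytest -n auto    # spread tests across all available CPU cores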
Clean Up After Tests
import pytest
import tempfile
import os
@pytest.fixture
def temp_file():
"""Create and cleanup temporary file"""
fd, path = tempfile.mkstemp()
yield path
os.close(fd)
os.unlink(path) # Cleanup
def test_file_operations(temp_file):
# Use temp_file
# Automatically cleaned up after test
pass
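For file-based tests, pytest's built-in tmp_path fixture hands each test a fresh pathlib.Path and removes old temporary directories on its own, which often replaces hand-rolled tempfile fixtures:
def test_write_and_read(tmp_path):
    target = tmp_path / 'output.txt'
    target.write_text('hello')
    assert target.read_text() == 'hello'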
See Also
- Error Handling Cheat Sheet - Exception handling for tests
- Decorators Cheat Sheet - Decorators for test fixtures
- File Operations Cheat Sheet - Testing file operations