The Complete Guide to Logging for Python Developers
Image by Author

 

Introduction

 
Most Python developers treat logging as an afterthought. They throw around print() statements during development, maybe switch to basic logging later, and assume that's enough. But when issues arise in production, they learn they're missing the context needed to diagnose problems efficiently.

Proper logging practices give you visibility into application behavior, performance patterns, and error conditions. With the right approach, you can trace user actions, identify bottlenecks, and debug issues without reproducing them locally. Good logging turns debugging from guesswork into systematic problem-solving.

This article covers the essential logging patterns that Python developers can use. You'll learn how to structure log messages for searchability, handle exceptions without losing context, and configure logging for different environments. We'll start with the basics and work our way up to more advanced logging techniques that you can use in projects right away. We will be using only the logging module.

You can find the code on GitHub.

 

Setting Up Your First Logger

 
Instead of jumping straight into complex configurations, let us understand what a logger actually does. We'll create a basic logger that writes to both the console and a file.
 

import logging

logger = logging.getLogger('my_app')
logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

logger.addHandler(console_handler)
logger.addHandler(file_handler)

logger.debug('This is a debug message')
logger.info('Application started')
logger.warning('Disk space running low')
logger.error('Failed to connect to database')
logger.critical('System shutting down')

 

Here's what each piece of the code does.

The getLogger() function creates a named logger instance. Think of it as creating a channel for your logs. The name 'my_app' helps you identify where logs come from in larger applications.

We set the logger level to DEBUG, which means it will process all messages. Then we create two handlers: one for console output and one for file output. Handlers control where logs go.

The console handler only shows INFO level and above, while the file handler captures everything, including DEBUG messages. This is useful because you want detailed logs in files but cleaner output on screen.

The formatter determines how your log messages look. The format string uses placeholders like %(asctime)s for the timestamp and %(levelname)s for severity.
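
The Formatter supports many more LogRecord attributes than the ones above. Here is a small sketch showing a few commonly useful ones (all attribute names below come from the standard library):

import logging

# %(funcName)s and %(lineno)d are standard LogRecord attributes,
# handy for pinpointing where a message came from
detailed_formatter = logging.Formatter(
    '%(asctime)s | %(name)s | %(levelname)s | %(funcName)s:%(lineno)d | %(message)s'
)

demo_handler = logging.StreamHandler()
demo_handler.setFormatter(detailed_formatter)

demo_logger = logging.getLogger('format_demo')
demo_logger.addHandler(demo_handler)
demo_logger.warning('Cache miss rate above threshold')
# Prints something like:
# 2024-01-15 10:30:00,123 | format_demo | WARNING | <module>:13 | Cache miss rate above threshold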

 

Understanding Log Levels and When to Use Each

 
Python's logging module has five standard levels, and knowing when to use each one is key to producing useful logs.

Here is an example:
 

import logging

logger = logging.getLogger('payment_processor')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

def process_payment(user_id, amount):
    logger.debug(f'Starting payment processing for user {user_id}')

    if amount <= 0:
        logger.error(f'Invalid payment amount: {amount}')
        return False

    logger.info(f'Processing ${amount} payment for user {user_id}')

    if amount > 10000:
        logger.warning(f'Large transaction detected: ${amount}')

    try:
        # Simulate payment processing
        success = charge_card(user_id, amount)
        if success:
            logger.info(f'Payment successful for user {user_id}')
            return True
        else:
            logger.error(f'Payment failed for user {user_id}')
            return False
    except Exception as e:
        logger.critical(f'Payment system crashed: {e}', exc_info=True)
        return False

def charge_card(user_id, amount):
    # Simulated payment logic
    return True

process_payment(12345, 150.00)
process_payment(12345, 15000.00)

 

Let us break down when to use each level:

  • DEBUG is for detailed information useful during development. You'd use it for variable values, loop iterations, or step-by-step execution traces. These are usually disabled in production.
  • INFO marks normal operations that you want to record. Starting a server, completing a task, or successful transactions go here. These confirm your application is working as expected.
  • WARNING signals something unexpected but not breaking. This includes low disk space, deprecated API usage, or unusual but handled situations. The application continues running, but someone should investigate.
  • ERROR means something failed but the application can continue. Failed database queries, validation errors, or network timeouts belong here. The specific operation failed, but the app keeps running.
  • CRITICAL indicates serious problems that may cause the application to crash or lose data. Use this sparingly for catastrophic failures that need immediate attention.

When you run the above code, you'll get:
 

DEBUG: Starting payment processing for user 12345
DEBUG:payment_processor:Starting payment processing for user 12345
INFO: Processing $150.0 payment for user 12345
INFO:payment_processor:Processing $150.0 payment for user 12345
INFO: Payment successful for user 12345
INFO:payment_processor:Payment successful for user 12345
DEBUG: Starting payment processing for user 12345
DEBUG:payment_processor:Starting payment processing for user 12345
INFO: Processing $15000.0 payment for user 12345
INFO:payment_processor:Processing $15000.0 payment for user 12345
WARNING: Large transaction detected: $15000.0
WARNING:payment_processor:Large transaction detected: $15000.0
INFO: Payment successful for user 12345
INFO:payment_processor:Payment successful for user 12345
True
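
Because the logger's own level acts as the first filter, raising it hides lower-severity messages without touching the handlers. A quick sketch continuing the example above:

logger.setLevel(logging.WARNING)

process_payment(12345, 15000.00)
# Only WARNING and above get through now:
# WARNING: Large transaction detected: $15000.0
# (plus the duplicate root-logger line, if your environment adds one as above)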

 

Next, let us move on to learn more about logging exceptions.

 

Logging Exceptions Properly

 
When exceptions occur, you need more than just the error message; you need the full stack trace. Here is how to capture exceptions effectively.
 

import json
import logging

logger = logging.getLogger('api_handler')
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler('errors.log')
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)

def fetch_user_data(user_id):
    logger.info(f'Fetching data for user {user_id}')

    try:
        # Simulate API call
        response = call_external_api(user_id)
        data = json.loads(response)
        logger.debug(f'Received data: {data}')
        return data
    except json.JSONDecodeError as e:
        logger.error(
            f'Failed to parse JSON for user {user_id}: {e}',
            exc_info=True
        )
        return None
    except ConnectionError as e:
        logger.error(
            f'Network error while fetching user {user_id}',
            exc_info=True
        )
        return None
    except Exception as e:
        logger.critical(
            f'Unexpected error in fetch_user_data: {e}',
            exc_info=True
        )
        raise

def call_external_api(user_id):
    # Simulated API response
    return '{"id": ' + str(user_id) + ', "name": "John"}'

fetch_user_data(123)

 

The key here is the exc_info=True parameter. This tells the logger to include the full exception traceback in your logs. Without it, you only get the error message, which often isn't enough to debug the problem.

Notice how we catch specific exceptions first, then have a general Exception handler. The specific handlers let us provide context-appropriate error messages. The last handler catches anything unexpected and re-raises it because we don't know how to handle it safely.

Also notice we log at ERROR for expected exceptions (like network errors) but CRITICAL for unexpected ones. This distinction helps you prioritize when reviewing logs.
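
As a shorthand, the logging module's logger.exception() method logs at ERROR level with exc_info=True automatically; it is meant to be called only from inside an exception handler:

try:
    data = json.loads('{not valid json}')
except json.JSONDecodeError:
    # Equivalent to logger.error(..., exc_info=True)
    logger.exception('Failed to parse JSON payload')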

 

Creating a Reusable Logger Configuration

 
Copying logger setup code across files is tedious and error-prone. Let us create a configuration function you can import anywhere in your project.
 

# logger_config.py

import logging
import os
from datetime import datetime


def setup_logger(name, log_dir="logs", level=logging.INFO):
    """
    Create a configured logger instance

    Args:
        name: Logger name (usually __name__ from calling module)
        log_dir: Directory to store log files
        level: Minimum logging level

    Returns:
        Configured logger instance
    """
    # Create logs directory if it doesn't exist
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    logger = logging.getLogger(name)

    # Avoid adding handlers multiple times
    if logger.handlers:
        return logger
    logger.setLevel(level)

    # Console handler - INFO and above
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
    console_handler.setFormatter(console_format)

    # File handler - everything
    log_filename = os.path.join(
        log_dir, f"{name.replace('.', '_')}_{datetime.now().strftime('%Y%m%d')}.log"
    )
    file_handler = logging.FileHandler(log_filename)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
    )
    file_handler.setFormatter(file_format)

    logger.addHandler(console_handler)
    logger.addHandler(file_handler)

    return logger

 

Now that you have set up logger_config, you can use it in your Python script like so:
 

from logger_config import setup_logger

logger = setup_logger(__name__)

def calculate_discount(price, discount_percent):
    logger.debug(f'Calculating discount: {price} * {discount_percent}%')

    if discount_percent < 0 or discount_percent > 100:
        logger.warning(f'Invalid discount percentage: {discount_percent}')
        discount_percent = max(0, min(100, discount_percent))

    discount = price * (discount_percent / 100)
    final_price = price - discount

    logger.info(f'Applied {discount_percent}% discount: ${price} -> ${final_price}')
    return final_price

calculate_discount(100, 20)
calculate_discount(100, 150)

 

This setup function handles several important things. First, it creates the logs directory if needed, preventing crashes from missing directories.

The function checks if handlers already exist before adding new ones. Without this check, calling setup_logger multiple times would create duplicate log entries.

We generate dated log filenames automatically. This prevents log files from growing indefinitely and makes it easy to find logs from specific dates.

The file handler includes more detail than the console handler, including function names and line numbers. This is invaluable when debugging but would clutter console output.

Using __name__ as the logger name creates a hierarchy that matches your module structure. This lets you control logging for specific parts of your application independently, as the sketch below shows.
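
Logger names form a dot-separated hierarchy, so a child like 'myapp.database' inherits settings from 'myapp' unless you override them. A minimal sketch with hypothetical logger names:

import logging

# Parent logger with a console handler; children propagate records to it
parent = logging.getLogger('myapp')
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler())

# Quiet down one noisy subsystem independently
logging.getLogger('myapp.database').setLevel(logging.WARNING)

logging.getLogger('myapp.api').info('Request received')    # shown (inherits INFO)
logging.getLogger('myapp.database').info('Pool stats')     # filtered out
logging.getLogger('myapp.database').warning('Slow query')  # shown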

 

Structuring Logs with Context

 
Plain text logs are fine for simple applications, but structured logs with context make debugging much easier. Let us add contextual information to our logs.
 

import json
import logging
from datetime import datetime, timezone

class ContextLogger:
    """Logger wrapper that adds contextual information to all log messages"""

    def __init__(self, name, context=None):
        self.logger = logging.getLogger(name)
        self.context = context or {}

        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(message)s')
        handler.setFormatter(formatter)
        # Check if handler already exists to avoid duplicate handlers
        if not any(isinstance(h, logging.StreamHandler) and h.formatter._fmt == '%(message)s' for h in self.logger.handlers):
            self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)

    def _format_message(self, message, level, extra_context=None):
        """Format message with context as JSON"""
        log_data = {
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'level': level,
            'message': message,
            'context': {**self.context, **(extra_context or {})}
        }
        return json.dumps(log_data)

    def debug(self, message, **kwargs):
        self.logger.debug(self._format_message(message, 'DEBUG', kwargs))

    def info(self, message, **kwargs):
        self.logger.info(self._format_message(message, 'INFO', kwargs))

    def warning(self, message, **kwargs):
        self.logger.warning(self._format_message(message, 'WARNING', kwargs))

    def error(self, message, **kwargs):
        self.logger.error(self._format_message(message, 'ERROR', kwargs))

 

You can use the ContextLogger like so:
 

def process_order(order_id, user_id):
    logger = ContextLogger(__name__, context={
        'order_id': order_id,
        'user_id': user_id
    })

    logger.info('Order processing started')

    try:
        items = fetch_order_items(order_id)
        logger.info('Items fetched', item_count=len(items))

        total = calculate_total(items)
        logger.info('Total calculated', total=total)

        if total > 1000:
            logger.warning('High value order', total=total, flagged=True)

        return True
    except Exception as e:
        logger.error('Order processing failed', error=str(e))
        return False

def fetch_order_items(order_id):
    return [{'id': 1, 'price': 50}, {'id': 2, 'price': 75}]

def calculate_total(items):
    return sum(item['price'] for item in items)

process_order('ORD-12345', 'USER-789')

 

This ContextLogger wrapper does something useful: it automatically includes context in every log message. The order_id and user_id get added to all logs without repeating them in every logging call.

The JSON format makes these logs easy to parse and search.

The **kwargs in each logging method lets you add extra context to specific log messages. This combines global context (order_id, user_id) with local context (item_count, total) automatically.
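
For example, the 'Items fetched' call above emits a single JSON line like this (timestamp will differ):

{"timestamp": "2024-01-15T10:30:00+00:00", "level": "INFO", "message": "Items fetched", "context": {"order_id": "ORD-12345", "user_id": "USER-789", "item_count": 2}}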

This pattern is especially useful in web applications where you want request IDs, user IDs, or session IDs in every log message from a request.
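
If you prefer to stay closer to the standard library, logging.LoggerAdapter provides similar context injection without a custom wrapper; a minimal sketch:

import logging

# The custom order_id/user_id placeholders are filled from the adapter's extra dict
logging.basicConfig(format='%(message)s [order=%(order_id)s user=%(user_id)s]',
                    level=logging.INFO)

base_logger = logging.getLogger('orders')
adapter = logging.LoggerAdapter(base_logger,
                                {'order_id': 'ORD-12345', 'user_id': 'USER-789'})

adapter.info('Order processing started')
# Order processing started [order=ORD-12345 user=USER-789]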

 

Rotating Log Files to Prevent Disk Space Issues

 
Log files grow quickly in production. Without rotation, they'll eventually fill your disk. Here is how to implement automatic log rotation.
 

import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

def setup_rotating_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Size-based rotation: rotate when file reaches 10MB
    size_handler = RotatingFileHandler(
        'app_size_rotation.log',
        maxBytes=10 * 1024 * 1024,  # 10 MB
        backupCount=5  # Keep 5 old files
    )
    size_handler.setLevel(logging.DEBUG)

    # Time-based rotation: rotate daily at midnight
    time_handler = TimedRotatingFileHandler(
        'app_time_rotation.log',
        when='midnight',
        interval=1,
        backupCount=7  # Keep 7 days
    )
    time_handler.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    size_handler.setFormatter(formatter)
    time_handler.setFormatter(formatter)

    logger.addHandler(size_handler)
    logger.addHandler(time_handler)

    return logger


logger = setup_rotating_logger('rotating_app')

 

Let us now try out log file rotation:
 

for i in range(1000):
    logger.info(f'Processing record {i}')
    logger.debug(f'Record {i} details: completed in {i * 0.1}ms')

 

RotatingFileHandler manages logs based on file size. When the log file reaches 10MB (specified in bytes), it gets renamed to app_size_rotation.log.1, and a new app_size_rotation.log starts. The backupCount of 5 means you'll keep 5 old log files before the oldest gets deleted.

TimedRotatingFileHandler rotates based on time intervals. The 'midnight' parameter means it creates a new log file every day at midnight. You could also use 'H' for hourly, 'D' for daily (at any time), or 'W0' for weekly on Monday.

The interval parameter works together with the when parameter. With when='H' and interval=6, logs would rotate every 6 hours, as in the sketch below.
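
For instance, a handler that rotates every six hours and keeps a day's worth of backups could look like this:

from logging.handlers import TimedRotatingFileHandler

six_hour_handler = TimedRotatingFileHandler(
    'app_6h.log',
    when='H',       # hourly schedule...
    interval=6,     # ...triggered every 6 hours
    backupCount=4   # keep four rotated files (one day)
)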

These handlers are essential for production environments. Without them, your application could crash when the disk fills up with logs.

 

Logging in Different Environments

 
Your logging needs differ between development, staging, and production. Here is how to configure logging that adapts to each environment.
 

import logging
import logging.handlers
import os

def configure_environment_logger(app_name):
    """Configure logger based on environment"""
    environment = os.getenv('APP_ENV', 'development')

    logger = logging.getLogger(app_name)

    # Clear existing handlers
    logger.handlers = []

    if environment == 'development':
        # Development: verbose console output
        logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler()
        handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter(
            '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        )
        handler.setFormatter(formatter)
        logger.addHandler(handler)

    elif environment == 'staging':
        # Staging: detailed file logs + important console messages
        logger.setLevel(logging.DEBUG)

        file_handler = logging.FileHandler('staging.log')
        file_handler.setLevel(logging.DEBUG)
        file_formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'
        )
        file_handler.setFormatter(file_formatter)

        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.WARNING)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)

        logger.addHandler(file_handler)
        logger.addHandler(console_handler)

    elif environment == 'production':
        # Production: structured logs, errors only to console
        logger.setLevel(logging.INFO)

        file_handler = logging.handlers.RotatingFileHandler(
            'production.log',
            maxBytes=50 * 1024 * 1024,  # 50 MB
            backupCount=10
        )
        file_handler.setLevel(logging.INFO)
        file_formatter = logging.Formatter(
            '{"timestamp": "%(asctime)s", "level": "%(levelname)s", '
            '"logger": "%(name)s", "message": "%(message)s"}'
        )
        file_handler.setFormatter(file_formatter)

        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.ERROR)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)

        logger.addHandler(file_handler)
        logger.addHandler(console_handler)

    return logger

 

This environment-based configuration handles each stage differently. Development shows everything on the console with detailed information, including function names and line numbers. This makes debugging fast.

Staging balances development and production. It writes detailed logs to files for investigation but only shows warnings and errors on the console to avoid noise.

Production focuses on performance and structure. It only logs INFO level and above to files, uses JSON formatting for easy parsing, and implements log rotation to manage disk space. Console output is limited to errors only.
 

# Set environment variable (normally done by the deployment system)
os.environ['APP_ENV'] = 'production'

logger = configure_environment_logger('my_application')

logger.debug('This debug message will not appear in production')
logger.info('User logged in successfully')
logger.error('Failed to process payment')

 

The environment is determined by the APP_ENV environment variable. Your deployment system (Docker, Kubernetes, or other cloud platforms) sets this variable automatically.

Notice how we clear existing handlers before configuring. This prevents duplicate handlers if the function is called multiple times during the application lifecycle.
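
If you'd rather keep this configuration declarative, the same per-environment split can be expressed with logging.config.dictConfig, which is easy to load from a settings file. A minimal sketch of the development case:

import logging.config

DEV_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'verbose'
        }
    },
    'loggers': {
        'my_application': {
            'handlers': ['console'],
            'level': 'DEBUG'
        }
    }
}

logging.config.dictConfig(DEV_LOGGING)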

 

Wrapping Up

 
Good logging makes the difference between quickly diagnosing issues and spending hours guessing what went wrong. Start with basic logging using appropriate severity levels, add structured context to make logs searchable, and configure rotation to prevent disk space problems.

The patterns shown here work for applications of any size. Start simple with basic logging, then add structured logging when you need better searchability, and implement environment-specific configuration when you deploy to production.

Happy logging!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


