Python Logging Levels Explained

Python logging levels are severity categories that control which log messages are recorded. Learn how they work and how to configure them.
20 min read
August 17, 2021

The quick download

Python logging levels help you filter log noise, prioritize events by severity, and improve root cause analysis in distributed systems.

  • Python logging levels (DEBUG through CRITICAL) enable severity-based log filtering and prioritization, reducing noise and improving event visibility

  • Logging works through loggers, handlers, and formatters, where messages must pass level checks before being emitted

  • Using proper logging (instead of print) provides structured, scalable, and centralized visibility across applications

  • Recommendation: Standardize logging levels and configurations across environments to get consistent, actionable insights in production

     

Python logging levels control which events get recorded and how important they are, helping you reduce noise and focus on what’s important in applications.

As systems grow more complex, logging becomes essential for debugging, understanding performance, identifying issues, and maintaining reliability. Python’s built-in logging module provides a flexible way to capture this information without requiring additional setup.

In this article, you’ll learn what each Python logging level means, how to configure logging levels for different use cases, how logger and handler levels work together, and when to use each level in real-world scenarios.

What Is Python Logging?

Python logging is the process of recording events that happen while an application runs. These records help teams monitor behavior, troubleshoot issues, and understand what the system is doing over time.

Python includes a built-in logging module that provides a flexible framework for generating and routing log messages. It allows developers to capture messages from different parts of an application and send them to the right destination, such as the console, a file, or external systems (e.g., log management platforms).

To emit a log message, the application first uses a named logger. That logger creates a log record, which is then passed to handlers (either directly or via propagation to ancestor loggers) and each handler uses a formatter to structure the final output.

To simplify how this works in practice, the logging process follows this flow:

  • The logger creates a log record from your message
  • The logger checks its level to decide if the message should be processed
  • If enabled, the log record is passed to handlers attached to the logger (and optionally its ancestors, depending on propagation)
  • The handler checks its own level to decide if it should output the message
  • The formatter formats the message before it is sent to its destination, such as the console or a file

Most developers do not need to manage all of this manually. Once logging is configured, the process runs in the background.
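The flow above can be sketched with a minimal configuration (the logger name and format string here are arbitrary choices for illustration):

```python
import logging

# Create a named logger and set the level it will accept
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Attach a handler with its own level and a formatter
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
logger.addHandler(handler)

logger.debug("Dropped: below the logger's INFO level")
logger.info("Emitted: passes both the logger and handler checks")
```

The DEBUG call never reaches the handler because it fails the logger's level check; the INFO call passes both checks and is formatted and written to the console.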

Why Printing Is Unsuitable

The following are the main reasons to avoid print() in favor of logging:

  • print() can write to file-like objects, but it lacks built-in support for structured logging, multiple destinations, and advanced output control.
  • print() first converts messages into text strings. Its file argument can redirect output to a file, but the target must be an object with a write(string) method; print() writes text only and is not designed for binary logging or structured data handling.
  • print() statements are difficult to categorize.

Take, for example, a log file that contains a large variety of print statements. Once the application has gone through different stages of development and is put into production, categorizing and debugging these print statements is nearly impossible. 

The print statements may be modified to suit the different stages and provide additional information, but this clutters the codebase in an attempt to force print() to do something it was never built to do.

To better understand the differences, here’s a quick comparison of print statements, logging, and modern observability tools:

| Approach | Best for | Limitations | Typical use case |
| --- | --- | --- | --- |
| print() | Quick debugging in small scripts | No levels, no built-in structure or metadata, and hard to manage at scale | Testing simple scripts or one-off debugging |
| Logging module | Application monitoring and debugging | Requires configuration and can become complex in large or distributed systems | Production apps, debugging, and tracking system behavior |
| Monitoring/observability tools | Large-scale systems and distributed environments | Requires setup, external tools, and cost | Centralized logging, alerting, dashboards (e.g., ELK, Datadog), and correlation across logs, metrics, and traces |

Best Python Logging Practices According to Level

Logging levels should be configured based on the environment your application is running in:

  • Development: Use DEBUG to capture detailed execution flow and diagnose issues quickly
  • Staging/Testing: Use DEBUG or INFO depending on how much visibility is needed during testing
  • Production: Use INFO or WARNING to reduce noise and focus on meaningful events
  • Critical systems: Use ERROR and CRITICAL with alerting to detect urgent issues immediately

Python Logging Module Advantages

The Python logging module provides flexible, built-in capabilities for capturing and managing application logs. Its key advantages include: 

  • Flexible formatting: Customize log output with timestamps, levels, file names, and other contextual information
  • Log level control: Categorize messages by severity (DEBUG, INFO, WARNING, etc.) to reduce noise and focus on what matters
  • Multiple destinations: Route logs to different outputs such as console, files, or other systems via integrations (e.g., sockets, HTTP handlers, or third-party tools)
  • Modular logging design: Each module can generate logs independently using its own named logger without needing to manage global configuration 
  • Centralized control: Applications can configure logging behavior globally while keeping module-level code simple

Python logging also supports a hierarchical structure:

  • Loggers follow a naming hierarchy (for example, app, app.database, app.api)
  • Child loggers can inherit configuration such as levels and handlers from parent loggers (unless explicitly overridden)
  • This makes it easier to manage logging across large or multi-module applications 
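The inheritance can be observed directly (the svc and svc.db names are illustrative):

```python
import logging

parent = logging.getLogger("svc")
parent.setLevel(logging.WARNING)

# Child logger: created with a dotted name and no level of its own (NOTSET)
child = logging.getLogger("svc.db")

# The child inherits its effective level from "svc"
print(child.getEffectiveLevel() == logging.WARNING)  # True
```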

Where Should Python Logs Be Written? (File vs Stdout)

File-based logging is good for applications running on standalone servers or environments where local persistence is important. 

However, in containerized or cloud-native environments (such as Docker or Kubernetes), it is generally recommended to log to standard output (stdout) or standard error (stderr) instead. This allows orchestration platforms and logging agents to automatically collect, aggregate, and manage logs centrally.

It’s also important to manage log file size over time. 

Python provides built-in handlers such as RotatingFileHandler (which rotates logs based on file size) and TimedRotatingFileHandler (which rotates logs at time intervals). In many production environments, external tools like logrotate are used alongside or instead of these handlers to manage log retention and prevent disk space issues.
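A rotating file handler takes only a few lines to configure; the file name and limits below are arbitrary examples:

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("app.rotating")
logger.setLevel(logging.INFO)

# Rotate when the file reaches ~1 MB, keeping 5 old files
# (app.log.1 through app.log.5)
handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("This message goes to app.log")
```

TimedRotatingFileHandler works the same way but takes a `when` interval (e.g., "midnight") instead of a size limit.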

What Are Python Logging Levels?

Python defines five primary logging levels that indicate the seriousness of an event, plus NOTSET, which controls level inheritance.

  1. NOTSET = 0: The default level assigned to a logger when it is created. It means the logger inherits its effective level from its parent logger. The root logger is initialized with level WARNING by default.
  2. DEBUG = 10: This level gives detailed information, useful only when a problem is being diagnosed.
  3. INFO = 20: This is used to confirm that everything is working as it should.
  4. WARNING = 30: This level indicates that something unexpected has happened, or that some problem is about to happen in the near future.
  5. ERROR = 40: As it implies, an error has occurred. The software was unable to perform a function.
  6. CRITICAL = 50: A serious error has occurred. The program itself may shut down or not be able to continue running properly.

Note: By default, the root logger is set to WARNING, which means DEBUG and INFO messages will not appear unless the logging level is explicitly configured.
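Since each level is just an integer, both the numeric values and the default filtering can be checked directly:

```python
import logging

# Each level name maps to an integer; filtering is a numeric comparison
for name in ("NOTSET", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"):
    print(f"{name:8} = {logging.getLevelName(name)}")

# With no configuration, the root logger defaults to WARNING,
# so the first two calls below produce no output
logging.debug("not shown")
logging.info("not shown")
logging.warning("shown")
```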

Quick Overview

Here’s a quick reference for when to use each logging level in practice:

| Level | Use it for | Example message |
| --- | --- | --- |
| DEBUG | Detailed diagnostics and troubleshooting, typically only enabled in development or during debugging | “Connecting to database with config X” |
| INFO | Confirming normal application behavior and key state changes | “User logged in successfully” |
| WARNING | Unexpected events that don’t break the app but may require attention | “Disk space is running low” |
| ERROR | Failures that affect functionality for a specific operation or request | “Database connection failed” |
| CRITICAL | Severe failures that may disrupt the system or require immediate intervention | “System outage – service unavailable” |

Developers can define their own levels, but this is not a recommended practice. The levels in the module have been created through many years of practical experience and are designed to cover all the necessary bases.

When a programmer does feel the need to create custom levels, great care should be exercised because the results could be less than ideal, especially when developing a library.

This is because when multiple library authors define their custom levels, the logging output will be nearly impossible for the developer using the library to control or understand because the numeric values can mean different things.

Should You Create Custom Python Log Levels?

Python allows you to define custom log levels by assigning a numeric value and registering a name using logging.addLevelName(). For example:

NOTICE_LEVEL = 25

logging.addLevelName(NOTICE_LEVEL, "NOTICE")

To fully use a custom level, you must also define a corresponding method (e.g., logger.notice()) or use logger.log() with the custom level value.
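A minimal sketch of both options (the NOTICE name and the value 25 are illustrative, not part of the standard levels):

```python
import logging

NOTICE_LEVEL = 25  # sits between INFO (20) and WARNING (30)
logging.addLevelName(NOTICE_LEVEL, "NOTICE")

logger = logging.getLogger("app.custom")
logger.setLevel(NOTICE_LEVEL)

# Option 1: use logger.log() with the custom level value
logger.log(NOTICE_LEVEL, "A notice-level event")

# Option 2: attach a convenience method to the Logger class
def notice(self, message, *args, **kwargs):
    if self.isEnabledFor(NOTICE_LEVEL):
        self._log(NOTICE_LEVEL, message, args, **kwargs)

logging.Logger.notice = notice
logger.notice("The same event via logger.notice()")
```

Note that option 2 monkey-patches the Logger class globally, which is one reason custom levels are risky in shared libraries.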

However, custom levels are generally discouraged. The standard levels (DEBUG, INFO, WARNING, ERROR, CRITICAL) cover most use cases, and introducing new levels can make logs harder to understand and maintain—especially across teams or shared libraries.

How to Configure Python Logging

The quickest way to configure logging behavior is the logging module’s basicConfig() function. However, the Python documentation recommends creating a separate logger for each module in the application.

Configuring a separate logger for each module is difficult with basicConfig() alone, which is why most larger applications use a file-based or dictionary-based logging configuration (fileConfig() or dictConfig()) instead.

Main Parameters of basicConfig()

The three main parameters of basicConfig() are:

  • Level: The level determines the minimum priority level of messages to log. Messages will be logged in order of increasing severity: DEBUG is the least threatening, INFO is also not very threatening, WARNING needs attention, ERROR needs immediate attention, and CRITICAL means “drop everything and find out what’s wrong.” The default starting point is WARNING, which means that the logging module will automatically filter out any DEBUG or INFO messages.
  • Handlers: This parameter determines where to route the logs. Unless a destination is specifically identified, the logging library uses a StreamHandler by default, directing all logged messages to sys.stderr (usually the console).
  • Format: The default setting for logging messages is: <LEVEL>:<LOGGER_NAME>:<MESSAGE>.

For example, you can configure the root logger to show all messages (including DEBUG) like this:

import logging

logging.basicConfig(level=logging.DEBUG)

logging.debug("This debug message will now appear")

This changes the default threshold from WARNING to DEBUG, allowing lower-severity logs to be captured.

Since the logging module only captures WARNING and higher-level logs by default, there may be a lack of visibility concerning lower-priority logs that could be useful when a root cause analysis is required.

The main application should be able to configure the logs in the subsystem so that all log messages go to the correct location. The logging module in Python provides a large number of ways that this can be fine-tuned, but for nearly all of the applications, configurations are usually quite simple.

Generally speaking, a configuration will consist of the addition of a formatter and a handler to the root logger. Since this is such a common practice, the logging module is equipped with a standardized utility function called basicConfig that handles the majority of use cases.

In more complex applications, it is common to use custom loggers instead of relying only on the root logger:

import logging

logger = logging.getLogger(__name__)

logger.setLevel(logging.INFO)

logger.info("This is an info message from a custom logger")

Custom loggers allow different modules to have independent logging behavior while still participating in a shared logging hierarchy and inheriting configuration (such as handlers) from parent loggers unless propagation is disabled.

You can also configure multiple handlers with different logging levels for the same logger:

import logging

logger = logging.getLogger(__name__)

logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()

console_handler.setLevel(logging.DEBUG)

file_handler = logging.FileHandler("app.log")

file_handler.setLevel(logging.ERROR)

logger.addHandler(console_handler)

logger.addHandler(file_handler)

logger.debug("Debug message (console only)")

logger.error("Error message (console and file)")

In this setup, debug messages are shown in the console, while only error messages are written to the file.

A log message must pass two checks before it is output: first, it must meet the logger’s level, and then it must meet the handler’s level. This provides fine-grained control over what gets logged and where it is sent.

The application should configure the logs as early in the process as possible. Preferably, this is the first thing the application does, so that log messages won’t get lost during the startup.

Applications should also wrap the main application code in a try/except block so that any unhandled exceptions are sent through the logging interface rather than written directly to stderr.
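That pattern looks like this (main() is a placeholder for your application's real entry point, and the RuntimeError simulates a startup failure):

```python
import logging

logging.basicConfig(level=logging.INFO)

def main():
    # Placeholder for the real application logic
    raise RuntimeError("simulated startup failure")

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Route the traceback through logging instead of bare stderr
        logging.exception("Unhandled exception in main")
```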

Note: basicConfig() only configures the root logger once. If handlers are already configured, calling it again will have no effect unless you use force=True.

If a logger’s level is set to NOTSET, it inherits its effective level from its parent logger. You can check the actual level being used with logger.getEffectiveLevel(), which reflects this inheritance behavior.

You can also disable logging globally using logging.disable(). For example, logging.disable(logging.WARNING) will suppress all messages at WARNING level and below (i.e., WARNING, INFO, and DEBUG).
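For example, a global suppression can be applied and later lifted:

```python
import logging

logging.basicConfig(level=logging.DEBUG)

logging.disable(logging.WARNING)   # suppress WARNING and everything below
logging.warning("suppressed")
logging.error("still shown")

logging.disable(logging.NOTSET)    # lift the suppression again
```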

Python Logging Formatting

The Python logging formatter adds context to log messages. This is very useful when the timestamp, logger name, file name, line number, function, and other information about the log are needed. Adding the thread and process IDs can also be extremely helpful when debugging a multithreaded application.

Here is a simple example of what happens to the log “hello world” when it is sent through a log formatter:

"%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"

turns into:

2018-02-07 19:47:41,864 - a.b.c - WARNING - <module>:1 - hello world

Common Logging Formatter Fields and What They Represent

The following are the most common logging formatter fields in Python:

| Field | Description |
| --- | --- |
| %(asctime)s | Timestamp of when the log was created (formatted based on the formatter’s datefmt setting) |
| %(levelname)s | Logging level (DEBUG, INFO, etc.) |
| %(name)s | Name of the logger |
| %(lineno)d | Line number where the log was triggered |
| %(funcName)s | Function name where the log originated |
| %(message)s | The actual log message |

In modern applications, logs are often structured rather than plain text. 

Structured logging (such as JSON format) allows logs to be machine-readable, making it easier to search, filter, and analyze them in log aggregation systems like ELK, Datadog, or Splunk. This is quite helpful in distributed systems where logs need to be correlated across multiple services.
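A minimal JSON formatter can be built on the standard library alone. The field selection below is an arbitrary sketch; production systems often use a dedicated library such as python-json-logger instead:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object."""

    def format(self, record):
        payload = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

logger = logging.getLogger("app.json")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

logger.warning("disk space low")
```

Each line of output is then a self-contained JSON document that aggregation systems can parse, index, and filter on individual fields.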

You can also use different formatters for different outputs. 

For example, a simple format for console output and a more detailed format for file logs:

import logging 

logger = logging.getLogger(__name__)

logger.setLevel(logging.DEBUG)

# Console handler (simple format)

console_handler = logging.StreamHandler()

console_handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))

# File handler (detailed format)

file_handler = logging.FileHandler("app.log")

file_handler.setFormatter(logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s"))

logger.addHandler(console_handler)

logger.addHandler(file_handler)

logger.info("Application started")

This approach provides flexibility to customize log output for different audiences — developers reading console logs versus systems processing detailed log files.

String Formatting in Python

When working with Python logging specifically, you must choose the right string formatting approach. In logging calls, %s-style formatting is often preferred because it defers string interpolation until the logging system processes the message, after log level checks determine whether the message will actually be emitted.

This avoids unnecessary performance overhead when lower-level logs (like DEBUG) are filtered out.
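Note that the arguments themselves are still evaluated at the call site; only the string interpolation is deferred. When building an argument is itself expensive, an isEnabledFor() guard skips the work entirely (expensive_summary() is a hypothetical stand-in for a costly computation):

```python
import logging

logger = logging.getLogger("app.perf")
logger.setLevel(logging.INFO)

def expensive_summary():
    # Hypothetical stand-in for a costly computation
    return "detailed state dump"

# Interpolation deferred: the %s substitution only happens if emitted
logger.debug("state: %s", "cheap value")

# Guard: avoid even building the expensive argument when DEBUG is off
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("state: %s", expensive_summary())
```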

With the Python logging formatter, string formatting is made easy.

The old Zen of Python states that there should be “one obvious way to do something in Python.” Now, there are four major ways to do string formatting in Python.

1) The “Old Style” Python String Formatting

Strings in Python have a unique built-in operation that developers can access with the % operator. This allows for quick, simple positional formatting. Those familiar with a printf-style function in C will immediately recognize how this operator works.

For example:

>>> 'Hello, %s' % name

'Hello, Mike'

The %s format specifier tells Python that the value of name should be substituted at this location and represented as a string.

In logging, this style is commonly used like this:

logger.debug("User %s logged in", username)

This way, the string is only formatted if the log level allows the message to be processed.

Other format specifiers are available to give the programmer greater control of the output format. For instance, a designer may want to convert numbers to hexadecimal notations or add a little white space padding to create custom-formatted tables and reports.

The “old style” string format syntax changes slightly when multiple substitutions are made in one string. Since the % operator takes only one argument, the right-hand side must be wrapped in a tuple.
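For example, with two substitutions the values go in a tuple:

```python
name = "Mike"
errno = 0xbadc0ffee

# Multiple %-substitutions require a tuple on the right-hand side
message = "Hey %s, there is a 0x%x error!" % (name, errno)
print(message)  # Hey Mike, there is a 0xbadc0ffee error!
```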

2) Python 3 Introduces “New Style” String Formatting

Python 3 introduced a new way of doing string formatting that was later backported to Python 2.7. With this “new style” of string formatting, the special % operator syntax is no longer used, and the syntax for string formatting is made more regular. Formatting is now handled by calling the .format() method on a string object.

The format() method can be used for simple positional substitutions, just like “old style” formatting, or with substitutions designated by name, which can then be used in any desired order.

People working in DevOps will agree that this is quite a powerful feature because it allows the order of display to be easily rearranged without changing the arguments passed to format():

>>> 'Hey {name}, there is a 0x{errno:x} error!'.format(

...     name=name, errno=errno)

'Hey Mike, there is a 0xbadc0ffee error!'

This example also demonstrates that the syntax for formatting an int variable as a hexadecimal string has changed: you now pass a format spec by adding a :x suffix inside the braces. This makes the format string syntax more powerful without complicating the simpler use cases.

When using Python 3, the “new style” string formatting is highly recommended and should be preferred over the % style of formatting. Although the “old style” formatting is no longer emphasized as the be-all and end-all, it has not been deprecated. Python still supports this style in its latest versions.

3) String Interpolation

With the introduction of Python 3.6, a new way of string formatting was added. This one is called formatted string literals or simply “f-strings.” This new approach to formatting strings allows developers to use embedded Python expressions within string constants. This is a simple example of how this feature feels:

>>> f'Hello, {name}!'

'Hello, Mike!'

F-strings are widely used in general Python code, but in logging, they can be less efficient because the string is always evaluated, even if the log message is never emitted.

The string constant is prefixed with the letter “f,” which is why these are called “f-strings.” This formatting syntax lets programmers embed arbitrary Python expressions, including arithmetic and function calls. Formatted string literals are a parser feature that converts each f-string into a series of string constants and expressions, which are then joined to build the final string.

Look at this greet() function containing an f-string:

>>> def greet(name, question):

...     return f"Hello, {name}! How are you {question}?"

>>> greet('Mike', 'doing')

'Hello, Mike! How are you doing?'

4) Template Strings

One more exceptional tool for string formatting in Python is the template string method. This is a simpler, yet less powerful mechanism, but when it comes to functionality, it could be the answer developers are looking for. Look at this simple greeting:

>>> from string import Template

>>> t = Template('Hey, $name!')

>>> t.substitute(name=name)

'Hey, Mike!'

The Template class must first be imported from Python’s built-in string module. Template strings are not a core language feature, but they are supplied by the string module in the standard library.

Another factor that separates this format from the others is that template strings do not allow format specifiers. This means that for the previous error string example to work, the int error number will have to be manually transformed into a hex-string.

So, when is it a good idea to use template strings in a Python program? The best case when template strings should be used is when the situation calls for the handling of formatted strings that users of the program have generated. Since they are not very complicated, template strings are often a much safer choice when catering to a novice audience.

Best choice for logging: For most logging scenarios, %s-style formatting is the recommended approach because it avoids unnecessary computation and integrates directly with the logging module. F-strings and .format() are still useful in general Python code, but should be used carefully in logging-heavy or performance-sensitive applications.

Errors and Exceptions in Python Handling

Syntax or parsing errors are the most common alerts. The parser repeats the offending line and points to where the error was first detected, so the fix is usually as simple as correcting that line.

An exception occurs when an error is detected during execution. A statement may be syntactically correct, yet the operation it performs can still fail. Exceptions are not necessarily fatal, because they can be caught and handled; however, the program will not handle them automatically, and an unhandled exception will terminate it.

Alternatively, the program can be written to handle certain predictable exceptions. For example, a program can keep prompting until the user enters a valid integer, while still allowing the user to interrupt it with Control-C; the try statement is what makes this handling possible.

How to Log Exceptions in Python

When working with logging, record exceptions with enough context to make debugging easier. For this, you can use Python’s logging module’s built-in methods because they capture both the error message and the full stack trace:

logging.exception() vs logging.error(…, exc_info=True)

Both approaches include stack trace information, but they are used slightly differently:

import logging

try:

   result = 10 / 0

except Exception:

   logging.exception("An error occurred")  # Automatically includes stack trace

import logging

try:

   result = 10 / 0

except Exception:

   logging.error("An error occurred", exc_info=True)  # Explicitly includes stack trace

logging.exception() is a shortcut that logs at ERROR level and is intended for use inside an except block, while logging.error(…, exc_info=True) gives you more flexibility and can be used in other contexts as well.

Logging Variables and Context with Exceptions

In real-world applications, logging only the error message is often not enough. You should include relevant variables or context, such as user IDs, request IDs, or input values, to speed up debugging significantly.

import logging

user_id = 123

filename = "data.csv"

try:

   open(filename)

except Exception:

   logging.error(

       "Failed to process file for user_id=%s, filename=%s",

       user_id,

       filename,

       exc_info=True

   )

This approach provides both the stack trace and the runtime context needed to understand what caused the issue.

Choose and Apply the Right Python Logging Levels

Effective logging comes down to being deliberate about what you capture and why. Logging levels should reflect how your system behaves in different environments, while handlers and formatting should make it easy to surface the signals that actually matter.

As your applications grow, logs become a key part of how you understand system behavior. In fact, well-structured logs make it easier to trace execution flow, diagnose issues, monitor performance, and respond to problems with confidence without getting lost in unnecessary noise.

Connect your logging levels to a system that can actually act on those insights

Logs on their own only tell part of the story. When combined with metrics, alerts, and automated correlation, they become a powerful tool for understanding system behavior and resolving issues faster.

FAQs

1. What is the default logging level in Python?

The default logging level in Python is WARNING. This means only messages at WARNING, ERROR, and CRITICAL levels are displayed unless you explicitly change the configuration.

2. What’s the difference between logger level and handler level?

A log message must pass two filters before being output:

  • The logger level determines whether the message is processed at all
  • The handler level determines whether it gets sent to a specific destination

Both conditions must be met for the message to appear.

3. When should I use DEBUG vs INFO vs ERROR?

Use logging levels based on intent:

  • DEBUG: Detailed diagnostics for development
  • INFO: Normal application behavior and key events
  • WARNING: Unexpected situations that don’t break functionality
  • ERROR: Failures affecting a specific operation
  • CRITICAL: Severe issues that may stop the application

4. Is Python logging thread-safe?

Yes, Python’s logging module is thread-safe by default. It uses internal locking to ensure log messages from multiple threads do not interfere with each other.

5. Should I use logging or print for debugging?

Use logging instead of print() for anything beyond simple scripts. Logging provides:

  • Severity levels
  • Structured output
  • Flexible destinations (console, file, external systems)
  • Better scalability in production environments

6. How should logs be structured in production systems?

In production, logs should be structured and consistent:

  • Use formats like JSON for machine readability
  • Include context (e.g., request IDs, user IDs)
  • Standardize logging levels across services

This makes logs easier to search, filter, and analyze.

7. Can Python logging be integrated with monitoring tools?

Yes. Python logs can be integrated with monitoring and observability platforms by:

  • Writing logs to stdout/stderr (common in containers)
  • Using log shippers like Fluentd or Logstash
  • Sending logs to platforms like ELK, Datadog, or LogicMonitor

This enables centralized logging, alerting, and correlation with metrics and traces.
