Python Logging Levels Explained

The complexity of applications is continually increasing the need for good logs. This need is not just for debugging purposes but also for gathering insight about the performance and possible issues with an application.

The Python standard library includes an extensive range of facilities and modules that provide the basic logging features, giving Python programmers access to functionality they would otherwise have to build themselves. When set up correctly, log messages can surface a wealth of useful information about when and where a log was fired, as well as the context of the log.

Python has a built-in logging module that is designed to provide critical visibility into applications without the need for a complicated setup. Whether an enterprise is just starting out or is already fully immersed with Python’s logging module, there is always something new and interesting to learn about configuring this module. Team members will be able to easily log all of the data they need, route it to the proper locations, and centralize all logs for a deeper insight into vital applications.

What Is Python Logging?

Logging is the way that IT teams track events that occur when software applications are run. Logging is an important part of software development, debugging, and the smooth running of the completed program. Without an accurate record of what the application was doing, if the program crashes there is very little chance of detecting the cause of the problem.

If by some miracle the cause can be detected, it will be an arduous, time-consuming ordeal. With logging, a digital breadcrumb trail is left that can be easily followed back to the root of the problem.

Python’s standard library ships with a logging module that provides a flexible framework for emitting log messages from Python programs. This logging module is used widely by libraries and has become the first go-to point for many developers when logging.

This indispensable module gives applications a way to configure a variety of log handlers and to route log messages to the correct handlers. This makes possible a highly flexible configuration capable of dealing with a wide range of different use cases.

Log levels relate to the “importance” of the log. For example, an “error” log is a top priority and should be considered more urgent than a “warn” log. A “debug” log is usually only useful when the application is being debugged.

Python has six log levels with each one assigned a specific integer indicating the severity of the log:

  • NOTSET=0
  • DEBUG=10
  • INFO=20
  • WARNING=30
  • ERROR=40
  • CRITICAL=50

To emit a log message, the caller must first request a named logger. This name may be used by the application to configure different sets of rules for different loggers. The logger can then be used to send out simple formatted messages at different logging levels (such as DEBUG, INFO, ERROR), which the application in turn can use to handle messages of higher importance differently from those with lower priority. Although this may sound quite complicated, it is really simple.
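
As a minimal sketch (the logger name and the messages are invented for illustration), a caller requests a named logger and emits messages at each level:

import logging

logger = logging.getLogger("myapp.db")     # request a named logger (hypothetical name)

logger.debug("connection pool created")    # DEBUG = 10
logger.info("connected to database")       # INFO = 20
logger.warning("slow query detected")      # WARNING = 30
logger.error("query failed")               # ERROR = 40
logger.critical("database unreachable")    # CRITICAL = 50

# Note: without further configuration, only WARNING and above are printed,
# because the root logger defaults to the WARNING level.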

Behind the scenes, the message is transferred into a log record object and then routed over to a handler object that is registered specifically to this logger. The handler then uses a formatter to turn the log record into a string and send out that string.

Luckily, most of the time, developers do not have to be aware of the details. Everything happens seamlessly in the background.

Some developers choose to use the printing method to validate whether or not statements are executed correctly. However, printing is not the ideal solution. It may be able to solve issues on simple scripts, but when it comes to complex scripts, the printing approach is not adequate.

Why Printing Is Unsuitable

The main reason printing is not an ideal solution for logging is that it does not provide a timestamp showing when the error occurred.

Knowing exactly when an error occurred can be an important factor while debugging an application. It might not be as critical for small pieces of code that can be run and tested in real time, but for larger applications the absence of an error timestamp has serious consequences. One option would be to use the datetime module to add that extra information, but this clutters the codebase. Here are a few other reasons to avoid this method:

  • Print messages cannot be saved to every type of file.
  • Print messages are first converted into text strings. Developers can use the file argument in print to save messages to a file, but it must be an object with a write(string) method, so it is not possible to write messages to binary files (see the sketch after this list).
  • Print statements are difficult to categorize.
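
For illustration, here is a small sketch (the file name app.log is hypothetical) of print writing to a text file through its file argument; the result is plain text with no level, timestamp, or logger name attached:

# print accepts any object with a write(string) method via its file argument,
# but the output carries no severity level or timestamp
with open("app.log", "a") as f:
    print("something went wrong", file=f)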

Take, for example, a log file that contains a large variety of print statements. Once the application has gone through different stages of development and is put into production, categorizing and debugging these print statements is nearly impossible. The print statements may be modified to suit the different stages and provide additional information, but this would add a load of useless data to the codebase in an attempt to force that print to do something it is not suited or built to do.

Best Python Logging Practices According to Level

The standard library in Python includes a flexible logging module built right in, allowing developers to create different configurations for various logging needs. The functions contained in this module are designed to allow developers to log to different destinations. This is done by defining specific handlers and sending the log messages to the designated handlers.

Logging levels are the labels added to the log entries for the purpose of searching, filtering, and classifying log entries. This helps to manage the granularity of information. When log levels are set using the standard logging library, only events of that level or higher will be recorded.

It is important to always include a timestamp for each log entry. Knowing an event occurred without knowing when is not much better than not knowing about the event at all. This information is useful for troubleshooting, as well as for better insights for analytical uses.

Unfortunately, people don’t always agree on the format to use for timestamps. The first instinct would be to use the standard of the country of origin, but if the application is available worldwide, this can get very confusing. The recommended format is called ISO-8601. It’s an internationally recognized standard expressed as YYYY-MM-DD followed by the time, like this: 2021-07-14T14:00-02:00.
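
One way to get ISO 8601-style timestamps from the logging module is to pass a datefmt to basicConfig; this is a minimal sketch, and the exact format string is a matter of preference:

import logging

logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S%z",  # e.g. 2021-07-14T14:00:00+0200 (offset rendering depends on platform)
    level=logging.INFO,
)
logging.info("service started")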

Python Logging Module Advantages

The Python logging module provides intelligent solutions for all of these problems.

The format of the messages that are logged can be easily controlled. The module is equipped with various useful attributes that can be included or left out of the log. This leaves a clean, informative log of the stages of development.

Messages can be logged with different levels of urgency or warning information to make categorization easier. Debugging an application is also easier when logs are categorized properly. Plus, the destination of logs can be set to anything, even sockets.

A well-organized Python application is most likely composed of more than one module. In some cases, the intention is for these modules to be used by other programs, but unless the developer deliberately designs reusable modules, an application typically uses a mix of modules available from the Python Package Index and modules written specifically for that application.

As a best practice, a module should only produce log messages and not configure how those messages are handled. The application is responsible for that part.

The only responsibility the modules should have is to make it simple for the application to route the log messages. This is the reason it is standard for each module to simply use a logger named after the module itself. This makes it easier for the application to route the different modules differently, while keeping the logging code inside the module simple. A module only requires two simple lines to set up logging and then use the named logger, as shown below.
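
Those two lines typically look like this; using __name__ means the logger automatically takes the module’s own name:

import logging

logger = logging.getLogger(__name__)   # a logger named after the module

# elsewhere in the module:
# logger.info("loaded configuration")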

Python Logging to a File

Using the logging module to record events in a file is simple and straightforward. The module is simply imported from the library. The logger can then be created and configured as desired. Several parameters can be set, but it is important to pass the name of the file where the events are to be recorded.

This is where the format of the log messages and the file mode can also be set. The file mode defaults to append, but if required, that can be changed to write mode. Additionally, the logger level can be set at this time. This will act as the threshold for tracking purposes, based on the values assigned to each level. Several attributes can be passed as parameters; a full list is available in the Python library documentation, and attributes are chosen according to requirements.
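
A minimal sketch of such a configuration follows; the file name events.log, the mode, the level, and the format string are illustrative choices:

import logging

logging.basicConfig(
    filename="events.log",   # file where events are recorded (hypothetical name)
    filemode="a",            # "a" appends (the default); "w" overwrites on each run
    level=logging.DEBUG,     # threshold: DEBUG and above will be tracked
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

logging.debug("this message is written to events.log")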

One main advantage of logging to a file is that the application doesn’t necessarily need to account for the chance of encountering an error related to the network while streaming logs to an external destination. If any issues do arise when streaming logs over the network, access to those logs will not be lost because they are all stored locally on each server. Another advantage of logging to a file is the ability to create a completely customizable logging setup. Different types of logs can be routed to separate files and then tailed and centralized with a log monitoring service.
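
Routing different types of logs to separate files can be sketched with one FileHandler per destination; the logger and file names below are invented for the example:

import logging

app_logger = logging.getLogger("myapp")
audit_logger = logging.getLogger("myapp.audit")

app_logger.addHandler(logging.FileHandler("app.log"))
audit_logger.addHandler(logging.FileHandler("audit.log"))
audit_logger.propagate = False   # keep audit entries out of app.log

app_logger.warning("written to app.log")
audit_logger.warning("written to audit.log only")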

What Are Python Logging Levels?

The logging modules needed are already a part of the Python standard library, so the IT team just needs to import logging and everything is good to go. The module defines six standard logging levels that indicate the seriousness of an event. These are:

  • Notset = 0: This is the initial default setting of a log when it is created. It is not really relevant and most developers will not even take notice of this category. In many circles, it has already become nonessential. The root log is usually created with level WARNING.
  • Debug = 10: This level gives detailed information, useful only when a problem is being diagnosed.
  • Info = 20: This is used to confirm that everything is working as it should.
  • Warning = 30: This level indicates that something unexpected has happened or some problem is about to happen in the near future.
  • Error = 40: As it implies, an error has occurred. The software was unable to perform some function.
  • Critical = 50: A serious error has occurred. The program itself may shut down or not be able to continue running properly.

Developers can define their own levels, but this is not a recommended practice. The levels in the module have been created through many years of practical experience and are designed to cover all the necessary bases. When a programmer does feel the need to create custom levels, great care should be exercised, especially when developing a library. If multiple library authors all define their own custom levels, the logging output becomes nearly impossible for the developer using those libraries to control or understand, because the same numeric values can mean different things.

How to Configure Python Logging

The logging module developers need is already included in the Python standard library, which means they can implement logging features immediately without installing anything. The quickest way to configure how the logger should behave is with the logging module’s basicConfig() method. However, according to the Python documentation, creating a separate logger for each module in the application is recommended.

Configuring a separate logger for each module can be difficult with basicConfig() alone. That is why most applications will automatically use a system based on a file or a dictionary logging configuration instead.

The three main parameters of basicConfig() are listed below, followed by a combined sketch:

  • Level: The level determines the minimum priority level of messages to log. Messages will be logged in order of increasing severity: DEBUG is the least threatening, INFO is also not very threatening, WARNING needs attention, ERROR needs immediate attention, and CRITICAL means “drop everything and find out what’s wrong.” The default starting point is WARNING, which means that the logging module will automatically filter out any DEBUG or INFO messages.
  • Handler: This parameter determines where to route the logs. Unless a destination is specifically identified, the logging library defaults to a StreamHandler that directs all logged messages to sys.stderr (usually the console).
  • Format: The default setting for logging messages is: <LEVEL>:<LOGGER_NAME>:<MESSAGE>.
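
Putting the three parameters together, a combined sketch might look like this (the handler target and format string are illustrative):

import logging
import sys

logging.basicConfig(
    level=logging.DEBUG,                           # capture DEBUG and above
    handlers=[logging.StreamHandler(sys.stdout)],  # route logs to stdout instead of stderr
    format="%(levelname)s:%(name)s:%(message)s",   # mirrors the default layout
)

logging.debug("visible because the level is DEBUG")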

Since the logging module only captures WARNING and higher-level logs by default, there may be a lack of visibility concerning lower-priority logs that could be useful when a root cause analysis is required.

The main application should be able to configure the logs in the subsystem so that all log messages go to the correct location. The logging module in Python provides a large number of ways that this can be fine-tuned, but for nearly all of the applications, configurations are usually quite simple.

Generally speaking, a configuration will consist of the addition of a formatter and a handler to the root logger. Since this is such a common practice, the logging module is equipped with a standardized utility function called basicConfig that handles the majority of use cases.

The application should configure the logs as early in the process as possible. Preferably, this is the first thing the application does, so that log messages won’t get lost during the startup.

Applications should wrap a try/except block around the main application code so that any otherwise unhandled exceptions are sent through the logging interface rather than straight to stderr, as sketched below.
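
A sketch of that pattern, where main() stands in for the real application entry point:

import logging

logger = logging.getLogger(__name__)

def main():
    ...  # application code goes here (placeholder)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)  # configure logging as early as possible
    try:
        main()
    except Exception:
        # logging.exception logs at ERROR level and appends the full traceback
        logger.exception("Unhandled exception in main")
        raise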

Python Logging Formatting

The Python logging formatter adds context information to enhance the log message. This is very useful when the timestamp, logger name, file name, line number, method, and other information about the log are needed. Adding the thread and process can also be extremely helpful when debugging a multithreaded application.

Here is a simple example of what happens to the log “hello world” when it is sent through a log formatter:

"%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"

turns into:

2018-02-07 19:47:41,864 - a.b.c - WARNING - <module>:1 - hello world
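
Wiring that format string into a logger is a matter of attaching a Formatter to a handler; the logger name a.b.c matches the sample output above, while the timestamp and line number will of course reflect the actual run:

import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
))

logger = logging.getLogger("a.b.c")
logger.addHandler(handler)
logger.warning("hello world")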

String Formatting in Python

With the Python logging formatter, string formatting is made easy.

The old Zen of Python states that there should be “one obvious way to do something in Python.” Now, there are four major ways to do string formatting in Python.

1) The “Old Style” Python String Formatting

Strings in Python have a unique built-in operation that developers can access with the % operator. This allows for quick, simple positional formatting. Those familiar with printf-style functions in C will immediately recognize how this operation works.

For example:

>>> 'Hello, %s' % name
'Hello, Mike'

The %s format specifier tells Python that the value of name should be substituted at this location and represented as a string.

Other format specifiers are available to give the programmer greater control of the output format. For instance, a designer may want to convert numbers to hexadecimal notations or add a little white space padding to create custom formatted tables and reports.

The “old style” string formatting syntax changes slightly when making multiple substitutions in a single string. Since the % operator takes only one argument, the right-hand side must be wrapped in a tuple, as shown below.
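
For example, with hypothetical name and errno values (the same values are reused in the sections below):

>>> name = 'Mike'
>>> errno = 50159747054
>>> 'Hey %s, there is a 0x%x error!' % (name, errno)
'Hey Mike, there is a 0xbadc0ffee error!'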

2) Python 3 Introduces “New Style” String Formatting

Python 3 introduced a new way of doing string formatting that was later backported to Python 2.7. With this “new style” of string formatting, the special % operator syntax is no longer used and the syntax for string formatting is made more regular. Formatting is now handled by calling the .format() method on a string object.

The format() method can be used for simple positional formatting, just like the “old style” of formatting, or with variable substitutions referenced by name, which can then be used in any desired order. People working in DevOps will agree that this is quite a powerful feature because it allows the order of display to be rearranged without changing the arguments passed to format():

>>> 'Hey {name}, there is a 0x{errno:x} error!'.format(
...     name=name, errno=errno)
'Hey Mike, there is a 0xbadc0ffee error!'

This example also demonstrates that the syntax to format an int variable as a hexadecimal string has been altered. What is needed now is to pass a format spec, which is done by adding a “:x” suffix. Instantly, the format string syntax becomes more powerful without making the simpler use cases more complicated.

When using Python 3, the “new style” string formatting is highly recommended and should be preferred over % formatting. Although “old style” formatting is no longer the emphasized approach, it has not been deprecated, and Python still supports it in its latest versions.

Based on discussions on the Python dev mailing list and issues on the Python dev bug tracker, the “old” % formatting is not going away anytime soon; it will still be around for quite some time. Even so, the official Python 3 documentation doesn’t speak too highly of “old style” formatting:

“The formatting operations described here exhibit a variety of quirks that lead to some common errors (such as failing to display tuples and dictionaries correctly). Using the newer formatted string literals or the str.format() interface helps avoid these errors. These alternatives also provide more powerful, flexible, and extensible approaches to formatting text.” Source: Python 3

This is why the majority of developers prefer str.format() for new code. And starting with Python 3.6, there is yet another way to format strings.

3) String Interpolation

With the introduction of Python 3.6, a new way of string formatting was added. This one is called formatted string literals or simply “f-strings.” This new approach to formatting strings allows developers to use embedded Python expressions within string constants. This is a simple example of how this feature feels:

>>> f'Hello, {name}!'
'Hello, Mike!'

It is plain to see that the string constant is prefixed with the letter “f” — that’s why it is called “f-strings.” This powerful new formatting syntax allows programmers to embed arbitrary Python expressions, including complicated math problems. The new formatted string literals created by Python are considered a unique parser feature created to convert f-strings into a series of string constants and expressions. They then get connected to build the final string.
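
For instance, any expression can appear inside the braces, including simple arithmetic:

>>> a, b = 5, 10
>>> f'Five plus ten is {a + b}, not {2 * (a + b)}.'
'Five plus ten is 15, not 30.'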

Look at this greet() function containing an f-string:

>>> def greet(name, question):
...     return f"Hello, {name}! How are you {question}?"
...
>>> greet('Mike', 'today')
'Hello, Mike! How are you today?'

By disassembling the function and inspecting what is happening behind the scenes, it is easy to see that the f-string in the function is being transformed into something similar to what is shown here:

>>> def greet(name, question):
...     return "Hello, " + name + "! How are you " + question + "?"

The real implementation is slightly faster than that because it uses the BUILD_STRING opcode as an optimization. However, functionally speaking, the concept is the same.

4) Template Strings

One more useful tool for string formatting in Python is the template string method. This is a simpler, less powerful mechanism, but in some situations it is exactly the functionality developers are looking for. Look at this simple greeting:

>>> from string import Template
>>> t = Template('Hey, $name!')
>>> t.substitute(name=name)
'Hey, Mike!'

The Template class from Python’s built-in string module is imported, and the template produces the greeting quickly and easily. Template strings are not a core language feature, but they are supplied by the string module in the standard Python library.

Another factor that separates this format from the others is that template strings do not allow format specifiers. This means that for the previous error string example to work, the int error number will have to be manually transformed into a hex-string.
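
Continuing the earlier error example (reusing the hypothetical name and errno values), the conversion has to be done by hand, for instance with hex():

>>> from string import Template
>>> t = Template('Hey $name, there is a $errno error!')
>>> t.substitute(name=name, errno=hex(errno))
'Hey Mike, there is a 0xbadc0ffee error!'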

So, when is it a good idea to use template strings in a Python program? The best case for template strings is when the program has to handle format strings generated by its users. Because they are not very complicated, template strings are often the safer choice in that situation.

Errors and Exception Handling in Python

Syntax or parsing errors are the most common kind of complaint. The parser will repeat the offending line and point to where the error was first detected. These errors can be fixed by simply correcting the offending line.

An exception occurs when an error is detected during execution. A statement may be syntactically correct, but the operation it requests cannot be completed. Exceptions are not unconditionally fatal and can be handled, but the program will not handle the problem automatically; the programmer has to find the offending line and deal with it, or write code that handles the exception.

Alternatively, the program can be written to handle certain predictable exceptions using the try statement. A classic example keeps asking the user to enter a valid integer, while still allowing the user to interrupt the program with Control-C, as sketched below.
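
A sketch of that classic pattern, similar to the example in the official Python tutorial:

while True:
    try:
        x = int(input("Please enter a number: "))
        break
    except ValueError:
        print("That was not a valid integer. Try again...")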

This is how the try statement works:

  • First, the “try clause” (the statements between the try and except keywords) is executed.
  • If no exception occurs, the except clause is skipped and execution of the try statement is finished.
  • If an exception occurs during execution of the try clause, the rest of the clause is skipped. If the exception’s type matches the exception named after the except keyword, the except clause is executed, and execution then continues after the try statement.
  • If an exception occurs that does not match the exception named in the except clause, it is passed on to outer try statements. If no handler is found, it becomes an unhandled exception: execution stops and an error message is displayed.

A try statement will often contain more than one except clause to specify handlers for different exceptions. However, at most one handler will be executed. A handler only handles exceptions that occur in the corresponding try clause, not exceptions raised in other handlers of the same try statement. An except clause may also name multiple exceptions as a parenthesized tuple, as shown in the sketch after this paragraph.
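
For example, one except clause can name a tuple of exception types while another handles a different case; the operation inside the try block is a placeholder that raises ValueError:

import logging

try:
    value = int("not a number")   # placeholder operation that raises ValueError
except (ValueError, TypeError) as err:
    # one except clause naming multiple exceptions as a parenthesized tuple
    logging.error("Conversion failed: %s", err)
except OSError:
    logging.exception("I/O problem")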

Conclusion

Python is such a popular language because it is simple to use, extremely versatile, and offers a large ecosystem of third-party tools. It is versatile enough to accommodate a wide variety of case scenarios, from web applications to data science libraries, SysAdmin scripts, and many other types of programs.

Python logging is simple and well standardized, due to its powerful logging framework built right into the standard library.

Each module simply logs everything to a logger named after the module. This makes it easier for the application to route the log messages of the different modules to the right places.

The applications are then able to choose the best option for configuring the logs. However, in modern infrastructure, following best practices will greatly simplify the entire process.