In scientific and research computing, especially in high-performance computing where many jobs run under a batch system, it’s a good idea to include logging in your libraries and tools. If you’re running many tasks that must occur in a particular sequence, especially when multiple such workflows run asynchronously across more than one machine, detailed logs that can be reviewed (manually or automatically by another tool) are a powerful way of managing workloads.


A log is a record of some event or action. In software, logging is the act of keeping a record of the events and actions done by a program. There are two defining characteristics that distinguish logging from simple print statements. Messages emitted by the logger can automatically have additional metadata attached. Further, messages should be filterable based on severity such that messages below a given level of severity can be suppressed.

Several common attributes (metadata) are typically included in logging messages. Which ones you should include in your logging setup is a matter of use-case and preference. The following are things you should consider.

Logging Level

As stated, one of the core features of logging is the ability to control which messages are printed by category. These categories usually refer to the severity of the message. Every logging level has both a numeric value and a name.

Conventionally you have a subset or a superset of the following levels.

  1. DEBUG - Detailed information typically used when diagnosing problems.

  2. INFO - General-purpose messages for tracking progress.

  3. WARNING - Something unexpected has occurred and may be important.

  4. ERROR - A problem has occurred and a task cannot be completed.

  5. CRITICAL - The code or application must halt.

These particular levels are not required; they are merely conventional. As we’ll see in a moment, Python supports these levels out of the box. Some applications include additional levels for finer-grained control. Completely different schemes are not unheard of either; a common approach for simple tools is to have just OK/ERR messages instead.

The important feature is that these levels form an ordered set. The numeric value of the level is what is filtered against. Setting a logging level of WARNING (level 3 above) means that levels below it (DEBUG and INFO) will be suppressed. This is meant to be a run-time option that lets you keep the statements in your code but switch them off when not needed.
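For instance, in Python’s standard library these conventional levels map to the numeric values 10 through 50, and the filter is a simple numeric comparison; a minimal sketch:

```python
import logging

log = logging.getLogger('sketch')
log.addHandler(logging.StreamHandler())  # default format is just the message
log.setLevel(logging.WARNING)            # numeric value 30

log.debug('suppressed')                  # 10 < 30, filtered out
log.warning('something unexpected')      # 30 >= 30, printed to stderr
```

Running this prints only the warning, since the debug record’s value (10) falls below the threshold (30).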

When printing statements, the name of the logging level (often in all-caps) is typically included before the message.

INFO - all done

Logging Output

There are many possible targets for log output. Often, logs are simply printed to the console (or redirected from it), or written as plain text to a file. You might imagine sending logs to another service over the network, or even to a database. These options are not mutually exclusive.


Many software engineers have very strong opinions about what to do with logging output, how it should be formatted, and where it should go. We’ll stick to the basics and not worry about that too much for now.

We’ll assume that our messages should be printed as text in some controllable fashion. As discussed in the previous section on command-line interfaces, you should print messages to stderr. That covers messages going to the console; you might also allow the user to specify a file path to write the messages to instead.
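As a sketch of that choice, here is a hypothetical helper (the name `make_handler` is ours, not part of the standard library) that targets stderr by default but a file when the user supplies a path:

```python
import sys
import logging


def make_handler(path: str = None) -> logging.Handler:
    """Return a file handler if a path is given, else a stderr stream handler."""
    if path is not None:
        return logging.FileHandler(path)
    # StreamHandler writes to stderr by default; we pass it explicitly for clarity
    return logging.StreamHandler(sys.stderr)
```

Both handler types accept the same formatter, so the rest of the logging setup is unchanged regardless of the target.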

What to Include

In addition to the logging level, you might want to include a few other things. Some good ones include the timestamp for the message, a topic, and a hostname.

Timestamp — You should include a timestamp for your message so you have a record of when each event occurred. If you are running more than one workflow at a time, from more than one program, it helps if they all log timestamps so you can merge their output and reconstruct the order of events. In this regard, it’s a good idea to put the timestamp at the very beginning of the message so that you can combine output from multiple sources and sort it with standard command-line tools.
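For example, because a `YYYY-MM-DD HH:MM:SS` timestamp leads each line, interleaving logs from two machines is just a lexicographic sort (the hostnames and messages below are invented for illustration):

```python
# log lines collected from two different machines
machine_a = ['2020-07-26 15:35:49 node-a INFO [python201] task started',
             '2020-07-26 15:46:09 node-a INFO [python201] task finished']
machine_b = ['2020-07-26 15:41:33 node-b INFO [python201] task started']

# 'YYYY-MM-DD HH:MM:SS' sorts chronologically as plain text,
# so a plain sort interleaves the two sources in time order
merged = sorted(machine_a + machine_b)
```

The same property is why `sort machine_a.log machine_b.log` works from the command line.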

Topic — You should include a topic in your message so that you can associate it with some part of your library. It’s not always the case, but often this is done by tagging all your messages by which module and/or function it came from. This can be done manually in the messages themselves, or automatically by the logging framework you are using. This can be very helpful in tracking down messages pertaining to a particular part of your code.

Hostname — You might want to include the hostname of the machine you are running on. This is not as common as other attributes, but in scientific computing the chances are high that you are running your workflow on many machines that are not your laptop. It might be useful to know what machine a particular task ran on when analyzing your logs. This is easy to include automatically in most logging frameworks.

Logging in Python

We’ve covered a lot of details here, but as with the previous section on command-line interfaces, Python has you covered with a module from the standard library.


The logging module is quite comprehensive and lets you heavily customize many parts of its behavior. It is pretty straightforward to implement your own logging functionality, but unless you’re doing something special, why not use the standard library?

In [1]: import logging

In [2]: log = logging.getLogger('python201')

In [3]: log
Out[3]: <Logger python201 (WARNING)>

Here we’ve started with a name (usable as a topic) and by default our logging level is WARNING. Our filter works, but our formatting is simply the message for now.

In [4]: log.info('hello, world')

In [5]: log.warning('uh oh')
uh oh

We can alter the formatting of the messages by creating a formatter object. The formatter is an attribute of a handler, and a logger object has zero or more handlers that are each free to deal with messages in their own way. We’ll set up a stream handler that writes to stderr and add it to our logger.

In [6]: handler = logging.StreamHandler()

In [7]: handler
Out[7]: <StreamHandler <stderr> (NOTSET)>

In [8]: formatter = logging.Formatter('%(asctime)s %(levelname)s [%(name)s] %(message)s',
   ...:                               datefmt='%Y-%m-%d %H:%M:%S')

In [9]: handler.setFormatter(formatter)

In [10]: log.addHandler(handler)

In [11]: log.warning('oh my')
2020-07-26 15:35:49 WARNING [python201] oh my

We can change the level if we like, to allow lower-severity messages to come through.

In [12]: log.setLevel(logging.INFO)

In [13]: log.info('hello')
2020-07-26 15:41:33 INFO [python201] hello

There is a way to have the logging library add an attribute so you can include the hostname in the message as a named field, like %(hostname)s. This is a bit cumbersome and makes things more complicated than we’d like at this point. An easy hack though is to literally just code it in.

In [14]: import socket

In [15]: HOST = socket.gethostname()

In [16]: HOST
Out[16]: 'my_machine.local'

In [17]: formatter = logging.Formatter(f'%(asctime)s {HOST} %(levelname)s [%(name)s] %(message)s',
    ...:                               datefmt='%Y-%m-%d %H:%M:%S')

In [18]: handler.setFormatter(formatter)

In [19]: log.info('look at this')
2020-07-26 15:46:09 my_machine.local INFO [python201] look at this


These attributes (e.g., hostname, level, topic/name) are easily grep-able, and by putting the timestamp first we can ensure ordering simply by sorting our log messages alphanumerically.
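For reference, the more formal route mentioned earlier is to attach a logging.Filter that injects the attribute onto each record, making %(hostname)s available as a named field; a minimal sketch:

```python
import socket
import logging


class HostnameFilter(logging.Filter):
    """Attach the hostname to every record so '%(hostname)s' can be formatted."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.hostname = socket.gethostname()
        return True  # never suppress a record, only annotate it


handler = logging.StreamHandler()
handler.addFilter(HostnameFilter())
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(hostname)s %(levelname)s [%(name)s] %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'))
```

A filter attached to a handler runs on every record that handler processes, so the annotation happens automatically; the f-string hack above is simply less machinery.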

Module Level Logging

Typical behavior for a logging framework (as Python provides) is for Logger instances to understand their place in a hierarchy, usually mirroring that of the modules in a package. In our case, we’ve initialized the “python201” Logger. You might imagine us initializing a “python201.algorithms” Logger within that module. The ‘dot’ in that name demarcates the levels in the hierarchy, and the child logger automatically grabs a reference to its parent logger, “python201”.

When a child logger is invoked and the message passes that logger’s filter, the message is handed not only to its own handlers but is also propagated up to its parent’s handlers. So configuring the “python201” logger is entirely sufficient for the project.

In [20]: child_logger = logging.getLogger('python201.algorithms')

In [21]: child_logger
Out[21]: <Logger python201.algorithms (INFO)>

In [22]: child_logger.parent.handlers
Out[22]: [<StreamHandler <stderr> (NOTSET)>]

In [23]: child_logger.info('look at this too')
2020-07-26 15:46:28 my_machine.local INFO [python201.algorithms] look at this too


A child logger whose level is unset defers to its parent’s effective level, so setting the level on a parent logger effectively applies to its children as well.
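This deferral is easy to verify: a child logger whose own level is NOTSET reports its parent’s level as its effective level.

```python
import logging

parent = logging.getLogger('python201')
child = logging.getLogger('python201.algorithms')

parent.setLevel(logging.DEBUG)

# the child has no level of its own (NOTSET == 0), so when filtering it
# walks up the hierarchy and uses its parent's level instead
assert child.level == logging.NOTSET
assert child.getEffectiveLevel() == logging.DEBUG
```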

Updating the Code

Let’s update our code to include logging. When you are developing a Python package, you don’t want to duplicate code in each module. Instead, let’s create some generic code in a logging module. We could just add it right alongside our existing module, but it might be a good idea to create a distinct core sub-package for generic code that is separate from our main business-logic. That way, we can add things later (like configuration file handling) and not clutter our top-level API.

$ tree .
├── docs/
│   ├── build/
│   │   └── ...
│   ├── source/
│   │   ├── conf.py
│   │   ├── manpage.rst
│   │   └── index.rst
│   ├── Makefile
│   └── make.bat
├── python201/
│   ├── __init__.py
│   ├── algorithms.py
│   └── core/
│       ├── __init__.py
│       └── logging.py
├── README.rst
└── setup.py

Again, the __init__.py file is a signal that this folder is a package (or sub-package). The find_packages function we used in our setup.py will automatically pick this up so we don’t need to worry about modifying any code there.
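If you want to convince yourself, find_packages reports any directory containing an __init__.py; here is a small sketch that rebuilds the layout above in a temporary directory:

```python
import os
import tempfile
from setuptools import find_packages

# build a throwaway tree mirroring the package layout above; the
# __init__.py files are what mark each directory as a (sub-)package
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, 'python201', 'core'))
    open(os.path.join(root, 'python201', '__init__.py'), 'w').close()
    open(os.path.join(root, 'python201', 'core', '__init__.py'), 'w').close()
    found = find_packages(root)

print(sorted(found))  # ['python201', 'python201.core']
```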

Our logging module might look something like this.

from socket import gethostname
from logging import (getLogger, NullHandler, StreamHandler, Formatter,
                     DEBUG, INFO, WARNING, ERROR, CRITICAL)

HOST = gethostname()
handler = StreamHandler()
formatter = Formatter(f'%(asctime)s {HOST} %(levelname)s [%(name)s] %(message)s',
                      datefmt='%Y-%m-%d %H:%M:%S')
handler.setFormatter(formatter)

# NOTE: NullHandler so importing the library alone produces no output
logger = getLogger('python201')
logger.addHandler(NullHandler())

levels = {'debug': DEBUG, 'info': INFO, 'warning': WARNING,
          'error': ERROR, 'critical': CRITICAL}

def initialize_logging(level: str) -> None:
    """Initialize the top-level logger with the stream handler and a `level`."""
    if handler not in logger.handlers:
        logger.addHandler(handler)
    logger.setLevel(levels[level])
We want to ensure that this code is executed automatically and immediately when importing from the package, so we can import from here in our top-level __init__.py module.

from .core import logging  # force initialization

Now let’s modify the algorithms module to make use of our logging module.

import sys
from typing import List
from argparse import ArgumentParser, FileType

from .core.logging import getLogger, initialize_logging
log = getLogger(__name__)

def cumulative_product(array: List[float]) -> List[float]:
    """Compute the cumulative product of an array of numbers.

    Args:
        array (list): An array of numeric values.

    Returns:
        result (list): A list of the same shape as `array`.

    Example:
        >>> cumulative_product([1, 2, 3, 4, 5])
        [1, 2, 6, 24, 120]
    """
    result = list(array)
    for i, value in enumerate(array[1:]):
        result[i+1] = result[i] * value
    sample = '[]' if not result else f'[..., {result[-1]:g}]'
    log.debug(f'cumulative_product: length-{len(result)} array {sample}')
    return result

def main(argv: List[str] = None) -> int:
    """Command-line entry-point for `cumulative_product`."""

    # command-line interface
    description = 'Compute the cumulative product of an array of numbers.'
    parser = ArgumentParser(prog='cumprod', description=description)
    parser.add_argument('-v', '--version', action='version', version='0.0.1')
    parser.add_argument('infile', metavar='FILE', nargs='?', default=sys.stdin,
                        type=FileType(mode='r'),
                        help='input file path (default <stdin>)')
    parser.add_argument('-o', '--output', dest='outfile', metavar='FILE',
                        default=sys.stdout, type=FileType(mode='w'),
                        help='output file path (default <stdout>)')
    parser.add_argument('-l', '--last-only', action='store_true',
                        help='only keep the last value')

    parser.add_argument('-d', '--debug', action='store_true',
                        help='show debugging messages')
    cmdline = parser.parse_args(argv)

    # initialize logger for console output
    initialize_logging('debug' if cmdline.debug else 'warning')

    values = map(float, cmdline.infile)
    result = cumulative_product(list(values))

    # '%g' formatting automatically pretty-prints
    start = -1 if cmdline.last_only else 0
    print('\n'.join([f'{value:g}' for value in result[start:]]), file=cmdline.outfile)
    return 0


If you’ve not seen that syntax before, the import statement for our logging module is called a relative import. The leading dot refers to the current package; two leading dots would signify one level up, and so on. This way, we need not hard-code the name of our package.

Let’s reinstall our package and try it out.

$ pip install . --upgrade
$ cumprod <(seq 10) --last-only
3.6288e+06
$ cumprod <(seq 10) --last-only --debug >results.txt
2020-07-26 20:21:22 my_machine.local DEBUG [python201.algorithms] cumulative_product: length-10 array [..., 3.6288e+06]