Log Boss

Log Boss is a logging module that gives you control over what you log and how you view it. It has handy tools to filter logs, tag logs, persist logs in memory and on disk, and collect information about log events to make it easier to understand what is going on in your code. It can store logs to a text file and generate an HTML log file for a prettier view. It is thread-safe and even groups information by thread execution.

Basic Logging

Getting Started

The logger is a thread-safe, singleton instance of Logger that gives its consumers the ability to create customized logs, persist logs in memory and on disk, set rules to filter logs, and much more! The logger has two main endpoints: the console (or a redirected stream) and memory (persistence). Persistence in memory is how Log Boss passes persisted logs to the HTML log generator to create a pretty view of the logs. Note that persistence can consume a lot of memory.

The recommended convention is to have a logger.py file that defines the logger that is consumed throughout your project.

logger.py
from logboss import Logger

logger = Logger()

The logger must be started in order to log events; by default it is disabled. The logger should only be started by the main script.

from mypackage.logger import logger  # from above

# Set logging options here.
logger.start()

Log Boss uses the same logging level concepts as Python’s built-in logging module, namely DEBUG, INFO, WARNING, ERROR, and CRITICAL. Each log level has an associated value, which Log Boss imports from the built-in logging module. Each log level value is an integer threshold that allows the logger to determine what to log and what to ignore. This is particularly useful when setting rules. Here are the basics of using the different log levels:

from mypackage.logger import logger

logger.debug('This is a debug message!')
logger.info('This is an info message!')
logger.warning('This is a warning message!')
logger.error('This is an error message!')
logger.critical('This is a critical message!')

Automatic Logging

There’s nothing more annoying than counting on developers and contributors to log information about what their code is doing. Log Boss can help with its class and function wrappers.

Wrapping Functions

Log Boss will log both the inputs and outputs of a function wrapped with logger.wrap_func(). The inputs and outputs can optionally be masked, either entirely or based on case-insensitive regular expressions.

from mypackage.logger import logger


class ObjectA:
    # Placeholder class so this example is self-contained.
    def __init__(self, value):
        self.value = value

# None of these arguments are required unless you are writing a staticmethod or classmethod.
@logger.wrap_func(
    mask_input_regexes=['In.*b$'], # This will mask input_b.
    mask_output=False, # This is the default. If True the entire output would be masked.
    mask_output_regexes=['Nested.*$'], # This will mask Nested B.
    is_staticmethod=False, # This is the default. Set to True if it should be a staticmethod.
    is_classmethod=False, # This is the default. Set to True if it should be a classmethod.
)
def some_function(input_a: int, input_b: str = None):
    return {
        'A': input_a,
        'B': {
            'Nested B': input_b
        },
        'ObjectA': ObjectA(input_a)
    }
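
Calling the wrapped function then logs the call and its result with the masks applied:

some_function(1, input_b='super secret')  # input_b is masked in the logged inputs; 'Nested B' in the output.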

Wrapping Classes

Log Boss will apply the logger.wrap_func() wrapper to all method and function types within a class. Methods and functions can be excluded based on a single regular expression. Regular expressions can also be used to mask inputs and outputs for each entity wrapped. Here, the is_staticmethod and is_classmethod arguments to logger.wrap_func() are handled for you dynamically because the logger can inspect the methods and functions bound to the class object.

from mypackage.logger import logger


class SomeParentClass:
    # Placeholder parent class so this example is self-contained.
    ...

@logger.wrap_class(
    func_regex_exclude='_.*',  # This will exclude all underscore methods and functions in the class.
    mask_input_regexes=['In.*(a|b)$'], # This will mask input_a and input_b, but not input_c.
    mask_output=False, # This is the default. If True the entire output would be masked.
    mask_output_regexes=['password'], # This will mask password.
)
class MyClass(SomeParentClass):
    def __init__(self, *args, **kwargs):
        super().__init__()

    def some_method(self, input_a): ...

    @classmethod
    def some_classmethod(cls, input_b): ...

    @staticmethod
    def some_staticmethod(input_c): ...

    def _some_private_method(self): ...

Log messages are created with the format [TAG NAME] <Timestamp>: <Message>. At this time only the Timestamp can be altered. It must be a format supported by Python’s strftime.
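
For example, assuming the timestamp format is exposed as an attribute on the logger (the timestamp_format name below is an assumption, not confirmed API), changing it might look like this:

from mypackage.logger import logger

# Hypothetical attribute name; any strftime-compatible format string should work.
logger.timestamp_format = '%Y-%m-%d %H:%M:%S'
logger.info('Logged with the new timestamp format.')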

Custom Log Tags

There are five basic log tags that are used by default: DEBUG, INFO, WARNING, ERROR, and CRITICAL. However, wouldn’t it be nice to tag a log message for what it represents? Perhaps a tag for the data access layer and another for the business layer. Whatever the case, custom tags can be created to supplement the built-in tags. It is recommended to define the custom log tags where the logger is instantiated. Optionally, you may wish to change the logger’s default tag to one other than INFO. You can do that, but it would be respectful not to if you intend to publish your project for others to use. It is, however, handy for projects maintained by a team or an application intended to have a “main” script.

logger.py
from logboss import Logger, LogTagTypes

# The name is simply a name for the tag and is only used for display.

# The value is used to assign an integer value to the tag within the scope of LogTagTypes.<type>.
# In this example the log tag value is 15 (DEBUG + 5). The value must be between 0 and 9 because the
# next tier is INFO, which starts at 20. The value should rarely, if ever, be needed.
class MyCustomLogTags:
    dal = LogTagTypes.Debug(name='Data Access Layer', value=5)
    bl = LogTagTypes.Info(name='Business Layer')

logger = Logger()
logger.add_log_tags(MyCustomLogTags)
logger.default_log_tag = MyCustomLogTags.bl  # Now using logger.log() will use this tag by default.

You can now use these custom log tags in another script. All logger methods that accept a log tag will fall back to the default log tag when one is not given. BEWARE! Because the logger is a singleton instance, modifying the default log tag will affect other consumers of the logger. Only do this if your code is controlled by some main script or is not intended for other applications to consume.

from mypackage.logger import logger, MyCustomLogTags

logger.log('This is logged with the Business Layer log tag by default!')
logger.log('This is logged with the Data Access Layer log tag.', log_tag=MyCustomLogTags.dal)
logger.info('This still uses the INFO tag and cannot be replaced.')

Logging Rules And Modes

Rules

By default the logger is disabled, meaning logging is off for all running threads. Using the disable switch to temporarily toggle logging is highly discouraged; it should only be used if the intent is to completely silence all log events. The best way to temporarily silence logs (say, for the sake of limiting verbosity) is to set rules.

There are two different ways to set rules: set_rule() and rule(). The latter is the same as the former, except that it is a context manager. Setting rules allows you to control which logs are ignored. There are a few options when creating a rule:

  • min_level: One of ‘debug’, ‘info’, ‘warning’, ‘error’, or ‘critical’, or an integer representing the minimum log tag value threshold. Specifying ‘warning’, for example, will only allow logs with a tag value greater than or equal to WARNING. Logs below the threshold are blacklisted from both endpoints.

  • blacklist_function: A function that accepts a single parameter (the log tag in question) and returns either a tuple of two booleans (blacklist from the console, blacklist from persistence) or a single boolean (blacklist from both endpoints). See the sketch after this list.

  • reset: Reset the rule to the default rule.
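
As a minimal sketch of a blacklist function, the following silences anything below INFO on the console while persisting everything. The log_tag.value attribute is an assumption based on how tag values are assigned in Custom Log Tags:

from mypackage.logger import logger

def quiet_console(log_tag):
    # Assumed attribute: the tag's integer value, mirroring the value= argument used when defining tags.
    blacklist_console = log_tag.value < 20  # INFO starts at 20.
    blacklist_persistence = False
    return blacklist_console, blacklist_persistence

logger.set_rule(blacklist_function=quiet_console, why='Keep the console quiet.')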

There is a configurable default_rule, which by default allows all logs to both endpoints. When the rule() context manager exits, the rule is reset.

from mypackage.logger import logger

with logger.rule(min_level='warning', why='Cuz I wanna.'):
    logger.info('This will not be logged.')
    logger.warning('This will be logged.')
    logger.error('This will also be logged.')
logger.info('This will be logged now.')

logger.set_rule(min_level='warning', why='Cuz I wanna.')
# Repeat the context above.
logger.set_rule(reset=True)

Rules are handled per thread, where the main thread is in ultimate control. This means that if a rule is set in the main thread, then all threads have that same rule applied to them. This presents a limitation with asynchronous code where a thread is started but not joined until after the main thread changes the rule.
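
As a rough sketch of this behavior (reusing the logger module from Getting Started), a rule set in the main thread also filters logs emitted by worker threads:

import threading

from mypackage.logger import logger

def worker():
    logger.info('Suppressed: below the minimum level set by the main thread.')
    logger.error('Logged: meets the minimum level.')

# A rule set in the main thread applies to every thread.
logger.set_rule(min_level='error', why='Quiet the workers.')
thread = threading.Thread(target=worker)
thread.start()
thread.join()
logger.set_rule(reset=True)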

Modes

There is also a method by which you can set the mode of the logger. Possible options are:

  • all: All endpoints (both console and persistence) are permitted.

  • console: Logs will only be shown on the console or redirected output stream.

  • persistence: Logs will only be stored in memory to be parsed later.

Modes can easily be replaced by the more robust rules, but they are a quick method for controlling logging to each of the endpoints. Similar to rules, each thread can set a mode for logging within that thread, but the main thread controls logging modes across all threads.

The default mode is all.

from mypackage.logger import logger

logger.set_mode('persistence')
logger.info('This message will not appear in stdout. It will only appear in memory.')

Redirecting Logs

Logs can be redirected from the console to a file without affecting the program’s overall console output. This should also be handled in the same place the logger is created.

logger.py
from logboss import Logger

logger = Logger()
logger.redirect('/path/to/my/text/file.txt') # The entire path is created if it doesn't exist.

# As a test...
logger.info('This will appear in the text file instead of stdout.')

Generating HTML Logs

Log Boss offers an HTML generator that parses the logs persisted in memory in order to visualize them. It comes with a few handy options:

  • log_file: The location to dump the HTML log file. The path is created if it doesn’t already exist.

  • title: The title of the log file.

  • include_code: If True, the code associated with the logs is imported and dynamically loaded into the HTML view. Do not set this if compiling with PyInstaller or a similar tool.

  • datetime_range: A tuple of (start_time, end_time) that specifies which logs to compile.

  • exclude_files: A list of regular expressions that specify which files to exclude. Ignored if include_code is False.

In order to generate the HTML log file you must provide the generator with the logs. There are two ways to do this.

Using the context manager.

from mypackage.logger import logger

with logger.generate('html', log_file='/path/to/my/log.html', title='My Application Logs'):
    # Do some stuff here.
    ...
logger.info('HTML created. This log is not included in the html.')

Explicitly calling the HTML generator.

from mypackage.logger import logger
from logboss import HtmlLogGenerator

do_my_main_tasks_that_log_stuff()
logs = logger.get_logs()
generator = HtmlLogGenerator(logs)
# Optionally include code snippets from files except from any files with "secret" in the name.
generator.generate(
    log_file='/path/to/my/log.html',
    title='My Application Logs',
    include_code=True,
    exclude_files=['.*secret.*']
)
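
The datetime_range option narrows which logs are compiled. As a minimal sketch, assuming it accepts a (start_time, end_time) tuple of datetime objects:

from datetime import datetime, timedelta

from mypackage.logger import logger
from logboss import HtmlLogGenerator

# Compile only the logs from the last hour (datetime objects are an assumption here).
end_time = datetime.now()
start_time = end_time - timedelta(hours=1)
generator = HtmlLogGenerator(logger.get_logs())
generator.generate(
    log_file='/path/to/my/log.html',
    title='Last Hour Of Logs',
    datetime_range=(start_time, end_time),
)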