
Package org.refcodes.logger

The "refcodes-logger" artifact provides the refcodes logging framework for flexible logging of any data to any data sink (inclucing files, databases or the console provided as alternate implementations).


Package org.refcodes.logger Description

The "refcodes-logger" artifact provides the refcodes logging framework for flexible logging of any data to any data sink (inclucing files, databases or the console provided as alternate implementations).

The "refcodes-logger" artifact supports straight forward, composite (clustering) or partitioning functionality as implementations of the Logger type. The "REFCODES.ORG" runtime logger RuntimeLogger integrates with "SLF4J" seamlessly and also acts as an alternative to to "SLF4J".

The RuntimeLogger implementations are configured with a Logger implementation. In your code, you use the RuntimeLogger type which, depending on how you configure it, logs to a "SimpleDB" cluster, the console (with "ANSI" escape sequence support) or any I/O device as well as to an "SLF4J" logger.

Being an alternative to "SLF4J", the RuntimeLogger's architecture settles upon the much more generic Logger; which actually can be used to log high volume logs of any data type and not being restricted to runtime logs. The RuntimeLogger implementations add functionality not found in other logging frameworks (logging out the class- and method-names of the logging methods without any configuration or additional lines of code)


The Logger is the most plain logger definition, allowing you to log any kind of data as Records (of type Record), each of which can contain any number of Columns, so that you are enabled to log big data of any structured form (e.g. high-volume HTTP requests entering your RESTful microservice).
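To make the idea of structured Records concrete, here is a minimal sketch. The real Record and Column types live in the refcodes-tabular artifact and differ in detail; the column names and the plain-Map stand-in below are purely illustrative assumptions, not the actual API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for a structured Record: a named set of columns
// mapped to values, as opposed to a plain text log line. The real
// Record/Column types of refcodes-tabular are richer than this sketch.
public class RecordSketch {

    // Builds a hypothetical record describing an incoming HTTP request;
    // the column names here are illustrative, not taken from the library.
    public static Map<String, Object> httpRequestRecord(String method, String path, int status, long millis) {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("method", method);
        record.put("path", path);
        record.put("status", status);
        record.put("durationMillis", millis);
        return record;
    }

    public static void main(String[] args) {
        Map<String, Object> record = httpRequestRecord("GET", "/api/items", 200, 12);
        System.out.println(record);
    }
}
```

A Logger implementation would then map such columns to the fields of its physical data sink (e.g. database table columns), as described further below.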


The RuntimeLogger defines a plain simple interface for logging out runtime information generated by software systems. The RuntimeLoggerImpl implementation takes care of logging out the class and the method generating a log line.

The RuntimeLoggerImpl actually takes a Logger instance; which implementation to take is up to you:

(you find the various Logger implementations in the refcodes-logger-alt artifact's modules)

Use the factory RuntimeLoggerFactorySingleton (implemented as a singleton) to retrieve RuntimeLogger instances configured by a "runtimelogger-config.xml" file (using the Apache configurations notation). On how to configure it, take a look at its super-class RuntimeLoggerFactoryImpl.

An SLF4J binding exists in the "refcodes-logger-ext-slf4j" artifact, which binds the refcodes-logger framework to SLF4J, enabling all your SLF4J logs to be logged out by the refcodes-logger framework, e.g. to a SimpleDbLoggerImpl.

The RuntimeLogger may also be configured with one of the loggers mentioned below.

Composite logger:

The CompositeLoggerImpl uses the composite pattern to forward logger functionality to a number of encapsulated logger instances. Depending on the performance (and availability) of an encapsulated logger, the calls to the composite's #log(Record) method are executed by the next encapsulated logger ready for execution.

An invocation of the AbstractCompositeLogger.log(org.refcodes.tabular.Record) method is forwarded to exactly one of the encapsulated Logger instances. The actual instance being called depends on its availability (in case, partitioning is needed, take a look at the PartedLoggerImpl and its sub-classes).

Using the CompositeLoggerImpl, a huge number of Record instances can be logged in parallel by logging them to different physical data sinks (represented by the encapsulated logger instances), thereby avoiding a bottleneck which a single physical data sink would cause for logging.

Internally, a log line queue (holding Record instances to be logged) as well as a daemon thread per encapsulated logger (taking elements from the log line queue) are used to decouple the encapsulated logger instances from the CompositeLoggerImpl.

A given number of retries is attempted in case of an overflow of the log line queue; this happens when the queue is full and no encapsulated logger instance is ready to take the next Record.

To avoid the log line queue building up and eventually causing an out-of-memory condition, log lines that cannot be placed into the (full) log line queue within the given number of retries are dismissed. In such a case a warning with log-level WARN is printed out.
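The queueing scheme described above can be sketched in a few lines. This is a self-contained illustration of the mechanism (bounded queue, one daemon worker per encapsulated logger, bounded retries, dismissal with a WARN), assuming hypothetical class and method names; it is not the real CompositeLoggerImpl.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the composite logger's decoupling: callers enqueue records into a
// bounded queue; each encapsulated logger is drained by its own daemon thread.
public class CompositeLoggerSketch {
    private final BlockingQueue<String> queue;
    private final int maxRetries;

    public CompositeLoggerSketch(int capacity, int maxRetries, int workers) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.maxRetries = maxRetries;
        for (int i = 0; i < workers; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        // The next free worker takes the record and would
                        // forward it to its encapsulated logger instance.
                        String record = queue.take();
                        System.out.println("logged: " + record);
                    }
                } catch (InterruptedException ignored) { /* shut down */ }
            });
            worker.setDaemon(true); // daemon thread per encapsulated logger
            worker.start();
        }
    }

    // Retries the bounded offer; on persistent overflow the record is
    // dismissed and a WARN-level message is printed instead.
    public boolean log(String record) throws InterruptedException {
        for (int i = 0; i <= maxRetries; i++) {
            if (queue.offer(record, 10, TimeUnit.MILLISECONDS)) {
                return true;
            }
        }
        System.err.println("WARN: log line queue full, dismissing record: " + record);
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        CompositeLoggerSketch logger = new CompositeLoggerSketch(8, 3, 2);
        logger.log("record-1");
        Thread.sleep(50); // give the daemon workers a moment to drain
    }
}
```

Dismissing (rather than blocking indefinitely) trades completeness for bounded memory, which matches the out-of-memory avoidance described above.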

Query logger:

The CompositeQueryLoggerImpl extends the CompositeLoggerImpl with the query logger functionality. In contrast to the #log(Record) method, which is forwarded to exactly one of the encapsulated query logger instances, the #findLogs() method calls are forwarded to all of the encapsulated query logger instances (in case partitioning is needed, take a look at the PartedQueryLoggerImpl).

Trim logger:

The CompositeTrimLoggerImpl extends the CompositeQueryLoggerImpl with the trim logger functionality. In contrast to the AbstractCompositeLogger.log(org.refcodes.tabular.Record) method, which is forwarded to exactly one of the encapsulated trim logger instances, the AbstractCompositeTrimLogger.deleteLogs(org.refcodes.criteria.Criteria) and AbstractCompositeTrimLogger.clear() method calls are forwarded to all of the encapsulated TrimLogger instances (in case partitioning is needed, take a look at the PartedTrimLoggerImpl).

Parted logger:

The PartedLoggerImpl is a partitioning logger which encapsulates Logger instances or CompositeLoggerImpl instances (or sub-classes of it) representing partitions.

This means: A partition is regarded to be a dedicated physical data sink or a CompositeLoggerImpl containing Logger instances attached to physical data sinks. A physical data sink may be a database (SQL or NoSQL), a file system or volatile memory (in memory). To be more concrete: A physical data sink may be a domain when using Amazon's SimpleDB, it may be a database table when using MySQL, it may be a HashMap or a List when using in-memory storage.

The Record instances as managed by the Logger instances are mapped to the fields of the physical data sink (e.g. table columns regarding databases).

The Record instances are stored to, retrieved from or deleted from dedicated partitions depending on partitioning Criteria contained in the Record instances (or the query Criteria instances). The Criteria (e.g. the column partition Criteria in a Record) as provided to the #log(Record) method is used by the PartedLoggerImpl to select the partition to be addressed. In case of query operations the query Criteria is used to determine the targeted partition.

(in case no partition can be determined and a fallback logger has been configured, then data may get logged to the fallback logger)

In practice there can be several (composite) logger instances being the partitions of the PartedLoggerImpl, each individually addressed by the partitioning Criteria.

This approach a) helps us scale horizontally per partition when using CompositeLoggerImpl instances per partition and b) helps limit the traffic on those horizontally scaling (composite) logger instances by partitioning the data per Criteria using the parted logger (or its sub-classes): Partitioning simply means switching to the partition defined by the Criteria to perform the according logger operation.

Not having the PartedLoggerImpl (or a sub-class of it) would cause all the traffic for all Criteria to hit just a single (composite) Logger, limiting the possibility to scale endlessly (this one logger would be the bottleneck, even when massively scaled horizontally). In particular this applies to the extended versions of the PartedLoggerImpl, such as the PartedQueryLoggerImpl and the PartedTrimLoggerImpl, where query requests are passed only to the partition which contains the required data: increasing query traffic is parted and does not increasingly hit a single (composite) logger.

A Record to be assigned to a partition must provide a column, the so-called partition column, whose value is used to determine which partition is to be addressed. The partition identifying column is passed upon construction to this PartedLoggerImpl. Specializations may hide this parameter from their constructors and pass their partitioning column from inside their constructor to the super constructor.
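The partition-column dispatch described above can be sketched as follows. This is a self-contained illustration with hypothetical names (the real PartedLoggerImpl works on Record and Logger types from the refcodes artifacts); here a partition is simply modeled as a list, and an optional fallback catches records whose partition cannot be determined.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of parted-logger dispatch: the value found in the configured
// partition column selects the encapsulated logger (partition) that
// receives the record; unmatched records go to a fallback.
public class PartedLoggerSketch {
    private final String partitionColumn;
    private final Map<Object, List<Map<String, Object>>> partitions = new HashMap<>();
    private final List<Map<String, Object>> fallback = new ArrayList<>();

    public PartedLoggerSketch(String partitionColumn, Object... partitionKeys) {
        this.partitionColumn = partitionColumn; // passed upon construction
        for (Object key : partitionKeys) {
            partitions.put(key, new ArrayList<>());
        }
    }

    // Returns which sink took the record (for illustration only).
    public String log(Map<String, Object> record) {
        Object key = record.get(partitionColumn);
        List<Map<String, Object>> sink = partitions.get(key);
        if (sink == null) {
            fallback.add(record); // no partition determined: fallback logger
            return "fallback";
        }
        sink.add(record);
        return "partition:" + key;
    }

    public static void main(String[] args) {
        PartedLoggerSketch parted = new PartedLoggerSketch("tenant", "a", "b");
        Map<String, Object> record = new HashMap<>();
        record.put("tenant", "a");
        System.out.println(parted.log(record)); // prints partition:a
    }
}
```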

The PartedQueryLoggerImpl extends the PartedLoggerImpl with the functionality of a query logger. Any query operations, such as #findLogs(Criteria), are targeted at that partition containing the queried data. For this to work, the query must obey some rules:

The query is to contain an EqualWithCriteria addressing the partition in an unambiguous way, by being part of a root-level AndCriteria or an unambiguously nested AndCriteria hierarchy. More than one partition may be detected when OrCriteria are applied to the partition criteria. In such cases, the query is addressed to all the potential partitions.

In case it was not possible to identify any partitions, then as a fallback, all partitions are queried.

Query results are taken from the invoked partitions (in normal cases this would be a single partition) round robin: the first result is taken from the first queried partition's result set (Records), the next result from the next queried partition, and so on, starting over again with the first queried partition. Round robin has been used to prevent invalidation of the physical data sinks' result sets due to timeouts.
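The round-robin merging described above can be sketched as follows. This is a self-contained illustration over plain lists, assuming hypothetical names; the real implementation interleaves the partitions' live result-set cursors so that no cursor sits idle long enough to time out.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of round-robin result merging: one result is taken from each
// queried partition's result set in turn until all sets are exhausted.
public class RoundRobinMerge {

    public static <T> List<T> merge(List<List<T>> partitionResults) {
        List<Iterator<T>> cursors = new ArrayList<>();
        for (List<T> results : partitionResults) {
            cursors.add(results.iterator());
        }
        List<T> merged = new ArrayList<>();
        boolean progressed = true;
        while (progressed) {
            progressed = false;
            for (Iterator<T> cursor : cursors) { // visit each partition in turn
                if (cursor.hasNext()) {
                    merged.add(cursor.next());
                    progressed = true;
                }
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        // Two partitions' result sets, interleaved round robin.
        System.out.println(merge(List.of(List.of("a1", "a2"), List.of("b1")))); // prints [a1, b1, a2]
    }
}
```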

The PartedTrimLoggerImpl extends the parted query logger with the functionality of a trim logger. Delete operations with a query such as #deleteLogs(Criteria) are applied to the partitions in the same manner as done for #findLogs(Criteria).


Copyright © 2017. All rights reserved.