Logging

Horizon emits two main categories of logs:

  • technical logs, which are emitted to the default log location. They are used to troubleshoot the application and are monitored for bugs;

  • events, which are stored in the database and can be viewed in-app. They allow auditing actions on the platform.

Default log location

The default log location varies depending on your deployment mode:

  • RPM

  • Kubernetes

On RPM deployments, Horizon defines a default rolling file appender named RUN. This appender keeps the technical logs for 30 days in files with the following naming convention:

horizon.log-<yyyy-MM-dd>.log

Those files are available under the /opt/horizon/var/log directory.
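
For reference, a time-based rolling file appender of this kind is typically declared in /opt/horizon/etc/horizon-logback.xml along the following lines. This is an illustrative sketch only; the RUN appender actually shipped with Horizon may use a slightly different pattern:

<!-- Illustrative sketch: the shipped RUN appender may differ (for instance, it also prints a trace ID). -->
<appender name="RUN" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/opt/horizon/var/log/horizon.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- One file per day, following the naming convention above -->
        <fileNamePattern>/opt/horizon/var/log/horizon.log-%d{yyyy-MM-dd}.log</fileNamePattern>
        <!-- Keep 30 days of history -->
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%date{yyyy-MM-dd HH:mm:ss} - [%logger] - [%level] - %message%n%xException{full}</pattern>
    </encoder>
</appender>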

On Kubernetes deployments, Horizon logs are written to stdout by default. It is currently not possible to write logs to any other destination.

Log format

By default, Horizon logs are formatted to be human readable using the following format:

%date{yyyy-MM-dd HH:mm:ss} - [%logger] - [%traceID] - [%level] - %message%n%xException{full}
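
For illustration, a log line rendered with this pattern (reusing the values from the JSON example further below) looks roughly like:

2023-08-16 16:12:54 - [actors.pki.PKIManagerActor] - [b1ccb54c9eb7e493] - [INFO] - [Actor pkimanager] - Registering PKI Queue 'slowed-queue' (cluster wide: 'false')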

This format can be customized:

  • RPM

  • Kubernetes

On RPM deployments, update the appender's <encoder> key in the /opt/horizon/etc/horizon-logback.xml file:

<encoder>
    <pattern>%date{yyyy-MM-dd HH:mm:ss ZZZZ} | %message</pattern>
</encoder>

On Kubernetes deployments, add the following keys to your values.yaml file:

logback:
  pattern: "%date{yyyy-MM-dd HH:mm:ss ZZZZ} | %message"
All the available patterns can be found in the logback docs.
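
For example, a slightly richer pattern that also prints the thread name and a truncated logger name, and which works both in the XML <pattern> element and in the logback.pattern Helm value, could be:

%date{ISO8601} [%thread] %-5level %logger{36} - %message%n%xException{full}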

It’s also possible to configure Horizon to emit JSON structured logs, which are easier for machines to parse. To do so:

  • RPM

  • Kubernetes

On RPM deployments, edit the /opt/horizon/etc/horizon-logback.xml file to either:

  • send the logs to a syslog server:

    An example for a syslog server at 192.168.1.2, where the logs are processed by the LOCAL6 facility:

    <conversionRule conversionWord="syslogStart" converterClass="ch.qos.logback.classic.pattern.SyslogStartConverter"/>
    <appender name="JSON_SYSLOG" class="net.logstash.logback.appender.LogstashUdpSocketAppender">
        <host>192.168.1.2</host>
        <port>514</port>
        <layout class="net.logstash.logback.layout.LogstashLayout">
          <prefix class="ch.qos.logback.classic.PatternLayout">
            <pattern>%syslogStart{LOCAL6}</pattern>
          </prefix>
          <fieldNames>
            <timestamp>time</timestamp>
            <logger>logger</logger>
            <thread>thread</thread>
            <level>severity</level>
            <stackTrace>exception</stackTrace>
          </fieldNames>
          <customFields>{"app":"horizon", "hostname":"${HOSTNAME}"}</customFields>
        </layout>
    </appender>
  • or send the logs to the local console:

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <fieldNames>
              <timestamp>time</timestamp>
              <logger>logger</logger>
              <thread>thread</thread>
              <level>severity</level>
              <stackTrace>exception</stackTrace>
            </fieldNames>
            <customFields>{"app":"horizon", "hostname":"${HOSTNAME}"}</customFields>
        </encoder>
      </appender>

Then, update any logger with the appender ref and ensure that the log level is not OFF:

<logger name="event" level="INFO">
    <appender-ref ref="JSON_SYSLOG"/>
</logger>

On Kubernetes deployments, update the values.yaml file to set the log format:

logFormat: json

Horizon should now start producing JSON logs such as:

{
    "time": "2023-08-16T16:12:54.481+02:00",
    "@version": "1",
    "message": "[Actor pkimanager] - Registering PKI Queue 'slowed-queue' (cluster wide: 'false')",
    "logger": "actors.pki.PKIManagerActor",
    "thread": "application-blocking-io-dispatcher-43",
    "severity": "INFO",
    "level_value": 20000,
    "HOSTNAME": "horizon.evertrust",
    "application.home": "/opt/horizon",
    "kamonSpanId": "c5a74b959971c7ee",
    "kamonTraceId": "b1ccb54c9eb7e493",
    "kamonSpanName": "/ui",
    "app": "horizon",
    "hostname": "horizon.evertrust"
}

Additional loggers

Sometimes, for debugging purposes, you’ll be asked to enable a specific logger or change the logging level of an existing one. To do so:

  • RPM

  • Kubernetes

On RPM deployments, edit the /opt/horizon/etc/horizon-logback.xml file and add or edit the logger you wish to change:

<logger name="<logger name>" level="<log level>">
    <appender-ref ref="<appender name>"/>
</logger>
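
For example, assuming you were asked to debug the PKI manager actor (the logger seen in the JSON sample above) and that you kept the default RUN appender, the entry would be:

<!-- Hypothetical example: verbose logging for the PKI manager actor, written to the RUN appender -->
<logger name="actors.pki.PKIManagerActor" level="DEBUG">
    <appender-ref ref="RUN"/>
</logger>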

On Kubernetes deployments, override the logback.loggers array in the values.yaml file:

logback:
  loggers:
    - name: <logger name>
      level: <log level>
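
The same hypothetical change expressed as Helm values would be:

logback:
  loggers:
    # Example only: raise the PKI manager actor logger to debug
    - name: actors.pki.PKIManagerActor
      level: debug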

Log events to the default log location

Events are produced by Horizon and typically stored in the database. For compliance reasons (for example, when sending logs to an external processor), you might also want to log events to the default log location.

  • RPM

  • Kubernetes

On RPM deployments, change the level of the events logger from OFF to INFO in the /opt/horizon/etc/horizon-logback.xml file:

<logger name="events" level="INFO">
    <appender-ref ref="<appender name>"/>
</logger>
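
For instance, assuming you keep the default RUN file appender described above, the events logger becomes:

<logger name="events" level="INFO">
    <appender-ref ref="RUN"/>
</logger>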

On Kubernetes deployments, override the logback.loggers array in the values.yaml file to add:

logback:
  loggers:
    - name: json_events
      level: info

Sending logs to an external processor

Horizon logs can be sent to an external source (such as a SIEM) using logback.

  • RPM

  • Kubernetes

On RPM deployments, edit the appender named SYSLOG in the /opt/horizon/etc/horizon-logback.xml file and change the syslogHost IP address so it points to your own syslog server. For example, if your syslog server is at 192.168.1.2 and the Horizon logs must be processed by the LOCAL6 facility, the syslog appender should look like this:

<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>192.168.1.2</syslogHost>
    <facility>LOCAL6</facility>
    <suffixPattern>%msg%n</suffixPattern>
</appender>

Then, update any logger with the SYSLOG appender ref and ensure that the log level is set to "INFO":

<logger name="event" level="INFO">
    <appender-ref ref="SYSLOG"/>
</logger>

On Kubernetes deployments, logging should be done by containers to stdout and managed at the cluster level by a log collector such as Grafana Alloy or Vector.
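
As a minimal sketch, a Vector configuration that tails Horizon pod logs and forwards them as JSON to a TCP endpoint could look like the following. The label selector, address and port are assumptions to adapt to your cluster and SIEM:

# vector.yaml - minimal sketch, not shipped with Horizon
sources:
  horizon_pods:
    type: kubernetes_logs
    # Assumed label; match whatever labels your Horizon pods actually carry
    extra_label_selector: "app.kubernetes.io/name=horizon"

sinks:
  siem:
    type: socket
    inputs: ["horizon_pods"]
    mode: tcp
    # Assumed SIEM endpoint
    address: "192.168.1.2:514"
    encoding:
      codec: json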

As an example of a mature and battle-tested log processing pipeline, take a look at our Cloud log management document.