Logging & Metrics
Ververica Platform makes it easy to integrate your third-party logging and metrics services with Apache Flink® and establish a consistent monitoring infrastructure across your streaming application landscape.
As a platform administrator, you can do this by pre-configuring logging profiles and by setting default configurations for Apache Flink® metrics reporters.
Logging Profiles
Logging profiles are named Apache Log4j 2 configuration Twig templates that are available for use in the logging section of Deployment resources.
Templates are rendered as a log4j2.xml file for the Flink jobs that use the respective profile.
Configuration
Ververica Platform ships a default logging profile named default (see Default Logging Profile in the appendix).
Administrators can overwrite the default profile or add additional profiles in the Ververica Platform configuration as shown in the example below.
vvp:
  flinkLoggingProfiles:
    - name: default
      template: |
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <Configuration xmlns="http://logging.apache.org/log4j/2.0/config" strict="true">
          ...
        </Configuration>
    - name: custom
      template: |
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <Configuration xmlns="http://logging.apache.org/log4j/2.0/config" strict="true">
          ...
        </Configuration>
Note that any additional dependencies required by your logging configuration must be provided manually as part of the Flink Docker image.
If you want to reference system properties or environment variables, such as ${sys:log.file}, you need to escape the dollar sign (e.g., ${_:$}{sys:log.file}).
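For example, a rolling file appender that should resolve the log.file system property at runtime would use the escaped form in the platform configuration. This is a minimal sketch, assuming the escape renders to a literal ${sys:log.file} in the generated log4j2.xml (as shown in the default profile in the appendix):

vvp:
  flinkLoggingProfiles:
    - name: custom
      template: |
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <Configuration xmlns="http://logging.apache.org/log4j/2.0/config" strict="true">
          <Appenders>
            <!-- Escaped as described above; assumed to render as ${sys:log.file} in the generated log4j2.xml -->
            <Appender name="RollingFile" type="RollingFile" fileName="${_:$}{sys:log.file}" filePattern="${_:$}{sys:log.file}.%i">
              <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n" type="PatternLayout"/>
            </Appender>
          </Appenders>
          ...
        </Configuration>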
The provided templates must be strings. When configuring the platform via YAML, we recommend using the literal block scalar notation (|), as in the example above.
Placeholders
When configuring a template, you have the following Twig variables available:
- {{ namespace }}: Namespace of the Deployment resource.
- {{ jobId }}: ID of the Job resource.
- {{ rootLoggerLogLevel }}: Log level of the root logger.
- {{ userConfiguredLoggers }}: A key-value map of user-configured log levels (key: logger, value: log level).
Depending on the Deployment Mode, additional Twig variables are available:
- Application mode:
  - {{ deploymentId }}: ID of the Deployment resource.
  - {{ deploymentName }}: Name of the Deployment resource.
- Session mode:
  - {{ sessionClusterID }}: ID of the Session Cluster resource.
  - {{ sessionClusterName }}: Name of the Session Cluster resource.
You can use these variables to customize your configuration template as needed. Please check the Default Logging Profile in the appendix for a full example.
Using the rootLoggerLogLevel and userConfiguredLoggers Twig variables guarantees that the logging section of Deployment resources is reflected in the rendered logging configuration.
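As an illustration, a Deployment whose logging section looks roughly like the sketch below would feed these variables. The loggingProfile and log4jLoggers field names are assumptions about the Deployment schema and are not defined on this page:

spec:
  template:
    spec:
      logging:
        # Selects which logging profile to render (assumed field name)
        loggingProfile: default
        # Assumed to populate {{ userConfiguredLoggers }}; the "" (root) entry feeds {{ rootLoggerLogLevel }}
        log4jLoggers:
          "": INFO
          org.apache.flink: DEBUG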
Example
In the following example, we add a logging profile called kafka that logs to the console and to an Apache Kafka® cluster using the KafkaAppender.
vvp:
  flinkLoggingProfiles:
    - name: kafka
      template: |
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <Configuration xmlns="http://logging.apache.org/log4j/2.0/config" strict="true">
          <Appenders>
            <Appender name="StdOut" type="Console">
              <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n" type="PatternLayout"/>
            </Appender>
            <Kafka name="Kafka" topic="log-example">
              <PatternLayout pattern="%date %message"/>
              <Property name="bootstrap.servers">localhost:9092</Property>
            </Kafka>
          </Appenders>
          <Loggers>
            {%- for name, level in userConfiguredLoggers -%}
            <Logger level="{{ level }}" name="{{ name }}"/>
            {%- endfor -%}
            <!-- Avoid recursive logging -->
            <Logger name="org.apache.kafka" level="INFO"/>
            <Root level="{{ rootLoggerLogLevel }}">
              <AppenderRef ref="StdOut"/>
              <AppenderRef ref="Kafka"/>
            </Root>
          </Loggers>
        </Configuration>
Note that any additional dependencies required by your logging configuration must be provided manually as part of the Apache Flink® Docker image; for example, the KafkaAppender above requires the Kafka client libraries on the classpath.
Metrics
Apache Flink® comes with a comprehensive and flexible metrics system, which covers system metrics provided by the framework itself as well as user-defined metrics.
These metrics can be exposed to external systems using so-called metrics reporters, which are instantiated on each JobManager and TaskManager during startup.
Out of the box, Ververica Platform bundles a number of metrics reporters.
Metrics reporters are configured via the Flink configuration. See the Apache Flink® documentation for the specific configuration options of each reporter.
Example
The following snippet configures a Deployment to expose metrics to Prometheus.
spec:
  template:
    spec:
      flinkConfiguration:
        metrics.reporters: prometheus
        metrics.reporter.prometheus.class: org.apache.flink.metrics.prometheus.PrometheusReporter
        metrics.reporter.prometheus.port: 9249
We recommend using global or namespaced Deployment Defaults to configure metrics reporters for all your Deployments in a single place.
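For instance, the Prometheus reporter above could be configured once in the namespace's Deployment Defaults. The sketch below assumes that the DeploymentDefaults resource accepts the same template.spec.flinkConfiguration structure as a Deployment:

kind: DeploymentDefaults
spec:
  template:
    spec:
      flinkConfiguration:
        # Applied to every Deployment in the namespace unless overridden per Deployment
        metrics.reporters: prometheus
        metrics.reporter.prometheus.class: org.apache.flink.metrics.prometheus.PrometheusReporter
        metrics.reporter.prometheus.port: 9249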
In addition to the Flink metrics, you can also monitor the status sections of the Ververica Platform Deployment and Job resources.
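For illustration, the Deployment status section exposes a lifecycle state that can be watched alongside the Flink metrics; the snippet below is a simplified sketch and omits most status fields:

status:
  # Lifecycle state of the Deployment (illustrative value); other status fields omitted in this sketch
  state: RUNNING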
Web User Interface
For each Deployment, the web user interface links to a metrics and logging dashboard as well as the Flink UI. These links are customizable in the configuration.
vvp:
  ui:
    linkTemplates:
      flinkUi: <link_template>
      jobLogs: <link_template>
      metrics: <link_template>
      deploymentLogs: <link_template>
The link templates can contain placeholders such as <%= jobId %>. The following placeholders are available:
| template | result |
|---|---|
| <%= namespace %> | Namespace |
| <%= deploymentId %> | Deployment ID |
| <%= jobId %> | Latest job ID |
| <%= flinkJobId %> | Latest job ID without hyphens |
| <%= deploymentName %> | Deployment name |
| <%= jobStartDate %> | Job start date |
| <%= jobEndDate %> | Job end date |
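For example, a deploymentLogs link template could point to a log search filtered by the Deployment ID. The URL below is purely hypothetical and only illustrates how placeholders are substituted:

vvp:
  ui:
    linkTemplates:
      # Hypothetical log search URL; <%= namespace %> and <%= deploymentId %> are replaced per Deployment
      deploymentLogs: http://logs.example.com/search?namespace=<%= namespace %>&query=deploymentId:<%= deploymentId %>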
The jobStartDate and jobEndDate placeholders can be used to build time-scoped links to Flink job metrics dashboards in Grafana:
vvp:
  ui:
    linkTemplates:
      metrics: http://localhost:3000/path/to/your/dashboard?from=<%= jobStartDate %>&to=<%= jobEndDate %>
Appendix
Default Logging Profile
Ververica Platform ships a default logging profile named default. The default configuration logs to the console and to a local rolling file whose location is expected to be specified via the system property log.file (for display in the Flink UI).
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Configuration xmlns="http://logging.apache.org/log4j/2.0/config" strict="true">
  <Appenders>
    <Appender name="StdOut" type="Console">
      <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n" type="PatternLayout"/>
    </Appender>
    <Appender name="RollingFile" type="RollingFile" fileName="${sys:log.file}" filePattern="${sys:log.file}.%i">
      <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n" type="PatternLayout"/>
      <Policies>
        <SizeBasedTriggeringPolicy size="5 MB"/>
      </Policies>
      <DefaultRolloverStrategy max="1"/>
    </Appender>
  </Appenders>
  <Loggers>
    <Logger level="INFO" name="org.apache.hadoop"/>
    <Logger level="INFO" name="org.apache.kafka"/>
    <Logger level="INFO" name="org.apache.zookeeper"/>
    <Logger level="INFO" name="akka"/>
    <Logger level="ERROR" name="org.jboss.netty.channel.DefaultChannelPipeline"/>
    <Logger level="OFF" name="org.apache.flink.runtime.rest.handler.job.JobDetailsHandler"/>
    {%- for name, level in userConfiguredLoggers -%}
    <Logger level="{{ level }}" name="{{ name }}"/>
    {%- endfor -%}
    <Root level="{{ rootLoggerLogLevel }}">
      <AppenderRef ref="StdOut"/>
      <AppenderRef ref="RollingFile"/>
    </Root>
  </Loggers>
</Configuration>