Apache Flink® Pod Templates¶

This page covers how to customize the Kubernetes Pod Templates of the Flink Pods managed by Ververica Platform as part of your Deployment.

The Deployment Template section of your Deployment provides a kubernetes attribute that allows you to specify Kubernetes-specific options.

  • Overview
  • Annotations
  • Labels
  • Node Selector
  • Affinity
  • Tolerations
  • Image Pull Secrets
  • Volume Mounts
    • Example: Mounting an NFS and Secret
  • Environment Variables
    • Example: Using a Custom Log Configuration File
  • Pod Security Context

Overview¶

You can specify these options for all Pods (JobManager and TaskManagers) created for Flink jobs.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          annotations:
            key: value
          labels:
            key: label
          nodeSelector:
            key: value
          affinity:
            KubernetesAffinity
          tolerations:
            - KubernetesToleration
          imagePullSecrets:
            - name: name
          volumeMounts:
            - name: name
              volume: KubernetesVolume
              volumeMount: KubernetesVolumeMount
          envVars:
            - name: name
              value: value
          securityContext:
            KubernetesPodSecurityContext

Annotations¶

You can attach annotations to Pods created for Flink jobs via pods.annotations.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          annotations:
            key: value

Provided annotations will be added to the metadata section of created Pods.
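
As a concrete sketch, the following annotations mark the Pods for scraping by a Prometheus setup. The annotation keys and the port are assumptions about your monitoring stack (a scrape configuration that honors these annotations and a Flink metrics reporter listening on port 9249), not requirements of Ververica Platform.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          annotations:
            # Example only: assumes a Prometheus scrape config that honors
            # these annotations and a metrics reporter on port 9249.
            prometheus.io/scrape: "true"
            prometheus.io/port: "9249"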

Labels¶

You can attach labels to Pods created for Flink jobs via pods.labels.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          labels:
            key: value

All provided labels will be added to the metadata.labels section of created Pods and are subject to the restrictions enforced by the Kubernetes API (https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).

Note

The label keys app, system, deploymentId, jobId and component are reserved by Ververica Platform.
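
For illustration, a hypothetical, non-reserved label key such as environment can be attached and then used to select the resulting Pods:

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          labels:
            # Arbitrary, non-reserved example key and value.
            environment: production

The Flink Pods of the Deployment can then be listed with a standard label selector:

kubectl get pods -l environment=production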

Node Selector¶

You can attach a node selector to Pods created for Flink jobs in order to constrain Pods to run on particular nodes via pods.nodeSelector.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          nodeSelector:
            key: value
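
As a sketch, assuming some of your nodes carry a hypothetical disktype=ssd label (applied beforehand with kubectl label nodes <node-name> disktype=ssd), the following constrains the Flink Pods to those nodes:

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          nodeSelector:
            # Assumes nodes labeled with disktype=ssd.
            disktype: ssd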

Affinity¶

You can attach an affinity to Pods created for Flink jobs in order to constrain Pods to run on particular nodes via pods.affinity.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          affinity:
            KubernetesAffinity

KubernetesAffinity is expected to be of type Affinity. The provided affinity will only be validated when the actual job is created and not when creating/modifying the resource.
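
For illustration, a minimal sketch using a node affinity that requires nodes in a particular zone; the well-known topology.kubernetes.io/zone label and the zone value are assumptions about your cluster:

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      # Zone label and value are examples; adjust to your cluster.
                      - key: topology.kubernetes.io/zone
                        operator: In
                        values:
                          - eu-west-1a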

Tolerations¶

You can attach tolerations to Pods created for Flink jobs in order to ensure that Pods are not scheduled onto inappropriate nodes via pods.tolerations.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          tolerations:
            - KubernetesToleration

KubernetesToleration is expected to be of type Toleration. The provided tolerations will only be validated when the actual job is created and not when creating/modifying the resource.
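
As an example sketch, assuming a node pool that has been tainted with a hypothetical dedicated=flink:NoSchedule taint (for instance via kubectl taint nodes <node-name> dedicated=flink:NoSchedule), the matching toleration would look as follows:

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          tolerations:
            # Matches a hypothetical dedicated=flink:NoSchedule node taint.
            - key: dedicated
              operator: Equal
              value: flink
              effect: NoSchedule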

Image Pull Secrets¶

You can assign image pull secrets that should be used during Flink cluster startup. The image pull secrets are not managed by Ververica Platform and should be provisioned separately. They are only referenced by name in Ververica Platform to attach them to the Pods.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          imagePullSecrets:
            - name: KubernetesSecretName

KubernetesSecretName is expected to be a valid name of a Kubernetes Secret.

Note

Ververica Platform only checks that the name is valid, but does not check whether the referenced Secret exists.
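
Such a secret can be provisioned, for example, with kubectl in the namespace the Flink Pods run in; the secret name and registry coordinates below are placeholders:

# Placeholder registry coordinates; adjust to your registry.
kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=<username> \
  --docker-password=<password>

The Deployment would then reference it as - name: my-registry-secret under imagePullSecrets.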

Volume Mounts¶

You can attach volumes and volume mounts to Pods created for Flink jobs via pods.volumeMounts, for instance an NFS mount for use as a state backend.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          volumeMounts:
            - name: name
              volume: KubernetesVolume
              volumeMount: KubernetesVolumeMount

KubernetesVolume is expected to be of type Volume and KubernetesVolumeMount is expected to be of type VolumeMount. The provided volumes and volume mounts will only be validated when the actual job is created and not when creating/modifying the resource.

Example: Mounting an NFS and Secret¶

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          volumeMounts:
            - name: my-volume
              volume:
                name: my-volume
                nfs:
                  server: 10.1.2.3
                  path: /ververica-platform-foo
              volumeMount:
                name: my-volume
                mountPath: /foo/bar
            - name: my-secret
              volume:
                name: my-secret
                secret:
                  secretName: my-secret
              volumeMount:
                name: my-secret
                mountPath: /var/run/secrets/some-secret

Environment Variables¶

You can set environment variables for Pods created for Flink jobs. These can be useful to dynamically configure image-specific behaviour, such as using a custom log4j2.xml or overriding Flink-specific variables.

You can also use the valueFrom field for more advanced settings. Please check the official EnvVar reference documentation for more details.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          envVars:
            - name: NAME
              value: "Demo Flink job"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP

Example: Using a Custom Log Configuration File¶

In this example, we will mount a custom log4j2.xml configuration and use it in our Deployment.

For this example, we have the following content in a file called custom-log4j2.xml:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Configuration xmlns="http://logging.apache.org/log4j/2.0/config" strict="true">
  <Appenders>
    <Appender name="STDOUT" type="Console">
      <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n" type="PatternLayout"/>
    </Appender>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>

We create a Kubernetes ConfigMap with the contents of this file as follows:

kubectl create configmap custom-log4j-config --from-file=log4j2.xml=custom-log4j2.xml

We can now mount the ConfigMap in our Flink Deployment and use the mounted file by setting the environment variable LOG4J_CONF.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          volumeMounts:
            - name: logconfig
              volume:
                name: logconfig
                configMap:
                  name: custom-log4j-config
              volumeMount:
                name: logconfig
                mountPath: /opt/flink/conf
          envVars:
            - name: LOG4J_CONF
              value: /opt/flink/conf/log4j2.xml

Pod Security Context¶

You can attach a PodSecurityContext to Pods created for Flink jobs in order to define privilege and access control settings for a Pod.

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          securityContext:
            KubernetesPodSecurityContext

KubernetesPodSecurityContext is expected to be of type PodSecurityContext. The provided pod security context will only be validated when the actual job is created and not when creating/modifying the resource.
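
For illustration, a minimal sketch that runs the containers of the Pod as a non-root user and sets a file system group for mounted volumes; the numeric IDs are arbitrary examples and must match what your Flink image and volumes expect:

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          securityContext:
            # Example IDs; must match your Flink image and volume ownership.
            runAsNonRoot: true
            runAsUser: 9999
            runAsGroup: 9999
            fsGroup: 9999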
