Configure Apache Flink
This page covers how to configure common Flink features with Application Manager.
All stateful life-cycle operations (such as suspending a deployment or executing a stateful upgrade) require savepoints to be configured. Please provide an entry in the flinkConfiguration map with the key state.savepoints.dir pointing to the desired savepoint directory.
The provided directory needs to be accessible by all nodes of your cluster. For instance, you can use a custom volume mount (see Adding Volume Mounts to Pods) or an S3 bucket (see State in S3).
Please consult the official Flink documentation on savepoints for more details.
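For example, assuming an S3 bucket named my-bucket (a placeholder), the corresponding entry might look like this:

```yaml
flinkConfiguration:
  # Default target directory for savepoints; must be reachable
  # from all nodes of the cluster. The bucket name is a placeholder.
  state.savepoints.dir: s3://my-bucket/savepoints
```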
State in S3
In order to store Flink application state such as checkpoints or savepoints in AWS S3, you have to provide credentials as part of the flinkConfiguration. The default Flink images of Application Manager ship with PrestoS3FileSystem. Please refer to the Flink documentation on AWS deployment for more details.
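As a sketch, credentials can be passed via Flink's s3.access-key and s3.secret-key options; all values below are placeholders:

```yaml
flinkConfiguration:
  # Placeholder credentials; prefer IAM roles over static keys where possible.
  s3.access-key: YOUR_ACCESS_KEY
  s3.secret-key: YOUR_SECRET_KEY
  # Store checkpoints in S3 as well (bucket name is a placeholder).
  state.checkpoints.dir: s3://my-bucket/checkpoints
```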
High Availability
High availability (HA) of Flink applications requires a ZooKeeper installation and a persistent storage backend, as described in the official Flink documentation on High Availability.
For convenience, we repeat the required options here as part of the flinkConfiguration:
high-availability.zookeeper.quorum: zk-node1:2181,zk-node2:2181,zk-node3:2181
Application Manager automatically scopes all state to the executing job by setting high-availability.cluster-id to the respective job ID. Currently, you cannot override this behaviour, as doing so can lead to undefined side effects between jobs. Therefore, the options above are sufficient to configure HA with Application Manager.
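Putting this together, a complete flinkConfiguration for HA might look like the following sketch; the ZooKeeper hostnames and the storage directory path are assumptions and must match your environment:

```yaml
flinkConfiguration:
  # Enable ZooKeeper-based high availability.
  high-availability: zookeeper
  # Comma-separated list of ZooKeeper servers (placeholder hostnames).
  high-availability.zookeeper.quorum: zk-node1:2181,zk-node2:2181,zk-node3:2181
  # Persistent storage backend for HA metadata (path is a placeholder).
  high-availability.storageDir: s3://my-bucket/flink-ha
```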