Universal Blob Storage¶
Ververica Platform provides centralized configuration of blob storage for its services.
To enable universal blob storage, configure a base URI for your blob storage provider. Add the following snippet to your Helm values.yaml file:

```yaml
vvp:
  blobStorage:
    baseUri: s3://my-bucket/vvp
```
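To roll the change out, upgrade your Helm release with the updated values file. The release and chart names below are placeholders; substitute the ones used by your installation:

```bash
# Placeholder release/chart names; use the ones from your own installation.
helm upgrade --install vvp ververica/ververica-platform --values values.yaml
```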
The provided base URI will be picked up by all services that can make use of blob storage, for example Application Manager or Artifact Management.
| Storage Provider | Scheme | Artifact Management | State Snapshots (Flink 1.14) | State Snapshots (Flink 1.13) | State Snapshots (Flink 1.12) |
|---|---|---|---|---|---|
| Apache Hadoop® HDFS | hdfs:// | | | | |

(✓): With custom Flink image
Additional Provider Configuration¶
Some supported storage providers have additional options that can be configured in the
blobStorage section of the
values.yaml file, scoped by provider.
The following is a complete listing of supported additional storage provider configuration options:
```yaml
blobStorage:
  s3:
    endpoint: ""
    region: ""
  oss:
    endpoint: ""
```
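For example, pointing the S3 provider at a self-hosted, S3-compatible service might look like the following sketch; the endpoint URL and region are illustrative values, not defaults:

```yaml
blobStorage:
  s3:
    # Illustrative values for an S3-compatible service such as MinIO;
    # replace with the endpoint and region of your own deployment.
    endpoint: "https://minio.example.com:9000"
    region: "us-east-1"
```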
Ververica Platform supports using a single set of credentials to access your configured blob storage, and will automatically distribute these credentials to Flink jobs that require them.
These credentials can be either specified directly in
values.yaml, or added to a Kubernetes
secret out-of-band and referenced in
values.yaml by name.
Option 1: values.yaml¶
The following is a complete listing of the credentials that can be given for each storage provider, with example values:
```yaml
blobStorageCredentials:
  azure:
    connectionString: DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=vvpArtifacts;AccountKey=VGhpcyBpcyBub3QgYSB2YWxpZCBBQlMga2V5LiAgVGhhbmtzIGZvciB0aG9yb3VnaGx5IHJlYWRpbmcgdGhlIGRvY3MgOikgIA==;
  s3:
    accessKeyId: AKIAEXAMPLEACCESSKEY
    secretAccessKey: qyRRoU+/4d5yYzOGZVz7P9ay9fAAMrexamplesecretkey
  hdfs:
    # Apache Hadoop® configuration files (core-site.xml, hdfs-site.xml)
    # and optional Kerberos configuration files. Note that the keytab
    # has to be base64 encoded.
    core-site.xml: |
      <?xml version="1.0" ?>
      <configuration>
        ...
      </configuration>
    hdfs-site.xml: |
      <?xml version="1.0" ?>
      <configuration>
        ...
      </configuration>
    krb5.conf: |
      [libdefaults]
        ticket_lifetime = 10h
      ...
    keytab: BQIAA...AAAC
    keytab-principal: flink
```
Option 2: Pre-create Kubernetes Secret¶
To use a pre-created Kubernetes secret, its keys must follow the pattern <provider>.<key>, for example s3.secretAccessKey or hdfs.core-site.xml. To configure Ververica Platform to use this secret, add the following snippet to your Helm values.yaml file:
```yaml
blobStorageCredentials:
  existingSecret: my-blob-storage-credentials
```
The values in a Kubernetes secret must be base64-encoded.
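For example, a matching secret for the S3 credentials shown in Option 1 could be created as follows; the secret name and credential values are placeholders, and kubectl base64-encodes literal values for you:

```bash
# Placeholder secret name and credentials; kubectl handles the base64 encoding of literal values.
kubectl create secret generic my-blob-storage-credentials \
  --from-literal s3.accessKeyId=AKIAEXAMPLEACCESSKEY \
  --from-literal s3.secretAccessKey=qyRRoU+/4d5yYzOGZVz7P9ay9fAAMrexamplesecretkey
```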
Example: Apache Hadoop® HDFS¶
For UBS with Apache Hadoop® HDFS we recommend pre-creating a Kubernetes secret with the required configuration files, to avoid duplicating them in the Ververica Platform values.yaml file.
```bash
kubectl create secret generic my-blob-storage-credentials \
  --from-file hdfs.core-site.xml=core-site.xml \
  --from-file hdfs.hdfs-site.xml=hdfs-site.xml \
  --from-file hdfs.krb5.conf=krb5.conf \
  --from-file hdfs.keytab=keytab \
  --from-file hdfs.keytab-principal=keytab-principal
```
After you have created the Kubernetes secret, you can reference it in the values.yaml as an existing secret. Note that the Kerberos configuration is optional.
When running on AWS EKS or AWS ECS, your Kubernetes Pods inherit the IAM roles attached to the underlying EC2 instances. If these roles already grant access to the required S3 resources, you only need to configure vvp.blobStorage.baseUri without configuring any blob storage credentials.
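In that case, a minimal blob storage configuration contains only the base URI and no blobStorageCredentials section; the bucket name below is a placeholder:

```yaml
vvp:
  blobStorage:
    baseUri: s3://my-bucket/vvp
# No blobStorageCredentials section: access is granted through the instance roles.
```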
Apache Hadoop® Versions¶
UBS with Apache Hadoop® HDFS uses a Hadoop 2 client for communication with the HDFS cluster. Hadoop 3 preserves wire compatibility with Hadoop 2 clients, so you can use HDFS blob storage with both Hadoop 2 and Hadoop 3 HDFS clusters.
Note, however, that there may be incompatibilities between Hadoop 2 and 3 with respect to the configuration files core-site.xml and hdfs-site.xml. For example, Hadoop 3 allows durations to be configured with a unit suffix such as 30s, which results in a configuration parsing error with Hadoop 2 clients. It is generally possible to work around these issues by limiting the configuration to Hadoop 2 compatible keys and values.
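As an illustration, the fragment below uses a hypothetical property name; the point is only the value format:

```xml
<!-- hdfs-site.xml fragment; the property name is hypothetical. -->
<property>
  <name>dfs.example.client.timeout</name>
  <!-- A Hadoop 3 style value with a unit suffix ("30s") fails to parse in the Hadoop 2 client. -->
  <!-- Use a plain numeric value in the property's default unit instead: -->
  <value>30</value>
</property>
```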
Apache Flink® Hadoop Dependency¶
When using HDFS UBS, Ververica Platform dynamically adds the Hadoop dependency flink-shaded-hadoop-2-uber to the classpath. You can use the following annotation to skip this step:
```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        ubs.hdfs.hadoop-jar-provided: true
```
This is useful if your Docker image already provides a Hadoop dependency. If you use this annotation without a Hadoop dependency on the classpath, your Flink application will fail.
The following services make use of the universal blob storage configuration.
Apache Flink® Jobs¶
Flink jobs are configured to store their blobs, such as checkpoints, savepoints, and high-availability files, at locations under the configured blob storage base URI.
User-provided configuration has precedence over universal blob storage.
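For example, a Deployment that explicitly sets a savepoint directory keeps that value instead of the universal blob storage default. This is a sketch assuming the Deployment's flinkConfiguration section and an illustrative bucket name:

```yaml
kind: Deployment
spec:
  template:
    spec:
      flinkConfiguration:
        # An explicitly configured value takes precedence over the
        # location derived from the universal blob storage base URI.
        state.savepoints.dir: s3://my-other-bucket/savepoints
```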
Artifact Management¶
Artifacts are stored at a location under the configured blob storage base URI.
SQL Service¶
The SQL Service depends on blob storage for storing deployment information and JAR files of user-defined functions.
Before a SQL query can be deployed, it needs to be optimized and translated into a Flink job. The SQL Service stores this Flink job, together with all JAR files containing implementations of user-defined functions used by the query, at locations under the configured blob storage base URI.
After a query has been deployed, Application Manager maintains the same blobs as for regular Flink jobs, i.e., checkpoints, savepoints, and high-availability files.
The JAR files of UDF Artifacts that are uploaded via the UI are stored at a location under the configured blob storage base URI.
Connectors, Formats, and Catalogs¶
The JAR files of Custom Connectors and Formats and Custom Catalogs that are uploaded via the UI are stored at a location under the configured blob storage base URI.