Artifacts
The artifact section of a Deployment specifies which Flink code to execute. Two artifact types are currently supported: JARs and SQL scripts.
JAR Artifacts
Apache Flink® jobs must be packaged as a JAR before they can be submitted for execution by Flink. After packaging, you can reference JARs in the artifact section of your Deployment configuration:
Application Mode
kind: Deployment
spec:
  template:
    spec:
      artifact:
        kind: jar
        jarUri: https://repo1.maven.org/maven2/org/apache/flink/flink-examples-streaming_2.12/{flinkRelease}/flink-examples-streaming_2.12-{flinkRelease}-TopSpeedWindowing.jar
        additionalDependencies:
          - s3://bucket/object
        entryClass: org.apache.flink.streaming.examples.windowing.TopSpeedWindowing
        mainArgs: --windowSize 10 --windowUnit minutes
        flinkVersion: {flinkVersion}
        flinkImageRegistry: registry.ververica.com
        flinkImageRepository: v{shortVersion}/flink
        flinkImageTag: {ververicaFlinkRelease}-scala_2.12
where:

- jarUri (required): The URI of the JAR to execute. Please refer to the Artifact Management page for details on which URI schemes are supported.
- additionalDependencies (optional): A list of additional dependencies to download and add to the user code classpath of the Flink job. Each additional dependency is provided as a URI.
- entryClass (optional): The class to execute. Required if the JAR manifest does not contain a Main-Class entry. The main method of the entry class should set up a Flink execution environment, e.g. a StreamExecutionEnvironment, and execute the Flink job.
- mainArgs (optional): Arguments passed to the main method of the entry class. The value is a single string that is tokenized before being handed to the entry class. A parameter containing whitespace can be wrapped in single or double quotes. Within double-quoted values, double quotes are escaped with backslashes, e.g. --json "{\"key\": \"value\"}" results in the two parameters --json and {"key": "value"} (see the sketch after this list).
- flinkVersion (optional): The major.minor version of Flink. If not provided, it falls back to the configured default of your Ververica Platform installation. Note that the provided version must match the actual version deployed at runtime (as determined by the Flink Docker image). Therefore, make sure to update or clear flinkImageTag whenever you update flinkVersion.
- flinkImageRegistry (optional): The Docker image registry to use for the Flink image. If not provided, it falls back to the configured default of your Ververica Platform installation.
- flinkImageRepository (optional): The Docker image repository to use for the Flink image. If not provided, it falls back to the configured default of your Ververica Platform installation.
- flinkImageTag (optional): The Docker image tag to use for the Flink image. If not provided, it falls back to the configured default of your Ververica Platform installation for the given flinkVersion. To specify an image by digest, prefix the value with @, e.g. @sha256:2d034c...54f765.
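Combining the mainArgs quoting rules with a digest-pinned image, a minimal sketch might look as follows; the JAR location, entry class, and the (shortened) digest value are placeholders, not working artifacts:

kind: Deployment
spec:
  template:
    spec:
      artifact:
        kind: jar
        # The JAR location and entry class are placeholders for illustration.
        jarUri: s3://my-bucket/my-job.jar
        entryClass: com.example.MyJob
        # Tokenized into four parameters: --windowSize, 10, --json, and {"key": "value"}.
        # The single-quoted YAML scalar keeps the inner backslashes and quotes intact.
        mainArgs: '--windowSize 10 --json "{\"key\": \"value\"}"'
        flinkVersion: {flinkVersion}
        # A digest replaces the regular tag and is prefixed with @ (digest shortened here).
        flinkImageTag: "@sha256:2d034c...54f765"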
Session Mode

kind: Deployment
spec:
  template:
    spec:
      artifact:
        kind: jar
        jarUri: https://repo1.maven.org/maven2/org/apache/flink/flink-examples-streaming_2.12/{flinkRelease}/flink-examples-streaming_2.12-{flinkRelease}-TopSpeedWindowing.jar
        entryClass: org.apache.flink.streaming.examples.windowing.TopSpeedWindowing
        mainArgs: --windowSize 10 --windowUnit minutes
where:

- jarUri (required): The URI of the JAR to execute. Please refer to the Artifact Management page for details on which URI schemes are supported.
- entryClass (optional): The class to execute. Required if the JAR manifest does not contain a Main-Class entry. The main method of the entry class should set up a Flink execution environment, e.g. a StreamExecutionEnvironment, and execute the Flink job.
- mainArgs (optional): Arguments passed to the main method of the entry class. The value is a single string that is tokenized before being handed to the entry class. A parameter containing whitespace can be wrapped in single or double quotes. Within double-quoted values, double quotes are escaped with backslashes, e.g. --json "{\"key\": \"value\"}" results in the two parameters --json and {"key": "value"}.
In session mode, the Flink Docker image is specified in the SessionCluster resource (see the sketch below), and additionalDependencies are not supported.
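For orientation, a minimal SessionCluster sketch is shown below. The image fields mirror those on the Deployment artifact, but treat the resource name and the exact field set as assumptions to check against your Ververica Platform version:

kind: SessionCluster
metadata:
  name: example-session-cluster  # placeholder name
spec:
  flinkVersion: {flinkVersion}
  # In session mode, the image coordinates are configured here, not on the Deployment.
  flinkImageRegistry: registry.ververica.com
  flinkImageRepository: v{shortVersion}/flink
  flinkImageTag: {ververicaFlinkRelease}-scala_2.12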
Limitations
When executing a Flink JAR artifact, you are currently limited to a single call to the execute() or executeAsync() method of the Flink execution environment, e.g. StreamExecutionEnvironment.
SQL Script Artifacts
Application Mode
kind: Deployment
spec:
  template:
    spec:
      artifact:
        kind: sqlscript
        sqlScript: >
          CREATE TEMPORARY TABLE `vvp`.`default`.`source` (`string_column` STRING) WITH ('connector' = 'datagen');
          CREATE TEMPORARY TABLE `vvp`.`default`.`sink` (`string_column` STRING) WITH ('connector' = 'blackhole');
          INSERT INTO `sink` SELECT * FROM `source`;
        additionalDependencies:
          - s3://bucket/object
        flinkVersion: {flinkVersion}
        flinkImageRegistry: registry.ververica.com
        flinkImageRepository: v{shortVersion}/flink
        flinkImageTag: {ververicaFlinkRelease}-scala_2.12
where:

- sqlScript (required): The SQL script to execute. Please refer to the SQL User Guide page for details about SQL development.
- additionalDependencies (optional): A list of additional dependencies to download and add to the user code classpath of the Flink job (see the sketch after this list). Each additional dependency is provided as a URI. Please refer to the Artifact Management page for details on which URI schemes are supported.
- flinkVersion (optional): The major.minor version of Flink. If not provided, it falls back to the configured default SQL version of your Ververica Platform installation. Note that the provided version must match the actual version deployed at runtime (as determined by the Flink Docker image). Therefore, make sure to update or clear flinkImageTag whenever you update flinkVersion.
- flinkImageRegistry (optional): The Docker image registry to use for the Flink image. If not provided, it falls back to the configured default of your Ververica Platform installation.
- flinkImageRepository (optional): The Docker image repository to use for the Flink image. If not provided, it falls back to the configured default of your Ververica Platform installation.
- flinkImageTag (optional): The Docker image tag to use for the Flink image. If not provided, it falls back to the configured default SQL version of your Ververica Platform installation for the given flinkVersion. To specify an image by digest, prefix the value with @, e.g. @sha256:2d034c...54f765.
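To make the classpath effect of additionalDependencies concrete, here is a sketch that registers a user-defined function from an additional JAR; the bucket, JAR, and the com.example.MyScalarFunction class are hypothetical:

kind: Deployment
spec:
  template:
    spec:
      artifact:
        kind: sqlscript
        # Assumes my_udf maps a STRING to a STRING.
        sqlScript: >
          CREATE TEMPORARY FUNCTION `my_udf` AS 'com.example.MyScalarFunction';
          CREATE TEMPORARY TABLE `vvp`.`default`.`source` (`string_column` STRING) WITH ('connector' = 'datagen');
          CREATE TEMPORARY TABLE `vvp`.`default`.`sink` (`string_column` STRING) WITH ('connector' = 'blackhole');
          INSERT INTO `sink` SELECT my_udf(`string_column`) FROM `source`;
        additionalDependencies:
          # Hypothetical JAR expected to contain com.example.MyScalarFunction.
          - s3://my-bucket/my-udfs.jar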
Session Mode

kind: Deployment
spec:
  template:
    spec:
      artifact:
        kind: sqlscript
        sqlScript: >
          CREATE TEMPORARY TABLE `vvp`.`default`.`source` (`string_column` STRING) WITH ('connector' = 'datagen');
          CREATE TEMPORARY TABLE `vvp`.`default`.`sink` (`string_column` STRING) WITH ('connector' = 'blackhole');
          INSERT INTO `sink` SELECT * FROM `source`;
        additionalDependencies:
          - s3://bucket/object
        flinkVersion: {flinkVersion}
where:

- sqlScript (required): The SQL script to execute. Please refer to the SQL User Guide page for details about SQL development.
- additionalDependencies (optional): A list of additional dependencies to download and add to the user code classpath of the Flink job. Each additional dependency is provided as a URI. Please refer to the Artifact Management page for details on which URI schemes are supported.
- flinkVersion (optional): The major.minor version of Flink. If not provided, it falls back to the configured default SQL version of your Ververica Platform installation. Note that the provided version must match the actual version deployed at runtime (as determined by the Flink Docker image). Therefore, make sure to update or clear flinkImageTag whenever you update flinkVersion.
In session mode, the Flink Docker image is specified in the SessionCluster resource.
Limitations
In order to satisfy compatibility constraints between Ververica Platform's SQL service and Flink, there are restrictions on which exact Flink versions can be used in conjunction with SQL script artifacts. Please check out the SQL Scripts & Deployments page for more details on these restrictions.