
Ververica Platform 2.4.0

Release Date: 2021-03-10



  • Ververica Platform 2.4 has dropped support for Helm 2, following its Deprecation Timeline. Please see the Upgrade section for more details.

  • The Helm chart now supports running custom initContainers and containers in the Ververica Platform Pod, via the parameters extraInitContainers and extraContainers, respectively.
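As a sketch, such containers could be configured in a custom Helm values file along these lines (the container names, images, and commands below are purely illustrative, and the parameters are assumed to sit at the top level of the chart values):

```yaml
# custom-values.yaml (sketch)
extraInitContainers:
  - name: wait-for-db              # illustrative: block startup until a database is reachable
    image: busybox:1.35
    command: ["sh", "-c", "until nc -z vvp-postgres 5432; do sleep 2; done"]

extraContainers:
  - name: log-forwarder            # illustrative logging sidecar
    image: fluent/fluent-bit:1.8
```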

Ververica Platform 2.4.0 supports Apache Flink® 1.12 and Apache Flink® 1.11 under SLA. Apache Flink® 1.10 images are no longer provided in this version but are still supported on a best-effort basis. Apache Flink® 1.9 support has now been dropped.

Vulnerability Fixes

The following security vulnerabilities have been fixed compared to 1.11.3-[stream|spring]2:

CVE-2017-18640, CVE-2020-25649

Vulnerability Fixes

The following security vulnerabilities have been fixed compared to 1.11.3-[stream|spring]1:


Memory Allocator

The memory allocator used by all of our Apache Flink® images has been changed to jemalloc. This change has been shown to reduce memory fragmentation, preventing OOMKilled containers when using the RocksDB state backend. The open source Apache Flink® community made a similar decision.

In Ververica Platform 2.4, we upgrade the Flink SQL version from Apache Flink® 1.11.2 to Apache Flink® 1.12.2. Apache Flink® 1.12 brings new language features like temporal table joins, many improvements to the SQL connectors, performance enhancements, and bug fixes. Please see the Apache Flink® release blog post for details.

After the upgrade to Ververica Platform 2.4, you will see the following changes related to the upgraded Flink SQL version:

  • New SQL Deployments will always use Apache Flink® 1.12.2.

  • Existing SQL Deployments will become immutable until they are manually upgraded to Apache Flink® 1.12.2. This condition will be relaxed for future upgrades of the Flink SQL version. In the future, SQL Deployments will only become partially immutable as described in Managing SQL Deployments.

  • SQL Deployments now require flinkVersion, flinkImageRegistry, flinkImageRepository, and flinkImageTag, in both application and session mode. The flinkVersion needs to be set to 1.12. With the default platform configuration, this boils down to the following additional values for all SQL Deployments:

flinkVersion: '1.12'
flinkImageRepository: flink
flinkImageTag: 1.12.2-stream1-scala_2.12
  • You can safely upgrade all your custom connectors and user-defined functions to Apache Flink® 1.12.2. For existing Deployments (still running on Flink 1.11.2), the previous artifacts remain available internally until all Deployments have been upgraded to Apache Flink® 1.12.2. An upgrade is not required if you only rely on @Public APIs; if you also rely on @PublicEvolving APIs, an upgrade is recommended.
  • You need to upgrade your Preview Session Cluster to Apache Flink® 1.12.2 to continue to run SQL queries from the Editor.

Previews for Changelog Results

The SQL Editor can now preview SQL queries whose results contain inserts, updates, and deletions. Previous releases only supported insert-only queries.

Custom Connectors via REST API & Web User Interface

Custom connectors and formats can now be added via the REST API and web user interface.

  • PUT /sql/v1beta1/namespaces/{ns}/connectors/{name}

  • PUT /sql/v1beta1/namespaces/{ns}/formats/{name}

All connectors and formats (custom and packaged) can be listed via the REST API and web user interface.

  • GET /sql/v1beta1/namespaces/{ns}/connectors

  • GET /sql/v1beta1/namespaces/{ns}/formats

In addition, you can list all tables that are used by a specific connector or format:

  • GET /sql/v1beta1/namespaces/{ns}/connectors/{name}:list-tables

  • GET /sql/v1beta1/namespaces/{ns}/formats/{name}:list-tables

Please check out the Swagger specification for all supported endpoints.
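As a sketch, the new endpoints can be exercised with curl along these lines (the host, namespace, connector name, and request body are placeholders; authentication depends on your installation):

```shell
VVP_URL="https://vvp.example.com"   # placeholder host
NS="default"

# List all connectors (custom and packaged) in the namespace:
curl -s "$VVP_URL/sql/v1beta1/namespaces/$NS/connectors"

# Register or update a custom connector (request body is illustrative):
curl -s -X PUT "$VVP_URL/sql/v1beta1/namespaces/$NS/connectors/my-connector" \
  -H "Content-Type: application/json" \
  -d @my-connector.json

# List all tables that use this connector:
curl -s "$VVP_URL/sql/v1beta1/namespaces/$NS/connectors/my-connector:list-tables"
```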

Additional Packaged Connectors & Formats

All packaged connectors have been updated to Apache Flink® 1.12.2. The following new connectors and formats are available:


  • The upsert-kafka connector is a new packaged connector. It supports reading from and writing to (compacted) Apache Kafka® topics with upsert semantics.
  • The kinesis connector is a new packaged connector for reading from and writing to Amazon Kinesis Streams.
  • The filesystem connector is supported as a source connector on an experimental / best-effort basis only at the moment.


  • The debezium-json format is now also supported as a sink format.
  • The canal-json format is now also supported as a sink format.
  • The debezium-avro-confluent format is now supported as both a source and a sink format.
  • The raw format is a new packaged format, supported as both a source and a sink format.

Please see Packaged Connectors for a list of all packaged connectors and formats.
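For illustration, a table backed by the new upsert-kafka connector might be declared as follows (broker address, topic, and schema are placeholders; as of Apache Flink® 1.12, upsert-kafka requires a primary key, which determines the Kafka record key):

```sql
-- Sketch: an upsert-kafka table declaration
CREATE TABLE stations (
  station_key STRING NOT NULL,
  update_time TIMESTAMP(3),
  city STRING,
  PRIMARY KEY (station_key) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'stations',                                        -- placeholder
  'properties.bootstrap.servers' = 'vvp-kafka.kafka.svc:9092', -- placeholder
  'key.format' = 'raw',
  'value.format' = 'json'
);
```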

SQL Source & Sink Tables in REST API & Web User Interface

For SQL Deployments, the status of a Job now includes information about the source and sink tables of that query. Specifically, it contains a list of all source and sink tables, their schema and options.


  kind: Job
  state: started
  # sink tables
  - catalogName: vvp
    databaseName: default
    name: stations
    temporary: false
    # schema
    - name: station_key
      type: VARCHAR(2147483647) NOT NULL
    - name: update_time
      type: TIMESTAMP(3)
    - name: city
      type: VARCHAR(2147483647)
    # options
    connector: upsert-kafka
    topic: stations
    properties.bootstrap.servers: vvp-kafka.kafka.svc:9092
    key.format: raw
    value.format: json
  # source tables
  - catalogName: vvp
    databaseName: default
    name: stations_faker
    temporary: true
    # schema
    - name: station_key
      type: VARCHAR(2147483647) NOT NULL
    - name: update_time
      type: TIMESTAMP(3)
    - name: city
      type: VARCHAR(2147483647)
    # options
    connector: faker
    rows-per-second: 100
    fields.station_key.expression: "#{number.numberBetween '0','1000'}"
    fields.update_time.expression: "#{date.past '10','5','SECONDS'}"

This information is displayed on the Overview tab of every SQL Deployment.

Other SQL Changes

  • The legacy properties connector.type and format.type are now deprecated for packaged connectors and will be removed in the future. Instead, connector and format should be used, respectively.

  • Custom packaged connectors need to be updated to add definesFormat: true to the format property in connector-meta.yaml if the connector supports formats. The requiresFormat property must be removed as it is no longer necessary.

  • Legacy type inference for aggregate functions is no longer supported. All other user-defined functions were already using the new type inference in Apache Flink® 1.11.
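For example, a table definition that previously used the legacy property keys would change roughly like this (table schema and connector options elided for brevity):

```sql
-- Legacy (deprecated):
CREATE TABLE t (...) WITH ('connector.type' = 'kafka', 'format.type' = 'json');

-- New style:
CREATE TABLE t (...) WITH ('connector' = 'kafka', 'format' = 'json');
```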

Session Cluster Deployments

Previously, Ververica Platform only supported Deployments in application mode. With this release, Deployments can also be executed in session mode, in which a Deployment runs on a shared Session Cluster instead of a dedicated cluster. Since session mode has no per-Deployment resource overhead, it is often used to run many small, similar Deployments in a shared cluster.

Please see Deployment Modes for a comparison of application mode and session mode.

Improvements to REST API

  • Deployments are now name-addressable, thus names must now follow certain requirements.

  • Deployments' name fields (metadata.name) now serve as unique identifiers.

  • Deployments now have a new field for human-readable names, which is used for displaying Deployments in the frontend (metadata.displayName).

  • Deployments can now reference Deployment Targets by name (spec.deploymentTargetName).

  • Deployments, Deployment Targets, and Session Clusters now support PUT operations for idempotent upserts.
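As a sketch, an idempotent upsert of a Deployment by name could look as follows (the REST path and all field values are illustrative; consult the API reference of your installation for the exact paths):

```yaml
# PUT /api/v1/namespaces/default/deployments/top-speed-windowing  (path is assumed)
kind: Deployment
metadata:
  name: top-speed-windowing          # name-addressable identifier
  displayName: Top Speed Windowing   # human-readable name shown in the frontend
spec:
  deploymentTargetName: default      # reference a Deployment Target by name
```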

Migration & Compatibility

  • For existing Deployments, metadata.displayName will be created using the initial value of metadata.name. Additionally, metadata.name will be converted into a valid name if the initial name was invalid, adding a random suffix (5 alphanumeric characters) to ensure uniqueness. In case of an upgrade to Ververica Platform 2.4 and a subsequent rollback to 2.3, the original name will have been replaced by the converted name. For example, Top Speed Windowing would have been converted to top-speed-windowing-oy20u.

  • Deployments can still be accessed by ID to maintain backward-compatibility.

  • Deployment Targets can still be referenced in Deployments by ID to maintain backward-compatibility.

  • Creating a Deployment via a POST operation will silently convert invalid names as long as a metadata.displayName is not provided.

Fully-Customizable Kubernetes Pod Templates

The two Kubernetes Pod Templates used for Flink JobManagers and Flink TaskManagers are now fully customizable.

  kind: Deployment
jobManagerPodTemplate: <V1PodTemplateSpec>
taskManagerPodTemplate: <V1PodTemplateSpec>

With these new options you can now, for example, ...

  • ...add custom sidecars & init containers
  • ...specify separate tolerations, labels, or node selectors for JobManager and TaskManager
  • ...specify different resource limits and requests for TaskManagers

Please see Flink Pod Templates (Recommended) for details and examples.
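As a sketch, a Deployment could pin its TaskManagers to dedicated nodes like this (the nesting of the pod template field is abbreviated as in the snippet above; the label, node selector, and toleration values are illustrative):

```yaml
kind: Deployment
taskManagerPodTemplate:
  metadata:
    labels:
      team: streaming            # illustrative label
  spec:
    nodeSelector:
      workload: flink            # illustrative node selector
    tolerations:
      - key: dedicated
        operator: Equal
        value: flink
        effect: NoSchedule
```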

Other Changes

  • From this release onwards, the minimum memory required for Flink JobManagers is 650m, which reflects the increased Apache Flink® requirements in newer versions. If you have an existing Deployment from before this release with a configured memory size below this threshold, you will not be able to make changes to it until you increase the memory request.

  • The Apache Flink® UI is now accessible in read-only mode for all viewers of a Namespace. Please check Authorization for more details.

  • Legacy persistence has now been dropped. It was deprecated in Ververica Platform 2.1.0 and has been unsupported since Ververica Platform 2.2.0. If you are still using legacy persistence, upgrade your persistence mode prior to upgrading to Ververica Platform 2.4.0, as described in the Release Notes of Ververica Platform 2.1.0.

  • The Deployments page now has an additional Overview tab that shows a summary of the most important information of all the other tabs.

Vulnerability Fixes

The following security vulnerabilities have been fixed compared to 2.3.3:

CVE-2014-0229, CVE-2014-3627, CVE-2016-5001, CVE-2016-6811, CVE-2017-3161, CVE-2017-3162, CVE-2017-15713, CVE-2018-8009, CVE-2018-8029, CVE-2018-11768, CVE-2019-10172, CVE-2020-7774, CVE-2020-9492, CVE-2020-25649, CVE-2020-28052, CVE-2020-36221, CVE-2020-36222, CVE-2020-36223, CVE-2020-36224, CVE-2020-36225, CVE-2020-36226, CVE-2020-36227, CVE-2020-36228, CVE-2020-36229, CVE-2020-36230, CVE-2021-27212



Due to breaking changes and limitations in Helm, upgrading must be done with Helm version 3.2 or later. If you are on a version earlier than 3.2, please read the notes below before upgrading. For more information, see helm/helm#6850 and helm/helm#7649.

We recommend upgrading via Helm using the following commands:

    $ helm repo add ververica
    $ helm upgrade [RELEASE] ververica/ververica-platform --version 5.0.0 --values custom-values.yaml

If you are upgrading from Helm 2:

  • You'll first need to migrate the release to Helm 3. It is recommended to use the helm-2to3 plugin to migrate the release metadata.
  • Once migrated, continue with the below section.
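The migration with the helm-2to3 plugin typically looks like this (run with your Helm 3 binary; back up your Helm 2 release data first):

```shell
# Install the plugin into Helm 3 and convert the release metadata:
helm plugin install https://github.com/helm/helm-2to3
helm 2to3 convert [RELEASE]
```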

If you are upgrading from Helm < 3.2:

  • You'll need to manually annotate two RBAC resources with Helm metadata in each namespace the Ververica Platform uses. The complete set of those namespaces is the namespace the Helm chart is installed in plus all the namespaces where jobs are managed (set in the Helm chart values field rbac.additionalNamespaces).

    • For both the role/$RELEASE_NAME-ververica-platform and rolebinding/$RELEASE_NAME-ververica-platform resources as RESOURCE, in each of the above namespaces as NAMESPACE:
        RELEASE_NAMESPACE=vvp # or wherever you installed the Ververica Platform
        RELEASE_NAME=vvp      # or whatever release name you installed the Ververica Platform under
        kubectl -n $NAMESPACE annotate $RESOURCE \
            meta.helm.sh/release-name=$RELEASE_NAME \
            meta.helm.sh/release-namespace=$RELEASE_NAMESPACE
        kubectl -n $NAMESPACE label $RESOURCE app.kubernetes.io/managed-by=Helm
  • Finally, you should be able to upgrade via the above-mentioned recommended method using Helm 3.2+.