Ververica Platform 2.2.0
Release Date: 2020-08-04
- Apache Flink® 1.11.1
- Apache Flink® 1.10.1
- Apache Flink® 1.9.3
- Other Changes
- Universal Blob Storage with Apache Hadoop®
- Support for SQL Server
- Suspend with Draining
- Bootstrap Tokens
- Checkpointing in flink-conf.yaml
- UBS with AWS S3 on EKS
- Bundled Apache Hadoop® S3 File System
- Deployment Overview Page
- Community Edition Feedback Widget
- Vulnerability Fixes (outside of Apache Flink®)
Ververica Platform 2.2.0 introduces an autopilot for Apache Flink® streaming applications. When activated for a Deployment, the autopilot adjusts the parallelism of your Deployment to varying load in order to maintain a back-pressure-free configuration that matches a specified target utilization of your pipeline. Please check out Autopilot for configuration options, assumptions, and future work.
You can now use HDFS 2 or HDFS 3 for Universal Blob Storage, including authentication via Kerberos keytabs. AWS S3 and Azure Blob Storage, as well as S3-compatible object storage services like MinIO, have already been supported for a while.
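As an illustration, blob storage on HDFS with Kerberos might be wired up in the platform's installation values roughly as follows. The key names (`vvp.blobStorage.baseUri`, `blobStorageCredentials`) are assumptions for the sketch, not the authoritative schema — consult the Universal Blob Storage documentation for the exact layout:

```yaml
# Hypothetical values.yaml fragment — key names are illustrative.
vvp:
  blobStorage:
    # Artifacts, savepoints, etc. are stored under this base URI
    baseUri: hdfs://namenode.example.com:8020/vvp
blobStorageCredentials:
  hdfs:
    # Kerberos authentication via keytab plus the matching krb5.conf
    keytab: <base64-encoded keytab>
    krb5Conf: <contents of krb5.conf>
```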
You can configure Ververica Platform to store its metadata in Microsoft SQL Server as well as Azure SQL.
The full list of supported remote RDBMS now includes: MariaDB, Microsoft SQL Server, MySQL, and PostgreSQL.
Please see Persistence Configuration for more details.
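For example, pointing the platform at SQL Server comes down to supplying a JDBC URL in the standard `jdbc:sqlserver://host:port;databaseName=...` form. The surrounding key names below are a sketch under assumed naming — see Persistence Configuration for the actual schema:

```yaml
# Hypothetical values.yaml fragment — key names are illustrative.
vvp:
  persistence:
    type: jdbc
    datasource:
      # Standard Microsoft SQL Server JDBC URL format
      url: jdbc:sqlserver://sqlserver.example.com:1433;databaseName=vvp
      username: vvp
      password: <secret>
```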
When a Deployment is suspended or a stateful upgrade is triggered, Ververica Platform submits a stop command to Apache Flink®. This command atomically triggers a savepoint and stops the job. You can now instruct Ververica Platform to additionally drain the pipeline. Draining causes all future event-time timers of the Flink job to fire before the job is stopped. Check out Draining for details.
Ververica Platform now supports a Bootstrap Token, specified during installation or upgrade, which can be used as an API token with administrator privileges. This is useful for performing certain bootstrapping tasks such as creating an initial Namespace and assigning its members.
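A minimal sketch of how such a token might be supplied at installation time — the key path below is an assumption, and the actual key is documented under Bootstrap Tokens:

```yaml
# Hypothetical values.yaml fragment — key name is illustrative.
vvp:
  auth:
    # Accepted as an API token with administrator privileges
    bootstrapToken: <randomly generated secret>
```

The token is then presented like any other API token when calling the REST API, for instance in an `Authorization: Bearer <token>` header while creating the initial Namespace.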
For Universal Blob Storage on AWS S3, hard-coded credentials are no longer required if credentials can be inherited from the IAM roles attached to the EC2 instances underlying your Kubernetes cluster. If you have instance-level authentication configured, you can skip providing additional credentials during installation.
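Concretely, when the worker nodes carry an IAM role with S3 access, the blob storage configuration can omit the credentials section entirely (key names illustrative, as in the HDFS example above):

```yaml
# Hypothetical values.yaml fragment — no credentials section needed
# when EC2 instance-profile (IAM role) credentials are in place.
vvp:
  blobStorage:
    baseUri: s3://my-bucket/vvp
```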
From Flink 1.10 onwards, all Flink images include both the HadoopS3FileSystem and the PrestoS3FileSystem. In our experience, the PrestoS3FileSystem should be used by default unless you are using the StreamingFileSink, which requires the HadoopS3FileSystem. You can switch the file system implementation on a per-use-case basis via the URI scheme.
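In upstream Flink, the two implementations register distinct URI schemes: `s3a://` is served by the Hadoop S3 file system and `s3p://` by the Presto one, so the choice can be made per option in the Flink configuration. A brief sketch (bucket names are placeholders):

```yaml
# flink-conf.yaml fragment — select the implementation by URI scheme.
# s3p:// -> Presto S3 file system (recommended default, e.g. for checkpoints)
state.checkpoints.dir: s3p://my-bucket/checkpoints
# s3a:// -> Hadoop S3 file system; use this scheme for paths written
# by the StreamingFileSink, which requires the Hadoop implementation.
```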
We have slightly reworked the Deployment page in the web user interface. It now displays the full Deployment configuration including logging configuration and pod templates.