Additional memory is allocated to the gateway container hosting the SQL service.
If memory is scarce in your environment and you don’t use the SQL functionality yet, feel free to limit the requested memory as follows:
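The exact configuration keys depend on how you deploy Ververica Platform; as a hypothetical sketch of lowering the gateway's memory request in a Helm values file (the `gateway.resources` path is an assumption — verify the key names against your chart's default values):

```yaml
# Hypothetical Helm values sketch -- key names are assumptions,
# check your chart's values.yaml for the actual structure.
gateway:
  resources:
    requests:
      memory: "1Gi"   # lower the request if SQL is not used yet
```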
Ververica Platform 2.3.0 continues to support Apache Flink® 1.10 and 1.11 under SLA.
We have released new Flink 1.10.2 and 1.11.2 builds that enable Ververica Platform’s support for Flink SQL and fix vulnerabilities in Flink’s dependencies.
With this release Ververica Platform becomes an end-to-end platform for the development and operations of Flink SQL.
Please check out the SQL Quick Start and SQL Development User Guide for how to get started.
Here are some highlights:
A Web-Based SQL Editor
Ververica Platform comes with a purpose-built SQL editor as part of its web user interface.
It provides auto-completion, continuous validation, a schema explorer and, last but not least, query previews.
Flink SQL Jupyter Extension
In addition, you can develop and submit Apache Flink® SQL queries from IPython notebooks via a publicly available Jupyter extension.
The Jupyter Extension is released and versioned independently of the platform and has not yet reached feature parity with the editor.
Specifically, result previews will only be added in the next release of the extension.
An Operational Framework for Flink SQL
SQL queries are configured and deployed just like regular Deployments, providing the same powerful lifecycle management for SQL queries (recovery, upgrades) as for JAR-based deployments.
See Managing SQL Deployments for more.
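As an illustration, a SQL Deployment is defined with the same Deployment resource as a JAR-based one, with the artifact pointing at a SQL script instead of a JAR. The field names below (the artifact `kind`, `sqlScript`) are assumptions based on the general Deployment schema — see Managing SQL Deployments for the authoritative specification:

```yaml
# Hedged sketch of a SQL Deployment resource; field names are assumptions.
kind: Deployment
metadata:
  name: orders-aggregation
spec:
  template:
    spec:
      artifact:
        kind: sqlscript            # assumed artifact kind for SQL Deployments
        sqlScript: |
          INSERT INTO order_totals
          SELECT order_id, SUM(amount) FROM orders GROUP BY order_id;
```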
Catalog & Connector Management
Ververica Platform comes with a built-in catalog to store table and function metadata, reducing the need for an external catalog service.
In addition, the platform is able to connect to external catalogs like Apache Hive®’s Metastore.
Please see Connectors for a list of all packaged connectors and formats.
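For example, a table backed by one of the packaged connectors can be registered in the catalog with standard Flink SQL DDL (the connector, topic, and option values below are illustrative):

```sql
-- Illustrative example: registers a Kafka-backed table in the catalog.
CREATE TABLE orders (
  order_id   BIGINT,
  amount     DOUBLE,
  order_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'kafka:9092',
  'format' = 'json'
);
```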
User-Defined Functions (UDF) Management
Apache Flink® requires Java or Scala user-defined functions to be packaged as JAR files.
Ververica Platform simplifies the management (registration, update, deletion) of UDF JARs and the registration of functions in the catalog.
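For instance, a minimal scalar UDF to be packaged into such a JAR might look like this sketch (the class name is illustrative; `ScalarFunction` and the `eval` method convention are part of Flink's Table API):

```java
import org.apache.flink.table.functions.ScalarFunction;

// Illustrative UDF: reverses a string. Package this class into a JAR
// and register it via Ververica Platform's UDF management.
public class ReverseString extends ScalarFunction {
    public String eval(String s) {
        return s == null ? null : new StringBuilder(s).reverse().toString();
    }
}
```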
As always, all of the above functionality is exposed via a public REST API.
You can now use Ververica Platform to manage Apache Flink® Session Clusters.
In this release, the primary use case of Session Clusters in Ververica Platform is running preview queries submitted via the SQL editor.
Full support for Session Clusters is planned for Ververica Platform 2.4.0 including:
So far, we have distributed our fork of Apache Flink® via Docker images (Stream Edition) and archives (Spring Edition).
These Docker images only contain the binary distribution of Apache Flink® and, hence, do not cover all of the components that we support commercially: connectors, for example, are not part of the binary distribution.
To support you more efficiently, we will from now on publish all dependencies for each release of our distribution of Apache Flink® in a public Maven repository hosted under maven.ververica.com.
Please check Packaging your Application for how to integrate our repository into your build system.
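To sketch what this looks like in a Maven build (the repository `id` is arbitrary, and the exact repository URL should be taken from Packaging your Application):

```xml
<!-- Hedged sketch: adding Ververica's repository to a pom.xml.
     Confirm the exact URL in "Packaging your Application". -->
<repositories>
  <repository>
    <id>ververica</id>
    <url>https://maven.ververica.com</url>
  </repository>
</repositories>
```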
The navigation side-bar has been reworked, specifically:
Secret Values was moved to Administration.
Namespaces was moved to the namespace picker and is now called Manage Namespaces.
A top-level SQL section was added.
A Session Cluster page was added under Deployments.
Deployment Form Improvements
You can now create your first Deployment Target per namespace directly in the Deployment creation form instead of on the Deployment Targets page.
Similarly, you can upload JAR artifacts directly from the Deployment creation form, not only via the Artifacts page.