Most stream processing use cases can be solved with continuous SQL queries.
Common tasks include data transformations, enrichment, joins, and aggregations, as well as moving events from one system to another and continuously updating views with low latency.
The benefits of SQL for such use cases are manifold.
SQL provides a large and well-known toolbox to solve many tasks and makes stream processing accessible to a much wider audience.
SQL queries can be written and deployed in a fraction of the time that is required to implement an equivalent stream processing job in Java or Scala.
Moreover, query optimizers and highly optimized execution engines ensure that most SQL queries outperform manually implemented stream processing jobs.
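For example, a continuous aggregation that would require a non-trivial amount of Java or Scala code amounts to a few lines of SQL. The table and column names below are hypothetical:

```sql
-- Continuously count orders per one-minute window.
-- 'orders', 'order_time', and 'order_cnt' are illustrative names.
SELECT
  TUMBLE_START(order_time, INTERVAL '1' MINUTE) AS window_start,
  COUNT(*) AS order_cnt
FROM orders
GROUP BY TUMBLE(order_time, INTERVAL '1' MINUTE);
```

As a continuous query, this emits an updated count for each one-minute window as events arrive, rather than computing a single static result.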
Ververica Platform’s SQL integration allows you to develop, deploy and operate continuous SQL queries on Apache Flink®.
Ververica Platform provides two clients for developing and submitting SQL queries:
A web-based SQL editor that is integrated into Ververica Platform’s web user interface.
It features a catalog explorer, auto-completion, continuous validation, schema inference for sink tables and query previews.
You can find it under “SQL” in the Ververica Platform web user interface.
A Jupyter extension that allows you to write and execute Flink SQL in IPython notebooks.
The Jupyter extension is independently versioned and published on PyPI.
It has not yet reached feature parity with the SQL editor.
An Operational Framework for Flink SQL
SQL queries are configured and deployed just like regular Deployments, providing the same powerful lifecycle management for SQL queries (recovery, upgrades) as for JAR-based deployments.
See Managing SQL Deployments for more.
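The long-running statement behind such a Deployment is typically a continuous INSERT INTO query. The following sketch uses hypothetical table names and assumes `customers` is a table suitable for a temporal join (i.e., it has a primary key and a watermark):

```sql
-- A continuous query that could back a SQL Deployment:
-- enrich each order with customer data valid at the order's event time.
-- All table and column names are illustrative.
INSERT INTO enriched_orders
SELECT o.order_id, o.order_time, c.country
FROM orders AS o
JOIN customers FOR SYSTEM_TIME AS OF o.order_time AS c
  ON o.customer_id = c.customer_id;
```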
Catalog & Connector Management
Ververica Platform comes with a built-in catalog to store table and function metadata, reducing the need for an external catalog service.
In addition, it comes with a few packaged catalogs, such as Apache Hive®’s Metastore, and allows for creating custom catalogs.
Please see Connectors for a list of all packaged connectors and formats.
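A table backed by a packaged connector is registered in the catalog with a standard Flink SQL CREATE TABLE statement. The topic, server address, and column names below are placeholder values:

```sql
-- Register a Kafka-backed table in the catalog.
-- Connector options shown are examples; see the Connectors
-- documentation for the options supported by each connector.
CREATE TABLE orders (
  order_id    BIGINT,
  customer_id BIGINT,
  order_time  TIMESTAMP(3),
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'kafka:9092',
  'format' = 'json'
);
```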
User-Defined Functions (UDF) Management
Apache Flink® requires Java or Scala user-defined functions to be packaged as JAR files.
Ververica Platform simplifies the management (registration, update, deletion) of UDF JARs and the registration of functions in the catalog.
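Once a UDF JAR is registered, its functions are made available to queries via Flink SQL's CREATE FUNCTION statement, which maps a catalog function name to an implementation class in the JAR. The function and class names below are hypothetical:

```sql
-- Register a Java UDF from a managed JAR under the name 'parse_ua'.
-- 'com.example.udf.ParseUserAgent' is an illustrative class name.
CREATE FUNCTION parse_ua AS 'com.example.udf.ParseUserAgent' LANGUAGE JAVA;
```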
As always, all of the above functionality is exposed via a public REST API.