If you have installed the optional logging or metrics integrations, please check the logging setup
and the metrics setup to ensure you have the correct services port-forwarded to access them on your local machine.
In this guide, you will create and manage a Ververica Platform Deployment based on a JAR that packages an Apache Flink® DataStream, DataSet or Table API program.
Please see Getting Started - Flink SQL for information on how to use Flink SQL in Ververica Platform.
Deployments are the core resource to manage Apache Flink® jobs within Ververica Platform.
A Deployment specifies the desired state of an application and its configuration.
At the same time, Ververica Platform tracks and reports each Deployment’s status and derives other resources from it.
Whenever the Deployment specification is modified, Ververica Platform ensures that the running application eventually reflects the change.
You can either use the web user interface or the REST API to manage Deployments.
Web User Interface
Click + Create Deployment in the top right of the “Deployments” page. For your first Deployment, we recommend using the Standard view.
Name: Provide a name such as Top Speed Windowing
Deployment Target: Create a new Deployment Target.
A Deployment Target links a Deployment to a Kubernetes namespace, which your Flink applications will be deployed into.
In this case you can use the vvp-jobs namespace that we created earlier.
Parallelism: Set the parallelism to 1
Jar URI: Provide a URI to the JAR containing your Flink program.
If you do not have an artifact at hand, you can use
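Behind the form, these fields correspond to a Deployment resource. The following is an illustrative sketch of such a resource, assuming the name, Deployment Target, and parallelism from the steps above; the JAR URI is a placeholder and the field layout is abbreviated, not a complete specification:

```yaml
# Sketch of a Deployment resource matching the form fields above.
# Field values are assumptions for illustration, not a full spec.
kind: Deployment
metadata:
  name: top-speed-windowing
spec:
  deploymentTargetName: vvp-jobs     # the Deployment Target created above
  template:
    parallelism: 1                   # matches the Parallelism field
    artifact:
      kind: JAR
      jarUri: <URI to the JAR containing your Flink program>
```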
Ververica Platform will now create a highly available Flink cluster, which runs your Flink application in the vvp-jobs Kubernetes namespace.
Checkpointing and savepoints, as well as Flink JobManager failover, are automatically configured by the platform based on our Universal Blob Storage configuration.
Once the Deployment has reached the RUNNING state, you can also check out the Flink UI.
One of the core features of Ververica Platform is application lifecycle management for stateful stream processing applications.
As part of this, Ververica Platform takes care of consistently migrating your distributed state when you make changes to your application.
For example, you can change your Deployment's parallelism, i.e., rescale your Flink job.
Web User Interface
In the Deployment overview page, click Configure Deployment, change the parallelism to 2, and save your changes.
Change the value of spec.template.parallelism in vvp-resources/deployment.yaml to 2.
Then PATCH the existing Deployment with the changed resource. For this, you need the metadata.name of your Deployment; if you did not change the default in vvp-resources/deployment.yaml, the name is top-speed-windowing.
You can then PATCH your Deployment with the modified version of vvp-resources/deployment.yaml to scale it up.
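For reference, the relevant fragment of the modified file would look like the following sketch; all other fields of the resource are omitted here:

```yaml
# Sketch of the relevant part of vvp-resources/deployment.yaml after the edit;
# surrounding fields are omitted.
metadata:
  name: top-speed-windowing
spec:
  template:
    parallelism: 2   # was 1; PATCHing this triggers a rescaling upgrade
```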
Under the hood, Ververica Platform now performs an application upgrade according to the configured Upgrade and Restore Strategy.
For your Deployment, these default to STATEFUL and LATEST_STATE. Ververica Platform therefore triggers a graceful shutdown of your Flink application, taking a consistent snapshot of its state via a savepoint. It then restarts the application from the latest snapshot, which in this case is the one taken during shutdown.
You can see a list of all past savepoints and retained checkpoints for this Deployment in the Snapshots tab.
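These defaults can also be stated explicitly in the Deployment specification. The following is a sketch of the relevant fragment under assumed field names, with all other fields omitted:

```yaml
# Sketch: the default Upgrade and Restore Strategy written out explicitly.
spec:
  upgradeStrategy:
    kind: STATEFUL      # stop the job with a savepoint before applying changes
  restoreStrategy:
    kind: LATEST_STATE  # restart from the latest available snapshot
```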