Version: 2.12

Getting Started - Installation

In this getting started guide you will install Ververica Platform and integrate it with MinIO for Universal Blob Storage. Optionally, we also show you how to link the platform to a metrics and logging system.

Setting the Stage

Kubernetes

Ververica Platform runs on top of Kubernetes. To get started locally we recommend minikube, but any other Kubernetes cluster works, too.

Minikube relies either on virtualization support from your operating system plus a hypervisor (e.g. VirtualBox), or on a container runtime (e.g. Docker). Please check the official installation guide for details.

note

Though the Ververica Platform itself runs on any Kubernetes cluster version 1.11+, other parts of the Playground require version 1.16+.

Minikube on Mac OS (homebrew)

   brew install kubectl minikube

Minikube on Windows (Chocolatey)

   choco install kubernetes-cli minikube

Minikube on Linux

There are packages available for most distros and package managers. Please check the kubectl installation guide as well as the minikube installation guide for details.

Spinning up a Kubernetes Cluster

First, start minikube. The platform (including a small Apache Flink® application) requires at least 8G of memory and 4 CPUs:

   minikube start --memory=8G --cpus=4

If this went well, you can continue and check if all system pods are ready:

   kubectl get pods -n kube-system

Depending on your exact minikube version, the output should look similar to this:

   NAME                               READY   STATUS    RESTARTS   AGE
   coredns-5644d7b6d9-56zhg           1/1     Running   1          2m
   coredns-5644d7b6d9-fdnts           1/1     Running   1          2m
   etcd-minikube                      1/1     Running   1          2m
   kube-addon-manager-minikube        1/1     Running   1          2m
   kube-apiserver-minikube            1/1     Running   1          2m
   kube-controller-manager-minikube   1/1     Running   1          2m
   kube-proxy-9w92r                   1/1     Running   1          2m
   kube-scheduler-minikube            1/1     Running   1          2m
   storage-provisioner                1/1     Running   2          2m

If all pods are running, you are good to go.
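Instead of re-running the command and scanning the output by hand, you can also let kubectl block until the system pods are ready (the timeout value here is an arbitrary choice):

```shell
# Wait until all pods in kube-system report Ready (gives up after 2 minutes)
kubectl wait --namespace kube-system \
  --for=condition=Ready pods --all --timeout=120s
```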

Helm

"Helm helps you manage Kubernetes applications — Helm charts help you define, install, and upgrade even the most complex Kubernetes application." From the Helm website

We distribute Ververica Platform as a Helm chart. To install Helm please follow the instructions on the official installation guide or use one of the one-liners below.

note

Please ensure that you install Helm v3+. This can be verified by running helm version after installation.

Helm on Mac OS (homebrew)

   brew install helm

Helm on Windows (Chocolatey)

   choco install kubernetes-helm

Helm on Linux

As before, there is a package available for most distros and package managers. For details check the official installation guide.

Setting Up the Playground

This guide is based on the Ververica Platform playground repository which contains scripts and Helm values files to make for a smooth getting-started experience. Please clone the repository before continuing; all commands below are meant to be executed from the repository root directory.

   git clone --branch release-2.12 https://github.com/ververica/ververica-platform-playground.git
   cd ververica-platform-playground

Anatomy of this Playground

For this playground, you will create two Kubernetes namespaces: vvp and vvp-jobs.

vvp will host the control plane of Ververica Platform and other services, while the Apache Flink® jobs managed by the platform will run in the vvp-jobs namespace.

In addition to Ververica Platform, we will set up MinIO in the vvp namespace, which will be used for artifact storage and Apache Flink® checkpoints & savepoints (see Universal Blob Storage).

Playground overview

Installing the Components

TL;DR

You can skip the detailed installation steps by running the single-command setup script below with the appropriate edition. For download links and more details about licenses, see the Ververica Platform downloads page.

Community Edition is the free-to-use release of the Platform for not-for-profit use, including personal and student projects and open source development. It is fully capable but omits some enterprise features and imposes some resource limits on deployments, see Community Edition.

Enterprise Edition is for full-scale commercial use and includes full SLA-driven support.

note

Enterprise Edition offers a 30-day free trial license if you want to evaluate the Platform. Visit the Ververica downloads page for details.

   ./setup.sh --edition community

Kubernetes Namespaces

Before installing any of the components you need to create the Kubernetes namespaces vvp and vvp-jobs:

   kubectl create namespace vvp
   kubectl create namespace vvp-jobs
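You can confirm that both namespaces exist before moving on:

```shell
# Lists exactly these two namespaces; errors if either is missing
kubectl get namespace vvp vvp-jobs
```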

MinIO

Install MinIO with Helm, using the official Helm chart from the stable repository.

If you have never added the stable Helm repository, do this now:

   helm repo add stable https://charts.helm.sh/stable

Then install MinIO with:

   helm --namespace vvp \
     install minio stable/minio \
     --values values-minio.yaml
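Before continuing, you can check that the MinIO pod came up. The label selector below is an assumption based on common chart conventions and the release name minio used above; verify your pod labels with kubectl get pods --show-labels if it matches nothing:

```shell
# Show the MinIO pod(s) created by the Helm release (assumed label: release=minio)
kubectl --namespace vvp get pods -l release=minio
```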

Ververica Platform Installation

Then, install Ververica Platform using Helm.

The required configurations differ slightly depending on whether you are installing Community or Enterprise Edition:

You can freely download, install, and start using Community Edition just by accepting the Community License during installation. To continue using the Platform after 14 days you need to register and update your installation with the license details. Registration is free, see Community Edition for details.

To install the platform, run the following commands:

For Enterprise Edition, pass the license values file in addition to the platform values:

   helm repo add ververica https://charts.ververica.com
   helm --namespace vvp \
     install vvp ververica/ververica-platform \
     --values values-vvp.yaml \
     --values values-license.yaml

For Community Edition, the license values file is not needed:

   helm repo add ververica https://charts.ververica.com
   helm --namespace vvp \
     install vvp ververica/ververica-platform \
     --values values-vvp.yaml

When you run the command above you will be asked to accept the Ververica Platform Community Edition license agreement. Please read it carefully and accept it by setting acceptCommunityEditionLicense to true:

   helm --namespace vvp \
     install vvp ververica/ververica-platform \
     --values values-vvp.yaml \
     --set acceptCommunityEditionLicense=true

When you register, you'll receive your Community Edition license by email. To add it to your installation, add your license text to the values file values-license.yaml under vvp.license.data. The final values-license.yaml file should look similar to:

vvp:
  license:
    data:
      kind: "License"
      apiVersion: "v1"
      metadata:
        id: "479b511e-67c8-4a84-9360-ab7fd51a6996"
        createdAt: "2023-03-27T17:12:46.456836Z"
        annotations:
          signature: "<omitted>"
          licenseSpec: "ewogICJsaWNlbnNlSWQiIDogIjQ3OWI1MTFlLTY3YzgtNGE4NC05MzYwLWFiN2ZkNTFhNjk5NiIsCiAgImxpY2Vuc2VkVG8iIDogIlBpb3RyIEvFgm9wb3Rvd3NraSIsCiAgImV4cGlyZXMiIDogIjIwMjMtMDktMjNUMDA6MDA6MDBaIiwKICAicGFyYW1zIiA6IHsKICAgICJxdW90YS50eXBlIiA6ICJMSU1JVEVEIiwKICAgICJxdW90YS5jcHUiIDogIjEwIiwKICAgICJvdmVydXNlLmNwdSIgOiAiMCIsCiAgICAidHJpYWwiIDogImZhbHNlIiwKICAgICJub3RGb3JSZXNhbGUiIDogInRydWUiLAogICAgInRva2VuIiA6ICJleUpwYm5OMFlXeHNZWFJwYjI1SlpDSTZJbU16TWpWaVptWmhMVEppT0RFdE5EWTRaUzA1Tm1KaUxXVTJOakJtT0RnMFl6ZzVNQ0lzSW5ScGJXVnpkR0Z0Y0NJNklqSXdNak10TURNdE1qZFVNVGc2TlRJNk1UZ3VNalU1TnpRekt6QXlPakF3SW4wPSIKICB9Cn0="
      spec:
        licenseId: "479b511e-67c8-4a84-9360-ab7fd51a6996"
        licensedTo: "My Company Inc."
        expires: "2023-09-23T00:00:00Z"
        params:
          quota.type: "LIMITED"
          quota.cpu: "10"
          overuse.cpu: "0"
          trial: "false"
          notForResale: "true"
          token: "<token-string>"

If you didn't already accept the Ververica Platform Community Edition license agreement as described above in the installation steps, make sure you do so now.

After updating values-license.yaml you'll need to restart the Platform.
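A sketch of how this could look with Helm, assuming the release name vvp from the installation step: upgrade the release with the additional values file, then restart the platform pod so it picks up the new license. The label selector in the second command is an assumption; verify your actual pod labels with kubectl get pods --show-labels.

```shell
# Apply the license values to the existing release
helm --namespace vvp upgrade vvp ververica/ververica-platform \
  --values values-vvp.yaml \
  --values values-license.yaml

# Restart the platform by deleting its pod; the Deployment recreates it
kubectl --namespace vvp delete pods -l app=vvp-ververica-platform
```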

In order to access the Web UI or the REST API set up a port forward to the Ververica Platform Kubernetes service:

   kubectl --namespace vvp port-forward services/vvp-ververica-platform 8080:80

The Web UI and API are both now available under localhost:8080.
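With the port forward active, you can run a quick smoke test from another terminal to confirm the platform is reachable before opening the browser:

```shell
# Print the HTTP status code returned by the platform (expect a 2xx/3xx response)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
```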

The Web UI will show that you do not have any Deployments yet.

No deployments

Logging and Metrics Integrations (Optional)

Ververica Platform can be integrated with logging and metrics collection and querying/visualization systems to help monitor and debug your Flink applications.

The setup.sh script included in the playground repository accepts flags --with-logging and --with-metrics that enable additional demo components for logging and metrics respectively.

note

The --with-logging and --with-metrics flags can be used separately or together, and can be applied after the initial installation simply by running setup.sh again.

  • --with-logging installs a demo logging stack:

    • Elasticsearch®, for indexing logs.
    • Fluentd, a log collector and aggregator.
    • Kibana, a web interface over Elasticsearch®, for querying Flink application logs.
  • --with-metrics installs a demo monitoring stack:

    • Prometheus, a metrics collection and storage system, via the Prometheus Operator.
    • Grafana, a time series visualization web application.

This setup uses Global Deployment Defaults to ensure each Flink job is configured to use the built-in Prometheus metrics reporter and that each Kubernetes pod running Flink gets an annotation that makes it discoverable by the Prometheus server.
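As a rough sketch, the relevant part of values-vvp.yaml could look like the following. The exact keys may differ between platform versions, so treat this as an illustration of the idea, not the shipped configuration; the authoritative version is in the playground repository:

```yaml
vvp:
  globalDeploymentDefaults: |
    spec:
      template:
        spec:
          flinkConfiguration:
            # Enable Flink's built-in Prometheus metrics reporter
            metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
            metrics.reporter.prom.port: "9249"
          kubernetes:
            pods:
              annotations:
                # Make Flink pods discoverable by the Prometheus server
                prometheus.io/scrape: "true"
                prometheus.io/port: "9249"
```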

Metrics

Viewing Application Metrics

After installing or upgrading the platform using ./setup.sh --with-metrics, run the following command to port-forward Grafana to your local network:

   kubectl --namespace vvp port-forward services/grafana 3000:80

The Web UI should now be available under localhost:3000.

When viewing one of your Deployments in the Web UI, click the Metrics button to be linked to a sample monitoring dashboard in Grafana for that Deployment. It may take a few minutes for metrics to appear.

Exploring the Configuration

To understand this setup, check out the following files:

  • values-prometheus-operator.yaml Configuration for the Prometheus Operator Helm chart, which:

    • disables components we don't need
    • adds Playground-specific resource discovery
  • prometheus-operator-resources/service-monitor.yaml Configuration for Prometheus metrics scraping, which:

    • adds a Kubernetes Service definition to expose Flink Pod metrics
    • adds a Prometheus Operator ServiceMonitor to declare scraping configuration of that Service
  • values-grafana.yaml Configuration for the Grafana Helm chart, which:

    • adds a preconfigured datasource and dashboard
    • disables auth to make for a convenient demonstration
  • values-vvp.yaml Configuration for the Ververica Platform, which:

    • sets the Prometheus metrics reporter for all Deployments in the globalDeploymentDefaults section
  • values-vvp-add-metrics.yaml Additional configuration for the Ververica Platform, which:

    • enables the Metrics button on a Deployment or Job in the Web UI that links to Grafana (on localhost:3000)
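The ServiceMonitor mentioned above can be sketched roughly as follows. The label selector and port name are assumptions for illustration; the real definition lives in prometheus-operator-resources/service-monitor.yaml:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flink-metrics
  namespace: vvp
spec:
  # Scrape Services in the namespace where Flink jobs run
  namespaceSelector:
    matchNames:
      - vvp-jobs
  selector:
    matchLabels:
      system: flink        # assumed label on the metrics Service
  endpoints:
    - port: metrics        # assumed name of the Service port exposing metrics
      interval: 15s
```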

Logging

Viewing Application Logs

After installing or upgrading the platform using ./setup.sh --with-logging, run the following command to port-forward Kibana to your local network:

   kubectl --namespace vvp port-forward services/kibana 5601:5601

The Web UI should now be available under localhost:5601.

When viewing one of your Deployments in the Web UI, click the Logs button to be linked to Kibana with a pre-filled query to only show logs from that Deployment.

Exploring the Configuration

To understand this setup, check out the following files:

  • values-elasticsearch.yaml Configuration for the Elasticsearch® Helm chart, which:

    • configures a single-node Elasticsearch® cluster
  • values-fluentd.yaml Configuration for the Fluentd Helm chart, which:

    • configures the connection to Elasticsearch® to write logs
  • values-kibana.yaml Configuration for the Kibana Helm chart, which:

    • configures the connection to Elasticsearch® to read logs
    • imports dashboards for logging
  • values-vvp-add-logging.yaml Configuration for the Ververica Platform Helm chart, which:

    • enables the Logs button on a Deployment or Job in the Web UI that links to Kibana (on localhost:5601)

Next Steps

Now, you can either continue with Getting Started Flink SQL or Getting Started Flink Operations.

Cleaning Up

Run the script ./teardown.sh to remove all applications installed with Helm in this tutorial and to delete the namespaces created in the first step.

Alternatively, do this manually with the following command:

   kubectl delete namespace vvp vvp-jobs