Installation and Configuration
Prerequisites
- Ververica Platform 3.1 or later deployed on Kubernetes
- The Kubernetes namespaces where you plan to create CRs must exist before you enable the operator
Install the CRD
The VvpDeployment CRD installs automatically as part of the Ververica Platform 3 Helm chart. No separate CRD installation is required.
The Ververica Platform 2 operator required the CRD to be installed from a separate chart. In Ververica Platform 3, the CRD ships with the main Helm chart.
Update the CRD After Upgrade
Helm installs the CRD automatically on the initial helm install, but does not update CRDs on subsequent helm upgrade operations. After each upgrade, apply the CRD manually to pick up any schema changes:
```shell
kubectl apply -f charts/ververica-platform-crd/crds/vvpdeployments.yaml
```
If you installed from a packaged chart, extract it first:
```shell
helm pull ververica/ververica-platform --version <version> --untar
kubectl apply -f ververica-platform/charts/ververica-platform-crd/crds/
rm -rf ververica-platform/
```
kubectl apply is idempotent — this step is safe to run even if the CRD has not changed. If you skip this step, the CRD schema stays at the version from the initial install and you might see validation errors when using newer CR fields.
helm uninstall does not delete the CRD, so your existing CRs are preserved across uninstall and reinstall cycles.
Enable the Operator
The operator ships as part of the Ververica Platform 3 Helm chart and is disabled by default. To enable it, add the following to your Helm values. All three parameters are required:
```yaml
vvp-k8s-operator:
  enabled: true
  instanceId: "my-operator"
  watchedNamespaces:
    - my-deployment-namespace
```
| Parameter | Description |
|---|---|
| `enabled` | Activates the operator pod. Default: `false`. |
| `instanceId` | Unique identifier for this operator instance. Each CR must carry a matching `ververica.platform/owner` label. The webhook rejects CRs with a missing or mismatched label. |
| `watchedNamespaces` | List of Kubernetes namespaces to watch for VvpDeployment CRs. This is the Kubernetes namespace where you create CRs, not the Ververica Platform namespace configured inside the CR. |
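Putting the three parameters together, a CR accepted by this configuration might look like the following sketch. The `apiVersion` suffix and the `spec` layout are assumptions for illustration; consult the CRD installed in your cluster for the authoritative schema.

```yaml
# Illustrative sketch only: apiVersion suffix and spec fields are assumptions.
apiVersion: ververica.platform/v1
kind: VvpDeployment
metadata:
  name: my-deployment
  namespace: my-deployment-namespace       # must be listed in watchedNamespaces
  labels:
    ververica.platform/owner: my-operator  # must match the operator's instanceId
spec:
  deployment:
    metadata:
      namespace: default                   # Ververica Platform namespace, not a Kubernetes one
```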
Multiple Operator Instances
The Ververica Platform 3 operator supports multiple operator instances watching the same Kubernetes namespace. Each instance uses its instanceId to identify and reconcile only its own CRs. This allows multiple Ververica Platform installations to share a namespace without conflict.
The Ververica Platform 2 operator allowed only one operator instance per namespace. The Ververica Platform 3 operator removes this restriction.
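The per-instance filtering can be illustrated with a small sketch. The label key `ververica.platform/owner` comes from this page; the reconciler shape below is a simplified assumption, not the operator's actual code.

```python
OWNER_LABEL = "ververica.platform/owner"

def owned_by(cr: dict, instance_id: str) -> bool:
    """Return True if this CR belongs to the given operator instance.

    Each instance reconciles only CRs whose owner label matches its own
    instanceId, so several instances can share a Kubernetes namespace
    without acting on each other's CRs.
    """
    labels = cr.get("metadata", {}).get("labels", {})
    return labels.get(OWNER_LABEL) == instance_id

# Two CRs in the same namespace, owned by different operator instances:
cr_a = {"metadata": {"labels": {OWNER_LABEL: "operator-a"}}}
cr_b = {"metadata": {"labels": {OWNER_LABEL: "operator-b"}}}

assert owned_by(cr_a, "operator-a")
assert not owned_by(cr_b, "operator-a")  # ignored by operator-a
```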
RBAC
The Helm chart creates a ClusterRole with the necessary permissions and a RoleBinding in each watched namespace, bound to the operator's service account. The operator's permissions are limited to the minimum required:
- The main ClusterRole grants access only to `VvpDeployment` custom resources in the `ververica.platform` API group. It cannot read, modify, or list any other Kubernetes resources such as Pods, Deployments, ConfigMaps, or Secrets.
- A separate ClusterRole grants read and patch access to the operator's own webhook configuration, used only to inject the CA bundle for TLS.
- A namespaced Role in the release namespace grants access to a single TLS Secret for certificate storage.
The RoleBinding-per-namespace design ensures the operator acts only on CRs in the namespaces listed in watchedNamespaces. No wildcard or cluster-admin permissions are used.
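As a rough sketch, the main ClusterRole and a per-namespace RoleBinding have the following shape. The API group and the restriction to `VvpDeployment` resources come from this page; the object names, verb list, and `status` subresource are placeholder assumptions, since the chart renders the real manifests.

```yaml
# Illustrative shape only; names and verbs are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vvp-operator
rules:
  - apiGroups: ["ververica.platform"]
    resources: ["vvpdeployments", "vvpdeployments/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
# One RoleBinding per entry in watchedNamespaces scopes the ClusterRole
# down to that namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vvp-operator
  namespace: my-deployment-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vvp-operator
subjects:
  - kind: ServiceAccount
    name: vvp-operator
    namespace: <release-namespace>
```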
Webhook and TLS
The operator includes a validating admission webhook that rejects invalid CRs before they reach the cluster, providing immediate feedback through kubectl.
TLS is handled automatically: on startup, the operator generates a self-signed CA and certificate, stores them in a Kubernetes Secret, and patches the webhook configuration with the CA bundle. No manual certificate management is required.
After helm upgrade, the webhook configuration is re-rendered from the chart template, which resets the caBundle to empty. The operator automatically re-patches the caBundle on startup — within seconds of the pod becoming ready. There may be a brief window after upgrade where the webhook is unavailable and CRs cannot be created or updated.
API Key Authentication
The webhook validates that every VvpDeployment CR carries a valid Ververica Platform API token before admitting it to the cluster. This ensures that only authenticated users can create or modify operator-managed deployments through kubectl.
API key validation applies to CREATE and UPDATE operations only. DELETE operations are not validated.
To configure API key authentication:
1. Create a Ververica Platform API token through the Ververica Platform UI or API. The token must have workspace and namespace role bindings.

2. Store the token in a Kubernetes Secret in the same namespace as the CR:

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: my-vvp-apikey
     namespace: my-deploy-namespace
   type: Opaque
   data:
     apikey: <base64-encoded Ververica Platform API token>
   ```

3. Reference the Secret in the CR's `metadata.annotations`:

   ```yaml
   metadata:
     annotations:
       apikeySecretName: my-vvp-apikey
   ```
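As with any Kubernetes Secret `data` field, the `apikey` value must be base64-encoded. One quick way to produce the encoded value (the token string below is a placeholder):

```python
import base64

# Placeholder token; substitute your real Ververica Platform API token.
token = "my-vvp-api-token"

# Paste the printed value into the Secret's data.apikey field.
encoded = base64.b64encode(token.encode("utf-8")).decode("ascii")
print(encoded)

# Round-trip check: decoding recovers the original token.
assert base64.b64decode(encoded).decode("utf-8") == token
```

Alternatively, `kubectl create secret generic my-vvp-apikey --from-literal=apikey=<token>` performs the base64 encoding for you.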
The webhook reads the token from the referenced Secret on every CREATE and UPDATE and validates it against the Ververica Platform api-gateway's Access Control Service. If the token is missing, invalid, or the gateway is unreachable, the CR is rejected.
Helm configuration options:
| Helm value | Default | Effect |
|---|---|---|
| `webhook.apiKeyValidation.enabled` | `true` | When set to `false`, the webhook skips API key validation entirely. |
| `webhook.apiKeyValidation.required` | `true` | When set to `false`, CRs without the `apikeySecretName` annotation are allowed through. CRs that include the annotation are still validated. |
When testing without API keys configured, set webhook.apiKeyValidation.enabled: false in your Helm values to disable the check.
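The validation flow described above can be sketched as a single decision function. This is a simplified assumption of the webhook's logic for illustration; the function shape and return messages are not the operator's actual code.

```python
def admit(cr: dict, secrets: dict, gateway_ok: bool,
          enabled: bool = True, required: bool = True) -> tuple[bool, str]:
    """Simplified sketch of the webhook's API key check on CREATE/UPDATE."""
    if not enabled:                 # webhook.apiKeyValidation.enabled: false
        return True, "validation disabled"
    annotations = cr.get("metadata", {}).get("annotations", {})
    name = annotations.get("apikeySecretName")
    if name is None:
        if required:                # webhook.apiKeyValidation.required: true
            return False, "missing required annotation 'apikeySecretName'"
        return True, "annotation absent, validation not required"
    secret = secrets.get(name)
    if secret is None:
        return False, f"Secret '{name}' not found"
    if "apikey" not in secret:
        return False, f"Secret '{name}' missing 'apikey' key"
    if not gateway_ok:              # fail closed when the gateway is unreachable
        return False, "unable to validate API key: gateway unavailable"
    # A real webhook would now validate the token against the api-gateway.
    return True, "api key validated"

cr = {"metadata": {"annotations": {"apikeySecretName": "my-vvp-apikey"}}}
ok, reason = admit(cr, secrets={"my-vvp-apikey": {"apikey": "dG9rZW4="}},
                   gateway_ok=True)
assert ok
```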
Namespace Topology
The operator bridges three distinct namespace concepts. Do not confuse them:
| Concept | Example | Description |
|---|---|---|
| Kubernetes namespace | my-flink-deployments | Where VvpDeployment CRs live. Configured in watchedNamespaces. |
| Ververica Platform namespace | default | The Ververica Platform internal namespace used in AppManager API paths (for example, /api/v1/namespaces/default/deployments). Set in spec.deployment.metadata.namespace. |
| VVP workspace | defaultworkspace | The workspace identifier passed as an HTTP header in API calls. Not directly visible in the CR. |
These are independent. A single Kubernetes namespace can contain CRs targeting multiple Ververica Platform namespaces, and a single Ververica Platform namespace can be targeted by CRs in multiple Kubernetes namespaces.
Verify the Installation
After enabling the operator, confirm the pod is running:
```shell
kubectl get pods -n <release-namespace> -l app=vvp-operator
```
Webhook Rejection Errors
The webhook validates CRs on CREATE and UPDATE operations and rejects requests that fail validation. The following table lists common rejection messages and how to resolve them:
| Error message | Cause | Resolution |
|---|---|---|
| `Kubernetes namespace '<ns>' is not in the operator's watchedNamespaces` | The CR's Kubernetes namespace is not listed in `watchedNamespaces`. | Create the CR in a watched namespace, or add the namespace to `watchedNamespaces` in Helm values and redeploy. |
| `Owner label '<label>' does not match operator instanceId '<id>'` | The `ververica.platform/owner` label on the CR does not match the operator's `instanceId`. | Set the label to match the `instanceId`: `ververica.platform/owner: <instanceId>`. |
| `Ververica Platform namespace '<ns>' not found` | The Ververica Platform namespace in `spec.deployment.metadata.namespace` does not exist in Ververica Platform. | Check the namespace name. Ververica Platform namespaces are separate from Kubernetes namespaces and are configured in the Ververica Platform UI. |
| `Deployment target '<name>' not found in Ververica Platform namespace '<ns>'` | The deployment target does not exist in the specified Ververica Platform namespace. | Verify the deployment target name in the Ververica Platform UI. |
| `Session cluster '<name>' not found in Ververica Platform namespace '<ns>'` | The session cluster specified in the CR does not exist. | Verify the session cluster name, or remove the field if you are not using session mode. |
| `Invalid CR fields: ...` | Cross-field validation failed (for example, missing PUT-mode required fields or an unsupported artifact kind). | Read the error details. They list the specific fields that failed validation. See Field Constraints. |
| `Deployment '<id>' is already managed by CR '<ns>/<name>'` | Another CR already manages the same Ververica Platform deployment. | Each Ververica Platform deployment can only be managed by one CR. Update the existing CR instead. |
| `Cannot delete CR ... deployment must not be RUNNING or TRANSITIONING` | A CR deletion was attempted while the deployment was still active. | Set the deployment to `CANCELLED` first. See Delete a Deployment. |
| `Cannot delete CR ... sync is disabled` | A CR deletion was attempted while sync is disabled. | Re-enable sync before deleting. Contact your platform administrator if sync was not intentionally disabled. |
| `Missing required annotation 'apikeySecretName'` | The CR has no `apikeySecretName` annotation and API key validation is required. | Add the annotation pointing to a Secret containing a valid Ververica Platform API token, or set `webhook.apiKeyValidation.required: false` for testing. |
| `Secret '<name>' not found in namespace '<ns>'` | The Secret referenced by `apikeySecretName` does not exist in the CR's Kubernetes namespace. | Create the Secret in the same namespace as the CR. |
| `Secret '<name>' missing 'apikey' key` | The Secret exists but has no `apikey` data field. | Add the `apikey` key containing a base64-encoded Ververica Platform API token. |
| `API key is invalid or unauthorized` | The token was rejected by the api-gateway because it is unknown or lacks the required role bindings. | Verify the token exists in Ververica Platform and has workspace and namespace role bindings. |
| `Unable to validate API key: gateway unavailable` | The api-gateway is unreachable. The webhook fails closed for security. | Verify the api-gateway pod is running and healthy. |