Migration from Ververica Platform 2
This page covers the differences between the Ververica Platform 2 and Ververica Platform 3 Kubernetes Operators and provides a step-by-step migration guide.
The CRD kind (VvpDeployment) and API version (ververica.platform/v1) remain the same. However, the CR structure, operator behavior, and Helm configuration have changed. Existing Ververica Platform 2 CRs are not directly compatible with Ververica Platform 3 and must be updated before you enable the Ververica Platform 3 operator.
Behavioral Changes
Operator-Managed Deployments Are Now Write-Protected
In Ververica Platform 2, you could edit operator-managed deployments through the Ververica Platform UI or API. The operator would silently revert your changes at the next reconciliation cycle, which caused confusion.
In Ververica Platform 3, the UI and API block direct modifications to operator-managed deployments. All changes must be made by updating the CR. The Ververica Platform UI displays a badge on operator-managed deployments to indicate they are read-only.
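For example, a change that would previously have been made through the UI is now made against the CR and reconciled by the operator; a minimal sketch (the deployment name and namespace are placeholders):

```shell
# Operator-managed deployments are read-only in the UI and API;
# edit the CR instead and let the operator reconcile the change.
kubectl edit vvpdeployment my-deployment -n my-flink-deployments
```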
Multiple Operators Can Share a Namespace
The Ververica Platform 2 operator allowed only one operator instance per watched Kubernetes namespace, enforced through a ConfigMapLock (ververica-operator-leader-election).
The Ververica Platform 3 operator removes this restriction. Multiple operator instances can watch the same namespace. Each instance uses its instanceId to filter and reconcile only its own CRs (those with a matching ververica.platform/owner label). If you were using separate namespaces to work around the single-operator limitation, you can now consolidate to a single namespace.
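Because each instance reconciles only the CRs carrying its owner label, you can list the CRs belonging to a specific operator instance; a sketch assuming an instanceId of `my-operator`:

```shell
# List only the VvpDeployment CRs owned by the operator instance "my-operator"
kubectl get vvpdeployments -l ververica.platform/owner=my-operator -n my-flink-deployments
```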
Status Model Replaced with Conditions
Ververica Platform 2 reported CR health through status.customResourceStatus.customResourceState (SYNCING or IDLING) and mirrored Ververica Platform deployment status fields directly onto the CR.
Ververica Platform 3 replaces this with Kubernetes-standard conditions and simplified status fields. The following table maps Ververica Platform 2 status fields to their Ververica Platform 3 equivalents:
| Ververica Platform 2 field | Ververica Platform 3 equivalent |
|---|---|
| `status.customResourceStatus.customResourceState: SYNCING` | Normal operation; check `Healthy=True` |
| `status.customResourceStatus.customResourceState: IDLING` | Not applicable under normal operation |
| `status.customResourceStatus.statusState` | `status.actualState` |
| `status.customResourceStatus.observedSpecState` | `status.observedSpecState` |
| `status.customResourceStatus.deploymentId` | `status.deploymentId` |
| `status.customResourceStatus.vvpNamespace` | `status.vvpNamespace` |
| `status.deploymentStatus` | Not mirrored; check Ververica Platform directly |
| `status.deploymentSystemMetadata` | Not mirrored; check Ververica Platform directly |
If you have scripts or monitoring that check customResourceState, update them to check the Healthy condition instead. See Conditions for the full reference.
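A monitoring check updated for the new status model might look like the following sketch (`my-deployment` is a placeholder name):

```shell
# VVP 2 style check (field no longer exists):
#   kubectl get vvpdeployment my-deployment \
#     -o jsonpath='{.status.customResourceStatus.customResourceState}'
# VVP 3 style: read the Healthy condition instead
kubectl get vvpdeployment my-deployment \
  -o jsonpath='{.status.conditions[?(@.type=="Healthy")].status}'
```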
Webhook Validates CRs Before Admission
The Ververica Platform 2 operator had no admission webhook. Invalid CRs were accepted by Kubernetes and failed silently at reconciliation time, with errors only visible in operator logs.
The Ververica Platform 3 operator includes a validating admission webhook that rejects invalid CRs immediately. You receive clear error messages at kubectl apply time rather than having to inspect operator logs. The webhook validates:
- Required fields are present
- `syncingMode` is a valid value
- Referenced API key secrets exist and contain a valid token
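One way to exercise the webhook without changing anything in the cluster is a server-side dry run, which submits the CR through admission (including the validating webhook) but does not persist it:

```shell
# Validation errors surface here, before a real apply
kubectl apply -f my-deployment.yaml --dry-run=server
```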
Autopilot Integration
The Ververica Platform 2 operator had no Autopilot awareness. It overwrote Autopilot's scaling decisions with the CR's parallelism on every reconcile cycle, effectively defeating Autopilot for operator-managed deployments.
The Ververica Platform 3 operator understands that Autopilot operates at the Flink job runtime level and does not modify the deployment spec. The operator always projects the CR's parallelism to the deployment spec — this is safe because Autopilot's scaling decisions take effect at the Flink job level, not at the deployment spec level. The ParallelismAligned condition provides observability into Autopilot's runtime scaling.
If you were working around this Ververica Platform 2 limitation (for example, by omitting parallelism from CRs or disabling Autopilot for operator-managed deployments), those workarounds are no longer necessary. See Autopilot Integration for details.
Events
The Ververica Platform 2 operator mirrored Ververica Platform job and deployment events to Kubernetes core/v1 Events only.
The Ververica Platform 3 operator retains this Kubernetes event mirroring and additionally emits operator-specific lifecycle events to the Ververica Platform Events API. These events (deployment created, config projected, savepoint triggered, and others) are visible in the Ververica Platform UI Events tab. See Observability for the full event reference.
CR Structure Changes
The VvpDeployment CR structure has changed between Ververica Platform 2 and Ververica Platform 3:
| Ververica Platform 2 field | Ververica Platform 3 equivalent | Notes |
|---|---|---|
| `spec.template` (VvpResourceTemplate) | `spec.deployment` (VvpDeploymentBody) | Renamed to mirror AppManager API structure |
| `spec.template.userMetadata` | `spec.deployment.metadata` | Renamed (`userMetadata` → `metadata`) |
| `spec.initialSavepointSpec` | Removed | No equivalent in Ververica Platform 3. Set the desired state in `spec.deployment.spec` instead. |
| Savepoint/restart triggering | `spec.savepointNonce` / `spec.restartNonce` | New nonce-based mechanism replacing the Ververica Platform 2 approach |
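The nonce mechanism can be sketched with a merge patch; this assumes `savepointNonce` is a numeric field and that setting it to any value different from the current one triggers a savepoint (the value itself is arbitrary):

```shell
# Trigger a savepoint by changing spec.savepointNonce to a new value
kubectl patch vvpdeployment my-deployment --type=merge \
  -p '{"spec":{"savepointNonce":42}}'
```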
Migration Steps
Follow these steps to migrate from the Ververica Platform 2 operator to the Ververica Platform 3 operator:
1. **Disable the Ververica Platform 2 operator.** Scale down the Ververica Platform 2 operator deployment to zero replicas. Do not delete the CRs yet.

   ```shell
   kubectl scale deployment <vvp2-operator-deployment> --replicas=0 -n <namespace>
   ```
2. **Update CRs to the Ververica Platform 3 structure.** Update your CR YAML files according to the CR Structure Changes table. Pay particular attention to renaming `spec.template` to `spec.deployment` and `spec.template.userMetadata` to `spec.deployment.metadata`.
3. **Enable the Ververica Platform 3 operator.** Configure the Ververica Platform 3 Helm values with `enabled: true`, `instanceId`, and `watchedNamespaces`, then apply:

   ```shell
   helm upgrade <release-name> ververica/ververica-platform \
     --set vvp-k8s-operator.enabled=true \
     --set vvp-k8s-operator.instanceId=my-operator \
     --set vvp-k8s-operator.watchedNamespaces[0]=my-flink-deployments
   ```
4. **Apply the updated CRs.** Apply your updated CR YAML files. The Ververica Platform 3 operator reconciles them and takes ownership of the existing Ververica Platform deployments.

   ```shell
   kubectl apply -f my-deployment.yaml
   ```
5. **Verify.** Check that all CRs are healthy:

   ```shell
   kubectl get vvpdeployments -o wide
   ```

   The `HEALTHY` column should show `True` for all CRs.
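The rename in step 2 of the migration can be sketched as a before/after fragment; only the affected fields are shown, and the deployment name is a placeholder:

```yaml
# Ververica Platform 2
spec:
  template:            # VvpResourceTemplate
    userMetadata:
      name: my-deployment

# Ververica Platform 3
spec:
  deployment:          # VvpDeploymentBody
    metadata:
      name: my-deployment
```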