
Audit Logs

Ververica Platform records user activity through audit logs, capturing every action you perform through the UI or API. Each action appears twice in the audit log: the first entry is marked ISSUED when the platform receives the action, and the second is marked EXECUTED when the platform processes it. Both entries share a traceId that links them together and lets you correlate them with application-level logs.

Log Schema

Ververica Platform writes each audit log entry in JSON format. Each entry contains the following fields:

  • schemaVersion: The version of the audit log schema.
  • timestamp: The time when the action occurred.
  • traceId: A unique identifier linking the ISSUED and EXECUTED entries for a single action. You can also use it to correlate with application-level logs.
  • actor: The entity that triggered the action. For unauthenticated requests, this is the source IP address; for authenticated requests, it is the platform user identifier.
  • executionLevel: Either ISSUED (the platform received the action but has not yet processed it) or EXECUTED (the platform processed the action).
  • actionName: The name of the action, for example createDeployment or deleteSessionCluster.
  • resourceIdentifiers: A JSON representation of the parameters identifying the resource the action was performed on.
  • result: Either SUCCESS or FAILURE. Present only on EXECUTED entries.
  • state: A JSON representation of the response for actions that completed with SUCCESS. Present only on EXECUTED entries.
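Because the ISSUED and EXECUTED entries for one action share a traceId, they can be joined when post-processing the log. The sketch below groups JSON entries by traceId; the two sample entries are hypothetical and their field values are illustrative only, not real platform output.

```python
import json
from collections import defaultdict

def pair_by_trace_id(lines):
    """Group audit log entries by traceId, keyed by executionLevel."""
    pairs = defaultdict(dict)
    for line in lines:
        entry = json.loads(line)
        pairs[entry["traceId"]][entry["executionLevel"]] = entry
    return pairs

# Hypothetical entries for illustration (values are made up).
log_lines = [
    json.dumps({"schemaVersion": "1", "timestamp": "2024-05-01T12:00:00Z",
                "traceId": "abc-123", "actor": "jane.doe",
                "executionLevel": "ISSUED", "actionName": "createDeployment",
                "resourceIdentifiers": {"deploymentId": "d-42"}}),
    json.dumps({"schemaVersion": "1", "timestamp": "2024-05-01T12:00:01Z",
                "traceId": "abc-123", "actor": "jane.doe",
                "executionLevel": "EXECUTED", "actionName": "createDeployment",
                "resourceIdentifiers": {"deploymentId": "d-42"},
                "result": "SUCCESS", "state": {"status": "RUNNING"}}),
]

pairs = pair_by_trace_id(log_lines)
issued = pairs["abc-123"]["ISSUED"]      # received, not yet processed
executed = pairs["abc-123"]["EXECUTED"]  # processed; carries result and state
```

Note that result and state appear only on the EXECUTED entry, so checking an action's outcome always means looking at the second half of the pair.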

Logged Events

The audit logging system captures the following events:

  • Workspaces: CREATE_WORKSPACE, DELETE_WORKSPACE
  • Members: ADD_MEMBER, UPDATE_MEMBER, DELETE_MEMBER, CHANGE_ROLE_MEMBER
  • Secret Values: ADD_SECRET, DELETE_SECRET
  • Private Connections: ADD_PRIVATE_CONNECTION, DELETE_PRIVATE_CONNECTION, CHANGE_PRIVATE_CONNECTION, EDIT_AWS_IAM_ROLE
  • Deployment Defaults: CHANGE_PARAMETER
  • Deployments: CREATE_DEPLOYMENT, UPDATE_DEPLOYMENT, START_DEPLOYMENT, CANCEL_DEPLOYMENT, DELETE_DEPLOYMENT, CREATE_DEPLOYMENT_DRAFT, UPDATE_DEPLOYMENT_DRAFT
  • Jobs: START_JOB, STOP_JOB
  • Artifacts: UPLOAD_ARTIFACT, DELETE_ARTIFACT, CREATE_UDF_ARTIFACT, UPDATE_UDF_ARTIFACT
  • Session Clusters: CREATE_SESSION_CLUSTER, DELETE_SESSION_CLUSTER
  • Autopilot: CREATE_SCHEDULED_PLAN, APPLY_SCHEDULED_PLAN, UPDATE_AUTOPILOT_POLICY, SWITCH_AUTOPILOT_POLICY_MODE
  • Savepoints: CREATE_SAVEPOINT, DELETE_SAVEPOINT
  • Folders and Variables: CREATE_FOLDER, UPDATE_FOLDER, CREATE_VARIABLE, UPDATE_VARIABLE
  • Authentication: LOGIN, LOGOUT, WORKSPACE_ACCESS

Enable Audit Logs

Audit log collection is disabled by default. To enable it, set the following in your api-gateway Helm chart values:

audit:
  enabled: true

Configure Storage

Ververica Platform writes audit logs to a local directory on the cluster. The following parameters control log storage behavior:

  • audit.logDirectory (default: /var/log/vvp-audit): Directory where Ververica Platform writes audit log files.
  • audit.auditLogMaxFileSize (default: 100MB): Maximum size of a single log file before rotation.
  • audit.auditLogMaxSize (default: 1GB): Maximum combined size of all log files.
  • audit.auditLogMaxRetentionDays (default: 7): Number of days Ververica Platform retains log files before deleting them automatically. The maximum is 365 days.
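Since each entry is a JSON object, the files in audit.logDirectory can be scanned directly, for example to surface failed actions. The sketch below assumes one JSON entry per line and makes no assumption about file names beyond "files in the log directory"; the exact on-disk layout is not documented here.

```python
import json
import pathlib

def failed_actions(log_dir):
    """Yield (timestamp, actor, actionName) for EXECUTED entries with result FAILURE.

    Assumes one JSON audit entry per line; the file layout under
    audit.logDirectory is an assumption, not documented behavior.
    """
    for path in sorted(pathlib.Path(log_dir).glob("*")):
        for line in path.read_text().splitlines():
            entry = json.loads(line)
            if entry.get("executionLevel") == "EXECUTED" and entry.get("result") == "FAILURE":
                yield entry["timestamp"], entry["actor"], entry["actionName"]
```

For example, `list(failed_actions("/var/log/vvp-audit"))` would return one tuple per failed action recorded under the default log directory.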

To customize storage settings, add them to your api-gateway Helm chart values alongside audit.enabled:

audit:
  enabled: true
  logDirectory: /var/log/vvp-audit
  auditLogMaxFileSize: 100MB
  auditLogMaxSize: 1GB
  auditLogMaxRetentionDays: 30
Note: Regardless of the configured retention period, Ververica Platform retains a separate internal copy of audit logs for 30 days to meet mandatory security requirements. This internal copy is not accessible to end users.

Export Audit Logs to Kafka

The recommended approach for forwarding audit logs to Kafka is to attach a FluentBit sidecar to the api-gateway pod. FluentBit tails the local audit log files and streams entries to a Kafka topic. It uses built-in filesystem buffering to handle backpressure.

To enable FluentBit-based export, provide your broker addresses and topic name in your api-gateway Helm chart values:

audit:
  enabled: true
  fluentbit:
    enabled: true
    kafka:
      brokers: '<your-kafka-brokers>'
      topic: '<your-audit-log-topic>'
Note: Audit log delivery to Kafka operates on a best-effort basis. If the Kafka broker is temporarily unavailable, FluentBit buffers events in local storage and delivers them once the connection is restored. You might lose events only if both the broker and the local buffer are exhausted simultaneously.

The following table lists the FluentBit configuration options and their defaults:

  • audit.fluentbit.image (default: cr.fluentbit.io/fluent/fluent-bit:3.1.9): Container image for the FluentBit sidecar.
  • audit.fluentbit.flushSeconds (default: 90): Interval in seconds between Kafka flush attempts.
  • audit.fluentbit.storage.totalSize (default: 1800M): Total filesystem buffer size for backpressure handling.
  • audit.fluentbit.storage.maxChunksUp (default: 128): Maximum number of buffer chunks held in memory at once.
  • audit.fluentbit.storage.backlogMemLimit (default: 32M): Memory limit for processing buffered backlog chunks.

For advanced Kafka connection settings such as TLS or SASL authentication, modify the OUTPUT section of the FluentBit ConfigMap directly using rdkafka.* properties. See the FluentBit Kafka output documentation for available options.
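As a sketch, a SASL_SSL connection could be configured by adding librdkafka client properties to the OUTPUT section. The values below are placeholders, and the exact section layout depends on the ConfigMap your chart version generates; treat this as an illustration of the rdkafka.* passthrough, not the literal generated config.

```conf
[OUTPUT]
    Name                       kafka
    Match                      *
    Brokers                    <your-kafka-brokers>
    Topics                     <your-audit-log-topic>
    # librdkafka client properties are passed through via the rdkafka. prefix
    rdkafka.security.protocol  SASL_SSL
    rdkafka.sasl.mechanisms    PLAIN
    rdkafka.sasl.username      <username>
    rdkafka.sasl.password      <password>
    rdkafka.ssl.ca.location    /etc/ssl/certs/ca.crt
```

Any property librdkafka accepts can be supplied this way; consult the librdkafka configuration reference for the full list.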

Limitations

  • Audit logs capture platform-level actions you perform through the Ververica Platform UI or API. Ververica Platform does not record events from Flink jobs themselves.
  • This release does not include advanced log querying, a built-in filtering UI, or per-log-entry access control.
  • Audit log delivery to Kafka is best-effort. You might lose events if both the Kafka broker and the local FluentBit buffer are unavailable at the same time.