Modify deployment configurations

After a SQL draft or a JAR/Python job is deployed, you can still modify the deployment configuration, which contains all the configuration parameters used at job startup. This section describes how to modify the deployment configuration.

note

See Connecting with external file systems for information on the s3i and s3 schemes when communicating with external S3 services through your own S3 connector (for example, for sources and sinks).

Procedure

  1. On the Dashboard page, open the console for the workspace you want to manage.

  2. In the Console navigation pane, click Deployments.

  3. Click the name of the deployment you want to configure.

  4. Click the Configuration tab, which contains expandable sections: Basic, Resources, Parameters, and Logging.

  5. Expand the section that you want to modify and click Edit at the right-hand end of its title bar.

  6. Modify the parameters.

  7. Click Save.

Basic section

  • SQL deployment: You can write SQL code and configure the Engine Version, Additional Dependencies, and Description parameters. For more information about the parameters, see Develop an SQL draft. Note: After you click Edit in the upper-right corner of the Basic section, a message appears. To modify the deployment configuration, click OK in the message; you are then redirected to the SQL Editor page, where you can edit and deploy the deployment.
  • JAR deployment: You can configure the Engine Version, JAR Uri, Entry Point Class, Entry Point Main Arguments, Additional Dependencies, Description, and Deploy to Session Cluster (Not recommended) parameters.
  • Python deployment: You can configure the Engine Version, Python Uri, Entry Module, Entry Point Main Arguments, Python Libraries, Python Archives, Additional Dependencies, Description, and Deploy to Session Cluster (Not recommended) parameters.

Resources section

  • Parallelism: The global parallelism of the deployment.
  • Job Manager CPU: Ververica recommends that you configure 1 CPU core and 4 GiB of memory for the JobManager.
  • Job Manager Memory: Unit: GiB. Example: 4 GiB. Minimum value: 1 GiB.
  • Task Manager CPU: Ververica recommends that you configure 1 CPU core and 4 GiB of memory for each TaskManager.
  • Task Manager Memory: Unit: GiB. Example: 4 GiB. Minimum value: 1 GiB.
  • Task Manager Slots: The number of slots for each TaskManager.
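
In Flink's own configuration terms, the resource parameters above roughly correspond to the following keys. This mapping is an assumption for illustration, not the platform's documented internals; the values shown reflect the recommendations in the table.

```yaml
# Sketch: standard Flink configuration keys that roughly correspond to
# the Resources parameters (mapping is an assumption; values are examples).
parallelism.default: 4                 # Parallelism
kubernetes.jobmanager.cpu: 1.0         # Job Manager CPU (recommended: 1 core)
jobmanager.memory.process.size: 4g     # Job Manager Memory (recommended: 4 GiB)
kubernetes.taskmanager.cpu: 1.0        # Task Manager CPU (recommended: 1 core)
taskmanager.memory.process.size: 4g    # Task Manager Memory (recommended: 4 GiB)
taskmanager.numberOfTaskSlots: 1       # Task Manager Slots
```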

Parameters section

  • Checkpointing Interval: The interval at which checkpoints are scheduled. If you do not configure this parameter, checkpointing is disabled.
  • Min Interval Between Checkpoints: The minimum pause between the end of one checkpoint and the start of the next. If the maximum number of concurrent checkpoints is 1, this parameter specifies the minimum interval between two consecutive checkpoints.
  • State Expiration Time: If state data is retained longer than the time specified by this parameter, the system automatically removes the expired state data to release disk space.
  • Flink Restart Policy: See below.
  • Number of Restart Attempts: The number of times the system attempts to restart a deployment after it fails.
  • Delay Between Restart Attempts: The interval at which the system attempts to restart a failed deployment.
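
As a point of reference, the checkpointing and state parameters above correspond to standard Flink settings such as the following. The mapping is an assumption for illustration; the values are examples, not defaults.

```yaml
# Sketch: standard Flink settings behind the checkpointing parameters
# (mapping is an assumption; values are examples).
execution.checkpointing.interval: 3min    # Checkpointing Interval
execution.checkpointing.min-pause: 30s    # Min Interval Between Checkpoints
table.exec.state.ttl: 36h                 # State Expiration Time (SQL jobs)
```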

If a task fails and checkpointing is disabled, the JobManager cannot be restarted. If checkpointing is enabled, the JobManager is restarted according to the selected policy. Valid values are:

  • Failure Rate: The JobManager is restarted if the number of failures within the specified interval exceeds the upper limit. If you select Failure Rate from the Flink Restart Policy drop-down list, you must set the Failure Rate Interval, Max Failures per Interval, and Delay Between Restart Attempts parameters.
  • Fixed Delay: The JobManager is restarted at a fixed interval. If you select Fixed Delay from the Flink Restart Policy drop-down list, you must set the Number of Restart Attempts and Delay Between Restart Attempts parameters.
  • No Restarts: The JobManager is not restarted. This is the default value.
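
For comparison, the two restartable policies map onto Flink's standard restart-strategy settings roughly as follows. This correspondence is an assumption for illustration; the values are examples.

```yaml
# Sketch: standard Flink restart-strategy settings matching the options
# above (mapping is an assumption; values are examples).

# Failure Rate: restart while failures stay under the limit per interval.
restart-strategy: failure-rate
restart-strategy.failure-rate.failure-rate-interval: 5min   # Failure Rate Interval
restart-strategy.failure-rate.max-failures-per-interval: 3  # Max Failures per Interval
restart-strategy.failure-rate.delay: 10s                    # Delay Between Restart Attempts

# Fixed Delay (alternative): restart a fixed number of times.
# restart-strategy: fixed-delay
# restart-strategy.fixed-delay.attempts: 3                  # Number of Restart Attempts
# restart-strategy.fixed-delay.delay: 10s                   # Delay Between Restart Attempts
```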

Logging section

  • Log Archiving: Turns on log archiving for the deployment. By default, Allow Log Archives is turned on. After you turn on Allow Log Archives, you can view the logs of a historical job on the Logs tab. For more information, see View the logs of a historical job.
  • Log Archives Expires: The retention period for archived log files. By default, archived log files are retained for seven days.
  • Root Log Level: See below.
  • Log Levels: Enter the log name and log level.
  • Logging Profile: You can set this parameter to default or Custom Template.

Root log level

You can specify the following log levels, listed in ascending order of severity.

  1. TRACE: records finer-grained information than DEBUG logs.
  2. DEBUG: records the status of the system.
  3. INFO: records important system information.
  4. WARN: records the information about potential issues.
  5. ERROR: records the information about errors and exceptions that occur.
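
As an illustration, a Log4j-style properties configuration (the logging framework Flink uses) would express the Root Log Level and a per-logger override from the Log Levels parameter like this; the logger name and levels are example values, not platform defaults.

```properties
# Root Log Level: applies to all loggers unless overridden.
rootLogger.level = INFO

# Per-logger override, as entered in the Log Levels parameter
# (logger name and level are example values).
logger.flink.name = org.apache.flink
logger.flink.level = DEBUG
```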