
MCP Server (Experimental)

The vvctl CLI includes a built-in Model Context Protocol (MCP) server that lets AI coding assistants manage your Ververica Cloud infrastructure directly. It exposes tools that Large Language Models (LLMs) can use to create and manage deployments and SQL scripts across the Managed Service, Self-Managed, and BYOC deployment options.

All MCP traffic uses local stdio (vvctl mcp start), so no extra ports are opened.

warning

This feature is currently experimental. You can use it, but Ververica does not recommend it for production environments.

Key Features

  • SQL script draft management
  • Deployment management for Java, Python, and SQL jobs
  • Artifact management
  • Secrets management
  • Deployment logs for debugging
  • SQL script execution
  • Context management

Architecture

The MCP server communicates over stdio — the AI assistant spawns vvctl mcp start as a subprocess, sends JSON-RPC requests to stdin, and reads responses from stdout. No network server is required.

AI Assistant  ←── stdio (stdin/stdout) ──→  vvctl mcp start
                                            ├── Settings (config/contexts/users/servers)
                                            └── Ververica Cloud REST API

Protocol: MCP v2024-11-05
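As a sketch of what travels over that pipe: the messages are plain JSON-RPC 2.0 objects. The helper below builds the two most common ones — an initialize handshake and a tools/call for list_workspaces. The client name and version are illustrative placeholders, not taken from any specific client.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request object as used by MCP over stdio."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# The first message a client sends after spawning `vvctl mcp start`:
init = make_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})

# A subsequent tool invocation:
call = make_request(2, "tools/call", {"name": "list_workspaces", "arguments": {}})

print(init)
print(call)
```

In practice the MCP client writes these requests to the subprocess's stdin and reads responses from its stdout; the client libraries handle the framing for you.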

Prerequisites

Before configuring the MCP server:

  • vvctl installed and on your PATH. See the installation guide.
  • An active Ververica Cloud account with an API token or email/password credentials.
  • An existing vvctl configuration. Run vvctl login at least once before starting the MCP server.

Get Started

To use the MCP server, configure it in your AI client using the vvctl mcp start command.

Standard Configuration

This configuration works in most MCP clients:

{
  "mcpServers": {
    "ververica": {
      "command": "vvctl",
      "args": ["mcp", "start"]
    }
  }
}

See the per-client instructions below for specific setup steps.

Amp

Add the server through the Amp VS Code extension Settings screen or by updating your settings.json file:

"amp.mcpServers": {
  "ververica": {
    "command": "vvctl",
    "args": ["mcp", "start"]
  }
}

Amp CLI

Add the server using the amp mcp add command:

amp mcp add ververica -- vvctl mcp start

Claude Code

Add the MCP server to your project or global settings:

# Project-scoped (recommended)
claude mcp add ververica -- vvctl mcp start

# Global (available in all projects)
claude mcp add --scope user ververica -- vvctl mcp start

Or add it manually to .claude/settings.json or ~/.claude/settings.json:

{
  "mcpServers": {
    "ververica": {
      "command": "vvctl",
      "args": ["mcp", "start"]
    }
  }
}

Claude Desktop

Follow the MCP installation guide and use the standard configuration provided above.

Cline

Follow the instructions in Configuring MCP Servers.

Add the following to your cline_mcp_settings.json file:

{
  "mcpServers": {
    "ververica": {
      "type": "stdio",
      "command": "vvctl",
      "timeout": 30,
      "args": ["mcp", "start"],
      "disabled": false
    }
  }
}

Codex

Use the Codex CLI to add the MCP server:

codex mcp add ververica vvctl mcp start

Or create or edit ~/.codex/config.toml and add:

[mcp_servers.ververica]
command = "vvctl"
args = ["mcp", "start"]

For more information, see the Codex MCP documentation.

Copilot

Use the Copilot CLI to interactively add the MCP server:

/mcp add

Or create or edit ~/.copilot/mcp-config.json and add:

{
  "mcpServers": {
    "ververica": {
      "type": "local",
      "command": "vvctl",
      "tools": ["*"],
      "args": ["mcp", "start"]
    }
  }
}

For more information, see the Copilot CLI documentation.

Cursor
  1. Go to Cursor Settings > MCP > Add new MCP Server.
  2. Enter a name for the server.
  3. Select the command type.
  4. Enter the command vvctl mcp start.

You can also verify the configuration or add command-line arguments by clicking Edit.

Alternatively, edit .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "ververica": {
      "command": "vvctl",
      "args": ["mcp", "start"]
    }
  }
}

Factory

Use the Factory CLI to add the MCP server:

droid mcp add ververica "vvctl mcp start"

Or type /mcp within Factory Droid to open an interactive UI for managing MCP servers.

For more information, see the Factory MCP documentation.

Gemini CLI

Follow the MCP installation guide and use the standard configuration provided above.

Goose
  1. Go to Advanced settings > Extensions > Add custom extension.
  2. Enter a name for the extension.
  3. Select the STDIO type.
  4. Set the command to vvctl mcp start.
  5. Click Add Extension.

Kiro

Follow the Kiro MCP documentation. For example, add the following to .kiro/settings/mcp.json:

{
  "mcpServers": {
    "ververica": {
      "command": "vvctl",
      "args": ["mcp", "start"]
    }
  }
}

LM Studio
  1. Go to Program in the right sidebar.
  2. Select Install > Edit mcp.json.
  3. Use the standard configuration provided above.

opencode

Follow the MCP Servers documentation. For example, add the following to ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "ververica": {
      "type": "local",
      "command": ["vvctl", "mcp", "start"],
      "enabled": true
    }
  }
}

Qodo Gen
  1. Open the Qodo Gen chat panel in VS Code or IntelliJ.
  2. Select Connect more tools > + Add new MCP.
  3. Paste the standard configuration provided above.
  4. Click Save.

VS Code

Follow the MCP installation guide and use the standard configuration provided above.

You can also install the MCP server using the VS Code CLI:

code --add-mcp '{"name":"ververica","command":"vvctl","args":["mcp","start"]}'

After installation, the MCP server is available for use with GitHub Copilot in VS Code.

Warp
  1. Go to Settings > AI > Manage MCP Servers > + Add.
  2. Use the standard configuration provided above.

Or use the slash command /add-mcp in the Warp prompt and paste the standard configuration. For more information, see adding an MCP server.

Windsurf

Follow the Windsurf MCP documentation and use the standard configuration provided above.

Verify the Connection

Once configured, ask your AI assistant:

"List my Ververica workspaces"

The assistant should call the list_workspaces tool and return your workspace list. If authentication fails, ask the assistant to call the login tool with your credentials or API token first.

Authentication

The MCP server includes tools to manage contexts, letting the LLM switch between workspaces or accounts. This is useful when working across multiple environments.

For initial authentication, the MCP server supports API token and email/password login through the login tool. You can also pass credentials using the VV_API_TOKEN, VV_EMAIL, and VV_PASSWORD environment variables.
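For clients whose MCP server config supports an env block (most stdio-based clients do — check your client's documentation), the token can be supplied directly in the server entry rather than through the login tool. A sketch, assuming your client honors the env key:

```json
{
  "mcpServers": {
    "ververica": {
      "command": "vvctl",
      "args": ["mcp", "start"],
      "env": {
        "VV_API_TOKEN": "<your-api-token>"
      }
    }
  }
}
```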

Tool Reference

The MCP server exposes 66 tools across 12 domains.

note

Tools that accept a workspace parameter require the workspace ID (UUID). Tools that accept a namespace parameter default to the VV_NAMESPACE environment variable, or "default" if omitted.

Account

| Tool | Description |
| --- | --- |
| login | Log in using email/password or API token |
| logout | Log out and clear the current session |
| show_profile | Show the authenticated user's profile (ID, name, email) |

login parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| email | string | No | Email address |
| password | string | No | Password |
| token | string | No | API token (use instead of email/password) |

Provide either token alone, or email and password.

Workspaces

| Tool | Description |
| --- | --- |
| list_workspaces | List all workspaces accessible to the authenticated user |
| list_engines | List available Apache Flink engine versions for a workspace |

list_engines requires a workspace parameter (workspace ID).

Deployments

| Tool | Description |
| --- | --- |
| list_deployments | List all deployments in a workspace |
| get_deployment | Get full details of a deployment |
| create_jar_deployment | Create a deployment from a JAR artifact |
| create_sql_deployment | Deploy a SQL draft, file, or inline query |
| create_python_deployment | Create a deployment from a Python artifact |
| start_deployment | Start a job for a deployment |
| stop_deployment | Stop the running job for a deployment |
| delete_deployment | Delete a deployment |

create_sql_deployment parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| workspace | string | Yes | Workspace ID |
| namespace | string | No | Namespace |
| draft | string | No | Draft ID to deploy |
| file | string | No | Path to SQL file |
| query | string | No | Inline SQL query |
| comment | string | No | Comment for draft metadata |
| label | string[] | No | Labels in key=value format |
| deployment_target | string | No | Deployment target name or ID |
| skip_validation | boolean | No | Skip validation before deploy |
| flink_configuration | string | No | Path to Flink config file (YAML or JSON) |
| resources | string | No | Path to resources config file (YAML or JSON) |
| engine | string | No | Engine version override |
| mode | string | No | Processing mode: STREAM or BATCH |

Provide one of draft, file, or query.
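As a concrete example, the inline-query variant looks like this on the wire — the arguments come from the table above, and the JSON-RPC envelope follows the MCP v2024-11-05 tools/call shape (the id and workspace values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "create_sql_deployment",
    "arguments": {
      "workspace": "ws-456",
      "query": "SELECT * FROM orders WHERE amount > 100",
      "mode": "STREAM"
    }
  }
}
```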

stop_deployment parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| workspace | string | Yes | Workspace ID |
| namespace | string | No | Namespace |
| deployment_id | string | Yes | Deployment ID (UUID) |
| stop_strategy | string | No | NONE, SAVEPOINT, or DRAIN (default: NONE) |

Drafts

| Tool | Description |
| --- | --- |
| list_drafts | List SQL deployment drafts in a workspace |
| get_draft | Get a draft by ID |
| create_draft | Create a new SQL draft from a file or inline query |
| validate_draft | Validate a SQL draft without deploying |
| execute_draft | Execute a SQL draft, file, or inline query |
| delete_draft | Delete a deployment draft |

create_draft parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| workspace | string | Yes | Workspace ID |
| namespace | string | No | Namespace |
| name | string | Yes | Draft name |
| path | string | No | Path to SQL file |
| query | string | No | Inline SQL query |
| folder | string | No | Folder ID |
| deployment_target | string | No | Deployment target ID |
| engine | string | No | Engine version override |
| mode | string | No | STREAM or BATCH |

Provide either path or query.

validate_draft parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| workspace | string | Yes | Workspace ID |
| namespace | string | No | Namespace |
| draft | string | No | Draft ID |
| file | string | No | Path to SQL file |
| query | string | No | Inline SQL query |
| flink_configuration | string | No | Path to Flink config file |
| engine | string | No | Engine version |
| mode | string | No | STREAM or BATCH |

Returns: Validation result with valid flag and message.

Jobs

| Tool | Description |
| --- | --- |
| list_jobs | List jobs for a deployment |
| get_job | Get details for a specific job |

list_jobs parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| workspace | string | Yes | Workspace ID |
| namespace | string | No | Namespace |
| deployment_id | string | Yes | Deployment ID |
| status | string | No | Filter by status: RUNNING, FAILED, CANCELLED, or FINISHED |

Task Managers

| Tool | Description |
| --- | --- |
| list_task_managers | List task managers for a running job |
| get_task_manager | Get detailed information about a task manager |

Use list_task_managers first to discover taskmanager_id values needed by log tools.

Logs

The log tools cover both running and stopped (archived) deployments. For running deployments, use the live log tools. For stopped deployments, list the archived logs first, then fetch their contents.

Running deployment logs

| Tool | Description |
| --- | --- |
| get_startup_log | Fetch the latest JobManager startup log for a deployment |
| get_jobmanager_log | Get JobManager log content; supports partial reads via offset and length |
| get_jobmanager_log_length | Get the total byte length of a JobManager log |
| get_jobmanager_stdout | Get the stdout output of a JobManager |
| get_taskmanager_log | Get a TaskManager log; supports partial reads via offset and length |
| get_taskmanager_log_length | Get the total byte length of a TaskManager log |
| get_taskmanager_stdout | Get the stdout output of a TaskManager |

Archived logs (stopped deployments)

| Tool | Description |
| --- | --- |
| list_archived_jobmanager_logs | List archived JobManager log files for a stopped job |
| get_archived_jobmanager_log | Get the contents of an archived JobManager log; supports pagination via page_size and page_index |
| list_archived_taskmanagers | List archived TaskManagers for a stopped job |
| list_archived_taskmanager_logs | List archived log files for a specific TaskManager |
| get_archived_taskmanager_log | Get the contents of an archived TaskManager log; supports pagination via page_size and page_index |
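For large logs, the length and offset/length tools pair naturally into a chunked-read loop. The sketch below shows the pattern an assistant might follow; call_tool is a hypothetical stand-in for whatever invocation method your MCP client provides, and the stub at the bottom exists only so the loop can be exercised offline.

```python
def read_log_in_chunks(call_tool, workspace, job_id, chunk=65536):
    """Fetch a JobManager log in fixed-size pieces using offset/length."""
    # Ask for the total byte length first, then page through it.
    total = call_tool("get_jobmanager_log_length",
                      {"workspace": workspace, "job_id": job_id})
    parts = []
    offset = 0
    while offset < total:
        parts.append(call_tool("get_jobmanager_log", {
            "workspace": workspace, "job_id": job_id,
            "offset": offset, "length": chunk,
        }))
        offset += chunk
    return "".join(parts)

# Tiny in-memory stub (not vvctl) so the control flow can be checked:
fake_log = "line-a\nline-b\n" * 10

def fake_call(name, args):
    if name == "get_jobmanager_log_length":
        return len(fake_log)
    return fake_log[args["offset"]:args["offset"] + args["length"]]

assembled = read_log_in_chunks(fake_call, "ws-456", "job-1", chunk=16)
```

The same loop works for get_taskmanager_log with an added taskmanager_id argument (discovered via list_task_managers).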

Artifacts

| Tool | Description |
| --- | --- |
| list_artifacts | List artifacts in a workspace |
| get_artifact | Get metadata for an artifact by filename |
| create_artifact | Upload a local artifact file to a workspace |
| delete_artifact | Delete an artifact by filename |

Secrets

note

Secret values are never returned by any tool.

| Tool | Description |
| --- | --- |
| list_secrets | List secrets in a workspace and namespace |
| create_secret | Create a secret |
| delete_secret | Delete a secret by name |

Resource Queues

| Tool | Description |
| --- | --- |
| list_resource_queues | List resource queues for a workspace |
| get_resource_queue | Get a resource queue by name |
| create_resource_queue | Create a resource queue |
| update_resource_queue | Update a resource queue's CPU allocation |
| delete_resource_queue | Delete a resource queue |

Agents

| Tool | Description |
| --- | --- |
| list_agents | List all registered agents |
| get_agent | Get an agent by ID |
| create_agent | Register a new agent |
| show_agent_values | Get the Helm values YAML for an agent |
| install_agent_plan | Generate a Helm install plan for deploying an agent to Kubernetes |
| uninstall_agent_plan | Generate commands to uninstall an agent from Kubernetes |
| delete_agent | Delete an agent by ID |

Configuration

These tools manage the local vvctl configuration file, which stores servers, users, and contexts (similar to kubectl config).

| Tool | Description |
| --- | --- |
| view_config | Render the full configuration |
| get_servers | List configured servers |
| set_server | Add or update a server entry |
| delete_server | Remove a server entry |
| get_users | List configured users |
| set_user | Add or update user credentials |
| delete_user | Remove a user entry |
| get_contexts | List configured contexts |
| current_context | Get the active context |
| use_context | Switch to a different context |
| set_context | Create or update a context |
| delete_context | Remove a context |

Common Workflows

Investigate a Failed Deployment

User: "My deployment abc123 in workspace ws-456 failed. What happened?"

The AI assistant will typically:

  1. get_deployment(workspace="ws-456", deployment_id="abc123")
  2. list_jobs(workspace="ws-456", deployment_id="abc123", status="FAILED")
  3. get_startup_log(workspace="ws-456", deployment_id="abc123")
  4. get_jobmanager_log(workspace="ws-456", job_id="<from step 2>")

Check Logs for a Running Deployment

User: "Show me the logs for deployment abc123"

The AI assistant will typically:

  1. list_jobs(workspace="ws-456", deployment_id="abc123", status="RUNNING")
  2. get_jobmanager_log(workspace="ws-456", job_id="<job_id>")
  3. list_task_managers(workspace="ws-456", job_id="<job_id>")
  4. get_taskmanager_log(workspace="ws-456", job_id="<job_id>", taskmanager_id="<tm_id>")

Review Logs for a Stopped Deployment

User: "Get me the logs from the last run of deployment abc123"

The AI assistant will typically:

  1. list_jobs(workspace="ws-456", deployment_id="abc123")
  2. list_archived_jobmanager_logs(workspace="ws-456", job_id="<job_id>")
  3. get_archived_jobmanager_log(workspace="ws-456", job_id="<job_id>", log_name="<name>")
  4. list_archived_taskmanagers(workspace="ws-456", job_id="<job_id>")
  5. list_archived_taskmanager_logs(workspace="ws-456", job_id="<job_id>", taskmanager_id="<tm_id>")
  6. get_archived_taskmanager_log(...)

Deploy a SQL Query

User: "Deploy this SQL to workspace ws-456: SELECT * FROM orders WHERE amount > 100"

The AI assistant will use one of these approaches:

Option A — Direct deployment:

  1. create_sql_deployment(workspace="ws-456", query="SELECT * FROM orders WHERE amount > 100")

Option B — Draft, validate, then execute:

  1. create_draft(workspace="ws-456", name="orders-filter", query="...")
  2. validate_draft(workspace="ws-456", draft="<draft_id>")
  3. execute_draft(workspace="ws-456", draft="<draft_id>")
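Option B can be scripted end to end. As above, call_tool is a hypothetical stand-in for your MCP client's invocation method, and the response field names (draft["id"], result["valid"], result["message"]) are assumptions based on the tool descriptions in the reference; the stub exists only to exercise the control flow.

```python
def deploy_via_draft(call_tool, workspace, name, query):
    """Create a draft, validate it, and execute it only if validation passes."""
    draft = call_tool("create_draft", {
        "workspace": workspace, "name": name, "query": query,
    })
    result = call_tool("validate_draft", {
        "workspace": workspace, "draft": draft["id"],
    })
    if not result["valid"]:
        raise ValueError(f"validation failed: {result['message']}")
    return call_tool("execute_draft", {
        "workspace": workspace, "draft": draft["id"],
    })

# Stubbed transport (not vvctl) so the flow can be checked offline:
def fake_call(name, args):
    if name == "create_draft":
        return {"id": "draft-1"}
    if name == "validate_draft":
        return {"valid": True, "message": "OK"}
    return {"status": "EXECUTED", "draft": args["draft"]}

outcome = deploy_via_draft(fake_call, "ws-456", "orders-filter",
                           "SELECT * FROM orders WHERE amount > 100")
```

The validate-before-execute step is the main reason to prefer Option B: syntax or catalog errors surface before anything runs.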

Set Up a New Context

User: "Configure vvctl for our production environment"

The AI assistant will typically:

  1. set_server(name="prod", host="https://api.ververica.com")
  2. set_user(name="prod-user", token="<token>")
  3. set_context(name="production", server="prod", user="prod-user")
  4. use_context(name="production")

Tool Summary

| Domain | Count | Tools |
| --- | --- | --- |
| Account | 3 | login, logout, show_profile |
| Workspaces | 2 | list_workspaces, list_engines |
| Deployments | 8 | list_deployments, get_deployment, create_jar_deployment, create_sql_deployment, create_python_deployment, start_deployment, stop_deployment, delete_deployment |
| Drafts | 6 | list_drafts, get_draft, create_draft, validate_draft, execute_draft, delete_draft |
| Jobs | 2 | list_jobs, get_job |
| Task Managers | 2 | list_task_managers, get_task_manager |
| Logs | 12 | get_startup_log, get_jobmanager_log, get_jobmanager_log_length, get_jobmanager_stdout, get_taskmanager_log, get_taskmanager_log_length, get_taskmanager_stdout, list_archived_jobmanager_logs, get_archived_jobmanager_log, list_archived_taskmanagers, list_archived_taskmanager_logs, get_archived_taskmanager_log |
| Artifacts | 4 | list_artifacts, get_artifact, create_artifact, delete_artifact |
| Secrets | 3 | list_secrets, create_secret, delete_secret |
| Resource Queues | 5 | list_resource_queues, get_resource_queue, create_resource_queue, update_resource_queue, delete_resource_queue |
| Agents | 7 | list_agents, get_agent, create_agent, show_agent_values, install_agent_plan, uninstall_agent_plan, delete_agent |
| Configuration | 12 | view_config, get_servers, set_server, delete_server, get_users, set_user, delete_user, get_contexts, current_context, use_context, set_context, delete_context |
| Total | 66 | |