MCP Server (Experimental)
The vvctl CLI includes a built-in Model Context Protocol (MCP) server that lets AI coding assistants manage your Ververica Cloud infrastructure directly. Through it, Large Language Models (LLMs) can create and manage deployments and SQL scripts across the Managed Service, Self-Managed, and BYOC deployment options.
All MCP traffic uses local stdio (vvctl mcp start), so no extra ports are opened.
This feature is currently experimental. You can use it, but Ververica does not recommend it for production environments.
Key Features
- SQL script draft management
- Deployment management for Java, Python, and SQL
- Artifact management
- Secrets management
- Deployment logs for debugging
- Script executions
- Context management
Architecture
The MCP server communicates over stdio — the AI assistant spawns vvctl mcp start as a subprocess, sends JSON-RPC requests to stdin, and reads responses from stdout. No network server is required.
AI Assistant ←── stdio (stdin/stdout) ──→ vvctl mcp start
│
├── Settings (config/contexts/users/servers)
└── Ververica Cloud REST API
Protocol: MCP v2024-11-05
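The handshake an MCP client performs against the server can be sketched as follows. This is an illustrative Python sketch, not part of vvctl: the message shape follows the MCP v2024-11-05 specification (newline-delimited JSON-RPC over stdio), and the client name and the subprocess launch shown in the comment are placeholders.

```python
import json

def initialize_request(request_id: int = 1) -> str:
    """Build the newline-delimited JSON-RPC 'initialize' message an MCP
    client writes to the server's stdin (MCP stdio transport)."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return json.dumps(msg) + "\n"

# A real client would spawn the server and write the message to its stdin:
# proc = subprocess.Popen(["vvctl", "mcp", "start"],
#                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# proc.stdin.write(initialize_request().encode())
```

The response arrives as a single JSON line on the subprocess's stdout, after which the client sends the `initialized` notification and can begin calling tools.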
Prerequisites
Before configuring the MCP server:
- vvctl installed and on your PATH. See the installation guide.
- An active Ververica Cloud account with an API token or email/password credentials.
- An existing vvctl configuration. Run vvctl login at least once before starting the MCP server.
Get Started
To use the MCP server, configure it in your AI client using the vvctl mcp start command.
Standard Configuration
This configuration works in most MCP clients:
{
"mcpServers": {
"ververica": {
"command": "vvctl",
"args": ["mcp", "start"]
}
}
}
See the per-client instructions below for specific setup steps.
Amp
Add the server through the Amp VS Code extension Settings screen or by updating your settings.json file:
"amp.mcpServers": {
"ververica": {
"command": "vvctl",
"args": ["mcp", "start"]
}
}
Amp CLI
Add the server using the amp mcp add command:
amp mcp add ververica -- vvctl mcp start
Claude Code
Add the MCP server to your project or global settings:
# Project-scoped (recommended)
claude mcp add ververica -- vvctl mcp start
# Global (available in all projects)
claude mcp add --scope user ververica -- vvctl mcp start
Or add it manually to .claude/settings.json or ~/.claude/settings.json:
{
"mcpServers": {
"ververica": {
"command": "vvctl",
"args": ["mcp", "start"]
}
}
}
Claude Desktop
Follow the MCP installation guide and use the standard configuration provided above.
Cline
Follow the instructions in Configuring MCP Servers.
Add the following to your cline_mcp_settings.json file:
{
"mcpServers": {
"ververica": {
"type": "stdio",
"command": "vvctl",
"timeout": 30,
"args": ["mcp", "start"],
"disabled": false
}
}
}
Codex
Use the Codex CLI to add the MCP server:
codex mcp add ververica vvctl mcp start
Or create or edit ~/.codex/config.toml and add:
[mcp_servers.ververica]
command = "vvctl"
args = ["mcp", "start"]
For more information, see the Codex MCP documentation.
Copilot
Use the Copilot CLI to interactively add the MCP server:
/mcp add
Or create or edit ~/.copilot/mcp-config.json and add:
{
"mcpServers": {
"ververica": {
"type": "local",
"command": "vvctl",
"tools": ["*"],
"args": ["mcp", "start"]
}
}
}
For more information, see the Copilot CLI documentation.
Cursor
- Go to Cursor Settings > MCP > Add new MCP Server.
- Enter a name for the server.
- Select the command type.
- Enter the command vvctl mcp start.
You can also verify the configuration or add command-line arguments by clicking Edit.
Alternatively, edit .cursor/mcp.json in your project root:
{
"mcpServers": {
"ververica": {
"command": "vvctl",
"args": ["mcp", "start"]
}
}
}
Factory
Use the Factory CLI to add the MCP server:
droid mcp add ververica "vvctl mcp start"
Or type /mcp within Factory Droid to open an interactive UI for managing MCP servers.
For more information, see the Factory MCP documentation.
Gemini CLI
Follow the MCP installation guide and use the standard configuration provided above.
Goose
- Go to Advanced settings > Extensions > Add custom extension.
- Enter a name for the extension.
- Select the STDIO type.
- Set the command to vvctl mcp start.
- Click Add Extension.
Kiro
Follow the Kiro MCP documentation. For example, add the following to .kiro/settings/mcp.json:
{
"mcpServers": {
"ververica": {
"command": "vvctl",
"args": ["mcp", "start"]
}
}
}
LM Studio
- Go to Program in the right sidebar.
- Select Install > Edit mcp.json.
- Use the standard configuration provided above.
opencode
Follow the MCP Servers documentation. For example, add the following to ~/.config/opencode/opencode.json:
{
"$schema": "https://opencode.ai/config.json",
"mcp": {
"ververica": {
"type": "local",
"command": ["vvctl", "mcp", "start"],
"enabled": true
}
}
}
Qodo Gen
- Open the Qodo Gen chat panel in VS Code or IntelliJ.
- Select Connect more tools > + Add new MCP.
- Paste the standard configuration provided above.
- Click Save.
VS Code
Follow the MCP installation guide and use the standard configuration provided above.
You can also install the MCP server using the VS Code CLI:
code --add-mcp '{"name":"ververica","command":"vvctl","args":["mcp","start"]}'
After installation, the MCP server is available for use with GitHub Copilot in VS Code.
Warp
- Go to Settings > AI > Manage MCP Servers > + Add.
- Use the standard configuration provided above.
Or use the slash command /add-mcp in the Warp prompt and paste the standard configuration. For more information, see adding an MCP server.
Windsurf
Follow the Windsurf MCP documentation and use the standard configuration provided above.
Verify the Connection
Once configured, ask your AI assistant:
"List my Ververica workspaces"
The assistant should call the list_workspaces tool and return your workspace list. If authentication fails, ask it to run login with your credentials or API token first.
Authentication
The MCP server includes tools to manage contexts, letting the LLM switch between workspaces or accounts. This is useful when working across multiple environments.
For initial authentication, the MCP server supports API token and email/password login through the login tool. You can also pass credentials using the VV_API_TOKEN, VV_EMAIL, and VV_PASSWORD environment variables.
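If your client supports it, these credentials can be passed in the server configuration itself. Many MCP clients accept an env map alongside command and args (check your client's documentation); a sketch of the standard configuration extended this way:

```json
{
  "mcpServers": {
    "ververica": {
      "command": "vvctl",
      "args": ["mcp", "start"],
      "env": {
        "VV_API_TOKEN": "<your-api-token>"
      }
    }
  }
}
```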
Tool Reference
The MCP server exposes 66 tools across 12 domains.
Tools that accept a workspace parameter require the workspace ID (UUID). Tools that accept a namespace parameter default to the VV_NAMESPACE environment variable, or "default" if omitted.
Account
| Tool | Description |
|---|---|
login | Log in using email/password or API token |
logout | Log out and clear the current session |
show_profile | Show the authenticated user's profile (ID, name, email) |
login parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
email | string | No | Email address |
password | string | No | Password |
token | string | No | API token (use instead of email/password) |
Provide either token alone, or email and password.
Workspaces
| Tool | Description |
|---|---|
list_workspaces | List all workspaces accessible to the authenticated user |
list_engines | List available Apache Flink engine versions for a workspace |
list_engines requires a workspace parameter (workspace ID).
Deployments
| Tool | Description |
|---|---|
list_deployments | List all deployments in a workspace |
get_deployment | Get full details of a deployment |
create_jar_deployment | Create a deployment from a JAR artifact |
create_sql_deployment | Deploy a SQL draft, file, or inline query |
create_python_deployment | Create a deployment from a Python artifact |
start_deployment | Start a job for a deployment |
stop_deployment | Stop the running job for a deployment |
delete_deployment | Delete a deployment |
create_sql_deployment parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
workspace | string | Yes | Workspace ID |
namespace | string | No | Namespace |
draft | string | No | Draft ID to deploy |
file | string | No | Path to SQL file |
query | string | No | Inline SQL query |
comment | string | No | Comment for draft metadata |
label | string[] | No | Labels in key=value format |
deployment_target | string | No | Deployment target name or ID |
skip_validation | boolean | No | Skip validation before deploy |
flink_configuration | string | No | Path to Flink config file (YAML or JSON) |
resources | string | No | Path to resources config file (YAML or JSON) |
engine | string | No | Engine version override |
mode | string | No | Processing mode: STREAM or BATCH |
Provide one of draft, file, or query.
stop_deployment parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
workspace | string | Yes | Workspace ID |
namespace | string | No | Namespace |
deployment_id | string | Yes | Deployment ID (UUID) |
stop_strategy | string | No | NONE, SAVEPOINT, or DRAIN (default: NONE) |
Drafts
| Tool | Description |
|---|---|
list_drafts | List SQL deployment drafts in a workspace |
get_draft | Get a draft by ID |
create_draft | Create a new SQL draft from a file or inline query |
validate_draft | Validate a SQL draft without deploying |
execute_draft | Execute a SQL draft, file, or inline query |
delete_draft | Delete a deployment draft |
create_draft parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
workspace | string | Yes | Workspace ID |
namespace | string | No | Namespace |
name | string | Yes | Draft name |
path | string | No | Path to SQL file |
query | string | No | Inline SQL query |
folder | string | No | Folder ID |
deployment_target | string | No | Deployment target ID |
engine | string | No | Engine version override |
mode | string | No | STREAM or BATCH |
Provide either path or query.
validate_draft parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
workspace | string | Yes | Workspace ID |
namespace | string | No | Namespace |
draft | string | No | Draft ID |
file | string | No | Path to SQL file |
query | string | No | Inline SQL query |
flink_configuration | string | No | Path to Flink config file |
engine | string | No | Engine version |
mode | string | No | STREAM or BATCH |
Returns: Validation result with valid flag and message.
Jobs
| Tool | Description |
|---|---|
list_jobs | List jobs for a deployment |
get_job | Get details for a specific job |
list_jobs parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
workspace | string | Yes | Workspace ID |
namespace | string | No | Namespace |
deployment_id | string | Yes | Deployment ID |
status | string | No | Filter by status: RUNNING, FAILED, CANCELLED, or FINISHED |
Task Managers
| Tool | Description |
|---|---|
list_task_managers | List task managers for a running job |
get_task_manager | Get detailed information about a task manager |
Use list_task_managers first to discover taskmanager_id values needed by log tools.
Logs
The log tools cover both running and stopped (archived) deployments. For running deployments, use the live log tools. For stopped deployments, list the archived logs first, then fetch their contents.
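The offset/length pattern used by the live log tools amounts to a client-side loop: fetch the total length once, then read fixed-size chunks until the log is fully assembled. The sketch below illustrates that loop only; fetch_chunk is a hypothetical stand-in for a get_jobmanager_log tool call, not vvctl code.

```python
def read_full_log(total_length, fetch_chunk, chunk_size=64 * 1024):
    """Assemble a log by repeated partial reads, mirroring the
    offset/length parameters of get_jobmanager_log."""
    parts = []
    offset = 0
    while offset < total_length:
        # Never request past the end of the log.
        length = min(chunk_size, total_length - offset)
        parts.append(fetch_chunk(offset=offset, length=length))
        offset += length
    return b"".join(parts)

# Stand-ins for the MCP tool calls; a real client would invoke
# get_jobmanager_log_length and then get_jobmanager_log instead.
SAMPLE = b"2024-01-01 INFO Starting job...\n" * 1000

def fake_fetch(offset, length):
    return SAMPLE[offset:offset + length]
```

The same loop works for TaskManager logs via get_taskmanager_log_length and get_taskmanager_log.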
Running deployment logs
| Tool | Description |
|---|---|
get_startup_log | Fetch the latest JobManager startup log for a deployment |
get_jobmanager_log | Get JobManager log content; supports partial reads via offset and length |
get_jobmanager_log_length | Get the total byte length of a JobManager log |
get_jobmanager_stdout | Get the stdout output of a JobManager |
get_taskmanager_log | Get a TaskManager log; supports partial reads via offset and length |
get_taskmanager_log_length | Get the total byte length of a TaskManager log |
get_taskmanager_stdout | Get the stdout output of a TaskManager |
Archived logs (stopped deployments)
| Tool | Description |
|---|---|
list_archived_jobmanager_logs | List archived JobManager log files for a stopped job |
get_archived_jobmanager_log | Get the contents of an archived JobManager log; supports pagination via page_size and page_index |
list_archived_taskmanagers | List archived TaskManagers for a stopped job |
list_archived_taskmanager_logs | List archived log files for a specific TaskManager |
get_archived_taskmanager_log | Get the contents of an archived TaskManager log; supports pagination via page_size and page_index |
Artifacts
| Tool | Description |
|---|---|
list_artifacts | List artifacts in a workspace |
get_artifact | Get metadata for an artifact by filename |
create_artifact | Upload a local artifact file to a workspace |
delete_artifact | Delete an artifact by filename |
Secrets
Secret values are never returned by any tool.
| Tool | Description |
|---|---|
list_secrets | List secrets in a workspace and namespace |
create_secret | Create a secret |
delete_secret | Delete a secret by name |
Resource Queues
| Tool | Description |
|---|---|
list_resource_queues | List resource queues for a workspace |
get_resource_queue | Get a resource queue by name |
create_resource_queue | Create a resource queue |
update_resource_queue | Update a resource queue's CPU allocation |
delete_resource_queue | Delete a resource queue |
Agents
| Tool | Description |
|---|---|
list_agents | List all registered agents |
get_agent | Get an agent by ID |
create_agent | Register a new agent |
show_agent_values | Get the Helm values YAML for an agent |
install_agent_plan | Generate a Helm install plan for deploying an agent to Kubernetes |
uninstall_agent_plan | Generate commands to uninstall an agent from Kubernetes |
delete_agent | Delete an agent by ID |
Configuration
These tools manage the local vvctl configuration file, which stores servers, users, and contexts (similar to kubectl config).
| Tool | Description |
|---|---|
view_config | Render the full configuration |
get_servers | List configured servers |
set_server | Add or update a server entry |
delete_server | Remove a server entry |
get_users | List configured users |
set_user | Add or update user credentials |
delete_user | Remove a user entry |
get_contexts | List configured contexts |
current_context | Get the active context |
use_context | Switch to a different context |
set_context | Create or update a context |
delete_context | Remove a context |
Common Workflows
Investigate a Failed Deployment
User: "My deployment abc123 in workspace ws-456 failed. What happened?"
The AI assistant will typically:
1. get_deployment(workspace="ws-456", deployment_id="abc123")
2. list_jobs(workspace="ws-456", deployment_id="abc123", status="FAILED")
3. get_startup_log(workspace="ws-456", deployment_id="abc123")
4. get_jobmanager_log(workspace="ws-456", job_id="<from step 2>")
Check Logs for a Running Deployment
User: "Show me the logs for deployment abc123"
The AI assistant will typically:
1. list_jobs(workspace="ws-456", deployment_id="abc123", status="RUNNING")
2. get_jobmanager_log(workspace="ws-456", job_id="<job_id>")
3. list_task_managers(workspace="ws-456", job_id="<job_id>")
4. get_taskmanager_log(workspace="ws-456", job_id="<job_id>", taskmanager_id="<tm_id>")
Review Logs for a Stopped Deployment
User: "Get me the logs from the last run of deployment abc123"
The AI assistant will typically:
1. list_jobs(workspace="ws-456", deployment_id="abc123")
2. list_archived_jobmanager_logs(workspace="ws-456", job_id="<job_id>")
3. get_archived_jobmanager_log(workspace="ws-456", job_id="<job_id>", log_name="<name>")
4. list_archived_taskmanagers(workspace="ws-456", job_id="<job_id>")
5. list_archived_taskmanager_logs(workspace="ws-456", job_id="<job_id>", taskmanager_id="<tm_id>")
6. get_archived_taskmanager_log(...)
Deploy a SQL Query
User: "Deploy this SQL to workspace ws-456: SELECT * FROM orders WHERE amount > 100"
The AI assistant will use one of these approaches:
Option A — Direct deployment:
create_sql_deployment(workspace="ws-456", query="SELECT * FROM orders WHERE amount > 100")
Option B — Draft, validate, then execute:
1. create_draft(workspace="ws-456", name="orders-filter", query="...")
2. validate_draft(workspace="ws-456", draft="<draft_id>")
3. execute_draft(workspace="ws-456", draft="<draft_id>")
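The draft-validate-execute sequence can also be sketched as client-side orchestration. Here call_tool is a hypothetical dispatcher standing in for the assistant's MCP tool invocation, and the "id", "valid", and "message" result fields are assumed response shapes, not documented ones.

```python
def deploy_via_draft(call_tool, workspace, name, query):
    """Create a draft, validate it, and execute it only if validation
    passes -- the Option B flow. `call_tool` stands in for an MCP tool
    invocation and is assumed to return parsed JSON results."""
    draft = call_tool("create_draft", workspace=workspace, name=name, query=query)
    result = call_tool("validate_draft", workspace=workspace, draft=draft["id"])
    if not result.get("valid"):
        # Surface the validator's message instead of deploying broken SQL.
        raise ValueError(f"validation failed: {result.get('message')}")
    return call_tool("execute_draft", workspace=workspace, draft=draft["id"])
```

Gating execution on the validation result is the point of Option B: a direct create_sql_deployment skips that safety check unless skip_validation is left off.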
Set Up a New Context
User: "Configure vvctl for our production environment"
The AI assistant will typically:
1. set_server(name="prod", host="https://api.ververica.com")
2. set_user(name="prod-user", token="<token>")
3. set_context(name="production", server="prod", user="prod-user")
4. use_context(name="production")
Tool Summary
| Domain | Count | Tools |
|---|---|---|
| Account | 3 | login, logout, show_profile |
| Workspaces | 2 | list_workspaces, list_engines |
| Deployments | 8 | list_deployments, get_deployment, create_jar_deployment, create_sql_deployment, create_python_deployment, start_deployment, stop_deployment, delete_deployment |
| Drafts | 6 | list_drafts, get_draft, create_draft, validate_draft, execute_draft, delete_draft |
| Jobs | 2 | list_jobs, get_job |
| Task Managers | 2 | list_task_managers, get_task_manager |
| Logs | 12 | get_startup_log, get_jobmanager_log, get_jobmanager_log_length, get_jobmanager_stdout, get_taskmanager_log, get_taskmanager_log_length, get_taskmanager_stdout, list_archived_jobmanager_logs, get_archived_jobmanager_log, list_archived_taskmanagers, list_archived_taskmanager_logs, get_archived_taskmanager_log |
| Artifacts | 4 | list_artifacts, get_artifact, create_artifact, delete_artifact |
| Secrets | 3 | list_secrets, create_secret, delete_secret |
| Resource Queues | 5 | list_resource_queues, get_resource_queue, create_resource_queue, update_resource_queue, delete_resource_queue |
| Agents | 7 | list_agents, get_agent, create_agent, show_agent_values, install_agent_plan, uninstall_agent_plan, delete_agent |
| Configuration | 12 | view_config, get_servers, set_server, delete_server, get_users, set_user, delete_user, get_contexts, current_context, use_context, set_context, delete_context |
| Total | 66 | |