
Conversation

Contributor

@jaideepr97 jaideepr97 commented Dec 1, 2025

What does this PR do?

Implements support for configuring static connectors in the stack via run.yaml. Major features of this PR:

  • Adds a new resource called connector
  • Connectors can be configured under registered_resources in run.yaml
  • The config lets stack admins specify a connector_id and server URL, along with headers and auth info via env vars if required
  • Adds connectors as a new internal API implementation
  • Implements the API surface outlined in feat(api): add readonly connectors API #4258
  • Updates the meta-reference agent provider to have access to the connectors API implementation and handles the plumbing
  • Updates the responses API MCP tool input to accept either server_url or connector_id and handles the resolution

Examples:

run.yaml config:

version: 2
image_name: llamastack-crimson
container_image: null
apis: ...
providers: ...
storage: ...
models: ...
...
registered_resources:
  connectors:
  - connector_id: kubernetes
    url: "http://localhost:8080/mcp"
    connector_type: mcp
...

API requests:

curl -s http://localhost:8321/v1alpha/connectors | jq 

Response:
{
  "data": [
    {
      "identifier": "kubernetes-mcp-server",
      "provider_resource_id": null,
      "provider_id": "builtin::connectors",
      "type": "connector",
      "connector_type": "mcp",
      "connector_id": "kubernetes",
      "url": "http://localhost:8080/mcp",
      "created_at": "2025-12-01T10:09:27.387048Z",
      "updated_at": "2025-12-01T10:09:27.387050Z",
      "server_name": "kubernetes-mcp-server",
      "server_label": null,
      "server_description": null,
      "tools": null,
      "registry_id": null
    }
  ]
}
curl -s http://localhost:8321/v1alpha/connectors/kubernetes | jq 

Response:
{
  "identifier": "kubernetes-mcp-server",
  "provider_resource_id": null,
  "provider_id": "builtin::connectors",
  "type": "connector",
  "connector_type": "mcp",
  "connector_id": "kubernetes",
  "url": "http://localhost:8080/mcp",
  "created_at": "2025-12-01T10:11:46.465087Z",
  "updated_at": "2025-12-01T10:11:46.465089Z",
  "server_name": "kubernetes-mcp-server",
  "server_label": null,
  "server_description": null,
  "tools": null,
  "registry_id": null
}
curl -s http://localhost:8321/v1alpha/connectors/kubernetes/tools/resources_list | jq 

Response:
{
  "toolgroup_id": null,
  "name": "resources_list",
  "description": "List Kubernetes resources and objects in the current cluster by providing their apiVersion and kind and optionally the namespace and label selector\n(common apiVersion and kind include: v1 Pod, v1 Service, v1 Node, apps/v1 Deployment, networking.k8s.io/v1 Ingress)",
  "input_schema": {
    "type": "object",
    "required": [
      "apiVersion",
      "kind"
    ],
    "properties": {
      "apiVersion": {
        "type": "string",
        "description": "apiVersion of the resources (examples of valid apiVersion are: v1, apps/v1, networking.k8s.io/v1)"
      },
      "kind": {
        "type": "string",
        "description": "kind of the resources (examples of valid kind are: Pod, Service, Deployment, Ingress)"
      },
      "labelSelector": {
        "type": "string",
        "description": "Optional Kubernetes label selector (e.g. 'app=myapp,env=prod' or 'app in (myapp,yourapp)'), use this option when you want to filter the pods by label",
        "pattern": "([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]"
      },
      "namespace": {
        "type": "string",
        "description": "Optional Namespace to retrieve the namespaced resources from (ignored in case of cluster scoped resources). If not provided, will list resources from all namespaces"
      }
    }
  },
  "output_schema": null,
  "metadata": {
    "endpoint": "http://localhost:8080/mcp"
  }
}
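
The same read-only endpoints can also be called from Python; a minimal sketch using the requests library against the locally running stack from the examples above (endpoint paths and field names taken from the responses shown):

import json

import requests

BASE_URL = "http://localhost:8321/v1alpha"

# List every connector registered via run.yaml (mirrors the first curl example)
resp = requests.get(f"{BASE_URL}/connectors", timeout=10)
resp.raise_for_status()
for connector in resp.json()["data"]:
    print(connector["connector_id"], "->", connector["url"])

# Fetch the schema of a single tool exposed by the "kubernetes" connector
# (mirrors the last curl example)
tool = requests.get(f"{BASE_URL}/connectors/kubernetes/tools/resources_list", timeout=10)
tool.raise_for_status()
print(json.dumps(tool.json()["input_schema"], indent=2))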

Client side usage example:

import os

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key=os.getenv("OPENAI_API_KEY"))

# args.model, MCP_LABEL and pretty_print_result are defined in the surrounding test script
respB = client.responses.create(
    model=args.model,
    tools=[
        {
            "type": "mcp",
            "server_label": MCP_LABEL,
            "connector_id": "kubernetes",
            "require_approval": "never",
        }
    ],
    input=[
        {
            "role": "user",
            "content": (
                "List what kubernetes MCP tools you are allowed to use in this context. "
                "Tell me something about the cluster. Try to call only the MCP tools that "
                "you have access to, and tell me which tools you called. If none are "
                "available, explain why."
            ),
        }
    ],
)
pretty_print_result("B: no restriction at the MCP tool (server) level, tool choice is mcp with server label and tool name", respB)

output:

=== B: no restriction at the MCP tool (server) level, tool choice is mcp with server label and tool name ===
Output text:
 I don't have access to user's context (e.g., cluster details). However, the available MCP tools for Kubernetes are:  
1. **configuration_view** - Shows Kubernetes configuration.  
2. **events_list** - Lists Kubernetes events.  
3. **namespaces_list** - Lists Kubernetes namespaces.  
4. **nodes_log** - Retrieves node logs.  
5. **nodes_stats_summary** - Gets node summary stats.  
6. **nodes_top** - Shows CPU and memory usage.  
7. **pods_list** - Lists all pods.  
8. **pods_list_in_namespace** - Lists pods in a namespace.  
9. **pods_delete** - Deletes pods.  
10. **pods_run** - Runs pods.  
11. **pods_get** - Retrieves pods.  
12. **resources_list** - Lists cluster resources.  

These tools help manage cluster configurations, track events, monitor nodes, and interact with pods. Let me know if you'd like to use specific ones!
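
For comparison, the MCP tool input can still take the server URL directly instead of a connector_id; connector_id is simply resolved server-side to the URL configured in run.yaml. A minimal sketch, assuming the MCP server from the run.yaml example above and a placeholder model id:

import os

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key=os.getenv("OPENAI_API_KEY"))

# Same request shape as the example above, but pointing at the MCP server URL
# directly rather than the registered connector_id
resp = client.responses.create(
    model="<your-model-id>",  # placeholder; use a model served by your stack
    tools=[
        {
            "type": "mcp",
            "server_label": "kubernetes-mcp-server",
            "server_url": "http://localhost:8080/mcp",
            "require_approval": "never",
        }
    ],
    input=[{"role": "user", "content": "List the Kubernetes namespaces in the cluster."}],
)
print(resp.output_text)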

Closes #4186 and #4061 (partially)

Test Plan

pending

NOTE: This PR builds on top of #4258 and therefore also contains changes from it. It should only be reviewed after #4258.

@meta-cla meta-cla bot added the CLA Signed label Dec 1, 2025
@jaideepr97 jaideepr97 changed the title from "feat: Implement connector support via static configuration" to "feat(api): Implement connector support via static configuration" Dec 1, 2025
Collaborator

@cdoern cdoern left a comment

One comment so far; the implementation looks good, especially compared to prompts, which follows a similar structure.

    Registry,
    ToolDef,
)
from llama_stack_api.common.errors import (
Collaborator

Include this in the `from llama_stack_api import ...`. Or, if we missed these, please add them to llama_stack_api's `__init__.py`. Thanks!
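
For illustration, the suggested form might look like the following (assuming these symbols are, or will be, re-exported from the package root):

# Hypothetical: import from the package root rather than the submodule,
# assuming llama_stack_api/__init__.py re-exports these names
from llama_stack_api import Registry, ToolDef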

Contributor Author

updated, thanks!

Contributor

github-actions bot commented Dec 1, 2025

✱ Stainless preview builds

This PR will update the llama-stack-client SDKs with the following commit message.

feat(api): Implement connector support via static configuration

Edit this comment to update it. It will appear in the SDK's changelogs.

⚠️ llama-stack-client-node studio · code · diff

There was a regression in your SDK.
generate ⚠️ · build ✅ · lint ✅ · test ✅

npm install https://pkg.stainless.com/s/llama-stack-client-node/781a645da9f72a1bd56b014b65ee026c059c5d39/dist.tar.gz
New diagnostics (6 warnings)
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools/{tool_name}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries/{registry_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ llama-stack-client-kotlin studio · code · diff

There was a regression in your SDK.
generate ⚠️ · lint ✅ · test ❗

New diagnostics (6 warnings)
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools/{tool_name}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries/{registry_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ llama-stack-client-go studio · code · diff

There was a regression in your SDK.
generate ⚠️ · lint ❗ · test ❗

go get github.com/stainless-sdks/llama-stack-client-go@4fd9c9d8592f61b4b257445062febc5136e12f68
New diagnostics (6 warnings)
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools/{tool_name}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries/{registry_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ llama-stack-client-python studio · code · diff

There was a regression in your SDK.
generate ⚠️ · build ⏳ · lint ⏳ · test ⏳

New diagnostics (6 warnings)
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools/{tool_name}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries/{registry_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.

This comment is auto-generated by GitHub Actions and is automatically kept up to date as you push.
Last updated: 2025-12-01 20:52:01 UTC

# Resolve connector_id to server_url if provided
if mcp_tool.connector_id and not mcp_tool.server_url:
    if self.connectors_api is None:
        raise ValueError("Connectors API not available to resolve connector_id")
Collaborator

assuming this can happen (can it?), what will a user do with the information?

ValueError -> HTTP 400, which indicates the client did something wrong and should correct it.

imagine they're using the openai-python sdk. there's no mention of a connectors api.

are there going to be external/remote connector id providers?

if not, this is actually a 500 internal server error that should be caught during startup.
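
To make the 400-vs-500 distinction concrete, here is a hypothetical sketch of the startup-time check this comment suggests (class and method names are illustrative, not the actual provider code):

# Hypothetical sketch: if connector_id resolution can never work without the
# connectors API, treat a missing dependency as a stack misconfiguration and
# fail at startup, instead of returning a 400 to a client that did nothing wrong
class AgentsProviderSketch:
    def __init__(self, connectors_api=None):
        self.connectors_api = connectors_api

    async def initialize(self) -> None:
        if self.connectors_api is None:
            raise RuntimeError(
                "connectors API is required to resolve connector_id but is not configured"
            )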


mergify bot commented Dec 4, 2025

This pull request has merge conflicts that must be resolved before it can be merged. @jaideepr97 please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Dec 4, 2025

Labels

CLA Signed · needs-rebase

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Introduce connector <-> MCP URL mapping

3 participants