feat(api): Implement connector support via static configuration #4263
Conversation
Force-pushed 174b8e9 to 5331e80 (compare)
cdoern left a comment:
One comment so far; the implementation looks good, especially compared to prompts, which follows a similar structure.
```python
    Registry,
    ToolDef,
)
from llama_stack_api.common.errors import (
```
Include these in the `from llama_stack_api import ...` block. Or, if we missed them, please add them to `llama_stack_api`'s `__init__.py`. Thanks!
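As a sketch of the suggested shape (the specific error classes are truncated in the quoted diff, so `SomeError` below is a placeholder):

```python
# Instead of reaching into the submodule:
#   from llama_stack_api.common.errors import SomeError
# re-export the error from llama_stack_api/__init__.py and import it
# alongside the other names from the package root:
from llama_stack_api import Registry, SomeError, ToolDef
```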
Updated, thanks!
Force-pushed 5331e80 to 4fa016e (compare)
Closes llamastack#4235 and llamastack#4061 (partially)
Signed-off-by: Jaideep Rao <[email protected]>
Force-pushed 4fa016e to 01a76e0 (compare)
✱ Stainless preview builds
This PR will update the SDKs listed below. Edit this comment to update it. It will appear in the SDK's changelogs.
⚠️ llama-stack-client-kotlin studio · code · diff
There was a regression in your SDK.
generate ⚠️ → lint ✅ → test ❗
New diagnostics (6 warnings):
- ⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
- ⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools/{tool_name}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
- ⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries/{registry_id}` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
- ⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/{connector_id}/tools` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
- ⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
- ⚠️ Endpoint/NotConfigured: `get /v1alpha/connectors/registries` exists in the OpenAPI spec, but isn't specified in the Stainless config, so code will not be generated for it.
⚠️ llama-stack-client-go studio · code · diff
There was a regression in your SDK.
generate ⚠️ → lint ❗ → test ❗
go get github.com/stainless-sdks/llama-stack-client-go@4fd9c9d8592f61b4b257445062febc5136e12f68
New diagnostics (6 warnings): the same six Endpoint/NotConfigured warnings as above.
⚠️ llama-stack-client-python studio · code · diff
There was a regression in your SDK.
generate ⚠️ → build ⏳ → lint ⏳ → test ⏳
New diagnostics (6 warnings): the same six Endpoint/NotConfigured warnings as above.
This comment is auto-generated by GitHub Actions and is automatically kept up to date as you push.
Last updated: 2025-12-01 20:52:01 UTC
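All six warnings point at the same fix: the new connector endpoints need entries in the Stainless config so SDK code is generated for them. A sketch, assuming the usual `resources`/`methods` layout of a Stainless config; the resource and method names are illustrative, only the endpoint paths come from the warnings above:

```yaml
# Stainless config (sketch; names are illustrative)
resources:
  connectors:
    methods:
      list: get /v1alpha/connectors
      retrieve: get /v1alpha/connectors/{connector_id}
      list_tools: get /v1alpha/connectors/{connector_id}/tools
      get_tool: get /v1alpha/connectors/{connector_id}/tools/{tool_name}
    subresources:
      registries:
        methods:
          list: get /v1alpha/connectors/registries
          retrieve: get /v1alpha/connectors/registries/{registry_id}
```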
```python
# Resolve connector_id to server_url if provided
if mcp_tool.connector_id and not mcp_tool.server_url:
    if self.connectors_api is None:
        raise ValueError("Connectors API not available to resolve connector_id")
```
Assuming this can happen (can it?), what will a user do with the information?
ValueError -> HTTP 400, which indicates the client did something wrong and should correct it.
Imagine they're using the openai-python SDK; there's no mention of a connectors API there.
Are there going to be external/remote connector-id providers?
If not, this is actually a 500 internal server error that should be caught during startup.
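A minimal sketch of the distinction being drawn, assuming a FastAPI-style server. The `connectors_api` attribute mirrors the quoted diff; `get_connector` and everything else here is illustrative:

```python
from fastapi import HTTPException

async def resolve_server_url(self, mcp_tool) -> str:
    # Server misconfiguration: the connectors API should have been wired up
    # at startup, so its absence is a 500 (or, better, a startup-time check),
    # not something the client can correct.
    if self.connectors_api is None:
        raise RuntimeError("connectors API not configured")  # -> HTTP 500

    # Client error: the caller referenced a connector that doesn't exist,
    # so a 400 with an actionable message is appropriate.
    connector = await self.connectors_api.get_connector(mcp_tool.connector_id)
    if connector is None:
        raise HTTPException(
            status_code=400,
            detail=f"Unknown connector_id: {mcp_tool.connector_id!r}",
        )
    return connector.server_url
```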
This pull request has merge conflicts that must be resolved before it can be merged. @jaideepr97 please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
What does this PR do?
Implements support for configuring static connectors in the stack via run.yaml. Major features of this PR:
- Connectors can be registered statically under `registered_resources` in the run.yaml
- The MCP tool config accepts either `server_url` or `connector_id` and handles resolution between them

Examples:
run.yaml config:
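A sketch of a static connector registration, assuming the `registered_resources` key from the feature list above; field values and the `connectors` sub-key are illustrative, only `connector_id` and `server_url` come from this PR's description:

```yaml
# run.yaml (sketch)
registered_resources:
  connectors:
    - connector_id: github-mcp
      server_url: https://mcp.example.com/github
```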
API requests:
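The connector endpoints added by this PR (as surfaced in the Stainless diagnostics above) can be exercised directly. A sketch using `httpx`, assuming a stack server on the default port 8321 and the `github-mcp` connector from the run.yaml sketch:

```python
import httpx

BASE = "http://localhost:8321"

# List all registered connectors and connector registries.
print(httpx.get(f"{BASE}/v1alpha/connectors").json())
print(httpx.get(f"{BASE}/v1alpha/connectors/registries").json())

# Inspect one connector and the tools it exposes.
print(httpx.get(f"{BASE}/v1alpha/connectors/github-mcp").json())
print(httpx.get(f"{BASE}/v1alpha/connectors/github-mcp/tools").json())
```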
Client side usage example:
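A sketch of how a client might reference a connector by id rather than by URL, via the OpenAI-compatible Responses API. The `connector_id` field on the MCP tool is the new piece from this PR; the base URL, model, and prompt are illustrative:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

response = client.responses.create(
    model="llama3.2:3b",
    input="List my open pull requests",
    tools=[
        {
            "type": "mcp",
            "server_label": "github",
            # New in this PR: pass a connector_id instead of server_url;
            # the stack resolves it to the registered connector's URL.
            "connector_id": "github-mcp",
        }
    ],
)
print(response.output_text)
```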
output:
Closes #4186 and #4061 (partially)
Test Plan
pending
NOTE: this PR builds on top of #4258 and therefore also contains changes from it. This PR should only be reviewed after #4258.