refactor: add LANGCHAIN_GRAPH_RECURSION_LIMIT env to avoid the error … #4513
Conversation
…"GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the recursion_limit config key. For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT "
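For context, LangGraph raises this error when the number of graph steps (agent/tool rounds) exceeds the configured limit. A minimal stand-alone sketch of that behaviour — all names here are illustrative stand-ins, not LangGraph's internals:

```python
class GraphRecursionError(RuntimeError):
    """Stand-in for langgraph.errors.GraphRecursionError."""


def run_agent_loop(tool_rounds: int, recursion_limit: int = 25) -> int:
    """Toy model: each agent->tool round consumes one graph step.

    Long tool-call chains exhaust the default limit of 25 even though
    the agent would eventually stop on its own, which is why making
    the limit configurable helps.
    """
    steps = 0
    for _ in range(tool_rounds):
        steps += 1
        if steps > recursion_limit:
            raise GraphRecursionError(
                f"Recursion limit of {recursion_limit} reached "
                "without hitting a stop condition."
            )
    return steps
```

With the default limit, 30 tool rounds fail; raising the limit to 50 lets the same run complete.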
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.
[APPROVALNOTIFIER] This PR is NOT APPROVED. Needs approval from an approver in each of these files.
```python
agent = create_react_agent(chat_model, tools).configure(recursion_limit=(int(CONFIG.get("LANGCHAIN_GRAPH_RECURSION_LIMIT", '25'))))
response = agent.astream({"messages": message_list}, stream_mode='messages')
```
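Note that compiled LangGraph graphs take `recursion_limit` as a run-time config key rather than through a `.configure(...)` method. A hedged sketch of reading the proposed environment variable with a safe fallback — the helper name is hypothetical, not part of the PR:

```python
import os


def get_recursion_limit(default: int = 25) -> int:
    """Read LANGCHAIN_GRAPH_RECURSION_LIMIT, falling back to `default`
    when the variable is unset or not a valid integer."""
    raw = os.environ.get("LANGCHAIN_GRAPH_RECURSION_LIMIT", "")
    try:
        return int(raw)
    except ValueError:
        return default


# The value is then passed per invocation (assuming a compiled LangGraph agent):
# response = agent.astream({"messages": message_list},
#                          config={"recursion_limit": get_recursion_limit()},
#                          stream_mode="messages")
```

Guarding against a malformed value keeps a typo in the environment from crashing the stream setup.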
```python
# Store tool-call information (keyed by tool_id) and aggregate chunks by index
```
The code looks generally structured correctly, but there are a few areas where improvements can be made:

- Imports: The `maxkb.const` module is imported, which suggests that you have a custom module named `const.py`. Ensure this file exists and contains the necessary configuration variables.
- Tool Configuration: In the `_yield_mcp_response` function, there's an attempt to configure the ReAct agent with a maximum recursion limit fetched from the `CONFIG` dictionary using `CONFIG.get("LANGCHAIN_GRAPH_RECURSION_LIMIT", '25')`. Make sure that `"LANGCHAIN_GRAPH_RECURSION_LIMIT"` exists in your `CONFIG` settings, or default it appropriately if not set.
- Code Comments: Add comments describing what each section of the code does, especially around the recursion-limiting logic and how the response is being streamed.
- Error Handling: Consider adding more detailed error handling to manage cases such as failed tool retrieval from MCP servers or issues during streaming.

Here's a revised version of the key part of your code with these suggestions:
```python
import json
import re
import threading

# Import your CONFIG constants from the appropriate module
from maxkb.const import CONFIG
from django.http import StreamingHttpResponse
from langchain_core.messages import BaseMessageChunk, BaseMessage, ToolMessage, AIMessageChunk
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def _yield_mcp_response(chat_model, message_list, mcp_servers, mcp_output_queue):
    try:
        # Initialize the MultiServerMCPClient from the configured MCP servers
        client = MultiServerMCPClient(json.loads(mcp_servers))
        # Fetch available tools from the MCP servers
        tools = await client.get_tools()
        # Create a ReAct agent; the recursion limit is supplied per run below
        agent = create_react_agent(chat_model, tools)
        # Stream responses from the agent, raising the recursion limit via the
        # run config so long tool-call chains do not fail early
        response = agent.astream(
            {"messages": message_list},
            config={
                "recursion_limit": int(CONFIG.get("LANGCHAIN_GRAPH_RECURSION_LIMIT", '25'))
            },
            stream_mode='messages',
        )
        # Process each (chunk, metadata) pair from the streaming generator
        async for chunk, _metadata in response:
            if isinstance(chunk, BaseMessageChunk):
                mcp_output_queue.put(chunk)
    except Exception as exc:
        # Surface tool-retrieval or streaming failures to the consumer
        mcp_output_queue.put(exc)
```

This revision ensures clarity about the purpose of variable assignments, provides adequate comments explaining critical parts of the code, and includes default values and error handling that could help diagnose problems when running the application.
do not merge
What this PR does / why we need it?
Avoids the error "GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the recursion_limit config key. For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT" by making the limit configurable through the LANGCHAIN_GRAPH_RECURSION_LIMIT environment variable.