
Conversation

@staugust (Collaborator) commented Nov 24, 2025

Motivation

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Summary by Sourcery

Enhancements:

  • Introduce a helper to compute per-rank max_new_tokens based on the DCP world size and apply it throughout the scheduler’s token budgeting logic to improve scalability for larger batches.

sourcery-ai bot commented Nov 24, 2025


Reviewer's Guide

This PR introduces a helper to compute per-rank max_new_tokens for DCP and wires it into the scheduling policy so token budgets, prefill budgets, and preemption calculations are based on the distributed world size, enabling larger effective batch sizes per node.

Updated class diagram for DCP-aware scheduling policy

classDiagram
    class SchedulePolicy {
      float new_token_ratio
      int rem_total_tokens
      int rem_chunk_tokens
      int _get_running_request_total_token_offset(req: Req)
      void add_chunked_req(req: Req)
      void add_req_state(r: Req, insert_sort: bool)
      void add_one_req(req: Req, has_chunked_req: bool)
      bool preempt_to_schedule(req: Req, server_args: ServerArgs)
    }

    class Req {
      SamplingParams sampling_params
      list[int] output_ids
      int extend_input_len
      list[int] origin_input_ids
    }

    class SamplingParams {
      int max_new_tokens
      bool ignore_eos
    }

    class ServerArgs {
    }
    %% ...

    class DCPParallelState {
      int get_dcp_world_size()
    }

    class ScheduleHelpers {
      int compute_dcp_local_max_new_tokens(tokens: int)
    }

    SchedulePolicy --> Req : uses
    Req --> SamplingParams : has
    SchedulePolicy --> ServerArgs : uses
    SchedulePolicy ..> ScheduleHelpers : calls
    ScheduleHelpers ..> DCPParallelState : calls
```

Flow diagram for DCP-local max_new_tokens computation in scheduling

```mermaid
flowchart TD
    A["Receive request with global max_new_tokens from SamplingParams"]
    B["Call compute_dcp_local_max_new_tokens(max_new_tokens)"]
    C["Inside compute_dcp_local_max_new_tokens: world_size = get_dcp_world_size()"]
    D["Compute local_max = (tokens + world_size - 1) // world_size"]
    E["Return local_max to scheduling policy"]
    F["Clip local_max with CLIP_MAX_NEW_TOKENS"]
    G["Use clipped local_max in token budget, prefill budget, and preemption calculations"]

    A --> B
    B --> C
    C --> D
    D --> E
    E --> F
    F --> G
```
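Below is a minimal, self-contained sketch of the computation described in the flow above. It is illustrative only: `get_dcp_world_size` is stubbed out, and the clip value is a placeholder rather than the constant used in SGLang.

```python
# Sketch of the per-rank max_new_tokens computation (illustration, not SGLang's code).
CLIP_MAX_NEW_TOKENS = 4096  # placeholder clip value


def get_dcp_world_size() -> int:
    # Stand-in for sglang.srt.distributed.parallel_state.get_dcp_world_size.
    return 8


def compute_dcp_local_max_new_tokens(tokens: int) -> int:
    world_size = get_dcp_world_size()
    # Ceiling division: every rank gets an equal share of the budget, rounded up.
    return (tokens + world_size - 1) // world_size


local_max = compute_dcp_local_max_new_tokens(1024)  # 128 with 8 DCP ranks
clipped = min(local_max, CLIP_MAX_NEW_TOKENS)       # value fed into the token budget
print(local_max, clipped)
```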

File-Level Changes

Change: Introduce DCP-aware max_new_tokens helper and use it in all scheduler token budget calculations.
Details:
  • Add compute_dcp_local_max_new_tokens that divides a global token budget by the DCP world size with ceiling semantics.
  • Import get_dcp_world_size in the scheduling policy to support DCP-aware computations.
  • Replace direct uses of sampling_params.max_new_tokens (and derived values) in running token offset, prefill budget, request state, and preemption calculations with the DCP-local max_new_tokens value, still clamped by CLIP_MAX_NEW_TOKENS where applicable.
Files: python/sglang/srt/managers/schedule_policy.py

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@gemini-code-assist

Summary of Changes

Hello @staugust, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the SGLang scheduling policy to better support distributed collective parallelism (DCP) by adjusting how max_new_tokens is handled. By introducing a mechanism to compute local max_new_tokens for each distributed process, it aims to facilitate the use of larger overall batch sizes during inference, thereby improving efficiency in distributed setups.

Highlights

  • Distributed Token Calculation: Introduced a new utility function, compute_dcp_local_max_new_tokens, to calculate the maximum number of new tokens a local process should generate in a Distributed Collective Parallelism (DCP) setup. This function divides the total max_new_tokens by the DCP world size.
  • Integration into Scheduling Policy: Integrated the compute_dcp_local_max_new_tokens function across various parts of the scheduling policy, including functions like _get_running_request_total_token_offset, add_chunked_req, add_req_state, _update_prefill_budget, add_one_req, and preempt_to_schedule. This ensures that max_new_tokens values are correctly adjusted for distributed environments.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • Consider clamping non-positive values in compute_dcp_local_max_new_tokens (e.g., max(tokens, 0)) before applying the division to avoid surprising negative results when max_new_tokens - len(output_ids) is negative in edge cases.
  • compute_dcp_local_max_new_tokens is called multiple times in hot scheduling paths; you might want to cache the DCP world size (or the local max_new_tokens) instead of recomputing it on every call to reduce overhead.
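A minimal sketch of how both review points above could be addressed together: clamping negative inputs and caching the world size. The names and the caching strategy are assumptions for illustration, not the PR's actual code; `get_dcp_world_size` is stubbed here.

```python
# Illustrative hardening of the helper (not the PR's implementation).
from functools import lru_cache


@lru_cache(maxsize=1)
def _cached_dcp_world_size() -> int:
    # Stand-in for get_dcp_world_size(); assumed stable for the process lifetime,
    # so it is safe to memoize instead of recomputing on every scheduling call.
    return 8


def compute_dcp_local_max_new_tokens(tokens: int) -> int:
    # Clamp so over-generated requests (negative remaining budget) yield 0
    # rather than a negative per-rank budget.
    tokens = max(tokens, 0)
    world_size = _cached_dcp_world_size()
    return (tokens + world_size - 1) // world_size


print(compute_dcp_local_max_new_tokens(-5))    # 0 after clamping
print(compute_dcp_local_max_new_tokens(1024))  # 128 with a world size of 8
```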
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Consider clamping non-positive values in compute_dcp_local_max_new_tokens (e.g., max(tokens, 0)) before applying the division to avoid surprising negative results when max_new_tokens - len(output_ids) is negative in edge cases.
- compute_dcp_local_max_new_tokens is called multiple times in hot scheduling paths; you might want to cache the DCP world size (or the local max_new_tokens) instead of recomputing it on every call to reduce overhead.

## Individual Comments

### Comment 1
<location> `python/sglang/srt/managers/schedule_policy.py:34-35` </location>
<code_context>
 from sglang.srt.server_args import ServerArgs
+from sglang.srt.distributed.parallel_state import get_dcp_world_size
+
+def compute_dcp_local_max_new_tokens(tokens: int):
+    return (tokens + get_dcp_world_size() -1) // get_dcp_world_size()
+

</code_context>

<issue_to_address>
**issue (bug_risk):** Guard against negative token counts passed into compute_dcp_local_max_new_tokens.

Some callers can pass a negative value here (e.g., `req.sampling_params.max_new_tokens - len(req.output_ids)` once `len(req.output_ids)` exceeds `max_new_tokens`). That makes `(tokens + world_size - 1) // world_size` negative and propagates bad values into scheduling/offset calculations. Consider clamping `tokens` to a minimum of 0 at the start of this helper (e.g., `tokens = max(tokens, 0)`) to avoid treating over-generated requests as having negative capacity.
</issue_to_address>

### Comment 2
<location> `python/sglang/srt/managers/schedule_policy.py:452-454` </location>
<code_context>
                min(compute_dcp_local_max_new_tokens(req.sampling_params.max_new_tokens), CLIP_MAX_NEW_TOKENS)
                if not truncated
                else 0

</code_context>

<issue_to_address>
**suggestion (code-quality):** Swap if/else branches of if expression to remove negation ([`swap-if-expression`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/swap-if-expression))

```suggestion
                0 if truncated else min(compute_dcp_local_max_new_tokens(req.sampling_params.max_new_tokens), CLIP_MAX_NEW_TOKENS)

```

<br/><details><summary>Explanation</summary>Negated conditions are more difficult to read than positive ones, so it is best
to avoid them where we can. By swapping the `if` and `else` conditions around we
can invert the condition and make it positive.
</details>
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines +34 to +35
```python
def compute_dcp_local_max_new_tokens(tokens: int):
    return (tokens + get_dcp_world_size() -1) // get_dcp_world_size()
```

issue (bug_risk): Guard against negative token counts passed into compute_dcp_local_max_new_tokens.

Some callers can pass a negative value here (e.g., req.sampling_params.max_new_tokens - len(req.output_ids) once len(req.output_ids) exceeds max_new_tokens). That makes (tokens + world_size - 1) // world_size negative and propagates bad values into scheduling/offset calculations. Consider clamping tokens to a minimum of 0 at the start of this helper (e.g., tokens = max(tokens, 0)) to avoid treating over-generated requests as having negative capacity.

Comment on lines +452 to 454
```python
min(compute_dcp_local_max_new_tokens(req.sampling_params.max_new_tokens), CLIP_MAX_NEW_TOKENS)
if not truncated
else 0
```

suggestion (code-quality): Swap if/else branches of if expression to remove negation (swap-if-expression)

Suggested change:

```diff
-min(compute_dcp_local_max_new_tokens(req.sampling_params.max_new_tokens), CLIP_MAX_NEW_TOKENS)
-if not truncated
-else 0
+0 if truncated else min(compute_dcp_local_max_new_tokens(req.sampling_params.max_new_tokens), CLIP_MAX_NEW_TOKENS)
```

Explanation: Negated conditions are more difficult to read than positive ones, so it is best to avoid them where we can. By swapping the if and else conditions around we can invert the condition and make it positive.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a helper function to compute per-rank max_new_tokens for Distributed Context Parallelism (DCP) and applies this logic to the token budgeting in the scheduler. The overall approach is sound and the changes are mostly correct. However, I've identified a critical bug in the calculation of tokens_left within the add_req_state method, which could lead to incorrect memory estimations. I have also provided a suggestion to improve the maintainability of the new helper function. Addressing these points will ensure the stability and correctness of the token budgeting logic.

Comment on lines +487 to 489

```python
tokens_left = compute_dcp_local_max_new_tokens(r.sampling_params.max_new_tokens) * new_token_ratio - len(
    r.output_ids
)
```


critical

The calculation for tokens_left appears to be incorrect. The original logic is budgeted_tokens - generated_tokens. With DCP, this should be local_budgeted_tokens - local_generated_tokens or more accurately ceil((budgeted_tokens - generated_tokens) / dcp_world_size). The current implementation ceil(budgeted_tokens / dcp_ws) * ratio - global_generated_tokens mixes local and global token counts, which will lead to incorrect memory estimation. This could cause out-of-memory errors or underutilization of resources.

Suggested change:

```diff
-tokens_left = compute_dcp_local_max_new_tokens(r.sampling_params.max_new_tokens) * new_token_ratio - len(
-    r.output_ids
-)
+tokens_left = compute_dcp_local_max_new_tokens(
+    int(r.sampling_params.max_new_tokens * new_token_ratio) - len(r.output_ids)
+)
```
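To make the difference concrete, here is a small worked example with hypothetical numbers (1024 max_new_tokens, 600 tokens already generated, ratio 1.0, DCP world size 8); the values are illustrative only.

```python
# Hypothetical numbers illustrating why the two formulas diverge.
import math

max_new_tokens, generated, new_token_ratio, dcp_ws = 1024, 600, 1.0, 8

# Formula in the PR: split the global budget per rank first, then subtract the
# globally generated tokens -> mixes local and global counts.
pr_tokens_left = math.ceil(max_new_tokens / dcp_ws) * new_token_ratio - generated
print(pr_tokens_left)  # 128 - 600 = -472

# Formula in the suggestion: compute the remaining global budget first, then
# split that remainder per rank.
suggested_tokens_left = math.ceil(
    (int(max_new_tokens * new_token_ratio) - generated) / dcp_ws
)
print(suggested_tokens_left)  # ceil(424 / 8) = 53
```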

```python
from sglang.srt.distributed.parallel_state import get_dcp_world_size

def compute_dcp_local_max_new_tokens(tokens: int):
    return (tokens + get_dcp_world_size() -1) // get_dcp_world_size()
```


medium

For better readability and to avoid potential side effects if get_dcp_world_size() were to become more complex in the future, it's good practice to call it only once and store its result in a local variable.

Suggested change:

```diff
-    return (tokens + get_dcp_world_size() -1) // get_dcp_world_size()
+    dcp_world_size = get_dcp_world_size()
+    return (tokens + dcp_world_size - 1) // dcp_world_size
```

@staugust (Collaborator, Author) commented Nov 24, 2025

With tp8, DeepSeek-V3.1's maximum batch size is 19; with DCP it can go up to 64. No benchmark with a higher max_concurrency was run.

When the batch size is pushed to 64, QPS improves noticeably, and overall throughput increases by (0.77 / 0.47 - 1) = 63.83%.

| method | max_running_reqs | batch_size | isl | osl | num_reqs | duration | qps | prefill tok/s | decode tok/s | mean ttft | p50 ttft | p99 ttft | mean tpot | p50 tpot | p99 tpot |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tp8 | 32 | 24 | 4096 | 1024 | 120 | 254.15 | 0.47 | 1934 | 483.5 | 5765.11 | 5693.46 | 8081.72 | 44.04 | 43.835 | 48.27 |
| tp8+dcp8 | 32 | 24 | 4096 | 1024 | 120 | 262.83 | 0.46 | 1870.13 | 467.53 | 14681.7 | 6270.54 | 44116.27 | 33.69 | 33.98 | 36.86 |
| tp8+dcp8 | 96 | 64 | 4096 | 1024 | 320 | 414.19 | 0.77 | 3164.56 | 791.14 | 12538.46 | 12302.4 | 21283.13 | 68.79 | 68.85 | 79.03 |

Launch command:

```bash
NCCL_DEBUG=WARN \
PYTHONUNBUFFERED=1 \
TORCHINDUCTOR_FX_GRAPH_CACHE=1 \
TORCHINDUCTOR_AUTOGRAD_CACHE=1 \
SGLANG_ENABLE_JIT_DEEPGEMM=1 \
SGLANG_DISABLE_TP_MEMORY_INBALANCE_CHECK=1 \
TORCHINDUCTOR_CACHE_DIR=/home/admin/inductor_root_cache \
    nohup python3 -m sglang.launch_server \
        --model-path ${MODEL} \
        --host 0.0.0.0 \
        --port 8188 \
        --dtype auto \
        --mem-fraction-static 0.92 \
        --tp-size 8 \
        --max-running-requests 32 \
        --cuda-graph-max-bs 32 \
        --trust-remote-code \
        --enable-cache-report \
        --quantization fp8 \
        --log-level info \
        --chunked-prefill-size -1 \
        --context-length 65536 \
        --disable-radix-cache \
        --attention-backend flashinfer
```
