chore(profiling): stack sampler cpu accuracy test for short-lived functions #15518
base: main
Conversation
Bootstrap import analysis

Comparison of import times between this PR and base.

Summary
- The average import time from this PR is: 243 ± 2 ms.
- The average import time from base is: 245 ± 2 ms.
- The import time difference between this PR and base is: -2.6 ± 0.08 ms.

Import time breakdown
The following import paths have shrunk:
Performance SLOs

Comparing candidate taegyunkim/cpu-loop-accuracy (a94f070) with baseline main (f643ad6).

📈 Performance Regressions (2 suites)
- iastaspectsospath - 24/24 benchmarks within SLO; flagged regression: ospathbasename_aspect time +25.2% vs baseline (5.163µs, SLO <10.000µs).
- telemetryaddmetric - 30/30 benchmarks within SLO; flagged regression: 1-count-metric-1-times time +15.0% vs baseline (3.387µs, SLO <20.000µs).

🟡 Near SLO Breach (14 suites), including coreapiscenario - 10/10 (1 unstable).

[Per-benchmark time and memory figures omitted; every benchmark listed in the regression suites passed its time and memory SLO.]
```python
n = 1
while True:
    start = time.thread_time_ns()
    cpu_loop(n)
    dt = time.thread_time_ns() - start
    if dt > cpu_nano:
        break
    n *= 2
```
This is a little cleaner / easier to reason about
Suggested change:

```python
n = 1
dt = 0
while dt <= cpu_nano:
    start = time.thread_time_ns()
    cpu_loop(n)
    dt = time.thread_time_ns() - start
    n *= 2
```
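Driving the loop off `dt <= cpu_nano` keeps the exit condition in one place and removes the explicit `break`, which is what makes the rewrite easier to follow.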
```python
# If actual time is more than 20% off, adjust proportionally
if actual_time_ns > 0:
    ratio = cpu_nano / actual_time_ns
    target_iterations = int(target_iterations * ratio)

return int(target_iterations)
```
Is this comment still accurate?
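If the 20% threshold described in the comment is still intended, the guard might look like the sketch below (illustrative only; alternatively, the comment could be updated to match the unconditional rescaling above):

```python
# Rescale only when the measured time is more than 20% off the target.
if actual_time_ns > 0 and abs(actual_time_ns - cpu_nano) / cpu_nano > 0.2:
    target_iterations = int(target_iterations * cpu_nano / actual_time_ns)
```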
```python
p_cal.stop(flush=True)

# Remove the pprof file created during calibration
pprof_file = os.environ["DD_PROFILING_OUTPUT_PPROF"] + "." + str(os.getpid())
```
Suggested change:

```python
pprof_file = f"{os.environ['DD_PROFILING_OUTPUT_PPROF']}.{os.getpid()}"
```
```python
os.remove(pprof_file)

# Start profiler for the actual test
p = profiler.Profiler()
```
Can we reuse p_cal from above?
```python
start_cpu = time.thread_time_ns()
cpu_loop(iterations)
actual_cpu_time_ns += time.thread_time_ns() - start_cpu
```
This code is repeated 4-5 times, let's make it a helper that returns `time.thread_time_ns() - start_cpu`.
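A minimal sketch of such a helper (the name `timed_cpu_loop` is illustrative, not from the PR):

```python
def timed_cpu_loop(iterations):
    # Run cpu_loop and return the thread CPU time it consumed, in nanoseconds.
    start_cpu = time.thread_time_ns()
    cpu_loop(iterations)
    return time.thread_time_ns() - start_cpu
```

Call sites would then read `actual_cpu_time_ns += timed_cpu_loop(iterations)`.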
```python
    break

if sleep_function_name and wall_times[sleep_function_name] > 0:
    expected_ratio = 0.01 / 0.1  # 0.1
```
Just this?
Suggested change:

```python
expected_ratio = 0.1
```
```python
#
# The profiler uses sampling, so it may capture less than 100% of CPU time.
# We check that it captures at least 40% of the actual CPU time.
target_cpu_time_total_ns = actual_cpu_time_ns
```
This looks like a different test - should we break it out of here?
```python
        _DD_PROFILING_STACK_V2_ADAPTIVE_SAMPLING_ENABLED="0",
    )
)
def test_cpu_loop_accuracy():
```
This one test is doing a lot of checks, and they don't all seem related. Should we refactor into several tests that do one specific thing?
- test_cpu_loop_accuracy_THING_1
- test_cpu_loop_accuracy_THING_2
- ...
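One possible shape for the split, assuming the assertions group roughly as they appear in this diff (test names are placeholders, not from the PR):

```python
def test_cpu_loop_accuracy_cpu_time():
    # Check that sampled CPU time is at least 40% of the measured CPU time.
    ...


def test_cpu_loop_accuracy_wall_time_ratio():
    # Check the expected CPU/wall ratio for the cpu_loop + sleep workload.
    ...
```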
Description
When cpu_loop() is calibrated to run for 0.01 seconds followed by time.sleep(0.1), the profiler seems to under-report the CPU time even when adaptive sampling is turned off. This test documents the behavior so that we have a baseline to improve upon.

https://app.datadoghq.com/notebook/13556452/profiles-for-stack-profiler-accuracy-check
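A minimal sketch of the workload shape described above (this `cpu_loop` is a stand-in for the test's CPU burner, and the iteration count is a placeholder for the calibrated value):

```python
import time


def cpu_loop(n):
    # Stand-in for the test's cpu_loop: pure CPU-bound work, no I/O.
    acc = 0
    for i in range(n):
        acc += i * i
    return acc


calibrated_iterations = 1_000_000  # placeholder; the test calibrates this to ~0.01 s of CPU time

for _ in range(20):
    cpu_loop(calibrated_iterations)  # ~0.01 s on-CPU per pass
    time.sleep(0.1)                  # 0.1 s off-CPU per pass
```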
Testing
Risks
Additional Notes