feat: Massive performance overhaul for editor playback #1447
base: main
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough
Added LZ4 compression/decompression and timestamps to WebSocket frame payloads, introduced pipelined GPU readbacks and double-buffered textures, refactored playback in-flight tracking and prefetching, and added extensive performance instrumentation and logging across rendering, playback, and socket paths.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Camera as Camera/Editor (desktop)
    participant FrameWS as frame_ws (compress + WS send)
    participant WS as WebSocket
    participant Client as Browser/client socket.ts
    participant Renderer as Canvas Renderer
    Camera->>FrameWS: produce WSFrame {data, stride, w, h, created_at}
    FrameWS->>FrameWS: compress_frame_data(...) (LZ4) -> packed ArrayBuffer
    FrameWS->>WS: send(ArrayBuffer)
    WS->>Client: deliver(ArrayBuffer)
    Client->>Client: decompressLz4(ArrayBuffer) -> metadata + pixels
    Client->>Client: createImageData(pixels, width, height)
    Client->>Renderer: draw ImageData
    Note over FrameWS,Client: Both sides record per-frame timings/metrics
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/utils/socket.ts (1)
91-99: Potential buffer offset issue with DataView. When `decompressed` is created from `lz4.decompressBlock`, its `buffer` property references the underlying `ArrayBuffer`, but `decompressed.byteOffset` may not be 0. The `DataView` at line 99 uses `metadataOffset` relative to the start of `buffer`, not relative to `decompressed.byteOffset`. This could cause incorrect metadata reads if the underlying buffer has a non-zero offset.
```diff
- const buffer = decompressed.buffer;
  const clamped = new Uint8ClampedArray(decompressed);
  if (clamped.length < 12) {
    console.error("Received frame too small to contain metadata");
    return;
  }
  const metadataOffset = clamped.length - 12;
- const meta = new DataView(buffer, metadataOffset, 12);
+ const meta = new DataView(
+   clamped.buffer,
+   clamped.byteOffset + metadataOffset,
+   12,
+ );
```
🧹 Nitpick comments (16)
crates/rendering/src/lib.rs (1)
1795-1797: No-op method calls. These `copy_to_texture` calls are no-ops in both `DisplayLayer` and `CameraLayer`. If this is scaffolding for future functionality, consider adding a TODO tracking issue. If not needed, remove them.

crates/rendering/src/layers/display.rs (1)
97-97: Empty method implementation. This no-op `copy_to_texture` appears to be API scaffolding. Consider documenting its intended purpose or removing it if unused.

crates/rendering/src/layers/camera.rs (1)
111-111: Empty method implementation. Same observation as `DisplayLayer`: this is a no-op placeholder.

crates/rendering/src/frame_pipeline.rs (4)
108-123: Buffer resizing recreates all three buffers. When size increases, all three buffers are recreated. This is correct but worth noting that it temporarily increases memory pressure. Consider adding a small growth factor to reduce reallocations for slight size changes.
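A minimal sketch of the growth-factor idea, assuming a standalone size computation (the surrounding struct and buffer creation are the PR's own code, not shown here):

```rust
fn next_readback_buffer_size(current: u64, required: u64) -> u64 {
    if required <= current {
        return current;
    }
    // Grow ~25% past the request so slightly larger frames do not
    // force recreating all three buffers again on the next resize.
    required.saturating_add(required / 4)
}
```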
264-289: Texture size update recreates both textures unconditionally. Consider adding a size check to avoid unnecessary recreation when dimensions haven't changed:

```diff
 pub fn update_texture_size(&mut self, device: &wgpu::Device, width: u32, height: u32) {
+    if self.textures.0.width() == width && self.textures.0.height() == height {
+        return;
+    }
+
     let make_texture = || {
```
439-443: Consider defensive handling instead of expect(). While the logic ensures a pending readback exists after `submit_readback`, using `expect()` creates a panic risk if the code is refactored. Consider returning an error instead:

```diff
-        let pending = session
-            .pipelined_readback
-            .take_pending()
-            .expect("just submitted a readback");
+        let pending = session
+            .pipelined_readback
+            .take_pending()
+            .ok_or(RenderingError::BufferMapWaitingFailed)?;
```
388-389: Unused variable `has_previous`. The `has_previous` variable is declared but only used in debug log output, not for control flow. Consider either removing it or using it in the tracing logs for consistency.

```diff
-        let previous_pending = session.pipelined_readback.take_pending();
-        let has_previous = previous_pending.is_some();
+        let previous_pending = session.pipelined_readback.take_pending();
```

Or include it in tracing:

```rust
tracing::debug!(has_previous = previous_pending.is_some(), ...);
```

crates/rendering/src/decoder/avassetreader.rs (2)
357-362: Consider destructuring for a clearer closure pattern. The closure and tuple access could be more readable with destructuring:

```diff
-        if let Some(most_recent_prev_frame) =
-            cache.iter().rev().find(|v| *v.0 < requested_frame)
+        if let Some((&_frame_num, cached)) =
+            cache.iter().rev().find(|(&k, _)| k < requested_frame)
             && let Some(sender) = sender.take()
         {
-            (sender)(most_recent_prev_frame.1.data().clone());
+            (sender)(cached.data().clone());
         }
```
404-407: Remove commented-out code. As per coding guidelines, commented-out code should be removed. If this debug information is needed, convert it to a tracing call; otherwise delete it.

```diff
-            // info!(
-            //     "sending previous frame {} for {requested_frame}",
-            //     last_sent_frame.0
-            // );
```
apps/desktop/src/utils/socket.ts (1)
16-29: Consider validating decompression output length. The size-equality heuristic at line 21 assumes uncompressed data if `dataAfterSize === uncompressedSize`. While rare, LZ4 can produce compressed output equal to or larger than the input for incompressible data. Additionally, `lz4.decompressBlock` returns the number of bytes written. Consider validating this return value to ensure decompression succeeded completely.

```diff
 function decompressLz4(compressedBuffer: ArrayBuffer): Uint8Array {
   const view = new DataView(compressedBuffer);
   const uncompressedSize = view.getUint32(0, true);
   const dataAfterSize = compressedBuffer.byteLength - 4;
   if (dataAfterSize === uncompressedSize) {
     return new Uint8Array(compressedBuffer.slice(4));
   }
   const compressedData = new Uint8Array(compressedBuffer, 4);
   const output = new Uint8Array(uncompressedSize);
-  lz4.decompressBlock(compressedData, output, 0, compressedData.length, 0);
+  const bytesWritten = lz4.decompressBlock(compressedData, output, 0, compressedData.length, 0);
+  if (bytesWritten !== uncompressedSize) {
+    throw new Error(`LZ4 decompression size mismatch: expected ${uncompressedSize}, got ${bytesWritten}`);
+  }
   return output;
 }
```

apps/desktop/src-tauri/src/frame_ws.rs (3)
49-50: Remove duplicate `println!` statements. Using both `println!` and `tracing::info!` for the same event is redundant. Prefer `tracing` exclusively for consistent, structured logging with proper log levels and filtering.

```diff
-    println!("socket connection established");
     tracing::info!("Socket connection established");
```
161-163: Remove redundant `println!` statements throughout the file. Multiple locations use `println!` alongside `tracing` (lines 49, 162, 181, 211, 331, 350). Use `tracing` consistently for all logging.

```diff
     let elapsed = now.elapsed();
-    println!("Websocket closing after {elapsed:.2?}");
     tracing::info!("Websocket closing after {elapsed:.2?}");
```
170-172: Consider propagating bind errors. The `.unwrap()` on `TcpListener::bind` will panic if the port is unavailable. Consider returning a `Result` from this function to allow callers to handle bind failures gracefully.
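A minimal sketch of propagating the bind failure instead of unwrapping; the function name and port argument here are illustrative, not the PR's actual signature:

```rust
use std::io;
use tokio::net::TcpListener;

async fn start_frame_ws_listener(port: u16) -> io::Result<TcpListener> {
    // `?`-style propagation surfaces errors like EADDRINUSE to the
    // caller instead of panicking inside socket setup.
    TcpListener::bind(("127.0.0.1", port)).await
}
```

crates/editor/src/editor.rs (2)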
91-100: Perf counters and periodic/final logging look consistent. The counters (`frames_rendered`, `frames_dropped`, per-frame times, maxima) and both the final Stop log and the periodic 2-second metrics log are internally consistent and guard against divide-by-zero; this is a solid instrumentation pass. If you find yourself extending these metrics further, consider extracting a small helper to compute/emit the summary (avg/max) so the Stop and periodic paths stay in lockstep, but that's optional.
Also applies to: 122-142, 280-293
323-323: Awaiting `finished_rx` is fine; ignoring cancellation is acceptable here. Switching from `.await.ok()` to `let _ = finished_rx.await;` keeps the semantics (we wait for the renderer to signal completion but don't treat a dropped sender as an error) while avoiding the extra `ok()` call. This is reasonable for a fire-and-forget render request where a canceled oneshot just means "renderer went away". If you ever need to debug unexpected cancellations, you could match on the `Result` and log a `debug!` on the error path, but that's optional.

crates/editor/src/playback.rs (1)
96-99: RwLock usage is safe; consider an async-aware lock and clearer poison handling (optional). The `Arc<RwLock<HashSet<u32>>>` protecting `in_flight_frames` is used carefully (short read/write sections, no lock held across `.await`), so there are no obvious race or deadlock issues.

Two optional cleanups you might consider (see the sketch after this list):

- Using `tokio::sync::RwLock` instead of `std::sync::RwLock` would make the intent clearer in this async context and avoid blocking an executor thread if contention ever grows.
- The current `read().map(...).unwrap_or(false)` / `write().ok()` patterns silently treat lock poisoning as "no entry in set". If poisoning is unexpected here, logging in the error branch instead of silently ignoring it could make debugging easier.

These are not blockers, but worth considering if you continue iterating on this path.

Also applies to: 122-124, 144-148, 170-172, 199-203, 420-424, 552-554
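For the first point, a sketch of what the tokio-based lock could look like; the helper names are hypothetical and the frame-set type follows the review's description:

```rust
use std::collections::HashSet;
use std::sync::Arc;
use tokio::sync::RwLock;

async fn is_in_flight(frames: &Arc<RwLock<HashSet<u32>>>, frame: u32) -> bool {
    // tokio's RwLock has no poisoning, so the std version's
    // unwrap_or(false) fallback disappears entirely.
    frames.read().await.contains(&frame)
}

async fn mark_in_flight(frames: &Arc<RwLock<HashSet<u32>>>, frame: u32) -> bool {
    frames.write().await.insert(frame)
}
```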
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (16)
- .claude/settings.local.json (1 hunks)
- apps/desktop/package.json (2 hunks)
- apps/desktop/src-tauri/Cargo.toml (1 hunks)
- apps/desktop/src-tauri/src/camera_legacy.rs (2 hunks)
- apps/desktop/src-tauri/src/editor_window.rs (3 hunks)
- apps/desktop/src-tauri/src/frame_ws.rs (6 hunks)
- apps/desktop/src-tauri/src/screenshot_editor.rs (2 hunks)
- apps/desktop/src/utils/socket.ts (4 hunks)
- crates/editor/src/editor.rs (9 hunks)
- crates/editor/src/playback.rs (10 hunks)
- crates/rendering/src/composite_frame.rs (3 hunks)
- crates/rendering/src/decoder/avassetreader.rs (9 hunks)
- crates/rendering/src/frame_pipeline.rs (2 hunks)
- crates/rendering/src/layers/camera.rs (4 hunks)
- crates/rendering/src/layers/display.rs (3 hunks)
- crates/rendering/src/lib.rs (8 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Use `rustfmt` and workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never use the `dbg!()` macro in Rust code; use proper logging instead (Clippy: `dbg_macro` = deny)
Always handle `Result`/`Option` or types marked `#[must_use]`; never ignore them (Rust compiler lint: `unused_must_use` = deny)
Never write `let _ = async_fn()`, which silently drops futures; await or explicitly handle them (Clippy: `let_underscore_future` = deny)
Use `saturating_sub` instead of `-` for `Duration` to avoid panics (Clippy: `unchecked_duration_subtraction` = deny)
Merge nested `if` statements: use `if a && b { }` instead of `if a { if b { } }` (Clippy: `collapsible_if` = deny)
Don't call `.clone()` on `Copy` types; just copy them directly (Clippy: `clone_on_copy` = deny)
U...
Files:
- apps/desktop/src-tauri/src/screenshot_editor.rs
- apps/desktop/src-tauri/src/camera_legacy.rs
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/editor/src/playback.rs
- crates/rendering/src/layers/camera.rs
- apps/desktop/src-tauri/src/editor_window.rs
- crates/editor/src/editor.rs
- crates/rendering/src/composite_frame.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/layers/display.rs
- crates/rendering/src/decoder/avassetreader.rs
- crates/rendering/src/frame_pipeline.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (`//`, `/* */`, `///`, `//!`, `#`, etc.); code must be self-explanatory through naming, types, and structure

Files:
- apps/desktop/src-tauri/src/screenshot_editor.rs
- apps/desktop/src-tauri/src/camera_legacy.rs
- apps/desktop/src/utils/socket.ts
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/editor/src/playback.rs
- crates/rendering/src/layers/camera.rs
- apps/desktop/src-tauri/src/editor_window.rs
- crates/editor/src/editor.rs
- crates/rendering/src/composite_frame.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/layers/display.rs
- crates/rendering/src/decoder/avassetreader.rs
- crates/rendering/src/frame_pipeline.rs
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use TanStack Query v5 for all client-side server state and data fetching in TypeScript files
Files:
apps/desktop/src/utils/socket.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx,js,jsx}: Use strict TypeScript; avoid any type; leverage shared types from @cap/* packages
Follow camelCase naming for variables and functions; PascalCase for components; hooks must start with 'use' prefix
Use Biome for linting and formatting; match existing formatting conventions in the codebase
Use Tailwind CSS for styling in web components; stay consistent with spacing and tokens
Use static skeletons for loading states that mirror content; avoid bouncing animations
Memoize expensive work, code-split naturally, and use Next/Image for remote assets
**/*.{ts,tsx,js,jsx}: Use 2-space indent for TypeScript files; format with Biome using `pnpm format`
Use Biome for code formatting and linting; run `pnpm format` regularly
Use kebab-case for file names (e.g., `user-menu.tsx`); use PascalCase for components
Files:
apps/desktop/src/utils/socket.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Never add any form of comments to code (single-line //, multi-line /* */, JSDoc /** */, or any other comment syntax); code must be self-explanatory through naming, types, and structure
Directory naming must use lowercase-dashed convention
Files:
apps/desktop/src/utils/socket.ts
apps/desktop/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/desktop/**/*.ts: Use @tanstack/solid-query for server state management in SolidJS components
Use generated commands and events from tauri_specta for IPC; never manually construct IPC calls
Listen directly to generated events from tauri_specta and use typed event interfaces
Files:
apps/desktop/src/utils/socket.ts
🧠 Learnings (6)
📚 Learning: 2025-12-07T14:29:19.166Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.166Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use Tailwind CSS for styling in web components; stay consistent with spacing and tokens
Applied to files:
apps/desktop/package.json
📚 Learning: 2025-12-07T14:29:19.165Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.165Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use strict TypeScript; avoid any type; leverage shared types from @cap/* packages
Applied to files:
apps/desktop/package.json
📚 Learning: 2025-12-07T14:29:40.721Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.721Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use 2-space indent for TypeScript files; format with Biome using `pnpm format`
Applied to files:
apps/desktop/package.json
📚 Learning: 2025-10-10T02:06:05.438Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1165
File: apps/web-cluster/Dockerfile:15-15
Timestamp: 2025-10-10T02:06:05.438Z
Learning: Node.js version 24 and later includes built-in TypeScript support (type-stripping) and can run .ts files directly without requiring a separate TypeScript runtime or transpilation step for basic type annotations. The command `node script.ts` works natively in Node.js v24+.
Applied to files:
apps/desktop/package.json
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
apps/desktop/src-tauri/src/camera_legacy.rs
apps/desktop/src-tauri/src/frame_ws.rs
crates/editor/src/playback.rs
crates/editor/src/editor.rs
crates/rendering/src/lib.rs
crates/rendering/src/decoder/avassetreader.rs
📚 Learning: 2025-12-07T14:29:19.165Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.165Z
Learning: Applies to **/*.rs : Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Applied to files:
crates/editor/src/editor.rs
🧬 Code graph analysis (4)
crates/editor/src/editor.rs (1)
crates/editor/src/editor_instance.rs (1)
- new (36-189)

crates/rendering/src/lib.rs (1)
crates/rendering/src/frame_pipeline.rs (1)
- finish_encoder (379-492)

crates/rendering/src/layers/display.rs (2)
crates/rendering/src/composite_frame.rs (4)
- new (94-117)
- create_frame_texture (179-198)
- default (41-67)
- bind_group (153-177)
crates/rendering/src/layers/camera.rs (2)
- new (19-49)
- render (113-121)

crates/rendering/src/decoder/avassetreader.rs (2)
crates/rendering/src/decoder/mod.rs (2)
- new (20-26)
- data (28-30)
crates/video-decode/src/avassetreader.rs (1)
- frames (135-139)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Clippy (aarch64-apple-darwin, macos-latest)
- GitHub Check: Typecheck
- GitHub Check: Cursor Bugbot
- GitHub Check: Analyze (rust)
🔇 Additional comments (36)
.claude/settings.local.json (1)
34-34: Clarify the purpose of the new `git checkout` permission and consider restricting the pattern. The addition of `"Bash(git checkout:*)"` grants broad permission to run any `git checkout` command without restrictions. While this may be intentional, the wildcard pattern is permissive and could allow unintended operations.

Purpose verification: The PR focuses on video playback performance improvements. Why does this configuration change need git checkout permissions? Is this required for CI/CD, development workflows, or build processes?

Pattern restriction: If this permission is necessary, consider using a more restrictive pattern. For example:
- `"Bash(git checkout:origin/*)"` to limit to remote branches
- `"Bash(git checkout:refs/tags/*)"` to limit to tags only
- `"Bash(git checkout:main)"` or `"Bash(git checkout:develop)"` for specific branches

crates/rendering/src/composite_frame.rs (1)
9-9: Good optimization: persistent sampler reuse. Moving sampler creation from per-bind-group to struct initialization eliminates redundant GPU resource allocation. The sampler configuration is appropriate for video frame compositing.
Also applies to: 102-116
crates/rendering/src/lib.rs (1)
1750-1763: Well-structured performance tracing. The `tracing::debug!` instrumentation with structured fields is good for performance monitoring. The conditional logging (only when exceeding thresholds) avoids log spam.

Also applies to: 1927-1936
crates/rendering/src/layers/display.rs (2)
9-14: Good double-buffering implementation. The double-buffering pattern correctly prevents GPU contention by alternating between textures. The implementation mirrors `CameraLayer` for consistency across the codebase.

Also applies to: 31-38
49-67: Texture recreation logic is sound. The conditional texture/view/bind_group recreation when dimensions change is correctly scoped to the `next_texture` slot, avoiding disruption to the currently displayed frame.

crates/rendering/src/layers/camera.rs (2)
9-15: Consistent double-buffering implementation. The double-buffering pattern matches `DisplayLayer`, maintaining architectural consistency. Resource initialization is correct.

Also applies to: 40-48
63-81: Correct next-buffer texture management. The logic correctly handles dimension changes by recreating only the next buffer's resources, ensuring the current frame remains intact during transitions.

crates/rendering/src/frame_pipeline.rs (4)
7-14: Well-designed pending readback structure. The `PendingReadback` struct encapsulates all necessary state for async GPU readback completion tracking, including timing for performance analysis.
21-45: Robust polling loop with cooperative yielding. The poll loop correctly handles the three possible states (a sketch follows below):

- Success: break and continue
- Empty: poll device and yield periodically
- Closed: return error

The `yield_now()` every 10 polls prevents CPU monopolization.
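Illustratively, the three-state loop has roughly this shape. This is a sketch only: the channel type, the error type, and the exact wgpu poll API depend on the crate versions in use, and the names here are assumptions rather than the PR's code:

```rust
use std::sync::mpsc::{Receiver, TryRecvError};

fn wait_for_readback(rx: &Receiver<()>, device: &wgpu::Device) -> Result<(), &'static str> {
    let mut polls: u32 = 0;
    loop {
        match rx.try_recv() {
            // Success: the buffer-map callback fired.
            Ok(()) => return Ok(()),
            // Empty: drive the device forward, yielding periodically
            // so a long map wait cannot monopolize the thread.
            Err(TryRecvError::Empty) => {
                device.poll(wgpu::Maintain::Poll);
                polls += 1;
                if polls % 10 == 0 {
                    std::thread::yield_now();
                }
            }
            // Closed: the sender was dropped, so the map can never complete.
            Err(TryRecvError::Disconnected) => return Err("map callback dropped"),
        }
    }
}
```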
66-71: Good performance warning threshold. Logging a warning when a GPU poll exceeds 10ms helps identify bottlenecks in production.

186-190: Ignoring the oneshot send result is acceptable here. The receiver may have been dropped if the wait was cancelled. This pattern is standard for fire-and-forget completion signals.
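As a tiny illustration of that pattern (assuming a tokio oneshot; the original may use a different channel type):

```rust
use tokio::sync::oneshot;

fn signal_completion(tx: oneshot::Sender<()>) {
    // send() only fails if the receiver was dropped, i.e. the wait
    // was cancelled; for a completion signal that is fine to ignore
    // or log at debug level.
    if tx.send(()).is_err() {
        tracing::debug!("completion receiver dropped before signal");
    }
}
```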
crates/rendering/src/decoder/avassetreader.rs (2)
146-160: LGTM! Clean wrapper pattern. The refactored `CachedFrame` implementation properly encapsulates `ProcessedFrame` with a clean interface via `new()` and `data()` methods.
290-331: LGTM! Comprehensive reset instrumentation. The cache reset logic properly:

- Uses `saturating_sub` to avoid panics (line 287)
- Determines reset necessity based on cache bounds
- Retains frames within the new cache window
- Tracks and logs detailed performance metrics (timing, cleared/retained entries, total resets)

The instrumentation will be valuable for performance analysis.
apps/desktop/src-tauri/src/screenshot_editor.rs (2)
18-18: LGTM! The import addition for `Instant` is correct and necessary for the timestamp feature.

350-358: LGTM! The `created_at` timestamp is correctly captured at frame emission time, enabling latency tracking downstream.

apps/desktop/package.json (2)
80-80: LGTM! Type definitions for `lz4js` are correctly placed in devDependencies.

59-59: LGTM! The `lz4js` runtime dependency (^0.2.0) correctly specifies the latest available version for client-side decompression of compressed WebSocket frames, with matching type definitions (`@types/lz4js@^0.2.1`) added as a devDependency.

apps/desktop/src-tauri/src/camera_legacy.rs (2)
1-2: LGTM! The import for `Instant` is correctly added for timestamp support.

61-69: LGTM! The `created_at` timestamp is correctly captured when constructing the `WSFrame`, consistent with the timestamping pattern across all frame emission paths.

apps/desktop/src-tauri/src/editor_window.rs (3)
1-1: LGTM! The `time::Instant` import is cleanly integrated into the existing `std` imports.

30-37: LGTM! The `created_at` timestamp is correctly captured in the prewarm frame emission path.

186-194: LGTM! The `created_at` timestamp is correctly captured in the on-demand frame creation path, maintaining consistency with the prewarm path.

apps/desktop/src/utils/socket.ts (5)
1-14: LGTM! The `SocketMetrics` interface is well-defined with appropriate numeric types for tracking frame processing performance.

38-47: LGTM! Metrics are properly initialized with zeroed counters and the current timestamp.

55-68: LGTM! Good practice to log a session summary on close. The division is safely guarded by the `framesReceived > 0` check.

128-149: LGTM! The stride handling correctly accounts for GPU texture row padding, using an efficient direct subarray when the stride matches and a row-by-row copy otherwise.

151-186: LGTM! The metrics tracking and periodic logging provide valuable performance insights. Note that the compression ratio shown in periodic logs reflects only the most recent frame, not a session average; this is fine for monitoring real-time compression behavior.
apps/desktop/src-tauri/Cargo.toml (1)
110-110: LGTM! The `lz4_flex` dependency is appropriate for the frame compression feature. Version 0.11 will resolve to 0.11.5 (the latest patch), which is a pure Rust implementation with good performance characteristics.

apps/desktop/src-tauri/src/frame_ws.rs (2)
17-24: LGTM! The `created_at: Instant` field is a clean addition for latency tracking. The struct is appropriately defined.

151-159: LGTM! Proper guard against division by zero with `if frames_sent > 0`, and good use of structured logging for session metrics.

crates/editor/src/editor.rs (4)
2-2: Tracing/Instant imports correctly support the new instrumentation. Bringing in `Instant` and `tracing::{debug, info}` is appropriate for the timing and perf logs added below; no issues here.

Also applies to: 10-10
61-61: Channel capacity increase to 8 is a safe backpressure tweak. Bumping the `mpsc` channel size to 8 should reduce renderer backpressure without meaningfully increasing memory usage; this looks fine.
153-179: Batched frame dropping behavior is correct for catch-up. Using `dropped_in_batch` with `try_recv` to always render the latest frame, explicitly sending `finished` on the dropped ones, and incrementing `frames_dropped` only when frames are actually skipped gives the right backpressure behavior without leaving any waiters hanging. No issues from a correctness standpoint.
230-241: Per-frame timing and metric accumulation are correct and low-overhead. Capturing `render_start`/`callback_start` with `Instant::now()`, computing `render_time` and `callback_time`, and then aggregating into `total_*_us` and `max_*_us` with debug-level per-frame logs is a good balance of detail and overhead. The periodic 2-second info log leverages these aggregates correctly. No changes needed here.

Also applies to: 261-293
crates/editor/src/playback.rs (2)
239-246: Warmup and prefetch buffer management look correct and well-bounded. The warmup loop (first-frame timeout, additional warmup until `warmup_target_frames`, and early exit on stop) plus the subsequent sort of `prefetch_buffer` by `frame_number` look sound and should give a good starting cache without risking unbounded waiting. Similarly, the post-warmup trimming logic that bounds `prefetch_buffer` to `PREFETCH_BUFFER_SIZE` and preferentially evicts the farthest-ahead frames is reasonable given the small buffer size. No functional issues stand out here.

Also applies to: 254-303, 300-303, 304-311, 335-336, 337-355
239-246: Playback metrics instrumentation is comprehensive and low-risk. The new counters and logs for:

- total rendered and skipped frames,
- prefetch hit/miss counts and hit-rate,
- per-frame timings (fetch, uniforms, render),
- periodic (~2s) metrics, and
- final summary including effective FPS

are well-structured and integrated cleanly with `tracing`. These should be very helpful for diagnosing playback behaviour without affecting correctness. Looks good as is.

Also applies to: 650-654, 655-663, 709-715, 722-745, 748-771
```rust
fn compress_frame_data(mut data: Vec<u8>, stride: u32, height: u32, width: u32) -> Vec<u8> {
    data.extend_from_slice(&stride.to_le_bytes());
    data.extend_from_slice(&height.to_le_bytes());
    data.extend_from_slice(&width.to_le_bytes());

    let data_len = data.len();
    let mut result = Vec::with_capacity(data_len + 4);
    result.extend_from_slice(&(data_len as u32).to_le_bytes());
    result.extend_from_slice(&data);
    result
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's look at the full file to understand the context
cat -n apps/desktop/src-tauri/src/frame_ws.rs | head -100
```

Repository: CapSoftware/Cap
Length of output: 4298

🏁 Script executed:

```shell
# Check if lz4_flex is in the dependencies
find . -name "Cargo.toml" -type f | head -20
```

Repository: CapSoftware/Cap
Length of output: 685

🏁 Script executed:

```shell
# Search for compression-related functions or references in the file
rg -n "compress|lz4|LZ4" apps/desktop/src-tauri/src/frame_ws.rs
```

Repository: CapSoftware/Cap
Length of output: 2622

🏁 Script executed:

```shell
cat apps/desktop/src-tauri/Cargo.toml | grep -A 50 "\[dependencies\]"
```

Repository: CapSoftware/Cap
Length of output: 1641

🏁 Script executed:

```shell
# Also check if lz4_flex or other compression libs are mentioned anywhere
rg "lz4|compress|zstd" apps/desktop/src-tauri/Cargo.toml
```

Repository: CapSoftware/Cap
Length of output: 76
Misleading function name: no compression is performed. The function `compress_frame_data` only serializes frame metadata (stride, height, width) and prepends the total length; no actual compression occurs. This causes misleading log messages claiming "(compressed)" and metrics showing compression ratios, which will report >100% since the output is larger than the input. `lz4_flex` (0.11) is available in the dependencies. Either rename the function to `pack_frame_data` or `serialize_frame_data`, or implement actual LZ4 compression using `lz4_flex::compress_prepend_size`.
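For reference, a minimal sketch of what real compression could look like with the crate that is already in Cargo.toml. This keeps the metadata layout from the existing function but is an assumption, not the PR's code:

```rust
fn compress_frame_data(mut data: Vec<u8>, stride: u32, height: u32, width: u32) -> Vec<u8> {
    data.extend_from_slice(&stride.to_le_bytes());
    data.extend_from_slice(&height.to_le_bytes());
    data.extend_from_slice(&width.to_le_bytes());
    // compress_prepend_size emits the uncompressed length as a
    // little-endian u32 header, which is what the client-side
    // decompressLz4 already reads at offset 0.
    lz4_flex::compress_prepend_size(&data)
}
```

The payload after the header is a raw LZ4 block, which is the format `lz4js`'s `decompressBlock` consumes on the client; that pairing is worth verifying end to end before relying on it.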
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
    let _ = writeln!(f, r#"{{"hypothesisId":"D","location":"frame_ws.rs:ws_send","message":"WebSocket frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"original_bytes":{},"compressed_bytes":{}}},"timestamp":{}}}"#,
        latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), original_size, compressed_size, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
}
// #endregion
```
Remove debug logging to hardcoded local path. This block writes to a hardcoded user-specific path (`/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log`), which:

- Will silently fail on any other machine
- Is clearly development/debugging code not intended for production
- Contains comments violating the coding guidelines ("Never add comments to code")
- Uses `.unwrap()` on `duration_since()`, which could panic

Remove this entire block. If persistent debug logging is needed, use proper tracing subscribers or configurable file appenders.
```diff
-        // #region agent log
-        use std::io::Write;
-        if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
-            let _ = writeln!(f, r#"{{"hypothesisId":"D","location":"frame_ws.rs:ws_send","message":"WebSocket frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"original_bytes":{},"compressed_bytes":{}}},"timestamp":{}}}"#,
-                latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), original_size, compressed_size, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
-        }
-        // #endregion
```
- // #endregion📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // #region agent log | |
| use std::io::Write; | |
| if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") { | |
| let _ = writeln!(f, r#"{{"hypothesisId":"D","location":"frame_ws.rs:ws_send","message":"WebSocket frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"original_bytes":{},"compressed_bytes":{}}},"timestamp":{}}}"#, | |
| latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), original_size, compressed_size, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis()); | |
| } | |
| // #endregion |
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/frame_ws.rs around lines 131 to 137, remove the
entire debug logging block that opens and writes to the hardcoded path
/Users/macbookuser/.../.cursor/debug.log; delete this development-only code
(including the OpenOptions, writeln! call and the
SystemTime::now().duration_since(...).unwrap()) and replace it with either no
logging or a proper tracing/log call wired to the application's configurable
logging/subscriber (or a configurable file appender) that does not use hardcoded
paths or unwraps; ensure any time calculations handle errors without panicking.
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
    let _ = writeln!(f, r#"{{"hypothesisId":"WS_BROADCAST","location":"frame_ws.rs:broadcast_send","message":"WebSocket broadcast frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"width":{},"height":{}}},"timestamp":{}}}"#,
        latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), frame.width, frame.height, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
}
// #endregion
```
Remove duplicate debug logging block.
Same issue as the other debug block—hardcoded local path, comments violating guidelines, and development-only code.
```diff
-        // #region agent log
-        use std::io::Write;
-        if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
-            let _ = writeln!(f, r#"{{"hypothesisId":"WS_BROADCAST","location":"frame_ws.rs:broadcast_send","message":"WebSocket broadcast frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"width":{},"height":{}}},"timestamp":{}}}"#,
-                latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), frame.width, frame.height, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
-        }
-        // #endregion
```
- // #endregion📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // #region agent log | |
| use std::io::Write; | |
| if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") { | |
| let _ = writeln!(f, r#"{{"hypothesisId":"WS_BROADCAST","location":"frame_ws.rs:broadcast_send","message":"WebSocket broadcast frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"width":{},"height":{}}},"timestamp":{}}}"#, | |
| latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), frame.width, frame.height, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis()); | |
| } | |
| // #endregion |
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/frame_ws.rs around lines 262 to 268, there is a
duplicate development-only debug logging block that writes to a hardcoded local
path and includes guideline-violating comments; remove this duplicated block
entirely and ensure only the approved centralized logging mechanism remains (or
retain a single, non-hardcoded, conditional debug log behind a feature flag/ENV
check if needed), delete the hardcoded path and accompanying comment markers,
and run cargo fmt/check to confirm no unused imports or variables remain.
```rust
if dropped_in_batch > 0 {
    frames_dropped += dropped_in_batch as u64;
    debug!(
        dropped_frames = dropped_in_batch,
        total_dropped = frames_dropped,
        "[PERF:EDITOR_RENDER] dropped frames to catch up"
    );

    // #region agent log
    use std::io::Write;
    if let Ok(mut f) = std::fs::OpenOptions::new()
        .create(true)
        .append(true)
        .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
    {
        let _ = writeln!(
            f,
            r#"{{"hypothesisId":"A","location":"editor.rs:frames_dropped","message":"Renderer dropped frames due to backpressure","data":{{"dropped_in_batch":{},"total_dropped":{},"rendering_frame":{}}},"timestamp":{}}}"#,
            dropped_in_batch,
            frames_dropped,
            current.frame_number,
            std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_millis()
        );
    }
    // #endregion
}
```
Remove agent log file I/O and comments from the hot render path. These three "agent log" blocks introduce multiple problems:

- They perform synchronous file I/O (`OpenOptions::open` + `writeln!`) on every frame or drop event in the hottest part of the render loop, which can severely distort the very performance you're measuring.
- They write to a hardcoded, machine-specific absolute path (`/Users/macbookuser/.../.cursor/debug.log`), which will fail on other environments and leaks a personal user path into the codebase.
- They rely on `SystemTime::now().duration_since(UNIX_EPOCH).unwrap()`, which can panic if the system clock is misconfigured.
- They add new `// #region` / `// #endregion` comments in a `.rs` file, which conflicts with the "no comments in Rust code" guideline.

This instrumentation should not ship as part of the renderer loop; the existing tracing-based perf logs are sufficient and portable.
I recommend removing these blocks entirely and keeping just the tracing metrics. For example:

```diff
-    if dropped_in_batch > 0 {
-        frames_dropped += dropped_in_batch as u64;
-        debug!(
-            dropped_frames = dropped_in_batch,
-            total_dropped = frames_dropped,
-            "[PERF:EDITOR_RENDER] dropped frames to catch up"
-        );
-
-        // #region agent log
-        use std::io::Write;
-        if let Ok(mut f) = std::fs::OpenOptions::new()
-            .create(true)
-            .append(true)
-            .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-        {
-            let _ = writeln!(
-                f,
-                r#"{{"hypothesisId":"A","location":"editor.rs:frames_dropped","message":"Renderer dropped frames due to backpressure","data":{{"dropped_in_batch":{},"total_dropped":{},"rendering_frame":{}}},"timestamp":{}}}"#,
-                dropped_in_batch,
-                frames_dropped,
-                current.frame_number,
-                std::time::SystemTime::now()
-                    .duration_since(std::time::UNIX_EPOCH)
-                    .unwrap()
-                    .as_millis()
-            );
-        }
-        // #endregion
-    }
+    if dropped_in_batch > 0 {
+        frames_dropped += dropped_in_batch as u64;
+        debug!(
+            dropped_frames = dropped_in_batch,
+            total_dropped = frames_dropped,
+            "[PERF:EDITOR_RENDER] dropped frames to catch up"
+        );
+    }
```

```diff
-    // #region agent log
-    use std::io::Write;
-    if let Ok(mut f) = std::fs::OpenOptions::new()
-        .create(true)
-        .append(true)
-        .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-    {
-        let _ = writeln!(
-            f,
-            r#"{{"hypothesisId":"A","location":"editor.rs:render_start","message":"Starting GPU render","data":{{"frame_number":{}}},"timestamp":{}}}"#,
-            current.frame_number,
-            std::time::SystemTime::now()
-                .duration_since(std::time::UNIX_EPOCH)
-                .unwrap()
-                .as_millis()
-        );
-    }
-    // #endregion
-
     let render_start = Instant::now();
```

```diff
-    // #region agent log
-    if let Ok(mut f) = std::fs::OpenOptions::new()
-        .create(true)
-        .append(true)
-        .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-    {
-        let _ = writeln!(
-            f,
-            r#"{{"hypothesisId":"A","location":"editor.rs:render_complete","message":"GPU render complete","data":{{"frame_number":{},"render_time_us":{}}},"timestamp":{}}}"#,
-            current.frame_number,
-            render_time.as_micros(),
-            std::time::SystemTime::now()
-                .duration_since(std::time::UNIX_EPOCH)
-                .unwrap()
-                .as_millis()
-        );
-    }
-    // #endregion
-
     let callback_start = Instant::now();
```

This keeps all of the useful perf metrics while avoiding the portability and perf issues and aligning with the "no comments in Rust" rule.
Also applies to: 211-228, 242-259
🤖 Prompt for AI Agents
In crates/editor/src/editor.rs around lines 181 to 209 (also remove the same
pattern at 211-228 and 242-259), remove the synchronous "agent log" block that
opens a hard-coded file path and writes using writeln!; this code performs
blocking I/O in the hot render path, references a machine-specific absolute
path, uses SystemTime::now().duration_since(...).unwrap() which can panic, and
includes noncompliant region comments. Replace each block by simply keeping the
existing tracing/debug log (frames_dropped metrics) and delete the
OpenOptions/Open+write, the timestamp unwrap, and the // #region / // #endregion
comments so the render loop only emits tracing-based metrics and no file I/O.
```rust
let (playback_position_tx, playback_position_rx) = watch::channel(self.start_frame_number);

let in_flight_frames: Arc<RwLock<HashSet<u32>>> = Arc::new(RwLock::new(HashSet::new()));
let prefetch_in_flight = in_flight_frames.clone();
let main_in_flight = in_flight_frames;
```
In-flight frame tracking logic looks solid; consider a couple of small robustness tweaks. The shared `Arc<RwLock<HashSet<u32>>>` in-flight set and the way it's consulted from both the prefetch and main playback paths look correct and should avoid duplicate decodes while allowing overlap.

Two concrete improvements you may want to make:

- Use a saturating add for the prefetch window cap to avoid any chance of `u32` overflow on very long timelines:

```diff
-    let max_prefetch_frame = current_playback_frame + MAX_PREFETCH_AHEAD;
+    let max_prefetch_frame =
+        current_playback_frame.saturating_add(MAX_PREFETCH_AHEAD);
```

- When the in-flight wait times out and you decide to skip that frame (`frame_number = frame_number.saturating_add(1); total_frames_skipped += 1; continue;`), `playback_position_tx` and `frame_request_tx` are not updated, so the prefetch task and any observers see a stale playback position for that skip path only. For consistency with the normal advancement and the catch-up skip branch, consider updating both before the `continue`:

```diff
-        } else {
-            frame_number = frame_number.saturating_add(1);
-            total_frames_skipped += 1;
-            continue;
-        }
+        } else {
+            frame_number = frame_number.saturating_add(1);
+            total_frames_skipped += 1;
+            let _ = frame_request_tx.send(frame_number);
+            let _ = playback_position_tx.send(frame_number);
+            continue;
+        }
```

This keeps the in-flight and prefetch logic fully aligned with the actual frame index in all advancement paths.
Also applies to: 118-130, 144-152, 170-173, 199-203, 420-471, 494-510, 552-574, 668-670, 719-720
🤖 Prompt for AI Agents
In crates/editor/src/playback.rs around lines 94 to 99 (and also apply the same
fixes at 118-130, 144-152, 170-173, 199-203, 420-471, 494-510, 552-574, 668-670,
719-720): replace any plain addition used to compute prefetch window bounds with
saturating_add to prevent u32 overflow on very long timelines, and when you
handle the in‑flight wait timeout branch where you increment frame_number and
total_frames_skipped then continue, also update playback_position_tx and
frame_request_tx with the new frame_number (and flush/send accordingly) before
the continue so prefetch and observers see the advanced position consistent with
the skip path.
```rust
let mut cache_hits = 0u64;
let mut cache_misses = 0u64;
let mut total_requests = 0u64;
let mut total_decode_time_us = 0u64;
let mut total_reset_count = 0u64;
let mut total_reset_time_us = 0u64;
let last_metrics_log = Rc::new(RefCell::new(Instant::now()));
```
🛠️ Refactor suggestion | 🟠 Major

Complete or remove the incomplete metrics tracking infrastructure. Several metrics variables are declared but not fully utilized:

- `total_decode_time_us` (line 225) is never accumulated from individual request times
- Lines 280-283 reset `last_metrics_log` every 2 seconds but never actually log the accumulated metrics (`cache_hits`, `cache_misses`, `total_requests`, `total_decode_time_us`)

Either complete the periodic metrics logging by actually emitting the accumulated stats, or remove the unused variables and the incomplete timer-reset block.

Also applies to: 233-284
🤖 Prompt for AI Agents
In crates/rendering/src/decoder/avassetreader.rs around lines 222 to 228,
several metrics variables (total_decode_time_us, cache_hits, cache_misses,
total_requests, total_reset_count, total_reset_time_us, last_metrics_log) are
declared but never emitted; either fully implement periodic metrics emission or
remove the unused variables and timer-reset code. Fix by one of two options: (A)
Complete metrics logging — increment total_decode_time_us whenever you time a
decode, increment cache_hits/cache_misses/total_requests/reset counters where
appropriate, and in the existing 2s timer block emit a structured log (or
telemetry) containing cache_hits, cache_misses, total_requests, average decode
time (total_decode_time_us/total_requests), total_reset_count,
total_reset_time_us, then zero the counters and update last_metrics_log; or (B)
Remove all of these metric variables and the 2s timer/reset block if you don’t
intend to report metrics. Ensure no dead variables remain and compiler warnings
are resolved.
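One possible shape for option (A), sketched under the assumption that the counters live together in a small struct; the field names follow the review and the 2-second interval matches the existing timer block, but none of this is the PR's actual code:

```rust
use std::time::{Duration, Instant};

struct DecoderMetrics {
    cache_hits: u64,
    cache_misses: u64,
    total_requests: u64,
    total_decode_time_us: u64,
    last_log: Instant,
}

impl DecoderMetrics {
    fn maybe_emit(&mut self) {
        if self.last_log.elapsed() < Duration::from_secs(2) {
            return;
        }
        // checked_div guards against division by zero when idle.
        let avg_decode_us = self
            .total_decode_time_us
            .checked_div(self.total_requests)
            .unwrap_or(0);
        tracing::info!(
            cache_hits = self.cache_hits,
            cache_misses = self.cache_misses,
            total_requests = self.total_requests,
            avg_decode_us,
            "[PERF:DECODER] periodic cache metrics"
        );
        self.cache_hits = 0;
        self.cache_misses = 0;
        self.total_requests = 0;
        self.total_decode_time_us = 0;
        self.last_log = Instant::now();
    }
}
```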
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new()
    .create(true)
    .append(true)
    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
{
    let _ = writeln!(
        f,
        r#"{{"hypothesisId":"GPU_PIPELINE","location":"frame_pipeline.rs:finish_encoder","message":"Pipelined finish (waited for previous)","data":{{"submit_us":{},"wait_us":{},"total_us":{},"has_previous":true}},"timestamp":{}}}"#,
        submit_time.as_micros(),
        wait_time.as_micros(),
        total_start.elapsed().as_micros(),
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis()
    );
}
// #endregion
```
Remove hardcoded debug log paths and comments. Same issue as in lib.rs: the hardcoded path `/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log` and the `// #region agent log` comments should be removed or made configurable before merging. As per the coding guidelines, comments are not permitted.

Also applies to: 459-479
🤖 Prompt for AI Agents
crates/rendering/src/frame_pipeline.rs lines 408-427 (and similarly 459-479):
remove the hardcoded debug file path and the `// #region agent log` comment and
replace this ad-hoc file write with a configurable logging approach; read a log
path from a configuration or environment variable (with a sensible default such
as using the system temp dir or the application's configured log directory), or
better yet use the project's logging/tracing facility to emit structured JSON
events instead of manually opening files, and ensure errors opening/writing the
log are handled (no panics) and that the code no longer contains disallowed
comments.
```rust
// #region agent log
use std::io::Write;
let texture_update_start = std::time::Instant::now();
// #endregion

session.update_texture_size(
    &self.constants.device,
    uniforms.output_size.0,
    uniforms.output_size.1,
);

// #region agent log
let texture_update_time = texture_update_start.elapsed();
if texture_update_time.as_micros() > 100 {
    if let Ok(mut f) = std::fs::OpenOptions::new()
        .create(true)
        .append(true)
        .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
    {
        let _ = writeln!(
            f,
            r#"{{"hypothesisId":"E","location":"lib.rs:update_texture_size","message":"Texture size update took significant time","data":{{"time_us":{},"width":{},"height":{}}},"timestamp":{}}}"#,
            texture_update_time.as_micros(),
            uniforms.output_size.0,
            uniforms.output_size.1,
            std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_millis()
        );
    }
}
// #endregion
```
Remove hardcoded debug log path and comments before merging. The hardcoded path `/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log` won't work on other machines and exposes local file structure. Additionally, the `// #region agent log` comments violate the coding guidelines, which prohibit comments in code.

Consider either:

- Removing this debug logging entirely before merge
- Using an environment variable for the log path (e.g., `std::env::var("CAP_DEBUG_LOG")`)

```diff
-    // #region agent log
-    use std::io::Write;
-    let texture_update_start = std::time::Instant::now();
-    // #endregion
+    let texture_update_start = std::time::Instant::now();
```
+ let texture_update_start = std::time::Instant::now();Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In crates/rendering/src/lib.rs around lines 1556 to 1588, remove the hardcoded
debug log path and the `// #region agent log` comments; either delete the entire
debug logging block before merging or replace the path with a configurable
option (e.g., read std::env::var("CAP_DEBUG_LOG") and only write if set) and
fall back to stderr/stdout when the env var is absent, and eliminate the
prohibited region comments so the logging obeys coding guidelines.
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new()
    .create(true)
    .append(true)
    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
{
    let _ = writeln!(
        f,
        r#"{{"hypothesisId":"PREPARE_BREAKDOWN","location":"lib.rs:prepare","message":"Layer prepare breakdown","data":{{"bg_us":{},"blur_us":{},"display_us":{},"cursor_us":{},"camera_us":{},"camera_only_us":{},"text_us":{},"captions_us":{},"total_us":{}}},"timestamp":{}}}"#,
        bg_time.as_micros(),
        blur_time.as_micros(),
        display_time.as_micros(),
        cursor_time.as_micros(),
        camera_time.as_micros(),
        camera_only_time.as_micros(),
        text_time.as_micros(),
        captions_time.as_micros(),
        total_time.as_micros(),
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis()
    );
}
// #endregion
```
Same issue: hardcoded debug log path and comments. This block has the same problems: a hardcoded path and comment markers. As per the coding guidelines, comments should be removed.
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new()
    .create(true)
    .append(true)
    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
{
    let _ = writeln!(
        f,
        r#"{{"hypothesisId":"GPU_BREAKDOWN","location":"lib.rs:produce_frame","message":"GPU render breakdown","data":{{"prepare_us":{},"encoder_create_us":{},"render_pass_us":{},"finish_encoder_us":{},"total_us":{}}},"timestamp":{}}}"#,
        prepare_time.as_micros(),
        encoder_create_time.as_micros(),
        render_time.as_micros(),
        finish_time.as_micros(),
        total_time.as_micros(),
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis()
    );
}
// #endregion
```
Same issue: Hardcoded debug log path and comments in produce_frame.
Consistent with previous comments, this debug logging block should either be removed or use a configurable path.
🤖 Prompt for AI Agents
In crates/rendering/src/lib.rs around lines 1904 to 1925, there is a hardcoded
debug log write to "/Users/macbookuser/.../.cursor/debug.log" inside
produce_frame; remove this hardcoded path and debug block or replace it with a
configurable logger: either delete the entire ad-hoc file-writing block, or read
a configured log path (or use the existing app logger) and write the JSON entry
via that logger, ensuring path/config comes from settings/env and fallbacks to
stdout/stderr if unset; also remove or convert the inline comment markers to
normal code comments if keeping the block.
```rust
    let _ = writeln!(f, r#"{{"hypothesisId":"D","location":"frame_ws.rs:ws_send","message":"WebSocket frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"original_bytes":{},"compressed_bytes":{}}},"timestamp":{}}}"#,
        latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), original_size, compressed_size, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
}
// #endregion
```
Bug: Debug logging to hardcoded local path accidentally committed
Multiple debug logging blocks write to a hardcoded path /Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log throughout the codebase. This debugging code was accidentally committed and performs unnecessary filesystem I/O operations on every frame or operation. The code will silently fail on user machines since the path won't exist, but still introduces overhead from failed file open attempts. These // #region agent log blocks appear in frame_ws.rs, editor.rs, playback.rs, frame_pipeline.rs, and lib.rs.
Additional Locations (2)
```rust
    result.extend_from_slice(&(data_len as u32).to_le_bytes());
    result.extend_from_slice(&data);
    result
}
```
Bug: Compression function doesn't actually compress frame data
The compress_frame_data function is misleadingly named and doesn't perform any compression. It only appends metadata and prepends a size header. The lz4_flex dependency was added to Cargo.toml but is never imported or used. The frontend's decompressLz4 function works by coincidence because it has a fallback when dataAfterSize === uncompressedSize. This means the claimed "LZ4 compression" feature in the PR description is not actually implemented, resulting in no bandwidth savings despite the added complexity.
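For comparison, an actual LZ4 path using `lz4_flex`'s block API could look like the sketch below. The 12-byte little-endian stride/width/height trailer and the u32 size prefix match what the frontend's fallback check implies, but the function names and layout here are assumptions for illustration, not the PR's code:

```rust
use lz4_flex::block::{compress_prepend_size, decompress_size_prepended};

fn compress_frame_data(pixels: &[u8], stride: u32, width: u32, height: u32) -> Vec<u8> {
    // Trailer layout assumed from the frontend's 12-byte metadata read.
    let mut payload = Vec::with_capacity(pixels.len() + 12);
    payload.extend_from_slice(pixels);
    payload.extend_from_slice(&stride.to_le_bytes());
    payload.extend_from_slice(&width.to_le_bytes());
    payload.extend_from_slice(&height.to_le_bytes());
    // Prepends the uncompressed size as a little-endian u32, then the LZ4 block.
    compress_prepend_size(&payload)
}

fn decompress_frame_data(packed: &[u8]) -> Option<(Vec<u8>, u32, u32, u32)> {
    let payload = decompress_size_prepended(packed).ok()?;
    let split = payload.len().checked_sub(12)?;
    let (pixels, meta) = payload.split_at(split);
    let read_u32 = |o: usize| u32::from_le_bytes(meta[o..o + 4].try_into().unwrap());
    Some((pixels.to_vec(), read_u32(0), read_u32(4), read_u32(8)))
}

fn main() {
    let packed = compress_frame_data(&[0u8; 64], 16, 4, 4);
    let (pixels, stride, w, h) = decompress_frame_data(&packed).unwrap();
    assert_eq!((pixels.len(), stride, w, h), (64, 16, 4, 4));
    println!("round-trip ok, packed {} bytes", packed.len());
}
```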
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/rendering/src/decoder/avassetreader.rs (1)
404-407: Remove commented-out code. As per coding guidelines, comments are not permitted. This dead code should be removed.
```diff
 if let Some((sender, last_sent_frame)) =
     last_sent_frame.and_then(|l| Some((sender.take()?, l)))
 {
-    // info!(
-    //     "sending previous frame {} for {requested_frame}",
-    //     last_sent_frame.0
-    // );
     (sender)(last_sent_frame);
```
♻️ Duplicate comments (2)
crates/rendering/src/decoder/avassetreader.rs (1)
222-228: Incomplete metrics tracking infrastructure. These metrics variables (`_cache_hits`, `_cache_misses`, `_total_requests`, etc.) are declared and incremented but never emitted. The timer-reset block at lines 280-283 resets `last_metrics_log` but doesn't log the accumulated stats. Either complete the periodic metrics emission or remove the unused variables.

crates/rendering/src/frame_pipeline.rs (1)
408-427: Remove hardcoded debug log paths and comments. The hardcoded path `/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log` and the `// #region agent log` comments must be removed before merging. As per coding guidelines, comments are not permitted, and hardcoded user-specific paths will fail on other systems.
🧹 Nitpick comments (2)
crates/rendering/src/frame_pipeline.rs (1)
264-289: Consider updating pipelined_readback size when textures resize. `update_texture_size` recreates the double-buffered textures but doesn't proactively resize the `pipelined_readback` buffers. While `submit_readback` will call `ensure_size`, proactively resizing here would maintain consistency and clear any pending readback with stale dimensions.

```diff
 pub fn update_texture_size(&mut self, device: &wgpu::Device, width: u32, height: u32) {
     let make_texture = || {
         device.create_texture(&wgpu::TextureDescriptor {
             // ... existing code ...
         })
     };
     self.textures = (make_texture(), make_texture());
     self.texture_views = (
         self.textures.0.create_view(&Default::default()),
         self.textures.1.create_view(&Default::default()),
     );
+
+    let padded = padded_bytes_per_row((width, height));
+    let new_buffer_size = (padded * height) as u64;
+    self.pipelined_readback.ensure_size(device, new_buffer_size);
 }
```

crates/recording/src/recovery.rs (1)

522-566: Refactor improves readability. The extraction of `segment_base` makes path construction clearer. The changes work correctly.

Optional: Consider inlining `segment_index` for consistency. The `segment_index` variable at line 522 is only used once, on line 523. For consistency with line 598 (which uses `i` directly), you could simplify:

```diff
- let segment_index = seg.index;
- let segment_base = format!("content/segments/segment-{segment_index}");
+ let segment_base = format!("content/segments/segment-{}", seg.index);
```

This reduces an intermediate variable and matches the pattern used in `create_project_config`.
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
apps/desktop/src-tauri/src/captions.rs(31 hunks)apps/desktop/src-tauri/src/lib.rs(1 hunks)apps/desktop/src-tauri/src/screenshot_editor.rs(4 hunks)apps/desktop/src-tauri/src/update_project_names.rs(5 hunks)apps/desktop/src-tauri/src/upload.rs(1 hunks)crates/recording/src/recovery.rs(5 hunks)crates/rendering/src/decoder/avassetreader.rs(9 hunks)crates/rendering/src/frame_pipeline.rs(2 hunks)crates/utils/src/lib.rs(1 hunks)
✅ Files skipped from review due to trivial changes (1)
- crates/utils/src/lib.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/desktop/src-tauri/src/screenshot_editor.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Use `rustfmt` and workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never use `dbg!()` macro in Rust code; use proper logging instead (Clippy: `dbg_macro` = deny)
Always handle `Result`/`Option` or types marked `#[must_use]`; never ignore them (Rust compiler lint: `unused_must_use` = deny)
Never write `let _ = async_fn()` which silently drops futures; await or explicitly handle them (Clippy: `let_underscore_future` = deny)
Use `saturating_sub` instead of `-` for `Duration` to avoid panics (Clippy: `unchecked_duration_subtraction` = deny)
Merge nested `if` statements: use `if a && b { }` instead of `if a { if b { } }` (Clippy: `collapsible_if` = deny)
Don't call `.clone()` on `Copy` types; just copy them directly (Clippy: `clone_on_copy` = deny)
U...
Files:
apps/desktop/src-tauri/src/lib.rs, apps/desktop/src-tauri/src/upload.rs, apps/desktop/src-tauri/src/captions.rs, crates/recording/src/recovery.rs, crates/rendering/src/decoder/avassetreader.rs, apps/desktop/src-tauri/src/update_project_names.rs, crates/rendering/src/frame_pipeline.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (`//`, `/* */`, `///`, `//!`, `#`, etc.); code must be self-explanatory through naming, types, and structure
Files:
apps/desktop/src-tauri/src/lib.rs, apps/desktop/src-tauri/src/upload.rs, apps/desktop/src-tauri/src/captions.rs, crates/recording/src/recovery.rs, crates/rendering/src/decoder/avassetreader.rs, apps/desktop/src-tauri/src/update_project_names.rs, crates/rendering/src/frame_pipeline.rs
🧠 Learnings (3)
📚 Learning: 2025-12-07T14:29:19.165Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.165Z
Learning: Applies to **/*.rs : Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Applied to files:
apps/desktop/src-tauri/src/captions.rs, crates/rendering/src/frame_pipeline.rs
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/rendering/src/decoder/avassetreader.rs
📚 Learning: 2025-12-07T14:29:40.721Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.721Z
Learning: Applies to **/*.rs : Never use `dbg!()` macro in Rust code; use proper logging instead (Clippy: `dbg_macro` = deny)
Applied to files:
crates/rendering/src/frame_pipeline.rs
🧬 Code graph analysis (2)
crates/recording/src/recovery.rs (2)
crates/project/src/meta.rs (1)
camera_path (211-219)

apps/desktop/src/utils/tauri.ts (2)

VideoMeta (504-504)
AudioMeta (362-362)
apps/desktop/src-tauri/src/update_project_names.rs (1)
crates/utils/src/lib.rs (1)
ensure_unique_filename(52-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Clippy (aarch64-apple-darwin, macos-latest)
- GitHub Check: Cursor Bugbot
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (rust)
🔇 Additional comments (10)
apps/desktop/src-tauri/src/lib.rs (1)

487-487: LGTM! Minor readability improvement. The inline variable interpolation `{attempts}` and `{e}` is functionally equivalent to the previous positional placeholders and improves readability by making it clear which variable corresponds to each placeholder.

apps/desktop/src-tauri/src/update_project_names.rs (1)

20-20: LGTM! Consistent formatting improvements. The changes replace positional format arguments with named interpolation (e.g., `format!("Failed: {e}")` instead of `format!("Failed: {}", e)`), which improves readability and is modern Rust style. All changes are consistent and introduce no behavioral modifications.

Also applies to: 54-54, 157-157, 162-162, 200-200, 263-263, 271-271, 277-277

apps/desktop/src-tauri/src/upload.rs (1)

232-234: LGTM! Correct variable used for orgId parameter. The code correctly uses `org_id` (bound from the `organization_id` parameter) for the `orgId` query parameter. If this previously used `video_id`, this change fixes a bug where the wrong value would have been sent.

apps/desktop/src-tauri/src/captions.rs (1)

64-64: Consistent log formatting improvements throughout the file. The refactoring from placeholder-based formatting (e.g., `"{:?}", var`) to captured variable interpolation (e.g., `"{var:?}"`) is the idiomatic modern Rust style and improves readability. All log statements follow proper logging practices using `log::*` and `tracing::*` macros.

crates/rendering/src/decoder/avassetreader.rs (3)

145-159: Clean encapsulation of CachedFrame processing. The refactoring to a private tuple struct with `new()` and `data()` methods provides better encapsulation and ensures frames are always processed consistently before caching.

237-284: Well-instrumented cache hit/miss handling. The timing instrumentation and structured logging provide valuable performance insights. The closure-based deferred execution pattern for cache misses is appropriate.

303-330: Comprehensive reset/seek handling with proper instrumentation. The reset logic correctly determines when a decoder reset is needed based on cache bounds, and the detailed logging of reset statistics (duration, cleared vs retained entries) aids debugging. The cache retention logic properly keeps frames within the new bounds.

crates/rendering/src/frame_pipeline.rs (2)

16-80: Solid async buffer wait implementation. The polling loop with `try_recv()` and `device.poll()` is appropriate for wgpu's async mapping model. The `yield_now()` every 10 iterations prevents starving the async runtime. The performance warning at >10ms is helpful for identifying bottlenecks.

379-491: Well-structured pipelined finish_encoder implementation. The pipelining strategy is sound: submit the current frame's readback, wait for the previous frame's result (if any), and prime the pipeline for the first frame. The `expect` at line 442 is safe since `submit_readback` was just called at lines 398-400.

crates/recording/src/recovery.rs (1)

458-463: LGTM! Improved error message formatting. The inline variable formatting (`{display_output:?}`, `{e}`) is more idiomatic and readable than positional arguments.
```rust
pub fn ensure_size(&mut self, device: &wgpu::Device, required_size: u64) {
    if self.buffer_size < required_size {
        let make_buffer = || {
            Arc::new(device.create_buffer(&wgpu::BufferDescriptor {
                label: Some("Pipelined Readback Buffer"),
                size: required_size,
                usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
                mapped_at_creation: false,
            }))
        };

        self.buffers = [make_buffer(), make_buffer(), make_buffer()];
        self.buffer_size = required_size;
        self.current_index = 0;
    }
}
```
Potential stale buffer reference after resize.
When ensure_size recreates all buffers, any existing pending readback still holds an Arc<wgpu::Buffer> reference to an old buffer. While the Arc keeps the buffer alive, the data in it becomes irrelevant since it was sized for a different output. Consider clearing pending when resizing, or document that callers must drain pending before output size changes.
```diff
 pub fn ensure_size(&mut self, device: &wgpu::Device, required_size: u64) {
     if self.buffer_size < required_size {
+        self.pending = None;
+
         let make_buffer = || {
             Arc::new(device.create_buffer(&wgpu::BufferDescriptor {
```

🤖 Prompt for AI Agents
In crates/rendering/src/frame_pipeline.rs around lines 108 to 123, currently
ensure_size recreates self.buffers but leaves any existing self.pending entries
pointing at old Arc<wgpu::Buffer> objects; while those Arcs keep the buffers
alive, their contents no longer match the new size and are stale. When resizing,
clear or reset self.pending (e.g., self.pending.clear() or replace with a fresh
empty collection) before swapping in the new buffers so no pending readbacks
reference old buffers, or alternatively add a documented precondition that
callers must drain pending before calling ensure_size.
```rust
    texture,
    uniforms,
    prime_encoder,
)?;
```
Bug: GPU pipeline returns previous frame instead of current frame
The pipelined GPU readback implementation causes incorrect frame ordering during playback. On the first call, the code correctly returns frame 1, but then "primes" the pipeline by submitting another readback of the same texture (containing stale frame 1 data). On subsequent calls, `finish_encoder` returns the previous pending readback rather than the current frame. This causes: (1) frame 1 to be displayed twice at playback start, and (2) all subsequent frames to be returned one frame late (~33ms delay at 30fps). The `prime_encoder` reads from `texture`, which still contains the previous frame's data rather than the newly rendered frame, causing the callback to receive stale frame data.
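A one-frame lag is inherent to this kind of pipelined readback; what turns it into a correctness bug is returning untagged bytes and priming from a stale texture. Below is a CPU-side toy model (no wgpu; all names invented) of tagging each in-flight readback with its frame index, so a delayed completion can never be mislabeled as the current frame:

```rust
use std::collections::VecDeque;

struct PipelinedReadback {
    pending: VecDeque<(u64, Vec<u8>)>,
    depth: usize,
}

impl PipelinedReadback {
    fn new(depth: usize) -> Self {
        Self { pending: VecDeque::new(), depth }
    }

    // Queue the freshly rendered frame; once the pipeline is full,
    // the oldest completed readback pops out, still labeled with its index.
    fn submit(&mut self, frame_index: u64, rendered: Vec<u8>) -> Option<(u64, Vec<u8>)> {
        self.pending.push_back((frame_index, rendered));
        if self.pending.len() > self.depth {
            self.pending.pop_front()
        } else {
            None
        }
    }

    // Flush whatever is still in flight, e.g. when playback stops.
    fn drain(&mut self) -> Vec<(u64, Vec<u8>)> {
        self.pending.drain(..).collect()
    }
}

fn main() {
    let mut readback = PipelinedReadback::new(1);
    for frame in 0..5u64 {
        let rendered = vec![frame as u8; 4];
        if let Some((index, data)) = readback.submit(frame, rendered) {
            println!("completed frame {index}: {data:?}");
        }
    }
    for (index, data) in readback.drain() {
        println!("flushed frame {index}: {data:?}");
    }
}
```

With explicit tags, the caller can either accept the one-frame latency or drain the queue when exact frame identity matters, for example at playback start.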
Significant improvements to the frame streaming pipeline for the desktop app, focusing on WebSocket frame transmission, compression, and performance monitoring. The main changes include adding frame compression and latency tracking on both the backend (Rust) and frontend (TypeScript), updating dependencies to support LZ4 compression, and enhancing performance logging for debugging and optimization.
Note
Speeds up editor playback and frame streaming by adding pipelined GPU readback, double‑buffered textures, WS frame packing/latency metrics, and LZ4 support with extensive performance logging.
- Pipelined GPU readback (frame_pipeline.rs), replacing ad‑hoc buffer mapping.
- Double‑buffered textures in the display and camera layers (layers/display.rs, layers/camera.rs).
- Performance instrumentation in the core renderer (rendering/lib.rs), editor (editor.rs), playback (playback.rs), and decoder (decoder/avassetreader.rs).
- created_at timestamps added to frames, with stride/size metadata packed into the payload; latency/compression metrics computed and sent with periodic summaries (frame_ws.rs, call sites in editor/screenshot/camera).
- Client-side frame unpacking and drawing changes (apps/desktop/src/utils/socket.ts).
- New deps: lz4_flex (Rust) and lz4js + types (TS).

Written by Cursor Bugbot for commit e62ccca. This will update automatically on new commits. Configure here.
Summary by CodeRabbit
New Features
Improvements
Chores
✏️ Tip: You can customize this high-level summary in your review settings.