
Conversation

@Brendonovich
Contributor

@Brendonovich Brendonovich commented Oct 13, 2025

Summary by CodeRabbit

  • New Features

    • Secure authentication added for Workflow RPC using a generated secret.
    • Upload progress reporting added for Loom video imports.
  • Bug Fixes

    • Loom import now returns clearer 400 errors for invalid videos; improved error mappings.
  • Refactor

    • More resilient S3 credential resolution with fallbacks; improved streaming stability.
  • Chores

    • Removed public AWS env vars from builds/images and CI; updated env scripts and ignore patterns; adjusted deploy settings.
  • Documentation

    • Updated guides to reflect new environment variables and formatting changes.

@coderabbitai
Contributor

coderabbitai bot commented Oct 13, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

Introduces secret-based RPC authentication and corresponding client/server middleware. Refactors Loom video import errors and adds an internal-access check. Adjusts S3 credential resolution with fallbacks and buffering. Prunes public AWS env vars across CI, Docker, config, and schema. Adds an SST-generated WORKFLOWS_RPC_SECRET and updates infrastructure/environment wiring.

Changes

Cohort / File(s) Change Summary
CI and workflow env pruning
\.github/workflows/ci.yml, \.github/workflows/docker-build-web.yml, \.github/workflows/publish.yml
Removed appending of NEXT_PUBLIC_CAP_AWS_BUCKET/REGION to generated .env files; no control-flow changes.
Infra and secrets
infra/sst-env.d.ts, infra/sst.config.ts
Narrowed Resource interface to DATABASE_URL only; added Pulumi types; enabled random provider; generated WORKFLOWS_RPC_SECRET; propagated WORKFLOWS_RPC_URL/SECRET to Vercel and WorkflowCluster; added Runner S3 permissions.
RPC auth client/server
apps/web-cluster/src/runner/index.ts, apps/web/lib/server.ts, packages/web-backend/src/Workflows.ts, packages/web-backend/src/Http/Errors.ts
Added SecretAuth middleware and InvalidRpcAuth error; server validates Authorization against WORKFLOWS_RPC_SECRET; client injects Authorization header; updated error mapping to handle InvalidRpcAuth.
Loom import flow and errors
packages/web-domain/src/Loom.ts, packages/web-backend/src/Loom/ImportVideo.ts, packages/web-backend/src/Loom/Http.ts
Replaced VideoURLInvalidError with VideoInvalidError (400; causes: NotFound, InvalidContentLength); added internal-access-only check by email domain; improved content-length handling and added periodic upload progress logging.
S3 credentials and client
packages/web-backend/src/S3Buckets/index.ts, packages/web-backend/package.json
Added @aws-sdk/credential-providers; introduced fallback to fromSSO or fromContainerMetadata; set requestStreamBufferSize; streamlined credential resolution.
Environment/config cleanup
packages/env/build.ts, scripts/env-cli.js, apps/web/Dockerfile, .npmrc, .dockerignore, CLAUDE.md
Removed public CAP AWS env keys; generated WORKFLOWS_RPC_SECRET in env CLI; dropped NEXT_PUBLIC_CAP_AWS_* from Dockerfile and docs; added npm force-legacy-deploy; broadened .dockerignore patterns and added **/.next.
Minor module path updates
packages/web-backend/src/Spaces/index.ts, packages/web-domain/src/S3Bucket.ts
Adjusted imports to include .ts suffixes; no logic changes.
Logging tweak
crates/recording/src/output_pipeline/core.rs
Pretty-printed task error logs with {:#}.
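
The SecretAuth middleware summarized above reduces to a bearer-secret comparison. A minimal, self-contained sketch of that check — using a hypothetical isAuthorized helper, not the actual Cap middleware — which hashes both sides so timingSafeEqual always receives equal-length buffers:

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Hypothetical helper, not Cap's real implementation: hash both values so the
// buffers passed to timingSafeEqual have equal length, then compare in
// constant time to avoid leaking the secret through timing differences.
function isAuthorized(header: string | undefined, secret: string): boolean {
	const provided = createHash("sha256").update(header ?? "").digest();
	const expected = createHash("sha256").update(secret).digest();
	return timingSafeEqual(provided, expected);
}

console.log(isAuthorized("s3cret", "s3cret")); // true
console.log(isAuthorized(undefined, "s3cret")); // false
```

A plain `header === secret` comparison would also work functionally, but short-circuits on the first differing byte, which is why the review below recommends the digest-then-timingSafeEqual pattern.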

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant WebApp as Web (apps/web)
  participant RpcClient as RPC Client Middleware
  participant Runner as RPC Server (runner)
  participant AuthMW as SecretAuthMiddleware
  participant Errors as Http Errors

  User->>WebApp: Initiates workflow RPC
  WebApp->>RpcClient: Build RPC request
  RpcClient->>RpcClient: Inject Authorization: Bearer <redacted secret>
  RpcClient->>Runner: RPC call with headers
  Runner->>AuthMW: Validate Authorization vs WORKFLOWS_RPC_SECRET
  alt valid
    AuthMW-->>Runner: ok
    Runner-->>RpcClient: RPC response
    RpcClient-->>WebApp: Result
  else invalid/missing
    AuthMW-->>Runner: raise InvalidRpcAuth
    Runner->>Errors: Map InvalidRpcAuth
    Errors-->>RpcClient: HTTP 500 (unknown cause)
    RpcClient-->>WebApp: Error
  end
sequenceDiagram
  autonumber
  actor User
  participant HTTP as Backend HTTP (Loom/Http)
  participant Import as Loom ImportVideo
  participant Loom as Loom API
  participant S3 as S3 Client

  User->>HTTP: POST /loom/importVideo
  HTTP->>HTTP: Get CurrentUser
  alt email not ending with "@cap.so"
    HTTP-->>User: Terminate (internal access only)
  else allowed
    HTTP->>Import: Execute import flow
    Import->>Loom: Verify URL and metadata
    alt NotFound/InvalidContentLength
      Import-->>HTTP: VideoInvalidError (400)
      HTTP-->>User: Error
    else OK
      Import->>S3: Stream upload with periodic progress logs
      S3-->>Import: Success
      Import-->>HTTP: Done
      HTTP-->>User: Success
    end
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

A secret hops between the nodes, so fleet,
Headers whisper, workflows meet.
Looms unwind with stricter sight,
Bytes march to S3 through the night.
Env winds trimmed, the sails set tight—
Thump-thump goes my rabbit heart: ship right! 🐇🚀

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Title Check ⚠️ Warning The title “Staging cluster deployment” is too generic and does not summarize the primary changes in this pull request, which include introducing secret-based RPC authentication middleware, removing public AWS environment variables, and updating deployment configuration across multiple services. It fails to capture the scope and intent of the code and infrastructure changes. Please rename the pull request to clearly reflect its main changes, for example “Configure WORKFLOWS_RPC_SECRET and add RPC authentication middleware,” so that the title concisely communicates the core additions and infrastructure updates.
Docstring Coverage ⚠️ Warning Docstring coverage is 20.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ebeb56 and 21f5b0f.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (4)
  • infra/sst.config.ts (6 hunks)
  • packages/env/build.ts (0 hunks)
  • packages/web-backend/src/Loom/Http.ts (2 hunks)
  • scripts/env-cli.js (1 hunks)


@Brendonovich Brendonovich marked this pull request as draft October 13, 2025 19:02
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (5)
packages/web-backend/package.json (1)

14-18: Align AWS SDK v3 versions to avoid duplication and type skew

Mixing 3.485.0/3.821.0 with 3.908.0 can pull multiple copies of the SDK into the bundle and cause type skew between packages. Recommend pinning all @aws-sdk/* to the same minor.

Apply:

-    "@aws-sdk/client-s3": "^3.485.0",
-    "@aws-sdk/cloudfront-signer": "^3.821.0",
-    "@aws-sdk/credential-providers": "^3.908.0",
-    "@aws-sdk/s3-presigned-post": "^3.485.0",
-    "@aws-sdk/s3-request-presigner": "^3.485.0",
+    "@aws-sdk/client-s3": "^3.908.0",
+    "@aws-sdk/cloudfront-signer": "^3.908.0",
+    "@aws-sdk/credential-providers": "^3.908.0",
+    "@aws-sdk/s3-presigned-post": "^3.908.0",
+    "@aws-sdk/s3-request-presigner": "^3.908.0",
infra/sst.config.ts (2)

3-4: Minor: unused type imports

If not used, consider removing to keep lint clean.


156-161: Vercel env wiring for RPC URL/secret

Looks good. Consider explicitly setting type: "encrypted" for the secret if the provider supports it.

packages/web-backend/src/S3Buckets/index.ts (1)

3-3: Add missing import only if used

The fromContainerMetadata/fromSSO additions are fine. If switching to a unified chain later, consider fromNodeProviderChain to cover env, SSO, and IMDS in one resolver.

apps/web-cluster/src/runner/index.ts (1)

58-79: Use constant-time comparison for the auth secret

Avoid timing attacks by comparing digests with timingSafeEqual.

Within the middleware body:

-            const authHeader = Headers.get(options.headers, "authorization");
-            if (Option.isNone(authHeader) || authHeader.value !== authSecret) {
-              if (Option.isNone(authHeader))
-                yield* Effect.log("No auth header provided");
-
-              return yield* new Workflows.InvalidRpcAuth();
-            }
-
-            return yield* options.next;
+            const authHeader = Headers.get(options.headers, "authorization");
+            const provided = Option.isNone(authHeader) ? "" : authHeader.value;
+            const headerDigest = createHash("sha256").update(provided).digest();
+            const secretDigest = createHash("sha256").update(authSecret).digest();
+            if (!timingSafeEqual(headerDigest, secretDigest)) {
+              if (Option.isNone(authHeader)) yield* Effect.log("No auth header provided");
+              return yield* new Workflows.InvalidRpcAuth();
+            }
+            return yield* options.next;

Add import:

import { createHash, timingSafeEqual } from "node:crypto";
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2290d23 and 7ebeb56.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (23)
  • .dockerignore (1 hunks)
  • .github/workflows/ci.yml (0 hunks)
  • .github/workflows/docker-build-web.yml (0 hunks)
  • .github/workflows/publish.yml (0 hunks)
  • .npmrc (1 hunks)
  • CLAUDE.md (3 hunks)
  • apps/web-cluster/src/runner/index.ts (4 hunks)
  • apps/web/Dockerfile (0 hunks)
  • apps/web/lib/server.ts (6 hunks)
  • crates/recording/src/output_pipeline/core.rs (1 hunks)
  • infra/sst-env.d.ts (1 hunks)
  • infra/sst.config.ts (6 hunks)
  • packages/env/build.ts (0 hunks)
  • packages/web-backend/package.json (1 hunks)
  • packages/web-backend/src/Http/Errors.ts (2 hunks)
  • packages/web-backend/src/Loom/ImportVideo.ts (6 hunks)
  • packages/web-backend/src/S3Buckets/index.ts (3 hunks)
  • packages/web-backend/src/Spaces/index.ts (1 hunks)
  • packages/web-backend/src/Workflows.ts (1 hunks)
  • packages/web-domain/src/Loom.ts (3 hunks)
  • packages/web-domain/src/S3Bucket.ts (1 hunks)
  • scripts/env-cli.js (1 hunks)
  • scripts/setup.js (1 hunks)
💤 Files with no reviewable changes (5)
  • apps/web/Dockerfile
  • .github/workflows/publish.yml
  • .github/workflows/docker-build-web.yml
  • packages/env/build.ts
  • .github/workflows/ci.yml
🧰 Additional context used
📓 Path-based instructions (8)
crates/**/src/**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

For desktop IPC, use tauri_specta derive/macros on Rust types/events; do not hand-roll bindings

Files:

  • crates/recording/src/output_pipeline/core.rs
**/*.{ts,tsx,js,jsx,rs}

📄 CodeRabbit inference engine (CLAUDE.md)

Do not add inline, block, or docstring comments in any language; code must be self-explanatory

Files:

  • crates/recording/src/output_pipeline/core.rs
  • packages/web-domain/src/S3Bucket.ts
  • packages/web-backend/src/Loom/ImportVideo.ts
  • packages/web-backend/src/Http/Errors.ts
  • scripts/setup.js
  • packages/web-backend/src/Spaces/index.ts
  • scripts/env-cli.js
  • apps/web-cluster/src/runner/index.ts
  • infra/sst.config.ts
  • packages/web-backend/src/Workflows.ts
  • infra/sst-env.d.ts
  • packages/web-domain/src/Loom.ts
  • packages/web-backend/src/S3Buckets/index.ts
  • apps/web/lib/server.ts
**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/*.rs: Format Rust code using rustfmt and ensure all Rust code passes workspace-level clippy lints.
Rust modules should be named with snake_case, and crate directories should be in kebab-case.

Files:

  • crates/recording/src/output_pipeline/core.rs
crates/*/src/**/*

📄 CodeRabbit inference engine (AGENTS.md)

Rust crates should place tests within the src/ and/or a sibling tests/ directory for each crate inside crates/*.

Files:

  • crates/recording/src/output_pipeline/core.rs
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use strict TypeScript and avoid any; leverage shared types from packages

**/*.{ts,tsx}: Use a 2-space indent for TypeScript code.
Use Biome for formatting and linting TypeScript/JavaScript files by running pnpm format.

Files:

  • packages/web-domain/src/S3Bucket.ts
  • packages/web-backend/src/Loom/ImportVideo.ts
  • packages/web-backend/src/Http/Errors.ts
  • packages/web-backend/src/Spaces/index.ts
  • apps/web-cluster/src/runner/index.ts
  • infra/sst.config.ts
  • packages/web-backend/src/Workflows.ts
  • infra/sst-env.d.ts
  • packages/web-domain/src/Loom.ts
  • packages/web-backend/src/S3Buckets/index.ts
  • apps/web/lib/server.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx}: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., user-menu.tsx).
Use PascalCase for React/Solid components.

Files:

  • packages/web-domain/src/S3Bucket.ts
  • packages/web-backend/src/Loom/ImportVideo.ts
  • packages/web-backend/src/Http/Errors.ts
  • scripts/setup.js
  • packages/web-backend/src/Spaces/index.ts
  • scripts/env-cli.js
  • apps/web-cluster/src/runner/index.ts
  • infra/sst.config.ts
  • packages/web-backend/src/Workflows.ts
  • infra/sst-env.d.ts
  • packages/web-domain/src/Loom.ts
  • packages/web-backend/src/S3Buckets/index.ts
  • apps/web/lib/server.ts
apps/web/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for all client-side server state and data fetching in the web app
Web mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData rather than broad invalidations
Client code should use useEffectQuery/useEffectMutation and useRpcClient from apps/web/lib/EffectRuntime.ts; do not create ManagedRuntime inside components

Files:

  • apps/web/lib/server.ts
apps/web/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

On the client, always use useEffectQuery or useEffectMutation from @/lib/EffectRuntime; never call EffectRuntime.run* directly in components.

Files:

  • apps/web/lib/server.ts
🧬 Code graph analysis (4)
packages/web-backend/src/Loom/ImportVideo.ts (3)
packages/web-domain/src/Loom.ts (2)
  • VideoInvalidError (25-29)
  • ExternalLoomError (19-23)
packages/web-domain/src/S3Bucket.ts (1)
  • S3Error (21-23)
packages/web-domain/src/Database.ts (1)
  • DatabaseError (3-6)
packages/web-backend/src/Http/Errors.ts (1)
packages/web-backend/src/Workflows.ts (1)
  • InvalidRpcAuth (9-12)
packages/web-backend/src/Workflows.ts (1)
apps/web/lib/Rpcs.ts (1)
  • Rpc (11-14)
apps/web/lib/server.ts (3)
packages/web-backend/src/Workflows.ts (1)
  • Workflows (18-18)
packages/web-backend/src/Database.ts (1)
  • Database (5-15)
apps/web/lib/tracing.ts (1)
  • layerTracer (5-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
  • GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
  • GitHub Check: Analyze (rust)
🔇 Additional comments (13)
crates/recording/src/output_pipeline/core.rs (1)

131-131: LGTM: Improved error visibility.

Using {:#} prints the full error chain, which is helpful for debugging task failures by exposing root causes.

scripts/setup.js (1)

221-239: Bug fix: Correctly handle nested .dylib files.

The change from entry.path to entry.parentPath fixes signing of nested libraries when using recursive: true.
Verify that the project’s Node.js version (engines field, .nvmrc/.node-version, CI workflows) is ≥20.12.0 to ensure parentPath is supported.
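
A self-contained sketch (hypothetical paths, not the repo's signing script) of why parentPath matters with recursive: true:

```typescript
import { mkdirSync, mkdtempSync, readdirSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical layout: a .dylib nested one directory below the scan root.
const root = mkdtempSync(join(tmpdir(), "dylib-"));
mkdirSync(join(root, "nested"));
writeFileSync(join(root, "nested", "lib.dylib"), "");

const entries = readdirSync(root, { recursive: true, withFileTypes: true });
const dylib = entries.find((e) => e.isFile() && e.name.endsWith(".dylib"));
if (!dylib) throw new Error("no dylib found");

// join(root, dylib.name) would drop the "nested" segment; parentPath
// (Node >= 20.12) points at the entry's containing directory instead.
console.log(join(dylib.parentPath, dylib.name));
```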

packages/web-backend/src/Spaces/index.ts (1)

6-8: Ensure allowImportingTsExtensions is enabled
Only apps/desktop/tsconfig.json currently sets "allowImportingTsExtensions": true. Confirm that the TS config used to compile packages/web-backend (root or its own tsconfig.json) also includes this setting; otherwise imports with .ts extensions will fail (TS5090).
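
For reference, a minimal compiler-options fragment (assumed shape, not the repo's actual tsconfig). Note that TypeScript only permits allowImportingTsExtensions alongside noEmit or emitDeclarationOnly:

```json
{
	"compilerOptions": {
		"allowImportingTsExtensions": true,
		"noEmit": true
	}
}
```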

packages/web-domain/src/S3Bucket.ts (1)

2-2: Import path normalization looks good

Switching to "./User.ts" matches local extension style used elsewhere.

CLAUDE.md (1)

270-275: Docs update aligns with env changes

Removal of NEXT_PUBLIC_CAP_AWS_* and minor formatting look consistent.

Also applies to: 394-396

infra/sst.config.ts (3)

36-36: Enabling random provider is correct for RandomString

LGTM.


188-190: RandomString for WORKFLOWS_RPC_SECRET

Good addition and length.


305-305: Propagating WORKFLOWS_RPC_SECRET into service envs

LGTM.

packages/web-backend/src/Http/Errors.ts (2)

13-14: S3 error cause mapping fix is correct

Changing cause from "database" to "s3" aligns with the error source.


15-18: Map InvalidRpcAuth to HTTP 401 Unauthorized
Auth failures should return 401 Unauthorized, not 500 Internal Server Error.

-    () => new Http.InternalServerError({ cause: "unknown" }),
+    () => new Http.Unauthorized({ cause: "invalid-rpc-auth" }),

Verify that Http.Unauthorized is exported (or use the appropriate auth‐error class such as Http.Forbidden).

apps/web/lib/server.ts (1)

81-97: Client-side RPC auth injection looks correct

Header injection via SecretAuthMiddleware is wired properly with a redacted secret.

packages/web-domain/src/Loom.ts (1)

25-29: Rename complete with no stale references
VideoURLInvalidError is not used anywhere in the codebase.

packages/web-backend/src/Workflows.ts (1)

9-17: RPC auth middleware wiring confirmed SecretAuthMiddleware is correctly applied to RpcClient.make(Workflows.RpcGroup).

@@ -1 +1,2 @@
auto-install-peers = true
force-legacy-deploy = true

⚠️ Potential issue | 🟠 Major

force-legacy-deploy is not a supported npm/pnpm config.

Neither npm nor pnpm exposes a force-legacy-deploy setting, so this line will just emit warnings at install time and won’t change behavior. If you meant to opt into the legacy peer dependency resolver, use legacy-peer-deps = true; otherwise please clarify the intended knob.
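
If the legacy peer-dependency resolver was indeed the intent, the corrected .npmrc would read:

```ini
auto-install-peers = true
legacy-peer-deps = true
```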

🤖 Prompt for AI Agents
In .npmrc around line 2, the config key "force-legacy-deploy = true" is invalid
for npm/pnpm and will produce warnings; either remove this line if it was
accidental, or if you intended to opt into the legacy peer dependency resolver
replace it with the supported key "legacy-peer-deps = true". Ensure you commit
the corrected .npmrc and run an install to verify no warnings remain.

Layer.provide(
HttpApiBuilder.middleware(Effect.provide(CookiePasswordAttachmentLive)),
),
Layer.provide(layerTracer),

⚠️ Potential issue | 🟡 Minor

Remove duplicate tracer provision

layerTracer is already part of Dependencies; providing it again is redundant.

Apply:

-    Layer.provide(layerTracer),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Layer.provide(layerTracer),
🤖 Prompt for AI Agents
In apps/web/lib/server.ts around line 169, the Layer.provide(layerTracer) entry
is redundant because layerTracer is already included in the Dependencies set;
remove this duplicate provision (delete or omit the Layer.provide(layerTracer)
line) so the tracer is provided only once via Dependencies and keep the
remaining Layer.provide calls unchanged.

Comment on lines +420 to +425
permissions: [
{
actions: ["s3:*"],
resources: [bucket.arn, $interpolate`${bucket.arn}/*`],
},
],

⚠️ Potential issue | 🟠 Major

Overbroad S3 permissions; tighten to least privilege

s3:* on bucket and objects is excessive. Restrict to the specific actions needed (ListBucket on bucket; GetObject/PutObject/DeleteObject/CreateMultipartUpload/AbortMultipartUpload/ListMultipartUploadParts on objects).

Apply:

-    permissions: [
-      {
-        actions: ["s3:*"],
-        resources: [bucket.arn, $interpolate`${bucket.arn}/*`],
-      },
-    ],
+    permissions: [
+      {
+        actions: ["s3:ListBucket"],
+        resources: [bucket.arn],
+      },
+      {
+        actions: [
+          "s3:GetObject",
+          "s3:PutObject",
+          "s3:DeleteObject",
+          "s3:CreateMultipartUpload",
+          "s3:AbortMultipartUpload",
+          "s3:ListMultipartUploadParts"
+        ],
+        resources: [$interpolate`${bucket.arn}/*`],
+      },
+    ],
🤖 Prompt for AI Agents
In infra/sst.config.ts around lines 420 to 425, the IAM statement grants overly
broad "s3:*" on the bucket and objects; replace this with least-privilege
actions by granting "s3:ListBucket" on the bucket ARN and on the object ARNs
grant only "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
"s3:CreateMultipartUpload", "s3:AbortMultipartUpload", and
"s3:ListMultipartUploadParts"; ensure resources array separates the bucket ARN
for ListBucket and the interpolated `${bucket.arn}/*` for object actions and
remove "s3:*".

Comment on lines +121 to 130
const contentLength = yield* Headers.get(
resp.headers,
"content-length",
).pipe(
Option.map((v) => Number(v)),
Option.getOrUndefined,
Effect.catchTag(
"NoSuchElementException",
() => new Loom.VideoInvalidError({ cause: "InvalidContentLength" }),
),
);

⚠️ Potential issue | 🔴 Critical

Fix content-length header parsing

Headers.get returns an Effect, not an Option. Running it through Option.map means we call the Option helpers on an Effect value, which throws before we can emit a VideoInvalidError. The handler passed to Effect.catchTag also needs to return an Effect.fail so the workflow actually fails with the right error. Please rewrite this block to stay in the Effect API, e.g.:

-const contentLength = yield* Headers.get(
-  resp.headers,
-  "content-length",
-).pipe(
-  Option.map((v) => Number(v)),
-  Effect.catchTag(
-    "NoSuchElementException",
-    () => new Loom.VideoInvalidError({ cause: "InvalidContentLength" }),
-  ),
-);
+const contentLength = yield* Headers.get(resp.headers, "content-length").pipe(
+  Effect.map((value) => Number(value)),
+  Effect.tap((n) =>
+    Number.isFinite(n)
+      ? Effect.unit
+      : Effect.fail(
+          new Loom.VideoInvalidError({ cause: "InvalidContentLength" }),
+        ),
+  ),
+  Effect.catchTag("NoSuchElementException", () =>
+    Effect.fail(new Loom.VideoInvalidError({ cause: "InvalidContentLength" })),
+  ),
+);

This keeps everything in Effect, guarantees we surface the user-facing error, and avoids a runtime type error before the retry logic can kick in.

🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 121–130, the code
treats Headers.get as if it returned an Option (using Option.map) but
Headers.get returns an Effect, so replace Option helpers with Effect
equivalents: call Effect.map on the result of Headers.get to coerce the header
to Number, and change the Effect.catchTag handler to return an Effect.fail with
new Loom.VideoInvalidError({ cause: "InvalidContentLength" }) so the overall
expression remains an Effect that fails with the proper user-facing error
instead of throwing a runtime type error.

Comment on lines +28 to +54
credentials: yield* Config.option(
Config.all([
Config.string("CAP_AWS_ACCESS_KEY"),
Config.string("CAP_AWS_SECRET_KEY"),
]).pipe(
Config.map(([accessKeyId, secretAccessKey]) => ({
accessKeyId,
secretAccessKey,
})),
Config.orElse(() =>
Config.string("VERCEL_AWS_ROLE_ARN").pipe(
Config.map((arn) => awsCredentialsProvider({ roleArn: arn })),
),
),
),
).pipe(
Effect.flatMap(
Effect.catchTag("NoSuchElementException", () =>
Effect.succeed(
process.env.NODE_ENV === "development"
? fromSSO({ profile: process.env.AWS_DEFAULT_PROFILE })
: fromContainerMetadata(),
),
),
),
),


⚠️ Potential issue | 🔴 Critical

Credential fallback logic is incorrect (Effect/Option misuse); fallback never runs

Using Effect.flatMap with Effect.catchTag on an Option value won’t catch “None”, and the types don’t line up. Provide a proper Option fallback.

Apply:

-      credentials: yield* Config.option(
-        Config.all([
-          Config.string("CAP_AWS_ACCESS_KEY"),
-          Config.string("CAP_AWS_SECRET_KEY"),
-        ]).pipe(
-          Config.map(([accessKeyId, secretAccessKey]) => ({
-            accessKeyId,
-            secretAccessKey,
-          })),
-          Config.orElse(() =>
-            Config.string("VERCEL_AWS_ROLE_ARN").pipe(
-              Config.map((arn) => awsCredentialsProvider({ roleArn: arn })),
-            ),
-          ),
-        ),
-      ).pipe(
-        Effect.flatMap(
-          Effect.catchTag("NoSuchElementException", () =>
-            Effect.succeed(
-              process.env.NODE_ENV === "development"
-                ? fromSSO({ profile: process.env.AWS_DEFAULT_PROFILE })
-                : fromContainerMetadata(),
-            ),
-          ),
-        ),
-      ),
+      credentials: (yield* Config.option(
+        Config.all([
+          Config.string("CAP_AWS_ACCESS_KEY"),
+          Config.string("CAP_AWS_SECRET_KEY"),
+        ]).pipe(
+          Config.map(([accessKeyId, secretAccessKey]) => ({
+            accessKeyId,
+            secretAccessKey,
+          })),
+          Config.orElse(() =>
+            Config.string("VERCEL_AWS_ROLE_ARN").pipe(
+              Config.map((arn) => awsCredentialsProvider({ roleArn: arn })),
+            ),
+          ),
+        ),
+      )).pipe(
+        Option.getOrElse(() =>
+          process.env.NODE_ENV === "development"
+            ? fromSSO({ profile: process.env.AWS_DEFAULT_PROFILE })
+            : fromContainerMetadata(),
+        ),
+      ),

This returns either explicit creds/provider from env/role or the SSO/IMDS provider. As per coding guidelines

🤖 Prompt for AI Agents
In packages/web-backend/src/S3Buckets/index.ts around lines 28–54 the current
code tries to handle a missing Option by using Effect.flatMap with
Effect.catchTag on a Config.option result, which never catches a None and yields
a type mismatch; replace this Effect-level error handling with an
Option/Config-level fallback: remove the Effect.flatMap/Effect.catchTag block
and use Config.orElse to provide a fallback Config that returns the same
credential type (either explicit creds/provider) by selecting fromSSO (dev) or
fromContainerMetadata (prod); if fromSSO/fromContainerMetadata are effects, wrap
them to match Config's expected return (e.g. use
Config.fromEffect/Config.succeed with Effect.sync) so the types align and the
fallback runs when env vars are absent.

credentials: defaultConfigs.credentials,
forcePathStyle: defaultConfigs.forcePathStyle,
requestStreamBufferSize: 16 * 1024,
});

⚠️ Potential issue | 🔴 Critical

S3ClientConfig: requestStreamBufferSize is not a valid option

S3Client doesn’t accept requestStreamBufferSize; this will fail type-checking. Remove or set options on the request handler instead (e.g., NodeHttpHandler timeouts) or tune upload via @aws-sdk/lib-storage.

Apply:

-        requestStreamBufferSize: 16 * 1024,
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        requestStreamBufferSize: 16 * 1024,
        });
🤖 Prompt for AI Agents
In packages/web-backend/src/S3Buckets/index.ts around line 71, the
S3ClientConfig is being given requestStreamBufferSize which is not a valid
option and will fail TypeScript checks; remove requestStreamBufferSize from the
S3Client configuration, and if you need to tune buffering or timeouts, configure
the request handler (for example pass a configured NodeHttpHandler with
timeout/connection options) or adjust upload behavior using @aws-sdk/lib-storage
for multipart uploads instead.
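As a rough sketch of the suggested alternative (the timeout values and region are illustrative stand-ins, not taken from this PR; `@smithy/node-http-handler` ships alongside the v3 SDK):

```typescript
import { S3Client } from "@aws-sdk/client-s3";
import { NodeHttpHandler } from "@smithy/node-http-handler";

// Buffering/timeout tuning lives on the request handler, not on
// S3ClientConfig itself. Both timeout values below are illustrative.
const client = new S3Client({
  region: "us-east-1", // stand-in for the resolved CAP_AWS_REGION
  requestHandler: new NodeHttpHandler({
    connectionTimeout: 5_000, // ms allowed to establish the connection
    socketTimeout: 120_000, // ms of socket inactivity before aborting
  }),
});
```

This keeps the client config type-correct while still giving long streaming uploads a generous idle-socket window.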

envs.VITE_SERVER_URL = "http://localhost:3000";
envs.WEB_URL = "http://localhost:3000";
envs.NEXTAUTH_URL = envs.WEB_URL;
envs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Persist the RPC secret and restore missing CAP_AWS_REGION (breaks S3 defaults)

  • WORKFLOWS_RPC_SECRET is regenerated every run; persist like NEXTAUTH_SECRET/DB key.
  • CAP_AWS_REGION is no longer prompted in the non‑Docker flow, but backend requires it (Config.string("CAP_AWS_REGION")). This will fail at runtime.

Apply:

-    envs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
+    if (!allEnvs.WORKFLOWS_RPC_SECRET) {
+      allEnvs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
+      log.info("Generated WORKFLOWS_RPC_SECRET");
+    }
+    envs.WORKFLOWS_RPC_SECRET = allEnvs.WORKFLOWS_RPC_SECRET;

Additionally (outside this hunk):

  • Add CAP_AWS_REGION to the S3 prompt group when not using Docker.

Example:

CAP_AWS_REGION: () =>
  text({
    message: "CAP_AWS_REGION",
    placeholder: allEnvs.CAP_AWS_REGION ?? "us-east-1",
    defaultValue: allEnvs.CAP_AWS_REGION ?? "us-east-1",
  }),
  • Fix placeholders to use CAP_AWS_ENDPOINT instead of the removed NEXT_PUBLIC_CAP_AWS_ENDPOINT:
defaultValue: allEnvs.CAP_AWS_ENDPOINT,
placeholder: allEnvs.CAP_AWS_ENDPOINT,

Confirm that local dev without Docker now sets CAP_AWS_REGION and that .env/target profile keep a stable WORKFLOWS_RPC_SECRET across runs. Based on learnings


🏁 Script executed:

#!/bin/bash
set -e
# Locate env-cli.js
file=$(fd env-cli.js)
echo "Found file: $file"
echo "--- Lines 1-80 of $file ---"
sed -n '1,80p' "$file"
echo "--- NEXTAUTH_SECRET occurrences ---"
rg -n "NEXTAUTH_SECRET" -g "$file"
echo "--- allEnvs.NEXTAUTH_SECRET occurrences ---"
rg -n "allEnvs.NEXTAUTH_SECRET" -g "$file"
echo "--- DB key references (DB|DATABASE|DB_SECRET) ---"
rg -nP --color=never "(DB_|DATABASE)" -g "$file" | head -n 20
echo "--- CAP_AWS_REGION occurrences ---"
rg -n "CAP_AWS_REGION" -g "$file"
echo "--- CAP_AWS_ENDPOINT occurrences ---"
rg -n "CAP_AWS_ENDPOINT" -g "$file"

Length of output: 4337


Persist WORKFLOWS_RPC_SECRET and restore CAP_AWS_REGION prompt

  • Wrap WORKFLOWS_RPC_SECRET generation in

    -    envs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
    +    if (!allEnvs.WORKFLOWS_RPC_SECRET) {
    +      allEnvs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
    +      log.info("Generated WORKFLOWS_RPC_SECRET");
    +    }
    +    envs.WORKFLOWS_RPC_SECRET = allEnvs.WORKFLOWS_RPC_SECRET;

    so the secret persists across runs.

  • In the non-Docker S3 prompt group, add:

    CAP_AWS_REGION: () =>
      text({
        message: "CAP_AWS_REGION",
        placeholder: allEnvs.CAP_AWS_REGION ?? "us-east-1",
        defaultValue: allEnvs.CAP_AWS_REGION ?? "us-east-1",
      }),
  • Update the CAP_AWS_ENDPOINT prompt to use allEnvs.CAP_AWS_ENDPOINT for both defaultValue and placeholder (not NEXT_PUBLIC_CAP_AWS_ENDPOINT).

🤖 Prompt for AI Agents
In scripts/env-cli.js around line 61, the current code always regenerates
WORKFLOWS_RPC_SECRET (envs.WORKFLOWS_RPC_SECRET =
crypto.randomBytes(32).toString("base64");) so the secret doesn't persist;
change it to only generate and assign a new secret when
allEnvs.WORKFLOWS_RPC_SECRET is falsy (i.e., envs.WORKFLOWS_RPC_SECRET =
allEnvs.WORKFLOWS_RPC_SECRET || crypto.randomBytes(32).toString("base64");). In
the non-Docker S3 prompt group add a CAP_AWS_REGION prompt entry using
allEnvs.CAP_AWS_REGION for placeholder and defaultValue (text prompt with
message "CAP_AWS_REGION" and placeholder/defaultValue = allEnvs.CAP_AWS_REGION
?? "us-east-1"). Also update the CAP_AWS_ENDPOINT prompt to use
allEnvs.CAP_AWS_ENDPOINT for both defaultValue and placeholder instead of
NEXT_PUBLIC_CAP_AWS_ENDPOINT.

@Brendonovich Brendonovich marked this pull request as ready for review October 14, 2025 10:13
@Brendonovich Brendonovich merged commit f4cc6e3 into main Oct 14, 2025
14 of 15 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Oct 14, 2025