Staging cluster deployment #1183
Caution: Review failed. The pull request is closed.

Walkthrough

Introduces secret-based RPC authentication and corresponding client/server middleware. Refactors Loom video import errors and adds an internal-access check. Adjusts S3 credential resolution with fallbacks and buffering. Prunes public AWS env vars across CI, Docker, config, and schema. Adds an SST-generated WORKFLOWS_RPC_SECRET and updates infrastructure/environment wiring.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant WebApp as Web (apps/web)
    participant RpcClient as RPC Client Middleware
    participant Runner as RPC Server (runner)
    participant AuthMW as SecretAuthMiddleware
    participant Errors as Http Errors
    User->>WebApp: Initiates workflow RPC
    WebApp->>RpcClient: Build RPC request
    RpcClient->>RpcClient: Inject Authorization: Bearer <redacted secret>
    RpcClient->>Runner: RPC call with headers
    Runner->>AuthMW: Validate Authorization vs WORKFLOWS_RPC_SECRET
    alt valid
        AuthMW-->>Runner: ok
        Runner-->>RpcClient: RPC response
        RpcClient-->>WebApp: Result
    else invalid/missing
        AuthMW-->>Runner: raise InvalidRpcAuth
        Runner->>Errors: Map InvalidRpcAuth
        Errors-->>RpcClient: HTTP 500 (unknown cause)
        RpcClient-->>WebApp: Error
    end
```
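In plain terms, the client half of this flow just attaches the shared secret to every request. A minimal sketch, assuming a fetch-based transport (fetchWorkflowRpc is an illustrative name; the real wiring goes through the Effect RPC client middleware):

```ts
// Sketch of the client side of the diagram above; fetchWorkflowRpc is a
// hypothetical helper, not the actual Effect RPC middleware.
async function fetchWorkflowRpc(url: string, body: unknown): Promise<Response> {
  return fetch(url, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      // The shared secret travels as a bearer token on every RPC call.
      authorization: `Bearer ${process.env.WORKFLOWS_RPC_SECRET}`,
    },
    body: JSON.stringify(body),
  });
}
```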
```mermaid
sequenceDiagram
    autonumber
    actor User
    participant HTTP as Backend HTTP (Loom/Http)
    participant Import as Loom ImportVideo
    participant Loom as Loom API
    participant S3 as S3 Client
    User->>HTTP: POST /loom/importVideo
    HTTP->>HTTP: Get CurrentUser
    alt email not ending with "@cap.so"
        HTTP-->>User: Terminate (internal access only)
    else allowed
        HTTP->>Import: Execute import flow
        Import->>Loom: Verify URL and metadata
        alt NotFound/InvalidContentLength
            Import-->>HTTP: VideoInvalidError (400)
            HTTP-->>User: Error
        else OK
            Import->>S3: Stream upload with periodic progress logs
            S3-->>Import: Success
            Import-->>HTTP: Done
            HTTP-->>User: Success
        end
    end
```
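The internal-access gate in this flow boils down to an email-domain check. A hedged sketch (assertInternalUser is an illustrative name; the real check runs inside the Effect HTTP handler):

```ts
// Hypothetical guard mirroring the "@cap.so" branch in the diagram above.
function assertInternalUser(email: string): void {
  if (!email.endsWith("@cap.so")) {
    throw new Error("Loom import is restricted to internal accounts");
  }
}
```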
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (1)
📒 Files selected for processing (4)
Actionable comments posted: 7
🧹 Nitpick comments (5)
packages/web-backend/package.json (1)
14-18: Align AWS SDK v3 versions to avoid duplication and type skew

Mixing 3.485.0/3.821.0 with 3.908.0 can pull multiple copies and widen bundle size. Recommend pinning all @aws-sdk/* to the same minor.

Apply:

```diff
-    "@aws-sdk/client-s3": "^3.485.0",
-    "@aws-sdk/cloudfront-signer": "^3.821.0",
-    "@aws-sdk/credential-providers": "^3.908.0",
-    "@aws-sdk/s3-presigned-post": "^3.485.0",
-    "@aws-sdk/s3-request-presigner": "^3.485.0",
+    "@aws-sdk/client-s3": "^3.908.0",
+    "@aws-sdk/cloudfront-signer": "^3.908.0",
+    "@aws-sdk/credential-providers": "^3.908.0",
+    "@aws-sdk/s3-presigned-post": "^3.908.0",
+    "@aws-sdk/s3-request-presigner": "^3.908.0",
```

infra/sst.config.ts (2)
3-4: Minor: unused type imports

If not used, consider removing to keep lint clean.
156-161: Vercel env wiring for RPC URL/secret

Looks good. Consider explicitly setting type: "encrypted" for the secret if the provider supports it.
packages/web-backend/src/S3Buckets/index.ts (1)
3-3: Add missing import only if used

fromContainerMetadata/fromSSO added; fine. If switching to a unified chain later, consider fromNodeProviderChain to cover env/SSO/IMDS in one go, as sketched below.
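For reference, a minimal sketch of that unified-chain approach, assuming the standard @aws-sdk/credential-providers export (the env var names reuse those mentioned elsewhere in this review):

```ts
import { S3Client } from "@aws-sdk/client-s3";
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";

// fromNodeProviderChain walks env vars, shared config/SSO profiles, and
// container/instance metadata in order, replacing manual dev/prod branching.
const s3 = new S3Client({
  region: process.env.CAP_AWS_REGION,
  credentials: fromNodeProviderChain({
    profile: process.env.AWS_DEFAULT_PROFILE,
  }),
});
```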
apps/web-cluster/src/runner/index.ts (1)
58-79: Use constant-time comparison for the auth secret

Avoid timing attacks by comparing digests with timingSafeEqual.

Within the middleware body:

```diff
- const authHeader = Headers.get(options.headers, "authorization");
- if (Option.isNone(authHeader) || authHeader.value !== authSecret) {
-   if (Option.isNone(authHeader))
-     yield* Effect.log("No auth header provided");
-
-   return yield* new Workflows.InvalidRpcAuth();
- }
-
- return yield* options.next;
+ const authHeader = Headers.get(options.headers, "authorization");
+ const provided = Option.isNone(authHeader) ? "" : authHeader.value;
+ const headerDigest = createHash("sha256").update(provided).digest();
+ const secretDigest = createHash("sha256").update(authSecret).digest();
+ if (!timingSafeEqual(headerDigest, secretDigest)) {
+   if (Option.isNone(authHeader)) yield* Effect.log("No auth header provided");
+   return yield* new Workflows.InvalidRpcAuth();
+ }
+ return yield* options.next;
```

Add import:

```ts
import { createHash, timingSafeEqual } from "node:crypto";
```
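As a self-contained illustration of the hash-then-compare trick (secretsMatch is an illustrative name):

```ts
import { createHash, timingSafeEqual } from "node:crypto";

// Hashing both inputs yields fixed-length buffers, so timingSafeEqual never
// throws on a length mismatch and the comparison time does not depend on
// where the strings first differ.
function secretsMatch(provided: string, expected: string): boolean {
  const a = createHash("sha256").update(provided).digest();
  const b = createHash("sha256").update(expected).digest();
  return timingSafeEqual(a, b);
}
```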
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (23)
- .dockerignore (1 hunks)
- .github/workflows/ci.yml (0 hunks)
- .github/workflows/docker-build-web.yml (0 hunks)
- .github/workflows/publish.yml (0 hunks)
- .npmrc (1 hunks)
- CLAUDE.md (3 hunks)
- apps/web-cluster/src/runner/index.ts (4 hunks)
- apps/web/Dockerfile (0 hunks)
- apps/web/lib/server.ts (6 hunks)
- crates/recording/src/output_pipeline/core.rs (1 hunks)
- infra/sst-env.d.ts (1 hunks)
- infra/sst.config.ts (6 hunks)
- packages/env/build.ts (0 hunks)
- packages/web-backend/package.json (1 hunks)
- packages/web-backend/src/Http/Errors.ts (2 hunks)
- packages/web-backend/src/Loom/ImportVideo.ts (6 hunks)
- packages/web-backend/src/S3Buckets/index.ts (3 hunks)
- packages/web-backend/src/Spaces/index.ts (1 hunks)
- packages/web-backend/src/Workflows.ts (1 hunks)
- packages/web-domain/src/Loom.ts (3 hunks)
- packages/web-domain/src/S3Bucket.ts (1 hunks)
- scripts/env-cli.js (1 hunks)
- scripts/setup.js (1 hunks)
💤 Files with no reviewable changes (5)
- apps/web/Dockerfile
- .github/workflows/publish.yml
- .github/workflows/docker-build-web.yml
- packages/env/build.ts
- .github/workflows/ci.yml
🧰 Additional context used
📓 Path-based instructions (8)
crates/**/src/**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
For desktop IPC, use tauri_specta derive/macros on Rust types/events; do not hand-roll bindings
Files:
crates/recording/src/output_pipeline/core.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (CLAUDE.md)
Do not add inline, block, or docstring comments in any language; code must be self-explanatory
Files:
- crates/recording/src/output_pipeline/core.rs
- packages/web-domain/src/S3Bucket.ts
- packages/web-backend/src/Loom/ImportVideo.ts
- packages/web-backend/src/Http/Errors.ts
- scripts/setup.js
- packages/web-backend/src/Spaces/index.ts
- scripts/env-cli.js
- apps/web-cluster/src/runner/index.ts
- infra/sst.config.ts
- packages/web-backend/src/Workflows.ts
- infra/sst-env.d.ts
- packages/web-domain/src/Loom.ts
- packages/web-backend/src/S3Buckets/index.ts
- apps/web/lib/server.ts
**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/*.rs: Format Rust code using rustfmt and ensure all Rust code passes workspace-level clippy lints.
Rust modules should be named with snake_case, and crate directories should be in kebab-case.
Files:
crates/recording/src/output_pipeline/core.rs
crates/*/src/**/*
📄 CodeRabbit inference engine (AGENTS.md)
Rust crates should place tests within the src/ and/or a sibling tests/ directory for each crate inside crates/*.
Files:
crates/recording/src/output_pipeline/core.rs
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; leverage shared types from packages
**/*.{ts,tsx}: Use a 2-space indent for TypeScript code.
Use Biome for formatting and linting TypeScript/JavaScript files by running pnpm format.
Files:
- packages/web-domain/src/S3Bucket.ts
- packages/web-backend/src/Loom/ImportVideo.ts
- packages/web-backend/src/Http/Errors.ts
- packages/web-backend/src/Spaces/index.ts
- apps/web-cluster/src/runner/index.ts
- infra/sst.config.ts
- packages/web-backend/src/Workflows.ts
- infra/sst-env.d.ts
- packages/web-domain/src/Loom.ts
- packages/web-backend/src/S3Buckets/index.ts
- apps/web/lib/server.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx}: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., user-menu.tsx).
Use PascalCase for React/Solid components.
Files:
- packages/web-domain/src/S3Bucket.ts
- packages/web-backend/src/Loom/ImportVideo.ts
- packages/web-backend/src/Http/Errors.ts
- scripts/setup.js
- packages/web-backend/src/Spaces/index.ts
- scripts/env-cli.js
- apps/web-cluster/src/runner/index.ts
- infra/sst.config.ts
- packages/web-backend/src/Workflows.ts
- infra/sst-env.d.ts
- packages/web-domain/src/Loom.ts
- packages/web-backend/src/S3Buckets/index.ts
- apps/web/lib/server.ts
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for all client-side server state and data fetching in the web app
Web mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData rather than broad invalidations
Client code should use useEffectQuery/useEffectMutation and useRpcClient from apps/web/lib/EffectRuntime.ts; do not create ManagedRuntime inside components
Files:
apps/web/lib/server.ts
apps/web/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
On the client, always use useEffectQuery or useEffectMutation from @/lib/EffectRuntime; never call EffectRuntime.run* directly in components.
Files:
apps/web/lib/server.ts
🧬 Code graph analysis (4)
packages/web-backend/src/Loom/ImportVideo.ts (3)
- packages/web-domain/src/Loom.ts (2): VideoInvalidError (25-29), ExternalLoomError (19-23)
- packages/web-domain/src/S3Bucket.ts (1): S3Error (21-23)
- packages/web-domain/src/Database.ts (1): DatabaseError (3-6)

packages/web-backend/src/Http/Errors.ts (1)
- packages/web-backend/src/Workflows.ts (1): InvalidRpcAuth (9-12)

packages/web-backend/src/Workflows.ts (1)
- apps/web/lib/Rpcs.ts (1): Rpc (11-14)

apps/web/lib/server.ts (3)
- packages/web-backend/src/Workflows.ts (1): Workflows (18-18)
- packages/web-backend/src/Database.ts (1): Database (5-15)
- apps/web/lib/tracing.ts (1): layerTracer (5-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (13)
crates/recording/src/output_pipeline/core.rs (1)
131-131: LGTM: Improved error visibility.

Using {:#} prints the full error chain, which is helpful for debugging task failures by exposing root causes.

scripts/setup.js (1)
221-239: Bug fix: Correctly handle nested .dylib files.

The change from entry.path to entry.parentPath fixes signing of nested libraries when using recursive: true.

Verify that the project's Node.js version (engines field, .nvmrc/.node-version, CI workflows) is ≥ 20.12.0 to ensure parentPath is supported.
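For context, a minimal sketch of the difference (the directory path and filter are illustrative): with recursive: true, a dirent's parentPath is the directory that actually contains the entry, so nested libraries resolve to their real locations.

```ts
import { readdirSync } from "node:fs";
import { join } from "node:path";

// parentPath (Node >= 20.12) holds the containing directory of each entry,
// so nested .dylib files keep their full paths under recursive traversal.
const dylibs = readdirSync("target/release", {
  withFileTypes: true,
  recursive: true,
})
  .filter((entry) => entry.isFile() && entry.name.endsWith(".dylib"))
  .map((entry) => join(entry.parentPath, entry.name));
```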
6-8: Ensure allowImportingTsExtensions is enabled

Only apps/desktop/tsconfig.json currently sets "allowImportingTsExtensions": true. Confirm that the TS config used to compile packages/web-backend (root or its own tsconfig.json) also includes this setting; otherwise imports with .ts extensions will fail (TS5090).
2-2: Import path normalization looks good

Switching to "./User.ts" matches local extension style used elsewhere.
CLAUDE.md (1)
270-275: Docs update aligns with env changes

Removal of NEXT_PUBLIC_CAP_AWS_* and minor formatting look consistent.
Also applies to: 394-396
infra/sst.config.ts (3)
36-36: Enabling random provider is correct for RandomString

LGTM.
188-190: RandomString for WORKFLOWS_RPC_SECRET

Good addition and length.
305-305: Propagating WORKFLOWS_RPC_SECRET into service envs

LGTM.
packages/web-backend/src/Http/Errors.ts (2)
13-14: S3 error cause mapping fix is correct

Changing cause from "database" to "s3" aligns with the error source.
15-18: Map InvalidRpcAuth to HTTP 401 Unauthorized

Auth failures should return 401 Unauthorized, not 500 Internal Server Error.

```diff
- () => new Http.InternalServerError({ cause: "unknown" }),
+ () => new Http.Unauthorized({ cause: "invalid-rpc-auth" }),
```

Verify that Http.Unauthorized is exported (or use the appropriate auth-error class such as Http.Forbidden).
81-97: Client-side RPC auth injection looks correct

Header injection via SecretAuthMiddleware is wired properly with a redacted secret.
packages/web-domain/src/Loom.ts (1)
25-29: Rename complete with no stale references
VideoURLInvalidError is not used anywhere in the codebase.
9-17: RPC auth middleware wiring confirmed

SecretAuthMiddleware is correctly applied to RpcClient.make(Workflows.RpcGroup).
```diff
@@ -1 +1,2 @@
 auto-install-peers = true
+force-legacy-deploy = true
```
force-legacy-deploy is not a supported npm/pnpm config.
Neither npm nor pnpm exposes a force-legacy-deploy setting, so this line will just emit warnings at install time and won’t change behavior. If you meant to opt into the legacy peer dependency resolver, use legacy-peer-deps = true; otherwise please clarify the intended knob.
🤖 Prompt for AI Agents
In .npmrc around line 2, the config key "force-legacy-deploy = true" is invalid
for npm/pnpm and will produce warnings; either remove this line if it was
accidental, or if you intended to opt into the legacy peer dependency resolver
replace it with the supported key "legacy-peer-deps = true". Ensure you commit
the corrected .npmrc and run an install to verify no warnings remain.
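For illustration, the corrected .npmrc would read as follows, assuming the legacy peer-dependency resolver was the intent:

```ini
auto-install-peers = true
legacy-peer-deps = true
```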
```ts
Layer.provide(
  HttpApiBuilder.middleware(Effect.provide(CookiePasswordAttachmentLive)),
),
Layer.provide(layerTracer),
```
Remove duplicate tracer provision
layerTracer is already part of Dependencies; providing it again is redundant.
Apply:

```diff
- Layer.provide(layerTracer),
```
🤖 Prompt for AI Agents
In apps/web/lib/server.ts around line 169, the Layer.provide(layerTracer) entry
is redundant because layerTracer is already included in the Dependencies set;
remove this duplicate provision (delete or omit the Layer.provide(layerTracer)
line) so the tracer is provided only once via Dependencies and keep the
remaining Layer.provide calls unchanged.
```ts
permissions: [
  {
    actions: ["s3:*"],
    resources: [bucket.arn, $interpolate`${bucket.arn}/*`],
  },
],
```
Overbroad S3 permissions; tighten to least privilege
s3:* on bucket and objects is excessive. Restrict to the specific actions needed (ListBucket on bucket; GetObject/PutObject/DeleteObject/CreateMultipartUpload/AbortMultipartUpload/ListMultipartUploadParts on objects).
Apply:
```diff
- permissions: [
-   {
-     actions: ["s3:*"],
-     resources: [bucket.arn, $interpolate`${bucket.arn}/*`],
-   },
- ],
+ permissions: [
+   {
+     actions: ["s3:ListBucket"],
+     resources: [bucket.arn],
+   },
+   {
+     actions: [
+       "s3:GetObject",
+       "s3:PutObject",
+       "s3:DeleteObject",
+       "s3:CreateMultipartUpload",
+       "s3:AbortMultipartUpload",
+       "s3:ListMultipartUploadParts"
+     ],
+     resources: [$interpolate`${bucket.arn}/*`],
+   },
+ ],
```
🤖 Prompt for AI Agents
In infra/sst.config.ts around lines 420 to 425, the IAM statement grants overly
broad "s3:*" on the bucket and objects; replace this with least-privilege
actions by granting "s3:ListBucket" on the bucket ARN and on the object ARNs
grant only "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
"s3:CreateMultipartUpload", "s3:AbortMultipartUpload", and
"s3:ListMultipartUploadParts"; ensure resources array separates the bucket ARN
for ListBucket and the interpolated `${bucket.arn}/*` for object actions and
remove "s3:*".
```ts
const contentLength = yield* Headers.get(
  resp.headers,
  "content-length",
).pipe(
  Option.map((v) => Number(v)),
  Option.getOrUndefined,
  Effect.catchTag(
    "NoSuchElementException",
    () => new Loom.VideoInvalidError({ cause: "InvalidContentLength" }),
  ),
);
```
Fix content-length header parsing
Headers.get returns an Effect, not an Option. Running it through Option.map means we call the Option helpers on an Effect value, which throws before we can emit a VideoInvalidError. The handler passed to Effect.catchTag also needs to return an Effect.fail so the workflow actually fails with the right error. Please rewrite this block to stay in the Effect API, e.g.:
```diff
-const contentLength = yield* Headers.get(
-  resp.headers,
-  "content-length",
-).pipe(
-  Option.map((v) => Number(v)),
-  Effect.catchTag(
-    "NoSuchElementException",
-    () => new Loom.VideoInvalidError({ cause: "InvalidContentLength" }),
-  ),
-);
+const contentLength = yield* Headers.get(resp.headers, "content-length").pipe(
+  Effect.map((value) => Number(value)),
+  Effect.tap((n) =>
+    Number.isFinite(n)
+      ? Effect.unit
+      : Effect.fail(
+          new Loom.VideoInvalidError({ cause: "InvalidContentLength" }),
+        ),
+  ),
+  Effect.catchTag("NoSuchElementException", () =>
+    Effect.fail(new Loom.VideoInvalidError({ cause: "InvalidContentLength" })),
+  ),
+);
```

This keeps everything in Effect, guarantees we surface the user-facing error, and avoids a runtime type error before the retry logic can kick in.
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 121–130, the code
treats Headers.get as if it returned an Option (using Option.map) but
Headers.get returns an Effect, so replace Option helpers with Effect
equivalents: call Effect.map on the result of Headers.get to coerce the header
to Number, and change the Effect.catchTag handler to return an Effect.fail with
new Loom.VideoInvalidError({ cause: "InvalidContentLength" }) so the overall
expression remains an Effect that fails with the proper user-facing error
instead of throwing a runtime type error.
```ts
credentials: yield* Config.option(
  Config.all([
    Config.string("CAP_AWS_ACCESS_KEY"),
    Config.string("CAP_AWS_SECRET_KEY"),
  ]).pipe(
    Config.map(([accessKeyId, secretAccessKey]) => ({
      accessKeyId,
      secretAccessKey,
    })),
    Config.orElse(() =>
      Config.string("VERCEL_AWS_ROLE_ARN").pipe(
        Config.map((arn) => awsCredentialsProvider({ roleArn: arn })),
      ),
    ),
  ),
).pipe(
  Effect.flatMap(
    Effect.catchTag("NoSuchElementException", () =>
      Effect.succeed(
        process.env.NODE_ENV === "development"
          ? fromSSO({ profile: process.env.AWS_DEFAULT_PROFILE })
          : fromContainerMetadata(),
      ),
    ),
  ),
),
```
Credential fallback logic is incorrect (Effect/Option misuse); fallback never runs
Using Effect.flatMap with Effect.catchTag on an Option value won’t catch “None”, and the types don’t line up. Provide a proper Option fallback.
Apply:
```diff
-      credentials: yield* Config.option(
-        Config.all([
-          Config.string("CAP_AWS_ACCESS_KEY"),
-          Config.string("CAP_AWS_SECRET_KEY"),
-        ]).pipe(
-          Config.map(([accessKeyId, secretAccessKey]) => ({
-            accessKeyId,
-            secretAccessKey,
-          })),
-          Config.orElse(() =>
-            Config.string("VERCEL_AWS_ROLE_ARN").pipe(
-              Config.map((arn) => awsCredentialsProvider({ roleArn: arn })),
-            ),
-          ),
-        ),
-      ).pipe(
-        Effect.flatMap(
-          Effect.catchTag("NoSuchElementException", () =>
-            Effect.succeed(
-              process.env.NODE_ENV === "development"
-                ? fromSSO({ profile: process.env.AWS_DEFAULT_PROFILE })
-                : fromContainerMetadata(),
-            ),
-          ),
-        ),
-      ),
+      credentials: (yield* Config.option(
+        Config.all([
+          Config.string("CAP_AWS_ACCESS_KEY"),
+          Config.string("CAP_AWS_SECRET_KEY"),
+        ]).pipe(
+          Config.map(([accessKeyId, secretAccessKey]) => ({
+            accessKeyId,
+            secretAccessKey,
+          })),
+          Config.orElse(() =>
+            Config.string("VERCEL_AWS_ROLE_ARN").pipe(
+              Config.map((arn) => awsCredentialsProvider({ roleArn: arn })),
+            ),
+          ),
+        ),
+      )).pipe(
+        Option.getOrElse(() =>
+          process.env.NODE_ENV === "development"
+            ? fromSSO({ profile: process.env.AWS_DEFAULT_PROFILE })
+            : fromContainerMetadata(),
+        ),
+      ),
```

This returns either explicit creds/provider from env/role or the SSO/IMDS provider. As per coding guidelines.
🤖 Prompt for AI Agents
In packages/web-backend/src/S3Buckets/index.ts around lines 28–54 the current
code tries to handle a missing Option by using Effect.flatMap with
Effect.catchTag on a Config.option result, which never catches a None and yields
a type mismatch; replace this Effect-level error handling with an
Option/Config-level fallback: remove the Effect.flatMap/Effect.catchTag block
and use Config.orElse to provide a fallback Config that returns the same
credential type (either explicit creds/provider) by selecting fromSSO (dev) or
fromContainerMetadata (prod); if fromSSO/fromContainerMetadata are effects, wrap
them to match Config's expected return (e.g. use
Config.fromEffect/Config.succeed with Effect.sync) so the types align and the
fallback runs when env vars are absent.
```ts
  credentials: defaultConfigs.credentials,
  forcePathStyle: defaultConfigs.forcePathStyle,
  requestStreamBufferSize: 16 * 1024,
});
```
S3ClientConfig: requestStreamBufferSize is not a valid option
S3Client doesn’t accept requestStreamBufferSize; this will fail type-checking. Remove or set options on the request handler instead (e.g., NodeHttpHandler timeouts) or tune upload via @aws-sdk/lib-storage.
Apply:

```diff
-  requestStreamBufferSize: 16 * 1024,
```
🤖 Prompt for AI Agents
In packages/web-backend/src/S3Buckets/index.ts around line 71, the
S3ClientConfig is being given requestStreamBufferSize which is not a valid
option and will fail TypeScript checks; remove requestStreamBufferSize from the
S3Client configuration, and if you need to tune buffering or timeouts, configure
the request handler (for example pass a configured NodeHttpHandler with
timeout/connection options) or adjust upload behavior using @aws-sdk/lib-storage
for multipart uploads instead.
```ts
envs.VITE_SERVER_URL = "http://localhost:3000";
envs.WEB_URL = "http://localhost:3000";
envs.NEXTAUTH_URL = envs.WEB_URL;
envs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
```
🧩 Analysis chain
Persist the RPC secret and restore missing CAP_AWS_REGION (breaks S3 defaults)
- WORKFLOWS_RPC_SECRET is regenerated every run; persist like NEXTAUTH_SECRET/DB key.
- CAP_AWS_REGION is no longer prompted in the non‑Docker flow, but backend requires it (Config.string("CAP_AWS_REGION")). This will fail at runtime.
Apply:

```diff
- envs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
+ if (!allEnvs.WORKFLOWS_RPC_SECRET) {
+   allEnvs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
+   log.info("Generated WORKFLOWS_RPC_SECRET");
+ }
+ envs.WORKFLOWS_RPC_SECRET = allEnvs.WORKFLOWS_RPC_SECRET;
```

Additionally (outside this hunk):
+ envs.WORKFLOWS_RPC_SECRET = allEnvs.WORKFLOWS_RPC_SECRET;Additionally (outside this hunk):
- Add CAP_AWS_REGION to the S3 prompt group when not using Docker.
Example:
```ts
CAP_AWS_REGION: () =>
  text({
    message: "CAP_AWS_REGION",
    placeholder: allEnvs.CAP_AWS_REGION ?? "us-east-1",
    defaultValue: allEnvs.CAP_AWS_REGION ?? "us-east-1",
  }),
```

- Fix placeholders to use CAP_AWS_ENDPOINT instead of the removed NEXT_PUBLIC_CAP_AWS_ENDPOINT:

```ts
defaultValue: allEnvs.CAP_AWS_ENDPOINT,
placeholder: allEnvs.CAP_AWS_ENDPOINT,
```

Confirm that local dev without Docker now sets CAP_AWS_REGION and that .env/target profile keep a stable WORKFLOWS_RPC_SECRET across runs. Based on learnings.
🏁 Script executed:
```bash
#!/bin/bash
set -e
# Locate env-cli.js
file=$(fd env-cli.js)
echo "Found file: $file"
echo "--- Lines 1-80 of $file ---"
sed -n '1,80p' "$file"
echo "--- NEXTAUTH_SECRET occurrences ---"
rg -n "NEXTAUTH_SECRET" -g "$file"
echo "--- allEnvs.NEXTAUTH_SECRET occurrences ---"
rg -n "allEnvs.NEXTAUTH_SECRET" -g "$file"
echo "--- DB key references (DB|DATABASE|DB_SECRET) ---"
rg -nP --color=never "(DB_|DATABASE)" -g "$file" | head -n 20
echo "--- CAP_AWS_REGION occurrences ---"
rg -n "CAP_AWS_REGION" -g "$file"
echo "--- CAP_AWS_ENDPOINT occurrences ---"
rg -n "CAP_AWS_ENDPOINT" -g "$file"
```

Length of output: 4337
Persist WORKFLOWS_RPC_SECRET and restore CAP_AWS_REGION prompt
- Wrap the WORKFLOWS_RPC_SECRET generation:

  ```diff
  - envs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
  + if (!allEnvs.WORKFLOWS_RPC_SECRET) {
  +   allEnvs.WORKFLOWS_RPC_SECRET = crypto.randomBytes(32).toString("base64");
  +   log.info("Generated WORKFLOWS_RPC_SECRET");
  + }
  + envs.WORKFLOWS_RPC_SECRET = allEnvs.WORKFLOWS_RPC_SECRET;
  ```

  so the secret persists across runs.

- In the non-Docker S3 prompt group, add:

  ```ts
  CAP_AWS_REGION: () =>
    text({
      message: "CAP_AWS_REGION",
      placeholder: allEnvs.CAP_AWS_REGION ?? "us-east-1",
      defaultValue: allEnvs.CAP_AWS_REGION ?? "us-east-1",
    }),
  ```

- Update the CAP_AWS_ENDPOINT prompt to use allEnvs.CAP_AWS_ENDPOINT for both defaultValue and placeholder (not NEXT_PUBLIC_CAP_AWS_ENDPOINT).
🤖 Prompt for AI Agents
In scripts/env-cli.js around line 61, the current code always regenerates
WORKFLOWS_RPC_SECRET (envs.WORKFLOWS_RPC_SECRET =
crypto.randomBytes(32).toString("base64");) so the secret doesn't persist;
change it to only generate and assign a new secret when
allEnvs.WORKFLOWS_RPC_SECRET is falsy (i.e., envs.WORKFLOWS_RPC_SECRET =
allEnvs.WORKFLOWS_RPC_SECRET || crypto.randomBytes(32).toString("base64");). In
the non-Docker S3 prompt group add a CAP_AWS_REGION prompt entry using
allEnvs.CAP_AWS_REGION for placeholder and defaultValue (text prompt with
message "CAP_AWS_REGION" and placeholder/defaultValue = allEnvs.CAP_AWS_REGION
?? "us-east-1"). Also update the CAP_AWS_ENDPOINT prompt to use
allEnvs.CAP_AWS_ENDPOINT for both defaultValue and placeholder instead of
NEXT_PUBLIC_CAP_AWS_ENDPOINT.
Summary by CodeRabbit

- New Features
- Bug Fixes
- Refactor
- Chores
- Documentation