
Conversation

@Brendonovich
Contributor

@Brendonovich Brendonovich commented Oct 14, 2025

Summary by CodeRabbit

  • New Features
    • None
  • Bug Fixes
    • Improved reliability of video uploads and signed URLs by switching to dynamic AWS credentials across services.
  • Refactor
    • Centralized AWS credential handling and integrated it into S3/CloudFront clients for consistent behavior in all environments.
  • Chores
    • Infrastructure updates: added Aurora DB resource, new secrets/resources, stage-aware OIDC and CloudFront configuration, and retained recordings bucket.
    • Streamlined Vercel environment variable management with stage-based conditions.
  • Style
    • Minor formatting cleanup in Stripe webhook code (no behavior change).

@coderabbitai
Contributor

coderabbitai bot commented Oct 14, 2025

Walkthrough

Introduces a new AwsCredentials service and replaces hard-coded/static AWS credentials with dynamic, effect-driven retrieval across S3 and CloudFront usages. Updates dependency wiring to provide AwsCredentials, adjusts infra typings and configuration, re-exports the new service, and includes a minor formatting change in a Stripe webhook route.
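For orientation, a minimal sketch of what an Effect-based credential service along these lines might look like (a sketch only: the fallback branches mirror the walkthrough, the imports assume the effect and @aws-sdk/credential-providers packages, and the Vercel role-ARN branch is omitted for brevity):

import { fromContainerMetadata, fromSSO } from "@aws-sdk/credential-providers";
import type {
  AwsCredentialIdentity,
  AwsCredentialIdentityProvider,
} from "@aws-sdk/types";
import { Config, Effect, Option } from "effect";

export class AwsCredentials extends Effect.Service<AwsCredentials>()(
  "AwsCredentials",
  {
    effect: Effect.gen(function* () {
      let credentials: AwsCredentialIdentity | AwsCredentialIdentityProvider;

      // Prefer explicit static keys when both are configured.
      const accessKeys = yield* Config.option(
        Config.all([
          Config.string("CAP_AWS_ACCESS_KEY"),
          Config.string("CAP_AWS_SECRET_KEY"),
        ]),
      );

      if (Option.isSome(accessKeys)) {
        const [accessKeyId, secretAccessKey] = accessKeys.value;
        yield* Effect.log("Using CAP_AWS_ACCESS_KEY and CAP_AWS_SECRET_KEY");
        credentials = { accessKeyId, secretAccessKey };
      } else if (process.env.NODE_ENV === "development") {
        // Local development: resolve via an AWS SSO profile.
        yield* Effect.log("Using AWS_DEFAULT_PROFILE");
        credentials = fromSSO({ profile: process.env.AWS_DEFAULT_PROFILE });
      } else {
        // Containerised environments: fall back to the ECS task role.
        credentials = fromContainerMetadata();
      }

      return { credentials };
    }),
  },
) {}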

Changes

Cohort / File(s) Summary
Credential provider introduction
packages/web-backend/src/Aws.ts, packages/web-backend/src/index.ts
Adds AwsCredentials Effect-based service resolving credentials from env secrets, Vercel role ARN, dev SSO, or container metadata; re-exported in package index.
S3 client refactor to AwsCredentials
packages/web-backend/src/S3Buckets/index.ts
Replaces inline credential logic with AwsCredentials; injects AwsCredentials.Default as a dependency and uses its credentials for default and bucket clients.
Web app: CloudFront/S3 switch to dynamic credentials
apps/web/actions/video/upload.ts, apps/web/app/api/upload/[...route]/signed.ts
Imports AwsCredentials and uses await runPromise(Effect.map(AwsCredentials, (c) => c.credentials)) to instantiate CloudFrontClient; removes static env-based credentials. A sketch of this pattern follows the table.
Server dependency wiring
apps/web/lib/server.ts
Includes AwsCredentials.Default in Layer.mergeAll Dependencies export.
Infra typings/resources
infra/sst-env.d.ts
Replaces DATABASE_URL with AuroraDB resource; adds CAP_AWS_ACCESS_KEY, CAP_AWS_SECRET_KEY, GITHUB_PAT, MyApi, Runner, ShardManager, Vpc; keeps DATABASE_URL_MYSQL.
Infra config adjustments
infra/sst.config.ts
Revises bucket creation, conditional CloudFront distribution, OIDC provider logic, IAM policy resolution, staged workflow cluster, and Vercel env wiring; adds conditional statements and dynamic resolution.
Formatting-only change
apps/web/app/api/webhooks/stripe/route.ts
Reformats ternary assignment; no logic change.
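On the consuming side, the runPromise pattern referenced in the table above reduces to roughly the following (a sketch; the import specifiers for AwsCredentials and runPromise are assumptions based on the listed file paths, not verified package names):

import {
  CloudFrontClient,
  CreateInvalidationCommand,
} from "@aws-sdk/client-cloudfront";
import { Effect } from "effect";
import { AwsCredentials } from "@cap/web-backend"; // assumed export of packages/web-backend
import { runPromise } from "@/lib/server"; // assumed alias for apps/web/lib/server.ts

async function invalidatePath(distributionId: string, fileKey: string) {
  const cloudfront = new CloudFrontClient({
    region: process.env.CAP_AWS_REGION || "us-east-1",
    // Credentials come from the Effect runtime rather than static env vars.
    credentials: await runPromise(
      Effect.map(AwsCredentials, (c) => c.credentials),
    ),
  });

  await cloudfront.send(
    new CreateInvalidationCommand({
      DistributionId: distributionId,
      InvalidationBatch: {
        CallerReference: `${Date.now()}`,
        Paths: { Quantity: 1, Items: ["/" + fileKey] },
      },
    }),
  );
}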

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant App as App code (web/server)
  participant Aws as AwsCredentials (Effect service)
  participant Env as Env Secrets (CAP_AWS_ACCESS_KEY/SECRET)
  participant Role as Vercel Role ARN
  participant SSO as Dev SSO Profile
  participant Meta as Container Metadata
  participant AWS as AWS SDK Clients (S3/CloudFront)

  App->>Aws: request credentials (Effect)
  alt Env secrets present
    Aws->>Env: read access/secret
    Env-->>Aws: credentials
  else Role ARN present
    Aws->>Role: assume role (provider)
    Role-->>Aws: credentials
  else Dev environment
    Aws->>SSO: fromSSO(profile)
    SSO-->>Aws: credentials
  else Fallback
    Aws->>Meta: fromContainerMetadata()
    Meta-->>Aws: credentials
  end
  Aws-->>App: { credentials }
  App->>AWS: instantiate clients with credentials
  AWS-->>App: perform S3/CloudFront operations

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

A rabbit taps the cloud at night,
Sniffing creds by moonlit byte—
Env or role, it finds the key,
Hops to S3 and CF with glee.
Pipes and layers click in tune,
Staged burrows hum beneath the moon.
Thump—deploy! 🌙🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Title Check: ❓ Inconclusive. The title “last few staging things” is too generic and does not convey the substantive changes in this pull request, which include integrating dynamic AWS credential management, updating CloudFront and S3 configurations, and extending SST environment types. Because it uses non-descriptive phrasing and fails to highlight the main technical objectives, it does not provide meaningful context for reviewers. Please revise the title to clearly summarize the primary change, for example “Integrate dynamic AWS credentials across web and infrastructure modules,” so that it accurately reflects the scope and intent of the pull request.
✅ Passed checks (1 passed)
  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch staging

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@Brendonovich Brendonovich merged commit 2a93704 into main Oct 14, 2025
14 of 15 checks passed
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
apps/web/actions/video/upload.ts (1)

54-66: CloudFront invalidation condition is inverted and can skip/over-invalidate; also make it best-effort

  • Invalidate when using the default bucket, not when using a different custom bucket.
  • Wrap client construction in try/catch so invalidation failures don’t block presign flow.

Apply this diff:

-    if (
-      !customBucket ||
-      !s3Config ||
-      customBucket.bucketName !== serverEnv().CAP_AWS_BUCKET
-    ) {
+    if (!customBucket || customBucket.bucketName === serverEnv().CAP_AWS_BUCKET) {
       const distributionId = serverEnv().CAP_CLOUDFRONT_DISTRIBUTION_ID;
       if (distributionId) {
-        const cloudfront = new CloudFrontClient({
-          region: serverEnv().CAP_AWS_REGION || "us-east-1",
-          credentials: await runPromise(
-            Effect.map(AwsCredentials, (c) => c.credentials),
-          ),
-        });
-
-        const pathToInvalidate = "/" + fileKey;
-
-        try {
-          await cloudfront.send(
+        try {
+          const cloudfront = new CloudFrontClient({
+            region: serverEnv().CAP_AWS_REGION || "us-east-1",
+            credentials: await runPromise(
+              Effect.map(AwsCredentials, (c) => c.credentials),
+            ),
+          });
+          const pathToInvalidate = "/" + fileKey;
+          await cloudfront.send(
             new CreateInvalidationCommand({
               DistributionId: distributionId,
               InvalidationBatch: {
                 CallerReference: `${Date.now()}`,
                 Paths: {
                   Quantity: 1,
                   Items: [pathToInvalidate],
                 },
               },
             }),
           );
         } catch (error) {
           console.error("Failed to create CloudFront invalidation:", error);
         }
       }
     }

Also applies to: 68-86

apps/web/app/api/upload/[...route]/signed.ts (1)

65-79: Same invalidation condition problem; also wrap client construction in try/catch

  • Only invalidate when using the default bucket.
  • Make invalidation non-blocking by including client creation in try/catch.

Apply this diff:

-      if (
-        !customBucket ||
-        !s3Config ||
-        customBucket.bucketName !== serverEnv().CAP_AWS_BUCKET
-      ) {
+      if (!customBucket || customBucket.bucketName === serverEnv().CAP_AWS_BUCKET) {
         const distributionId = serverEnv().CAP_CLOUDFRONT_DISTRIBUTION_ID;
         if (distributionId) {
           console.log("Creating CloudFront invalidation for", fileKey);
-
-          const cloudfront = new CloudFrontClient({
-            region: serverEnv().CAP_AWS_REGION || "us-east-1",
-            credentials: await runPromise(
-              Effect.map(AwsCredentials, (c) => c.credentials),
-            ),
-          });
-
-          const pathToInvalidate = "/" + fileKey;
-
-          try {
-            const invalidation = await cloudfront.send(
+          try {
+            const cloudfront = new CloudFrontClient({
+              region: serverEnv().CAP_AWS_REGION || "us-east-1",
+              credentials: await runPromise(
+                Effect.map(AwsCredentials, (c) => c.credentials),
+              ),
+            });
+            const pathToInvalidate = "/" + fileKey;
+            const invalidation = await cloudfront.send(
               new CreateInvalidationCommand({
                 DistributionId: distributionId,
                 InvalidationBatch: {
                   CallerReference: `${Date.now()}`,
                   Paths: {
                     Quantity: 1,
                     Items: [pathToInvalidate],
                   },
                 },
               }),
             );
             console.log("CloudFront invalidation created:", invalidation);
           } catch (error) {
             console.error("Failed to create CloudFront invalidation:", error);
           }
         }
       }

Also applies to: 81-100

🧹 Nitpick comments (3)
packages/web-backend/src/Aws.ts (2)

15-31: Support optional session token for static keys

Handle CAP_AWS_SESSION_TOKEN when access keys are temporary STS creds.

-      const accessKeys = yield* Config.option(
+      const accessKeys = yield* Config.option(
         Config.all([
           Config.string("CAP_AWS_ACCESS_KEY"),
           Config.string("CAP_AWS_SECRET_KEY"),
         ]),
       );
+      const sessionToken = yield* Config.option(
+        Config.string("CAP_AWS_SESSION_TOKEN"),
+      );
@@
-      if (Option.isSome(accessKeys)) {
+      if (Option.isSome(accessKeys)) {
         const [accessKeyId, secretAccessKey] = accessKeys.value;
         yield* Effect.log("Using CAP_AWS_ACCESS_KEY and CAP_AWS_SECRET_KEY");
-        credentials = { accessKeyId, secretAccessKey };
+        credentials = Option.isSome(sessionToken)
+          ? { accessKeyId, secretAccessKey, sessionToken: sessionToken.value }
+          : { accessKeyId, secretAccessKey };
       } else if (Option.isSome(vercelAwsRole)) {

32-35: Avoid passing undefined profile to fromSSO

Pass profile only when set to prevent subtle resolution issues.

-      } else if (process.env.NODE_ENV === "development") {
+      } else if (process.env.NODE_ENV === "development") {
         yield* Effect.log("Using AWS_DEFAULT_PROFILE");
-        credentials = fromSSO({ profile: process.env.AWS_DEFAULT_PROFILE });
+        const profile = process.env.AWS_DEFAULT_PROFILE;
+        credentials = profile ? fromSSO({ profile }) : fromSSO();
       } else {
infra/sst.config.ts (1)

132-155: Tighten IAM policy: avoid s3: on bucket*

Limit to least privilege for bucket-level and object-level actions.

-          policy: $resolve([
-            recordingsBucket.arn,
-            cloudfrontDistribution?.arn,
-          ] as const).apply(([bucketArn, cloudfrontArn]) =>
-            JSON.stringify({
-              Version: "2012-10-17",
-              Statement: [
-                {
-                  Effect: "Allow",
-                  Action: ["s3:*"],
-                  Resource: `${bucketArn}/*`,
-                },
-                {
-                  Effect: "Allow",
-                  Action: ["s3:*"],
-                  Resource: bucketArn,
-                },
-                cloudfrontArn && {
-                  Effect: "Allow",
-                  Action: ["cloudfront:CreateInvalidation"],
-                  Resource: cloudfrontArn,
-                },
-              ].filter(Boolean),
-            }),
-          ),
+          policy: $resolve([
+            recordingsBucket.arn,
+            cloudfrontDistribution?.arn,
+          ] as const).apply(([bucketArn, cloudfrontArn]) =>
+            JSON.stringify({
+              Version: "2012-10-17",
+              Statement: [
+                // Bucket-level permissions
+                {
+                  Effect: "Allow",
+                  Action: ["s3:ListBucket"],
+                  Resource: bucketArn,
+                },
+                // Object-level permissions
+                {
+                  Effect: "Allow",
+                  Action: [
+                    "s3:GetObject",
+                    "s3:PutObject",
+                    "s3:DeleteObject",
+                    "s3:AbortMultipartUpload"
+                  ],
+                  Resource: `${bucketArn}/*`,
+                },
+                // CloudFront invalidation (prod only)
+                cloudfrontArn && {
+                  Effect: "Allow",
+                  Action: ["cloudfront:CreateInvalidation"],
+                  Resource: cloudfrontArn,
+                },
+              ].filter(Boolean),
+            }),
+          ),
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d717f93 and 5dace4a.

📒 Files selected for processing (8)
  • apps/web/actions/video/upload.ts (2 hunks)
  • apps/web/app/api/upload/[...route]/signed.ts (2 hunks)
  • apps/web/lib/server.ts (2 hunks)
  • infra/sst-env.d.ts (1 hunks)
  • infra/sst.config.ts (6 hunks)
  • packages/web-backend/src/Aws.ts (1 hunks)
  • packages/web-backend/src/S3Buckets/index.ts (4 hunks)
  • packages/web-backend/src/index.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Use a 2-space indent for TypeScript code.
Use Biome for formatting and linting TypeScript/JavaScript files by running pnpm format.

Use strict TypeScript and avoid any; leverage shared types

Files:

  • apps/web/actions/video/upload.ts
  • packages/web-backend/src/index.ts
  • infra/sst-env.d.ts
  • apps/web/lib/server.ts
  • packages/web-backend/src/Aws.ts
  • apps/web/app/api/upload/[...route]/signed.ts
  • packages/web-backend/src/S3Buckets/index.ts
  • infra/sst.config.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx}: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., user-menu.tsx).
Use PascalCase for React/Solid components.

Files:

  • apps/web/actions/video/upload.ts
  • packages/web-backend/src/index.ts
  • infra/sst-env.d.ts
  • apps/web/lib/server.ts
  • packages/web-backend/src/Aws.ts
  • apps/web/app/api/upload/[...route]/signed.ts
  • packages/web-backend/src/S3Buckets/index.ts
  • infra/sst.config.ts
apps/web/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

On the client, always use useEffectQuery or useEffectMutation from @/lib/EffectRuntime; never call EffectRuntime.run* directly in components.

Files:

  • apps/web/actions/video/upload.ts
  • apps/web/lib/server.ts
  • apps/web/app/api/upload/[...route]/signed.ts
apps/web/actions/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

All Groq/OpenAI calls must execute in Next.js Server Actions under apps/web/actions; do not invoke AI providers elsewhere

Files:

  • apps/web/actions/video/upload.ts
apps/web/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for all client-side server state and fetching in the web app
Mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData
Run server-side effects via the ManagedRuntime from apps/web/lib/server.ts using EffectRuntime.runPromise/runPromiseExit; do not create runtimes ad hoc
Client code should use helpers from apps/web/lib/EffectRuntime.ts (useEffectQuery, useEffectMutation, useRpcClient); never call ManagedRuntime.make inside components

Files:

  • apps/web/actions/video/upload.ts
  • apps/web/lib/server.ts
  • apps/web/app/api/upload/[...route]/signed.ts
apps/web/app/api/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/app/api/**/*.{ts,tsx}: Prefer Server Actions for API surface; when routes are necessary, implement under app/api and export only the handler from apiToHandler(ApiLive)
Construct API routes with @effect/platform HttpApi/HttpApiBuilder, declare contracts with Schema, and only export the handler
Use HttpAuthMiddleware for required auth and provideOptionalAuth for guests; avoid duplicating session lookups
Map domain errors to transport with HttpApiError.* and keep translation exhaustive (catchTags/tapErrorCause)
Inside HttpApiBuilder.group, acquire services with Effect.gen and provide dependencies via Layer.provide instead of manual provideService

Files:

  • apps/web/app/api/upload/[...route]/signed.ts
apps/web/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Server components needing Effect services must call EffectRuntime.runPromise(effect.pipe(provideOptionalAuth))

Files:

  • apps/web/app/api/upload/[...route]/signed.ts
🧠 Learnings (1)
📚 Learning: 2025-09-24T07:24:21.449Z
Learnt from: Brendonovich
PR: CapSoftware/Cap#1060
File: apps/web/components/forms/server.ts:85-97
Timestamp: 2025-09-24T07:24:21.449Z
Learning: In the S3BucketAccess service, bucketName is an Effect that must be yielded (using yield*) to get the actual string value before use in string interpolation or other operations.

Applied to files:

  • packages/web-backend/src/S3Buckets/index.ts
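The learning above is easy to miss in practice, so here is a minimal illustration (the S3BucketAccess shape is hypothetical, reduced to just enough to show why yield* is required):

import { Context, Effect } from "effect";

// Hypothetical service shape for illustration only.
class S3BucketAccess extends Context.Tag("S3BucketAccess")<
  S3BucketAccess,
  { readonly bucketName: Effect.Effect<string> }
>() {}

const objectUrl = (key: string) =>
  Effect.gen(function* () {
    const bucket = yield* S3BucketAccess;
    // bucketName is itself an Effect, so yield it before interpolating.
    const name = yield* bucket.bucketName;
    return `https://${name}.s3.amazonaws.com/${key}`;
  });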
🧬 Code graph analysis (4)
apps/web/actions/video/upload.ts (2)
apps/web/lib/server.ts (1)
  • runPromise (124-136)
packages/web-backend/src/Aws.ts (1)
  • AwsCredentials (9-43)
apps/web/lib/server.ts (1)
packages/web-backend/src/Aws.ts (1)
  • AwsCredentials (9-43)
apps/web/app/api/upload/[...route]/signed.ts (2)
apps/web/lib/server.ts (1)
  • runPromise (124-136)
packages/web-backend/src/Aws.ts (1)
  • AwsCredentials (9-43)
packages/web-backend/src/S3Buckets/index.ts (2)
packages/web-backend/src/Aws.ts (1)
  • AwsCredentials (9-43)
packages/web-backend/src/Database.ts (1)
  • Database (5-15)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
  • GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
  • GitHub Check: Analyze (rust)
🔇 Additional comments (12)
infra/sst-env.d.ts (1)

7-50: Auto-generated types look fine

Resource shapes and module augmentation are consistent; no changes needed here.

Also applies to: 54-55

packages/web-backend/src/index.ts (1)

2-2: Expose Aws surface

Good call re-exporting Aws.ts to make AwsCredentials available to consumers.

apps/web/lib/server.ts (1)

5-16: AwsCredentials wired into server runtime

Including AwsCredentials.Default in Dependencies is correct and aligns with downstream usage.

Also applies to: 100-111

apps/web/actions/video/upload.ts (1)

63-66: Credential sourcing via AwsCredentials looks good

Passing either identity or provider into CloudFrontClient is valid. Nice centralization.
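For context, AWS SDK v3 clients accept either a concrete credential object or a lazy provider function for the credentials option, so both forms below type-check (placeholder values only):

import { CloudFrontClient } from "@aws-sdk/client-cloudfront";
import { fromContainerMetadata } from "@aws-sdk/credential-providers";

// Static identity object…
const withIdentity = new CloudFrontClient({
  region: "us-east-1",
  credentials: {
    accessKeyId: "AKIA_PLACEHOLDER",
    secretAccessKey: "placeholder",
  },
});

// …or a provider function, resolved lazily on the first signed request.
const withProvider = new CloudFrontClient({
  region: "us-east-1",
  credentials: fromContainerMetadata(),
});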

apps/web/app/api/upload/[...route]/signed.ts (1)

76-79: Credential centralization is correct

Using AwsCredentials via runPromise is appropriate here.

packages/web-backend/src/S3Buckets/index.ts (2)

8-9: Credential wiring via AwsCredentials looks correct

Dependency injection and S3Client config now consume the provider. Nice consolidation.

Also applies to: 17-18, 29-29


183-187: Good: dependencies updated to include AwsCredentials.Default

Ensures service resolution at runtime.

infra/sst.config.ts (5)

48-52: Retain S3 bucket on delete: LGTM

Prevents accidental data loss across stage transitions.


66-69: Prod-only CloudFront resolution

Looks good. Verify no staging code paths require invalidations.


95-103: OIDC provider handling per stage makes sense

Reuse in prod/staging, create otherwise. Ensure provider exists in target accounts for those stages.


161-165: Staging-only workflow cluster

Reasonable scoping; avoids infra drift in prod.


166-194: Env var creation loop: safe filtering and stage-specific targets

Looks good. Values using Outputs are preserved; staging customEnvironmentIds applied only when needed.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
infra/sst.config.ts (3)

13-14: Hard-coded AXIOM API token committed to source. Move to a secret.

Avoid committing tokens; store in Secret and reference via secrets. Also ensures Pulumi state doesn’t contain cleartext.

Apply this diff here to remove the hard-coded token:

-const AXIOM_API_TOKEN = "xaat-c0704be6-e942-4935-b068-3b491d7cc00f";
+// AXIOM_API_TOKEN will be provided via Secrets()

Then, add AXIOM_API_TOKEN to Secrets and update usages:

  • In function Secrets() add:
AXIOM_API_TOKEN: new sst.Secret("AXIOM_API_TOKEN"),
  • Update env var wiring (examples):
// vercelVariables entry
{ key: "NEXT_PUBLIC_AXIOM_TOKEN", value: secrets.AXIOM_API_TOKEN.value }

// commonEnvironment
AXIOM_API_TOKEN: secrets.AXIOM_API_TOKEN.value

200-210: Add AXIOM_API_TOKEN to Secrets to replace the in-code literal.

Centralize the token with other secrets.

Apply this diff inside Secrets():

   return {
     DATABASE_URL_MYSQL: new sst.Secret("DATABASE_URL_MYSQL"),
     CAP_AWS_ACCESS_KEY: new sst.Secret("CAP_AWS_ACCESS_KEY"),
     CAP_AWS_SECRET_KEY: new sst.Secret("CAP_AWS_SECRET_KEY"),
     GITHUB_PAT: new sst.Secret("GITHUB_PAT"),
+    AXIOM_API_TOKEN: new sst.Secret("AXIOM_API_TOKEN"),
     WORKFLOWS_RPC_SECRET: new random.RandomString("WORKFLOWS_RPC_SECRET", {
       length: 48,
     }),
   };

300-309: Use secret-backed AXIOM token in service environment.

Avoid embedding tokens in code and Pulumi state.

-  AXIOM_API_TOKEN,
+  AXIOM_API_TOKEN: secrets.AXIOM_API_TOKEN.value,

Additionally, consider wrapping with pulumi.secret(...) if the provider doesn’t already mark it as a secret Output.
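A sketch of that wrapping (pulumi.secret is the standard Pulumi helper; the axiomTokenValue binding is a stand-in for the secret's Output):

import * as pulumi from "@pulumi/pulumi";

// Stand-in for the Output<string> produced by the AXIOM_API_TOKEN secret.
declare const axiomTokenValue: pulumi.Output<string>;

// pulumi.secret marks the value as secret so it is encrypted in state
// and redacted in CLI/preview output.
const axiomToken = pulumi.secret(axiomTokenValue);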

🧹 Nitpick comments (3)
packages/web-backend/src/S3Buckets/index.ts (1)

8-8: LGTM! Well-executed credential refactor.

The service now correctly depends on AwsCredentials for credential management, eliminating duplicated credential resolution logic. The credentials are properly extracted (line 17) and integrated into the default client configuration (line 29), with the dependency correctly declared (lines 183-187).

Also applies to: 17-17, 29-29, 183-187

infra/sst.config.ts (2)

71-71: Remove unused IAM user.

VercelUser is created but never used; drop it to reduce surface area.

-const vercelUser = new aws.iam.User("VercelUser", { forceDestroy: false });

169-182: Avoid NonNullable cast; type the collection up-front and treat secrets as Pulumi secrets.

Minor: instead of casting inside forEach, build a typed array of vercel.ProjectEnvironmentVariableArgs. Also mark sensitive values (like WORKFLOWS_RPC_SECRET) as Pulumi secrets to avoid plaintext in state.

Example:

const vars: Array<Omit<vercel.ProjectEnvironmentVariableArgs, "projectId">> = [
  ...vercelVariables,
  ...(workflowCluster
    ? [
        { key: "WORKFLOWS_RPC_URL", value: workflowCluster.api.url },
        { key: "WORKFLOWS_RPC_SECRET", value: pulumi.secret(secrets.WORKFLOWS_RPC_SECRET.result) },
      ]
    : []),
  { key: "VERCEL_AWS_ROLE_ARN", value: vercelAwsAccessRole.arn },
];

vars.forEach((v) => new vercel.ProjectEnvironmentVariable(`VercelEnv${v.key}`, { ...v, projectId: vercelProject.id, /* ... */ }));
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d717f93 and c698566.

📒 Files selected for processing (9)
  • apps/web/actions/video/upload.ts (2 hunks)
  • apps/web/app/api/upload/[...route]/signed.ts (2 hunks)
  • apps/web/app/api/webhooks/stripe/route.ts (1 hunks)
  • apps/web/lib/server.ts (2 hunks)
  • infra/sst-env.d.ts (1 hunks)
  • infra/sst.config.ts (6 hunks)
  • packages/web-backend/src/Aws.ts (1 hunks)
  • packages/web-backend/src/S3Buckets/index.ts (4 hunks)
  • packages/web-backend/src/index.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Use a 2-space indent for TypeScript code.
Use Biome for formatting and linting TypeScript/JavaScript files by running pnpm format.

Use strict TypeScript and avoid any; leverage shared types

Files:

  • packages/web-backend/src/index.ts
  • packages/web-backend/src/Aws.ts
  • apps/web/app/api/webhooks/stripe/route.ts
  • apps/web/lib/server.ts
  • packages/web-backend/src/S3Buckets/index.ts
  • infra/sst.config.ts
  • apps/web/actions/video/upload.ts
  • apps/web/app/api/upload/[...route]/signed.ts
  • infra/sst-env.d.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx}: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., user-menu.tsx).
Use PascalCase for React/Solid components.

Files:

  • packages/web-backend/src/index.ts
  • packages/web-backend/src/Aws.ts
  • apps/web/app/api/webhooks/stripe/route.ts
  • apps/web/lib/server.ts
  • packages/web-backend/src/S3Buckets/index.ts
  • infra/sst.config.ts
  • apps/web/actions/video/upload.ts
  • apps/web/app/api/upload/[...route]/signed.ts
  • infra/sst-env.d.ts
apps/web/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

On the client, always use useEffectQuery or useEffectMutation from @/lib/EffectRuntime; never call EffectRuntime.run* directly in components.

Files:

  • apps/web/app/api/webhooks/stripe/route.ts
  • apps/web/lib/server.ts
  • apps/web/actions/video/upload.ts
  • apps/web/app/api/upload/[...route]/signed.ts
apps/web/app/api/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/app/api/**/*.{ts,tsx}: Prefer Server Actions for API surface; when routes are necessary, implement under app/api and export only the handler from apiToHandler(ApiLive)
Construct API routes with @effect/platform HttpApi/HttpApiBuilder, declare contracts with Schema, and only export the handler
Use HttpAuthMiddleware for required auth and provideOptionalAuth for guests; avoid duplicating session lookups
Map domain errors to transport with HttpApiError.* and keep translation exhaustive (catchTags/tapErrorCause)
Inside HttpApiBuilder.group, acquire services with Effect.gen and provide dependencies via Layer.provide instead of manual provideService

Files:

  • apps/web/app/api/webhooks/stripe/route.ts
  • apps/web/app/api/upload/[...route]/signed.ts
apps/web/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for all client-side server state and fetching in the web app
Mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData
Run server-side effects via the ManagedRuntime from apps/web/lib/server.ts using EffectRuntime.runPromise/runPromiseExit; do not create runtimes ad hoc
Client code should use helpers from apps/web/lib/EffectRuntime.ts (useEffectQuery, useEffectMutation, useRpcClient); never call ManagedRuntime.make inside components

Files:

  • apps/web/app/api/webhooks/stripe/route.ts
  • apps/web/lib/server.ts
  • apps/web/actions/video/upload.ts
  • apps/web/app/api/upload/[...route]/signed.ts
apps/web/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Server components needing Effect services must call EffectRuntime.runPromise(effect.pipe(provideOptionalAuth))

Files:

  • apps/web/app/api/webhooks/stripe/route.ts
  • apps/web/app/api/upload/[...route]/signed.ts
apps/web/actions/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

All Groq/OpenAI calls must execute in Next.js Server Actions under apps/web/actions; do not invoke AI providers elsewhere

Files:

  • apps/web/actions/video/upload.ts
🧠 Learnings (1)
📚 Learning: 2025-09-24T07:24:21.449Z
Learnt from: Brendonovich
PR: CapSoftware/Cap#1060
File: apps/web/components/forms/server.ts:85-97
Timestamp: 2025-09-24T07:24:21.449Z
Learning: In the S3BucketAccess service, bucketName is an Effect that must be yielded (using yield*) to get the actual string value before use in string interpolation or other operations.

Applied to files:

  • packages/web-backend/src/S3Buckets/index.ts
🧬 Code graph analysis (5)
apps/web/app/api/webhooks/stripe/route.ts (2)
packages/env/server.ts (1)
  • serverEnv (84-88)
packages/env/index.ts (1)
  • serverEnv (2-2)
apps/web/lib/server.ts (1)
packages/web-backend/src/Aws.ts (1)
  • AwsCredentials (9-43)
packages/web-backend/src/S3Buckets/index.ts (2)
packages/web-backend/src/Aws.ts (1)
  • AwsCredentials (9-43)
packages/web-backend/src/Database.ts (1)
  • Database (5-15)
apps/web/actions/video/upload.ts (2)
apps/web/lib/server.ts (1)
  • runPromise (124-136)
packages/web-backend/src/Aws.ts (1)
  • AwsCredentials (9-43)
apps/web/app/api/upload/[...route]/signed.ts (2)
apps/web/lib/server.ts (1)
  • runPromise (124-136)
packages/web-backend/src/Aws.ts (1)
  • AwsCredentials (9-43)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
  • GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
  • GitHub Check: Analyze (rust)
🔇 Additional comments (11)
apps/web/app/api/webhooks/stripe/route.ts (1)

118-121: LGTM! Improved readability.

The ternary expression is now more readable with the multi-line format while preserving the same logic.

infra/sst-env.d.ts (1)

1-1: Auto-generated file - no manual changes needed.

This file is auto-generated by SST (line 1 comment). The new resource definitions reflect infrastructure changes and are correctly formatted.

packages/web-backend/src/index.ts (1)

2-2: LGTM! AwsCredentials correctly exposed.

The new export makes AwsCredentials available to package consumers, enabling the credential refactor across the codebase.

apps/web/lib/server.ts (1)

5-5: LGTM! AwsCredentials integrated into dependency layer.

The new service is properly imported and added to the merged dependencies, making it available throughout the application's Effect runtime.

Also applies to: 108-108

apps/web/app/api/upload/[...route]/signed.ts (1)

9-9: LGTM! Dynamic credentials correctly implemented.

The CloudFrontClient now uses dynamic credentials from AwsCredentials via the Effect runtime, replacing static environment-based configuration. The pattern using runPromise and Effect.map is correct and consistent with other changes in the PR.

Also applies to: 76-78

apps/web/actions/video/upload.ts (1)

13-13: LGTM! Consistent credential usage pattern.

The implementation matches the pattern used in apps/web/app/api/upload/[...route]/signed.ts, maintaining consistency across CloudFront client initialization throughout the codebase.

Also applies to: 63-65

packages/web-backend/src/Aws.ts (1)

1-43: LGTM! Well-designed credential provider.

The AwsCredentials service implements a robust credential resolution strategy with a clear precedence order:

  1. CAP_AWS_ACCESS_KEY / CAP_AWS_SECRET_KEY (static credentials)
  2. VERCEL_AWS_ROLE_ARN (Vercel OIDC provider)
  3. AWS_DEFAULT_PROFILE (SSO in development)
  4. ECS metadata (fallback for container environments)

The implementation correctly:

  • Uses Config.option for safe environment variable access
  • Returns appropriate credential types compatible with AWS SDK clients
  • Logs the credential source for debugging without exposing secrets
  • Follows the Effect.Service pattern for dependency injection
infra/sst.config.ts (4)

75-80: LGTM: conditional WEB_URL envs.

This guard is fine and avoids empty values.


95-103: OIDC provider lookup vs. creation looks good.

Pattern aligns with staged/prod reuse and ephemeral env creation.

Ensure the production/staging OIDC provider already exists at https://oidc.vercel.com/${VERCEL_TEAM_SLUG}; otherwise stack will fail during getOpenIdConnectProviderOutput.


161-165: LGTM: staging-only WorkflowCluster.

Creation is gated to staging; clean separation.


66-70: Directly use .arn from getDistributionOutput
aws.cloudfront.getDistributionOutput exposes an arn property, so you can reference cloudfrontDistribution.arn directly.
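A sketch of that direct usage inside sst.config.ts (the distribution ID is a placeholder; aws and $app are SST-provided globals in the config file):

// Look up the existing distribution in production only.
const cloudfrontDistribution =
  $app.stage === "production"
    ? aws.cloudfront.getDistributionOutput({ id: "E1234567890ABC" })
    : undefined;

// The lookup output exposes `arn` directly; no extra apply/lookup is needed.
const cloudfrontArn = cloudfrontDistribution?.arn;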

Comment on lines +48 to +52
const recordingsBucket = new aws.s3.BucketV2(
  "RecordingsBucket",
  {},
  { retainOnDelete: true },
);

⚠️ Potential issue | 🟠 Major

S3 bucket lacks security hardening (public access block, SSE, ownership controls).

Harden defaults for the recordings bucket.

Keep the bucket instantiation, and add these resources below it:

new aws.s3.BucketPublicAccessBlock("RecordingsBucketPublicAccess", {
  bucket: recordingsBucket.id,
  blockPublicAcls: true,
  ignorePublicAcls: true,
  blockPublicPolicy: true,
  restrictPublicBuckets: true,
});

new aws.s3.BucketOwnershipControls("RecordingsBucketOwnership", {
  bucket: recordingsBucket.id,
  rule: { objectOwnership: "BucketOwnerEnforced" },
});

new aws.s3.BucketServerSideEncryptionConfigurationV2("RecordingsBucketSSE", {
  bucket: recordingsBucket.id,
  rules: [
    {
      applyServerSideEncryptionByDefault: { sseAlgorithm: "AES256" },
      bucketKeyEnabled: true,
    },
  ],
});

// Optional but recommended:
new aws.s3.BucketVersioningV2("RecordingsBucketVersioning", {
  bucket: recordingsBucket.id,
  versioningConfiguration: { status: "Enabled" },
});
🤖 Prompt for AI Agents
In infra/sst.config.ts around lines 48 to 52, the RecordingsBucket is created
without S3 security hardening; add resources immediately after the bucket to
enforce Block Public Access (blockPublicAcls, ignorePublicAcls,
blockPublicPolicy, restrictPublicBuckets), set Bucket Ownership Controls to
BucketOwnerEnforced, configure Server-Side Encryption (AES256 with bucket key
enabled), and optionally enable Versioning; reference recordingsBucket.id for
each new aws.s3 resource so the bucket gets public access blocked, ownership
enforced, SSE configured, and versioning enabled.

Comment on lines 54 to 59
const vercelVariables = [
  { key: "NEXT_PUBLIC_AXIOM_TOKEN", value: AXIOM_API_TOKEN },
  { key: "NEXT_PUBLIC_AXIOM_DATASET", value: AXIOM_DATASET },
  { key: "CAP_AWS_BUCKET", value: recordingsBucket.bucket },
  { key: "NEXT_PUBLIC_CAP_AWS_BUCKET", value: recordingsBucket.bucket },
  { key: "DATABASE_URL", value: secrets.DATABASE_URL_MYSQL.value },
];

🛠️ Refactor suggestion | 🟠 Major

Use secret-backed AXIOM token for Vercel env like other sensitive values.

Replace the direct constant usage with the Secret value.

-  { key: "NEXT_PUBLIC_AXIOM_TOKEN", value: AXIOM_API_TOKEN },
+  { key: "NEXT_PUBLIC_AXIOM_TOKEN", value: secrets.AXIOM_API_TOKEN.value },

Note: Although labeled “NEXT_PUBLIC”, avoid committing the token in code; source-of-truth via Secret prevents accidental reuse and rotation issues.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Original:

const vercelVariables = [
  { key: "NEXT_PUBLIC_AXIOM_TOKEN", value: AXIOM_API_TOKEN },
  { key: "NEXT_PUBLIC_AXIOM_DATASET", value: AXIOM_DATASET },
  { key: "CAP_AWS_BUCKET", value: recordingsBucket.bucket },
  { key: "NEXT_PUBLIC_CAP_AWS_BUCKET", value: recordingsBucket.bucket },
  { key: "DATABASE_URL", value: secrets.DATABASE_URL_MYSQL.value },
];

Suggested:

const vercelVariables = [
  { key: "NEXT_PUBLIC_AXIOM_TOKEN", value: secrets.AXIOM_API_TOKEN.value },
  { key: "NEXT_PUBLIC_AXIOM_DATASET", value: AXIOM_DATASET },
  { key: "CAP_AWS_BUCKET", value: recordingsBucket.bucket },
  { key: "DATABASE_URL", value: secrets.DATABASE_URL_MYSQL.value },
];
🤖 Prompt for AI Agents
In infra/sst.config.ts around lines 54 to 59, the Vercel env currently uses the
plain AXIOM_API_TOKEN constant; replace that with the secret-backed value (e.g.,
secrets.AXIOM_API_TOKEN.value) so the token is sourced from the Secrets object
like DATABASE_URL_MYSQL.value, preventing accidental commit and enabling
rotation — keep the key name as "NEXT_PUBLIC_AXIOM_TOKEN" but use the secret
value reference instead of the constant.

Comment on lines 120 to 123
StringLike: {
  [`${oidc.url}:sub`]: [
    `owner:${VERCEL_TEAM_SLUG}:project:*:environment:staging`,
    `owner:${VERCEL_TEAM_SLUG}:project:*:environment:${$app.stage}`,
  ],

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Narrow OIDC sub condition to least privilege (avoid project:*).

Scope sub to specific project(s) to reduce blast radius.

If the project name is cap-web, prefer:

- [`${oidc.url}:sub`]: [`owner:${VERCEL_TEAM_SLUG}:project:*:environment:${$app.stage}`]
+ [`${oidc.url}:sub`]: [`owner:${VERCEL_TEAM_SLUG}:project:cap-web:environment:${$app.stage}`]

Verify the exact sub format per Vercel OIDC docs for role assumptions.


🌐 Web query:

What is the exact `sub` (subject) format emitted by Vercel OIDC for deployments? How to scope IAM role trust to a single project and environment?

💡 Result:

Exact sub format

  • sub = "owner:[TEAM_SLUG]:project:[PROJECT_NAME]:environment:[ENVIRONMENT]". Example: "owner:acme:project:acme_website:environment:production". [1]

How to scope an IAM role trust to a single Vercel project + environment

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::[ACCOUNT_ID]:oidc-provider/oidc.vercel.com/[TEAM_SLUG]"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.vercel.com/[TEAM_SLUG]:aud": "https://vercel.com/[TEAM_SLUG]",
          "oidc.vercel.com/[TEAM_SLUG]:sub": "owner:[TEAM_SLUG]:project:[PROJECT_NAME]:environment:[ENVIRONMENT]"
        }
      }
    }
  ]
}

  • Notes: use StringLike with wildcards if you want to allow multiple projects/environments; for team→global issuer differences see Vercel docs. [1][2]

Sources

  • Vercel OIDC reference and AWS guide. [1][2]

Narrow OIDC sub to Specific Project
Replace the wildcard with the exact project name to enforce least privilege. For example, for cap-web:

- [`${oidc.url}:sub`]: [`owner:${VERCEL_TEAM_SLUG}:project:*:environment:${$app.stage}`]
+ [`${oidc.url}:sub`]: [`owner:${VERCEL_TEAM_SLUG}:project:cap-web:environment:${$app.stage}`]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Original:

StringLike: {
  [`${oidc.url}:sub`]: [
    `owner:${VERCEL_TEAM_SLUG}:project:*:environment:staging`,
    `owner:${VERCEL_TEAM_SLUG}:project:*:environment:${$app.stage}`,
  ],

Suggested:

StringLike: {
  [`${oidc.url}:sub`]: [
    `owner:${VERCEL_TEAM_SLUG}:project:cap-web:environment:${$app.stage}`,
  ],
🤖 Prompt for AI Agents
In infra/sst.config.ts around lines 120 to 123, the OIDC StringLike claim
currently uses a wildcard for the Vercel project which is overly broad; replace
the `project:*` segment with the exact project slug (e.g., `project:cap-web`) or
with a specific env var like `project:${VERCEL_PROJECT_NAME}` so the claim
becomes `owner:${VERCEL_TEAM_SLUG}:project:cap-web:environment:${$app.stage}`
(or equivalent using the env var), and verify the project slug matches the
actual Vercel project name.

Comment on lines +132 to 157
policy: $resolve([
  recordingsBucket.arn,
  cloudfrontDistribution?.arn,
] as const).apply(([bucketArn, cloudfrontArn]) =>
  JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: ["s3:*"],
        Resource: `${bucketArn}/*`,
      },
      {
        Effect: "Allow",
        Action: ["s3:*"],
        Resource: bucketArn,
      },
      cloudfrontArn && {
        Effect: "Allow",
        Action: ["cloudfront:CreateInvalidation"],
        Resource: cloudfrontArn,
      },
    ].filter(Boolean),
  }),
),
},

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

IAM policy hardening: principle of least privilege and CloudFront resource scoping.

  • S3 permissions are s3:* on bucket and objects. Limit to required actions (e.g., GetObject, PutObject, DeleteObject, ListBucket) to reduce risk.
  • Confirm whether cloudfront:CreateInvalidation supports resource-level ARNs. If not, set Resource: "*" (typical for CloudFront invalidations).

Apply this diff to scope actions; adjust CloudFront after verification:

-          Action: ["s3:*"],
+          Action: ["s3:ListBucket"],
           Resource: ${bucketArn},
         },
         {
           Effect: "Allow",
-          Action: ["s3:*"],
+          Action: ["s3:GetObject","s3:PutObject","s3:DeleteObject"],
           Resource: `${bucketArn}/*`,
         },
-        cloudfrontArn && {
-          Effect: "Allow",
-          Action: ["cloudfront:CreateInvalidation"],
-          Resource: cloudfrontArn,
-        },
+        // After verification, either keep ARN if supported or use "*"
+        cloudfrontArn && {
+          Effect: "Allow",
+          Action: ["cloudfront:CreateInvalidation"],
+          Resource: cloudfrontArn, // replace with "*" if ARNs aren't supported
+        },

Also, ensure $resolve([recordingsBucket.arn, cloudfrontDistribution?.arn]) tolerates undefined. If not, wrap with an explicit nullable Output or branch before $resolve.


🌐 Web query:

Does AWS IAM support resource-level permissions for `cloudfront:CreateInvalidation`, or must `Resource` be `"*"`?

💡 Result:

Yes — cloudfront:CreateInvalidation supports resource-level permissions. The Service Authorization Reference shows CreateInvalidation requires the distribution resource (arn:...:distribution/...) so you can restrict Resource to that distribution ARN instead of "*". [1][2]

Sources:

  • AWS Service Authorization Reference — CloudFront (CreateInvalidation → resource type: distribution*). [1]
  • CloudFront IAM / Developer Guide (how resource-level permissions work for CloudFront). [2]

Scope S3 actions and narrow CloudFront invalidation resource

  • Restrict S3 to s3:ListBucket on ${bucketArn} and s3:GetObject,s3:PutObject,s3:DeleteObject on ${bucketArn}/*.
  • CloudFront invalidation supports distribution ARNs—keep Resource: cloudfrontArn.
  • Ensure $resolve([recordingsBucket.arn, cloudfrontDistribution?.arn]) handles undefined.
         {
           Effect: "Allow",
-          Action: ["s3:*"],
+          Action: ["s3:ListBucket"],
           Resource: ${bucketArn},
         },
         {
           Effect: "Allow",
-          Action: ["s3:*"],
+          Action: ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
           Resource: `${bucketArn}/*`,
         },
-        cloudfrontArn && {
-          Effect: "Allow",
-          Action: ["cloudfront:CreateInvalidation"],
-          Resource: cloudfrontArn,
-        },
+        cloudfrontArn && {
+          Effect: "Allow",
+          Action: ["cloudfront:CreateInvalidation"],
+          Resource: cloudfrontArn,
+        },

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In infra/sst.config.ts around lines 132 to 157, the IAM policy is overly
permissive and may break when cloudfrontDistribution?.arn is undefined; update
the policy to (1) replace the broad "s3:*" permissions with two statements: one
granting "s3:ListBucket" on the bucket ARN (Resource: bucketArn) and another
granting "s3:GetObject","s3:PutObject","s3:DeleteObject" on the objects path
(Resource: `${bucketArn}/*`), (2) keep the CloudFront invalidation statement as
Effect: "Allow", Action: ["cloudfront:CreateInvalidation"], Resource:
cloudfrontArn (do not convert to wildcard), and (3) ensure the $resolve([...])
call and its apply callback handle undefined cloudfrontArn safely (e.g., include
cloudfrontArn in the array and only push the CloudFront statement when
cloudfrontArn is truthy) so the policy JSON never contains undefined.

