Phase 3: AI API Endpoints & Worker Integration #3
base: main
Conversation
- Add AI chat API endpoint (POST /api/ai/chat)
  - Request validation with Zod-style validation
  - Platform context enrichment from D1 database
  - Cost tracking and usage reporting
  - Comprehensive error handling
- Add 3 AI-enhanced MCP tools:
  - aiSharpAnalysis: AI-powered customer profiling
  - aiSteamDetection: AI-powered line movement detection
  - aiRiskReport: AI-powered risk assessment & hedge recommendations
- Update worker routing in src/index.ts:
  - Add POST /api/ai/chat route
  - Import handleAIChat handler
- Register AI tools in MCP system:
  - Add registerAITools() function to toolRegistry
  - Add AI tool definitions to tools.ts
  - Integrate with existing MCP infrastructure
- Add branching strategy documentation

Files: 5 new, 3 modified (+684 lines)

Co-authored-by: Augment Agent <[email protected]>
Walkthrough

Adds AI endpoints (/api/ai/health, /api/ai/query, /api/ai/chat), three MCP AI tool handlers (sharp analysis, steam detection, risk report), registry/tool definitions, and configuration files (.editorconfig, ESLint, Prettier, Husky). Includes documentation for branching strategy and a Phase 3 summary. No changes to existing non-AI routes.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Client
    participant W as Worker (index.ts)
    participant H as handleAIChat
    participant A as Analytics DB
    participant K as Kimi API (BettingAnalyzer)
    C->>W: POST /api/ai/chat {messages, includeContext, maxTokens}
    W->>H: handleAIChat(request, env)
    H->>H: Validate body / defaults
    alt includeContext == true
        H->>A: Fetch platform context
        A-->>H: Context data or none
        H->>H: Insert system context message (if available)
    end
    H->>K: Analyze chat (messages, maxTokens, apiKey)
    K-->>H: {text, usage, cost}
    H-->>W: JSON {text, usage, cost, context?, ts}
    W-->>C: 200 with CORS
    opt Error paths
        H-->>W: JSON error {code, message, ts}
        W-->>C: 4xx/5xx with CORS
    end
```

```mermaid
sequenceDiagram
    autonumber
    participant C as Client
    participant W as Worker (index.ts)
    participant Q as handleAIQuery
    participant DB as D1 (Analytics/Raw)
    participant K as Kimi API
    C->>W: POST /api/ai/query {query, agentID?}
    W->>Q: handleAIQuery(request, env)
    Q->>Q: Validate API key, parse intent
    alt sharp_analysis / steam_detection / risk_report
        Q->>DB: Fetch required data
        DB-->>Q: Data
        Q->>K: Run analysis (BettingAnalyzer)
        K-->>Q: Result + usage/cost
    else agent_performance
        Q->>DB: Query top agents
        DB-->>Q: Aggregates
    else general_chat
        Q->>K: Chat response
        K-->>Q: Text + usage/cost
    end
    Q-->>W: JSON {query, intent, result, usage?, cost?, ts}
    W-->>C: 200 or error code
```

```mermaid
sequenceDiagram
    autonumber
    participant C as Client
    participant W as Worker (index.ts)
    participant H as handleHealthCheck
    participant A as ANALYTICS DB
    participant R as RAW_FEED_DB
    participant K as Kimi API
    C->>W: GET /api/ai/health
    W->>H: handleHealthCheck(request, env)
    H->>K: Probe connectivity (apiKey)
    par DB checks
        H->>A: SELECT count(tables)
        H->>R: SELECT count(tables)
    end
    A-->>H: Status
    R-->>H: Status
    K-->>H: Status
    H-->>W: JSON {status, kimi, databases[], ts, version}
    W-->>C: 200/206/503 with CORS
```
Estimated code review effort: 4 (Complex) | ~60 minutes

Possibly related PRs
Poem
Pre-merge checks and finishing touches
Failed checks (1 warning)
Passed checks (2 passed)
- Add .prettierrc.json for code formatting
- Add .eslintrc.json for linting rules
- Add .editorconfig for editor consistency
- Update pre-commit hook to check formatting
- Add Phase 3 completion documentation

Files: 5 new
Actionable comments posted: 7
Nitpick comments (4)
src/mcp/toolRegistry.ts (1)
181-194: Add requestId propagation and structured logging/errors in registry. Per guidelines for src/mcp/**: generate a requestId, include it in logs and JSON responses, and log duration. Recommend passing requestId into handlers and including it in error payloads.
As per coding guidelines
```diff
@@ function registerAITools() {
   // AI Sharp Analysis
   toolRegistry.set('aiSharpAnalysis', getAISharpAnalysis);
@@
   // AI Risk Report
   toolRegistry.set('aiRiskReport', getAIRiskReport);
 }
@@ export async function callTool(
   toolName: string,
   args: Record<string, any>,
   env: MCPEnv
 ): Promise<MCPToolResult> {
   // Initialize registry on first call
   initializeRegistry();
+  const requestId = Date.now().toString(36);
+  const startedAt = Date.now();
+
   const handler = toolRegistry.get(toolName);
   if (!handler) {
     return {
       content: [
         {
           type: 'text',
-          text: `Tool not found: ${toolName}\n\nAvailable tools: ${Array.from(toolRegistry.keys()).join(', ')}`,
+          text: JSON.stringify({
+            error: 'Tool not found',
+            tool: toolName,
+            requestId,
+            availableTools: Array.from(toolRegistry.keys()),
+          }, null, 2),
         },
       ],
       isError: true,
     };
   }
   try {
-    console.log(`[MCP] Executing tool: ${toolName}`, { args });
-    const result = await handler(args, env);
-    console.log(`[MCP] Tool completed: ${toolName}`, {
-      isError: result.isError || false,
-    });
+    console.log(`[${requestId}] [MCP] Executing tool: ${toolName}`, { args });
+    const result = await handler({ ...args, requestId }, env);
+    const durationMs = Date.now() - startedAt;
+    console.log(`[${requestId}] [MCP] Tool completed: ${toolName}`, {
+      isError: result.isError || false,
+      durationMs,
+    });
     return result;
   } catch (error) {
-    console.error(`[MCP] Tool execution failed: ${toolName}`, error);
-    return createErrorResult(error);
+    const durationMs = Date.now() - startedAt;
+    console.error(`[${requestId}] [MCP] Tool execution failed: ${toolName}`, { error, durationMs });
+    return createErrorResult(error, requestId);
   }
 }
@@
-function createErrorResult(error: unknown): MCPToolResult {
+function createErrorResult(error: unknown, requestId?: string): MCPToolResult {
   const message = error instanceof Error ? error.message : String(error);
   return {
     content: [
       {
         type: 'text',
-        text: `Error: ${message}`,
+        text: JSON.stringify({ error: message, requestId }, null, 2),
       },
     ],
     isError: true,
   };
 }
```

Optionally thread requestId into existing internal wrappers:

```diff
   } catch (error) {
-    return createErrorResult(error);
+    return createErrorResult(error, (args as any)?.requestId);
   }
```

src/mcp/tools.ts (1)
444-489: Schemas and names align with handlers; consider ID casing consistency. Definitions match handler names/args. Optional: standardize ID casing across tools (customerId vs customerID, eventId vs eventID) to reduce client confusion.
src/mcp/handlers/aiSharpAnalysis.ts (2)
27-34: Cast the D1 result via `as unknown as <Type>` per project convention. Aligns with the D1 typing guideline.
As per coding guidelines
```diff
-  const customer = await env.ANALYTICS.prepare(`
+  const customer = (await env.ANALYTICS.prepare(`
     SELECT cid, clv, wr, ao, nb, upd
     FROM sharp_indicators
     WHERE cid = ?
     LIMIT 1
-  `).bind(customerId).first<CustomerData>();
+  `)
+    .bind(customerId)
+    .first()) as unknown as CustomerData;
```
49-66: Include requestId and duration in the success path. Return requestId and log completion with duration.
As per coding guidelines
```diff
   aiUsage: {
     inputTokens: result.usage.inputTokens,
     outputTokens: result.usage.outputTokens,
     totalTokens: result.usage.totalTokens,
     cost: result.cost.totalCost
   },
-  timestamp: new Date().toISOString(),
+  requestId,
+  timestamp: new Date().toISOString(),
 };
-return { content: [{ type: 'text', text: JSON.stringify(response, null, 2) }], isError: false };
+const durationMs = Date.now() - startedAt;
+console.log(`[${requestId}] aiSharpAnalysis completed`, { durationMs, isError: false });
+return { content: [{ type: 'text', text: JSON.stringify(response, null, 2) }], isError: false };
```
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Files selected for processing (8)

- BRANCHING_STRATEGY.md (1 hunks)
- src/api/ai-chat.ts (1 hunks)
- src/index.ts (2 hunks)
- src/mcp/handlers/aiRiskReport.ts (1 hunks)
- src/mcp/handlers/aiSharpAnalysis.ts (1 hunks)
- src/mcp/handlers/aiSteamDetection.ts (1 hunks)
- src/mcp/toolRegistry.ts (3 hunks)
- src/mcp/tools.ts (2 hunks)
Additional context used

Path-based instructions (22)
src/**/*.ts
CodeRabbit inference engine (.cursor/rules/cloudflare-workers.mdc)
Use the Env, BetTickerSnifferEnv, and MCPEnv types from src/types/api.ts across TypeScript sources
Use @cloudflare/workers-types for Cloudflare Worker TypeScript definitions across source files
src/**/*.ts: Always use parameterized queries with .bind() for all D1 queries
Never use string interpolation in SQL for D1 queries
Always cast D1 results with `as unknown as <Type>` when consuming D1Result
Always limit query results (use LIMIT) in D1 queries
Always handle database errors around D1 operations (try/catch and propagate meaningful errors)
Never forget to bind parameters when executing D1 queries
Never ignore database errors from D1 operations
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
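The D1 rules above (parameterized `.bind()`, `LIMIT`, the `as unknown as <Type>` cast, and error handling) combine into a small pattern. A minimal sketch, with illustrative stand-in types for the D1 binding and row shape (not the project's actual D1 types):

```typescript
// Stand-in interfaces for the D1 prepared-statement API; the real types
// come from @cloudflare/workers-types.
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  first(): Promise<unknown>;
}
interface D1Database {
  prepare(sql: string): D1PreparedStatement;
}

// Illustrative row shape (columns taken from this PR's sharp_indicators query).
interface SharpIndicatorRow {
  cid: string;
  clv: number;
}

// Parameterized placeholder + .bind(), a LIMIT, try/catch around the DB call,
// and the `as unknown as <Type>` cast when consuming the result.
async function getSharpIndicator(
  db: D1Database,
  customerId: string
): Promise<SharpIndicatorRow | null> {
  try {
    const row = await db
      .prepare('SELECT cid, clv FROM sharp_indicators WHERE cid = ? LIMIT 1')
      .bind(customerId) // never string-interpolate customerId into the SQL
      .first();
    return row ? (row as unknown as SharpIndicatorRow) : null;
  } catch (err) {
    // propagate a meaningful error instead of swallowing it
    throw new Error(`sharp_indicators lookup failed: ${String(err)}`);
  }
}
```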
src/{index.ts,mcp/**/*.ts,tools/**/*.ts,interceptors/**/*.ts}
CodeRabbit inference engine (.cursor/rules/endpoint-routing.mdc)
src/{index.ts,mcp/**/*.ts,tools/**/*.ts,interceptors/**/*.ts}: Always generate and use a requestId; include it in logs and in JSON responses
Always include CORS headers (Access-Control-Allow-*) on responses and handle OPTIONS preflight
Use standardized error handling: try/catch with JSON error payload { error, message, requestId } and proper headers including CORS
Log request duration with requestId (startTime, duration in ms) upon completion
Files:
src/mcp/handlers/aiSteamDetection.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
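The routing guideline above (requestId in logs and JSON responses, CORS headers, OPTIONS preflight, duration logging) can be sketched as a wrapper. The wrapper name and header values here are assumptions, not code from this PR:

```typescript
// Illustrative CORS header set; the project may restrict origins further.
const CORS_HEADERS: Record<string, string> = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type, Authorization',
};

type Handler = (request: Request, requestId: string) => Promise<Response>;

// Hypothetical wrapper: generates a requestId, handles OPTIONS preflight,
// logs duration, and returns a structured JSON error with CORS on failure.
function withRequestTracking(handler: Handler): (request: Request) => Promise<Response> {
  return async (request: Request) => {
    const requestId = Date.now().toString(36);
    const startTime = Date.now();
    if (request.method === 'OPTIONS') {
      return new Response(null, { status: 204, headers: CORS_HEADERS });
    }
    try {
      const response = await handler(request, requestId);
      console.log(`[${requestId}] completed in ${Date.now() - startTime}ms`);
      return response;
    } catch (error) {
      console.error(`[${requestId}] failed`, error);
      return new Response(
        JSON.stringify({ error: 'Internal Error', message: 'request failed', requestId }),
        { status: 500, headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' } }
      );
    }
  };
}
```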
src/mcp/handlers/**/*.ts
CodeRabbit inference engine (.cursor/rules/endpoint-routing.mdc)
Create new MCP handler modules under src/mcp/handlers/ matching the tool name and exported signature
src/mcp/handlers/**/*.ts: Place each new MCP tool handler in src/mcp/handlers as its own module exporting an async function that returns MCPToolResult and uses MCPEnv
Always return isError: true in MCPToolResult for failure cases within handlers
Validate all args before executing any database queries in handlers
Use a dedicated TypeScript interface/type for handler args and type the args parameter accordingly; import MCPToolResult and MCPEnv from ../types
Optimize database queries in handlers to target under 500ms execution time
src/mcp/handlers/**/*.ts: Create MCP tool handlers under src/mcp/handlers/ and validate args before DB queries
MCP handlers must return MCPToolResult format
Place all MCP tool handlers under src/mcp/handlers as individual TypeScript files (e.g., steamMoves.ts, riskConcentration.ts)
Files:
src/mcp/handlers/aiSteamDetection.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/handlers/aiSharpAnalysis.ts
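A minimal handler skeleton following these rules might look like the following. `MCPToolResult`/`MCPEnv` normally come from `../types`, so simplified local stand-ins are declared here to keep the sketch self-contained, and `getExampleTool` is a hypothetical tool name:

```typescript
// Stand-ins for the project's MCP types.
interface MCPToolResult {
  content: { type: 'text'; text: string }[];
  isError?: boolean;
}
type MCPEnv = Record<string, unknown>; // real env carries D1 bindings, etc.

// Dedicated args type for the handler.
interface ExampleToolArgs {
  customerId: string;
}

async function getExampleTool(
  args: Record<string, unknown>,
  _env: MCPEnv
): Promise<MCPToolResult> {
  // Validate all args before executing any database queries.
  if (typeof args.customerId !== 'string' || args.customerId.length === 0) {
    return {
      content: [{ type: 'text', text: 'Invalid args: customerId (non-empty string) is required' }],
      isError: true, // failures always set isError: true
    };
  }
  const { customerId } = args as unknown as ExampleToolArgs;
  // ...a parameterized, LIMITed D1 query would go here, targeting <500ms...
  return {
    content: [{ type: 'text', text: JSON.stringify({ customerId }, null, 2) }],
    isError: false,
  };
}
```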
**/*.{ts,tsx,js,jsx}
CodeRabbit inference engine (.cursorrules)
**/*.{ts,tsx,js,jsx}: Use Bun runtime and tooling: bun commands only; use Bun file APIs; leverage Bun native APIs; and use processManager for spawning
Never use parseFloat() for stakes; use Number() instead
Use explicit UTC timestamps instead of new Date() defaults
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
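A quick illustration of the `Number()` vs `parseFloat()` rule and the explicit-UTC rule: `parseFloat("12.5abc")` silently returns 12.5, while `Number("12.5abc")` yields NaN, so malformed stake input is rejected. `validateStake` and `utcTimestamp` are hypothetical helper names, not functions from this PR:

```typescript
// Reject anything that is not a clean positive number.
function validateStake(raw: string): number | null {
  const stake = Number(raw); // Number(), never parseFloat(), for stakes
  if (Number.isNaN(stake) || stake <= 0) return null;
  return stake;
}

// Explicit UTC timestamp instead of an implicit new Date() default.
function utcTimestamp(): string {
  return new Date(Date.now()).toISOString();
}
```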
**/*.{ts,tsx}
CodeRabbit inference engine (.cursorrules)
**/*.{ts,tsx}: Use standardized error classes by importing from src/utils/error-handler.ts
Always use try-catch or an asyncHandler wrapper around async request handlers
Return structured error responses and avoid exposing internal details
Always cast D1 results with `as unknown as` when necessary
Always handle database errors
Always validate all inputs using Zod schemas before processing
Always free WASM memory in finally blocks
Always generate a requestId (Date.now().toString(36)) and prefix logs with it
Always include CORS headers and handle OPTIONS preflight for HTTP endpoints
Always use processManager.spawn() with timeouts and cleanup in finally; never leave zombie processes
**/*.{ts,tsx}: Never use string interpolation in SQL queries; always use parameterized queries with placeholders and .bind(...)
All API responses must include CORS headers (e.g., Access-Control-Allow-Origin) when returning Response objects
All console logs should be structured to include the requestId prefix
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
**/*.{ts,tsx,sql}
CodeRabbit inference engine (.cursorrules)
Database queries must be parameterized (use .bind()), never string interpolation
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
**/*.{ts,tsx,jsx,js}
CodeRabbit inference engine (.cursorrules)
Always sanitize output to prevent XSS; never trust user input
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
{src/index.ts,src/interceptors/**/*.ts,src/tools/intelligence/**/*.ts,src/mcp/**/*.ts}
CodeRabbit inference engine (CLAUDE.md)
Validate all external inputs and outputs with Zod (API params, BetTicker metadata, tool I/O, MCP messages)
Files:
src/mcp/handlers/aiSteamDetection.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
**/*.{ts,tsx,js,mjs,cjs}
CodeRabbit inference engine (.cursor/rules/bun-runtime.mdc)
**/*.{ts,tsx,js,mjs,cjs}: Use the shared process manager (import processManager from './tests/utils/process-cleanup') for all process spawning
Do not call Bun.spawn() directly
Do not use Node.js child_process for spawning
Use Bun file APIs (Bun.file, Bun.write, etc.) instead of Node.js fs or fs/promises
Prefer Bun's native APIs (crypto, HTTP client, streams, etc.) over external libraries when those features are needed
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
**/*
CodeRabbit inference engine (.cursor/rules/code-searchability.mdc)
Avoid hardcoded URLs in code; reference configuration/constants instead
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, BRANCHING_STRATEGY.md, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
src/mcp/handlers/*.ts
CodeRabbit inference engine (.cursor/rules/file-naming.mdc)
MCP handler filenames must be camelCase to match tool names
Files:
src/mcp/handlers/aiSteamDetection.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/handlers/aiSharpAnalysis.ts
**/*.{ts,tsx,js,jsx,mjs,cjs}
CodeRabbit inference engine (.cursor/rules/production-security.mdc)
**/*.{ts,tsx,js,jsx,mjs,cjs}: Do not use parseFloat() for betting stakes; use Number() and validate with isNaN() and > 0 before accepting input
For Cloudflare D1 queries, never interpolate values into SQL strings; use parameterized queries with .prepare(...).bind(...)
When sending to Cloudflare Queues, limit each batch to a maximum of 100 messages (chunk sends accordingly)
Avoid using new Date() without arguments in Edge Workers; prefer Date.now() or explicit UTC (e.g., toISOString())
Manage WASM memory explicitly: allocate, use, and free in a finally block to prevent leaks
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
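The 100-message queue batch limit above amounts to chunking before sending. A sketch with a simplified stand-in for the Cloudflare Queues producer interface (`QueueLike` and `sendAll` are illustrative names):

```typescript
// Simplified stand-in; the real Queue binding comes from workers-types.
interface QueueLike<T> {
  sendBatch(messages: { body: T }[]): Promise<void>;
}

const MAX_QUEUE_BATCH = 100; // Cloudflare Queues batch-size limit

// Split an array into batches of at most `size` items.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Send any number of bodies without ever exceeding the per-batch limit.
async function sendAll<T>(queue: QueueLike<T>, bodies: T[]): Promise<void> {
  for (const batch of chunk(bodies, MAX_QUEUE_BATCH)) {
    await queue.sendBatch(batch.map((body) => ({ body })));
  }
}
```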
!(README|CLAUDE).md
CodeRabbit inference engine (.cursor/rules/root-organization.mdc)
Do not place Markdown files in the root except README.md and CLAUDE.md
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, BRANCHING_STRATEGY.md, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
{src,scripts}/**/*.ts
CodeRabbit inference engine (.cursor/rules/security-patterns.mdc)
{src,scripts}/**/*.ts: Always validate all inputs before use (type, length, bounds)
Always use parameterized queries; never interpolate variables into SQL strings
Always sanitize output to prevent XSS when rendering or embedding user-controlled data
Always validate JWT or auth tokens (verify signature and expiration) before trusting them
Always implement rate limiting for request handling paths
Always hash sensitive data (e.g., passwords) using strong algorithms (e.g., bcrypt) before storage
Always use secure, generic error messages; log details internally only
Always sanitize/redact sensitive fields (password, token, apiKey) in logs
Always use restrictive CORS configuration (specific origins, methods, headers)
Always validate required environment variables before use
Never expose internal error details (messages, stacks) in HTTP responses
Never hardcode secrets; load secrets from environment or secret manager and validate presence
Never omit or ignore standard security headers in responses (e.g., HSTS, X-Frame-Options, CSP, X-Content-Type-Options)
Use authenticated encryption for sensitive data at rest/in transit (e.g., AES-GCM)
Validate uploaded files (MIME type whitelist and maximum size)
Prevent path traversal by validating and normalizing file paths; reject paths with '..', absolute roots, or excessive length
Set standard security headers on all HTTP responses (e.g., HSTS, CSP, X-Frame-Options, X-Content-Type-Options, X-XSS-Protection)
Files:
src/mcp/handlers/aiSteamDetection.ts, src/api/ai-chat.ts, src/mcp/handlers/aiRiskReport.ts, src/mcp/toolRegistry.ts, src/index.ts, src/mcp/handlers/aiSharpAnalysis.ts, src/mcp/tools.ts
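As one example of the security-header rule above, a helper that stamps the standard headers onto a response. The exact CSP policy and HSTS max-age shown are assumptions to tune per deployment, not values from this PR:

```typescript
// Illustrative standard security headers (HSTS, CSP, X-Frame-Options, etc.).
const SECURITY_HEADERS: Record<string, string> = {
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  'Content-Security-Policy': "default-src 'self'",
  'X-Frame-Options': 'DENY',
  'X-Content-Type-Options': 'nosniff',
  'X-XSS-Protection': '1; mode=block',
};

// Copy the response, adding every security header without touching the body.
function withSecurityHeaders(response: Response): Response {
  const headers = new Headers(response.headers);
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    headers.set(name, value);
  }
  return new Response(response.body, { status: response.status, headers });
}
```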
src/api/**/*.ts
CodeRabbit inference engine (.cursor/rules/api-patterns.mdc)
src/api/**/*.ts: In endpoint code, throw typed errors via Errors.* and format failures with createErrorResponse(error, requestId, path)
Use specific Errors.* types to map to correct HTTP codes (badRequest, unauthorized, forbidden, notFound, validationError, rateLimit, databaseError, serviceUnavailable, timeout)
Prefer wrapping handlers with asyncHandler(...) so thrown errors are auto-caught and formatted
At the start of each handler, validate required environment variables with validateEnv(env, [...])
Validate incoming input using validateQueryParams/Validators and return validationErrorResponse on failure
Use parameterized D1 queries with .prepare(...).bind(...).all() and never interpolate variables into SQL strings
When reading D1 results, cast results via unknown as needed to strongly-typed arrays/objects
Always include standard CORS headers in responses: Access-Control-Allow-Origin '*', Content-Type 'application/json' (and methods/headers as needed)
Generate a requestId at the start of each request (Date.now().toString(36)), log with it, include it in success payloads and in createErrorResponse
Structure success responses as JSON with data payload plus metadata: requestId and timestamp; include pagination fields when applicable
Log appropriately per request: incoming, success summaries, and errors, all prefixed with [requestId]
All endpoint handlers must generate and use a requestId (e.g., include requestId: string in handler signature and use it in logs)
Files:
src/api/ai-chat.ts
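The `Errors.*` factory and `createErrorResponse` referenced above live in the project's `src/utils/error-handler.ts`, which is not part of this PR, so the following is a hedged approximation of the mapping the guideline describes. The `ApiError` class and the status code per error kind are assumptions:

```typescript
// Error kinds named by the guideline, mapped to plausible HTTP codes.
type ErrorKind =
  | 'badRequest' | 'unauthorized' | 'forbidden' | 'notFound'
  | 'validationError' | 'rateLimit' | 'databaseError'
  | 'serviceUnavailable' | 'timeout';

const STATUS_BY_KIND: Record<ErrorKind, number> = {
  badRequest: 400,
  unauthorized: 401,
  forbidden: 403,
  notFound: 404,
  validationError: 422,
  rateLimit: 429,
  databaseError: 500,
  serviceUnavailable: 503,
  timeout: 504,
};

class ApiError extends Error {
  constructor(public kind: ErrorKind, message: string) {
    super(message);
  }
  get status(): number {
    return STATUS_BY_KIND[this.kind];
  }
}

// Format any thrown value as the guideline's { error, message, requestId }
// JSON payload with CORS headers; unknown errors get a generic message so
// internals are never exposed.
function createErrorResponse(error: unknown, requestId: string, path: string): Response {
  const apiError = error instanceof ApiError
    ? error
    : new ApiError('databaseError', 'Internal error');
  return new Response(
    JSON.stringify({ error: apiError.kind, message: apiError.message, requestId, path }),
    {
      status: apiError.status,
      headers: { 'Access-Control-Allow-Origin': '*', 'Content-Type': 'application/json' },
    }
  );
}
```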
src/mcp/toolRegistry.ts
CodeRabbit inference engine (.cursor/rules/endpoint-routing.mdc)
Register new MCP handlers in src/mcp/toolRegistry.ts (add case/registry entry)
Register each new tool in src/mcp/toolRegistry.ts by importing its handler and adding a switch case mapping the tool name to the handler call
Register each MCP tool in src/mcp/toolRegistry.ts
Map MCP tool names to handler functions in src/mcp/toolRegistry.ts (registry-based dispatch)
Files:
src/mcp/toolRegistry.ts
src/index.ts
CodeRabbit inference engine (.cursor/rules/cloudflare-workers.mdc)
src/index.ts: Export a default Worker with an async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise handler in src/index.ts
Route requests by URL pathname in the fetch handler with cases for /health, /mcp, and /tools/getBettingExposure
Always include CORS headers (Access-Control-Allow-Origin, Access-Control-Allow-Methods, Access-Control-Allow-Headers) on responses for dashboard access
Generate and log a per-request ID (e.g., Date.now().toString(36)) for request tracing
Use ctx.waitUntil(...) for non-blocking async work (e.g., KV writes) during request handling
Implement a queue consumer via export default { async queue(batch: MessageBatch, env: Env) } and acknowledge each message (message.ack()) after processing
Implement scheduled tasks via export default { async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext) }
Apply cost guard logic from src/guards/costCap.ts to enforce hard cost limits, graceful degradation, and request quotas in request handling
src/index.ts: All HTTP requests must be handled in src/index.ts via the top-level fetch() handler
Main request flow in fetch(): generate requestId, parse URL, log with requestId, handle OPTIONS (CORS preflight), route by pathname, and return responses with CORS headers
Core system endpoints must use exact pathname matches for /health, /logs, /diagnostics, and /system-status
Route /cloud/api/Manager/getBetTicker to the BetTicker interceptor handler for transparent proxying with KV interception
Handle BetTicker history/analysis API under /interceptor/* via handleInterceptorAPI() supporting /interceptor/{history,response,stats} routes
Route /mcp requests to the MCP JSON-RPC handler (handleMCPRequest)
Route /tools/* to handleMCPTools() for intelligence tools endpoints
For complex routes, delegate to separate handler functions (e.g., handleDiagnostics, handleLogs, handleSystemStatus, handleMCPTools, handleInterceptorAPI)
When adding a new core/system endpoint, add an exact-m...
Files:
src/index.ts
{src/index.ts,src/queues/**/*.ts,src/schedules/**/*.ts,src/interceptors/**/*.ts,src/mcp/server.ts}
CodeRabbit inference engine (CLAUDE.md)
Use ctx.waitUntil() for async side-effects to avoid delaying responses in Workers
Files:
src/index.ts
**/*.md
CodeRabbit inference engine (.cursor/rules/documentation.mdc)
All documentation Markdown files must be placed under docs/ (only README.md, LICENSE, and CLAUDE.md may live in the repository root)
Files:
BRANCHING_STRATEGY.md
*.md
CodeRabbit inference engine (.cursor/rules/documentation.mdc)
Do not create Markdown documentation files in the repository root except README.md, LICENSE, and CLAUDE.md
Files:
BRANCHING_STRATEGY.md
*
CodeRabbit inference engine (.cursor/rules/root-organization.mdc)
Only these files are permitted in the repository root: README.md, LICENSE, CLAUDE.md, package.json, package-lock.json, bun.lock, tsconfig.json, wrangler.toml, wrangler.staging.toml, wrangler.production.toml
Files:
BRANCHING_STRATEGY.md
src/mcp/tools.ts
CodeRabbit inference engine (.cursor/rules/endpoint-routing.mdc)
Register new tool definitions with schema in src/mcp/tools.ts
src/mcp/tools.ts: Define each new tool in src/mcp/tools.ts with name, clear description, and a JSON Schema inputSchema including required and optional properties
Provide clear, user-facing descriptions for each tool definition
Define MCP tools and schemas in src/mcp/tools.ts
Define MCP tool schemas, descriptions, and parameters in src/mcp/tools.ts
Files:
src/mcp/tools.ts
Learnings (13)
Learning: 2025-10-07T19:32:40.879Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursor/rules/mcp-integration.mdc:0-0
Timestamp: 2025-10-07T19:32:40.879Z
Learning: Applies to src/mcp/toolRegistry.ts : Register each new tool in src/mcp/toolRegistry.ts by importing its handler and adding a switch case mapping the tool name to the handler call
Applied to files:
src/mcp/toolRegistry.ts
Learning: 2025-10-08T04:34:43.736Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursorrules:0-0
Timestamp: 2025-10-08T04:34:43.736Z
Learning: Applies to src/mcp/toolRegistry.ts : Register each MCP tool in src/mcp/toolRegistry.ts
Applied to files:
src/mcp/toolRegistry.ts, src/mcp/tools.ts
Learning: 2025-10-07T19:31:32.730Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursor/rules/endpoint-routing.mdc:0-0
Timestamp: 2025-10-07T19:31:32.730Z
Learning: Applies to src/mcp/toolRegistry.ts : Register new MCP handlers in src/mcp/toolRegistry.ts (add case/registry entry)
Applied to files:
src/mcp/toolRegistry.ts
Learning: 2025-10-08T04:36:05.383Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: CLAUDE.md:0-0
Timestamp: 2025-10-08T04:36:05.383Z
Learning: Applies to src/mcp/toolRegistry.ts : Map MCP tool names to handler functions in src/mcp/toolRegistry.ts (registry-based dispatch)
Applied to files:
src/mcp/toolRegistry.ts
Learning: 2025-10-07T19:31:32.730Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursor/rules/endpoint-routing.mdc:0-0
Timestamp: 2025-10-07T19:31:32.730Z
Learning: Applies to src/mcp/tools.ts : Register new tool definitions with schema in src/mcp/tools.ts
Applied to files:
src/mcp/toolRegistry.ts, src/mcp/tools.ts
Learning: 2025-10-07T19:31:32.730Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursor/rules/endpoint-routing.mdc:0-0
Timestamp: 2025-10-07T19:31:32.730Z
Learning: Applies to src/index.ts : Extend handleMCPTools switch-case to add new intelligence tool routes
Applied to files:
src/mcp/toolRegistry.ts, src/mcp/tools.ts
Learning: 2025-10-07T19:32:40.879Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursor/rules/mcp-integration.mdc:0-0
Timestamp: 2025-10-07T19:32:40.879Z
Learning: Applies to src/mcp/handlers/**/*.ts : Place each new MCP tool handler in src/mcp/handlers as its own module exporting an async function that returns MCPToolResult and uses MCPEnv
Applied to files:
src/mcp/toolRegistry.ts, src/mcp/tools.ts
Learning: 2025-10-08T04:36:05.383Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: CLAUDE.md:0-0
Timestamp: 2025-10-08T04:36:05.383Z
Learning: Applies to src/mcp/tools.ts : Define MCP tool schemas, descriptions, and parameters in src/mcp/tools.ts
Applied to files:
src/mcp/toolRegistry.ts, src/mcp/tools.ts
Learning: 2025-10-08T04:34:43.736Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursorrules:0-0
Timestamp: 2025-10-08T04:34:43.736Z
Learning: Applies to src/mcp/handlers/**/*.ts : MCP handlers must return MCPToolResult format
Applied to files:
src/mcp/toolRegistry.ts
Learning: 2025-10-07T19:32:40.879Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursor/rules/mcp-integration.mdc:0-0
Timestamp: 2025-10-07T19:32:40.879Z
Learning: Applies to src/mcp/handlers/**/*.ts : Always return isError: true in MCPToolResult for failure cases within handlers
Applied to files:
src/mcp/toolRegistry.ts
Learning: 2025-10-08T04:34:43.736Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursorrules:0-0
Timestamp: 2025-10-08T04:34:43.736Z
Learning: Applies to src/mcp/tools.ts : Define MCP tools and schemas in src/mcp/tools.ts
Applied to files:
src/mcp/tools.ts
Learning: 2025-10-07T19:32:40.879Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursor/rules/mcp-integration.mdc:0-0
Timestamp: 2025-10-07T19:32:40.879Z
Learning: Applies to src/mcp/tools.ts : Provide clear, user-facing descriptions for each tool definition
Applied to files:
src/mcp/tools.ts
Learning: 2025-10-07T19:32:40.879Z
Learnt from: CR
PR: brendadeeznuts1111/betting-brain-v3#0
File: .cursor/rules/mcp-integration.mdc:0-0
Timestamp: 2025-10-07T19:32:40.879Z
Learning: Applies to src/mcp/tools.ts : Define each new tool in src/mcp/tools.ts with name, clear description, and a JSON Schema inputSchema including required and optional properties
Applied to files:
src/mcp/tools.ts
Code graph analysis (7)

src/mcp/handlers/aiSteamDetection.ts (4)
- src/types/api.ts: Env (15-44)
- src/mcp/types.ts: MCPToolResult (56-64)
- src/ai/types.ts: LineMovementData (125-133)
- src/ai/betting-analyzer.ts: BettingAnalyzer (24-320)

src/api/ai-chat.ts (3)
- src/ai/types.ts: AIChatMessage (76-79)
- src/types/api.ts: Env (15-44)
- src/utils/request.ts: CORS_HEADERS (79-84)

src/mcp/handlers/aiRiskReport.ts (3)
- src/types/api.ts: Env (15-44)
- src/mcp/types.ts: MCPToolResult (56-64)
- src/ai/types.ts: ExposureData (138-144)

src/mcp/toolRegistry.ts (3)
- src/mcp/handlers/aiSharpAnalysis.ts: getAISharpAnalysis (11-74)
- src/mcp/handlers/aiSteamDetection.ts: getAISteamDetection (11-72)
- src/mcp/handlers/aiRiskReport.ts: getAIRiskReport (11-70)

src/index.ts (1)
- src/api/ai-chat.ts: handleAIChat (118-235)

src/mcp/handlers/aiSharpAnalysis.ts (4)
- src/types/api.ts: Env (15-44)
- src/mcp/types.ts: MCPToolResult (56-64)
- src/ai/types.ts: CustomerData (113-120)
- src/ai/betting-analyzer.ts: BettingAnalyzer (24-320)

src/mcp/tools.ts (1)
- src/mcp/types.ts: MCPTool (41-49)
GitHub Actions: cursor-rules-check
src/index.ts
[error] 1-1: Permission denied
Additional comments (4)
src/mcp/handlers/aiRiskReport.ts (1)
48-60: Remove the usage/cost access: generateRiskReport doesn't return them.
BettingAnalyzer.generateRiskReport() yields risk metrics only; it has no usage/cost fields. Accessing result.usage/result.cost will throw at runtime. Drop those properties (or extend the analyzer to supply them first) before shipping.
src/mcp/toolRegistry.ts (2)
26-30: AI tool handler imports added correctly. Imports align with new handlers and registry usage. Based on learnings.
204-204: Good: AI tools initialized in registry. Ensures tools are available on first call. Based on learnings.
src/mcp/tools.ts (1)
30-32: AI tool list inclusion looks correct. getAITools() is properly merged into the master tool list.
# 🌿 Kimi K2 AI Integration - Branching Strategy

## Current Branch Structure

### Main Branch

- **Branch:** `main`
- **Status:** Protected, production-ready
- **Last Commit:** `2f0275f` - Phase 1 & 2 Complete
- **Contains:** Core AI integration, bindings, documentation

### Feature Branch (Active)

- **Branch:** `feature/kimi-ai-phase3-api-endpoints`
- **Status:** Active development
- **Purpose:** Phase 3 - API Endpoints & Worker Integration
- **Base:** `main` (up to date)

---

## Development Workflow

### Phase 3 Development (Current)

```bash
# We are here
git branch: feature/kimi-ai-phase3-api-endpoints

# Work on Phase 3
- Create AI chat API endpoint
- Create AI-enhanced MCP tools
- Update worker routing
- Test integration

# Commit progress
git add .
git commit -m "feat: implement Phase 3 features"

# Push to remote
git push origin feature/kimi-ai-phase3-api-endpoints

# When ready, create PR to merge into main
```

---

## Branch Protection Rules

### Main Branch

- ✅ Requires pull request reviews
- ✅ All tests must pass
- ✅ No direct commits (use feature branches)
- ✅ Squash and merge preferred

### Feature Branches

- ✅ Can commit directly
- ✅ Regular commits encouraged
- ✅ Push to remote frequently
- ✅ Merge to main via PR when complete

---

## Phase Progression

### ✅ Phase 1 & 2 (Completed - on main)

- Core AI integration
- Bindings configuration
- Documentation
- **Branch:** `main`
- **Commit:** `2f0275f`

### 🔄 Phase 3 (In Progress - on feature branch)

- API endpoints
- MCP tools
- Worker integration
- **Branch:** `feature/kimi-ai-phase3-api-endpoints`
- **Status:** Active development

### 📋 Phase 4 (Planned)

- Data pipeline connection
- Real-time integration
- **Branch:** TBD (will create when ready)

### 📋 Phase 5 (Planned)

- Testing & validation
- Performance optimization
- **Branch:** TBD

### 📋 Phase 6 (Planned)

- Production deployment
- Monitoring setup
- **Branch:** TBD

---

## Quick Commands

### Switch to feature branch

```bash
git checkout feature/kimi-ai-phase3-api-endpoints
```

### Switch back to main

```bash
git checkout main
```

### Update feature branch with latest main

```bash
git checkout feature/kimi-ai-phase3-api-endpoints
git pull origin main --rebase
```

### Create new feature branch

```bash
git checkout main
git pull origin main
git checkout -b feature/new-feature-name
```

### Push feature branch to remote

```bash
git push origin feature/kimi-ai-phase3-api-endpoints
```

### Create PR (via GitHub CLI)

```bash
gh pr create --base main --head feature/kimi-ai-phase3-api-endpoints \
  --title "Phase 3: API Endpoints & Worker Integration" \
  --body "Implements Phase 3 of Kimi K2 AI integration"
```

---

## Safety Guidelines

### ✅ DO

- Work on feature branches
- Commit frequently with clear messages
- Push to remote regularly
- Test before merging to main
- Use descriptive branch names

### ❌ DON'T

- Commit directly to main
- Force push to main
- Delete branches before merging
- Work on multiple phases in one branch
- Skip testing before PR

---

## Current Status

**Active Branch:** `feature/kimi-ai-phase3-api-endpoints`
**Working Directory:** Clean
**Ready for:** Phase 3 development

✅ **Safe to proceed with Phase 3 implementation!**
Relocate doc under /docs per repo policy.
Repository guidelines forbid new Markdown files in the root (only README.md, LICENSE, CLAUDE.md allowed). Please move this document into docs/ (e.g., docs/branching/BRANCHING_STRATEGY.md) and update any references.
🤖 Prompt for AI Agents
BRANCHING_STRATEGY.md lines 1-157: The file is placed in the repo root but repo
policy only allows README.md, LICENSE, CLAUDE.md at root; move this Markdown
into the docs directory (suggested path: docs/branching/BRANCHING_STRATEGY.md),
update any internal links or CI/docs references that point to the old root path
to the new path, and ensure the new file is added to the repository and the root
copy removed; then run any doc build or link-checker used in CI to verify
references resolve.
```typescript
    const response: AIChatResponse = {
      response: result.response,
      usage: result.usage,
      cost: result.cost,
      context,
      timestamp: new Date().toISOString(),
    };

    return new Response(JSON.stringify(response), {
      status: 200,
      headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
    });
  } catch (error) {
    console.error(`[${requestId}] ❌ AI chat error:`, error);

    const errorResponse: ErrorResponse = {
      error: error instanceof Error ? error.message : 'Unknown error',
      code: 'AI_CHAT_ERROR',
      details: error instanceof Error ? error.stack : undefined,
      timestamp: new Date().toISOString(),
    };

    return new Response(JSON.stringify(errorResponse), {
      status: 500,
      headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
    });
  }
```
Include requestId in responses and stop leaking stack traces.
API guidelines require adding the generated requestId to every success/error payload returned from /api/ai/chat. The current response omits it, breaking tracing. Additionally, the error branch serializes error.stack, which exposes internal details to clients and violates our secure-error policy. Please add requestId to the JSON body and drop stack traces (log them server-side only).
🤖 Prompt for AI Agents
In src/api/ai-chat.ts around lines 208 to 234, the responses don't include the
generated requestId and the error payload leaks error.stack to clients; modify
the success and error JSON bodies to include requestId, and remove/omit the
stack trace from the error response (only log the full error server-side via
console.error). Ensure the AIChatResponse and ErrorResponse objects add
requestId: requestId, and that the error response only returns a safe
message/code/timestamp (no details.stack) while keeping server-side logging of
the error for diagnostics.
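The requested change can be sketched as a small helper. `SafeErrorResponse` and `buildErrorResponse` are hypothetical names for illustration, not the repository's actual API:

```typescript
// Sketch only: requestId travels in the payload; the stack stays in server logs.
// SafeErrorResponse / buildErrorResponse are illustrative names, not the repo's API.
interface SafeErrorResponse {
  error: string;
  code: string;
  requestId: string;
  timestamp: string;
}

function buildErrorResponse(err: unknown, requestId: string): SafeErrorResponse {
  // Full error (including stack) is logged server-side only.
  console.error(`[${requestId}] AI chat error:`, err);
  return {
    error: err instanceof Error ? err.message : 'Unknown error',
    code: 'AI_CHAT_ERROR',
    requestId,
    timestamp: new Date().toISOString(),
  };
}
```

The key property is that the returned object has no `details`/stack field, so nothing internal can leak to the client.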
```typescript
// AI Chat endpoint
if (url.pathname === '/api/ai/chat' && request.method === 'POST') {
  console.log(`[${requestId}] 🤖 AI Chat request`);
  return handleAIChat(request, env);
}
```
Guard /api/ai/chat with cost cap logic.
Per routing rules we must enforce the cost guard from src/guards/costCap.ts on costed AI endpoints. Please apply the guard (e.g., wrap the handler call or early-reject when quotas are exceeded) before delegating to handleAIChat, otherwise runaway usage bypasses the mandated protections.
🤖 Prompt for AI Agents
In src/index.ts around lines 282 to 286, the AI chat route is not protected by
the cost cap guard; import and invoke the cost cap guard from
src/guards/costCap.ts before calling handleAIChat and reject early when quotas
are exceeded. Specifically, call the guard with the request/environment info
right after logging; if the guard indicates quota exceeded, return an
appropriate HTTP error response (e.g., 429 or 402) and do not call handleAIChat,
otherwise proceed to call handleAIChat(request, env).
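The guard pattern can be sketched as follows. The `checkCostCap` shape is an assumption for illustration; the real `src/guards/costCap.ts` API may differ:

```typescript
// Hypothetical guard shape; the real src/guards/costCap.ts API may differ.
type GuardResult = { allowed: boolean; reason?: string };

function checkCostCap(spentUsd: number, capUsd: number): GuardResult {
  return spentUsd < capUsd
    ? { allowed: true }
    : { allowed: false, reason: `spend $${spentUsd} exceeds cap $${capUsd}` };
}

// Routing sketch: reject with 429 before the costed handler ever runs.
function routeAIChat(spentUsd: number, capUsd: number): number {
  const guard = checkCostCap(spentUsd, capUsd);
  if (!guard.allowed) return 429; // early reject; handleAIChat is never called
  return 200; // here the real worker would return handleAIChat(request, env)
}
```

Checking the cap before delegation is what prevents runaway usage: a rejected request never reaches the paid Kimi API call.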
```typescript
import { Env } from '../../types/api';
import { MCPToolResult } from '../types';
import { BettingAnalyzer } from '../../ai/betting-analyzer';
import { CustomerData } from '../../ai/types';
```
🛠️ Refactor suggestion | 🟠 Major
Use MCPEnv and add Zod for input validation.
Handlers should use MCPEnv and validate args with Zod before DB queries. Add requestId threading.
As per coding guidelines
```diff
-import { Env } from '../../types/api';
+import { MCPEnv } from '../../types/api';
 import { MCPToolResult } from '../types';
 import { BettingAnalyzer } from '../../ai/betting-analyzer';
 import { CustomerData } from '../../ai/types';
+import { z } from 'zod';
```

📌 Committable suggestion
⚠️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import { MCPEnv } from '../../types/api';
import { MCPToolResult } from '../types';
import { BettingAnalyzer } from '../../ai/betting-analyzer';
import { CustomerData } from '../../ai/types';
import { z } from 'zod';
```
🤖 Prompt for AI Agents
In src/mcp/handlers/aiSharpAnalysis.ts around lines 6 to 9, the handler
currently imports Env and lacks input validation and requestId propagation;
replace Env with MCPEnv, validate incoming args using a Zod schema before any DB
queries or business logic, and throw/return early on schema failures; ensure a
requestId is accepted (or generated) and threaded through all calls (DB queries,
BettingAnalyzer instantiation/use, and logging) so downstream ops receive it for
tracing.
```typescript
export async function getAISharpAnalysis(args: Record<string, any>, env: Env): Promise<MCPToolResult> {
  try {
    const { customerId, agentID = 'DEMO' } = args;

    if (!customerId) {
      throw new Error('customerId is required');
    }
```
🛠️ Refactor suggestion | 🟠 Major
Add requestId, Zod validation, and structured logging.
Generate requestId (or accept from args), validate inputs, and log with requestId + duration.
As per coding guidelines
```diff
-export async function getAISharpAnalysis(args: Record<string, any>, env: Env): Promise<MCPToolResult> {
-  try {
-    const { customerId, agentID = 'DEMO' } = args;
+export async function getAISharpAnalysis(args: Record<string, any>, env: MCPEnv): Promise<MCPToolResult> {
+  const incomingReqId = typeof (args as any)?.requestId === 'string' ? (args as any).requestId : undefined;
+  const requestId = incomingReqId ?? Date.now().toString(36);
+  const startedAt = Date.now();
+  try {
+    const ArgsSchema = z.object({
+      customerId: z.string().min(1, 'customerId is required'),
+      agentID: z.string().default('DEMO'),
+      requestId: z.string().optional(),
+    });
+    const parsed = ArgsSchema.safeParse(args);
+    if (!parsed.success) {
+      return {
+        content: [
+          {
+            type: 'text',
+            text: JSON.stringify(
+              { error: 'Invalid arguments', issues: parsed.error.flatten(), tool: 'aiSharpAnalysis', requestId, timestamp: new Date().toISOString() },
+              null,
+              2
+            ),
+          },
+        ],
+        isError: true,
+      };
+    }
+    const { customerId, agentID } = parsed.data;
```

📌 Committable suggestion
⚠️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
export async function getAISharpAnalysis(args: Record<string, any>, env: MCPEnv): Promise<MCPToolResult> {
  const incomingReqId = typeof (args as any)?.requestId === 'string' ? (args as any).requestId : undefined;
  const requestId = incomingReqId ?? Date.now().toString(36);
  const startedAt = Date.now();
  try {
    const ArgsSchema = z.object({
      customerId: z.string().min(1, 'customerId is required'),
      agentID: z.string().default('DEMO'),
      requestId: z.string().optional(),
    });
    const parsed = ArgsSchema.safeParse(args);
    if (!parsed.success) {
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(
              {
                error: 'Invalid arguments',
                issues: parsed.error.flatten(),
                tool: 'aiSharpAnalysis',
                requestId,
                timestamp: new Date().toISOString(),
              },
              null,
              2
            ),
          },
        ],
        isError: true,
      };
    }
    const { customerId, agentID } = parsed.data;
```
```typescript
    console.error('AI Sharp Analysis error:', error);
    return {
      content: [{ type: 'text', text: JSON.stringify({ error: error instanceof Error ? error.message : 'Unknown error', tool: 'aiSharpAnalysis', agentID: args.agentID || 'DEMO', timestamp: new Date().toISOString() }, null, 2) }],
      isError: true,
    };
  }
```
🛠️ Refactor suggestion | 🟠 Major
Structure error with requestId and duration; avoid raw error logging.
Include requestId in logs and response; keep details out of user-visible text.
As per coding guidelines
```diff
-  } catch (error) {
-    console.error('AI Sharp Analysis error:', error);
-    return {
-      content: [{ type: 'text', text: JSON.stringify({ error: error instanceof Error ? error.message : 'Unknown error', tool: 'aiSharpAnalysis', agentID: args.agentID || 'DEMO', timestamp: new Date().toISOString() }, null, 2) }],
-      isError: true,
-    };
-  }
+  } catch (error) {
+    const durationMs = Date.now() - startedAt;
+    console.error(`[${requestId}] aiSharpAnalysis error`, { error, durationMs });
+    return {
+      content: [
+        {
+          type: 'text',
+          text: JSON.stringify(
+            {
+              error: error instanceof Error ? error.message : 'Unknown error',
+              tool: 'aiSharpAnalysis',
+              agentID: (args as any)?.agentID ?? 'DEMO',
+              requestId,
+              timestamp: new Date().toISOString(),
+            },
+            null,
+            2
+          ),
+        },
+      ],
+      isError: true,
+    };
+  }
```
🤖 Prompt for AI Agents
In src/mcp/handlers/aiSharpAnalysis.ts around lines 68-73, replace the raw
console.error and the user-visible JSON.stringify of the error with a structured
log that includes requestId and duration, and return a generic error message (no
raw error details) plus requestId and duration. Specifically: obtain or generate
requestId (args.requestId || create one), compute durationMs from the handler
start time (Date.now() - startTime), log the error in a structured way including
requestId, durationMs, agentID, and the error stack/message (for server logs
only), and return content containing a non-sensitive error text and an object
with requestId and durationMs (no full error message or stack) and isError:
true.
```typescript
    const response = {
      tool: 'aiSteamDetection',
      agentID,
      eventId: result.eventId,
      marketType: result.marketType,
      detection: { isSteamMove: result.isSteamMove, confidence: result.confidence, severity: result.severity, reasoning: result.reasoning },
      lineMovement: { before: result.lineMovement.before, after: result.lineMovement.after, change: result.lineMovement.change, changePercent: result.lineMovement.changePercent },
      volumeMovement: { before: result.volumeMovement.before, after: result.volumeMovement.after, change: result.volumeMovement.change, changePercent: result.volumeMovement.changePercent },
      insights: result.insights,
      rawData: { lineBefore: lineMovement.lb, lineAfter: lineMovement.la, volumeBefore: lineMovement.vb, volumeAfter: lineMovement.va, timestamp: lineMovement.ts },
      aiUsage: { inputTokens: result.usage.inputTokens, outputTokens: result.usage.outputTokens, totalTokens: result.usage.totalTokens, cost: result.cost.totalCost },
      timestamp: new Date().toISOString(),
```
Don't read usage/cost from analyzeSteamMove.
BettingAnalyzer.analyzeSteamMove() returns the detection payload only; it doesn't expose usage or cost. Attempting to read those fields will crash the handler. Remove or backfill them by updating the analyzer first.
🤖 Prompt for AI Agents
In src/mcp/handlers/aiSteamDetection.ts around lines 50 to 61, the handler is
reading result.usage and result.cost which do not exist on the payload returned
by BettingAnalyzer.analyzeSteamMove and will crash; either remove the aiUsage
property from the response object (omit
inputTokens/outputTokens/totalTokens/cost) or update
BettingAnalyzer.analyzeSteamMove to include a usage and cost object with those
fields before reading them here; implement one of these fixes and ensure the
response still validates (if removing, return no aiUsage or set it to
null/empty; if updating the analyzer, populate usage and cost with numeric
values).
Actionable comments posted: 2
🧹 Nitpick comments (2)
.husky/pre-commit (1)
13-14: Commented-out tests and linting: verify the timeline for enabling these. Integration tests and linting are currently disabled. While the comments indicate these are temporary ("no tests yet", "will add in next commit"), these are essential quality gates that should be prioritized.
Consider creating GitHub issues to track:
- Adding integration tests for pre-commit
- Enabling linting in the pre-commit hook
Would you like me to help draft issue descriptions for tracking these tasks?
Also applies to: 19-20
PHASE_3_COMPLETE.md (1)
16-16: Optional: Consider using proper heading syntax instead of emphasis. Multiple lines use bold emphasis (`**text**`) where Markdown headings might be more semantically appropriate. While this is a stylistic choice and the current formatting is readable, using proper headings (e.g., `### AI Chat API Endpoint`) would improve document structure and SEO.
For example, on Line 16:

```diff
-**AI Chat API Endpoint**
+### AI Chat API Endpoint
```

This is flagged by markdownlint (MD036) but is entirely optional - the current format is acceptable if you prefer the visual style.
Also applies to: 63-63, 103-103, 145-145, 186-186, 200-200, 221-221, 246-246
📋 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📄 Files selected for processing (5)

- `.editorconfig` (1 hunks)
- `.eslintrc.json` (1 hunks)
- `.husky/pre-commit` (1 hunks)
- `.prettierrc.json` (1 hunks)
- `PHASE_3_COMPLETE.md` (1 hunks)

✅ Files skipped from review due to trivial changes (1)

- `.editorconfig`

🧰 Additional context used

📓 Path-based instructions (5)

`!(README|CLAUDE).md`

📄 CodeRabbit inference engine (.cursor/rules/root-organization.mdc)

Do not place Markdown files in the root except README.md and CLAUDE.md

Files:

- `.husky/pre-commit`
- `.prettierrc.json`
- `.eslintrc.json`
- `PHASE_3_COMPLETE.md`
**/*.md
📄 CodeRabbit inference engine (.cursor/rules/documentation.mdc)
All documentation Markdown files must be placed under docs/ (only README.md, LICENSE, and CLAUDE.md may live in the repository root)
Files:
PHASE_3_COMPLETE.md
*.md
📄 CodeRabbit inference engine (.cursor/rules/documentation.mdc)
Do not create Markdown documentation files in the repository root except README.md, LICENSE, and CLAUDE.md
Files:
PHASE_3_COMPLETE.md
**/*
📄 CodeRabbit inference engine (.cursor/rules/code-searchability.mdc)
Avoid hardcoded URLs in code; reference configuration/constants instead
Files:
PHASE_3_COMPLETE.md
*
📄 CodeRabbit inference engine (.cursor/rules/root-organization.mdc)
Only these files are permitted in the repository root: README.md, LICENSE, CLAUDE.md, package.json, package-lock.json, bun.lock, tsconfig.json, wrangler.toml, wrangler.staging.toml, wrangler.production.toml
Files:
PHASE_3_COMPLETE.md
🪛 GitHub Actions: cursor-rules-check
PHASE_3_COMPLETE.md
[error] 1-1: PHASE_3_COMPLETE.md: command not found.
🪛 markdownlint-cli2 (0.18.1)
PHASE_3_COMPLETE.md
16-16: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
63-63: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
103-103: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
145-145: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
186-186: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
200-200: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
221-221: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
246-246: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
🔇 Additional comments (2)
.husky/pre-commit (2)
6-11: LGTM! Formatting check implementation looks solid. The formatting check properly fails fast with a clear error message and exit code, guiding developers to run `bun run fmt` to fix issues.
17-17: Clarify security-check behavior in pre-commit hook
Line 17 (`bun run security || echo "⚠️ Security check skipped"`) currently silences failures. Choose one:

- Enforce checks by failing the commit:

```diff
-bun run security || echo "⚠️ Security check skipped"
+echo "🔒 Running security checks…"
+bun run security || {
+  echo "❌ Security check failed. Please address security issues before committing."
+  exit 1
+}
```

- Keep skips but document intent:

```diff
-bun run security || echo "⚠️ Security check skipped"
+# Security checks are optional during development
+bun run security || echo "⚠️ Security check skipped"
```
```shell
# Pre-commit hook with integrated testing system

echo "🔍 Running pre-commit checks..."
echo "�� Running pre-commit checks..."
```
Fix the corrupted emoji character.
The echo statement contains a corrupted Unicode character (`��`) instead of a proper emoji. This is likely due to an encoding issue and will display incorrectly in terminals.
Apply this diff to fix the encoding issue:
```diff
-echo "�� Running pre-commit checks..."
+echo "🔍 Running pre-commit checks..."
```

📌 Committable suggestion
⚠️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
echo "🔍 Running pre-commit checks..."
```
🤖 Prompt for AI Agents
.husky/pre-commit line 4: the echo statement contains a corrupted Unicode glyph (`��`); replace it with a valid emoji or plain ASCII text (for example "🔧" or "[pre-commit]") and ensure the file is saved with UTF-8 encoding (no BOM) so terminals display it correctly; update the echo line to use the chosen valid character or text and commit the file.
# Phase 3: API Endpoints & Worker Integration - COMPLETE ✅

## Summary

Successfully implemented Phase 3 of the Kimi K2 AI integration with betting-brain-v3!

**Commit:** `558fa44` - "feat: Add Phase 3 - AI API endpoints & MCP tools"
**Branch:** `feature/kimi-ai-phase3-api-endpoints`
**Files:** 5 new, 3 modified (+684 lines)

---
## ✅ Files Created

### 1. `src/api/ai-chat.ts` (235 lines)

**AI Chat API Endpoint**

**Route:** `POST /api/ai/chat`

**Features:**
- ✅ Request validation (messages array, role/content validation)
- ✅ Platform context enrichment from D1 database
- ✅ Cost tracking and usage reporting
- ✅ Comprehensive error handling
- ✅ CORS headers support

**Request Schema:**
```typescript
interface AIChatRequest {
  messages: AIChatMessage[];
  includeContext?: boolean; // Default: true
  maxTokens?: number;       // Default: 2000
}
```

**Response Schema:**
```typescript
interface AIChatResponse {
  response: string;
  usage: { inputTokens, outputTokens, totalTokens };
  cost: { inputCost, outputCost, totalCost };
  context?: { totalCustomers, avgCLV, avgWinRate };
  timestamp: string;
}
```

**Example Request:**
```bash
curl -X POST https://fantasy402.com/api/ai/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Analyze customer CUST_12345"}
    ],
    "includeContext": true,
    "maxTokens": 2000
  }'
```

---
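The request validation described above can be sketched as a standalone check. `validateChatRequest` is an illustrative name; the handler's actual validation logic may differ:

```typescript
// Illustrative validation for the AIChatRequest shape; not the handler's actual code.
interface AIChatMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

function validateChatRequest(body: unknown): { ok: boolean; error?: string } {
  const b = body as { messages?: AIChatMessage[] };
  if (!Array.isArray(b?.messages) || b.messages.length === 0) {
    return { ok: false, error: 'messages must be a non-empty array' };
  }
  for (const m of b.messages) {
    if (!['user', 'assistant', 'system'].includes(m.role)) {
      return { ok: false, error: `invalid role: ${String(m.role)}` };
    }
    if (typeof m.content !== 'string' || m.content.length === 0) {
      return { ok: false, error: 'content must be a non-empty string' };
    }
  }
  return { ok: true };
}
```

Rejecting malformed bodies before touching D1 or the AI model keeps error responses cheap and deterministic.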
### 2. `src/mcp/handlers/aiSharpAnalysis.ts` (74 lines)

**AI Sharp Customer Analysis Tool**

**MCP Tool Name:** `aiSharpAnalysis`

**Features:**
- ✅ Queries D1 `sharp_indicators` table
- ✅ AI-powered customer profiling
- ✅ Sharp score calculation (0-100)
- ✅ Confidence rating
- ✅ Actionable recommendations
- ✅ Detailed insights and reasoning

**Input Parameters:**
```typescript
{
  customerId: string;  // Required
  agentID?: string;    // Optional, default: 'DEMO'
}
```

**Output:**
```typescript
{
  tool: 'aiSharpAnalysis',
  customerId: string,
  analysis: {
    sharpScore: number,      // 0-100
    confidence: number,      // 0-100%
    recommendation: string,  // SHARP | RECREATIONAL | MONITOR
    reasoning: string
  },
  indicators: { clv, winRate, actionCount, netBet },
  insights: string[],
  aiUsage: { inputTokens, outputTokens, totalTokens, cost }
}
```

---
### 3. `src/mcp/handlers/aiSteamDetection.ts` (72 lines)

**AI Steam Move Detection Tool**

**MCP Tool Name:** `aiSteamDetection`

**Features:**
- ✅ Queries D1 `line_movements` table
- ✅ AI-powered line movement analysis
- ✅ Steam move detection
- ✅ Severity assessment (LOW | MEDIUM | HIGH)
- ✅ Volume movement analysis
- ✅ Detailed insights

**Input Parameters:**
```typescript
{
  eventId: string;      // Required
  marketType?: string;  // Optional, default: 'SPREAD'
  agentID?: string;     // Optional, default: 'DEMO'
}
```

**Output:**
```typescript
{
  tool: 'aiSteamDetection',
  eventId: string,
  marketType: string,
  detection: {
    isSteamMove: boolean,
    confidence: number,  // 0-100%
    severity: string,    // LOW | MEDIUM | HIGH
    reasoning: string
  },
  lineMovement: { before, after, change, changePercent },
  volumeMovement: { before, after, change, changePercent },
  insights: string[]
}
```

---
### 4. `src/mcp/handlers/aiRiskReport.ts` (70 lines)

**AI Risk Assessment Tool**

**MCP Tool Name:** `aiRiskReport`

**Features:**
- ✅ Queries D1 `exposure_tracking` table
- ✅ AI-powered risk assessment
- ✅ Risk level classification (LOW | MEDIUM | HIGH | CRITICAL)
- ✅ Exposure breakdown by side
- ✅ Hedge strategy recommendations
- ✅ Actionable insights

**Input Parameters:**
```typescript
{
  eventId: string;   // Required
  agentID?: string;  // Optional, default: 'DEMO'
}
```

**Output:**
```typescript
{
  tool: 'aiRiskReport',
  eventId: string,
  riskAssessment: {
    riskLevel: string,  // LOW | MEDIUM | HIGH | CRITICAL
    totalRisk: number,
    netExposure: number,
    reasoning: string
  },
  exposureBreakdown: [{ side, risk, net, percentage }],
  recommendations: string[],
  hedgeStrategy: { action, amount, reasoning } | null,
  insights: string[]
}
```

---
### 5. `BRANCHING_STRATEGY.md` (157 lines)

**Git Branching Strategy Documentation**

**Contents:**
- Current branch structure
- Development workflow
- Phase progression
- Quick commands
- Safety guidelines

---
## 📝 Files Modified

### 1. `src/index.ts` (+7 lines)

**Worker Routing Updates**

**Changes:**
- ✅ Added import for `handleAIChat`
- ✅ Added route: `POST /api/ai/chat`
- ✅ Added request logging

**Code:**
```typescript
import { handleAIChat } from './api/ai-chat';

// AI Chat endpoint
if (url.pathname === '/api/ai/chat' && request.method === 'POST') {
  console.log(`[${requestId}] 🤖 AI Chat request`);
  return handleAIChat(request, env);
}
```

---
### 2. `src/mcp/toolRegistry.ts` (+20 lines)

**MCP Tool Registry Updates**

**Changes:**
- ✅ Added imports for AI tool handlers
- ✅ Added `registerAITools()` function
- ✅ Registered 3 AI tools in toolRegistry Map
- ✅ Called `registerAITools()` in `initializeRegistry()`

**Code:**
```typescript
// Import AI-enhanced MCP tool handlers
import { getAISharpAnalysis } from './handlers/aiSharpAnalysis';
import { getAISteamDetection } from './handlers/aiSteamDetection';
import { getAIRiskReport } from './handlers/aiRiskReport';

function registerAITools() {
  toolRegistry.set('aiSharpAnalysis', getAISharpAnalysis);
  toolRegistry.set('aiSteamDetection', getAISteamDetection);
  toolRegistry.set('aiRiskReport', getAIRiskReport);
}
```

---
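The registry pattern above can be sketched end-to-end with a simplified dispatcher. Handler signatures here are deliberately reduced relative to the real `MCPToolResult` type, and `callTool` is an illustrative name:

```typescript
// Hypothetical dispatch sketch: a Map-based registry resolving a tool by name.
// Signatures are simplified; the real handlers return MCPToolResult promises.
type ToolHandler = (args: Record<string, unknown>) => string;

const registry = new Map<string, ToolHandler>();
registry.set('aiSharpAnalysis', (args) => `sharp:${String(args.customerId)}`);

function callTool(name: string, args: Record<string, unknown>): string {
  const handler = registry.get(name);
  if (!handler) {
    throw new Error(`Unknown tool: ${name}`); // unregistered tools fail fast
  }
  return handler(args);
}
```

Failing fast on an unknown name is what makes the `registerAITools()` call in `initializeRegistry()` important: tools must be registered before the first dispatch.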
### 3. `src/mcp/tools.ts` (+49 lines)

**MCP Tool Definitions**

**Changes:**
- ✅ Added `getAITools()` function
- ✅ Added 3 AI tool definitions with JSON schemas
- ✅ Integrated into `getMCPTools()` export

**Tool Definitions:**
```typescript
{
  name: 'aiSharpAnalysis',
  description: 'AI-powered customer profiling and sharp detection using Kimi K2',
  inputSchema: {
    type: 'object',
    properties: {
      customerId: { type: 'string', description: 'Customer ID to analyze' },
      agentID: { type: 'string', description: 'Agent ID to scope to (optional)', default: 'DEMO' }
    },
    required: ['customerId']
  }
}
```

---
## 🎯 Integration Points

### API Endpoint Integration
- ✅ Route registered in `src/index.ts`
- ✅ Handler implemented in `src/api/ai-chat.ts`
- ✅ CORS headers configured
- ✅ Error handling implemented
- ✅ Cost tracking enabled

### MCP Tool Integration
- ✅ Tools registered in `toolRegistry.ts`
- ✅ Tool definitions added to `tools.ts`
- ✅ Handlers implemented in `src/mcp/handlers/`
- ✅ D1 database queries integrated
- ✅ AI analyzer integration complete

### Data Pipeline Integration
- ✅ D1 `sharp_indicators` table → aiSharpAnalysis
- ✅ D1 `line_movements` table → aiSteamDetection
- ✅ D1 `exposure_tracking` table → aiRiskReport
- ✅ Platform context from D1 → AI chat endpoint

---
## 📊 Statistics

**Total Changes:**
- **Files Created:** 5
- **Files Modified:** 3
- **Lines Added:** 684
- **API Endpoints:** 1 (POST /api/ai/chat)
- **MCP Tools:** 3 (aiSharpAnalysis, aiSteamDetection, aiRiskReport)
- **D1 Tables Integrated:** 3 (sharp_indicators, line_movements, exposure_tracking)

---
## 🚀 Next Steps

### Phase 4: Data Pipeline & Real-time Integration
- Connect live data feeds to AI tools
- Implement real-time analysis triggers
- Add WebSocket support for streaming AI responses
- Optimize D1 queries for performance

### Phase 5: Testing & Validation
- Write unit tests for AI handlers
- Write integration tests for API endpoints
- Write E2E tests for MCP tools
- Performance testing and optimization

### Phase 6: Production Deployment
- Deploy to Cloudflare Workers
- Configure monitoring and alerting
- Set up cost tracking dashboards
- Production documentation

---
| ## β Phase 3 Complete! | ||
|
|
||
| **Status:** Ready for PR review and merge to main | ||
| **Branch:** `feature/kimi-ai-phase3-api-endpoints` | ||
| **Commit:** `558fa44` | ||
| **Date:** October 8, 2025 | ||
|
|
||
| All Phase 3 objectives achieved! π | ||
|
Move documentation to docs/ directory per coding guidelines.
This file violates the project's coding guidelines, which state: "All documentation Markdown files must be placed under docs/ (only README.md, LICENSE, and CLAUDE.md may live in the repository root)."
Additionally, the pipeline failure PHASE_3_COMPLETE.md: command not found suggests the file location is causing issues with the cursor-rules-check workflow.
As per coding guidelines.
Apply this fix:
#!/bin/bash
# Move the file to the correct location
mkdir -p docs
git mv PHASE_3_COMPLETE.md docs/PHASE_3_COMPLETE.md

Then update any references to this file in other documentation or scripts to point to docs/PHASE_3_COMPLETE.md.
🧰 Tools

🪛 GitHub Actions: cursor-rules-check
[error] 1-1: PHASE_3_COMPLETE.md: command not found.

🪛 markdownlint-cli2 (0.18.1)
16-16: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
63-63: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
103-103: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
145-145: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
186-186: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
200-200: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
221-221: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
246-246: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
🤖 Prompt for AI Agents
In PHASE_3_COMPLETE.md around lines 1-337 the documentation lives in the
repository root which violates the guideline that all markdown docs (except
README.md, LICENSE, CLAUDE.md) must live under docs/ and is triggering the
cursor-rules-check pipeline error; move the file into docs/ by creating the docs
directory if missing and performing a git mv PHASE_3_COMPLETE.md
docs/PHASE_3_COMPLETE.md, then update any references to PHASE_3_COMPLETE.md in
documentation, scripts, CI/workflows or README files to point to
docs/PHASE_3_COMPLETE.md and commit the change so the pipeline no longer treats
the file as a misplaced executable.
- Add GET /api/ai/health for system health monitoring
  - Check Kimi API connectivity
  - Check D1 database connectivity
  - Return detailed status and table counts
- Add POST /api/ai/query for natural language queries
  - Parse intent from natural language
  - Support sharp analysis, steam detection, risk reports
  - Support agent performance queries
  - Auto-route to appropriate MCP tools
- Update src/index.ts with new routes

Files: 2 new, 1 modified (+400 lines)
Actionable comments posted: 4
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (3)
- src/api/ai-health.ts (1 hunks)
- src/api/ai-query.ts (1 hunks)
- src/index.ts (2 hunks)

🚧 Files skipped from review as they are similar to previous changes (1)
- src/index.ts
🧰 Additional context used

📓 Path-based instructions (11)

src/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/cloudflare-workers.mdc)

Use the Env, BetTickerSnifferEnv, and MCPEnv types from src/types/api.ts across TypeScript sources
Use @cloudflare/workers-types for Cloudflare Worker TypeScript definitions across source files

src/**/*.ts: Always use parameterized queries with .bind() for all D1 queries
Never use string interpolation in SQL for D1 queries
Always cast D1 results with `as unknown as <Type>` when consuming D1Result
Always limit query results (use LIMIT) in D1 queries
Always handle database errors around D1 operations (try/catch and propagate meaningful errors)
Never forget to bind parameters when executing D1 queries
Never ignore database errors from D1 operations

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursorrules)

**/*.{ts,tsx,js,jsx}: Use Bun runtime and tooling: bun commands only; use Bun file APIs; leverage Bun native APIs; and use processManager for spawning
Never use parseFloat() for stakes; use Number() instead
Use explicit UTC timestamps instead of new Date() defaults

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursorrules)

**/*.{ts,tsx}: Use standardized error classes by importing from src/utils/error-handler.ts
Always use try-catch or an asyncHandler wrapper around async request handlers
Return structured error responses and avoid exposing internal details
Always cast D1 results with `as unknown as` when necessary
Always handle database errors
Always validate all inputs using Zod schemas before processing
Always free WASM memory in finally blocks
Always generate a requestId (Date.now().toString(36)) and prefix logs with it
Always include CORS headers and handle OPTIONS preflight for HTTP endpoints
Always use processManager.spawn() with timeouts and cleanup in finally; never leave zombie processes

**/*.{ts,tsx}: Never use string interpolation in SQL queries; always use parameterized queries with placeholders and .bind(...)
All API responses must include CORS headers (e.g., Access-Control-Allow-Origin) when returning Response objects
All console logs should be structured to include the requestId prefix

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
**/*.{ts,tsx,sql}

📄 CodeRabbit inference engine (.cursorrules)

Database queries must be parameterized (use .bind()), never string interpolation

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
**/*.{ts,tsx,jsx,js}

📄 CodeRabbit inference engine (.cursorrules)

Always sanitize output to prevent XSS; never trust user input

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
src/api/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/api-patterns.mdc)

src/api/**/*.ts: In endpoint code, throw typed errors via Errors.* and format failures with createErrorResponse(error, requestId, path)
Use specific Errors.* types to map to correct HTTP codes (badRequest, unauthorized, forbidden, notFound, validationError, rateLimit, databaseError, serviceUnavailable, timeout)
Prefer wrapping handlers with asyncHandler(...) so thrown errors are auto-caught and formatted
At the start of each handler, validate required environment variables with validateEnv(env, [...])
Validate incoming input using validateQueryParams/Validators and return validationErrorResponse on failure
Use parameterized D1 queries with .prepare(...).bind(...).all() and never interpolate variables into SQL strings
When reading D1 results, cast results via unknown as needed to strongly-typed arrays/objects
Always include standard CORS headers in responses: Access-Control-Allow-Origin '*', Content-Type 'application/json' (and methods/headers as needed)
Generate a requestId at the start of each request (Date.now().toString(36)), log with it, include it in success payloads and in createErrorResponse
Structure success responses as JSON with data payload plus metadata: requestId and timestamp; include pagination fields when applicable
Log appropriately per request: incoming, success summaries, and errors, all prefixed with `[requestId]`
All endpoint handlers must generate and use a requestId (e.g., include requestId: string in handler signature and use it in logs)

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
**/*.{ts,tsx,js,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/bun-runtime.mdc)

**/*.{ts,tsx,js,mjs,cjs}: Use the shared process manager (import processManager from './tests/utils/process-cleanup') for all process spawning
Do not call Bun.spawn() directly
Do not use Node.js child_process for spawning
Use Bun file APIs (Bun.file, Bun.write, etc.) instead of Node.js fs or fs/promises
Prefer Bun's native APIs (crypto, HTTP client, streams, etc.) over external libraries when those features are needed

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
**/*

📄 CodeRabbit inference engine (.cursor/rules/code-searchability.mdc)

Avoid hardcoded URLs in code; reference configuration/constants instead

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
**/*.{ts,tsx,js,jsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/production-security.mdc)

**/*.{ts,tsx,js,jsx,mjs,cjs}: Do not use parseFloat() for betting stakes; use Number() and validate with isNaN() and > 0 before accepting input
For Cloudflare D1 queries, never interpolate values into SQL strings; use parameterized queries with .prepare(...).bind(...)
When sending to Cloudflare Queues, limit each batch to a maximum of 100 messages (chunk sends accordingly)
Avoid using new Date() without arguments in Edge Workers; prefer Date.now() or explicit UTC (e.g., toISOString())
Manage WASM memory explicitly: allocate, use, and free in a finally block to prevent leaks

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
!(README|CLAUDE).md

📄 CodeRabbit inference engine (.cursor/rules/root-organization.mdc)

Do not place Markdown files in the root except README.md and CLAUDE.md

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
{src,scripts}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/security-patterns.mdc)

{src,scripts}/**/*.ts: Always validate all inputs before use (type, length, bounds)
Always use parameterized queries; never interpolate variables into SQL strings
Always sanitize output to prevent XSS when rendering or embedding user-controlled data
Always validate JWT or auth tokens (verify signature and expiration) before trusting them
Always implement rate limiting for request handling paths
Always hash sensitive data (e.g., passwords) using strong algorithms (e.g., bcrypt) before storage
Always use secure, generic error messages; log details internally only
Always sanitize/redact sensitive fields (password, token, apiKey) in logs
Always use restrictive CORS configuration (specific origins, methods, headers)
Always validate required environment variables before use
Never expose internal error details (messages, stacks) in HTTP responses
Never hardcode secrets; load secrets from environment or secret manager and validate presence
Never omit or ignore standard security headers in responses (e.g., HSTS, X-Frame-Options, CSP, X-Content-Type-Options)
Use authenticated encryption for sensitive data at rest/in transit (e.g., AES-GCM)
Validate uploaded files (MIME type whitelist and maximum size)
Prevent path traversal by validating and normalizing file paths; reject paths with '..', absolute roots, or excessive length
Set standard security headers on all HTTP responses (e.g., HSTS, CSP, X-Frame-Options, X-Content-Type-Options, X-XSS-Protection)

Files:
- src/api/ai-health.ts
- src/api/ai-query.ts
🧬 Code graph analysis (2)

src/api/ai-health.ts (2)
- src/types/api.ts (1)
  - Env (15-44)
- src/utils/request.ts (1)
  - CORS_HEADERS (79-84)

src/api/ai-query.ts (6)
- src/types/api.ts (1)
  - Env (15-44)
- src/mcp/handlers/aiSharpAnalysis.ts (1)
  - getAISharpAnalysis (11-74)
- src/mcp/handlers/aiSteamDetection.ts (1)
  - getAISteamDetection (11-72)
- src/mcp/handlers/aiRiskReport.ts (1)
  - getAIRiskReport (11-70)
- src/ai/betting-analyzer.ts (1)
  - BettingAnalyzer (24-320)
- src/utils/request.ts (1)
  - CORS_HEADERS (79-84)
Additional comments (1)

src/api/ai-health.ts (1)

24-69: LGTM - Helper functions are well-implemented.

The `checkDatabase` and `checkKimiAPI` helper functions properly handle errors and return appropriate status values. The database check uses a safe query to count tables, and the Kimi API check validates the API key before making the request.
```typescript
export async function handleHealthCheck(request: Request, env: Env): Promise<Response> {
  console.log('🏥 Health check request received');

  try {
    // Check Kimi API
    const kimiStatus = await checkKimiAPI(env.KIMI_API_KEY);

    // Check databases
    const databases = await Promise.all([
      checkDatabase(env.ANALYTICS, 'ANALYTICS'),
      checkDatabase(env.RAW_FEED_DB, 'RAW_FEED_DB'),
    ]);

    // Determine overall status
    const allDbsConnected = databases.every(db => db.status === 'connected');
    const kimiConnected = kimiStatus === 'connected';

    let status: 'ok' | 'degraded' | 'error';
    if (kimiConnected && allDbsConnected) {
      status = 'ok';
    } else if (kimiConnected || allDbsConnected) {
      status = 'degraded';
    } else {
      status = 'error';
    }

    const response: HealthCheckResponse = {
      status,
      kimi: kimiStatus,
      databases,
      timestamp: new Date().toISOString(),
      version: '3.0.0-phase3',
    };

    console.log('✅ Health check complete:', status);

    return new Response(JSON.stringify(response, null, 2), {
      status: status === 'ok' ? 200 : status === 'degraded' ? 206 : 503,
      headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
    });
  } catch (error) {
    console.error('❌ Health check error:', error);

    const errorResponse: HealthCheckResponse = {
      status: 'error',
      kimi: 'disconnected',
      databases: [],
      timestamp: new Date().toISOString(),
      version: '3.0.0-phase3',
    };

    return new Response(JSON.stringify(errorResponse, null, 2), {
      status: 503,
      headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
    });
  }
}
```
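The ok/degraded/error decision in handleHealthCheck is a small truth table over the two connectivity flags. Factored out as a pure function (a sketch for illustration, not code from this PR), it is trivial to unit-test:

```typescript
// Sketch only — mirrors the branch logic in handleHealthCheck above.
type Overall = 'ok' | 'degraded' | 'error';

function overallStatus(kimiConnected: boolean, allDbsConnected: boolean): Overall {
  if (kimiConnected && allDbsConnected) return 'ok';       // everything reachable → 200
  if (kimiConnected || allDbsConnected) return 'degraded'; // partial outage → 206
  return 'error';                                          // nothing reachable → 503
}
```

For example, `overallStatus(true, false)` returns `'degraded'`, which maps to the 206 response path in the handler.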
Add requestId and follow endpoint guidelines.

The main handler is missing several required elements per coding guidelines:

- No requestId generation: All endpoint handlers must generate and use a requestId (e.g., `Date.now().toString(36)`).
- Logs missing requestId prefix: Lines 75, 108, 115 should include a `[${requestId}]` prefix.
- Response missing requestId: Lines 100-106 (success) and 117-122 (error) should include requestId in the payload.
- No environment validation: Should use `validateEnv(env, [...])` at the start.
- No asyncHandler wrapper: Consider wrapping the handler for consistent error formatting.
Apply these changes:

```diff
 export async function handleHealthCheck(request: Request, env: Env): Promise<Response> {
+  const requestId = Date.now().toString(36);
-  console.log('🏥 Health check request received');
+  console.log(`[${requestId}] 🏥 Health check request received`);

   try {
     // Check Kimi API
     const kimiStatus = await checkKimiAPI(env.KIMI_API_KEY);

     // Check databases
     const databases = await Promise.all([
       checkDatabase(env.ANALYTICS, 'ANALYTICS'),
       checkDatabase(env.RAW_FEED_DB, 'RAW_FEED_DB'),
     ]);

     // Determine overall status
     const allDbsConnected = databases.every(db => db.status === 'connected');
     const kimiConnected = kimiStatus === 'connected';

     let status: 'ok' | 'degraded' | 'error';
     if (kimiConnected && allDbsConnected) {
       status = 'ok';
     } else if (kimiConnected || allDbsConnected) {
       status = 'degraded';
     } else {
       status = 'error';
     }

     const response: HealthCheckResponse = {
       status,
       kimi: kimiStatus,
       databases,
+      requestId,
       timestamp: new Date().toISOString(),
       version: '3.0.0-phase3',
     };

-    console.log('✅ Health check complete:', status);
+    console.log(`[${requestId}] ✅ Health check complete:`, status);

     return new Response(JSON.stringify(response, null, 2), {
       status: status === 'ok' ? 200 : status === 'degraded' ? 206 : 503,
       headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
     });
   } catch (error) {
-    console.error('❌ Health check error:', error);
+    console.error(`[${requestId}] ❌ Health check error:`, error);

     const errorResponse: HealthCheckResponse = {
       status: 'error',
       kimi: 'disconnected',
       databases: [],
+      requestId,
       timestamp: new Date().toISOString(),
       version: '3.0.0-phase3',
     };

     return new Response(JSON.stringify(errorResponse, null, 2), {
       status: 503,
       headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
     });
   }
 }
```

Also update the `HealthCheckResponse` interface:

```diff
 interface HealthCheckResponse {
   status: 'ok' | 'degraded' | 'error';
   kimi: 'connected' | 'disconnected' | 'not_configured';
   databases: {
     name: string;
     status: 'connected' | 'error';
     tables?: number;
   }[];
+  requestId: string;
   timestamp: string;
   version: string;
 }
```

As per coding guidelines: "All endpoint handlers must generate and use a requestId", "All console logs should be structured to include the requestId prefix", and "Structure success responses as JSON with data payload plus metadata: requestId and timestamp".
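For reference, the requestId convention these guidelines cite is just the epoch-millisecond timestamp rendered in base 36:

```typescript
// Illustration of the guideline's requestId convention, not project code.
const requestId = Date.now().toString(36);

// Base-36 uses only digits 0-9 and letters a-z, so the id is URL- and log-safe.
console.log(`[${requestId}] incoming request`);
```

Note that two requests arriving in the same millisecond share an id, so this is a log-correlation aid rather than a globally unique key.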
🤖 Prompt for AI Agents
In src/api/ai-health.ts around lines 74 to 130: add an initial call to
validateEnv(env, ['ANALYTICS', 'RAW_FEED_DB', 'KIMI_API_KEY']), generate const
requestId = Date.now().toString(36) at the top of the handler, and prefix all
console logs on lines ~75, ~108 and ~115 with [${requestId}]. Include requestId
in both success and error JSON payloads (add requestId alongside status,
timestamp, version and data), and ensure the handler is exported/wrapped with
asyncHandler for consistent error formatting. Also update the
HealthCheckResponse interface to include requestId:string and move the actual
health data into a data field per endpoint guidelines.
```typescript
function parseQuery(query: string): { intent: string; params: Record<string, any> } {
  const lowerQuery = query.toLowerCase();

  // Sharp customer analysis
  if (lowerQuery.includes('sharp') || lowerQuery.includes('customer') || lowerQuery.includes('cust_')) {
    const customerMatch = query.match(/cust[_-]?(\w+)/i);
    if (customerMatch) {
      return {
        intent: 'sharp_analysis',
        params: { customerId: `CUST_${customerMatch[1]}` },
      };
    }
  }

  // Steam move detection
  if (lowerQuery.includes('steam') || lowerQuery.includes('line move') || lowerQuery.includes('movement')) {
    // Try to extract team names
    const teams = query.match(/(\w+)\s+(?:vs?\.?|@)\s+(\w+)/i);
    if (teams) {
      return {
        intent: 'steam_detection',
        params: {
          homeTeam: teams[2],
          awayTeam: teams[1],
        },
      };
    }

    // Try to extract event ID
    const eventMatch = query.match(/([A-Z]+_[A-Z0-9_]+)/);
    if (eventMatch) {
      return {
        intent: 'steam_detection',
        params: { eventId: eventMatch[1] },
      };
    }
  }

  // Risk report
  if (lowerQuery.includes('risk') || lowerQuery.includes('exposure') || lowerQuery.includes('hedge')) {
    const eventMatch = query.match(/([A-Z]+_[A-Z0-9_]+)/);
    if (eventMatch) {
      return {
        intent: 'risk_report',
        params: { eventId: eventMatch[1] },
      };
    }

    // General risk report
    return {
      intent: 'risk_report',
      params: { date: new Date().toISOString().split('T')[0] },
    };
  }

  // Agent performance
  if (lowerQuery.includes('agent') || lowerQuery.includes('top') || lowerQuery.includes('performance')) {
    return {
      intent: 'agent_performance',
      params: { period: 'week' },
    };
  }

  // Default to general chat
  return {
    intent: 'general_chat',
    params: { query },
  };
}
```
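The extraction patterns in parseQuery can be exercised in isolation. The regexes below are the ones used above; the sample customer/event strings are made up for illustration:

```typescript
// Customer ID extraction: optional separator, then the trailing id token.
const customerMatch = 'flag CUST_12345 as sharp'.match(/cust[_-]?(\w+)/i);
// → customerMatch[1] === '12345', which parseQuery prefixes to 'CUST_12345'

// Team extraction: "<away> vs <home>" (also accepts "v.", "vs.", or "@").
const teams = 'any steam on Lakers vs Celtics?'.match(/(\w+)\s+(?:vs?\.?|@)\s+(\w+)/i);
// → teams[1] === 'Lakers' (away), teams[2] === 'Celtics' (home)

// Event ID extraction: uppercase prefix, underscore, then uppercase/digit run.
const eventMatch = 'risk for NBA_LAL_BOS_20251008'.match(/([A-Z]+_[A-Z0-9_]+)/);
// → eventMatch[1] === 'NBA_LAL_BOS_20251008'
```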
Add input validation and fix Date usage.

The function has several issues that should be addressed:

- Line 87: Uses `new Date()` without explicit UTC. Per coding guidelines, prefer `Date.now()` or explicit UTC timestamps instead of `new Date()` defaults.
- No input validation: The query string is not validated for length or sanitized before regex operations, which could lead to performance issues or unexpected behavior with malformed input.
- No error handling: Regex operations could fail or produce unexpected results with certain inputs.
function parseQuery(query: string): { intent: string; params: Record<string, any> } {
+ // Validate input
+ if (!query || query.length > 1000) {
+ throw new Error('Invalid query: must be between 1 and 1000 characters');
+ }
+
const lowerQuery = query.toLowerCase();
// Sharp customer analysis
if (lowerQuery.includes('sharp') || lowerQuery.includes('customer') || lowerQuery.includes('cust_')) {
const customerMatch = query.match(/cust[_-]?(\w+)/i);
if (customerMatch) {
return {
intent: 'sharp_analysis',
params: { customerId: `CUST_${customerMatch[1]}` },
};
}
}
// Steam move detection
if (lowerQuery.includes('steam') || lowerQuery.includes('line move') || lowerQuery.includes('movement')) {
// Try to extract team names
const teams = query.match(/(\w+)\s+(?:vs?\.?|@)\s+(\w+)/i);
if (teams) {
return {
intent: 'steam_detection',
params: {
homeTeam: teams[2],
awayTeam: teams[1],
},
};
}
// Try to extract event ID
const eventMatch = query.match(/([A-Z]+_[A-Z0-9_]+)/);
if (eventMatch) {
return {
intent: 'steam_detection',
params: { eventId: eventMatch[1] },
};
}
}
// Risk report
if (lowerQuery.includes('risk') || lowerQuery.includes('exposure') || lowerQuery.includes('hedge')) {
const eventMatch = query.match(/([A-Z]+_[A-Z0-9_]+)/);
if (eventMatch) {
return {
intent: 'risk_report',
params: { eventId: eventMatch[1] },
};
}
// General risk report
return {
intent: 'risk_report',
- params: { date: new Date().toISOString().split('T')[0] },
+ params: { date: new Date(Date.now()).toISOString().split('T')[0] },
};
}
// Agent performance
if (lowerQuery.includes('agent') || lowerQuery.includes('top') || lowerQuery.includes('performance')) {
return {
intent: 'agent_performance',
params: { period: 'week' },
};
}
// Default to general chat
return {
intent: 'general_chat',
params: { query },
};
}As per coding guidelines: "Use explicit UTC timestamps instead of new Date() defaults" and "Always validate all inputs before use (type, length, bounds)".
🤖 Prompt for AI Agents
In src/api/ai-query.ts around lines 36 to 104, validate and harden parseQuery:
ensure query is a non-empty string (typeof check, trim, enforce a max length
e.g. 1000 chars) and return a safe default error or general_chat if validation
fails; sanitize/truncate the string before running regexes; wrap the parsing
logic in a try/catch to handle unexpected regex errors and fall back to intent
'general_chat'; guard each regex result with null checks before using capture
groups; and replace the Date usage that returns the current date with an
explicit UTC-based timestamp (e.g. derive the ISO date via Date.now() or new
Date(Date.now()).toISOString().split('T')[0]) so the risk_report date is
generated using an explicit UTC timestamp.
```typescript
async function executeSharpAnalysis(params: Record<string, any>, env: Env): Promise<any> {
  const result = await getAISharpAnalysis(params, env);

  if (result.isError) {
    throw new Error(result.content[0].text);
  }

  return JSON.parse(result.content[0].text);
}

/**
 * Execute steam detection
 */
async function executeSteamDetection(params: Record<string, any>, env: Env): Promise<any> {
  // If we have team names, construct event ID
  if (params.homeTeam && params.awayTeam) {
    const today = new Date().toISOString().split('T')[0].replace(/-/g, '');
    params.eventId = `NBA_${params.awayTeam.toUpperCase()}_${params.homeTeam.toUpperCase()}_${today}`;
    params.marketType = 'SPREAD';
  }

  const result = await getAISteamDetection(params, env);

  if (result.isError) {
    throw new Error(result.content[0].text);
  }

  return JSON.parse(result.content[0].text);
}

/**
 * Execute risk report
 */
async function executeRiskReport(params: Record<string, any>, env: Env): Promise<any> {
  const result = await getAIRiskReport(params, env);

  if (result.isError) {
    throw new Error(result.content[0].text);
  }

  return JSON.parse(result.content[0].text);
}
```
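The inline event-ID construction in `executeSteamDetection` is a natural candidate for extraction into a pure helper so the format can be unit-tested without a Worker environment. `buildEventId` is a hypothetical name, and the sport prefix stays hard-coded to `NBA_` as in the original:

```typescript
// Hypothetical pure helper mirroring executeSteamDetection's inline logic.
// Accepting the date as a parameter makes the format deterministic in tests.
function buildEventId(awayTeam: string, homeTeam: string, date: Date = new Date()): string {
  // YYYYMMDD in UTC, matching the toISOString().split('T')[0] approach
  const yyyymmdd = date.toISOString().split('T')[0].replace(/-/g, '');
  return `NBA_${awayTeam.toUpperCase()}_${homeTeam.toUpperCase()}_${yyyymmdd}`;
}
```

Note the original always stamps today's UTC date, so a game that tips off after midnight UTC would get a different ID than one built the evening before; parameterizing the date also makes that edge case visible.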
**Add error handling for JSON parsing.**

The executor functions parse JSON without error handling, which could cause the handler to crash if the result content is malformed:

- Line 116 in `executeSharpAnalysis`
- Line 136 in `executeSteamDetection`
- Line 149 in `executeRiskReport`
Add try-catch around `JSON.parse` operations:

```diff
 async function executeSharpAnalysis(params: Record<string, any>, env: Env): Promise<any> {
   const result = await getAISharpAnalysis(params, env);

   if (result.isError) {
     throw new Error(result.content[0].text);
   }

-  return JSON.parse(result.content[0].text);
+  try {
+    return JSON.parse(result.content[0].text);
+  } catch (error) {
+    throw new Error(`Failed to parse sharp analysis result: ${error instanceof Error ? error.message : 'Unknown error'}`);
+  }
 }
```

Apply similar changes to `executeSteamDetection` (line 136) and `executeRiskReport` (line 149).
As per coding guidelines: "Always use try-catch or an asyncHandler wrapper around async request handlers".
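Since all three executors need the identical try/catch, the repetition can be centralized in one helper. `safeJsonParse` is a hypothetical utility, not an existing function in this codebase; the `label` argument names the calling tool so parse failures stay attributable:

```typescript
// Hypothetical helper: wraps JSON.parse so malformed AI output surfaces
// as a descriptive, labeled error instead of a raw SyntaxError.
function safeJsonParse<T>(text: string, label: string): T {
  try {
    return JSON.parse(text) as T;
  } catch (error) {
    const detail = error instanceof Error ? error.message : 'Unknown error';
    throw new Error(`Failed to parse ${label} result: ${detail}`);
  }
}
```

Each executor would then end with something like `return safeJsonParse(result.content[0].text, 'sharp analysis');`, keeping the error-message format consistent across tools.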
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
async function executeSharpAnalysis(params: Record<string, any>, env: Env): Promise<any> {
  const result = await getAISharpAnalysis(params, env);

  if (result.isError) {
    throw new Error(result.content[0].text);
  }

  try {
    return JSON.parse(result.content[0].text);
  } catch (error) {
    throw new Error(
      `Failed to parse sharp analysis result: ${
        error instanceof Error ? error.message : 'Unknown error'
      }`
    );
  }
}

/**
 * Execute steam detection
 */
async function executeSteamDetection(params: Record<string, any>, env: Env): Promise<any> {
  // If we have team names, construct event ID
  if (params.homeTeam && params.awayTeam) {
    const today = new Date().toISOString().split('T')[0].replace(/-/g, '');
    params.eventId = `NBA_${params.awayTeam.toUpperCase()}_${params.homeTeam.toUpperCase()}_${today}`;
    params.marketType = 'SPREAD';
  }

  const result = await getAISteamDetection(params, env);

  if (result.isError) {
    throw new Error(result.content[0].text);
  }

  return JSON.parse(result.content[0].text);
}

/**
 * Execute risk report
 */
async function executeRiskReport(params: Record<string, any>, env: Env): Promise<any> {
  const result = await getAIRiskReport(params, env);

  if (result.isError) {
    throw new Error(result.content[0].text);
  }

  return JSON.parse(result.content[0].text);
}
```
```typescript
export async function handleAIQuery(request: Request, env: Env): Promise<Response> {
  const requestId = Date.now().toString(36);
  console.log(`[${requestId}] 🔍 AI query request received`);

  try {
    // Validate API key
    if (!env.KIMI_API_KEY || env.KIMI_API_KEY === 'sk-placeholder-use-dotenv-or-secrets') {
      return new Response(JSON.stringify({ error: 'AI service not configured' }), {
        status: 503,
        headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
      });
    }

    // Parse request
    const body = await request.json() as QueryRequest;

    if (!body.query) {
      return new Response(JSON.stringify({ error: 'query is required' }), {
        status: 400,
        headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
      });
    }

    console.log(`[${requestId}] Query: "${body.query}"`);

    // Parse intent
    const { intent, params } = parseQuery(body.query);
    console.log(`[${requestId}] Intent: ${intent}`, params);

    // Add agentID if provided
    if (body.agentID) {
      params.agentID = body.agentID;
    }

    // Execute based on intent
    let result: any;
    let usage: any;
    let cost: any;

    switch (intent) {
      case 'sharp_analysis':
        result = await executeSharpAnalysis(params, env);
        usage = result.aiUsage;
        cost = { totalCost: result.aiUsage?.cost };
        break;

      case 'steam_detection':
        result = await executeSteamDetection(params, env);
        usage = result.aiUsage;
        cost = { totalCost: result.aiUsage?.cost };
        break;

      case 'risk_report':
        result = await executeRiskReport(params, env);
        usage = result.aiUsage;
        cost = { totalCost: result.aiUsage?.cost };
        break;

      case 'agent_performance':
        result = await executeAgentPerformance(params, env);
        break;

      case 'general_chat':
        result = await executeGeneralChat(params, env);
        usage = result.usage;
        cost = result.cost;
        break;

      default:
        throw new Error(`Unknown intent: ${intent}`);
    }

    console.log(`[${requestId}] ✅ Query executed successfully`);

    const response: QueryResponse = {
      query: body.query,
      intent,
      result,
      usage,
      cost,
      timestamp: new Date().toISOString(),
    };

    return new Response(JSON.stringify(response, null, 2), {
      status: 200,
      headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
    });
  } catch (error) {
    console.error(`[${requestId}] ❌ Query error:`, error);

    return new Response(JSON.stringify({
      error: error instanceof Error ? error.message : 'Unknown error',
      timestamp: new Date().toISOString(),
    }), {
      status: 500,
      headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
    });
  }
}
```
**Fix multiple guideline violations in main handler.**

The main handler has several issues that must be addressed:

- Line 221: no dedicated try-catch around `request.json()`; malformed JSON falls through to the generic 500 handler instead of returning a 400.
- Lines 281-288: missing `requestId` in the success response payload.
- Lines 297-303: missing `requestId` in the error response.
- No asyncHandler wrapper: the handler should use `asyncHandler(...)` to auto-catch and format errors.
- No Zod validation: the request body should be validated using a Zod schema.
Apply these fixes:
```diff
+import { z } from 'zod';
+
+const QueryRequestSchema = z.object({
+  query: z.string().min(1).max(1000),
+  agentID: z.string().optional(),
+});
+
 export async function handleAIQuery(request: Request, env: Env): Promise<Response> {
   const requestId = Date.now().toString(36);
   console.log(`[${requestId}] 🔍 AI query request received`);

   try {
     // Validate API key
     if (!env.KIMI_API_KEY || env.KIMI_API_KEY === 'sk-placeholder-use-dotenv-or-secrets') {
       return new Response(JSON.stringify({ error: 'AI service not configured', requestId, timestamp: new Date().toISOString() }), {
         status: 503,
         headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
       });
     }

     // Parse request
-    const body = await request.json() as QueryRequest;
+    let body: QueryRequest;
+    try {
+      const rawBody = await request.json();
+      body = QueryRequestSchema.parse(rawBody);
+    } catch (error) {
+      return new Response(JSON.stringify({
+        error: error instanceof Error ? error.message : 'Invalid request body',
+        requestId,
+        timestamp: new Date().toISOString()
+      }), {
+        status: 400,
+        headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
+      });
+    }

-    if (!body.query) {
-      return new Response(JSON.stringify({ error: 'query is required' }), {
-        status: 400,
-        headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
-      });
-    }

     console.log(`[${requestId}] Query: "${body.query}"`);

     // Parse intent
     const { intent, params } = parseQuery(body.query);
     console.log(`[${requestId}] Intent: ${intent}`, params);

     // Add agentID if provided
     if (body.agentID) {
       params.agentID = body.agentID;
     }

     // Execute based on intent
     let result: any;
     let usage: any;
     let cost: any;

     switch (intent) {
       case 'sharp_analysis':
         result = await executeSharpAnalysis(params, env);
         usage = result.aiUsage;
         cost = { totalCost: result.aiUsage?.cost };
         break;

       case 'steam_detection':
         result = await executeSteamDetection(params, env);
         usage = result.aiUsage;
         cost = { totalCost: result.aiUsage?.cost };
         break;

       case 'risk_report':
         result = await executeRiskReport(params, env);
         usage = result.aiUsage;
         cost = { totalCost: result.aiUsage?.cost };
         break;

       case 'agent_performance':
         result = await executeAgentPerformance(params, env);
         break;

       case 'general_chat':
         result = await executeGeneralChat(params, env);
         usage = result.usage;
         cost = result.cost;
         break;

       default:
         throw new Error(`Unknown intent: ${intent}`);
     }

     console.log(`[${requestId}] ✅ Query executed successfully`);

     const response: QueryResponse = {
       query: body.query,
       intent,
       result,
       usage,
       cost,
+      requestId,
       timestamp: new Date().toISOString(),
     };

     return new Response(JSON.stringify(response, null, 2), {
       status: 200,
       headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
     });
   } catch (error) {
     console.error(`[${requestId}] ❌ Query error:`, error);

     return new Response(JSON.stringify({
       error: error instanceof Error ? error.message : 'Unknown error',
+      requestId,
       timestamp: new Date().toISOString(),
     }), {
       status: 500,
       headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
     });
   }
 }
```

Also update the `QueryResponse` interface to include `requestId`:
```diff
 interface QueryResponse {
   query: string;
   intent: string;
   result: any;
   usage?: {
     inputTokens: number;
     outputTokens: number;
     totalTokens: number;
   };
   cost?: {
     totalCost: number;
   };
+  requestId: string;
   timestamp: string;
 }
```

As per coding guidelines: "Always use try-catch or an asyncHandler wrapper around async request handlers", "Always validate all inputs using Zod schemas before processing", "Structure success responses as JSON with data payload plus metadata: requestId and timestamp", and "include it in success payloads and in createErrorResponse".
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import { z } from 'zod';

const QueryRequestSchema = z.object({
  query: z.string().min(1).max(1000),
  agentID: z.string().optional(),
});

export async function handleAIQuery(request: Request, env: Env): Promise<Response> {
  const requestId = Date.now().toString(36);
  console.log(`[${requestId}] 🔍 AI query request received`);

  try {
    // Validate API key
    if (!env.KIMI_API_KEY || env.KIMI_API_KEY === 'sk-placeholder-use-dotenv-or-secrets') {
      return new Response(JSON.stringify({
        error: 'AI service not configured',
        requestId,
        timestamp: new Date().toISOString(),
      }), {
        status: 503,
        headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
      });
    }

    // Parse & validate request body
    let body: QueryRequest;
    try {
      const rawBody = await request.json();
      body = QueryRequestSchema.parse(rawBody);
    } catch (error) {
      return new Response(JSON.stringify({
        error: error instanceof Error ? error.message : 'Invalid request body',
        requestId,
        timestamp: new Date().toISOString(),
      }), {
        status: 400,
        headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
      });
    }

    console.log(`[${requestId}] Query: "${body.query}"`);

    // Parse intent
    const { intent, params } = parseQuery(body.query);
    console.log(`[${requestId}] Intent: ${intent}`, params);

    // Add agentID if provided
    if (body.agentID) {
      params.agentID = body.agentID;
    }

    // Execute based on intent
    let result: any;
    let usage: any;
    let cost: any;

    switch (intent) {
      case 'sharp_analysis':
        result = await executeSharpAnalysis(params, env);
        usage = result.aiUsage;
        cost = { totalCost: result.aiUsage?.cost };
        break;

      case 'steam_detection':
        result = await executeSteamDetection(params, env);
        usage = result.aiUsage;
        cost = { totalCost: result.aiUsage?.cost };
        break;

      case 'risk_report':
        result = await executeRiskReport(params, env);
        usage = result.aiUsage;
        cost = { totalCost: result.aiUsage?.cost };
        break;

      case 'agent_performance':
        result = await executeAgentPerformance(params, env);
        break;

      case 'general_chat':
        result = await executeGeneralChat(params, env);
        usage = result.usage;
        cost = result.cost;
        break;

      default:
        throw new Error(`Unknown intent: ${intent}`);
    }

    console.log(`[${requestId}] ✅ Query executed successfully`);

    const response: QueryResponse = {
      query: body.query,
      intent,
      result,
      usage,
      cost,
      requestId,
      timestamp: new Date().toISOString(),
    };

    return new Response(JSON.stringify(response, null, 2), {
      status: 200,
      headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
    });
  } catch (error) {
    console.error(`[${requestId}] ❌ Query error:`, error);

    return new Response(JSON.stringify({
      error: error instanceof Error ? error.message : 'Unknown error',
      requestId,
      timestamp: new Date().toISOString(),
    }), {
      status: 500,
      headers: { ...CORS_HEADERS, 'Content-Type': 'application/json' },
    });
  }
}
```
🤖 Prompt for AI Agents
In src/api/ai-query.ts around lines 207-305, the handler must be hardened and
follow project guidelines: wrap the exported handler with the asyncHandler(...)
wrapper and remove any uncaught async boundaries; validate the incoming body
with a Zod schema (define a QueryRequest zod schema and use safeParse on the
parsed JSON) and handle malformed JSON by catching request.json() errors (or
perform JSON.parse in a try/catch before zod), and return a 400 with a
structured error when validation fails; include requestId in both the success
response payload and the error response payload (add requestId to the
QueryResponse interface and ensure createErrorResponse / the error response
include it); and ensure all code paths that set usage/cost produce defined
values (initialize usage and cost to null/defaults). Make minimal edits: add Zod
schema + safeParse, wrap handler export with asyncHandler, add try/catch around
await request.json() (or catch parse errors before zod), include requestId and
timestamp in both success and error JSON responses, and update the QueryResponse
type to include requestId.
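The same query/agentID constraints can also be checked without pulling in Zod. This stand-in validator mirrors the schema's rules (a `query` string of 1-1000 characters, an optional string `agentID`); it is an illustrative, dependency-free sketch, not the suggested implementation:

```typescript
interface QueryRequest {
  query: string;
  agentID?: string;
}

type ValidationResult =
  | { ok: true; value: QueryRequest }
  | { ok: false; error: string };

// Mirrors QueryRequestSchema's rules without a zod dependency
function validateQueryRequest(raw: unknown): ValidationResult {
  if (typeof raw !== 'object' || raw === null) {
    return { ok: false, error: 'body must be a JSON object' };
  }
  const body = raw as Record<string, unknown>;
  if (typeof body.query !== 'string' || body.query.length < 1 || body.query.length > 1000) {
    return { ok: false, error: 'query must be a string of 1-1000 characters' };
  }
  if (body.agentID !== undefined && typeof body.agentID !== 'string') {
    return { ok: false, error: 'agentID must be a string when present' };
  }
  return { ok: true, value: { query: body.query, agentID: body.agentID as string | undefined } };
}
```

The discriminated `ValidationResult` plays the role of Zod's `safeParse` return shape: the handler branches on `ok` and returns a 400 with the `error` string instead of catching a thrown `ZodError`.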
## 🚀 Phase 3: AI API Endpoints & Worker Integration

### Summary

Implements Phase 3 of the Kimi K2 AI integration, adding API endpoints and MCP tools for AI-powered betting analysis.

### Changes

✅ **AI Chat API Endpoint** (`POST /api/ai/chat`)

✅ **3 AI-Enhanced MCP Tools**

- `aiSharpAnalysis`: AI-powered customer profiling
- `aiSteamDetection`: AI-powered line movement detection
- `aiRiskReport`: AI-powered risk assessment

✅ **Worker Integration** (`src/index.ts`)

✅ **Documentation**

### Files Changed

### Testing

### Related Issues

Part of Kimi K2 AI Integration (Phases 1-6)

### Checklist

### Next Steps

After merge:

See `PHASE_3_COMPLETE.md` for detailed documentation.
New Features
Documentation
Chores