Multi-provider AI library with intelligent model selection, type-safe context management, and comprehensive provider support.
@aeye (AI TypeScript) is a modern, type-safe AI library for Node.js and TypeScript applications. It provides a unified interface for working with multiple AI providers (OpenAI, OpenRouter, Replicate, etc.) with automatic model selection, cost tracking, streaming support, and extensible architecture.
For a full example of a CLI agent built with aeye, install it with `npm i -g @aeye/cletus` and run `cletus`!
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
const openai = new OpenAIProvider({});
const ai = AI.with<MyContext>().providers({ openai }).create({ /* config */ })
const myTool = ai.tool({ /* name, description, instructions, schema, call, + */ });
const myPrompt = ai.prompt({ /* name, description, content, schema?, config, tools, metadata, + */ });
const myAgent = ai.agent({ /* name, description, refs, call, + */ });
myTool.run(input, ctx?); // same signature with myPrompt & myAgent
myPrompt.get('tools', input, ctx?); // prompts expose tool results, final results, streaming, etc. through get
ai.chat.get(request, ctx?) // or stream
ai.image.generate.get(request, ctx?) // or stream
ai.image.edit.get(request, ctx?) // or stream
ai.image.analyze.get(request, ctx?) // or stream
ai.speech.get(request, ctx?) // or stream
ai.transcribe.get(request, ctx?) // or stream
ai.embed.get(request, ctx?)
ai.models.list() // get(id), search(criteria), select(criteria), refresh()

- 🎯 Multi-Provider Support - Single interface for OpenAI, OpenRouter, Replicate, and custom providers
- 🤖 Intelligent Model Selection - Automatic model selection based on capabilities, cost, speed, and quality
- 💰 Cost Tracking - Built-in token usage and cost calculation with provider-reported costs
- 🔄 Streaming Support - Full streaming support across all compatible capabilities
- 🛡️ Type-Safe - Strongly-typed context and metadata with compiler validation
- 🎨 Comprehensive APIs - Chat, Image Generation, Speech Synthesis, Transcription, Embeddings
- 🔌 Extensible - Custom providers, model handlers, and transformers
- 📊 Model Registry - Centralized model management with external sources
- ⚡ Provider Capability Detection - Automatic detection and validation of provider capabilities
- 🎣 Lifecycle Hooks - Intercept and modify operations at every stage
- 🔧 Model Overrides - Customize model properties without modifying providers
- 📦 Model Sources - External model sources (OpenRouter, custom APIs)
- 🌊 Context Management - Thread context through your entire AI operation
- 🎛️ Fine-Grained Control - Temperature, tokens, stop sequences, tool calling, and more
# Install core packages
npm install @aeye/ai @aeye/core
# Install provider packages as needed
npm install @aeye/openai openai # OpenAI
npm install @aeye/openrouter # OpenRouter (multi-provider)
npm install @aeye/replicate replicate # Replicate
npm install @aeye/aws # AWS

import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
// Create providers
const openai = new OpenAIProvider({
apiKey: process.env.OPENAI_API_KEY!
});
// Create AI instance
const ai = AI.with()
.providers({ openai })
.create();
// Chat completion
const response = await ai.chat.get([
{ role: 'user', content: 'What is TypeScript?' }
]);
console.log(response.content);
// Streaming
for await (const chunk of ai.chat.stream([
{ role: 'user', content: 'Write a poem about AI' }
])) {
if (chunk.content) {
process.stdout.write(chunk.content);
}
}

import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
import { OpenRouterProvider } from '@aeye/openrouter';
const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const openrouter = new OpenRouterProvider({ apiKey: process.env.OPENROUTER_API_KEY! });
const ai = AI.with()
.providers({ openai, openrouter })
.create({
// Automatic model selection criteria
defaultMetadata: {
weights: {
cost: 0.4,
speed: 0.3,
quality: 0.3,
}
}
});
// AI automatically selects the best provider/model
const response = await ai.chat.get([
{ role: 'user', content: 'Explain quantum computing' }
]);

┌─────────────────────────────────────────────────────────┐
│ AI Class │
│ - Context Management │
│ - Model Registry │
│ - Lifecycle Hooks │
└─────────────────┬───────────────────────────────────────┘
│
┌────────┴─────────┐
│ │
┌────▼────┐ ┌─────▼──────┐
│ APIs │ │ Registry │
│ │ │ │
│ • Chat │ │ • Models │
│ • Image │ │ • Search │
│ • Speech│ │ • Select │
│ • Embed │ └────┬───────┘
└────┬────┘ │
│ ┌──────▼──────┐
│ │ Providers │
│ │ │
│ │ • OpenAI │
│ │ • OpenRouter│
│ │ • Replicate │
└─────────┴─────────────┘
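The weighted selection criteria shown above (cost, speed, quality) can be illustrated with a standalone scoring sketch. This is a hypothetical simplification, not the actual @aeye registry implementation; the `Candidate` shape and `selectModel` helper are illustrative only.

```typescript
// Hypothetical sketch of weighted model selection: each candidate is scored
// on normalized cost, speed, and quality, and the highest weighted score wins.
interface Candidate {
  id: string;
  cost: number;    // 0..1, lower is cheaper
  speed: number;   // 0..1, higher is faster
  quality: number; // 0..1, higher is better
}

interface Weights {
  cost: number;
  speed: number;
  quality: number;
}

function selectModel(candidates: Candidate[], weights: Weights): Candidate {
  const score = (c: Candidate) =>
    weights.cost * (1 - c.cost) + // cheaper models score higher
    weights.speed * c.speed +
    weights.quality * c.quality;
  return candidates.reduce((best, c) => (score(c) > score(best) ? c : best));
}

const candidates: Candidate[] = [
  { id: 'gpt-4-turbo', cost: 0.8, speed: 0.5, quality: 0.95 },
  { id: 'gpt-3.5-turbo', cost: 0.1, speed: 0.9, quality: 0.6 },
];

// With cost-heavy weights, the cheaper and faster model wins.
console.log(selectModel(candidates, { cost: 0.4, speed: 0.3, quality: 0.3 }).id); // gpt-3.5-turbo
```

Shifting the weights toward quality would flip the selection toward the stronger model.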
Core types and interfaces for the @aeye framework. Defines the foundational types for requests, responses, providers, and streaming.
npm install @aeye/core

Main AI library with intelligent model selection, context management, and comprehensive APIs. Built on top of @aeye/core.
npm install @aeye/ai @aeye/core

OpenAI provider supporting GPT-4, GPT-3.5, DALL-E, Whisper, TTS, and embeddings. Serves as the base class for OpenAI-compatible providers.
npm install @aeye/openai openai

Features:
- Chat completions (GPT-4, GPT-3.5 Turbo)
- Vision (GPT-4V)
- Reasoning models (o1, o3-mini)
- Image generation (DALL-E 2, DALL-E 3)
- Speech-to-text (Whisper)
- Text-to-speech (TTS)
- Embeddings
- Function calling
- Structured outputs
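Structured outputs mean the model's JSON reply should be validated against an expected shape before use. A minimal, dependency-free type guard can sketch the idea; the `WeatherReport` shape and `parseWeatherReport` helper below are hypothetical, not part of the @aeye/openai API.

```typescript
// Hypothetical sketch: validate a model's JSON output before trusting it.
interface WeatherReport {
  location: string;
  temperature: number;
}

function parseWeatherReport(raw: string): WeatherReport {
  const value: unknown = JSON.parse(raw);
  if (
    typeof value === 'object' && value !== null &&
    typeof (value as any).location === 'string' &&
    typeof (value as any).temperature === 'number'
  ) {
    return value as WeatherReport;
  }
  throw new Error('Model response did not match the expected schema');
}

const report = parseWeatherReport('{"location":"Berlin","temperature":21}');
console.log(report.location, report.temperature); // Berlin 21
```

In practice a schema library like zod (used elsewhere in this README for tool parameters) replaces the hand-written guard.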
OpenRouter provider for unified access to multiple AI providers with automatic fallbacks and competitive pricing.
npm install @aeye/openrouter

Features:
- Multi-provider access (OpenAI, Anthropic, Google, Meta, etc.)
- Automatic fallbacks
- Built-in cost tracking
- Zero Data Retention (ZDR) support
- Provider routing preferences
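The automatic-fallback behavior can be sketched generically: try each provider in order and return the first successful response. The `ChatFn` type and `withFallbacks` helper are illustrative assumptions, not the actual OpenRouter or @aeye API.

```typescript
// Hypothetical sketch of automatic fallbacks across providers.
type ChatFn = (prompt: string) => Promise<string>;

async function withFallbacks(providers: ChatFn[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError ?? new Error('No providers configured');
}

const flaky: ChatFn = async () => { throw new Error('rate limited'); };
const stable: ChatFn = async (p) => `echo: ${p}`;

withFallbacks([flaky, stable], 'hello').then(console.log); // echo: hello
```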
Replicate provider with flexible adapter system for running open-source AI models.
npm install @aeye/replicate replicate

Features:
- Thousands of open-source models
- Model adapters for handling diverse schemas
- Image generation, transcription, embeddings
- Custom model support
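The adapter idea can be sketched in isolation: Replicate models expose diverse input schemas, so an adapter maps a generic request onto a model-specific payload. The shapes below are illustrative assumptions, not the actual @aeye/replicate transformer API.

```typescript
// Hypothetical sketch of a model adapter for diverse input schemas.
interface GenericImageRequest {
  prompt: string;
  width: number;
  height: number;
}

type Adapter = (req: GenericImageRequest) => Record<string, unknown>;

// One imaginary model wants a single "size" string instead of width/height.
const sizeStringAdapter: Adapter = (req) => ({
  prompt: req.prompt,
  size: `${req.width}x${req.height}`,
});

const payload = sizeStringAdapter({ prompt: 'a fox', width: 512, height: 512 });
console.log(payload); // { prompt: 'a fox', size: '512x512' }
```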
import { AI } from '@aeye/ai';
import { OpenAIProvider } from '@aeye/openai';
const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
const ai = AI.with()
.providers({ openai })
.create();
const imageResponse = await ai.image.generate.get({
prompt: 'A serene mountain landscape at sunset',
model: 'dall-e-3',
size: '1024x1024',
quality: 'hd'
});
console.log('Image URL:', imageResponse.images[0].url);

import z from 'zod';
const response = await ai.chat.get([
{ role: 'user', content: 'What is the weather in San Francisco?' }
], {
tools: [
{
name: 'get_weather',
description: 'Get current weather for a location',
parameters: z.object({
location: z.string(),
unit: z.enum(['celsius', 'fahrenheit']).optional(),
}),
}
],
toolChoice: 'auto',
});
if (response.toolCalls) {
for (const toolCall of response.toolCalls) {
console.log('Tool:', toolCall.name);
console.log('Arguments:', JSON.parse(toolCall.arguments));
}
}

import fs from 'fs';
const speechResponse = await ai.speech.get({
text: 'Hello! This is a text-to-speech example.',
model: 'tts-1',
voice: 'alloy',
});
fs.writeFileSync('output.mp3', speechResponse.audioBuffer);

import fs from 'fs';
const audioBuffer = fs.readFileSync('audio.mp3');
const transcription = await ai.transcribe.get({
audio: audioBuffer,
model: 'whisper-1',
language: 'en',
});
console.log('Transcription:', transcription.text);

const embeddingResponse = await ai.embed.get({
texts: [
'The quick brown fox jumps over the lazy dog',
'Machine learning is a subset of artificial intelligence',
],
model: 'text-embedding-3-small',
});
embeddingResponse.embeddings.forEach((item, i) => {
console.log(`Embedding ${i}:`, item.embedding.length, 'dimensions');
});

interface AppContext {
userId: string;
sessionId: string;
timestamp: Date;
}
const ai = AI.with<AppContext>()
.providers({ openai })
.create({
defaultContext: {
timestamp: new Date(),
}
});
const response = await ai.chat.get([
{ role: 'user', content: 'Hello!' }
], {
userId: 'user123',
sessionId: 'session456',
});

const ai = AI.with()
.providers({ openai })
.create({
hooks: {
beforeRequest: async (ctx, request, selected, estimatedTokens, estimatedCost) => {
console.log(`Using model: ${selected.model.id}`);
console.log(`Estimated tokens: ${estimatedTokens}`);
},
afterRequest: async (ctx, request, response, responseComplete, selected, usage, cost) => {
console.log(`Tokens used: ${usage.totalTokens}`);
console.log(`Cost: $${cost}`);
},
}
});

// Explicit model selection
const response = await ai.chat.get(messages, {
metadata: { model: 'gpt-4-turbo' }
});
// Automatic selection with criteria
const response = await ai.chat.get(messages, {
metadata: {
required: ['chat', 'streaming', 'vision'],
optional: ['tools'],
weights: {
cost: 0.3,
speed: 0.4,
quality: 0.3,
},
minContextWindow: 32000,
}
});
// Provider filtering
const response = await ai.chat.get(messages, {
metadata: {
providers: {
allow: ['openai', 'anthropic'],
deny: ['low-quality-provider'],
}
}
});

Create custom providers by implementing the Provider interface or extending existing providers:
import { OpenAIProvider, OpenAIConfig } from '@aeye/openai';
import OpenAI from 'openai';
class CustomProvider extends OpenAIProvider {
readonly name = 'custom';
protected createClient(config: OpenAIConfig) {
return new OpenAI({
apiKey: config.apiKey,
baseURL: 'https://custom-api.example.com/v1',
});
}
}

Fetch models from external sources:
import { OpenRouterModelSource } from '@aeye/openrouter';
const ai = AI.with()
.providers({ openai })
.create({
fetchOpenRouterModels: true, // Auto-fetch all OpenRouter models
});
// Or manually
const source = new OpenRouterModelSource({
apiKey: process.env.OPENROUTER_API_KEY,
});
const models = await source.fetchModels();

Customize model properties:
const ai = AI.with()
.providers({ openai })
.create({
modelOverrides: [
{
modelId: 'gpt-4',
overrides: {
pricing: {
text: { input: 30, output: 60 },
},
},
},
],
});

import { getProviderCapabilities } from '@aeye/ai';
const openai = new OpenAIProvider({ apiKey: '...' });
const caps = getProviderCapabilities(openai);
console.log(caps);
// Set(['chat', 'streaming', 'image', 'audio', 'hearing', 'embedding'])
const openrouter = new OpenRouterProvider({ apiKey: '...' });
const openrouterCaps = getProviderCapabilities(openrouter);
console.log(openrouterCaps.has('image')); // false - OpenRouter doesn't support image generation

interface AIConfig<T> {
// Default context values
defaultContext?: Partial<T>;
// Provider to context loader
provideContext?: (required: T) => Promise<Partial<T>>;
// Default metadata for all requests
defaultMetadata?: Partial<AIBaseMetadata>;
// Model overrides
modelOverrides?: ModelOverride[];
// Default cost per million tokens
defaultCostPerMillionTokens?: number;
// External model sources
modelSources?: ModelSource[];
// Lifecycle hooks
hooks?: AIHooks<T>;
}

Each provider has its own configuration:
// OpenAI
interface OpenAIConfig {
apiKey: string;
baseURL?: string;
organization?: string;
}
// OpenRouter
interface OpenRouterConfig extends OpenAIConfig {
defaultParams?: {
siteUrl?: string;
appName?: string;
allowFallbacks?: boolean;
providers?: {
prefer?: string[];
allow?: string[];
deny?: string[];
};
};
}
// Replicate
interface ReplicateConfig {
apiKey: string;
baseUrl?: string;
transformers?: Record<string, ModelTransformer>;
}

@aeye provides comprehensive cost tracking:
const response = await ai.chat.get(messages);
// Token usage
console.log('Input tokens:', response.usage.text.input);
console.log('Output tokens:', response.usage.text.output);
// Cost (calculated or provider-reported)
console.log('Cost: $', response.usage.cost);
// For providers like OpenRouter, cost is provided by the API
// For others, it's calculated based on model pricing

import { ProviderError, RateLimitError } from '@aeye/openai';
try {
const response = await ai.chat.get(messages);
} catch (error) {
if (error instanceof RateLimitError) {
console.error('Rate limit exceeded');
console.log(`Retry after ${error.retryAfter} seconds`);
} else if (error instanceof ProviderError) {
console.error(`Provider error: ${error.message}`);
console.error('Cause:', error.cause);
} else {
console.error('Unknown error:', error);
}
}

@aeye uses a capability system for model selection:
| Capability | Description | Example Providers |
|---|---|---|
| `chat` | Basic text completion | OpenAI, OpenRouter |
| `streaming` | Real-time response streaming | OpenAI, OpenRouter |
| `image` | Image generation | OpenAI (DALL-E), Replicate |
| `vision` | Image understanding | OpenAI (GPT-4V) |
| `audio` | Text-to-speech | OpenAI (TTS) |
| `hearing` | Speech-to-text | OpenAI (Whisper), Replicate |
| `embedding` | Text embeddings | OpenAI, Replicate |
| `tools` | Function/tool calling | OpenAI, OpenRouter |
| `json` | JSON output mode | OpenAI, OpenRouter |
| `structured` | Structured outputs | OpenAI |
| `reasoning` | Extended reasoning | OpenAI (o1 models) |
| `zdr` | Zero data retention | OpenRouter |
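Capability-based selection can be sketched as a simple filter: keep only models whose capability set covers every required capability. The `ModelInfo` shape and `filterByCapabilities` helper are illustrative, not the actual registry API.

```typescript
// Hypothetical sketch of capability-based model filtering.
interface ModelInfo {
  id: string;
  capabilities: Set<string>;
}

function filterByCapabilities(models: ModelInfo[], required: string[]): ModelInfo[] {
  return models.filter((m) => required.every((cap) => m.capabilities.has(cap)));
}

const registry: ModelInfo[] = [
  { id: 'gpt-4-turbo', capabilities: new Set(['chat', 'streaming', 'vision', 'tools']) },
  { id: 'whisper-1', capabilities: new Set(['hearing']) },
];

const eligible = filterByCapabilities(registry, ['chat', 'vision']);
console.log(eligible.map((m) => m.id)); // [ 'gpt-4-turbo' ]
```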
# Install dependencies
npm install
# Build all packages
npm run build
# Run tests
npm run test
# Clean build artifacts
npm run clean

aeye/
├── packages/
│ ├── core/ # Core types and interfaces
│ ├── ai/ # Main AI library
│ ├── openai/ # OpenAI provider
│ ├── openrouter/ # OpenRouter provider
│ ├── replicate/ # Replicate provider
├── package.json # Root package configuration
└── tsconfig.json # TypeScript configuration
- API Key Security - Never hardcode API keys; use environment variables
- Error Handling - Always wrap AI calls in try-catch blocks
- Streaming - Use streaming for better UX with lengthy responses
- Cost Monitoring - Monitor `response.usage.cost` to track expenses
- Model Selection - Use appropriate models for your use case:
  - GPT-4 for complex tasks
  - GPT-3.5 for simple/fast tasks
  - Specialized models (DALL-E, Whisper) for specific tasks
- Context Management - Use context to thread data through operations
- Provider Selection - Choose providers based on:
  - Cost efficiency
  - Feature availability
  - Reliability/uptime
  - Privacy requirements (ZDR)
- Anthropic Claude provider
- Built-in retry logic with exponential backoff
- Rate limiting utilities
- Caching layer
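Until built-in retry logic lands, a minimal exponential-backoff wrapper can be layered around any AI call. This is a generic sketch, not part of @aeye; the `withRetry` helper and its defaults are assumptions.

```typescript
// Hypothetical retry wrapper with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i + 1 >= attempts) throw err;
      // Wait 250ms, 500ms, 1000ms, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
}

let calls = 0;
const flakyCall = async () => {
  calls++;
  if (calls < 3) throw new Error('transient failure');
  return 'ok';
};

withRetry(flakyCall).then((result) => console.log(result, calls)); // ok 3
```

A real deployment would also respect `RateLimitError.retryAfter` (shown in the error-handling section) instead of a fixed base delay.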
Contributions are welcome! Areas where we'd especially appreciate help:
- New Providers - Anthropic, Google, Cohere, etc.
- Model Adapters - For Replicate and other platforms
- Documentation - Examples, tutorials, guides
- Testing - Unit tests, integration tests
- Bug Fixes - Issue reports and fixes
Please see the main @aeye repository for contribution guidelines.
GPL-3.0 © ClickerMonkey
See LICENSE for details.
- GitHub Issues: https://github.com/ClickerMonkey/aeye/issues
- Documentation: https://github.com/ClickerMonkey/aeye
- Examples: See the `/examples` directory (coming soon)
Built with ❤️ by ClickerMonkey and contributors.
Special thanks to:
- OpenAI for the OpenAI API
- OpenRouter for multi-provider access
- All the open-source AI model creators
- The TypeScript community
Made with TypeScript | GPL-3.0 Licensed | Production Ready