170 changes: 170 additions & 0 deletions docs/guides/extend-adapter.md
@@ -0,0 +1,170 @@
# Extending Adapters with Custom Models

The `extendAdapter` utility allows you to extend existing adapter factories (like `openaiText`, `anthropicText`) with custom model names while maintaining full type safety for input modalities and provider options.

## Basic Usage

```typescript
import { createModel, extendAdapter } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

// Define your custom models using the createModel helper
const myOpenaiModel = createModel('my-fine-tuned-gpt4', ['text', 'image'])
const myOpenaiModelButCooler = createModel('my-fine-tuned-gpt5', ['text', 'image'])

// Create an extended adapter factory - simple API, no type parameters needed
const myOpenai = extendAdapter(openaiText, [
myOpenaiModel,
myOpenaiModelButCooler
])

// Use with original models - full type inference preserved
const gpt4Adapter = myOpenai('gpt-4o')

// Use with custom models - your custom types are applied
const customAdapter = myOpenai('my-fine-tuned-gpt4')

// Works seamlessly with chat()
import { chat } from '@tanstack/ai'

const stream = chat({
adapter: myOpenai('my-fine-tuned-gpt4'),
messages: [{ role: 'user', content: 'Hello!' }]
})
```

## The `createModel` Helper

The `createModel` function provides a clean way to define custom models with full type inference:

```typescript
import { createModel } from '@tanstack/ai'

// Arguments define name and input modalities
const model = createModel(
'my-model', // model name (literal type inferred)
['text', 'image'] // input modalities (tuple type inferred)
)
```
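
At runtime, `createModel` simply wraps its arguments in a plain object, with `modelOptions` present only as a type-level brand (this mirrors the implementation in this PR's `extend-adapter.ts`). A self-contained sketch, with a local name to keep it standalone:

```typescript
// Local sketch of createModel, mirroring the PR's implementation:
// it returns its arguments as a plain object; `modelOptions` exists
// only to carry a type, so it is an empty object at runtime.
function createModelSketch<
  const TName extends string,
  const TInput extends ReadonlyArray<string>,
>(name: TName, input: TInput) {
  return { name, input, modelOptions: {} as unknown }
}

const model = createModelSketch('my-model', ['text', 'image'])
console.log(model.name)  // 'my-model'
console.log(model.input) // [ 'text', 'image' ]
```

The `const` type parameters (TypeScript 5.0+) are what let the literal name and the modality tuple be inferred without `as const` casts on the arguments.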


## Model Definition Structure

Each custom model definition has three properties: `name` (the model identifier), `input` (the supported input modalities), and `modelOptions` (a type-level brand for provider options).

### Defining Input Modalities

The `input` array specifies which content types your model supports:

```typescript
const models = [
createModel('text-only-model', ['text']),
createModel('multimodal-model', ['text', 'image', 'audio']),
] as const
```

Available modalities: `'text'`, `'image'`, `'audio'`, `'video'`, `'document'`
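
In type terms, a modality is one member of a string-literal union (the names below match the list above; the library imports this type from its internal `types` module):

```typescript
// The Modality union, matching the list above.
type Modality = 'text' | 'image' | 'audio' | 'video' | 'document'

// A model's `input` is a readonly array of modalities:
const input: ReadonlyArray<Modality> = ['text', 'image']

// Typos are caught at compile time:
// const bad: ReadonlyArray<Modality> = ['txt'] // ❌ type error
console.log(input) // [ 'text', 'image' ]
```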

## Preserving Original Factory Behavior

`extendAdapter` fully preserves the original factory's signature, including any configuration parameters:

```typescript
import { createModel, extendAdapter } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

// Custom models for this example
const customModels = [createModel('my-fine-tuned-gpt4', ['text'])] as const

const myOpenai = extendAdapter(openaiText, customModels)

// Config parameter is preserved
const adapter = myOpenai('my-fine-tuned-gpt4', {
baseURL: 'https://my-proxy.com/v1',
timeout: 30000
})
```

## Type Safety

The extended adapter provides full type safety:

```typescript
import { extendAdapter, createModel } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

const myOpenai = extendAdapter(openaiText, [createModel('custom-model', ['text'])])

// βœ… Original models work with their original types
const a1 = myOpenai('gpt-4o')

// βœ… Custom models work with your defined types
const a2 = myOpenai('custom-model')

// ❌ Type error: invalid model name
// (TypeScript flags the invalid name at the call site)
const invalid = myOpenai('nonexistent-model') // TypeScript error!
```


## Runtime Behavior

At runtime, `extendAdapter` simply passes through to the original factory:

- No validation is performed on custom model names
- The original factory receives exactly what you pass
- This allows the original provider's API to handle the model name

This design is intentional - it allows you to:
- Use fine-tuned model names that the provider accepts but TypeScript doesn't know about
- Proxy requests to different backends that accept custom model identifiers
- Add type safety without runtime overhead
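
The pass-through can be seen directly: the extended factory is the original function, just retyped. A self-contained sketch (names here are illustrative; the real implementation appears in `extend-adapter.ts` below):

```typescript
// Minimal sketch of extendAdapter's runtime behavior: the custom
// models array is ignored at runtime and the factory is returned
// unchanged (only its type is widened).
function extendAdapterSketch<F extends (...args: Array<any>) => any>(
  factory: F,
  _customModels: ReadonlyArray<{ name: string }>,
): F {
  return factory // no wrapper, no validation
}

const baseFactory = (model: string) => ({ model })
const extended = extendAdapterSketch(baseFactory, [{ name: 'custom-model' }])

console.log(extended === baseFactory) // true: same function object
console.log(extended('custom-model')) // { model: 'custom-model' }
```

Because nothing is wrapped, there is zero per-call overhead: the type-level work all happens at compile time.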

## Example: OpenAI-Compatible Proxy

A common use case is typing models for an OpenAI-compatible proxy:

```typescript
import { extendAdapter, createModel } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

// Models available through your proxy
const proxyModels = [
  createModel('llama-3.1-70b', ['text']),
  createModel('mixtral-8x7b', ['text']),
] as const

const proxyAdapter = extendAdapter(openaiText, proxyModels)

// Use with your proxy's base URL
const adapter = proxyAdapter('llama-3.1-70b', {
baseURL: 'https://my-llm-proxy.com/v1'
})
```

## Example: Fine-tuned Models

Adding type safety for your fine-tuned models:

```typescript
import { chat, createModel, extendAdapter } from '@tanstack/ai'
import { anthropicText } from '@tanstack/ai-anthropic'

const fineTunedModels = [
  createModel('ft:claude-3-opus:my-org:custom-task:abc123', ['text', 'image']),
] as const

const myAnthropic = extendAdapter(anthropicText, fineTunedModels)

chat({
adapter: myAnthropic('ft:claude-3-opus:my-org:custom-task:abc123'),
messages: [{ role: 'user', content: 'Analyze this...' }]
})
```
1 change: 1 addition & 0 deletions packages/typescript/ai-anthropic/src/index.ts
@@ -28,6 +28,7 @@ export type {
AnthropicChatModelProviderOptionsByName,
AnthropicModelInputModalitiesByName,
} from './model-meta'
export { ANTHROPIC_MODELS } from './model-meta'
export type {
AnthropicTextMetadata,
AnthropicImageMetadata,
1 change: 1 addition & 0 deletions packages/typescript/ai-gemini/src/index.ts
@@ -52,6 +52,7 @@ export {
} from './adapters/tts'

// Re-export models from model-meta for convenience
export { GEMINI_MODELS } from './model-meta'
export { GEMINI_MODELS as GeminiTextModels } from './model-meta'
export { GEMINI_IMAGE_MODELS as GeminiImageModels } from './model-meta'
export { GEMINI_TTS_MODELS as GeminiTTSModels } from './model-meta'
1 change: 1 addition & 0 deletions packages/typescript/ai-openai/src/index.ts
@@ -80,6 +80,7 @@ export type {
OpenAIModelInputModalitiesByName,
} from './model-meta'
export {
OPENAI_CHAT_MODELS,
OPENAI_IMAGE_MODELS,
OPENAI_TTS_MODELS,
OPENAI_TRANSCRIPTION_MODELS,
182 changes: 182 additions & 0 deletions packages/typescript/ai/src/extend-adapter.ts
@@ -0,0 +1,182 @@
import type { Modality } from './types'

// ===========================
// Extended Model Definition
// ===========================

/**
* Definition for a custom model to add to an adapter.
*
* @template TName - The model name as a literal string type
* @template TInput - Array of supported input modalities
* @template TOptions - Provider options type for this model
*
* @example
* ```typescript
* const customModels = [
* createModel('my-custom-model', ['text', 'image']),
* ] as const
* ```
*/
export interface ExtendedModelDef<
TName extends string = string,
TInput extends ReadonlyArray<Modality> = ReadonlyArray<Modality>,
TOptions = unknown,
> {
/** The model name identifier */
name: TName
/** Supported input modalities for this model */
input: TInput
/** Type brand for provider options - use `{} as YourOptionsType` */
modelOptions: TOptions
}

/**
* Creates a custom model definition for use with `extendAdapter`.
*
* This is a helper function that provides proper type inference without
* requiring manual `as const` casts on individual properties.
*
* @template TName - The model name (inferred from argument)
* @template TInput - The input modalities array (inferred from argument)
*
* @param name - The model name identifier (literal string)
* @param input - Array of supported input modalities
* @returns A properly typed model definition for use with `extendAdapter`
*
* @example
* ```typescript
* import { extendAdapter, createModel } from '@tanstack/ai'
* import { openaiText } from '@tanstack/ai-openai'
*
* // Define custom models with full type inference
* const customModels = [
* createModel('my-fine-tuned-gpt4', ['text', 'image']),
* createModel('local-llama', ['text']),
* ] as const
*
* const myOpenai = extendAdapter(openaiText, customModels)
* ```
*/
export function createModel<
const TName extends string,
const TInput extends ReadonlyArray<Modality>,
>(name: TName, input: TInput): ExtendedModelDef<TName, TInput> {
return {
name,
input,
modelOptions: {} as unknown,
}
}

// ===========================
// Type Extraction Utilities
// ===========================

/**
* Extract the model name union from an array of model definitions.
*/
type ExtractCustomModelNames<TDefs extends ReadonlyArray<ExtendedModelDef>> =
TDefs[number]['name']

// ===========================
// Factory Type Inference
// ===========================

/**
* Infer the model parameter type from an adapter factory function.
* For generic functions like `<T extends Union>(model: T)`, this gets `T` which
* TypeScript treats as the constraint union when used in parameter position.
*/
type InferFactoryModels<TFactory> = TFactory extends (
model: infer TModel,
...args: Array<any>
) => any
? TModel extends string
? TModel
: string
: string

/**
* Infer the config parameter type from an adapter factory function.
*/
type InferConfig<TFactory> = TFactory extends (
model: any,
config?: infer TConfig,
) => any
? TConfig
: undefined

/**
* Infer the adapter return type from a factory function.
*/
type InferAdapterReturn<TFactory> = TFactory extends (
...args: Array<any>
) => infer TReturn
? TReturn
: never

// ===========================
// extendAdapter Function
// ===========================

/**
* Extends an existing adapter factory with additional custom models.
*
* The extended adapter accepts both original models (with full original type inference)
* and custom models (with types from your definitions).
*
* At runtime, this simply passes through to the original factory - no validation is performed.
* The original factory's signature is fully preserved, including any config parameters.
*
* @param factory - The original adapter factory function (e.g., `openaiText`, `anthropicText`)
* @param models - Array of custom model definitions with `name` and `input`
* @returns A new factory function that accepts both original and custom models
*
* @example
* ```typescript
* import { extendAdapter, createModel } from '@tanstack/ai'
* import { openaiText } from '@tanstack/ai-openai'
*
* // Define custom models
* const customModels = [
* createModel('my-fine-tuned-gpt4', ['text', 'image']),
* createModel('local-llama', ['text']),
* ] as const
*
* // Create extended adapter
* const myOpenai = extendAdapter(openaiText, customModels)
*
* // Use with original models - full type inference preserved
* const gpt4 = myOpenai('gpt-4o')
*
* // Use with custom models
* const custom = myOpenai('my-fine-tuned-gpt4')
*
* // Type error: 'invalid-model' is not a valid model
* // myOpenai('invalid-model')
*
* // Works with chat()
* chat({
* adapter: myOpenai('my-fine-tuned-gpt4'),
* messages: [...]
* })
* ```
*/
export function extendAdapter<
TFactory extends (...args: Array<any>) => any,
const TDefs extends ReadonlyArray<ExtendedModelDef>,
>(
factory: TFactory,
_customModels: TDefs,
): (
model: InferFactoryModels<TFactory> | ExtractCustomModelNames<TDefs>,
...args: InferConfig<TFactory> extends undefined
? []
: [config?: InferConfig<TFactory>]
) => InferAdapterReturn<TFactory> {
// At runtime, we simply pass through to the original factory.
// The _customModels parameter is only used for type inference.
// No runtime validation - users are trusted to pass valid model names.
return factory as any
}
4 changes: 4 additions & 0 deletions packages/typescript/ai/src/index.ts
@@ -112,3 +112,7 @@ export type {
ToolResultState,
JSONParser,
} from './activities/chat/stream/index'

// Adapter extension utilities
export { createModel, extendAdapter } from './extend-adapter'
export type { ExtendedModelDef } from './extend-adapter'