API Reference

ai(prompt, options?)

The main entry point. Calls an AI model and returns its response.

import { ai } from 'aiclientjs';

// Simple
const res = await ai('Hello');

// With messages
const res = await ai([
  { role: 'system', content: 'Be helpful' },
  { role: 'user', content: 'Hello' },
]);

// Streaming
const stream = await ai('Hello', { stream: true });

// Structured output
const res = await ai('List colors', { schema: mySchema });

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| prompt | string \| Message[] | Text prompt or conversation messages |

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| provider | string | 'openai' | Provider name |
| model | string | varies | Model identifier |
| apiKey | string | env var | API key |
| baseURL | string | provider default | Custom API endpoint |
| system | string | | System prompt |
| stream | boolean | false | Enable streaming |
| schema | JsonSchema \| ZodSchema | | Structured output schema |
| schemaName | string | 'response' | Name for the schema |
| tools | Record<string, ToolDefinition> | | Tool definitions |
| temperature | number | | Sampling temperature (0-2) |
| maxTokens | number | | Max response tokens |
| stop | string[] | | Stop sequences |
| signal | AbortSignal | | Cancellation signal |
| headers | Record<string, string> | | Custom HTTP headers |

Returns

Without streaming: Promise<AIResponse>

interface AIResponse {
  text: string;
  toolCalls: ToolCall[];
  toolResults: ToolResult[];
  usage: TokenUsage;
  model: string;
  finishReason: FinishReason;
  raw: unknown;
}

With schema: Promise<AIStructuredResponse<T>>

interface AIStructuredResponse<T> extends AIResponse {
  data: T; // parsed, validated JSON
}

With stream: true: Promise<AIStream>

interface AIStream extends AsyncIterable<string> {
  text(): Promise<string>;
  response(): Promise<AIResponse>;
  toReadableStream(): ReadableStream<string>;
  abort(): void;
}
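Because AIStream is an async iterable, chunks can be consumed with for await, or collected at once via text(). The sketch below uses a minimal in-memory stand-in for the iteration surface, since a real AIStream only comes from ai(prompt, { stream: true }):

```typescript
// Minimal stand-in for the AIStream iteration surface, so the
// consumption pattern can run without a live provider call.
async function* chunkSource(): AsyncGenerator<string> {
  yield 'Hel';
  yield 'lo';
}

const stream = Object.assign(chunkSource(), {
  // Mirrors AIStream.text(): drain the remaining chunks into one string.
  async text(): Promise<string> {
    let out = '';
    for await (const chunk of this) out += chunk;
    return out;
  },
});

// Either iterate incrementally:
//   for await (const chunk of stream) render(chunk);
// or collect everything at once:
const full = await stream.text(); // 'Hello'
```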

createAIClient(config)

Create a preconfigured ai function with defaults baked in.

import { createAIClient } from 'aiclientjs';

const gpt = createAIClient({
  provider: 'openai',
  model: 'gpt-4o',
  system: 'Be concise.',
});

const res = await gpt('Hello');

Config

| Option | Type | Description |
| --- | --- | --- |
| provider | string | Default provider |
| model | string | Default model |
| apiKey | string | Default API key |
| baseURL | string | Default base URL |
| system | string | Default system prompt |
| temperature | number | Default temperature |
| maxTokens | number | Default max tokens |
| headers | Record<string, string> | Default headers |

Per-call options override the defaults.
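The override behaves like a shallow, field-by-field merge, which can be sketched as follows (the concrete option values are illustrative, not library defaults):

```typescript
// Per-call options win over client defaults, field by field.
const defaults = { provider: 'openai', model: 'gpt-4o', temperature: 0.7 };
const perCall = { temperature: 0 };
const effective = { ...defaults, ...perCall };
// effective: { provider: 'openai', model: 'gpt-4o', temperature: 0 }
```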


registerProvider(name, provider)

Register a custom provider.

import { registerProvider } from 'aiclientjs';

registerProvider('my-provider', {
  name: 'my-provider',
  async chat(request, config) { /* ... */ },
  async *stream(request, config) { /* ... */ },
});

Types

Message

type Message =
  | { role: 'system'; content: string }
  | { role: 'user'; content: string | ContentPart[] }
  | { role: 'assistant'; content: string | null; toolCalls?: ToolCall[] }
  | { role: 'tool'; content: string; toolCallId: string };

type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'image'; url: string; mimeType?: string };
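For instance, a user turn can mix text and image parts per the ContentPart union (the image URL below is a placeholder):

```typescript
// A multimodal user message: one text part plus one image part.
const turn = {
  role: 'user' as const,
  content: [
    { type: 'text' as const, text: 'What is in this picture?' },
    { type: 'image' as const, url: 'https://example.com/photo.png', mimeType: 'image/png' },
  ],
};
```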

ToolDefinition

interface ToolDefinition<TParams = unknown> {
  description?: string;
  parameters: JsonSchema | ZodSchema;
  execute?: (params: TParams) => unknown | Promise<unknown>;
}
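A concrete tool definition might look like this hypothetical weather tool, with parameters given as plain JSON Schema and a canned execute so the example runs offline:

```typescript
// Hypothetical tool: look up the current temperature for a city.
// execute may be sync or async; here it returns a canned value.
const getWeather = {
  description: 'Current temperature for a city',
  parameters: {
    type: 'object',
    properties: { city: { type: 'string' } },
    required: ['city'],
  },
  execute: ({ city }: { city: string }) => ({ city, tempC: 21 }),
};

const reading = getWeather.execute({ city: 'Oslo' });
// reading: { city: 'Oslo', tempC: 21 }
```

Passed via the tools option (e.g. { tools: { getWeather } }), the model can call the tool by name with arguments matching the schema.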

ToolCall

interface ToolCall {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}

TokenUsage

interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

FinishReason

type FinishReason = 'stop' | 'length' | 'tool_calls' | 'content_filter' | 'unknown';

AIError

class AIError extends Error {
  code: AIErrorCode;
  statusCode?: number;
  provider?: string;
}

type AIErrorCode =
  | 'AUTH_ERROR'
  | 'RATE_LIMIT'
  | 'PROVIDER_ERROR'
  | 'INVALID_CONFIG'
  | 'UNKNOWN_PROVIDER'
  | 'SCHEMA_VALIDATION'
  | 'STREAM_ABORTED'
  | 'NETWORK_ERROR'
  | 'TOOL_EXECUTION_ERROR';
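Callers can branch on the code field to decide on recovery. The sketch below re-declares a class with the AIError shape locally so the pattern runs without a live request; the recovery strings are illustrative:

```typescript
// Local re-declaration matching the AIError shape documented above.
class AIError extends Error {
  code: string;
  statusCode?: number;
  constructor(message: string, code: string, statusCode?: number) {
    super(message);
    this.code = code;
    this.statusCode = statusCode;
  }
}

// Typical handling: switch on the error code.
function recovery(err: unknown): string {
  if (err instanceof AIError) {
    switch (err.code) {
      case 'RATE_LIMIT': return 'back off and retry';
      case 'AUTH_ERROR': return 'check the API key';
      default: return `provider failure (${err.code})`;
    }
  }
  return 'rethrow';
}

const hint = recovery(new AIError('429 Too Many Requests', 'RATE_LIMIT', 429));
// hint: 'back off and retry'
```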

AIProvider

For building custom providers:

interface AIProvider {
  readonly name: string;
  chat(request: ProviderRequest, config: ProviderConfig): Promise<ProviderResponse>;
  stream(request: ProviderRequest, config: ProviderConfig): AsyncIterable<ProviderStreamEvent>;
}
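As a sketch of the provider contract, here is a toy echo provider. The request and response shapes are simplified stand-ins; the real ProviderRequest, ProviderResponse, and ProviderStreamEvent types come from the library:

```typescript
// Simplified stand-in for ProviderRequest.
interface EchoRequest {
  messages: { role: string; content: string }[];
}

// A toy provider: chat echoes the last message, stream yields it word by word.
const echoProvider = {
  name: 'echo' as const,
  async chat(request: EchoRequest) {
    const last = request.messages[request.messages.length - 1];
    return { text: `echo: ${last.content}` };
  },
  async *stream(request: EchoRequest) {
    const { text } = await this.chat(request);
    for (const word of text.split(' ')) yield { delta: word };
  },
};

const reply = await echoProvider.chat({
  messages: [{ role: 'user', content: 'Hello' }],
});
// reply.text: 'echo: Hello'
```

Registered via registerProvider('echo', echoProvider), it would then be selectable with { provider: 'echo' }.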