OpenAI adapter
```ts
import OpenAI from "openai";
import { meter } from "@amit641/llmmeter/openai";

const openai = meter(new OpenAI());

await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
```
Wraps `chat.completions.create`, `responses.create`, and `embeddings.create`. Streaming is fully supported: llmmeter injects `stream_options: { include_usage: true }` so token counts arrive in the final chunk.
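The injection idea can be sketched as a thin wrapper around a `create()` function. This is a hypothetical illustration, not llmmeter's actual source; the wrapper name and fake client are invented for the example:

```ts
// Sketch: for streaming calls, always ask the API to report usage in the
// final chunk; leave non-streaming calls untouched.
type CreateParams = {
  stream?: boolean;
  stream_options?: { include_usage: boolean };
};

function withUsageInjection<P extends CreateParams, R>(
  create: (params: P) => R,
): (params: P) => R {
  return (params) =>
    params.stream
      ? create({ ...params, stream_options: { include_usage: true } })
      : create(params);
}

// Fake client so the effect is visible without calling the OpenAI API.
const seen: CreateParams[] = [];
const fakeCreate = (params: CreateParams): CreateParams => {
  seen.push(params);
  return params;
};
const metered = withUsageInjection(fakeCreate);

metered({ stream: true }); // stream_options injected
metered({});               // non-streaming call passes through unchanged
```

Because the injected option only changes what the API returns (a usage block on the last stream chunk), the wrapper is transparent to calling code.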
Captured per call
- `tokens.input` / `tokens.output`
- `tokens.cachedInput` (from `prompt_tokens_details.cached_tokens`)
- `tokens.reasoning` (from `completion_tokens_details.reasoning_tokens` for o1/o3 models)
- `ttftMs` for streamed responses
- Errors with class + status code
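Put together, a captured record might look like the following. The interface name and exact field layout are illustrative, inferred from the fields listed above; llmmeter's real type may differ:

```ts
// Illustrative shape of one captured call (assumed, not llmmeter's actual type).
interface CapturedCall {
  tokens: {
    input: number;
    output: number;
    cachedInput?: number; // from prompt_tokens_details.cached_tokens
    reasoning?: number;   // from completion_tokens_details.reasoning_tokens
  };
  ttftMs?: number;        // time to first token; streamed responses only
  error?: { name: string; status?: number };
}

const example: CapturedCall = {
  tokens: { input: 12, output: 48, cachedInput: 0 },
  ttftMs: 230,
};
```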
Subpath imports
```ts
import { meter } from "@amit641/llmmeter/openai"; // explicit, smaller bundle
import { meter } from "@amit641/llmmeter";        // umbrella, auto-detects shape
```
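One plausible way an umbrella `meter()` could auto-detect an OpenAI-shaped client is duck typing on the methods it wraps. This is a guess at the mechanism, not llmmeter's documented behavior, and `looksLikeOpenAI` is an invented helper:

```ts
// Hypothetical shape detection: treat any object exposing
// chat.completions.create as OpenAI-compatible.
function looksLikeOpenAI(client: unknown): boolean {
  const c = client as { chat?: { completions?: { create?: unknown } } };
  return typeof c?.chat?.completions?.create === "function";
}

const fakeOpenAI = { chat: { completions: { create: () => ({}) } } };
const detected = looksLikeOpenAI(fakeOpenAI); // true
const rejected = looksLikeOpenAI({});         // false
```

The explicit subpath import skips any such detection and keeps unrelated adapters out of the bundle.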