# Concepts

## Recorder

Every adapter (`llmmeter-openai`, `llmmeter-anthropic`, …) builds a `Recorder` from your `MeterOptions`. The recorder:
- Generates a ULID for every call.
- Pulls context out of `AsyncLocalStorage` (`userId`, `feature`, `traceId`, `meta`).
- Hashes the prompt to `promptHash`.
- Optionally redacts and stores the prompt + completion (off by default).
- Calls `priceFor(...)` to compute USD cost from the bundled price table.
- Enforces `maxDailySpendUsd`.
- Hands the finalised `LLMCallRecord` to your `Sink`.
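The hashing step can be sketched as follows. This is an illustration only: the exact canonicalization llmmeter applies before hashing is not documented here, so the JSON serialization below is an assumption.

```typescript
import { createHash } from "node:crypto";

// Illustrative promptHash: SHA-256 over a JSON-serialized message list.
// (Assumption: the real recorder's canonicalization may differ.)
function promptHash(messages: { role: string; content: string }[]): string {
  const canonical = JSON.stringify(messages);
  return createHash("sha256").update(canonical).digest("hex");
}

console.log(promptHash([{ role: "user", content: "Hello" }]).length); // 64
```

The same input always yields the same hash, so identical prompts can be grouped in your sink without ever storing their text.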
## Sinks

A sink persists records. Every sink implements `{ write, flush, close }`. Mix and match with `multiSink`:
```ts
import OpenAI from "openai";
import { trace } from "@opentelemetry/api";
import { meter, multiSink, jsonlSink } from "@amit641/llmmeter";
import { sqliteSink } from "@amit641/llmmeter/sqlite";
import { otelSink } from "llmmeter-otel";

const sink = multiSink(
  sqliteSink({ filePath: "./.llmmeter/llmmeter.db" }),
  otelSink({ tracer: trace.getTracer("my-app") }),
  jsonlSink({ dir: "./logs" }), // backup
);

const openai = meter(new OpenAI(), { sink });
```
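Under the hood, a fan-out sink only needs to forward each method to its children. A minimal sketch of the contract, assuming all three methods are async; `fanOutSink` is a hypothetical name, and the real `multiSink` may differ in error handling:

```typescript
// Sketch of the sink contract and a fan-out combinator.
interface Sink {
  write(record: unknown): Promise<void>;
  flush(): Promise<void>;
  close(): Promise<void>;
}

// Forward every call to all child sinks and wait for them in parallel.
function fanOutSink(...sinks: Sink[]): Sink {
  return {
    write: async (record) => { await Promise.all(sinks.map((s) => s.write(record))); },
    flush: async () => { await Promise.all(sinks.map((s) => s.flush())); },
    close: async () => { await Promise.all(sinks.map((s) => s.close())); },
  };
}
```

Because each method awaits all children, a failing child rejects the combined promise rather than being silently dropped.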
## Context propagation

`withContext` uses Node's `AsyncLocalStorage`, so context flows naturally through async boundaries, including framework code you don't control:
```ts
app.use(async (req, res, next) => {
  await withContext({ userId: req.user?.id, feature: req.path }, () => next());
});
```
Every LLM call made downstream inherits the context.
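A stripped-down sketch of how such a helper can be built on `AsyncLocalStorage`; `runWithContext` and `currentContext` are illustrative names for this sketch, not llmmeter's API:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Illustrative context store; llmmeter's actual shape may differ.
type Ctx = { userId?: string; feature?: string; traceId?: string };
const als = new AsyncLocalStorage<Ctx>();

const runWithContext = <T>(ctx: Ctx, fn: () => T): T => als.run(ctx, fn);
const currentContext = (): Ctx | undefined => als.getStore();

async function demo() {
  await runWithContext({ userId: "u_42", feature: "/chat" }, async () => {
    await new Promise((r) => setTimeout(r, 5)); // cross an async boundary
    console.log(currentContext()?.userId); // prints "u_42"
  });
}
demo();
```

`AsyncLocalStorage` propagates the store across `await` points automatically, which is why no context argument needs to be threaded through your call stack.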
## Budget guards

```ts
meter(new OpenAI(), {
  maxDailySpendUsd: 100,
  onBudgetExceeded: "throw", // or "warn"
});
```
Spend is tracked per-process (in-memory) for now; for multi-instance deployments use the same sink across instances and run the cap check at the collector.
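The in-process guard amounts to a counter that resets each day. A sketch under those assumptions (`createBudgetGuard` is a hypothetical helper, and the UTC-day reset is an assumption, not documented behavior):

```typescript
// Hypothetical per-process daily spend guard.
function createBudgetGuard(
  maxDailySpendUsd: number,
  onExceeded: "throw" | "warn" = "throw",
) {
  let day = new Date().toISOString().slice(0, 10); // current UTC day
  let spentUsd = 0;
  return {
    record(costUsd: number) {
      const today = new Date().toISOString().slice(0, 10);
      if (today !== day) { day = today; spentUsd = 0; } // new day: reset
      spentUsd += costUsd;
      if (spentUsd > maxDailySpendUsd) {
        const msg = `daily budget exceeded: $${spentUsd.toFixed(2)} > $${maxDailySpendUsd}`;
        if (onExceeded === "throw") throw new Error(msg);
        console.warn(msg);
      }
    },
    spent: () => spentUsd,
  };
}
```

Note the guard checks *after* adding the cost, so the call that crosses the cap still completes; only subsequent behavior changes.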
## Privacy

By default, llmmeter records `promptHash` (SHA-256), token counts, and metadata, never the raw prompt or response. Opt in per call with `recordPayload: true`, and set `payloadSampleRate: 0.05` to capture 5% of payloads. The default redactor masks emails, credit cards, JWTs, and common API-key formats.
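A toy redactor in the same spirit; these patterns and mask strings are illustrative examples, not llmmeter's actual regexes:

```typescript
// Example redaction patterns (illustrative, not llmmeter's real ones).
const PATTERNS: [RegExp, string][] = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "<email>"],          // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "<card>"],            // card-like digit runs
  [/\beyJ[\w-]+\.[\w-]+\.[\w-]+\b/g, "<jwt>"],      // JWTs start with "eyJ"
  [/\bsk-[A-Za-z0-9]{20,}\b/g, "<api-key>"],        // OpenAI-style keys
];

function redact(text: string): string {
  return PATTERNS.reduce((t, [re, mask]) => t.replace(re, mask), text);
}

console.log(redact("contact me at jane@example.com"));
// prints "contact me at <email>"
```

Redaction runs before the payload reaches the sink, so even sampled payloads never persist the matched secrets.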
## Pricing

Prices live in `packages/core/src/pricing.ts`. They're versioned and updated weekly via a GitHub Action. To override:
```ts
import { PRICE_TABLE } from "@amit641/llmmeter";

PRICE_TABLE.push({ provider: "openai", model: "ft:my-model", inputPer1M: 3, outputPer1M: 12 });
```
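The cost arithmetic implied by the per-1M-token columns, as a self-contained sketch; `costUsd` is a hypothetical helper, and the real `priceFor` may support fallbacks or prefix matching:

```typescript
// Sketch of per-1M-token pricing arithmetic.
type Price = { provider: string; model: string; inputPer1M: number; outputPer1M: number };

const table: Price[] = [
  { provider: "openai", model: "ft:my-model", inputPer1M: 3, outputPer1M: 12 },
];

function costUsd(provider: string, model: string, inTok: number, outTok: number): number {
  const p = table.find((row) => row.provider === provider && row.model === model);
  if (!p) throw new Error(`no price for ${provider}/${model}`);
  // Prices are USD per 1,000,000 tokens, charged separately for input and output.
  return (inTok / 1e6) * p.inputPer1M + (outTok / 1e6) * p.outputPer1M;
}

// e.g. 1,000 input tokens + 500 output tokens against the row above
console.log(costUsd("openai", "ft:my-model", 1_000, 500));
```

Overriding the table, as shown above, only changes which row this lookup finds; the arithmetic stays the same.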