# Adapters Overview
Adapters let you send logs to external observability platforms. evlog provides built-in adapters for popular services, and you can create custom adapters for any destination.
## How Adapters Work
Adapters receive a `DrainContext` after each request completes and send the data to an external service. The drain runs in fire-and-forget mode, meaning it never blocks the HTTP response.
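The fire-and-forget pattern can be sketched as follows. This is illustrative only, not evlog's internals: the drain promise is started but never awaited on the request path, and a failure is logged rather than surfaced to the handler.

```typescript
type Drain = (ctx: unknown) => Promise<void>

// Illustrative fire-and-forget dispatch: start the drain but do not
// await it, so the HTTP response is never delayed by log shipping.
function dispatch(drain: Drain, ctx: unknown): void {
  drain(ctx).catch((err) => {
    // A failing drain must never surface to the request handler
    console.error('drain failed:', err)
  })
}
```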
How you wire an adapter depends on your framework:

**Nuxt / Nitro** — register the drain in a server plugin:

```ts
// server/plugins/evlog-drain.ts
import { createAxiomDrain } from 'evlog/axiom'

export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:drain', createAxiomDrain())
})
```

**Middleware-style frameworks** — the same call works in each:

```ts
import { createAxiomDrain } from 'evlog/axiom'

app.use(evlog({ drain: createAxiomDrain() }))
```

**Fastify**:

```ts
import { createAxiomDrain } from 'evlog/axiom'

await app.register(evlog, { drain: createAxiomDrain() })
```

**NestJS**:

```ts
import { createAxiomDrain } from 'evlog/axiom'

EvlogModule.forRoot({ drain: createAxiomDrain() })
```

**Standalone scripts**:

```ts
import { createAxiomDrain } from 'evlog/axiom'

initLogger({ drain: createAxiomDrain() })
```
Serverless and edge runtimes expose `waitUntil()`; evlog uses it to ensure drains complete before the runtime terminates. No additional configuration is needed.

## Available Adapters
## Standalone Usage
In plain TypeScript or Bun scripts (no HTTP framework), pass the `drain` option to `initLogger`. Every emitted event is drained automatically.
```ts
import type { DrainContext } from 'evlog'
import { initLogger, log, createRequestLogger } from 'evlog'
import { createAxiomDrain } from 'evlog/axiom'
import { createDrainPipeline } from 'evlog/pipeline'

const pipeline = createDrainPipeline<DrainContext>()
const drain = pipeline(createAxiomDrain())

initLogger({
  env: { service: 'my-script' },
  drain,
})

log.info({ action: 'job_started' }) // drained automatically

const reqLog = createRequestLogger({ method: 'POST', path: '/process' })
reqLog.set({ processed: 42 })
reqLog.emit() // drained automatically

// Flush any buffered events before the script exits
await drain.flush()
```
## Multiple Destinations
Send logs to multiple services simultaneously by composing drains:

```ts
import type { DrainContext } from 'evlog'
import { createAxiomDrain } from 'evlog/axiom'
import { createOTLPDrain } from 'evlog/otlp'

const axiom = createAxiomDrain()
const otlp = createOTLPDrain()

// Fan out to both destinations; allSettled keeps a failure in one
// from affecting the other
const drain = async (ctx: DrainContext) => {
  await Promise.allSettled([axiom(ctx), otlp(ctx)])
}
```
Then pass `drain` to your framework:

**Nuxt / Nitro**:

```ts
// server/plugins/evlog-drain.ts
export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:drain', drain)
})
```

**Middleware-style frameworks**:

```ts
app.use(evlog({ drain }))
```

**Fastify**:

```ts
await app.register(evlog, { drain })
```

**NestJS**:

```ts
EvlogModule.forRoot({ drain })
```

**Standalone scripts**:

```ts
initLogger({ drain })
```
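The `Promise.allSettled` composition is what keeps destinations isolated: it never rejects, so an outage at one service surfaces as a `rejected` result instead of an exception that could affect the other drain. A quick standalone illustration:

```typescript
// allSettled resolves once every promise settles, success or failure
const results = await Promise.allSettled([
  Promise.resolve('axiom ok'),
  Promise.reject(new Error('otlp down')),
])

console.log(results[0].status) // 'fulfilled'
console.log(results[1].status) // 'rejected'
```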
## Drain Context
Every adapter receives a `DrainContext` with:

| Field | Type | Description |
|---|---|---|
| `event` | `WideEvent` | The complete log event with all accumulated context |
| `request` | `object` | Request metadata (method, path, requestId) |
| `headers` | `object` | Safe HTTP headers (sensitive headers are filtered) |
Sensitive headers (`authorization`, `cookie`, `x-api-key`, etc.) are automatically filtered and never passed to adapters.
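A custom adapter is just a function that receives this context. The sketch below assumes the field shapes from the table above; the local `DrainContext` type and the `https://logs.example.com/ingest` endpoint are illustrative stand-ins, not part of evlog.

```typescript
// Local stand-in for evlog's DrainContext, mirroring the table above
type DrainContext = {
  event: Record<string, unknown>
  request: { method: string; path: string; requestId: string }
  headers: Record<string, string>
}

// Pure helper so the payload shape is easy to test in isolation
const toPayload = (ctx: DrainContext): string =>
  JSON.stringify({ event: ctx.event, request: ctx.request })

// Hypothetical custom drain: POST each completed event to your backend
const myDrain = async (ctx: DrainContext): Promise<void> => {
  await fetch('https://logs.example.com/ingest', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: toPayload(ctx),
  })
}
```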
## Zero-Config Setup

All adapters support automatic configuration via environment variables. No code changes are needed when deploying to different environments.
Each adapter reads from `NUXT_*`-prefixed variables (for Nuxt/Nitro runtimeConfig) and unprefixed fallbacks (for any framework):
```bash
# Axiom (NUXT_AXIOM_* or AXIOM_*)
AXIOM_TOKEN=xaat-xxx
AXIOM_DATASET=my-logs

# OTLP (NUXT_OTLP_* or OTEL_*)
OTLP_ENDPOINT=https://otlp.example.com

# PostHog (NUXT_POSTHOG_* or POSTHOG_*)
POSTHOG_API_KEY=phc_xxx

# Sentry (NUXT_SENTRY_* or SENTRY_*)
SENTRY_DSN=https://key@o0.ingest.sentry.io/123

# Better Stack (NUXT_BETTER_STACK_* or BETTER_STACK_*)
BETTER_STACK_SOURCE_TOKEN=your-source-token
```
Adapters read these variables automatically; just call `createXDrain()` with no arguments.
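The prefix fallback can be pictured as a two-step lookup. This is a sketch of the convention, not evlog's actual resolution code; `readEnv` is a hypothetical helper:

```typescript
// Prefer the NUXT_-prefixed variable, fall back to the bare name
const readEnv = (name: string): string | undefined =>
  process.env[`NUXT_${name}`] ?? process.env[name]

// e.g. readEnv('AXIOM_TOKEN') checks NUXT_AXIOM_TOKEN, then AXIOM_TOKEN
```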
## Client Logging
Capture browser events with structured logging. Same API as the server, with automatic console styling, user identity context, and optional server transport.
## Axiom
Send wide events to Axiom for powerful querying, dashboards, and alerting. Zero-config setup with environment variables and automatic batching.