Node.js SDK#

TypeScript/Node.js SDK for AI Observability and LLM Monitoring

A complete SDK for instrumenting Node.js applications with WatchLLM's semantic caching and cost-optimization layer. Monitor LLM calls, agent steps, errors, and performance metrics with automatic PII redaction and intelligent batching.

Quick Start#

npm install watchllm-sdk-node
 
import { WatchLLMClient, EventType, Status } from 'watchllm-sdk-node';
 
const client = new WatchLLMClient({
  apiKey: 'your-api-key',
  projectId: 'your-project-id'
});
 
// Track LLM calls
await client.track({
  type: EventType.LLM_CALL,
  status: Status.SUCCESS,
  model: 'gpt-4',
  input: 'Hello, how are you?',
  output: 'I am doing well, thank you!',
  tokens: { prompt: 10, completion: 20 },
  cost: 0.002
});

Complete Documentation#

For comprehensive guides, API reference, and advanced usage patterns, see the full Node.js SDK Documentation.

Key Features#

  • Full TypeScript Support: Complete type safety with TypeScript definitions
  • Automatic Batching: Efficient event batching for optimal performance
  • PII Redaction: Automatic sensitive data protection
  • Agent Monitoring: Track complex agent workflows and step execution (see the sketch after this list)
  • Error Tracking: Comprehensive error monitoring and debugging
  • Cost Estimation: Real-time cost calculation and optimization insights
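
As a rough illustration of the agent-monitoring feature above, here is a minimal sketch of tracking individual agent steps. It assumes an EventType.AGENT_STEP event type and a step field alongside the LLM_CALL type shown in Quick Start; those names are illustrative rather than confirmed API, so check the full SDK reference for the exact event schema.

import { WatchLLMClient, EventType, Status } from 'watchllm-sdk-node';
 
const client = new WatchLLMClient({
  apiKey: process.env.WATCHLLM_API_KEY!,
  projectId: process.env.WATCHLLM_PROJECT_ID!
});
 
// Hypothetical agent-step events: EventType.AGENT_STEP and the `step`
// field are assumptions used for illustration only.
await client.track({
  type: EventType.AGENT_STEP,
  status: Status.SUCCESS,
  step: 'retrieve_documents',
  input: 'capital of France',
  output: '3 documents retrieved'
});
 
await client.track({
  type: EventType.AGENT_STEP,
  status: Status.SUCCESS,
  step: 'summarize_results',
  input: '3 documents',
  output: 'Paris is the capital of France.'
});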

Installation Options#

# npm
npm install watchllm-sdk-node
 
# yarn
yarn add watchllm-sdk-node
 
# pnpm
pnpm add watchllm-sdk-node

Basic Usage#

import { WatchLLMClient, Status } from 'watchllm-sdk-node';
 
const client = new WatchLLMClient({
  apiKey: process.env.WATCHLLM_API_KEY!,
  projectId: process.env.WATCHLLM_PROJECT_ID!
});
 
// Simple LLM call tracking
await client.track({
  type: 'llm_call',
  status: Status.SUCCESS,
  model: 'gpt-4',
  input: 'What is the capital of France?',
  output: 'The capital of France is Paris.',
  tokens: { prompt: 8, completion: 7 },
  cost: 0.0014
});
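
In practice you will usually wrap the provider call and report success or failure from one place. The sketch below assumes an OpenAI-style client, a Status.ERROR value, and an error field on the tracked event; those parts are assumptions for illustration and may differ from the actual SDK schema.

import OpenAI from 'openai';
import { WatchLLMClient, EventType, Status } from 'watchllm-sdk-node';
 
const openai = new OpenAI();
const watchllm = new WatchLLMClient({
  apiKey: process.env.WATCHLLM_API_KEY!,
  projectId: process.env.WATCHLLM_PROJECT_ID!
});
 
// Wrap an LLM call so both successes and failures are tracked.
async function trackedCompletion(prompt: string): Promise<string> {
  try {
    const res = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: prompt }]
    });
    const output = res.choices[0].message.content ?? '';
 
    await watchllm.track({
      type: EventType.LLM_CALL,
      status: Status.SUCCESS,
      model: 'gpt-4',
      input: prompt,
      output,
      tokens: {
        prompt: res.usage?.prompt_tokens ?? 0,
        completion: res.usage?.completion_tokens ?? 0
      }
    });
    return output;
  } catch (err) {
    // Status.ERROR and the `error` field are assumptions for illustration.
    await watchllm.track({
      type: EventType.LLM_CALL,
      status: Status.ERROR,
      model: 'gpt-4',
      input: prompt,
      error: err instanceof Error ? err.message : String(err)
    });
    throw err;
  }
}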

Learn More#

Visit the complete Node.js SDK documentation for:

  • Advanced configuration options
  • Agent workflow monitoring
  • Error handling patterns
  • Performance optimization
  • Troubleshooting guides
