Our commitment to protecting your AI infrastructure.
🔒 Enterprise-grade security. SOC 2 certification in progress.
All data transmitted between your application and WatchLLM is encrypted with TLS 1.3. Sensitive information, including provider API keys, is encrypted at rest with AES-256, following Supabase Vault patterns.
WatchLLM is designed as a pass-through proxy. By default, prompt and response bodies are NOT stored permanently unless you explicitly enable caching. Even then, we prioritize storing semantic hashes (embeddings) rather than raw text.
Our edge workers run on Cloudflare's global network, benefiting from its industry-leading DDoS protection and sandboxed V8 isolates. We perform regular security audits and dependency updates.
You control retention windows for cache entries and analytics. Default settings prioritize minimal retention with automatic expiration.
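The automatic-expiration behavior can be sketched as a tiny in-memory TTL store. This is a toy model under stated assumptions: `RetentionCache` is a hypothetical name, and in production the equivalent would be a KV or cache entry written with a TTL derived from your retention settings.

```typescript
// Sketch: a minimal TTL-bounded store illustrating retention-window expiry.
// In production this role is played by an edge KV/cache entry with a TTL.

interface Entry {
  value: string;
  expiresAt: number; // epoch milliseconds after which the entry is dead
}

class RetentionCache {
  private store = new Map<string, Entry>();

  constructor(private ttlMs: number) {}

  set(key: string, value: string): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: string): string | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      // Lazily evict on read: expired data is never returned.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

// One-minute retention window for the demo.
const cache = new RetentionCache(60_000);
cache.set("analytics:req-123", "cached-result");
console.log(cache.get("analytics:req-123")); // "cached-result" while unexpired
```

Lazy eviction on read keeps the sketch short; a real store would also expire entries in the background so data does not outlive its window even when never read again.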
We maintain a documented incident response plan with rapid triage, customer notification, and post-incident review.
We welcome security research and coordinate fixes quickly with responsible disclosure practices.
We regularly review infrastructure and dependencies with automated scans and external audits where applicable.
If you believe you have found a security vulnerability in WatchLLM, please report it to us at support@watchllm.dev. We will respond as quickly as possible.