Observability
Envoy AI Gateway extends the capabilities of Envoy Gateway, so when you run Envoy Gateway you also have access to the foundational observability features of the Envoy Gateway system. We recommend familiarizing yourself with the Envoy Gateway Observability Documentation.
AI/LLM Observability Features
The Envoy AI Gateway provides specialized observability capabilities for AI and LLM workloads:
- GenAI Metrics - Prometheus metrics that follow the OpenTelemetry Gen AI semantic conventions, covering token usage, latency, and model performance.
- GenAI Tracing - OpenTelemetry integration using the OpenInference semantic conventions for tracing and evaluating LLM requests.
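As a quick illustration of how the metrics above might be consumed, the sketch below builds a Prometheus HTTP API query that sums token usage per token type for one model. The metric and label names (`gen_ai_client_token_usage_sum`, `gen_ai_token_type`, `gen_ai_request_model`) are assumptions derived from the OpenTelemetry Gen AI semantic conventions as Prometheus typically rewrites them; verify the exact names your deployment exposes before relying on them in dashboards or alerts.

```python
from urllib.parse import urlencode

# Assumed metric name: the OTel Gen AI convention `gen_ai.client.token.usage`
# is a histogram, which Prometheus exposes with dots rewritten to underscores
# and a `_sum` suffix. Confirm against your gateway's /metrics output.
TOKEN_USAGE_METRIC = "gen_ai_client_token_usage_sum"


def token_usage_query_url(prometheus_base_url: str, model: str) -> str:
    """Build a Prometheus HTTP API URL summing token usage per token type.

    The label names used in the selector are assumptions based on the
    OTel Gen AI semantic conventions, not confirmed gateway output.
    """
    promql = (
        "sum by (gen_ai_token_type) "
        f'(rate({TOKEN_USAGE_METRIC}{{gen_ai_request_model="{model}"}}[5m]))'
    )
    # /api/v1/query is the standard Prometheus instant-query endpoint.
    return f"{prometheus_base_url}/api/v1/query?" + urlencode({"query": promql})


print(token_usage_query_url("http://prometheus:9090", "gpt-4o"))
```

The same PromQL expression can be pasted directly into a Grafana panel; building the URL is only needed when querying Prometheus programmatically.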