Access Logs With AI/LLM Metadata
Envoy access logs can be configured to include metadata generated by the AI Gateway, such as the selected model and token consumption. This guide walks through configuring AI Gateway Routes to generate LLM token consumption metadata, and configuring the Envoy access logs to include the AI/LLM details.
Overview
Envoy AI Gateway populates filter dynamic metadata under the `io.envoy.ai_gateway` namespace. This metadata includes the selected model, prompt and completion token usage, and other details about the LLM request, and it can be extracted and included in the Envoy access logs.
AIGatewayRoute configuration
The contents of the dynamic metadata are configured in the AIGatewayRoute resource under the `llmRequestCosts` field, which defines a list of metadata keys to populate in the Envoy filter metadata so they can later be accessed in the access logs. For example:
```yaml
apiVersion: ai-gateway.envoyproxy.io/v1alpha1
kind: AIGatewayRoute
metadata:
  name: ai-gateway-route
spec:
  # (...)
  llmRequestCosts:
    - metadataKey: llm_input_token
      type: InputToken
    - metadataKey: llm_output_token
      type: OutputToken
    - metadataKey: llm_total_token
      type: TotalToken
  rules:
    # (...)
```
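Besides the built-in token counters, the API also defines a `CEL` cost type for computing custom cost values from the request. The sketch below is a hypothetical example: the `cel` field name and the `input_tokens`/`output_tokens` variable names are assumptions, so consult the AIGatewayRoute API reference for the exact spelling before using it.

```yaml
# Hypothetical sketch: a weighted cost computed with a CEL expression,
# e.g. charging output tokens at four times the rate of input tokens.
# The "cel" field and variable names are assumptions; verify them against
# the AIGatewayRoute API reference for your version of the AI Gateway.
llmRequestCosts:
  - metadataKey: llm_weighted_cost
    type: CEL
    cel: "input_tokens + output_tokens * 4"
```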
EnvoyProxy configuration
Once the AIGatewayRoute is configured to populate the dynamic metadata, the Envoy access logs can be configured to include it in the log entries by setting the access log format in the EnvoyProxy resource:
```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: ai-access-logs
  namespace: default
spec:
  telemetry:
    accessLog:
      settings:
        - sinks:
            - type: File
              file:
                path: /dev/stdout
          format:
            type: JSON
            json:
              # AI-specific fields. The properties in the dynamic metadata expressions must
              # match the ones defined in the AIGatewayRoute llmRequestCosts field.
              genai_model_name: "%REQ(X-AI-EG-MODEL)%"
              genai_model_name_override: "%DYNAMIC_METADATA(io.envoy.ai_gateway:model_name_override)%"
              genai_backend_name: "%DYNAMIC_METADATA(io.envoy.ai_gateway:backend_name)%"
              genai_tokens_total: "%DYNAMIC_METADATA(io.envoy.ai_gateway:llm_total_token)%"
              genai_tokens_input: "%DYNAMIC_METADATA(io.envoy.ai_gateway:llm_input_token)%"
              genai_tokens_output: "%DYNAMIC_METADATA(io.envoy.ai_gateway:llm_output_token)%"
              # Default fields
              start_time: "%START_TIME%"
              method: "%REQ(:METHOD)%"
              x-envoy-origin-path: "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%"
              protocol: "%PROTOCOL%"
              response_code: "%RESPONSE_CODE%"
              response_flags: "%RESPONSE_FLAGS%"
              response_code_details: "%RESPONSE_CODE_DETAILS%"
              connection_termination_details: "%CONNECTION_TERMINATION_DETAILS%"
              upstream_transport_failure_reason: "%UPSTREAM_TRANSPORT_FAILURE_REASON%"
              bytes_received: "%BYTES_RECEIVED%"
              bytes_sent: "%BYTES_SENT%"
              duration: "%DURATION%"
              x-envoy-upstream-service-time: "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%"
              x-forwarded-for: "%REQ(X-FORWARDED-FOR)%"
              user-agent: "%REQ(USER-AGENT)%"
              x-request-id: "%REQ(X-REQUEST-ID)%"
              ":authority": "%REQ(:AUTHORITY)%"
              upstream_host: "%UPSTREAM_HOST%"
              upstream_cluster: "%UPSTREAM_CLUSTER%"
              upstream_local_address: "%UPSTREAM_LOCAL_ADDRESS%"
              downstream_local_address: "%DOWNSTREAM_LOCAL_ADDRESS%"
              downstream_remote_address: "%DOWNSTREAM_REMOTE_ADDRESS%"
              requested_server_name: "%REQUESTED_SERVER_NAME%"
              route_name: "%ROUTE_NAME%"
```
In this example we're adding the `genai_*` properties with values extracted from the filter metadata populated by the AI Gateway. The EnvoyProxy resource must be attached to the Gateway object or to the GatewayClass. For example:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: envoy-ai-gateway-basic
  namespace: default
spec:
  gatewayClassName: envoy-ai-gateway-basic
  listeners:
    - name: http
      protocol: HTTP
      port: 80
  infrastructure:
    parametersRef:
      group: gateway.envoyproxy.io
      kind: EnvoyProxy
      name: ai-access-logs
```
With this configuration, the access log entries will include the AI Gateway metadata and look like this:
```json
{
  ":authority": "api.router.tetrate.ai",
  "bytes_received": 105,
  "bytes_sent": 432,
  "connection_termination_details": null,
  "downstream_local_address": "127.0.0.1:1975",
  "downstream_remote_address": "127.0.0.1:64484",
  "duration": 1526,
  "genai_backend_name": "default/tars/route/aigw-run/rule/0/ref/0",
  "genai_model_name": "gpt-4o-mini",
  "genai_model_name_override": null,
  "genai_tokens_input": 15,
  "genai_tokens_output": 7,
  "genai_tokens_total": 22,
  "method": "POST",
  "protocol": "HTTP/1.1",
  "requested_server_name": null,
  "response_code": 200,
  "response_code_details": "via_upstream",
  "response_flags": "-",
  "route_name": "httproute/default/aigw-run/rule/0/match/0/*",
  "start_time": "2025-08-30T16:51:37.894Z",
  "upstream_cluster": "httproute/default/aigw-run/rule/0",
  "upstream_host": "34.128.142.77:443",
  "upstream_local_address": "192.168.1.38:64485",
  "upstream_transport_failure_reason": null,
  "user-agent": "curl/8.7.1",
  "x-envoy-origin-path": "/v1/chat/completions",
  "x-envoy-upstream-service-time": "451",
  "x-forwarded-for": "192.168.1.38",
  "x-request-id": "83b63da3-45a9-4196-98b8-1a7f697d6d8d"
}
```
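Because each log entry is a single JSON object, the entries are easy to post-process with standard tools. The sketch below (assuming `jq` is installed) summarizes the token usage of one request; in practice you would pipe `kubectl logs` from the Envoy proxy pod instead of the inline sample line.

```shell
# Sample access-log line, shaped like the JSON format configured above.
sample='{"genai_model_name":"gpt-4o-mini","genai_tokens_input":15,"genai_tokens_output":7,"genai_tokens_total":22}'

# Summarize per-request token usage from the AI-specific fields.
echo "$sample" | jq -r \
  '"\(.genai_model_name) used \(.genai_tokens_total) tokens (\(.genai_tokens_input) in / \(.genai_tokens_output) out)"'
# prints: gpt-4o-mini used 22 tokens (15 in / 7 out)
```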
Trying it out
You can deploy the example to quickly try the access log configuration against a local backend:
```shell
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/ai-gateway/main/examples/access-log/basic.yaml
```
Once everything is applied, you can send requests to the gateway and see the access logs in the gateway pod logs.
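For example, one way to reach the gateway locally is a port-forward followed by a chat completion request. This is a sketch: the Envoy service name and namespace below are assumptions that depend on your environment (find them with `kubectl get svc -n envoy-gateway-system`), so adjust before running.

```shell
# Forward a local port to the Envoy service created for the gateway.
# The service name is environment-specific; replace the placeholder.
kubectl port-forward -n envoy-gateway-system svc/<envoy-service> 8080:80 &

# Send a chat completion request through the gateway; the resulting access
# log entry in the Envoy pod logs should carry the genai_* fields.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello"}]}'
```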