# Connect OpenAI
This guide will help you configure Envoy AI Gateway to work with OpenAI's models.
## Prerequisites
Before you begin, you'll need:
- An OpenAI API key from OpenAI's platform
- Basic setup completed from the Basic Usage guide
- Basic configuration removed as described in the Advanced Configuration overview
## Configuration Steps

Ensure you have followed the steps in Connect Providers.
### 1. Configure OpenAI Credentials
Edit the `basic.yaml` file to replace the OpenAI placeholder value:

- Find the section containing `OPENAI_API_KEY`
- Replace it with your actual OpenAI API key
Make sure to keep your API key secure and never commit it to version control. The key will be stored in a Kubernetes secret.
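For orientation, the credential lives in a Kubernetes Secret defined inside `basic.yaml`. The sketch below shows the general shape of such a stanza; the Secret name, namespace, and data key are illustrative assumptions, so keep whatever names your copy of `basic.yaml` actually uses and only swap in your key:

```yaml
# Illustrative sketch only: the Secret name, namespace, and data key are
# assumptions, not necessarily the exact values in basic.yaml.
apiVersion: v1
kind: Secret
metadata:
  name: envoy-ai-gateway-basic-openai-apikey   # assumed name
  namespace: default                           # assumed namespace
type: Opaque
stringData:
  apiKey: OPENAI_API_KEY   # replace the placeholder with your real key
```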
### 2. Apply Configuration
Apply the updated configuration and wait for the Gateway pod to be ready. If you already have a Gateway running, the updated secret will be picked up automatically within a few seconds.
```shell
kubectl apply -f basic.yaml

kubectl wait pods --timeout=2m \
  -l gateway.envoyproxy.io/owning-gateway-name=envoy-ai-gateway-basic \
  -n envoy-gateway-system \
  --for=condition=Ready
```
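If the Gateway is already running and you only need to rotate the credential, you can regenerate just the Secret instead of re-applying the whole manifest. This is a sketch using a standard kubectl pattern; the Secret name and data key are the same assumptions as above, so adjust them to the names used in your `basic.yaml`:

```shell
# Sketch: update only the API key Secret in place (name and key are assumptions).
kubectl create secret generic envoy-ai-gateway-basic-openai-apikey \
  --from-literal=apiKey="$OPENAI_API_KEY" \
  --dry-run=client -o yaml | kubectl apply -f -
```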
### 3. Test the Configuration
You should have set `$GATEWAY_URL` as part of the basic setup before connecting to providers. See the Basic Usage page for instructions.
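If `$GATEWAY_URL` is not set in your current shell, one option is to port-forward to the Envoy service created for the Gateway. The label selector and listener port below are assumptions based on the basic setup naming, so verify them against your cluster before relying on this:

```shell
# Sketch: find the Envoy service for the basic Gateway and forward a local port.
# The label selector and port 80 are assumptions; check your actual service.
ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system \
  -l gateway.envoyproxy.io/owning-gateway-name=envoy-ai-gateway-basic \
  -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n envoy-gateway-system "svc/${ENVOY_SERVICE}" 8080:80 &
export GATEWAY_URL="http://localhost:8080"
```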
```shell
curl -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": "Hi."
      }
    ]
  }' \
  $GATEWAY_URL/v1/chat/completions
```
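On success, the gateway relays an OpenAI-style chat completion response. The values below are only illustrative of the general shape; IDs, token counts, and the reply text will differ:

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you today?" },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 9, "completion_tokens": 9, "total_tokens": 18 }
}
```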
## Troubleshooting
If you encounter issues:
- Verify your API key is correct and active
- Check pod status: `kubectl get pods -n envoy-gateway-system`
- View controller logs: `kubectl logs -n envoy-ai-gateway-system deployment/ai-gateway-controller`
- View external processor logs: `kubectl logs services/ai-eg-route-extproc-envoy-ai-gateway-basic`
- Common errors:
  - 401: Invalid API key
  - 429: Rate limit exceeded
  - 503: OpenAI service unavailable
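For 401 errors specifically, it can help to confirm that the key stored in the cluster is the one you intended. A sketch, assuming the Secret name and data key used in the earlier example (adjust both to match your `basic.yaml`):

```shell
# Sketch: print the stored API key for comparison (name and key are assumptions).
kubectl get secret envoy-ai-gateway-basic-openai-apikey \
  -o jsonpath='{.data.apiKey}' | base64 -d
```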
## Next Steps
After configuring OpenAI:
- Connect AWS Bedrock to add another provider