Before routing traffic through Traceport, the Gateway needs authorization to communicate with your downstream LLM providers. The Integrations page lets you manage these connections. Navigate to Integrations under API Management in the sidebar.

Connecting a New Provider

1. Navigate to Integrations
   Open the Integrations page and click the Available tab to see providers you haven't connected yet.

2. Select a Provider
   Click on your intended provider to begin setup.

3. Add Your API Key
   Provide your provider API key from their developer console. Traceport securely stores and uses this key to forward requests.

4. Save
   Save the connection. Connected providers move to the Connected tab.
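Once a key is saved, Traceport attaches it when forwarding requests to the provider. The sketch below illustrates that forwarding step under stated assumptions: the endpoint URLs, the header conventions, and the `build_forwarded_request` function and key-storage shape are all illustrative, not Traceport's actual implementation.

```python
# Hypothetical sketch: how a gateway might attach a stored provider key
# when forwarding a chat request. Endpoint URLs and the storage dict are
# illustrative assumptions, not Traceport internals.

def build_forwarded_request(provider_slug: str, stored_keys: dict, body: dict) -> dict:
    """Look up the stored API key for a provider slug and build the
    outbound URL, headers, and payload for the downstream request."""
    base_urls = {  # illustrative endpoints only
        "openai": "https://api.openai.com/v1/chat/completions",
        "anthropic": "https://api.anthropic.com/v1/messages",
    }
    key = stored_keys[provider_slug]  # KeyError means the provider isn't connected
    headers = {"Content-Type": "application/json"}
    if provider_slug == "anthropic":
        headers["x-api-key"] = key                   # Anthropic-style auth header
    else:
        headers["Authorization"] = f"Bearer {key}"   # Bearer-token auth header
    return {"url": base_urls[provider_slug], "headers": headers, "json": body}
```

The point is that your application never handles the provider key directly after setup; the gateway resolves the slug to a stored credential at request time.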

Supported Providers

| Provider | Slug | Capabilities | Description |
| --- | --- | --- | --- |
| OpenAI | openai | Chat, Embed, Batch | GPT-5.4 series, GPT-4o, DALL-E 3 |
| Anthropic | anthropic | Chat | Claude 4.6 (Opus, Sonnet, Haiku), Claude 3.5 |
| Google Gemini | google | Chat, Embed | Gemini 3.1 (Pro, Flash), Gemini 1.5 |
| Google Vertex AI | vertex | Chat, Embed | Vertex-hosted foundation models |
| Groq | groq | Chat, Batch, Files | Ultra-fast LPUs for Llama 4 and Mistral |
| Mistral AI | mistral | Chat, Embed, Batch | Native flagship models (Large 3, Small 4) |
| Together AI | together | Chat, Embed | Large-scale hosting for specialized OSS models |
| DeepSeek | deepseek | Chat | High-performance reasoning (V3.2, R1) |
| Fireworks AI | fireworks | Chat, Embed | Ultra-fast inference for curated open models |
| Perplexity | perplexity | Chat (Citations) | Search-augmented AI with live citations |
| OpenRouter | openrouter | Chat (Unified) | Aggregated access to 200+ flagship models |

🤖 Latest Models Catalog (April 2026)

Traceport supports the following frontier models across native and aggregated provider connections:

OpenAI

  • gpt-5.4-pro: Next-gen flagship with 2M context window.
  • gpt-5.4-thinking: Specialized for complex, long-form reasoning.
  • gpt-5.4-mini: High-speed frontier model for efficient tasks.

Anthropic

  • claude-4-6-opus: Multi-modal flagship with 1M context.
  • claude-4-6-sonnet: The industry standard for balanced speed and intelligence.
  • claude-4-6-haiku: Ultra-fast multi-modal reasoning.

Google

  • gemini-3.1-pro: Advanced multi-modal reasoning with 2M context.
  • gemini-3.1-flash: Optimized for speed and high-throughput multi-modal tasks.

Open Source & Specialized Ecosystem

We provide high-performance inference for the leading open-weight models:
  • Llama 4: Supported via Groq and Together AI (405B Maverick, 70B Scout).
  • DeepSeek: DeepSeek V3.2 and R1 (Reasoning-focused).
  • Mistral: Mistral Large 3 and Small 4 native integrations.
  • Perplexity: Sonar Pro and Sonar Reasoning Pro with live search capability.

Managing Connected Providers

Connected providers appear in the Connected tab. From there you can:
  • Update API keys if they've been rotated.
  • Disconnect a provider if it's no longer needed.

Connect multiple providers to take full advantage of Configs: set up fallback routing so your app automatically switches providers during outages.
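The fallback idea can also be sketched client-side. In the hypothetical snippet below, `call_provider` is a stand-in for whatever request helper you use; a Traceport Config can express the same ordered-fallback behavior server-side so your application code stays simple.

```python
# Sketch of ordered fallback across connected providers: try each provider
# slug in turn and return the first successful response. All names here are
# illustrative, not part of the Traceport API.

def chat_with_fallback(providers: list, call_provider, prompt: str):
    """Try each provider in order; return the first successful response,
    or raise if every provider fails."""
    errors = {}
    for slug in providers:
        try:
            return call_provider(slug, prompt)
        except Exception as exc:  # e.g. outage, rate limit, auth failure
            errors[slug] = exc
    raise RuntimeError(f"All providers failed: {list(errors)}")
```

With two providers connected, an outage at the first simply shifts traffic to the second instead of surfacing an error to your users.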