
Get Up and Running

Step 1: Create an Account

Navigate to app.traceport.ai and sign up for a free account.
Step 2: Connect Your Providers

Traceport routes your LLM requests to the appropriate model providers. Connect your preferred providers first.
  1. Go to Integrations in the left sidebar.
  2. Select your provider under the “Available” section.
  3. Input your provider’s API key — it is securely stored and used to forward your prompts.
Start with one provider (e.g., OpenAI) and add more later.
Step 3: Generate a Traceport API Key

  1. Navigate to API Keys in the sidebar.
  2. Click Create New Key.
  3. Name your key (e.g., “Production API”).
  4. Copy the generated key.
Store your API key securely — it won’t be shown again after creation.
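One common way to keep the key out of source control is to read it from an environment variable at startup. The sketch below assumes a variable named `TRACEPORT_API_KEY`; that name is a convention of this example, not something Traceport requires.

```python
import os

def load_traceport_key(var: str = "TRACEPORT_API_KEY") -> str:
    """Fetch the Traceport API key from the environment, failing fast if absent."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable before starting the app")
    return key
```

Failing fast at startup surfaces a missing key immediately, rather than as an authentication error on the first request.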
Step 4: Update Your Codebase

Integrating Traceport requires minimal changes: point your SDK at Traceport's gateway URL and authenticate with your Traceport API key.
from openai import OpenAI

# Point the standard OpenAI client at the Traceport gateway instead of api.openai.com.
client = OpenAI(
    api_key="<YOUR_TRACEPORT_API_KEY>",
    base_url="https://api.traceport.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What is observability?"}
    ]
)

print(response.choices[0].message.content)
Traceport provides an OpenAI-compatible Gateway API, so you can use standard OpenAI SDKs (Python, Node.js) or any framework like LangChain or LlamaIndex that supports custom base URLs.
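If you prefer not to pull in an SDK, the gateway can also be called with just the standard library. This is a minimal sketch: the `/chat/completions` path and the response shape are assumed to mirror the OpenAI-compatible API shown above.

```python
import json
import urllib.request

GATEWAY = "https://api.traceport.ai/v1"

def build_chat_request(api_key: str, prompt: str, model: str = "gpt-4o") -> urllib.request.Request:
    """Assemble an OpenAI-compatible chat completion request for the gateway."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{GATEWAY}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Sending the request requires a valid key and network access:
# with urllib.request.urlopen(build_chat_request(key, "What is observability?")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the payload is plain OpenAI-style JSON, the same request body works unchanged whether you send it through this helper, the official SDKs, or a framework that accepts a custom base URL.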
Step 5: View Your Logs

Run your code, then return to your Traceport Dashboard and navigate to Logs. Your captured completions, latency, and cost appear in real time.

What’s Next?

API Logs

Deep-dive into individual request details, tokens, and costs.

Workflow Traces

Monitor complex multi-step AI workflows end-to-end.

Model Playground

Test and compare models interactively before writing code.

API Reference

Explore the full Traceport Gateway API documentation.