The Playground is an interactive environment for rapidly testing and refining your LLM interactions before you write a single line of code.
Navigate to Playground under AI Tools in the sidebar.
Interface Layout
The Playground is split into four sections:
- Model Selector — Choose your provider and model from a dropdown (supports OpenAI, Anthropic, Google, and more).
- System Prompt — Define the system instructions that set the model’s behavior and context.
- User Message — Write sample user messages to test how the model responds.
- Output Panel — View the model’s response in real-time.
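Under the hood, the system prompt and user message you enter in these panels typically map onto the chat-message roles that most providers expect. A minimal sketch of that payload shape, using the common OpenAI-style field names (illustrative only; Traceport's internal representation may differ):

```python
# Generic chat-completion request shape assembled from the Playground fields.
# Field names follow the widely used OpenAI-style convention and are an
# assumption here, not Traceport's documented schema.
request = {
    "model": "gpt-4o",  # from the Model Selector
    "messages": [
        # System Prompt panel -> "system" role message
        {"role": "system", "content": "You are a concise assistant."},
        # User Message panel -> "user" role message
        {"role": "user", "content": "Summarize our refund policy in one line."},
    ],
    "temperature": 0.7,  # sampling parameter
}
```

The Output Panel then renders whatever the provider returns for this request.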
Multi-Model Comparisons
One of Traceport’s standout features is the ability to compare multiple models side by side:
1. Add Comparison — Click the Add Comparison button to add another model column.
2. Select Models — Choose a different provider and model for each column (e.g., GPT-4o vs. Claude 3.5 Sonnet).
3. Run — Hit Run to dispatch your prompts to all selected models simultaneously.
4. Compare Results — Review the responses side by side to evaluate quality, speed, and cost tradeoffs.
Use multi-model comparison to quickly find the best model for your use case before committing to it in production.
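Conceptually, a side-by-side run sends the same system/user pair to every selected model in parallel and records each column's response and latency. A minimal sketch of that pattern in Python, using stub functions in place of real provider SDK calls (the stubs and model names are illustrative, not Traceport's internals):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stub callables standing in for real provider SDK calls; in a real
# harness these would invoke the OpenAI and Anthropic clients.
def fake_gpt4o(prompt: str) -> str:
    return f"[gpt-4o] reply to: {prompt}"

def fake_claude(prompt: str) -> str:
    return f"[claude-3-5-sonnet] reply to: {prompt}"

MODELS = {"gpt-4o": fake_gpt4o, "claude-3-5-sonnet": fake_claude}

def timed_call(name, fn, prompt):
    """Run one model call and measure its wall-clock latency."""
    start = time.perf_counter()
    response = fn(prompt)
    return name, response, time.perf_counter() - start

def run_comparison(system: str, user: str) -> dict:
    """Dispatch the same system/user pair to every model in parallel."""
    prompt = f"{system}\n\n{user}"
    results = {}
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(timed_call, n, fn, prompt)
                   for n, fn in MODELS.items()]
        for fut in futures:
            name, response, latency = fut.result()
            results[name] = {"response": response,
                             "latency_s": round(latency, 4)}
    return results
```

Each results column then carries both the response text and a latency figure, which is what makes the quality/speed tradeoff visible at a glance.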
Save as Prompt
Once satisfied with a configuration, click Save as Prompt to save the entire setup — system instructions, model, and parameters — to your Prompts library for version-controlled management and API access.
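A saved prompt bundles the system instructions, model choice, and sampling parameters into a single versionable record. A minimal sketch of what such a record might look like as an API payload, with hypothetical field names (Traceport's actual Prompts schema may differ):

```python
import json

def build_saved_prompt(name, system, model, **params):
    """Assemble a saved-prompt record; all field names here are
    illustrative assumptions, not Traceport's documented schema."""
    return {
        "name": name,
        "version": 1,  # version-controlled: bump on each saved change
        "system_prompt": system,
        "model": model,
        "parameters": params,  # e.g. temperature, max_tokens
    }

record = build_saved_prompt(
    "support-triage",
    "You are a helpful support triage assistant.",
    "gpt-4o",
    temperature=0.2,
    max_tokens=512,
)
# JSON body you might POST to a hypothetical Prompts endpoint.
payload = json.dumps(record, indent=2)
```

Because the whole configuration lives in one record, the same setup you refined in the Playground can be fetched over the API at runtime instead of being duplicated in application code.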