
---
sidebar_position: 3
---

import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';

# Open Models Configuration

RA.Aid supports a variety of open source and compatible model providers. This guide covers configuration options and best practices for using different models with RA.Aid.

## Overview

RA.Aid supports these model providers:

| Provider | Description | Key Features |
|----------|-------------|--------------|
| DeepSeek | AI lab (spun out of a Chinese hedge fund) building sophisticated LLMs | Strong open models such as R1 |
| OpenRouter | Multi-model gateway service | Access to 100+ models, unified API interface, pay-per-token |
| OpenAI-compatible | Self-hosted model endpoints | Compatible with Llama, Mistral, and other open models |
| Anthropic | Claude model series | 200k token context, strong tool use, JSON/XML parsing |
| Gemini | Google's multimodal models | Code generation in 20+ languages, parallel request support |

## Basic Configuration

1. Set your provider's API key:

   ```bash
   # Choose the appropriate provider
   export DEEPSEEK_API_KEY=your_key
   export OPENROUTER_API_KEY=your_key
   export OPENAI_API_KEY=your_key
   export ANTHROPIC_API_KEY=your_key
   export GEMINI_API_KEY=your_key
   ```

2. Run RA.Aid with your chosen provider:

   ```bash
   ra-aid -m "Your task" --provider <provider> --model <model>
   ```
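Putting the two steps together, a minimal session might look like the following. The provider, model, and task string are illustrative; substitute any pair from the sections below:

```bash
# Illustrative end-to-end example (OpenRouter + DeepSeek R1)
export OPENROUTER_API_KEY=your_key
ra-aid -m "Add unit tests for the utils module" --provider openrouter --model deepseek/deepseek-r1
```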

## Provider Configuration

### DeepSeek Models

DeepSeek offers powerful reasoning models optimized for complex tasks.

```bash
# Environment setup
export DEEPSEEK_API_KEY=your_api_key_here

# Basic usage
ra-aid -m "Your task" --provider deepseek --model deepseek-reasoner

# With temperature control
ra-aid -m "Your task" --provider deepseek --model deepseek-reasoner --temperature 0.7
```

**Available Models:**

- `deepseek-reasoner`: Optimized for reasoning tasks
- Access via OpenRouter: `deepseek/deepseek-r1`

### OpenRouter Integration

OpenRouter provides access to multiple open source models through a single API.

```bash
# Environment setup
export OPENROUTER_API_KEY=your_api_key_here

# Example commands
ra-aid -m "Your task" --provider openrouter --model mistralai/mistral-large-2411
ra-aid -m "Your task" --provider openrouter --model deepseek/deepseek-r1
```

**Popular Models:**

- `mistralai/mistral-large-2411`
- `anthropic/claude-3`
- `deepseek/deepseek-r1`

### OpenAI-compatible Endpoints

Use OpenAI-compatible API endpoints with custom hosting solutions.

```bash
# Environment setup
export OPENAI_API_KEY=your_api_key_here
export OPENAI_API_BASE=https://your-api-endpoint

# Usage
ra-aid -m "Your task" --provider openai-compatible --model your-model-name
```

**Configuration Options:**

- Set a custom base URL with `OPENAI_API_BASE`
- Supports temperature control
- Compatible with most OpenAI-style APIs
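As a concrete sketch, here is what pointing RA.Aid at a locally hosted server might look like. The port and `/v1` path are assumptions; match them to how your server is actually configured:

```bash
# Sketch: locally hosted OpenAI-compatible server (port and path assumed)
export OPENAI_API_KEY=local-placeholder   # many local servers ignore the key, but the variable must be set
export OPENAI_API_BASE=http://localhost:8000/v1
ra-aid -m "Your task" --provider openai-compatible --model your-model-name
```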

### Google Gemini Models

Google's Gemini models offer powerful multimodal capabilities with extensive code generation support.

```bash
# Environment setup
export GEMINI_API_KEY=your_api_key_here

# Basic usage
ra-aid -m "Your task" --provider gemini --model gemini-1.5-pro-latest

# With temperature control
ra-aid -m "Your task" --provider gemini --model gemini-1.5-flash-latest --temperature 0.5
```

**Available Models:**

- `gemini-pro`: Original Gemini Pro model
- `gemini-1.5-flash-latest`: Latest Gemini 1.5 Flash model (fast responses)
- `gemini-1.5-pro-latest`: Latest Gemini 1.5 Pro model (strong reasoning)
- `gemini-1.5-flash`: Gemini 1.5 Flash release
- `gemini-1.5-pro`: Gemini 1.5 Pro release
- `gemini-1.0-pro`: Original Gemini 1.0 Pro model

**Configuration Notes:**

- Gemini models support large context windows; the exact size varies by model, so check Google's model documentation
- Temperature control is supported for creative vs. deterministic responses
- Obtain your API key from Google AI Studio

## Advanced Configuration

### Expert Tool Configuration

Configure the expert model for specialized tasks; this role usually benefits from a more powerful (if slower) reasoning model:

```bash
# DeepSeek expert
export EXPERT_DEEPSEEK_API_KEY=your_key
ra-aid -m "Your task" --expert-provider deepseek --expert-model deepseek-reasoner

# OpenRouter expert
export EXPERT_OPENROUTER_API_KEY=your_key
ra-aid -m "Your task" --expert-provider openrouter --expert-model mistralai/mistral-large-2411

# Gemini expert
export EXPERT_GEMINI_API_KEY=your_key
ra-aid -m "Your task" --expert-provider gemini --expert-model gemini-2.0-flash-thinking-exp-1219
```

### Temperature Settings

Control model creativity vs determinism:

```bash
# More deterministic (good for coding)
ra-aid -m "Your task" --temperature 0.2

# More creative (good for brainstorming)
ra-aid -m "Your task" --temperature 0.8
```

**Note:** Not all models support temperature control; check your provider's documentation.

## Best Practices

- Set environment variables in your shell configuration file
- Use lower temperatures (0.1-0.3) for coding tasks
- Test different models to find the best fit for your use case
- Consider using expert mode for complex programming tasks
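The first practice above can be sketched as follows, assuming bash (use `~/.zshrc` or your shell's equivalent startup file instead where appropriate):

```bash
# Append provider keys to your shell startup file so new sessions pick them up
cat >> ~/.bashrc <<'EOF'
export DEEPSEEK_API_KEY=your_key
export OPENROUTER_API_KEY=your_key
EOF

# Reload the current session
source ~/.bashrc
```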

## Environment Variables

Complete list of supported environment variables:

| Variable | Provider | Purpose |
|----------|----------|---------|
| `OPENROUTER_API_KEY` | OpenRouter | Main API access |
| `DEEPSEEK_API_KEY` | DeepSeek | Main API access |
| `OPENAI_API_KEY` | OpenAI-compatible | API access |
| `OPENAI_API_BASE` | OpenAI-compatible | Custom endpoint |
| `ANTHROPIC_API_KEY` | Anthropic | API access |
| `GEMINI_API_KEY` | Gemini | API access |
| `EXPERT_OPENROUTER_API_KEY` | OpenRouter | Expert tool |
| `EXPERT_DEEPSEEK_API_KEY` | DeepSeek | Expert tool |
| `EXPERT_GEMINI_API_KEY` | Gemini | Expert tool |

## Troubleshooting

- Verify API keys are set correctly
- Check endpoint URLs for OpenAI-compatible setups
- Monitor API rate limits and quotas
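One quick way to work through the first two checks is a small helper that reports unset variables before you invoke `ra-aid`. `check_env` is a hypothetical bash snippet for illustration, not part of RA.Aid:

```bash
# Sketch: report which required environment variables are missing (bash)
check_env() {
  local missing=0
  for var in "$@"; do
    # ${!var} is bash indirect expansion: the value of the variable named $var
    if [ -z "${!var}" ]; then
      echo "missing: $var"
      missing=1
    fi
  done
  return $missing
}

# Example: verify an OpenAI-compatible setup before running ra-aid
check_env OPENAI_API_KEY OPENAI_API_BASE || echo "set the variables above first"
```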