Use Models Directly
This example shows how to use Athena’s language models directly when building with our Python SDK.
- Use any model available in your workspace
- Fully compatible with LangChain, a popular open-source library for LLM apps
- Get started with `athena.llm.invoke("Hello!")`
- Efficient parallel processing and support for high volumes with `llm.batch()`
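The `invoke()`/`batch()` interface above can be sketched with a minimal stand-in class. The real `athena.llm` object is assumed to be a LangChain-compatible runnable; the `EchoLLM` stub below is hypothetical and exists only so the example is self-contained and runnable.

```python
from concurrent.futures import ThreadPoolExecutor

class EchoLLM:
    """Hypothetical stand-in for athena.llm, illustrating the interface only."""

    def invoke(self, prompt: str) -> str:
        # A real model would return a completion; we echo for illustration.
        return f"echo: {prompt}"

    def batch(self, prompts: list[str]) -> list[str]:
        # batch() runs many prompts in parallel and preserves input order.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(self.invoke, prompts))

llm = EchoLLM()                      # in real code: from athena import llm
print(llm.invoke("Hello!"))          # single prompt
print(llm.batch(["Hi", "Bye"]))      # many prompts, results in input order
```

Because `batch()` fans work out across a thread pool, it is the preferred entry point for high-volume workloads; `invoke()` is the simple one-prompt path.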
Available Models
Specify models explicitly using `with_config()`. The default model is Claude 4 Opus.
Available models include:
- `claude_3_7_sonnet`: Claude 3.7 Sonnet
- `claude_4_sonnet`: Claude 4 Sonnet
- `claude_4_opus`: Claude 4 Opus (default)
- `openai_gpt_4_5`: OpenAI GPT-4.5 Preview
- `openai_gpt_4`: OpenAI GPT-4
- `openai_gpt_4_turbo`: OpenAI GPT-4 Turbo
- `openai_gpt_4_turbo_preview`: OpenAI GPT-4 Turbo Preview
- `openai_gpt_4o`: OpenAI GPT-4o
- `openai_gpt_4o_mini`: OpenAI GPT-4o Mini
- `openai_o3_mini`: OpenAI o3 Mini
- `openai_o3_low_reasoning`: OpenAI o3 (Low Reasoning)
- `openai_o3_medium_reasoning`: OpenAI o3 (Medium Reasoning)
- `openai_o3_high_reasoning`: OpenAI o3 (High Reasoning)
- `openai_o3_mini_low_reasoning`: OpenAI o3 Mini (Low Reasoning)
- `openai_o3_mini_high_reasoning`: OpenAI o3 Mini (High Reasoning)
- `openai_o4_mini`: OpenAI o4 Mini