Use Models Directly

This example shows how to use Athena’s language models for simple text generation when building with our Python SDK.

  • Use any model available in your workspace
  • Simple input/output for basic LLM calls
  • Get started with athena.agents.general.invoke()

This guide focuses on simple text generation. For multi-step workflows with tools like web browsing and search, see Build with Agents.

1. Install Package

!pip install -U athena-intelligence

2. Set Up Client

from athena import GeneralAgentConfig, GeneralAgentRequest
from athena.client import Athena

# Initialize client
athena = Athena(api_key="<YOUR_API_KEY>")
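If you prefer not to hard-code the key, you can read it from an environment variable instead. This is a minimal sketch; `ATHENA_API_KEY` is an assumed variable name, not an official SDK convention:

```python
import os

# Read the API key from the environment; fall back to a placeholder.
# ATHENA_API_KEY is an assumed variable name, not an SDK convention.
api_key = os.environ.get("ATHENA_API_KEY", "<YOUR_API_KEY>")
```

Pass the resulting `api_key` to `Athena(api_key=api_key)` as above.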
3. Basic Usage

For simple text generation, use the General Agent with no tools enabled:

# Create a simple request with no tools
config = GeneralAgentConfig(enabled_tools=[])

# Simple question
response = athena.agents.general.invoke(
    request=GeneralAgentRequest(
        config=config,
        messages=[{"type": "human", "content": "What is the capital of France?"}]
    )
)

# Get the response text
print(response.messages[-1]["kwargs"]["content"])
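Since every example in this guide reads the reply the same way, it can help to factor that lookup into a small helper. This is a sketch assuming the serialized message shape shown above, with the text under `["kwargs"]["content"]`; `last_message_text` and the mocked payload are illustrative, not part of the SDK:

```python
def last_message_text(messages):
    """Return the text of the final message from an invoke() response."""
    return messages[-1]["kwargs"]["content"]

# Illustration against a mocked response payload:
mocked = [
    {"kwargs": {"content": "What is the capital of France?"}},
    {"kwargs": {"content": "The capital of France is Paris."}},
]
print(last_message_text(mocked))  # → The capital of France is Paris.
```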
4. Available Models

Specify a model explicitly using the model parameter in the config. The default model is Claude 4 Opus.

Available models include:

  • claude_3_7_sonnet: Claude 3.7 Sonnet
  • claude_4_sonnet: Claude 4 Sonnet
  • claude_4_opus: Claude 4 Opus (default)
  • openai_gpt_4_5: OpenAI GPT-4.5 Preview
  • openai_gpt_4: OpenAI GPT-4
  • openai_gpt_4_turbo: OpenAI GPT-4 Turbo
  • openai_gpt_4_turbo_preview: OpenAI GPT-4 Turbo Preview
  • openai_gpt_4o: OpenAI GPT-4o
  • openai_gpt_4o_mini: OpenAI GPT-4o Mini
  • openai_o3_mini: OpenAI o3 Mini
  • openai_o3_low_reasoning: OpenAI o3 (Low Reasoning)
  • openai_o3_medium_reasoning: OpenAI o3 (Medium Reasoning)
  • openai_o3_high_reasoning: OpenAI o3 (High Reasoning)
  • openai_o3_mini_low_reasoning: OpenAI o3 Mini (Low Reasoning)
  • openai_o3_mini_high_reasoning: OpenAI o3 Mini (High Reasoning)
  • openai_o4_mini: OpenAI o4 Mini
# Specify a model
config = GeneralAgentConfig(
    enabled_tools=[],
    model="claude_3_7_sonnet"
)

response = athena.agents.general.invoke(
    request=GeneralAgentRequest(
        config=config,
        messages=[{"type": "human", "content": "Who are you?"}]
    )
)
print(response.messages[-1]["kwargs"]["content"])

# Use another model
config_gpt4 = GeneralAgentConfig(
    enabled_tools=[],
    model="openai_gpt_4o"
)

response = athena.agents.general.invoke(
    request=GeneralAgentRequest(
        config=config_gpt4,
        messages=[{"type": "human", "content": "Explain quantum computing briefly"}]
    )
)
print(response.messages[-1]["kwargs"]["content"])
5. Multiple Questions

Process multiple questions by making separate requests:

prompts = [
    "Explain the theory of relativity",
    "What is machine learning?",
    "How does photosynthesis work?",
    "Describe the water cycle"
]

config = GeneralAgentConfig(enabled_tools=[])

for i, prompt in enumerate(prompts):
    response = athena.agents.general.invoke(
        request=GeneralAgentRequest(
            config=config,
            messages=[{"type": "human", "content": prompt}]
        )
    )
    print(f"Response {i+1}:")
    print(response.messages[-1]["kwargs"]["content"])
    print("-" * 40)
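Because each request is independent, they can also be issued concurrently with a thread pool. This is a sketch: `ask()` is a hypothetical stand-in for the `athena.agents.general.invoke(...)` call in the loop above, mocked here so the example runs on its own:

```python
from concurrent.futures import ThreadPoolExecutor

def ask(prompt: str) -> str:
    # Hypothetical stand-in for athena.agents.general.invoke(...);
    # replace the body with the real call from the loop above.
    return f"answer to: {prompt}"

prompts = ["What is machine learning?", "How does photosynthesis work?"]
with ThreadPoolExecutor(max_workers=4) as pool:
    answers = list(pool.map(ask, prompts))
print(answers)
```

Keep the worker count modest to stay within any API rate limits that apply to your workspace.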
6. Multi-turn Conversations

Maintain context by passing the full message history:

from langchain_core.messages import HumanMessage
from langchain_core.load import load

config = GeneralAgentConfig(enabled_tools=[])

# First message
response = athena.agents.general.invoke(
    request=GeneralAgentRequest(
        config=config,
        messages=[{"type": "human", "content": "What is Python?"}]
    )
)

# Continue the conversation with context
messages = load(response.messages) + [HumanMessage(content="What are its main use cases?")]

continued_response = athena.agents.general.invoke(
    request=GeneralAgentRequest(
        config=config,
        messages=messages
    )
)
print(continued_response.messages[-1]["kwargs"]["content"])
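The accumulate-and-resend pattern above can be wrapped in a small helper so each turn appends the user message and the model's reply to the running history. `chat_turn` is illustrative, not part of the SDK; its `invoke` argument stands in for the real `athena.agents.general.invoke(...)` call and is mocked here so the sketch runs on its own:

```python
def chat_turn(history, user_text, invoke):
    """Append a human message, call the model, and return the updated history.

    `invoke` is any callable taking the message list and returning the
    reply message; in real use it would wrap the Athena call shown above.
    """
    history = history + [{"type": "human", "content": user_text}]
    reply = invoke(history)
    return history + [reply]

# Illustration with a mocked invoke that reports how many messages it saw:
fake = lambda msgs: {"type": "ai", "content": f"({len(msgs)} messages seen)"}
h = chat_turn([], "What is Python?", fake)
h = chat_turn(h, "What are its main use cases?", fake)
print(h[-1]["content"])  # → (3 messages seen)
```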