# Call

Simple one-off LLM calls without history or token tracking.
## Quick Start
```python
from irouter import Call
from dotenv import load_dotenv

load_dotenv()  # Load OPENROUTER_API_KEY from .env

# Single model
c = Call("moonshotai/kimi-k2:free")
response = c("Who played guitar on Steely Dan's Kid Charlemagne?")
print(response)
```
## Single Model Usage

Initialize with a model slug:
```python
c = Call("moonshotai/kimi-k2:free")

# or with an explicit API key
c = Call("moonshotai/kimi-k2:free", api_key="your_api_key")

response = c("Your question here")
```
Get the raw `ChatCompletion` object:
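A minimal sketch of what this looks like, assuming the call accepts a `raw=True` keyword that returns the provider's full `ChatCompletion` object instead of the extracted text (check the irouter API reference for the exact flag name):

```python
# Assumption: passing raw=True returns the full ChatCompletion
# object rather than the plain reply string.
c = Call("moonshotai/kimi-k2:free")
completion = c("Who played guitar on Kid Charlemagne?", raw=True)

print(completion.choices[0].message.content)  # the reply text
print(completion.usage)                       # token usage metadata
```

The raw object is useful when you need metadata (token counts, finish reason) that the plain string response discards.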
## Multiple Models

Call multiple models simultaneously:
```python
models = ["moonshotai/kimi-k2:free", "google/gemma-3n-e2b-it:free"]
mc = Call(models)
responses = mc("Who played guitar on Kid Charlemagne?")
# Returns: {"moonshotai/kimi-k2:free": "Larry Carlton", "google/gemma-3n-e2b-it:free": "David Spengler"}

# Access a specific model's response
print(responses["moonshotai/kimi-k2:free"])
```
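Because the multi-model call returns a plain dict keyed by model slug, you can iterate over it to compare answers side by side. The dict literal below stands in for a real response (a sketch, not live model output):

```python
# Stand-in for the dict a multi-model Call returns (model slug -> reply text).
responses = {
    "moonshotai/kimi-k2:free": "Larry Carlton",
    "google/gemma-3n-e2b-it:free": "David Spengler",
}

# Compare each model's answer to the same prompt.
for model, reply in responses.items():
    print(f"{model}: {reply}")
```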
## Message Format

Pass conversation history as a list of messages:
```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Amsterdam."},
    {"role": "user", "content": "No, why would you say Amsterdam?"},
]

c = Call("moonshotai/kimi-k2:free")
response = c(messages)
```

**Note:** When using the message format, the `system` parameter is ignored. Include system messages in your message list.
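Since `Call` keeps no history, a multi-turn conversation means appending each reply to the list yourself before the next call. A sketch of that pattern (the reply string here is a placeholder, not real model output):

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
]

# response = c(messages)  # would return the assistant's reply text
response = "The capital of France is Paris."  # placeholder standing in for the reply

# Append the assistant turn, then the next user turn, and call again with
# the extended list so the model sees the full conversation.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "What's its population?"})
```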
## Image Support

Both image URLs and local images are supported, in `.jpg`, `.jpeg`, `.png`, and `.webp` formats.
Pass images with text for vision-capable models:

```python
c = Call("gpt-4o-mini")

# Image URL + text
response = c([
    "https://www.petlandflorida.com/wp-content/uploads/2022/04/shutterstock_1290320698-1-scaled.jpg",
    "What's in this image?",
])
# Example output: "The image shows a cute puppy..."

# Local image + text
response = c(["../assets/puppy.jpg", "Describe this photo"])
```
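Because only those four extensions are accepted, it can help to validate local paths before making a call. `is_supported_image` below is a hypothetical helper for illustration, not part of irouter:

```python
from pathlib import Path

SUPPORTED_FORMATS = {".jpg", ".jpeg", ".png", ".webp"}

def is_supported_image(path: str) -> bool:
    """Hypothetical helper: check a local file's extension (case-insensitively)
    against the formats the docs list as supported."""
    return Path(path).suffix.lower() in SUPPORTED_FORMATS
```

Checking up front avoids spending a request on an image the API will reject.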