# /interactions

| Feature | Supported | Notes |
|---|---|---|
| Logging | ✅ | Works across all integrations |
| Streaming | ✅ | |
| Load balancing | ✅ | Between supported models |
| Supported LLM providers | All LiteLLM supported providers | `openai`, `anthropic`, `bedrock`, `vertex_ai`, `gemini`, `azure`, `azure_ai`, etc. |
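
Load balancing follows LiteLLM's usual router pattern: list multiple deployments under the same `model_name` in the proxy config, and the proxy distributes `/interactions` requests across them. A minimal sketch (the second entry and its `GEMINI_API_KEY_2` variable are hypothetical, shown only to illustrate the shape):

```yaml
model_list:
  # Two deployments sharing one model_name; the proxy
  # load-balances requests between them.
  - model_name: gemini-flash
    litellm_params:
      model: gemini/gemini-2.5-flash
      api_key: os.environ/GEMINI_API_KEY
  - model_name: gemini-flash
    litellm_params:
      model: gemini/gemini-2.5-flash
      api_key: os.environ/GEMINI_API_KEY_2  # hypothetical second key
```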

## LiteLLM Python SDK Usage

### Quick Start

**Create Interaction**

```python
from litellm import create_interaction
import os

os.environ["GEMINI_API_KEY"] = "your-api-key"

response = create_interaction(
    model="gemini/gemini-2.5-flash",
    input="Tell me a short joke about programming."
)

print(response.outputs[-1].text)
```

### Async Usage

**Async Create Interaction**

```python
from litellm import acreate_interaction
import os
import asyncio

os.environ["GEMINI_API_KEY"] = "your-api-key"

async def main():
    response = await acreate_interaction(
        model="gemini/gemini-2.5-flash",
        input="Tell me a short joke about programming."
    )
    print(response.outputs[-1].text)

asyncio.run(main())
```

### Streaming

**Streaming Interaction**

```python
from litellm import create_interaction
import os

os.environ["GEMINI_API_KEY"] = "your-api-key"

response = create_interaction(
    model="gemini/gemini-2.5-flash",
    input="Write a 3 paragraph story about a robot.",
    stream=True
)

for chunk in response:
    print(chunk)
```
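
Streaming should also compose with the async client; a sketch, assuming `acreate_interaction(..., stream=True)` yields an async iterator of chunks, mirroring the sync example above:

```python
from litellm import acreate_interaction
import os
import asyncio

os.environ["GEMINI_API_KEY"] = "your-api-key"

async def main():
    # Assumption: with stream=True the awaited call returns an
    # async iterator of chunks, like the sync client does.
    response = await acreate_interaction(
        model="gemini/gemini-2.5-flash",
        input="Write a 3 paragraph story about a robot.",
        stream=True
    )
    async for chunk in response:
        print(chunk)

asyncio.run(main())
```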

## LiteLLM AI Gateway (Proxy) Usage

### Setup

Add this to your LiteLLM proxy `config.yaml`:

**config.yaml**

```yaml
model_list:
  - model_name: gemini-flash
    litellm_params:
      model: gemini/gemini-2.5-flash
      api_key: os.environ/GEMINI_API_KEY
```

Start litellm:

```shell
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000
```

### Test Request

**Create Interaction**

```shell
curl -X POST "http://localhost:4000/v1beta/interactions" \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini/gemini-2.5-flash",
    "input": "Tell me a short joke about programming."
  }'
```

Streaming:

**Streaming Interaction**

```shell
curl -N -X POST "http://localhost:4000/v1beta/interactions" \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini/gemini-2.5-flash",
    "input": "Write a 3 paragraph story about a robot.",
    "stream": true
  }'
```

Get Interaction:

**Get Interaction by ID**

```shell
curl "http://localhost:4000/v1beta/interactions/{interaction_id}" \
  -H "Authorization: Bearer sk-1234"
```

## Request/Response Format

### Request Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model to use (e.g., `gemini/gemini-2.5-flash`) |
| `input` | string | Yes | The input text for the interaction |
| `stream` | boolean | No | Enable streaming responses |
| `tools` | array | No | Tools available to the model |
| `system_instruction` | string | No | System instructions for the model |
| `generation_config` | object | No | Generation configuration |
| `previous_interaction_id` | string | No | ID of a previous interaction, for context |
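
The optional parameters combine in one call. A sketch, assuming the SDK forwards them as keyword arguments; the `generation_config` fields (`temperature`, `max_output_tokens`) are Gemini-style assumptions rather than values documented above, and the interaction ID is illustrative:

```python
from litellm import create_interaction

# Hypothetical values throughout; shapes follow the
# parameter table above.
response = create_interaction(
    model="gemini/gemini-2.5-flash",
    input="Now make the joke about databases instead.",
    system_instruction="You are a terse comedian.",
    generation_config={"temperature": 0.7, "max_output_tokens": 256},
    previous_interaction_id="interaction_abc123",
)
print(response.outputs[-1].text)
```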

### Response Format

```json
{
  "id": "interaction_abc123",
  "object": "interaction",
  "model": "gemini-2.5-flash",
  "status": "completed",
  "created": "2025-01-15T10:30:00Z",
  "updated": "2025-01-15T10:30:05Z",
  "role": "model",
  "outputs": [
    {
      "type": "text",
      "text": "Why do programmers prefer dark mode? Because light attracts bugs!"
    }
  ],
  "usage": {
    "total_input_tokens": 10,
    "total_output_tokens": 15,
    "total_tokens": 25
  }
}
```
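
In the SDK the same fields are exposed on the response object; a short sketch that walks only the fields shown in the schema above, assuming attribute access as in the Quick Start example:

```python
from litellm import create_interaction

response = create_interaction(
    model="gemini/gemini-2.5-flash",
    input="Tell me a short joke about programming."
)

# Iterate the outputs array and report token usage,
# using only fields from the schema above.
for output in response.outputs:
    if output.type == "text":
        print(output.text)

usage = response.usage
print(f"tokens: {usage.total_tokens} "
      f"(in={usage.total_input_tokens}, out={usage.total_output_tokens})")
```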

## Calling non-Interactions API endpoints (/interactions to /responses Bridge)

LiteLLM lets you call models that don't natively support the Interactions API by bridging requests to LiteLLM's /responses endpoint. This is useful for calling OpenAI, Anthropic, and other providers through /interactions.

### Python SDK Usage

**SDK Usage**

```python
import litellm
import os

# Set API key
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Non-streaming interaction
response = litellm.interactions.create(
    model="gpt-4o",
    input="Tell me a short joke about programming."
)

print(response.outputs[-1].text)
```
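
The same call shape should work for other bridged providers; a sketch using Anthropic, where the model name is illustrative and `ANTHROPIC_API_KEY` is assumed to be set:

```python
import litellm
import os

os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"

# Illustrative model name; any Anthropic model LiteLLM can route
# to /responses should bridge the same way.
response = litellm.interactions.create(
    model="anthropic/claude-sonnet-4-20250514",
    input="Tell me a short joke about programming."
)
print(response.outputs[-1].text)
```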

### LiteLLM Proxy Usage

Setup Config:

**Example Configuration**

```yaml
model_list:
  - model_name: openai-model
    litellm_params:
      model: gpt-4o
      api_key: os.environ/OPENAI_API_KEY
```

Start Proxy:

**Start LiteLLM Proxy**

```shell
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000
```

Make Request:

**non-Interactions API Model Request**

```shell
curl http://localhost:4000/v1beta/interactions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "openai-model",
    "input": "Tell me a short joke about programming."
  }'
```

## Supported Providers

| Provider | Usage |
|---|---|
| Google AI Studio | See the SDK and proxy examples above |
| All other LiteLLM providers | See the /interactions to /responses bridge above |