Quickstart

Track your first agent in 60 seconds.

One wrapper. Four lines. Full visibility into token counts, models, latency, retries, and dollar spend — across every agent, workflow, and service you run.

1
Install the SDK

Add Spendara to your project. The SDK wraps your existing AI client calls with zero changes to your logic.

Open source & on PyPI. Install the Python SDK in one command. Zero dependencies, full type hints, MIT license. View on GitHub
Python
shell
pip install spendara
TypeScript
shell
npm install spendara
LangChain
shell
pip install spendara langchain
2
Set your API key

Set SPENDARA_API_KEY as an environment variable. Never hard-code it.

shell
export SPENDARA_API_KEY="spr_live_xxxxxxxxxxxxxxxxxxxxxxxx"
.env
SPENDARA_API_KEY=spr_live_xxxxxxxxxxxxxxxxxxxxxxxx
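If you want your service to fail fast when the key is missing, a minimal startup check works; this helper is not part of the SDK, and the spr_live_ prefix is simply taken from the example key above:

```python
import os

def load_spendara_key() -> str:
    # Hypothetical startup check: the "spr_live_" prefix matches the example key
    key = os.environ.get("SPENDARA_API_KEY", "")
    if not key.startswith("spr_live_"):
        raise RuntimeError("SPENDARA_API_KEY is missing or not a live key")
    return key

# Demo only: inject a placeholder value so the check passes
os.environ["SPENDARA_API_KEY"] = "spr_live_" + "x" * 24
print(load_spendara_key()[:9])  # → spr_live_
```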
3
Wrap your agent

Add Spendara to your existing code. Pick your runtime below — the wrapper captures model, tokens, latency, retries, and dollar cost without touching your agent logic.

Use the @spendara.track() decorator or the spendara.run() context manager. Both capture the same data.

Python — OpenAI
import spendara
from openai import OpenAI

# Initialize once at startup
spendara.init()  # reads SPENDARA_API_KEY from env

client = OpenAI()

@spendara.track(
    agent_id="research-agent",
    workflow_id="market-analysis-v2",
)
def run_research(query: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}]
    )
    return response.choices[0].message.content

# That's it. Every call now streams cost events to Spendara.
result = run_research("Summarize Q1 earnings for NVDA")
Python — Anthropic
import spendara
import anthropic

spendara.init()
client = anthropic.Anthropic()

# Use context manager for fine-grained run boundaries
def run_agent(task: str) -> str:
    with spendara.run(
        agent_id="summarizer-agent",
        workflow_id="daily-digest",
        tags={"env": "production", "team": "content"},
    ) as run:
        message = client.messages.create(
            model="claude-opus-4-5",
            max_tokens=1024,
            messages=[{"role": "user", "content": task}]
        )
        # run.cost, run.tokens, run.latency_ms available here
        return message.content[0].text

Wrap any OpenAI or Anthropic SDK call. Works with async/await and streams.

TypeScript — OpenAI
import { Spendara } from 'spendara';
import OpenAI from 'openai';

// Initialize once
const spendara = new Spendara(); // reads SPENDARA_API_KEY from env
const openai = new OpenAI();

async function runAgent(query: string): Promise<string> {
  return spendara.track(
    { agentId: 'research-agent', workflowId: 'market-analysis-v2' },
    async (ctx) => {
      const response = await openai.chat.completions.create({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: query }],
      });

      // ctx.cost, ctx.tokens, ctx.latencyMs available
      return response.choices[0].message.content ?? '';
    }
  );
}
TypeScript — Anthropic
import { Spendara } from 'spendara';
import Anthropic from '@anthropic-ai/sdk';

const spendara = new Spendara();
const anthropic = new Anthropic();

async function summarize(text: string): Promise<string> {
  return spendara.track(
    {
      agentId: 'summarizer-agent',
      workflowId: 'daily-digest',
      tags: { env: 'production', team: 'content' },
    },
    async () => {
      const message = await anthropic.messages.create({
        model: 'claude-opus-4-5',
        max_tokens: 1024,
        messages: [{ role: 'user', content: text }],
      });

      return (message.content[0] as Anthropic.TextBlock).text;
    }
  );
}

SpendaraCallbackHandler hooks into LangChain's callback system. Every LLM call, tool call, and retrieval is captured automatically — no decorator needed.

Python — LangChain
import spendara
from spendara.integrations.langchain import SpendaraCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

spendara.init()

# One handler. Automatically captures every LLM call,
# tool call, retrieval, and chain step in the workflow.
handler = SpendaraCallbackHandler(
    agent_id="research-agent",
    workflow_id="market-analysis-v2",
    tags={"env": "production"},
)

llm = ChatOpenAI(
    model="gpt-4o",
    callbacks=[handler],  # ← that's it
)

response = llm.invoke([HumanMessage(content="What drove NVDA revenue in Q1?")])
print(response.content)
Python — LangGraph
from typing import TypedDict

from spendara.integrations.langchain import SpendaraCallbackHandler
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# LangGraph needs a state schema, not a bare dict
class State(TypedDict):
    messages: list

handler = SpendaraCallbackHandler(
    agent_id="graph-agent",
    workflow_id="multi-step-research",
)

# Attach the handler to the LLM; every node that calls it is tracked
llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

def research_node(state: State) -> State:
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response]}

graph = StateGraph(State)
graph.add_node("research", research_node)
graph.set_entry_point("research")
graph.add_edge("research", END)
app = graph.compile()

# Each graph invocation = one tracked workflow run in Spendara
result = app.invoke({"messages": [HumanMessage("Analyze market trends")]})
4
View your spend

After the first tracked call, your data appears in the Spendara dashboard within seconds. No additional config — agent spend, workflow breakdowns, and anomaly detection are all live.

shell
# Verify events are flowing
spendara verify
# → ✓ API key valid
# → ✓ 3 events received (last: 0.4s ago)
# → ✓ Dashboard: https://spendara.polsia.app/dashboard

Open the live demo dashboard → to see what your data will look like. The demo runs on realistic mock traffic across 10 agents.

What gets tracked automatically

Spendara captures these fields on every tracked call — no extra instrumentation needed.

💰 Dollar cost: per call and per workflow
🔢 Token counts: input, output, cached
⏱️ Latency: TTFT + total duration
🤖 Model + provider: gpt-4o, claude-opus-4-5, etc.
🔁 Retries: count and backoff reason
🛠️ Tool calls: name, args, and cost share
🗄️ Embeddings: vector calls and batch cost
🔍 Vector DB queries: Pinecone, Weaviate, pgvector
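Under the hood, per-call dollar cost is just token counts multiplied by per-token prices. A self-contained sketch of that arithmetic; the prices below are placeholders, not real provider or Spendara pricing:

```python
# Placeholder per-1M-token prices: illustrative only, not real pricing
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call: tokens times per-token price."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(f"${call_cost('gpt-4o', 1200, 350):.4f}")  # → $0.0065
```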

Open source. MIT licensed. pip install spendara.

The Python SDK is live on PyPI. Zero runtime dependencies — add it to any project in 30 seconds.

spendara.polsia.app/dashboard (your dashboard after step 4)

30-day spend: $2,841 · Agents tracked: 10 · Avg / day: $94 · Anomalies: 3

Daily spend — last 30 days
Top agents by cost: research-agent $842 · summarizer-agent $629 · graph-agent $511

Open live demo → (realistic mock data, no login required)
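The "Top agents by cost" panel is a group-by over the cost events the SDK streams. A self-contained sketch with mock events; the event field names here are assumptions for illustration, not Spendara's wire format:

```python
from collections import defaultdict

# Mock cost events (field names are illustrative, not Spendara's schema)
events = [
    {"agent_id": "research-agent", "cost": 0.0065},
    {"agent_id": "research-agent", "cost": 0.0041},
    {"agent_id": "summarizer-agent", "cost": 0.0032},
]

# Sum cost per agent
totals = defaultdict(float)
for event in events:
    totals[event["agent_id"]] += event["cost"]

# Rank agents by total spend, highest first
for agent, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{agent}  ${cost:.4f}")
```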