Warning
This framework is experimental. It is not stable and is not guaranteed to be maintained in the future. For evaluation purposes only.
Python toolkit for building model-powered apps and agent loops.
```shell
uv add vercel-ai-sdk
```

```python
import asyncio

import ai


@ai.tool
async def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny, 72F in {city}"


async def main() -> None:
    model = ai.ai_gateway("anthropic/claude-sonnet-4")
    agent = ai.agent(tools=[get_weather])
    messages = [
        ai.system_message("You are a helpful weather assistant."),
        ai.user_message("What's the weather in Tokyo?"),
    ]
    async for msg in agent.run(model, messages):
        if msg.text_delta:
            print(msg.text_delta, end="", flush=True)
    print()


if __name__ == "__main__":
    asyncio.run(main())
```

- `ai.openai(model_id)` — provider; callable, returns `Model`
- `ai.anthropic(model_id)` — provider; callable, returns `Model`
- `ai.ai_gateway(model_id)` — provider; callable, returns `Model`
- `provider.list()` — list available model IDs from the provider API
- `ai.stream(model, messages, ...)` — streaming generation (supports `tools=`, `output_type=`)
- `ai.generate(model, messages, p)` — non-streaming generation (`ImageParams`, `VideoParams`)
- `ai.check_connection(model)` — verify credentials and model availability
- `ai.Client(base_url=, api_key=)` — explicit client; pass it to a provider call: `ai.openai("gpt-5.4", client=c)`
- `ai.agent(tools=[...])` — agent with tool loop
- `ai.tool` — decorator: schema generation + validation + execution
- `ai.hook(name, payload=, ...)` — suspension point; resolve with `ai.resolve_hook(...)`
- `ai.resolve_hook(name, value)` — resolve a pending hook from outside the loop
- `ai.cancel_hook(name)` — cancel a pending hook
- `ai.yield_from(...)` — forward nested agent / streaming tool output
- `ai.system_message`, `ai.user_message`, `ai.assistant_message`, `ai.tool_message` — message constructors
- `ai.tool_result`, `ai.file_part`, `ai.thinking` — message part constructors
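The `ai.tool` decorator listed above combines schema generation, validation, and execution. As a rough illustration of the schema-generation half, here is a stdlib-only sketch that derives a JSON-Schema-like description from a function's type hints — the `tool` and `json_schema` names here are illustrative, not the library's actual implementation:

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python hints to JSON Schema type names (illustrative).
_PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}


def tool(fn):
    """Attach a JSON-Schema-like description derived from the signature."""
    hints = get_type_hints(fn)
    params = {
        name: {"type": _PY_TO_JSON[hint]}
        for name, hint in hints.items()
        if name != "return"
    }
    fn.json_schema = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }
    return fn


@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny, 72F in {city}"


print(get_weather.json_schema["parameters"]["properties"])
# {'city': {'type': 'string'}}
```

The model sees the generated schema when deciding whether to call the tool; validation then checks the model's arguments against it before execution.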
- `ai.Middleware` — base class; subclass and override wrap methods
- `agent.run(..., middleware=[m])` — run-scoped; the first middleware in the list is outermost

Five wrap surfaces: `wrap_agent_run`, `wrap_model`, `wrap_generate`, `wrap_tool`, `wrap_hook`. Each receives a context dataclass and a `next` callable; the default implementations pass through.
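The "first = outermost" ordering means the first middleware in the list sees the call before and after everyone else. A stdlib-only sketch of that composition pattern over a single surface — the `Logger`, `compose`, and `demo` names are illustrative, not part of the library:

```python
import asyncio


class Middleware:
    """Pass-through default: override a wrap method to intercept it."""

    async def wrap_tool(self, ctx, next_call):
        return await next_call(ctx)


class Logger(Middleware):
    def __init__(self, tag, log):
        self.tag, self.log = tag, log

    async def wrap_tool(self, ctx, next_call):
        self.log.append(f"{self.tag}>")        # before the inner call
        result = await next_call(ctx)
        self.log.append(f"<{self.tag}")        # after the inner call
        return result


def compose(middleware, inner):
    """First middleware in the list becomes the outermost wrapper."""
    call = inner
    for m in reversed(middleware):
        call = (lambda m, nxt: lambda ctx: m.wrap_tool(ctx, nxt))(m, call)
    return call


async def demo():
    log = []

    async def tool(ctx):
        log.append("tool")
        return "ok"

    chain = compose([Logger("a", log), Logger("b", log)], tool)
    await chain({})
    return log


print(asyncio.run(demo()))  # ['a>', 'b>', 'tool', '<b', '<a']
```

Building the chain from the reversed list is what makes the first entry outermost: each earlier middleware wraps everything composed after it.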
- `ai.mcp.get_http_tools(url, ...)` — expose an MCP server as tools
- `ai.ai_sdk_ui` — AI SDK UI streaming adapter
Override the default loop when you need approval gates, routing, or custom orchestration:
```python
@agent.loop
async def custom(context: ai.Context):
    while True:
        s = await ai.stream(
            context.model, context.messages, tools=context.tools
        )
        async for msg in s:
            yield msg
        tool_calls = context.resolve(s.tool_calls)
        if not tool_calls:
            return
        results = [await tc() for tc in tool_calls]
        yield ai.tool_message(*results)
```

Small focused samples live in `examples/samples/`. End-to-end demos:
- `examples/fastapi-vite/` — FastAPI backend + Vite frontend with hook-based tool approval
- `examples/multiagent-textual/` — Textual TUI with parallel agents and interactive hook resolution
- `examples/temporal-direct/` — durable agent with a custom loop (every I/O call is a Temporal activity)
- `examples/temporal-middleware/` — durable agent via middleware (default loop, I/O routed to Temporal activities)
- `examples/samples/middleware_simple.py` — middleware that logs all five execution surfaces