Use cases
Reliable request execution for the points where LLM calls become fragile.
Use ReqRun when one model request should survive timeouts, rate limits, client retries, and worker restarts.
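To make the pattern concrete, here is a minimal, generic sketch of durable request execution: the call's status is persisted so a restarted worker can resume a finished request instead of re-running it, and transient failures are retried with backoff. This is an illustration of the technique only; `call_model`, the on-disk JSON record, and all names here are hypothetical stand-ins, not ReqRun's API.

```python
import json
import time
from pathlib import Path

STATE = Path("request_state.json")  # hypothetical durable record of the request

def call_model(prompt: str) -> str:
    """Stand-in for an OpenAI-compatible request; swap in a real client."""
    return f"echo: {prompt}"

def durable_call(prompt: str, max_attempts: int = 3) -> str:
    # If a previous worker already completed this request, resume its result.
    if STATE.exists():
        record = json.loads(STATE.read_text())
        if record.get("status") == "done" and record.get("prompt") == prompt:
            return record["result"]
    for attempt in range(1, max_attempts + 1):
        try:
            result = call_model(prompt)
        except Exception:
            if attempt == max_attempts:
                # Fail visibly: record the terminal state before re-raising.
                STATE.write_text(json.dumps({"prompt": prompt, "status": "failed"}))
                raise
            time.sleep(2 ** attempt)  # backoff for timeouts and rate limits
        else:
            STATE.write_text(
                json.dumps({"prompt": prompt, "status": "done", "result": result})
            )
            return result
```

Because the record survives the process, a client retry or worker restart sees the same completed request rather than issuing a duplicate model call.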
Reliable LLM Calls Inside Agent Tasks
An AI agent request is a model call that may outlive the user action, job runner, or network connection that started it.
Use ReqRun when an agent task depends on an OpenAI request that should not disappear on timeout, rate limit, or worker restart.
Reliable LLM Steps In API Flows
API orchestration coordinates dependent API calls; ReqRun only makes the OpenAI-compatible request step durable and visible.
Use ReqRun as the reliability boundary for the OpenAI call inside an API flow you already own.
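A sketch of what "reliability boundary" means here, under the assumption that the surrounding flow is ordinary code you own: only the model-call step is wrapped with retries, while the other API steps run untouched. `fetch_order`, `reliable_llm_step`, and `summarize_order` are hypothetical names for illustration, not ReqRun's API.

```python
import time

def fetch_order(order_id: int) -> dict:
    """A dependent API step you already own; deliberately left plain."""
    return {"id": order_id, "note": "rush delivery"}

def reliable_llm_step(prompt: str, call, attempts: int = 3, backoff: float = 0.01):
    """Retry only the model call; everything around it stays your code."""
    for attempt in range(attempts):
        try:
            return call(prompt)
        except Exception:
            if attempt == attempts - 1:
                raise  # surface the failure instead of swallowing it
            time.sleep(backoff * 2 ** attempt)

def summarize_order(order_id: int, call) -> dict:
    order = fetch_order(order_id)                              # step you own
    summary = reliable_llm_step(f"Summarize: {order}", call)   # durable boundary
    return {"order": order["id"], "summary": summary}
```

Keeping the boundary this narrow means a rate-limited or timed-out model call is retried without re-executing the rest of the flow.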
Durable LLM Tasks For Automation
Automation code often triggers LLM work; ReqRun makes that model request durable without owning the rest of your automation.
Use ReqRun when an automated process starts an OpenAI request that must finish or fail visibly.