Where ReqRun fits
ReqRun is intentionally narrower than an orchestration platform. It is an LLM request reliability layer for the OpenAI-compatible step inside a flow you own.
That narrow boundary keeps the product simple while still solving a painful production reliability problem: lost, duplicated, or invisible LLM calls.
When to use it
Use ReqRun when your API flow depends on an LLM call that needs retries, idempotency, and request status.
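ReqRun's actual API is not shown in this section, but the pattern it covers — retries, a stable idempotency key, and inspectable request status for one LLM call — can be sketched generically. Everything below is illustrative: the function name, the `send` callable, and the status dictionary are assumptions, not ReqRun's real interface.

```python
import time
import uuid


def call_with_reliability(send, payload, max_retries=3, base_delay=0.01):
    """Retry an OpenAI-compatible request with one stable idempotency key.

    `send` is any callable taking (payload, idempotency_key). The key is
    generated once per logical request and reused on every retry, so the
    server side can deduplicate instead of running the call twice.
    Returns (result, status); `status` exposes the key, attempt count,
    and final state so the request is never invisible.
    """
    key = str(uuid.uuid4())  # one key for the whole logical request
    status = {"key": key, "attempts": 0, "state": "pending"}
    for attempt in range(1, max_retries + 1):
        status["attempts"] = attempt
        try:
            result = send(payload, key)
            status["state"] = "succeeded"
            return result, status
        except Exception:
            if attempt == max_retries:
                status["state"] = "failed"
                raise
            # exponential backoff between attempts
            time.sleep(base_delay * 2 ** (attempt - 1))
```

The key design point is that the idempotency key is fixed before the first attempt, not regenerated per retry; otherwise a timed-out-but-delivered call would be duplicated rather than deduplicated.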