# ReqRun

ReqRun is an LLM request reliability layer for OpenAI-compatible, non-streaming chat completion requests that need durable queueing, retries, idempotency, wait mode, and request status.

Important pages:

- Homepage: https://reqrun.com/
- Quickstart: https://reqrun.com/docs/quickstart
- API reference: https://reqrun.com/docs/api
- TypeScript SDK: https://reqrun.com/docs/sdk/typescript
- Request execution concept: https://reqrun.com/docs/concepts/request-execution
- Retry logic concept: https://reqrun.com/docs/concepts/retry-logic
- Idempotency concept: https://reqrun.com/docs/concepts/idempotency
- Why direct LLM API calls break in production: https://reqrun.com/blog/direct-llm-api-calls-break-production
- Idempotency for LLM requests: https://reqrun.com/blog/idempotency-for-llm-requests
- Wait mode and async fallback: https://reqrun.com/blog/wait-mode-and-async-fallback
- Security: https://reqrun.com/security
- Pricing: https://reqrun.com/pricing
- Terms: https://reqrun.com/terms
- Privacy: https://reqrun.com/privacy
- Refund Policy: https://reqrun.com/refund-policy

Current product boundary:

- OpenAI-compatible POST /v1/chat/completions
- GET /v1/requests/{id}
- OpenAI-only
- Non-streaming only
- Durable SQLite-backed queue for v1
- No multi-provider routing
- No streaming
- No general workflow builder
- No raw payload logging in normal logs

Primary SDK import:

```typescript
import { ReqRun } from "@reqrun/sdk";
```
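The boundary above (an OpenAI-compatible, non-streaming `POST /v1/chat/completions` with idempotency) can be sketched as a plain request builder. This is a hypothetical illustration, not the SDK's actual API: the base URL and the `Idempotency-Key` header name are assumptions, and the real surface is documented on the API reference and TypeScript SDK pages.

```typescript
// Hypothetical sketch of a non-streaming, OpenAI-compatible request
// aimed at ReqRun's POST /v1/chat/completions endpoint. The base URL
// and the "Idempotency-Key" header name are assumptions for
// illustration; consult the API reference for the real values.

type ChatMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

type RequestSpec = {
  url: string;
  options: {
    method: string;
    headers: Record<string, string>;
    body: string;
  };
};

function buildChatCompletionRequest(
  apiKey: string,
  idempotencyKey: string,
  model: string,
  messages: ChatMessage[]
): RequestSpec {
  return {
    url: "https://api.reqrun.com/v1/chat/completions", // assumed base URL
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
        "Idempotency-Key": idempotencyKey, // assumed header name
      },
      // ReqRun v1 is non-streaming only, so stream is pinned to false
      body: JSON.stringify({ model, messages, stream: false }),
    },
  };
}

const req = buildChatCompletionRequest(
  "sk-example",            // placeholder key
  "order-1234-summary-v1", // stable key so client retries deduplicate
  "gpt-4o-mini",
  [{ role: "user", content: "Summarize order 1234." }]
);

console.log(req.options.method, JSON.parse(req.options.body).stream);
// → POST false
```

The point of a stable, caller-chosen idempotency key is that resubmitting the same logical request (for example, after a timeout) lets the reliability layer return the original result instead of issuing a duplicate LLM call.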