## Problem
You need an OpenAI request to complete even if the upstream API rate limits, times out, or temporarily fails.
## Code
Copy this into a server-side route or job.
```typescript
import { ReqRun } from "@reqrun/sdk";

const reqrun = new ReqRun({
  apiKey: process.env.REQRUN_API_KEY!,
  baseURL: "https://api.reqrun.com",
});

const result = await reqrun.chat.completions.create({
  model: "gpt-5-nano",
  messages: [{ role: "user", content: "Summarize this incident." }],
  // Hold the connection open and return the completion if it finishes in time.
  wait: true,
  // Reusing the same key dedupes retries of the same logical request.
  idempotency_key: "incident-421",
});
```

## Expected output
A normal chat completion if the request finishes within the wait timeout; otherwise, an async response containing an `rr_`-prefixed request id you can use to fetch the result later.
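Since the call can return either shape, the caller needs to branch on the response before reading the completion. The exact response types are not documented here, so the shapes below are assumptions; this is a minimal sketch of distinguishing the async fallback by its `rr_`-prefixed id.

```typescript
// Hypothetical response shapes -- the real ReqRun types may differ.
type ChatCompletion = {
  id: string;
  choices: { message: { content: string } }[];
};
type AsyncAccepted = { id: string; status: string };

// True when the response is the async fallback: an rr_-prefixed
// request id with no choices to read yet.
function isAsyncResponse(
  res: ChatCompletion | AsyncAccepted
): res is AsyncAccepted {
  return res.id.startsWith("rr_") && !("choices" in res);
}
```

A caller would then either read `result.choices[0].message.content` directly or store the `rr_` id and poll for the finished completion.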