POST /v1/chat/completions
Send the same non-streaming chat completion body you would send to OpenAI, authenticated with a ReqRun project API key.
- Supported fields: model, messages, temperature, max_tokens, top_p, presence_penalty, frequency_penalty, plus the ReqRun-specific wait and idempotency_key.
- wait=false returns immediately after the request is queued.
- wait=true returns the completed response if it finishes within the wait timeout, otherwise it falls back to the async shape.
- stream=true is rejected in v1.
- Authentication uses Bearer reqrun_live_ or reqrun_test_ API keys.
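A minimal client-side sketch of assembling this request, using only the Python standard library. The base URL and API key below are placeholders, not real ReqRun values; substitute your own deployment's endpoint and a reqrun_live_ or reqrun_test_ key.

```python
import json
import urllib.request

# Hypothetical values for illustration only.
REQRUN_BASE_URL = "https://api.reqrun.example"
API_KEY = "reqrun_test_example_key"

def build_chat_request(model, messages, wait=True, idempotency_key=None, **params):
    """Assemble a POST /v1/chat/completions request.

    Extra keyword args pass through supported sampling fields
    (temperature, max_tokens, top_p, presence_penalty, frequency_penalty).
    stream is stripped defensively, since stream=true is rejected in v1.
    """
    body = {"model": model, "messages": messages, "wait": wait, **params}
    if idempotency_key is not None:
        body["idempotency_key"] = idempotency_key
    body.pop("stream", None)
    return urllib.request.Request(
        f"{REQRUN_BASE_URL}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "gpt-5-nano",
    [{"role": "user", "content": "Hello"}],
    wait=True,
    idempotency_key="hello-001",
)
```

Sending `req` with `urllib.request.urlopen` (or any HTTP client) yields either a completed chat completion or the async shape described below, depending on wait and the wait timeout.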
POST /v1/chat/completions
Authorization: Bearer reqrun_live_your_project_key
Content-Type: application/json
{
  "model": "gpt-5-nano",
  "messages": [{ "role": "user", "content": "Hello" }],
  "wait": true,
  "idempotency_key": "hello-001"
}

Async response
When a request is accepted but not completed yet, ReqRun returns a stable async shape. The id is a ReqRun request id, not an OpenAI chat completion id.
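Because wait=true can still return this async shape when the wait timeout elapses, clients must branch on the response body rather than assume a finished completion. A minimal check keys off the object field; the completed-response id used in the example is a made-up value for illustration.

```python
def is_async_response(resp: dict) -> bool:
    """True when ReqRun returned the async shape (object ==
    "chat.completion.async", carrying a ReqRun "rr_..." request id)
    rather than a finished chat completion."""
    return resp.get("object") == "chat.completion.async"

queued = {
    "id": "rr_abc123",
    "object": "chat.completion.async",
    "status": "queued",
    "created": 1710000000,
}
# Hypothetical completed response for contrast.
done = {"id": "chatcmpl_123", "object": "chat.completion"}
```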
{
  "id": "rr_abc123",
  "object": "chat.completion.async",
  "status": "queued",
  "created": 1710000000
}

GET /v1/requests/{id}
Returns id, status, attempts, created_at, updated_at, last_error_code, and result; result is present only once status is completed.
{
  "id": "rr_abc123",
  "status": "completed",
  "attempts": 2,
  "created_at": "2026-04-20T10:00:00Z",
  "updated_at": "2026-04-20T10:00:07Z",
  "last_error_code": null,
  "result": {
    "id": "chatcmpl_...",
    "object": "chat.completion"
  }
}

Error shape
Errors use one small response model across validation, authentication, unknown request ids, and internal failures.
{
  "error": {
    "message": "Invalid API key",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
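Tying the endpoints together: when the async shape comes back, a client polls GET /v1/requests/{id} until the status settles, surfacing the error envelope when one appears. The sketch below takes an injectable fetch callable so it runs without a live server; the "failed" status name and the idea that queued/completed/failed form the full status set are assumptions, not documented values.

```python
import time

class ReqRunError(Exception):
    """Raised from the single error envelope {"error": {...}}."""
    def __init__(self, err: dict):
        super().__init__(err.get("message"))
        self.type = err.get("type")
        self.code = err.get("code")

def poll_request(request_id, fetch, interval=1.0, max_polls=30):
    """Poll GET /v1/requests/{id} until the request finishes.

    `fetch` is any callable mapping a path to a parsed JSON dict,
    so a real HTTP client (or a stub in tests) can be injected.
    """
    for _ in range(max_polls):
        resp = fetch(f"/v1/requests/{request_id}")
        if "error" in resp:  # error shape: unknown id, auth failure, etc.
            raise ReqRunError(resp["error"])
        if resp.get("status") == "completed":
            return resp["result"]
        if resp.get("status") == "failed":  # assumed terminal status
            raise ReqRunError({"message": resp.get("last_error_code")})
        time.sleep(interval)
    raise TimeoutError(f"request {request_id} did not complete")
```

With a stubbed fetch that replays the queued-then-completed sequence from the GET example above, poll_request returns the embedded chat completion object.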