Reliable Webhook-Triggered LLM Work

Reliable webhook-triggered LLM work means accepting repeated webhook delivery without duplicating the model request.

Webhook senders retry delivery. If a webhook triggers LLM work, your handler needs idempotency and a durable model request record.

The sharp edge

Webhook senders often retry delivery. That is a feature, not a bug. It helps the sender recover when your endpoint is slow, unavailable, or returns an error.

But if your handler calls an LLM directly, repeated delivery can create repeated model work for the same event.
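To see what deduplication buys you, here is a minimal in-memory sketch of the idea. It is illustrative only: a process-local `Map` is lost on restart and not shared across instances, which is exactly the gap a durable request record fills.

```typescript
// Naive in-memory dedup: map a webhook event id to the work already
// started for it. A retried delivery reuses the same promise instead of
// triggering a second model request.
const seen = new Map<string, Promise<string>>();

async function handleEvent(
  eventId: string,
  doLLMWork: () => Promise<string>,
): Promise<string> {
  const existing = seen.get(eventId);
  if (existing) return existing; // retry: reuse in-flight or finished work
  const work = doLLMWork();
  seen.set(eventId, work);
  return work;
}
```

This collapses duplicate deliveries within one process, but it cannot survive a deploy or a second replica, which is why the key needs to live somewhere durable.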

ReqRun does not receive the webhook

Your application still receives, verifies, and authorizes the webhook. ReqRun v1 does not receive callbacks, verify signatures, or route webhook events.

ReqRun fits inside the handler at the point where the event triggers an OpenAI-compatible LLM request.

Use the event id as the idempotency key

Most webhook providers send a stable event id. That event id is usually the right idempotency key for LLM work triggered by the event.

If the sender retries the webhook, ReqRun sees the same project plus idempotency key and returns the same durable request instead of creating duplicate work.

TypeScript
export async function POST(request: Request) {
  // Verify the signature and parse the payload before doing any work.
  const event = await verifyAndParseWebhook(request);

  const response = await reqrun.chat.completions.create({
    model: "gpt-5-nano",
    messages: [{ role: "user", content: "Summarize the event." }],
    wait: false, // accept the request without blocking on the model
    idempotency_key: event.id, // retries of this event map to the same request
  });

  return Response.json({ request_id: response.id });
}

Return quickly, inspect later

The webhook handler can acknowledge the event as soon as ReqRun accepts the request. Your app can store the rr_-prefixed request id and inspect the result later.

This separates webhook receipt from LLM request execution. The provider gets a timely response, and the model work still has a durable lifecycle.
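A minimal sketch of the bookkeeping side, under stated assumptions: the `RequestRecord` type, its status values, and the event-to-request mapping here are illustrative stand-ins for whatever store you use, not ReqRun's API.

```typescript
// Illustrative record shape for a stored LLM request; the status values
// are assumptions, not ReqRun's documented lifecycle.
type RequestRecord = {
  id: string; // the rr_-prefixed request id
  status: "queued" | "running" | "succeeded" | "failed";
};

// Map webhook event id -> rr_ request id at webhook time, so a later
// job can look the request up and inspect its result.
const requestsByEvent = new Map<string, string>();

function recordRequest(eventId: string, requestId: string): void {
  requestsByEvent.set(eventId, requestId);
}

// A later inspection pass only needs to know when a record is done.
function isTerminal(record: RequestRecord): boolean {
  return record.status === "succeeded" || record.status === "failed";
}
```

The point of the split is that the webhook path only writes the mapping; everything slow or retryable happens on the inspection side.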

Common mistakes

Do not use a random idempotency key per webhook delivery. That defeats deduplication.

Do not put raw webhook payloads into normal logs just to debug LLM work. Prefer metadata: event id, request id, status, attempts, and safe error codes.
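One way to enforce the metadata-only rule is to build log entries through a helper that simply has no field for the payload. The field names below are an example shape, not a required schema.

```typescript
// A log entry that can only carry safe metadata: there is no slot for
// the raw webhook payload, so it cannot leak into normal logs.
type LlmWorkLog = {
  event_id: string;
  request_id: string;
  status: string;
  attempts: number;
  error_code?: string;
};

function safeLogEntry(
  eventId: string,
  requestId: string,
  status: string,
  attempts: number,
  errorCode?: string,
): LlmWorkLog {
  return {
    event_id: eventId,
    request_id: requestId,
    status,
    attempts,
    ...(errorCode !== undefined ? { error_code: errorCode } : {}),
  };
}
```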