Documentation Index

Fetch the complete documentation index at: https://docs.oneperfectslice.ai/llms.txt

Use this file to discover all available pages before exploring further.

The OnePerfectSlice API enforces concurrency limits on slice runs and has server-side timeouts on long-running queries. Per-request rate limits are not currently enforced but may be introduced in the future.

Concurrency limits

Each team can run up to 5 slice runs at the same time. Starting a sixth run while five are active fails with HTTP 429:
{
  "error": {
    "code": "CONCURRENT_RUN_LIMIT",
    "message": "Maximum of 5 concurrent slice runs per team. Wait for existing runs to complete."
  }
}
To avoid hitting this limit, poll your active runs with GET /slice-runs?status=running before starting new ones.
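The check above can be sketched as a small guard that inspects the active runs before starting a new one. This is a hypothetical helper, not part of the API; the run objects are assumed to carry a "status" field as returned by GET /slice-runs?status=running.

```python
def can_start_run(active_runs, limit=5):
    """Return True if starting another slice run would stay under the
    per-team concurrency limit (5 by default)."""
    # Count only runs still in the "running" state; completed runs
    # no longer count against the limit.
    running = [r for r in active_runs if r.get("status") == "running"]
    return len(running) < limit
```

Calling this before each POST that starts a run avoids the round trip that would otherwise end in a CONCURRENT_RUN_LIMIT error.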

Per-request rate limits

The API does not currently enforce per-request rate limits. When they are introduced, this page will be updated with:
  • Per-token request limits and window duration
  • Rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset)
  • Recommended backoff strategies
Your client should be prepared to handle 429 Too Many Requests responses with a retry-after delay.
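A minimal sketch of that handling, assuming the future 429 responses carry a standard Retry-After header with a delay in seconds (the helper name and the fallback delay are illustrative, not part of the API):

```python
def retry_delay(response_headers, default=1.0):
    """Pick how long to wait before retrying a 429 response.

    Honors a Retry-After header (seconds) when present and numeric;
    otherwise falls back to a default delay.
    """
    value = response_headers.get("Retry-After")
    if value is None:
        return default
    try:
        return float(value)
    except ValueError:
        # Retry-After may also be an HTTP date; treat anything
        # non-numeric as "use the default" in this sketch.
        return default
```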

Timeouts

Some endpoints involve database queries or downstream processing that can time out under load. When this happens, the API returns 503 with a specific error code:
Error code              Endpoint          What happened
PREVIEW_COUNT_TIMEOUT   Preview count     Filter count query took too long
RUN_LIST_TIMEOUT        List runs         Run history query took too long
RUN_LOOKUP_TIMEOUT      Get run           Run detail lookup took too long
EVIDENCE_TIMEOUT        Get evidence      Evidence retrieval took too long
POST_SEARCH_TIMEOUT     Search posts      Post search took too long
FILTER_LOOKUP_TIMEOUT   Filter endpoints  Filter lookup took too long
How to handle timeouts: Retry with exponential backoff — wait 1s, then 2s, then 4s. If the timeout persists, try narrowing your filters or reducing the date range.
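The backoff schedule above can be sketched as a retry wrapper. This is a hypothetical helper: `call` stands in for any function that performs one of the requests above and returns the decoded JSON body, and the error-body shape matches the example shown earlier on this page.

```python
import time

# Timeout error codes from the table above.
TIMEOUT_CODES = {
    "PREVIEW_COUNT_TIMEOUT", "RUN_LIST_TIMEOUT", "RUN_LOOKUP_TIMEOUT",
    "EVIDENCE_TIMEOUT", "POST_SEARCH_TIMEOUT", "FILTER_LOOKUP_TIMEOUT",
}

def with_backoff(call, attempts=3, base_delay=1.0):
    """Invoke `call()` and retry on a timeout error code,
    waiting 1s, then 2s, then 4s between attempts."""
    for attempt in range(attempts):
        result = call()
        code = result.get("error", {}).get("code")
        if code not in TIMEOUT_CODES:
            return result
        if attempt < attempts - 1:
            time.sleep(base_delay * 2 ** attempt)
    # All attempts timed out; surface the last error so the caller
    # can narrow filters or reduce the date range.
    return result
```

If the final attempt still times out, the last error body is returned unchanged so the caller can react (for example, by shrinking the queried date range).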