
AI Channel Playbooks (Public v1)

Status: Active
Version: v1.0.0
Last updated: 2026-03-04

This document provides reusable, public-safe playbooks for operating RGL8R through assistant channels.

Companion docs:

  • Public API contract: docs/api/public-api-contract-v1.md
  • OpenAPI contract: docs/api/openapi/rgl8r-public-api-v1.2.0.yaml
  • Agent integration kit: docs/api/agent-integration-kit-v1.md
  • Validation summary: docs/api/playbooks/ai-channel-validation-summary-v1.md

Common prerequisites

Set these environment variables before running any channel flow:

export BASE_URL="<YOUR_RGL8R_API_BASE_URL>"
export RGL8R_INTEGRATION_KEY="sk_int_..."

Examples:

  • Staging: https://rgl8r-staging-api.onrender.com
  • Production: https://rgl8r-production-api.onrender.com

Canonical flow:

  1. Exchange integration key (POST /api/auth/token/integration)
  2. Enqueue work (POST /api/sima/batch)
  3. Poll job (GET /api/jobs/{id})
  4. Fetch results (GET /api/sima/results?limit=20)
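The four steps above can be sketched as a shell script. This is a minimal sketch, not a definitive client: the request payload shapes (`integrationKey`, `items`) are assumptions — check the OpenAPI contract for the real schemas. By default it runs in dry-run mode and only prints the request sequence.

```shell
#!/usr/bin/env bash
# Sketch of the canonical flow. DRY_RUN=1 (the default here) prints the
# request sequence; set DRY_RUN=0 with real credentials to send requests.
set -euo pipefail

BASE_URL="${BASE_URL:-https://rgl8r-staging-api.onrender.com}"
DRY_RUN="${DRY_RUN:-1}"

call() {  # usage: call METHOD PATH [JSON_BODY]
  local method="$1" path="$2" body="${3:-}"
  if [ "$DRY_RUN" = "1" ]; then
    printf '%s %s\n' "$method" "$path"
    return
  fi
  curl -fsS -X "$method" "$BASE_URL$path" \
    -H "Authorization: Bearer ${TOKEN:-${RGL8R_INTEGRATION_KEY:-}}" \
    -H "Content-Type: application/json" \
    ${body:+--data "$body"}
}

run_flow() {
  # 1. Exchange the integration key for a bearer token.
  #    (The "integrationKey" field name is an assumption.)
  call POST /api/auth/token/integration \
    '{"integrationKey":"'"${RGL8R_INTEGRATION_KEY:-}"'"}'
  # 2. Enqueue work. ("items" is a placeholder; see the OpenAPI schema.)
  call POST /api/sima/batch '{"items":[]}'
  # 3. Poll the job until it reaches a terminal status.
  call GET "/api/jobs/{id}"
  # 4. Fetch results.
  call GET "/api/sima/results?limit=20"
}

run_flow
```

In dry-run mode the script prints one `METHOD PATH` line per step, which is useful for reviewing the sequence before supplying credentials.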

Expected terminal response shape

{
  "token": "issued",
  "jobId": "<uuid>",
  "finalStatus": "COMPLETED",
  "resultsCount": 20,
  "error": null
}
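A quick way to check a run's summary against this shape — a sketch that shells out to python3 for JSON parsing; the five field names are exactly the documented ones:

```shell
# Validate a flow summary (read from stdin) against the expected
# terminal shape: all five keys present, COMPLETED, and no error.
check_summary() {
  python3 -c '
import json, sys

s = json.load(sys.stdin)
expected = {"token", "jobId", "finalStatus", "resultsCount", "error"}
assert set(s) == expected, f"key mismatch: {set(s) ^ expected}"
assert s["finalStatus"] == "COMPLETED", s["finalStatus"]
assert s["error"] is None, s["error"]
print("summary OK")
'
}
```

Pipe the assistant's final JSON output into `check_summary`; it exits non-zero on any mismatch.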

Claude playbook

Prompt template

You are integrating with the RGL8R public API. Run this sequence exactly: token exchange -> enqueue SIMA batch -> poll job -> fetch SIMA results. Use only documented endpoints and return JSON only: {token, jobId, finalStatus, resultsCount, error}. If an error occurs, return the canonical envelope fields {code, message, details}.

ChatGPT playbook

Prompt template

Execute the RGL8R canonical API flow using the public OpenAPI contract:
1) POST /api/auth/token/integration
2) POST /api/sima/batch
3) GET /api/jobs/:id until terminal
4) GET /api/sima/results?limit=20
Output JSON only. Do not invent endpoints.

GitHub Copilot Chat playbook

Prompt template

Generate and run a minimal script that executes: token exchange -> enqueue -> poll -> fetch using RGL8R public API endpoints only. Return final JSON summary and canonical error envelope on failure.

Gemini playbook

Prompt template

Act as an API integration operator for RGL8R. Execute token -> enqueue -> poll -> fetch using the public OpenAPI contract. Return only JSON output with terminal status and result count. Surface canonical error envelope fields on failure.
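All four templates above ask the assistant to surface the canonical error envelope on failure. A sketch for checking that a failure response carries those fields — the names `{code, message, details}` come from the templates; value types are assumptions:

```shell
# Check that a failure response (read from stdin) carries the canonical
# error envelope fields {code, message, details}; prints the code.
check_error_envelope() {
  python3 -c '
import json, sys

e = json.load(sys.stdin)
missing = {"code", "message", "details"} - set(e)
assert not missing, f"missing envelope fields: {missing}"
print(e["code"])
'
}
```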

Troubleshooting

| Symptom | Likely cause | Action |
| --- | --- | --- |
| 401 INVALID_API_KEY | bad integration key | rotate/reissue key and retry token exchange |
| 401 INVALID_TOKEN | expired or invalid bearer | rerun token exchange |
| 403 SCOPE_DENIED | key missing required scope | reissue key with required scopes |
| 400 INVALID_REQUEST | malformed payload | validate payload against OpenAPI |
| poll timeout | long-running async work | continue polling with backoff and jitter |
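The poll-timeout row suggests backoff with jitter. A minimal sketch of that loop, decoupled from any HTTP client: `poll_until_terminal` takes any command that prints the current job status (status names other than COMPLETED are assumptions; in practice STATUS_CMD would be a curl to GET /api/jobs/{id} plus extraction of the status field).

```shell
# Exponential backoff with full jitter: prints a delay in seconds
# for the given attempt number, capped at 30s.
backoff_delay() {
  local attempt="$1" cap=30
  local exp=$(( 2 ** attempt ))
  if [ "$exp" -gt "$cap" ]; then exp="$cap"; fi
  echo $(( RANDOM % (exp + 1) ))   # full jitter: uniform in [0, exp]
}

# Poll STATUS_CMD until it prints a terminal status, sleeping with
# backoff+jitter between attempts. Prints the final status observed.
poll_until_terminal() {  # usage: poll_until_terminal STATUS_CMD [MAX_ATTEMPTS]
  local cmd="$1" max="${2:-10}" attempt=0 status=""
  while [ "$attempt" -lt "$max" ]; do
    status="$($cmd)"
    case "$status" in
      COMPLETED|FAILED) echo "$status"; return 0 ;;  # terminal statuses
    esac
    sleep "$(backoff_delay "$attempt")"
    attempt=$(( attempt + 1 ))
  done
  echo "TIMEOUT"
  return 1
}
```

Capping the delay keeps long-running jobs from pushing the wait into minutes, while the jitter spreads retries from concurrent clients apart.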

Evidence policy

Channel validation requires native runs for:

  • Claude
  • ChatGPT
  • GitHub Copilot Chat
  • Gemini

See docs/api/playbooks/ai-channel-validation-summary-v1.md for the public redacted status matrix.