
What this model is for

Use this page after trying Sora 2 in the APIXO playground. It shows the model ID, request body, result format, and the shared async workflow links you need for API integration.

Model ID, endpoint, and auth

  • Model ID: sora-2
  • Base URL: https://api.apixo.ai/api/v1
Method  Endpoint                      Description
POST    /api/v1/generateTask/sora-2   Create generation task
GET     /api/v1/statusTask/sora-2     Query task status

Authentication

All requests require an API Key in the header:
Authorization: Bearer YOUR_API_KEY

Request Body

{
  "request_type": "async",
  "callback_url": "https://...",
  "provider": "official",
  "input": {
    "mode": "text-to-video",
    "prompt": "...",
    "duration": 12,
    "aspect_ratio": "landscape",
    "image_urls": ["..."],
    "remove_watermark": true
  }
}
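In application code, the body above can be assembled with a small helper that enforces the conditional rules from the parameter list (callback_url only with request_type=callback, image_urls only for image-to-video). A minimal Python sketch; the function name is illustrative, not part of an official SDK:

```python
def build_sora2_payload(prompt, mode="text-to-video", duration=12,
                        aspect_ratio="landscape", image_urls=None,
                        request_type="async", callback_url=None,
                        remove_watermark=True):
    """Assemble a /generateTask/sora-2 request body and validate the
    conditional rules described in the parameter docs."""
    if request_type == "callback" and not callback_url:
        raise ValueError("callback_url is required when request_type=callback")
    if mode == "image-to-video" and not image_urls:
        raise ValueError("image_urls is required for image-to-video mode")

    payload = {
        "request_type": request_type,
        "provider": "official",  # only supported routing strategy
        "input": {
            "mode": mode,
            "prompt": prompt,
            "duration": duration,
            "aspect_ratio": aspect_ratio,
            "remove_watermark": remove_watermark,
        },
    }
    if callback_url:
        payload["callback_url"] = callback_url
    if image_urls:
        payload["input"]["image_urls"] = image_urls
    return payload
```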

Parameters

request_type
string
default:"async"
async (polling) or callback (webhook)
callback_url
string
Webhook URL for result delivery. Required when request_type=callback.
provider
string
default:"official"
Routing strategy. Only official is supported; always set provider=official explicitly.
input
object
required
Model input parameters
Mode Options:
  • text-to-video — Generate video from text
  • image-to-video — Generate video from image

Copy-paste quickstart

Text-to-Video
curl -X POST "https://api.apixo.ai/api/v1/generateTask/sora-2" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "request_type": "async",
    "provider": "official",
    "input": {
      "mode": "text-to-video",
      "prompt": "a cinematic flyover of a futuristic city at sunrise, golden hour lighting, slow camera movement revealing towering skyscrapers with holographic advertisements",
      "duration": 12,
      "aspect_ratio": "landscape",
      "remove_watermark": true
    }
  }'
Image-to-Video
curl -X POST "https://api.apixo.ai/api/v1/generateTask/sora-2" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "request_type": "callback",
    "callback_url": "https://your-server.com/callback",
    "provider": "official",
    "input": {
      "mode": "image-to-video",
      "prompt": "extend this scene into a smooth camera pan with dramatic lighting, the character slowly turns to face the camera with emotional expression",
      "duration": 8,
      "aspect_ratio": "portrait",
      "image_urls": ["https://example.com/ref.jpg"],
      "remove_watermark": true
    }
  }'
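The same call can be made from Python. In this sketch the HTTP transport is passed in as a callable that returns the parsed JSON body (in production, a thin wrapper around requests.post(...).json()), so the logic can be exercised offline; submit_task and API_BASE are illustrative names, not an official client:

```python
import json

API_BASE = "https://api.apixo.ai/api/v1"  # base URL from the docs above

def submit_task(payload, api_key, post):
    """POST to generateTask/sora-2 and return the taskId.

    `post` is any callable of the shape post(url, headers=..., data=...)
    that returns the parsed JSON response body as a dict.
    """
    body = post(
        f"{API_BASE}/generateTask/sora-2",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        data=json.dumps(payload),
    )
    if body.get("code") != 200:
        raise RuntimeError(f"Task creation failed: {body.get('message')}")
    return body["data"]["taskId"]
```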

Response and result format

POST /api/v1/generateTask/sora-2

On success, returns a taskId to use in subsequent status queries. Success:
{
  "code": 200,
  "message": "success",
  "data": {
    "taskId": "task_12345678"
  }
}
Error:
{
  "code": 400,
  "message": "Insufficient credits",
  "data": null
}

GET /api/v1/statusTask/sora-2

Query task execution status and results via taskId.
curl -X GET "https://api.apixo.ai/api/v1/statusTask/sora-2?taskId=task_12345678" \
  -H "Authorization: Bearer YOUR_API_KEY"
Success:
{
  "code": 200,
  "message": "success",
  "data": {
    "taskId": "task_12345678",
    "state": "success",
    "resultJson": "{\"resultUrls\":[\"https://r2.apixo.ai/video.mp4\"]}",
    "createTime": 1767965610929,
    "completeTime": 1767965652317,
    "costTime": 41388
  }
}
Failed:
{
  "code": 200,
  "message": "success",
  "data": {
    "taskId": "task_12345678",
    "state": "failed",
    "failCode": "CONTENT_VIOLATION",
    "failMsg": "Content does not meet safety guidelines"
  }
}
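Note that resultJson is itself a JSON-encoded string inside the JSON response, so it takes a second decode to reach the video URLs. For example, in Python:

```python
import json

# Example success response body, as shown above.
response = {
    "code": 200,
    "message": "success",
    "data": {
        "taskId": "task_12345678",
        "state": "success",
        "resultJson": "{\"resultUrls\":[\"https://r2.apixo.ai/video.mp4\"]}",
    },
}

# The first decode produced `response`; resultJson needs its own decode.
result = json.loads(response["data"]["resultJson"])
urls = result["resultUrls"]
# urls == ["https://r2.apixo.ai/video.mp4"]
```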

Polling result fields

taskId
string
Unique task identifier.
state
string
Current task state: pending, processing, success, or failed.
resultJson
string
JSON string containing resultUrls array. Only present on success. Parse with JSON.parse().
failCode
string
Error code. Only present when state is failed. See Error Codes.
failMsg
string
Human-readable error message. Only present when state is failed.
createTime
integer
Task creation timestamp (Unix milliseconds).
completeTime
integer
Task completion timestamp (Unix milliseconds).
costTime
integer
Processing duration in milliseconds.
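As the success example above shows, costTime is simply the difference between the two millisecond timestamps:

```python
# Timing fields from the success example above (Unix milliseconds).
create_time = 1767965610929
complete_time = 1767965652317

cost_time_ms = complete_time - create_time
# cost_time_ms == 41388, matching the costTime field
```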

Polling and webhook result retrieval

Use request_type: "async" with the status endpoint when your app wants to poll for completion. Use request_type: "callback" with callback_url when your production service should receive the final result automatically. See Webhooks for delivery details.
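For the polling path, a minimal loop might look like the following sketch. The status fetcher (returning the data object of the status response) and the sleep function are injected so the loop can be tested offline; the 180-second initial wait and 10-second interval follow the guidance in the Tips section, and poll_until_done is an illustrative name:

```python
import json

def poll_until_done(task_id, fetch_status, sleep,
                    initial_wait=180, interval=10, max_polls=120):
    """Poll statusTask/sora-2 until the task succeeds or fails.

    `fetch_status(task_id)` must return the "data" object of the status
    response; `sleep(seconds)` is time.sleep in production.
    """
    sleep(initial_wait)  # Sora 2 tasks rarely finish sooner
    for _ in range(max_polls):
        data = fetch_status(task_id)
        state = data["state"]
        if state == "success":
            return json.loads(data["resultJson"])["resultUrls"]
        if state == "failed":
            raise RuntimeError(f"{data.get('failCode')}: {data.get('failMsg')}")
        sleep(interval)  # still pending/processing
    raise TimeoutError(f"Task {task_id} did not finish in time")
```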

Common errors

Code  Description
400   Invalid parameters or request error
401   Invalid or missing API Key
429   Rate limit exceeded

Fail Code          Description
CONTENT_VIOLATION  Content violates safety guidelines
INVALID_IMAGE_URL  Cannot access the provided image URL

Rate Limits

Limit             Value
Requests          10000 / minute
Concurrent tasks  1000

Exceeding a limit returns a 429 error. Wait and retry.
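A simple way to honor the wait-and-retry guidance is exponential backoff on 429 responses. In this sketch the request callable and sleep function are injected for offline testing; call_with_retry is an illustrative name:

```python
def call_with_retry(make_request, sleep, max_retries=5, base_delay=2.0):
    """Retry a request while it returns code 429, backing off exponentially.

    `make_request()` returns the parsed JSON response body as a dict;
    `sleep(seconds)` is time.sleep in production.
    """
    delay = base_delay
    for attempt in range(max_retries + 1):
        body = make_request()
        if body.get("code") != 429:
            return body
        if attempt == max_retries:
            break
        sleep(delay)   # wait before retrying, per the rate-limit guidance
        delay *= 2     # exponential backoff
    raise RuntimeError("Rate limit: retries exhausted")
```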

Tips

  • Generation time:
    • 4-second video: ~2-4 minutes
    • 8-second video: ~3-5 minutes
    • 12-second video: ~4-6 minutes
    • Sora 2 generations are long-running; prefer callback mode
    • If polling, submit the task, wait 180 seconds, then poll every 10 seconds
  • Cinematic quality: Sora 2 is known for ultra-high output quality and cinematic narrative capability; it is one of the most advanced video generation models available.
  • Callback mode: Because generation takes several minutes, callback mode is strongly recommended.
  • Video expiration: Result URLs are valid for 15 days. Download promptly.
  • Content moderation: Prompts must comply with content safety guidelines.
  • Aspect ratio selection:
    • landscape: Landscape (default), for traditional video platforms, films, desktop viewing
    • portrait: Portrait, for short video platforms, mobile viewing, social media
  • Watermark handling:
    • remove_watermark: true: Auto-remove watermark (default, highly recommended)
    • remove_watermark: false: Keep original watermark
  • Long prompts: Sora 2 accepts prompts of up to 10000 characters and handles complex narratives and detailed descriptions exceptionally well.
  • Prompt best practices:
    • Camera description: Use film terminology (e.g., “slow dolly in”, “orbiting shot”, “aerial view”, “handheld”)
    • Lighting effects: Describe lighting and atmosphere (e.g., “golden hour”, “dramatic lighting”, “soft diffused light”, “backlit silhouette”)
    • Emotional tone: Convey mood (e.g., “warm”, “tense”, “epic”, “dreamlike”)
    • Action details: Describe character/object movements (e.g., “slowly turns”, “flowing hair”, “swaying leaves”)
    • Scene transitions: Describe how shots unfold (e.g., “zoom from wide to close”, “descend through clouds”)
  • Image-to-video:
    • Extends a single image into a complete video
    • Reference image becomes start frame or key frame
    • Describe desired animation effects, camera movement, and narrative development in prompt
    • Ideal for character animation, scene expansion, bringing still images to life
  • Duration selection:
    • 4 seconds: Fast preview and quick iteration
    • 8 seconds: Balanced cost and narrative length
    • 12 seconds: Richer scene progression and storytelling
  • Image formats: image_urls supports JPG, PNG, WebP, max 10MB per image.
  • Professional use:
    • Ideal for movie trailers, commercials, music videos, proof of concept
    • Supports complex narratives and character interactions
    • Can generate coherent multi-shot scenes
  • Performance tips:
    • Avoid submitting many tasks during peak hours
    • Clear, structured prompts improve success rates
    • Test with a 4-second video first, then scale to 8 or 12 seconds once satisfied

Video generation takes longer than images — use callback mode for production workloads. Result URLs expire after 15 days; download important outputs promptly.