APIXO Docs

Gemini 3 Pro

Google's flagship multimodal reasoning model - 100% API compatible, just change baseURL

100% Google Gemini API Compatible

APIXO implements the standard Google Gemini API. Simply change the baseURL to https://api.apixo.ai and use your APIXO authkey. Works seamlessly with any Gemini-compatible SDK or tool - no code changes needed.

Gemini 3 Pro is Google's flagship multimodal reasoning model, supporting text, image, video, and audio inputs with advanced capabilities including long-context conversations, function calling, and structured outputs.

Quick Start

Direct API Call

curl -X POST https://api.apixo.ai/api/v1/google/models/gemini-3-pro:generateContent \
  -H "Authorization: Bearer YOUR_APIXO_AUTHKEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "role": "user",
        "parts": [{"text": "Explain quantum computing in simple terms."}]
      }
    ],
    "generationConfig": {
      "maxOutputTokens": 1024,
      "temperature": 0.7
    }
  }'

JavaScript/TypeScript

const response = await fetch(
  "https://api.apixo.ai/api/v1/google/models/gemini-3-pro:generateContent",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_APIXO_AUTHKEY",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      contents: [
        {
          role: "user",
          parts: [{ text: "What is artificial intelligence?" }]
        }
      ],
      generationConfig: {
        temperature: 0.7,
        maxOutputTokens: 1024
      }
    })
  }
);
 
const result = await response.json();
console.log(result.candidates[0].content.parts[0].text);

Endpoints

APIXO uses the standard Google Gemini API endpoint format:

  • Non-streaming: POST /api/v1/google/models/{model}:generateContent
  • Streaming: POST /api/v1/google/models/{model}:streamGenerateContent

Where {model} is the model name (e.g., gemini-3-pro or gemini-3-flash).
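The endpoint layout above can be expressed as a small helper. The base URL and path format come from this page; the endpoint function itself is just an illustrative sketch.

```python
# Build APIXO endpoint URLs from the documented path format:
#   /api/v1/google/models/{model}:generateContent        (non-streaming)
#   /api/v1/google/models/{model}:streamGenerateContent  (streaming)

BASE_URL = "https://api.apixo.ai"

def endpoint(model: str, streaming: bool = False) -> str:
    """Return the full request URL for a given model."""
    action = "streamGenerateContent" if streaming else "generateContent"
    return f"{BASE_URL}/api/v1/google/models/{model}:{action}"

print(endpoint("gemini-3-pro"))
print(endpoint("gemini-3-flash", streaming=True))
```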


Third-Party SDK Integration

APIXO is fully compatible with Google's official SDKs. Just configure the base URL and API key:

Python (Google SDK)

import google.generativeai as genai
 
genai.configure(
    api_key="YOUR_APIXO_AUTHKEY",
    transport="rest",
    client_options={
        "api_endpoint": "https://api.apixo.ai"
    }
)
 
model = genai.GenerativeModel('gemini-3-pro')
response = model.generate_content("What is quantum computing?")
print(response.text)

Node.js (Google SDK)

const { GoogleGenerativeAI } = require("@google/generative-ai");
 
const genAI = new GoogleGenerativeAI("YOUR_APIXO_AUTHKEY");
 
// Pass the APIXO base URL via the SDK's per-model request options
const model = genAI.getGenerativeModel(
  { model: "gemini-3-pro" },
  { baseUrl: "https://api.apixo.ai" }
);
const result = await model.generateContent("Explain neural networks");
console.log(result.response.text());

LangChain

from langchain_google_genai import ChatGoogleGenerativeAI
 
llm = ChatGoogleGenerativeAI(
    model="gemini-3-pro",
    google_api_key="YOUR_APIXO_AUTHKEY",
    base_url="https://api.apixo.ai"
)
 
response = llm.invoke("What are transformers in AI?")

Request Format

Basic Request

{
  "contents": [
    {
      "role": "user",
      "parts": [
        {"text": "Your message here"}
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1024
  }
}

With System Instruction

{
  "systemInstruction": {
    "parts": [
      {"text": "You are a helpful assistant specialized in..."}
    ]
  },
  "contents": [
    {
      "role": "user",
      "parts": [{"text": "Your message"}]
    }
  ]
}

Multimodal Input (Image Analysis)

{
  "contents": [
    {
      "role": "user",
      "parts": [
        {"text": "Analyze this image and describe what you see."},
        {
          "fileData": {
            "mimeType": "image/jpeg",
            "fileUri": "https://example.com/image.jpg"
          }
        }
      ]
    }
  ]
}

Parameters

Parameter           Type     Required   Description
contents            array    Yes        Array of conversation messages with role and parts
systemInstruction   object   No         System prompt to guide model behavior
generationConfig    object   No         Generation settings

Content Structure

Each message in contents has:

  • role: "user", "model", or "tool"
  • parts: Array of content parts (text, images, videos, etc.)

Response Format

Non-streaming Response

{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {"text": "Generated response text..."}
        ]
      },
      "finishReason": "STOP",
      "safetyRatings": [...]
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 10,
    "candidatesTokenCount": 150,
    "totalTokenCount": 160
  }
}
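Extracting the reply text and token counts from this shape is a matter of walking the nested fields. The sample dict mirrors the response documented above; extract_text is an illustrative helper.

```python
# Walk the non-streaming response structure shown above.

sample = {
    "candidates": [{
        "content": {
            "role": "model",
            "parts": [{"text": "Generated response text..."}],
        },
        "finishReason": "STOP",
    }],
    "usageMetadata": {
        "promptTokenCount": 10,
        "candidatesTokenCount": 150,
        "totalTokenCount": 160,
    },
}

def extract_text(response: dict) -> str:
    """Concatenate the text parts of the first candidate."""
    parts = response["candidates"][0]["content"]["parts"]
    return "".join(p.get("text", "") for p in parts)

print(extract_text(sample))                          # -> Generated response text...
print(sample["usageMetadata"]["totalTokenCount"])    # -> 160
```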

Streaming Response

Streaming returns Server-Sent Events (SSE) with incremental chunks. Each chunk has the same structure, with the last chunk containing finishReason: "STOP".


Multi-turn Conversations

Include previous messages in the contents array to maintain conversation context:

{
  "contents": [
    {
      "role": "user",
      "parts": [{"text": "What is machine learning?"}]
    },
    {
      "role": "model",
      "parts": [{"text": "Machine learning is a subset of AI that..."}]
    },
    {
      "role": "user",
      "parts": [{"text": "Can you give me an example?"}]
    }
  ]
}
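Client-side, the history above is typically kept as a plain list and grown turn by turn. append_turn is an illustrative helper; the {role, parts} shape is the one documented in this section.

```python
# Maintain multi-turn context by appending each user turn and model
# reply to a history list, then sending the whole list as contents.

history: list[dict] = []

def append_turn(role: str, text: str) -> None:
    history.append({"role": role, "parts": [{"text": text}]})

append_turn("user", "What is machine learning?")
append_turn("model", "Machine learning is a subset of AI that...")
append_turn("user", "Can you give me an example?")

request_body = {"contents": history}
```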

thoughtSignature Handling

If a response contains a thoughtSignature field, you must include it verbatim in subsequent requests to maintain reasoning context.
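A minimal sketch of carrying the signature forward, assuming thoughtSignature appears on a part of the model's reply (the exact placement may vary; consult Google's documentation). The safest approach is to echo the model's content object back unmodified.

```python
# Hedged sketch: copy the model's previous reply verbatim into the next
# request's contents so any thoughtSignature fields survive the round-trip.

def next_request(history: list[dict], model_reply: dict, user_text: str) -> dict:
    # model_reply is the "content" object from the previous response,
    # included as-is rather than rebuilt from its text.
    return {
        "contents": history + [
            model_reply,
            {"role": "user", "parts": [{"text": user_text}]},
        ]
    }

reply = {
    "role": "model",
    "parts": [{"text": "Step one...", "thoughtSignature": "opaque-token"}],
}
body = next_request(
    [{"role": "user", "parts": [{"text": "Plan a trip"}]}],
    reply,
    "Continue",
)
```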


Streaming

To enable streaming, use the streaming endpoint:

const response = await fetch(
  "https://api.apixo.ai/api/v1/google/models/gemini-3-pro:streamGenerateContent",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_APIXO_AUTHKEY",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      contents: [
        {
          role: "user",
          parts: [{ text: "Write a short story about AI." }]
        }
      ]
    })
  }
);
 
// Process SSE stream
const reader = response.body.getReader();
const decoder = new TextDecoder();
 
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  
  const chunk = decoder.decode(value);
  console.log(chunk);
}
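For reference, parsing the stream itself looks like this, assuming the standard SSE framing (each event is a "data: " line carrying one JSON chunk in the response schema shown earlier). The sample payloads are illustrative, not captured output.

```python
# Offline sketch: split an SSE body into JSON chunks and reassemble
# the streamed text from each chunk's candidate parts.
import json

def parse_sse(raw: str) -> list[dict]:
    chunks = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            chunks.append(json.loads(line[len("data: "):]))
    return chunks

raw = (
    'data: {"candidates":[{"content":{"parts":[{"text":"Hello"}]}}]}\n'
    "\n"
    'data: {"candidates":[{"content":{"parts":[{"text":" world"}]},"finishReason":"STOP"}]}\n'
)
text = "".join(
    part["text"]
    for chunk in parse_sse(raw)
    for part in chunk["candidates"][0]["content"]["parts"]
)
print(text)  # -> Hello world
```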

Advanced Features

APIXO supports all Google Gemini API advanced features:

  • Function Calling (Tools): Enable models to call external functions
  • Structured Output: Define JSON Schema for return format
  • Thinking Mode: Enable reasoning traces
  • Safety Settings: Configure content filtering
  • Grounding: Connect to external knowledge sources

For detailed documentation on advanced features, see the Google Gemini API Official Documentation.


Token Limits

  • Input tokens: up to 1,000,000 (long context)
  • Output tokens: up to 8,192 (configurable via maxOutputTokens)

Migration Guide

From Google Cloud

Change two configuration values:

  1. Base URL: https://generativelanguage.googleapis.com → https://api.apixo.ai
  2. API Key: Google API key → APIXO authkey

Everything else remains identical.

From OpenAI

APIXO uses Google's API format which differs from OpenAI's. Key differences:

Aspect           OpenAI                        Google/APIXO
Endpoint         /v1/chat/completions          /api/v1/google/models/{model}:generateContent
Message format   messages: [{role, content}]   contents: [{role, parts: [{text}]}]
Model field      model: "gpt-4"                URL path: /models/gemini-3-pro:...
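The message-format difference above can be bridged mechanically. This sketch maps OpenAI-style {role, content} messages to Gemini-style contents; to_gemini_contents is a hypothetical helper, and note that OpenAI's assistant role becomes "model" (OpenAI system messages would instead map to the systemInstruction field).

```python
# Illustrative OpenAI -> Gemini message translation.

def to_gemini_contents(messages: list[dict]) -> list[dict]:
    role_map = {"assistant": "model"}  # Gemini uses "model", not "assistant"
    return [
        {
            "role": role_map.get(m["role"], m["role"]),
            "parts": [{"text": m["content"]}],
        }
        for m in messages
    ]

openai_messages = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]
contents = to_gemini_contents(openai_messages)
```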

Notes

  • API Format: 100% Google Gemini API standard - no custom wrappers
  • Authentication: Bearer token with APIXO authkey
  • Base URL: https://api.apixo.ai
  • SDKs: Works with official Google SDKs by configuring base URL
