100% Google Gemini API compatible: APIXO implements the standard Google Gemini API. Simply change the baseURL to https://api.apixo.ai and use an APIXO authkey. It integrates seamlessly with any Gemini-compatible SDK or tool, with no code changes required.
Gemini 3 Pro is Google's flagship multimodal reasoning model. It accepts text, image, video, and audio input, and supports long-context conversation, function calling, and structured output.

Quick Start

Direct API Call

curl -X POST https://api.apixo.ai/api/v1/google/models/gemini-3-pro:generateContent \
  -H "Authorization: Bearer YOUR_APIXO_AUTHKEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "role": "user",
        "parts": [{"text": "Explain quantum computing in simple terms."}]
      }
    ],
    "generationConfig": {
      "maxOutputTokens": 1024,
      "temperature": 0.7
    }
  }'

JavaScript/TypeScript

const response = await fetch(
  "https://api.apixo.ai/api/v1/google/models/gemini-3-pro:generateContent",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_APIXO_AUTHKEY",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      contents: [
        {
          role: "user",
          parts: [{ text: "What is artificial intelligence?" }]
        }
      ],
      generationConfig: {
        temperature: 0.7,
        maxOutputTokens: 1024
      }
    })
  }
);

const result = await response.json();
console.log(result.candidates[0].content.parts[0].text);

Endpoints

APIXO uses the standard Google Gemini API endpoint format:
  • Non-streaming: POST /api/v1/google/models/{model}:generateContent
  • Streaming: POST /api/v1/google/models/{model}:streamGenerateContent
where {model} is the model name (e.g. gemini-3-pro, gemini-3-flash).
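The two endpoint shapes above can be composed with a small helper. This is purely illustrative (the `endpoint` function is ours, not part of any SDK); the base URL and path template come from the documentation itself:

```python
# Build APIXO Gemini endpoint URLs for a given model name.
BASE_URL = "https://api.apixo.ai"

def endpoint(model: str, stream: bool = False) -> str:
    """Return the generateContent or streamGenerateContent URL for `model`."""
    action = "streamGenerateContent" if stream else "generateContent"
    return f"{BASE_URL}/api/v1/google/models/{model}:{action}"

print(endpoint("gemini-3-pro"))
# https://api.apixo.ai/api/v1/google/models/gemini-3-pro:generateContent
```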

Third-Party SDK Integration

APIXO is fully compatible with the official Google SDKs. Just configure the base URL and API key:

Python (Google SDK)

import google.generativeai as genai

genai.configure(
    api_key="YOUR_APIXO_AUTHKEY",
    transport="rest",
    client_options={
        "api_endpoint": "https://api.apixo.ai"
    }
)

model = genai.GenerativeModel('gemini-3-pro')
response = model.generate_content("What is quantum computing?")
print(response.text)

Node.js (Google SDK)

const { GoogleGenerativeAI } = require("@google/generative-ai");

const genAI = new GoogleGenerativeAI("YOUR_APIXO_AUTHKEY");

// The client has no writable `baseUrl` property; pass the APIXO
// endpoint via the request options argument instead.
const model = genAI.getGenerativeModel(
  { model: "gemini-3-pro" },
  { baseUrl: "https://api.apixo.ai" }
);
const result = await model.generateContent("Explain neural networks");
console.log(result.response.text());

LangChain

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-3-pro",
    google_api_key="YOUR_APIXO_AUTHKEY",
    base_url="https://api.apixo.ai"
)

response = llm.invoke("What are transformers in AI?")

Request Format

Basic Request

{
  "contents": [
    {
      "role": "user",
      "parts": [
        {"text": "Your message here"}
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1024
  }
}

With System Instruction

{
  "systemInstruction": {
    "parts": [
      {"text": "You are a helpful assistant specialized in..."}
    ]
  },
  "contents": [
    {
      "role": "user",
      "parts": [{"text": "Your message"}]
    }
  ]
}

Multimodal Input (Image Analysis)

{
  "contents": [
    {
      "role": "user",
      "parts": [
        {"text": "Analyze this image and describe what you see."},
        {
          "fileData": {
            "mimeType": "image/jpeg",
            "fileUri": "https://example.com/image.jpg"
          }
        }
      ]
    }
  ]
}
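Besides `fileData` URIs, the standard Gemini format also accepts image bytes inline as base64 via an `inlineData` part. A minimal payload builder (the `image_part` helper name and the fake bytes are ours for illustration):

```python
import base64

def image_part(image_bytes: bytes, mime_type: str = "image/jpeg") -> dict:
    """Wrap raw image bytes as an inlineData part (base64-encoded)."""
    return {
        "inlineData": {
            "mimeType": mime_type,
            "data": base64.b64encode(image_bytes).decode("ascii"),
        }
    }

payload = {
    "contents": [
        {
            "role": "user",
            "parts": [
                {"text": "Describe this image."},
                image_part(b"\xff\xd8\xff\xe0fake-jpeg-bytes"),
            ],
        }
    ]
}
```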

Parameters

  • contents (array, required): Array of conversation messages with role and parts
  • systemInstruction (object, optional): System prompt to guide model behavior
  • generationConfig (object, optional): Generation settings (e.g. temperature, maxOutputTokens)

Content Structure

Each message in contents has:
  • role: "user", "model", or "tool"
  • parts: Array of content parts (text, images, videos, etc.)
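The structure above can be captured in a tiny builder for the common single-text-part case (the `message` helper is ours, not an SDK function):

```python
def message(role: str, text: str) -> dict:
    """Build one Gemini-format message with a single text part."""
    if role not in ("user", "model", "tool"):
        raise ValueError(f"unknown role: {role}")
    return {"role": role, "parts": [{"text": text}]}

msg = message("user", "Hello")
```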

Response Format

Non-streaming Response

{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {"text": "Generated response text..."}
        ]
      },
      "finishReason": "STOP",
      "safetyRatings": [...]
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 10,
    "candidatesTokenCount": 150,
    "totalTokenCount": 160
  }
}
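Extracting the reply text from this shape is a one-liner worth spelling out, since a candidate's `parts` array may contain more than one text part. A sketch against a hand-built sample response:

```python
def extract_text(response: dict) -> str:
    """Concatenate the text parts of the first candidate."""
    parts = response["candidates"][0]["content"]["parts"]
    return "".join(p.get("text", "") for p in parts)

sample = {
    "candidates": [{
        "content": {"role": "model", "parts": [{"text": "Hello, "}, {"text": "world."}]},
        "finishReason": "STOP",
    }],
    "usageMetadata": {"promptTokenCount": 10, "candidatesTokenCount": 3, "totalTokenCount": 13},
}
print(extract_text(sample))  # Hello, world.
```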

Streaming Response

Streaming returns Server-Sent Events (SSE) with incremental chunks. Each chunk has the same structure, with the last chunk containing finishReason: "STOP".
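Assuming the standard SSE wire format described above (each event a `data: ` line carrying one JSON chunk), the stream can be decoded like this; the sample chunks are hand-written for illustration:

```python
import json

def parse_sse_chunks(raw: str):
    """Yield JSON payloads from SSE body text (lines prefixed with 'data: ')."""
    for line in raw.splitlines():
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

stream = (
    'data: {"candidates": [{"content": {"parts": [{"text": "Once"}]}}]}\n\n'
    'data: {"candidates": [{"content": {"parts": [{"text": " upon"}]}, "finishReason": "STOP"}]}\n\n'
)
texts = [c["candidates"][0]["content"]["parts"][0]["text"] for c in parse_sse_chunks(stream)]
print("".join(texts))  # Once upon
```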

Multi-turn Conversations

Include previous messages in the contents array to maintain conversation context:
{
  "contents": [
    {
      "role": "user",
      "parts": [{"text": "What is machine learning?"}]
    },
    {
      "role": "model",
      "parts": [{"text": "Machine learning is a subset of AI that..."}]
    },
    {
      "role": "user",
      "parts": [{"text": "Can you give me an example?"}]
    }
  ]
}
thoughtSignature Handling — If a response contains a thoughtSignature field, you must include it verbatim in subsequent requests to maintain reasoning context.
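One way to satisfy the thoughtSignature rule is to copy the candidate's `parts` back into the history verbatim rather than rebuilding them from extracted text. The helper below is illustrative, and the signature value is fake:

```python
def append_model_turn(contents: list, candidate: dict) -> list:
    """Append a model reply to the history, preserving parts (and any
    thoughtSignature they carry) byte-for-byte."""
    content = candidate["content"]
    contents.append({"role": content["role"], "parts": content["parts"]})
    return contents

history = [{"role": "user", "parts": [{"text": "What is machine learning?"}]}]
reply = {
    "content": {
        "role": "model",
        "parts": [{"text": "A subset of AI...", "thoughtSignature": "fake-sig-123"}],
    },
    "finishReason": "STOP",
}
append_model_turn(history, reply)
history.append({"role": "user", "parts": [{"text": "Can you give me an example?"}]})
```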

Streaming

To enable streaming, use the streaming endpoint:
const response = await fetch(
  "https://api.apixo.ai/api/v1/google/models/gemini-3-pro:streamGenerateContent",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_APIXO_AUTHKEY",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      contents: [
        {
          role: "user",
          parts: [{ text: "Write a short story about AI." }]
        }
      ]
    })
  }
);

// Process SSE stream
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  
  const chunk = decoder.decode(value);
  console.log(chunk);
}

Advanced Features

APIXO supports all Google Gemini API advanced features:
  • Function Calling (Tools): Enable models to call external functions
  • Structured Output: Define JSON Schema for return format
  • Thinking Mode: Enable reasoning traces
  • Safety Settings: Configure content filtering
  • Grounding: Connect to external knowledge sources
For detailed documentation on advanced features, see the Google Gemini API Official Documentation.
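As one concrete example of the list above, a function-calling request adds a `tools` array of `functionDeclarations` to the body. The following is a hedged sketch in the standard Gemini tools format; `get_weather` and its schema are made up for illustration:

```python
# A generateContent body declaring one callable function.
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "What's the weather in Paris?"}]}
    ],
    "tools": [
        {
            "functionDeclarations": [
                {
                    "name": "get_weather",
                    "description": "Look up current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                }
            ]
        }
    ],
}
```

If the model decides to call the function, the candidate's parts will contain a functionCall instead of text; the result is then sent back as a "tool"-role message.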

Token Limits

  • Input tokens: Up to 1,000,000+ tokens (long context)
  • Output tokens: Up to 8,192 tokens (configurable via maxOutputTokens)

Migration Guide

From Google Cloud

Change two configuration values:
  1. Base URL: https://generativelanguage.googleapis.com → https://api.apixo.ai
  2. API Key: Google API key → APIXO authkey
Everything else remains identical.

From OpenAI

APIXO uses Google’s API format which differs from OpenAI’s. Key differences:
  • Endpoint: /v1/chat/completions → /api/v1/google/models/{model}:generateContent
  • Message format: messages: [{role, content}] → contents: [{role, parts: [{text}]}]
  • Model field: model: "gpt-4" → URL path /models/gemini-3-pro:...
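These mapping rules can be mechanized for the plain-text case. The converter below is a sketch under the assumptions stated in its docstring (the function name is ours); it maps "assistant" to "model" and lifts a leading "system" message into systemInstruction:

```python
def openai_to_gemini(messages: list) -> dict:
    """Convert OpenAI-style chat messages to a Gemini generateContent body.

    Illustrative only: handles string `content` values; multimodal
    content and tool calls are out of scope.
    """
    body, contents = {}, []
    for m in messages:
        if m["role"] == "system":
            body["systemInstruction"] = {"parts": [{"text": m["content"]}]}
            continue
        role = "model" if m["role"] == "assistant" else "user"
        contents.append({"role": role, "parts": [{"text": m["content"]}]})
    body["contents"] = contents
    return body

body = openai_to_gemini([
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
])
```

Note that unlike OpenAI's format, the model name goes in the URL path, not the body.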

Notes

  • API Format: 100% Google Gemini API standard - no custom wrappers
  • Authentication: Bearer token with APIXO authkey
  • Base URL: https://api.apixo.ai
  • SDKs: Works with official Google SDKs by configuring base URL