Google's latest and most capable Gemini model, in preview. Pricing adjusts dynamically with context length: inputs over 200K tokens are billed at a higher rate.
| Token Type | Credits | USD Equivalent |
|---|---|---|
| Input Tokens | 2,000 | $2.00 |
| Output Tokens | 12,000 | $12.00 |
* 1 credit ≈ $0.001 (actual charges may vary based on usage)
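At the rate above, converting a credit total to approximate dollars is a single multiplication. A minimal sketch — the `credits_to_usd` helper is illustrative, not part of the API, and assumes charges scale linearly with credits:

```python
# Documented conversion rate: 1 credit ≈ $0.001.
USD_PER_CREDIT = 0.001

def credits_to_usd(credits: float) -> float:
    """Approximate USD cost for a given number of credits."""
    return credits * USD_PER_CREDIT

# Matches the pricing table: 2,000 credits ≈ $2.00, 12,000 credits ≈ $12.00.
print(credits_to_usd(2_000))   # 2.0
print(credits_to_usd(12_000))  # 12.0
```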
curl -X POST "https://api.core.today/llm/gemini/v1beta/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdt_your_api_key" \
-d '{
"model": "gemini-3.1-pro-preview",
"messages": [
{
"role": "user",
"content": "Analyze the current state of quantum error correction research and identify the most promising approaches for achieving fault-tolerant quantum computing by 2030."
}
],
"max_tokens": 4096,
"temperature": 0.5
}'

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| messages | array | Yes | - | Array of message objects (OpenAI format) |
| temperature | float | No | 1 | Sampling temperature (0-2) |
| top_p | float | No | 0.95 | Nucleus sampling parameter |
| max_tokens | integer | No | - | Maximum output tokens |
| stream | boolean | No | false | Enable Server-Sent Events streaming |
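The parameters above map directly onto the JSON request body. A minimal sketch of assembling one in Python — the `build_payload` helper is hypothetical, not part of the API; the defaults and the 0-2 temperature range come from the table above:

```python
def build_payload(messages, temperature=1.0, top_p=0.95,
                  max_tokens=None, stream=False,
                  model="gemini-3.1-pro-preview"):
    """Assemble an OpenAI-format chat-completions request body,
    applying the documented defaults and ranges."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be in [0, 2]")
    payload = {
        "model": model,
        "messages": messages,       # required: list of {"role", "content"} dicts
        "temperature": temperature,
        "top_p": top_p,
        "stream": stream,           # true => Server-Sent Events streaming
    }
    if max_tokens is not None:      # optional: omit to use the model default
        payload["max_tokens"] = max_tokens
    return payload

# Reproduces the curl example's body.
body = build_payload(
    [{"role": "user", "content": "Analyze quantum error correction research."}],
    temperature=0.5,
    max_tokens=4096,
)
```

Send `body` as the JSON payload of a POST to the chat-completions endpoint with your `cdt_` API key in the `Authorization` header, exactly as in the curl example.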
POST /llm/gemini/v1beta/openai/chat/completions