
Gemini 3.1 Pro Preview

Google's latest and most capable Gemini model, available in preview. Pricing is dynamic: requests whose input exceeds 200K tokens are billed at higher long-context rates.

500 credits per request (dynamic pricing: standard / long-context >200K input tokens)
Advanced reasoning and analysis
Native multimodal (text + vision + audio + video)
1M+ token context window
Function calling & JSON mode


Model Specifications

Context Window: 1M tokens
Max Output: 66K tokens
Training Cutoff: January 2025
Compatible SDK: OpenAI, Google AI

Capabilities

Vision
Function Calling
Streaming
JSON Mode
System Prompt

Token Pricing (per 1M tokens)

Token Type       Credits   USD Equivalent
Input Tokens     2,000     $2.00
Output Tokens    12,000    $12.00

* 1 credit ≈ $0.001 (actual charges may vary based on usage)

Quick Start

curl -X POST "https://api.core.today/llm/gemini/v1beta/openai/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdt_your_api_key" \
  -d '{
  "model": "gemini-3.1-pro-preview",
  "messages": [
    {
      "role": "user",
      "content": "Analyze the current state of quantum error correction research and identify the most promising approaches for achieving fault-tolerant quantum computing by 2030."
    }
  ],
  "max_tokens": 4096,
  "temperature": 0.5
}'
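The same request can be sent from Python using only the standard library. A minimal sketch, assuming the endpoint and placeholder cdt_ key shown above; build_request is a hypothetical helper for illustration, not part of any SDK:

```python
import json
from urllib import request

API_URL = "https://api.core.today/llm/gemini/v1beta/openai/chat/completions"

def build_request(api_key: str, prompt: str, max_tokens: int = 4096,
                  temperature: float = 0.5) -> request.Request:
    """Build a POST request for the OpenAI-compatible chat endpoint."""
    body = {
        "model": "gemini-3.1-pro-preview",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request("cdt_your_api_key", "Summarize this document.")
# response = request.urlopen(req)  # uncomment to actually send the request
```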

Parameters

Parameter    Type     Required  Default  Description
messages     array    Yes       -        Array of message objects (OpenAI format)
temperature  float    No        1        Sampling temperature (0-2)
top_p        float    No        0.95     Nucleus sampling parameter
max_tokens   integer  No        -        Maximum output tokens
stream       boolean  No        false    Enable Server-Sent Events streaming
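The ranges and defaults in the table above can be encoded as a small client-side sanity check. Illustrative only: validate_params is not part of the API, and the 0-1 bound for top_p is the conventional nucleus-sampling range rather than one stated on this page:

```python
DEFAULTS = {"temperature": 1.0, "top_p": 0.95, "stream": False}

def validate_params(params: dict) -> dict:
    """Merge user params with the documented defaults and check ranges."""
    if "messages" not in params:
        raise ValueError("messages is required")
    merged = {**DEFAULTS, **params}
    if not 0 <= merged["temperature"] <= 2:
        raise ValueError("temperature must be in [0, 2]")
    if not 0 <= merged["top_p"] <= 1:
        raise ValueError("top_p must be in [0, 1]")
    return merged
```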

Examples

Research Analysis (streaming)

In-depth analysis with Gemini 3.1 Pro, streamed as Server-Sent Events

curl -X POST "https://api.core.today/llm/gemini/v1beta/openai/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdt_your_api_key" \
  -d '{
  "model": "gemini-3.1-pro-preview",
  "messages": [
    {
      "role": "user",
      "content": "Analyze the current state of quantum error correction research and identify the most promising approaches for achieving fault-tolerant quantum computing by 2030."
    }
  ],
  "max_tokens": 4096,
  "temperature": 0.5,
  "stream": true
}'
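With "stream": true the endpoint returns Server-Sent Events. A minimal parser sketch, assuming the chunks follow the standard OpenAI streaming format (choices[0].delta.content and a [DONE] sentinel), which this page does not spell out:

```python
import json

def iter_stream_text(lines):
    """Yield text deltas from OpenAI-style SSE lines ('data: {...}')."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]
```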

Tips & Best Practices

1. Dynamic pricing: inputs over 200K tokens use long-context pricing ($4.00 input / $18.00 output per 1M tokens)
2. Keep inputs under 200K tokens when possible to stay on standard pricing
3. Excellent for research, analysis, and complex reasoning tasks
4. Combine with vision inputs for document and diagram analysis
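The two pricing tiers above can be turned into a quick cost estimate. Rates are taken from this page (standard $2.00/$12.00, long-context $4.00/$18.00 per 1M tokens); treat the result as an estimate only, since actual charges may vary:

```python
def estimate_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD under the two documented pricing tiers."""
    if input_tokens > 200_000:          # long-context tier
        in_rate, out_rate = 4.00, 18.00
    else:                               # standard tier
        in_rate, out_rate = 2.00, 12.00
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
```

For example, a 50K-token prompt with a 2K-token answer stays in the standard tier, while any prompt over 200K tokens is billed entirely at the long-context rate.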

Use Cases

Complex reasoning and research tasks
Long document analysis and summarization
Multimodal content understanding
Advanced code generation and review
Scientific and mathematical problem solving

Model Info

Provider: Google
Version: 3.1-preview
Category: LLM
Price: 500 credits

API Endpoint

POST /llm/gemini/v1beta/openai/chat/completions