Core.Today | OpenAI

GPT-5.1 (2025-11-13)

Dated snapshot of GPT-5.1 for reproducible results. Supports cached input tokens for cost savings on repeated context. Ideal for production deployments requiring model version pinning.

3 credits
per request
Fixed model snapshot for reproducibility
Cached input tokens for cost savings
Strong reasoning and coding performance
Function calling & JSON mode
Streaming support


Model Specifications

Context Window: 1M tokens
Max Output: 33K tokens
Training Cutoff: 2025-03
Compatible SDK: OpenAI

Capabilities

Vision
Function Calling
Streaming
JSON Mode
System Prompt

Token Pricing (per 1M tokens)

Token Type    | Credits | USD Equivalent
Input Tokens  | 1,250   | $1.25
Output Tokens | 10,000  | $10.00
Cached Tokens | 125     | $0.13

* 1 credit β‰ˆ $0.001 (actual charges may vary based on usage)
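Using the rates in the table above and the approximate $0.001-per-credit conversion, the cost of a single request can be estimated from its token counts. One assumption to flag: this sketch bills cached tokens at the cached rate *instead of* the full input rate, which is the usual prompt-caching model but is not stated explicitly on this page.

```python
# Credit rates from the pricing table above (credits per 1M tokens).
RATES = {"input": 1_250, "output": 10_000, "cached": 125}
CREDIT_USD = 0.001  # approximate: 1 credit is roughly $0.001

def request_cost(input_tokens: int, output_tokens: int,
                 cached_tokens: int = 0) -> tuple:
    """Estimate (credits, approx USD) for one request.

    Assumes cached tokens replace full-price input tokens, i.e. pass
    only the non-cached portion as input_tokens.
    """
    credits = (
        input_tokens * RATES["input"]
        + output_tokens * RATES["output"]
        + cached_tokens * RATES["cached"]
    ) / 1_000_000
    return credits, credits * CREDIT_USD

# 10K fresh input tokens + 2K output tokens, no cache hits:
credits, usd = request_cost(10_000, 2_000)  # -> 32.5 credits
# Same request with the 10K-token prefix served from cache:
cached_credits, _ = request_cost(0, 2_000, cached_tokens=10_000)  # -> 21.25 credits
```

With a fully cached prefix the same request drops from 32.5 to 21.25 credits, which is where the "cached input tokens for cost savings" claim pays off on repeated context.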

Quick Start

curl -X POST "https://api.core.today/llm/openai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdt_your_api_key" \
  -d '{
  "model": "gpt-5.1-2025-11-13",
  "messages": [
    {
      "role": "system",
      "content": "You are a data analyst. Provide consistent, structured analysis."
    },
    {
      "role": "user",
      "content": "Analyze the key factors driving cloud computing adoption in 2026."
    }
  ],
  "temperature": 0,
  "max_completion_tokens": 2000
}'
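Since the page lists OpenAI as the compatible SDK, the same request can also be sketched from Python. This version uses only the standard library and mirrors the curl payload above; the `CORE_TODAY_API_KEY` environment-variable name is an assumption for illustration, and the network call is skipped when no key is configured.

```python
import json
import os
import urllib.request

# Endpoint and payload mirror the Quick Start curl example.
API_URL = "https://api.core.today/llm/openai/v1/chat/completions"

def build_payload(user_prompt: str) -> dict:
    """Build a chat-completion request pinned to the dated snapshot."""
    return {
        "model": "gpt-5.1-2025-11-13",   # dated snapshot for reproducibility
        "messages": [
            {"role": "system",
             "content": "You are a data analyst. Provide consistent, structured analysis."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0,                # deterministic sampling for evaluation
        "max_completion_tokens": 2000,   # note: max_completion_tokens, not max_tokens
    }

def send(payload: dict, api_key: str) -> dict:
    """POST the payload with a bearer token and return the parsed JSON reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "Analyze the key factors driving cloud computing adoption in 2026.")
print(json.dumps(payload, indent=2))

# Only hit the network when a key is actually configured:
if os.environ.get("CORE_TODAY_API_KEY"):
    print(send(payload, os.environ["CORE_TODAY_API_KEY"]))
```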

Parameters

Parameter             | Type    | Required | Default            | Description
messages              | array   | Yes      | -                  | Array of message objects with role and content
model                 | string  | Yes      | gpt-5.1-2025-11-13 | Model identifier
max_completion_tokens | integer | No       | 4096               | Maximum tokens in the response (up to 32768). Note: use max_completion_tokens, not max_tokens
temperature           | float   | No       | 1.0                | Sampling temperature (0-2)
stream                | boolean | No       | false              | Enable Server-Sent Events streaming
top_p                 | float   | No       | 1.0                | Nucleus sampling threshold (0-1)
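When `stream` is set to true, the response arrives as Server-Sent Events rather than a single JSON body. The sketch below shows how such a stream is typically consumed, assuming the OpenAI-style wire format (`data: {json}` lines terminated by `data: [DONE]`), which is the convention for OpenAI-compatible endpoints but is not spelled out on this page.

```python
import json

def parse_sse_chunks(raw: str) -> list:
    """Collect content deltas from an OpenAI-style SSE stream body.

    Assumes each event is a `data: {json}` line and the stream ends
    with `data: [DONE]`.
    """
    out = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        delta = json.loads(data)["choices"][0]["delta"]
        if "content" in delta:
            out.append(delta["content"])
    return out

# A stream body in the shape OpenAI-compatible endpoints usually emit:
sample = (
    'data: {"choices":[{"delta":{"role":"assistant"}}]}\n'
    'data: {"choices":[{"delta":{"content":"Hello"}}]}\n'
    'data: {"choices":[{"delta":{"content":" world"}}]}\n'
    'data: [DONE]\n'
)
print("".join(parse_sse_chunks(sample)))  # prints "Hello world"
```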

Examples

Reproducible Analysis

Pin the model version for consistent results across runs. The Quick Start request above demonstrates the pattern: the dated model identifier ("gpt-5.1-2025-11-13") combined with "temperature": 0.

Tips & Best Practices

1. Use this dated snapshot for reproducible results in production
2. Leverage cached input tokens to reduce costs on repeated context
3. Temperature 0 makes outputs effectively deterministic for evaluation
4. Ideal for regression testing when comparing model versions
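Tip 2 relies on caching applying to repeated leading tokens, so the practical pattern is to keep long, static instructions as a byte-identical prefix of every request and put only the varying part last. A minimal sketch (the helper name and prompt text are illustrative, not part of this API):

```python
# Keep the long, unchanging instructions as the shared prefix of every
# request so repeated context can be billed at the cheaper cached rate.
STATIC_SYSTEM_PROMPT = (
    "You are a data analyst. Provide consistent, structured analysis. "
    "Always report figures in USD and cite the reporting period."
)

def build_messages(user_prompt: str) -> list:
    """Hypothetical helper: static system prefix first, variable part last."""
    return [
        {"role": "system", "content": STATIC_SYSTEM_PROMPT},  # identical every call
        {"role": "user", "content": user_prompt},             # only this varies
    ]

a = build_messages("Summarize Q1 revenue.")
b = build_messages("Summarize Q2 revenue.")
# The shared prefix is byte-identical across requests, so it is cacheable:
assert a[0] == b[0]
```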

Use Cases

Production systems requiring version pinning
Reproducible experiments and evaluations
Cost-optimized batch processing with caching
Enterprise applications with audit requirements
Regression testing for AI features
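For the regression-testing use case, one common approach is to store a fingerprint of a baseline response to a fixed prompt and flag drift when it changes. This is a sketch, not part of this API; note also that even a pinned snapshot at temperature 0 may not be bit-identical across runs, so production suites often compare extracted fields or semantics rather than raw bytes.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Short, stable digest of a model response for drift detection."""
    return hashlib.sha256(text.encode()).hexdigest()[:16]

# Imagine both strings came from the pinned snapshot on different days:
baseline = fingerprint("Cloud adoption is driven by cost, elasticity, ...")
candidate = fingerprint("Cloud adoption is driven by cost, elasticity, ...")
assert candidate == baseline  # unchanged output: no regression flagged
```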

Model Info

Provider: OpenAI
Version: 2025-11-13
Category: LLM
Price: 3 credits per request

API Endpoint

POST /llm/openai/v1/chat/completions