Dated snapshot of GPT-5.1 for reproducible results. Supports cached input tokens for cost savings on repeated context. Ideal for production deployments requiring model version pinning.
| Token Type | Credits | USD Equivalent |
|---|---|---|
| Input Tokens | 1,250 | $1.25 |
| Output Tokens | 10,000 | $10.00 |
| Cached Tokens | 125 | $0.13 |
* 1 credit ≈ $0.001 (actual charges may vary based on usage)
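As a quick sanity check on the table above, credits convert to approximate USD at the documented rate. The helper below is an illustrative sketch, not part of the service's API, and assumes the 1 credit ≈ $0.001 rate holds:

```python
# Convert core.today credits to an approximate USD cost.
# Assumes the documented rate of 1 credit ~= $0.001; actual billing may differ.
CREDIT_USD_RATE = 0.001

def credits_to_usd(credits: float) -> float:
    """Approximate USD equivalent of a credit amount."""
    return credits * CREDIT_USD_RATE

# Values from the pricing table:
print(credits_to_usd(1_250))   # input tokens
print(credits_to_usd(10_000))  # output tokens
print(credits_to_usd(125))     # cached tokens
```

Displayed USD values in the table are rounded to the nearest cent, so small amounts (e.g. 125 credits) may show as $0.13.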
curl -X POST "https://api.core.today/llm/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdt_your_api_key" \
-d '{
"model": "gpt-5.1-2025-11-13",
"messages": [
{
"role": "system",
"content": "You are a data analyst. Provide consistent, structured analysis."
},
{
"role": "user",
"content": "Analyze the key factors driving cloud computing adoption in 2026."
}
],
"temperature": 0,
"max_completion_tokens": 2000
}'

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| messages | array | Yes | - | Array of message objects with role and content |
| model | string | Yes | gpt-5.1-2025-11-13 | Model identifier |
| max_completion_tokens | integer | No | 4096 | Maximum tokens in response (up to 32768). Note: use max_completion_tokens, not max_tokens |
| temperature | float | No | 1.0 | Sampling temperature (0-2) |
| stream | boolean | No | false | Enable Server-Sent Events streaming |
| top_p | float | No | 1.0 | Nucleus sampling threshold (0-1) |
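The parameters above assemble into a request body like the curl example. A minimal sketch of building that payload programmatically (the dict construction is illustrative; only the field names come from the table):

```python
import json

# Build an OpenAI-compatible chat completion payload with a pinned
# model snapshot, mirroring the parameter table above.
payload = {
    "model": "gpt-5.1-2025-11-13",   # required: dated snapshot, not a floating alias
    "messages": [                     # required: role/content message objects
        {"role": "system", "content": "You are a data analyst."},
        {"role": "user", "content": "Analyze cloud adoption drivers."},
    ],
    "temperature": 0,                 # optional; 0 for maximally consistent output
    "max_completion_tokens": 2000,    # optional; note the name is NOT "max_tokens"
    "stream": False,                  # optional; true enables SSE streaming
}

body = json.dumps(payload)            # send as the POST request body
```

Using `max_tokens` instead of `max_completion_tokens` is a common mistake with OpenAI-compatible endpoints; the table above calls this out explicitly.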
Pin model version for consistent results across runs
POST /llm/openai/v1/chat/completions
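When `stream` is set to true, this endpoint delivers the response as Server-Sent Events. A minimal sketch of extracting text deltas, assuming the OpenAI-compatible `data: {...}` / `data: [DONE]` wire format (the sample lines below are illustrative, not real API output):

```python
import json

def extract_deltas(sse_lines):
    """Concatenate assistant text from OpenAI-style SSE 'data:' lines."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue                      # skip blank keep-alives and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":              # end-of-stream sentinel
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)

# Illustrative sample stream:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
print(extract_deltas(sample))  # Hello
```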