The ultra-lightweight, fastest GPT-5.4 variant, with a 400K-token context window and up to 128K output tokens. Designed for high-throughput, low-latency applications at minimal cost. Supports MCP for tool integration.
| Token Type | Credits | USD Equivalent |
|---|---|---|
| Input Tokens | 200 | $0.20 |
| Output Tokens | 1,250 | $1.25 |
* 1 credit ≈ $0.001 (actual charges may vary based on usage)
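As a sketch of the credit math, using only the rate stated above (1 credit ≈ $0.001; the helper name is illustrative, not part of any API):

```python
# Illustrative credit-to-USD conversion using the rate from the pricing note.
CREDIT_USD = 0.001  # 1 credit ~ $0.001, per the note above

def credits_to_usd(credits: float) -> float:
    """Convert a credit amount to its approximate USD equivalent."""
    return credits * CREDIT_USD

print(credits_to_usd(200))   # input-token row:  $0.20
print(credits_to_usd(1250))  # output-token row: $1.25
```

This reproduces the USD column of the pricing table from the credit column.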
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| messages | array | Yes | - | Array of message objects with role and content |
| model | string | Yes | gpt-5.4-nano | Model identifier |
| max_completion_tokens | integer | No | 4096 | Maximum tokens in response (up to 128,000). Note: use `max_completion_tokens`, not `max_tokens` |
| temperature | float | No | 1.0 | Sampling temperature (0-2) |
| stream | boolean | No | false | Enable Server-Sent Events streaming |
| top_p | float | No | 1.0 | Nucleus sampling threshold (0-1) |
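The parameter table above can be turned into a small client-side check before sending a request. This is a hypothetical helper (not part of the API or any SDK); the field names and ranges come from the table:

```python
# Hypothetical payload builder enforcing the documented parameter constraints.
def build_request(messages, model="gpt-5.4-nano", max_completion_tokens=4096,
                  temperature=1.0, stream=False, top_p=1.0):
    """Assemble a chat-completions request body, validating documented ranges."""
    if not messages:
        raise ValueError("messages is required and must be non-empty")
    if not 1 <= max_completion_tokens <= 128_000:
        raise ValueError("max_completion_tokens must be between 1 and 128000")
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be in [0, 2]")
    if not 0 <= top_p <= 1:
        raise ValueError("top_p must be in [0, 1]")
    # Per the table: the key is max_completion_tokens, NOT max_tokens.
    return {
        "model": model,
        "messages": messages,
        "max_completion_tokens": max_completion_tokens,
        "temperature": temperature,
        "stream": stream,
        "top_p": top_p,
    }

payload = build_request(
    [{"role": "user", "content": "Classify: great product!"}],
    max_completion_tokens=50,
    temperature=0,
)
```

The resulting dict can be JSON-serialized as the request body of the curl example below.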
High-speed text classification with GPT-5.4 Nano
```shell
curl -X POST "https://api.core.today/llm/openai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdt_your_api_key" \
  -d '{
    "model": "gpt-5.4-nano",
    "messages": [
      {
        "role": "system",
        "content": "Classify the following text into one of these categories: positive, negative, neutral. Respond with only the category."
      },
      {
        "role": "user",
        "content": "The new product launch exceeded all expectations, with record-breaking sales in the first week."
      }
    ],
    "max_completion_tokens": 50,
    "temperature": 0
  }'
```

Endpoint: `POST /llm/openai/v1/chat/completions`
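When `stream` is set to true, responses arrive as Server-Sent Events. The sketch below parses such a stream, assuming the OpenAI-compatible format of `data: {...}` lines ending with a `data: [DONE]` sentinel; that format is inferred from the OpenAI-style endpoint path and is not confirmed by this page:

```python
import json

# Hypothetical SSE parser, assuming OpenAI-style "data: {...}" chunk lines
# terminated by "data: [DONE]" (an assumption based on the /openai/ path).
def iter_content(sse_lines):
    """Yield text deltas from an iterable of raw SSE lines."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Sample stream for a classification response, split across two chunks:
sample = [
    'data: {"choices":[{"delta":{"content":"pos"}}]}',
    'data: {"choices":[{"delta":{"content":"itive"}}]}',
    "data: [DONE]",
]
print("".join(iter_content(sample)))  # → positive
```

In a real client the lines would come from the HTTP response body rather than a hard-coded list.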