OpenAI's fastest and cheapest model. Optimized for classification, autocompletion, and low-latency tasks. Ultra-affordable at $0.10/1M input tokens.
| Token Type | Credits | USD Equivalent |
|---|---|---|
| Input Tokens | 100 | $0.10 |
| Output Tokens | 400 | $0.40 |
| Cached Tokens | 25 | $0.03 |
* 1 credit ≈ $0.001 (actual charges may vary based on usage)
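The credit rates above translate directly into a per-request cost estimate. A minimal sketch, assuming (as the $0.10/1M figure in the blurb implies) that the table's credit rates are per 1M tokens; `estimate_cost` is a hypothetical helper, not part of the API:

```python
# Credit rates from the pricing table, assumed to be per 1M tokens.
CREDITS_PER_1M = {"input": 100, "output": 400, "cached": 25}
CREDIT_USD = 0.001  # 1 credit ≈ $0.001


def estimate_cost(input_tokens, output_tokens, cached_tokens=0):
    """Return (credits, usd) for a single request. Illustrative helper only."""
    credits = (
        input_tokens * CREDITS_PER_1M["input"]
        + output_tokens * CREDITS_PER_1M["output"]
        + cached_tokens * CREDITS_PER_1M["cached"]
    ) / 1_000_000
    return credits, credits * CREDIT_USD


# e.g. a request with 2,000 input tokens and 500 output tokens
credits, usd = estimate_cost(2000, 500)
```

At these rates, small classification calls cost fractions of a credit, which is the point of using the nano tier for high-volume, low-latency work.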
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| messages | array | Yes | - | Array of message objects with role and content |
| model | string | Yes | gpt-4.1-nano | Model identifier |
| temperature | float | No | 1.0 | Sampling temperature (0-2). Lower = more focused, higher = more creative |
| max_tokens | integer | No | 4096 | Maximum tokens in response (up to 32768) |
| stream | boolean | No | false | Enable Server-Sent Events streaming |
| response_format | object | No | - | Format of response: { "type": "json_object" } for JSON mode |
| tools | array | No | - | List of tools (functions) the model can call |
| top_p | float | No | 1.0 | Nucleus sampling threshold (0-1) |
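The same request can be built from Python with only the standard library; the payload fields mirror the parameter table above, and the endpoint and cdt_ key format are those shown in the curl examples (the key value is a placeholder). The actual network call is left commented out so the snippet runs offline:

```python
import json
import urllib.request

API_URL = "https://api.core.today/llm/openai/v1/chat/completions"
API_KEY = "cdt_your_api_key"  # placeholder — substitute your real key

# Request body fields follow the parameter table above.
payload = {
    "model": "gpt-4.1-nano",
    "messages": [
        {"role": "system", "content": "Reply with one word."},
        {"role": "user", "content": "Ping?"},
    ],
    "temperature": 0.0,
    "max_tokens": 10,
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# response = urllib.request.urlopen(req)  # uncomment to actually send the request
```

Because `data` is set, urllib issues a POST, matching the endpoint's method.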
Ultra-fast sentiment classification
curl -X POST "https://api.core.today/llm/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdt_your_api_key" \
-d '{
"model": "gpt-4.1-nano",
"messages": [
{
"role": "system",
"content": "Classify the sentiment as positive, negative, or neutral. Respond with JSON: {\"sentiment\": \"...\", \"confidence\": 0.0}"
},
{
"role": "user",
"content": "The new update is amazing! Everything runs so much smoother now."
}
],
"response_format": {
"type": "json_object"
},
"max_tokens": 50
}'
Low-latency text completion
curl -X POST "https://api.core.today/llm/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer cdt_your_api_key" \
-d '{
"model": "gpt-4.1-nano",
"messages": [
{
"role": "system",
"content": "Complete the user'\''s sentence naturally. Keep it brief."
},
{
"role": "user",
"content": "The main advantage of microservices architecture is"
}
],
"temperature": 0.3,
"max_tokens": 100
}'
POST /llm/openai/v1/chat/completions
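With response_format set to json_object, the assistant's message content is itself a JSON string that must be parsed a second time. A sketch of unpacking the sentiment-classification result, assuming the endpoint returns the standard OpenAI chat-completions response shape (the sample body below is illustrative, not captured output):

```python
import json

# Sample response body in the OpenAI chat-completions shape (values illustrative).
raw = '''{
  "choices": [
    {"message": {"role": "assistant",
                 "content": "{\\"sentiment\\": \\"positive\\", \\"confidence\\": 0.97}"}}
  ]
}'''

body = json.loads(raw)
# First parse: the HTTP response body. Second parse: the JSON-mode message content.
result = json.loads(body["choices"][0]["message"]["content"])
```

Guarding the second `json.loads` with a try/except is worth doing in production, since JSON mode constrains the output format but malformed content is still possible when max_tokens truncates the reply.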