
GPT-5.4 Nano

The ultra-lightweight, fastest GPT-5.4 variant, with a 400K-token context window and a 128K-token output limit. Designed for high-throughput, low-latency applications at minimal cost. Supports MCP (Model Context Protocol) for tool integration.

1 credit per request
400K context window
128K max output tokens
Ultra-fast inference
Lowest cost GPT-5.4 variant
MCP (Model Context Protocol) support

Model Specifications

Context Window: 400K tokens
Max Output: 128K tokens
Training Cutoff: 2025-08
Compatible SDK: OpenAI

Capabilities

Vision
Function Calling
Streaming
JSON Mode
System Prompt
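
The JSON Mode capability can presumably be requested the way the OpenAI convention does it, via a `response_format` field. That field is not listed in the Parameters table below, so treat it as an assumption; a sketch of such a payload:

```python
# JSON Mode payload sketch. ASSUMPTION: the endpoint accepts the
# OpenAI-style `response_format: {"type": "json_object"}` field,
# which is not documented in the Parameters table on this page.
payload = {
    "model": "gpt-5.4-nano",
    "messages": [
        {"role": "system",
         "content": "Extract the city and country as JSON with keys "
                    "'city' and 'country'."},
        {"role": "user", "content": "I just got back from Lisbon, Portugal."},
    ],
    "response_format": {"type": "json_object"},
    "max_completion_tokens": 50,
}
```

With JSON Mode enabled, the model is constrained to emit a single valid JSON object, which pairs well with the extraction use cases listed further down.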

Token Pricing (per 1M tokens)

Token Type      Credits   USD Equivalent
Input Tokens    200       $0.20
Output Tokens   1,250     $1.25

* 1 credit ≈ $0.001 (actual charges may vary based on usage)
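
Given the rates above and 1 credit ≈ $0.001, per-call cost can be estimated as follows (a sketch that assumes strictly linear, per-token billing; the helper name is illustrative):

```python
# Estimate GPT-5.4 Nano token cost from the pricing table:
# 200 credits per 1M input tokens, 1,250 credits per 1M output tokens,
# 1 credit ~= $0.001. ASSUMPTION: billing is strictly linear per token.
INPUT_CREDITS_PER_M = 200
OUTPUT_CREDITS_PER_M = 1_250
USD_PER_CREDIT = 0.001

def estimate_cost(input_tokens: int, output_tokens: int) -> dict:
    credits = (input_tokens * INPUT_CREDITS_PER_M
               + output_tokens * OUTPUT_CREDITS_PER_M) / 1_000_000
    return {"credits": credits, "usd": credits * USD_PER_CREDIT}

# A short classification call: roughly 60 input and 5 output tokens.
print(estimate_cost(60, 5))
```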

Quick Start

curl -X POST "https://api.core.today/llm/openai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdt_your_api_key" \
  -d '{
  "model": "gpt-5.4-nano",
  "messages": [
    {
      "role": "system",
      "content": "Classify the following text into one of these categories: positive, negative, neutral. Respond with only the category."
    },
    {
      "role": "user",
      "content": "The new product launch exceeded all expectations, with record-breaking sales in the first week."
    }
  ],
  "max_completion_tokens": 50,
  "temperature": 0
}'

Parameters

Parameter               Type      Required  Default        Description
messages                array     Yes       -              Array of message objects with role and content
model                   string    Yes       gpt-5.4-nano   Model identifier
max_completion_tokens   integer   No        4096           Maximum tokens in response (up to 128,000). Note: use max_completion_tokens, not max_tokens
temperature             float     No        1.0            Sampling temperature (0-2)
stream                  boolean   No        false          Enable Server-Sent Events streaming
top_p                   float     No        1.0            Nucleus sampling threshold (0-1)

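
The Quick Start classification request can also be issued from Python using only the standard library. A sketch: the `classify` helper is illustrative, and running it requires a real `cdt_` API key:

```python
import json
import urllib.request

API_URL = "https://api.core.today/llm/openai/v1/chat/completions"

def classify(text: str, api_key: str) -> str:
    """POST the Quick Start classification request; return the label.

    Sketch only: needs a valid `cdt_` key to actually run.
    """
    body = {
        "model": "gpt-5.4-nano",
        "messages": [
            {"role": "system",
             "content": "Classify the following text into one of these "
                        "categories: positive, negative, neutral. "
                        "Respond with only the category."},
            {"role": "user", "content": text},
        ],
        "max_completion_tokens": 50,
        "temperature": 0,
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Since the endpoint is OpenAI-SDK compatible, the official OpenAI client pointed at this base URL should work as well.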

Tips & Best Practices

1. Most cost-effective model in the GPT-5.4 series
2. Ultra-fast response times ideal for real-time applications
3. Supports MCP for seamless tool integration
4. Perfect for classification, routing, and simple extraction tasks
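
For the real-time applications mentioned above, setting `"stream": true` delivers the response incrementally. Assuming the OpenAI-compatible Server-Sent Events wire format (`data: {json}` lines ending in `data: [DONE]`; an assumption based on SDK compatibility, not stated on this page), a sketch of reassembling the streamed text:

```python
import json

def collect_stream(sse_lines):
    """Reassemble assistant text from an SSE chat-completions stream.

    ASSUMPTION: OpenAI-compatible wire format -- each event is a
    `data: {json}` line, terminated by `data: [DONE]`.
    """
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data.strip() == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)

# Simulated stream, shaped like a `"stream": true` response on the wire:
demo = [
    'data: {"choices": [{"delta": {"content": "pos"}}]}',
    'data: {"choices": [{"delta": {"content": "itive"}}]}',
    "data: [DONE]",
]
print(collect_stream(demo))  # -> positive
```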

Use Cases

High-volume classification and routing
Real-time chat applications
Quick text generation and completion
Lightweight data extraction
Auto-tagging and categorization

Model Info

Provider: OpenAI
Version: 2026-03
Category: LLM
Price: 1 credit

API Endpoint

POST /llm/openai/v1/chat/completions