
OpenAI o3-mini

Efficient reasoning model that delivers strong performance at lower cost. Ideal for tasks requiring reasoning without the overhead of larger models.

2 credits
per 1K tokens (avg)
Efficient reasoning capabilities
200K context window
100K max output tokens
Cost-effective
Fast inference


Model Specifications

Context Window: 200K tokens
Max Output: 100K tokens
Training Cutoff: 2024-10
Compatible SDK: OpenAI

Capabilities

Vision
Function Calling
Streaming
JSON Mode
System Prompt

Token Pricing (per 1M tokens)

| Token Type | Credits | USD Equivalent |
|---|---|---|
| Input Tokens | 2,200 | $2.20 |
| Output Tokens | 8,800 | $8.80 |

* 1 credit ≈ $0.001 (actual charges may vary based on usage)
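As a rough sanity check, the credit cost of a request can be estimated from the table above. This is a minimal sketch using only the per-1M-token rates and the 1 credit ≈ $0.001 conversion published on this page; actual billing may differ.

```python
# Estimate o3-mini request cost from the published per-1M-token rates.
INPUT_CREDITS_PER_1M = 2_200    # credits per 1M input tokens (from the table above)
OUTPUT_CREDITS_PER_1M = 8_800   # credits per 1M output tokens
USD_PER_CREDIT = 0.001          # 1 credit ~= $0.001

def estimate_cost(input_tokens: int, output_tokens: int) -> tuple:
    """Return (credits, usd) for a request of the given size."""
    credits = (input_tokens * INPUT_CREDITS_PER_1M
               + output_tokens * OUTPUT_CREDITS_PER_1M) / 1_000_000
    return credits, credits * USD_PER_CREDIT

credits, usd = estimate_cost(10_000, 2_000)
print(f"{credits:.2f} credits (~${usd:.4f})")  # 39.60 credits (~$0.0396)
```

Note that output tokens include the model's hidden reasoning tokens, so actual output counts for a reasoning model can be larger than the visible answer.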

Quick Start

curl -X POST "https://api.core.today/llm/openai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdt_your_api_key" \
  -d '{
  "model": "o3-mini",
  "messages": [
    {
      "role": "user",
      "content": "If all roses are flowers and some flowers fade quickly, can we conclude that some roses fade quickly? Explain your reasoning step by step."
    }
  ],
  "max_completion_tokens": 8000,
  "reasoning_effort": "medium"
}'
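The same request can be issued from Python. This sketch uses only the standard library and the endpoint, headers, and body shown in the curl example above; `cdt_your_api_key` is a placeholder, and the send step is commented out.

```python
import json
import urllib.request

# Build the same request as the curl Quick Start above.
payload = {
    "model": "o3-mini",
    "messages": [
        {
            "role": "user",
            "content": "If all roses are flowers and some flowers fade quickly, "
                       "can we conclude that some roses fade quickly?",
        }
    ],
    "max_completion_tokens": 8000,
    "reasoning_effort": "medium",
}
req = urllib.request.Request(
    "https://api.core.today/llm/openai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer cdt_your_api_key",  # placeholder key
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment to actually send the request
# print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible (see "Compatible SDK" above), the official OpenAI SDK should also work when pointed at this base URL.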

Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| messages | array | Yes | - | Array of message objects |
| max_completion_tokens | integer | No | - | Maximum tokens for the completion |
| reasoning_effort | string | No | - | Reasoning depth level: `minimal`, `low`, `medium`, or `high` |
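Since `reasoning_effort` takes only the four values listed above, a small client-side guard can catch typos before a request is sent. This is a local convenience sketch, not part of the API, which performs its own validation server-side.

```python
# Client-side assembly and validation of the parameters listed above.
REASONING_EFFORT_LEVELS = ("minimal", "low", "medium", "high")

def build_params(messages, max_completion_tokens=None, reasoning_effort=None):
    """Assemble a request body, rejecting obviously invalid values early."""
    if not isinstance(messages, list) or not messages:
        raise ValueError("messages must be a non-empty array of message objects")
    params = {"model": "o3-mini", "messages": messages}
    if max_completion_tokens is not None:
        params["max_completion_tokens"] = int(max_completion_tokens)
    if reasoning_effort is not None:
        if reasoning_effort not in REASONING_EFFORT_LEVELS:
            raise ValueError(
                f"reasoning_effort must be one of {REASONING_EFFORT_LEVELS}"
            )
        params["reasoning_effort"] = reasoning_effort
    return params
```

For example, `build_params([{"role": "user", "content": "hi"}], reasoning_effort="max")` raises a `ValueError` locally instead of costing a round trip to the API.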

Examples

Logical Reasoning

Solve a logical reasoning problem

curl -X POST "https://api.core.today/llm/openai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer cdt_your_api_key" \
  -d '{
  "model": "o3-mini",
  "messages": [
    {
      "role": "user",
      "content": "If all roses are flowers and some flowers fade quickly, can we conclude that some roses fade quickly? Explain your reasoning step by step."
    }
  ],
  "max_completion_tokens": 8000,
  "reasoning_effort": "medium"
}'

Tips & Best Practices

1. Use `reasoning_effort` to control the speed vs. depth tradeoff
2. Best for tasks requiring step-by-step reasoning
3. More cost-effective than o3 for simpler reasoning tasks
4. Use `max_completion_tokens` instead of `max_tokens`
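Tip 4 matters mostly when porting code written for older chat models. A tiny shim (a hypothetical helper, not part of any SDK) can translate the legacy parameter:

```python
def adapt_legacy_params(params: dict) -> dict:
    """Rename the legacy max_tokens key to max_completion_tokens,
    which reasoning models expect instead."""
    params = dict(params)  # don't mutate the caller's dict
    if "max_tokens" in params and "max_completion_tokens" not in params:
        params["max_completion_tokens"] = params.pop("max_tokens")
    return params

print(adapt_legacy_params({"model": "o3-mini", "max_tokens": 4000}))
# {'model': 'o3-mini', 'max_completion_tokens': 4000}
```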

Use Cases

Code generation and debugging
Mathematical reasoning
Logical analysis
Data processing
Educational tools

Model Info

Provider: OpenAI
Version: 2025
Category: LLM
Price: 2 credits

API Endpoint

POST /llm/openai/v1/chat/completions