API Reference
NeuronGate is fully compatible with the OpenAI API format. Point any OpenAI SDK at our base URL and you are ready to go. Pay with crypto, access every model.
Quick Start
Get an API key
Create an account and generate a key from the dashboard. Keys start with sk-ng-.
Fund your account
Deposit USDC, USDT, or ETH. Your balance is debited per-request at listed model prices.
Make your first request
```bash
curl https://api.neurongate.io/v1/chat/completions \
  -H "Authorization: Bearer sk-ng-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello, NeuronGate!"}
    ]
  }'
```

Same as OpenAI. NeuronGate implements the OpenAI-compatible chat completions API. If your code works with OpenAI, it works with NeuronGate. Just change the base URL and API key.
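The same request can be assembled in Python with just the standard library. This is a sketch mirroring the curl example above; the endpoint and key format come from this reference, and the payload follows the OpenAI chat completions schema:

```python
import json
import urllib.request

def build_request(api_key: str, model: str, content: str) -> urllib.request.Request:
    """Build a chat completions request identical to the curl example."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }
    return urllib.request.Request(
        "https://api.neurongate.io/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-ng-your-api-key", "openai/gpt-4o", "Hello, NeuronGate!")
# urllib.request.urlopen(req) would send it; omitted here to avoid a live call.
```

In practice you would use the OpenAI SDK (see SDKs & Libraries below); this only shows that nothing beyond standard HTTP and JSON is involved.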
Authentication
All API requests require authentication. NeuronGate API keys follow the format sk-ng-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx. You can pass the key in two ways:
Authorization header (recommended)
```
Authorization: Bearer sk-ng-your-api-key
```

X-API-Key header

```
X-API-Key: sk-ng-your-api-key
```

```bash
curl https://api.neurongate.io/v1/chat/completions \
  -H "Authorization: Bearer sk-ng-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o", "messages": [{"role": "user", "content": "Hi"}]}'
```

Chat Completions
/v1/chat/completions

Creates a chat completion. Compatible with the OpenAI chat completions API.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier, e.g. "openai/gpt-4o", "anthropic/claude-sonnet-4", "meta-llama/llama-3-70b". |
| messages | array | Yes | Array of message objects. Each has a "role" (system, user, or assistant) and "content" (string). |
| stream | boolean | No | If true, returns a stream of server-sent events. Default: false. |
| temperature | number | No | Sampling temperature between 0 and 2. Higher values produce more random output. Default: 1. |
| max_tokens | integer | No | Maximum number of tokens to generate. If omitted, the model decides. |
| top_p | number | No | Nucleus sampling. An alternative to temperature. Default: 1. |
| stop | string or array | No | Up to 4 sequences where the model will stop generating further tokens. |
| frequency_penalty | number | No | Number between -2 and 2. Positive values penalise repeated tokens. Default: 0. |
| presence_penalty | number | No | Number between -2 and 2. Positive values penalise tokens already present. Default: 0. |
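Putting the optional fields together, a request body exercising the sampling controls might look like the following (the values are illustrative, not recommendations):

```python
import json

# Illustrative request body using the optional fields from the table above.
body = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Name three prime numbers."},
    ],
    "temperature": 0.2,        # low temperature -> more deterministic output
    "max_tokens": 50,          # cap the completion length
    "top_p": 1,                # leave nucleus sampling at its default
    "stop": ["\n\n"],          # stop at the first blank line (up to 4 sequences)
    "frequency_penalty": 0.5,  # discourage verbatim repetition
    "presence_penalty": 0,
}

# The API expects this serialised as JSON in the request body.
encoded = json.dumps(body)
```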
Response
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1713000000,
  "model": "openai/gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}
```

Streaming
Set stream: true to receive server-sent events. Each event contains a delta object:
```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1713000000,"model":"openai/gpt-4o","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1713000000,"model":"openai/gpt-4o","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1713000000,"model":"openai/gpt-4o","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```

Examples
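The chunks above can be reassembled client-side by concatenating each delta's content until the `[DONE]` sentinel. A minimal sketch of that logic (real clients should also handle SSE comment lines, keep-alives, and network errors — the SDKs below do this for you):

```python
import json

def accumulate_sse(lines):
    """Collect delta content from a chat.completion.chunk event stream."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue                      # skip blank/keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break                         # sentinel: stream finished
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)

# The three chunks from the example stream, abbreviated to their choices field.
events = [
    'data: {"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    "data: [DONE]",
]
```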
```bash
curl https://api.neurongate.io/v1/chat/completions \
  -H "Authorization: Bearer sk-ng-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum computing in one sentence."}
    ],
    "temperature": 0.7,
    "max_tokens": 100
  }'
```

List Models
/v1/models

Returns a list of all available models, including pricing and context window information.
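The pricing fields can be combined with a completion's usage block to estimate what a request cost. This reference does not state the pricing unit, so the sketch below assumes prices are USD per 1,000 tokens — verify that assumption against the dashboard before relying on it:

```python
from decimal import Decimal

def estimate_cost(pricing: dict, usage: dict) -> Decimal:
    """Estimate request cost, ASSUMING pricing is USD per 1K tokens."""
    prompt_rate = Decimal(pricing["prompt"])
    completion_rate = Decimal(pricing["completion"])
    return (
        prompt_rate * usage["prompt_tokens"]
        + completion_rate * usage["completion_tokens"]
    ) / 1000

pricing = {"prompt": "0.0025", "completion": "0.0100"}  # openai/gpt-4o listing
usage = {"prompt_tokens": 12, "completion_tokens": 9}   # from the earlier response
cost = estimate_cost(pricing, usage)
```

Decimal (rather than float) avoids rounding surprises when summing many small per-request charges.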
Response
```json
{
  "object": "list",
  "data": [
    {
      "id": "openai/gpt-4o",
      "object": "model",
      "created": 1713000000,
      "owned_by": "openai",
      "pricing": {
        "prompt": "0.0025",
        "completion": "0.0100"
      },
      "context_length": 128000
    },
    {
      "id": "anthropic/claude-sonnet-4",
      "object": "model",
      "created": 1713000000,
      "owned_by": "anthropic",
      "pricing": {
        "prompt": "0.0030",
        "completion": "0.0150"
      },
      "context_length": 200000
    }
  ]
}
```

Example
```bash
curl https://api.neurongate.io/v1/models \
  -H "Authorization: Bearer sk-ng-your-api-key"
```

Rate Limits
Rate limits are applied per API key. Limits depend on your account tier and can be viewed in the dashboard.
| Limit Type | Default | Description |
|---|---|---|
| Requests/min | 60 | Maximum requests per minute per API key |
| Daily spend | $100 | Maximum spend per day. Configurable in the dashboard. |
| Monthly spend | $1,000 | Maximum spend per calendar month. Configurable in the dashboard. |
When a rate limit is exceeded, the API returns a 429 status with a Retry-After header indicating when you can retry.
```json
{
  "error": {
    "type": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Please retry after 2 seconds.",
    "code": 429
  }
}
```

Error Codes
NeuronGate uses standard HTTP status codes. Errors return a consistent JSON body.
| Code | Type | Description |
|---|---|---|
| 400 | bad_request | The request body is malformed or missing required fields. |
| 401 | unauthorized | Invalid or missing API key. |
| 402 | insufficient_funds | Your account balance is too low to process this request. |
| 404 | not_found | The requested model or resource does not exist. |
| 429 | rate_limit_exceeded | Too many requests or spend limit reached. Check Retry-After header. |
| 500 | internal_error | An unexpected error occurred on our side. Please retry. |
| 502 | upstream_error | The upstream model provider returned an error. Try again or use a different model. |
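The table suggests a simple client-side policy: retry 429, 500, and 502 (honouring Retry-After when present), and fail fast on everything else. A hedged sketch using only the standard library (`request_with_retries` takes a `urllib.request.Request`; the backoff constants are illustrative):

```python
import json
import time
import urllib.error
import urllib.request

# Status codes the table above marks as worth retrying.
RETRYABLE = {429, 500, 502}

def is_retryable(status: int) -> bool:
    return status in RETRYABLE

def request_with_retries(req, max_attempts=3, sleep=time.sleep):
    """Send a request, retrying retryable errors and honouring Retry-After."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if not is_retryable(err.code) or attempt == max_attempts - 1:
                raise  # 400/401/402/404, or out of attempts: give up
            # Prefer the server's Retry-After hint; otherwise back off.
            delay = err.headers.get("Retry-After")
            sleep(float(delay) if delay else 2 ** attempt)
```

Note that 402 (insufficient_funds) is deliberately not retried: retrying cannot succeed until the account is topped up.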
Error Response Format
```json
{
  "error": {
    "type": "unauthorized",
    "message": "Invalid API key. Please check your key and try again.",
    "code": 401
  }
}
```

SDKs & Libraries
NeuronGate is fully compatible with the OpenAI API. You do not need a custom SDK. Just use the official OpenAI library and point it at NeuronGate.
Python
```bash
pip install openai
```

Node.js / TypeScript

```bash
npm install openai
```

Python Setup
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-ng-your-api-key",
    base_url="https://api.neurongate.io/v1",
)

# Use exactly like the OpenAI SDK
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

# Streaming works the same way
stream = client.chat.completions.create(
    model="anthropic/claude-sonnet-4",
    messages=[{"role": "user", "content": "Tell me a story."}],
    stream=True,
)
for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
```

Node.js Setup
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk-ng-your-api-key",
  baseURL: "https://api.neurongate.io/v1",
});

// Use exactly like the OpenAI SDK
const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message.content);

// Streaming works the same way
const stream = await client.chat.completions.create({
  model: "meta-llama/llama-3-70b",
  messages: [{ role: "user", content: "Tell me a story." }],
  stream: true,
});
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
```

Any OpenAI-compatible library works. LangChain, LlamaIndex, Vercel AI SDK, and other frameworks that support a custom base URL are all compatible. Just set the base URL to https://api.neurongate.io/v1 and provide your NeuronGate API key.
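For frameworks that honour the OpenAI SDK's standard environment variables, no code changes are needed at all. The variable names below are the OpenAI SDK's defaults; other frameworks may use their own configuration keys, so check their docs:

```python
import os

# The official OpenAI SDKs read these variables when no explicit
# api_key / base_url is passed; many frameworks built on them do too.
os.environ["OPENAI_API_KEY"] = "sk-ng-your-api-key"
os.environ["OPENAI_BASE_URL"] = "https://api.neurongate.io/v1"

# From here, OpenAI() with no arguments -- or a framework's default
# client -- will route requests through NeuronGate.
```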