Default

Get started with BitRouter Cloud — no API keys to manage

BitRouter Cloud is the fastest way to access 200+ LLMs for your agent runtime. No provider API keys to manage — BitRouter handles provider access for you. Pay per token via your on-chain wallet.

Install & Launch

Install the CLI with any one of the following:

curl -fsSL https://install.bitrouter.ai | sh
npm install -g bitrouter
brew install bitrouter/tap/bitrouter

Then start the proxy:

bitrouter

On first launch with no configuration, BitRouter starts the interactive setup wizard.

Choose Your Provider

The wizard presents two options:

? Choose a provider mode:
  > Default (BitRouter Cloud)
    Bring Your Own Key

Select Default and press Enter. This configures BitRouter to route all requests through BitRouter Cloud.

Choose Bring Your Own Key if you want to use your own OpenAI, Anthropic, or Google API keys instead. See the Manual guide for that path.

Create Your Wallet

The wizard generates a local Ed25519 keypair — your wallet is your identity. No browser signup, no vendor approval.

bitrouter account --generate-key

This creates a master keypair stored locally at ~/.bitrouter/keys/ and displays your Solana and EVM addresses. You control your keys — BitRouter never holds them.

Learn more about wallet-based identity and multi-chain support in the Agentic Identity & Payment guide.

Fund Your Wallet

Deposit stablecoins to your embedded wallet to fund agent operations.

bitrouter swig create-wallet

Supported Networks:

  • Solana (USDC)
  • Tempo (USDC)

Fiat funding via Stripe is coming soon alongside the human console.

Authorize Your Agent

Derive an agent authority to let your agent spend from your wallet within boundaries you define, enforced on-chain:

bitrouter swig derive-agent \
  --label "my-coding-agent" \
  --per-tx-cap 5000000 \
  --cumulative-cap 50000000 \
  --expires-at 1735689600
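
The cap and expiry flags take raw integers. As an illustration — and assuming the caps are denominated in USDC base units (6 decimals), which the Agentic Identity & Payment guide should confirm — the values above can be computed like this:

```python
from datetime import datetime, timezone

# ASSUMPTION: caps are USDC base units (6 decimals), so $5.00 -> 5_000_000.
def usdc_to_base_units(dollars: float) -> int:
    return int(round(dollars * 1_000_000))

# --expires-at takes a Unix timestamp; 1735689600 is 2025-01-01 00:00:00 UTC.
expires_at = int(datetime(2025, 1, 1, tzinfo=timezone.utc).timestamp())

print(usdc_to_base_units(5), usdc_to_base_units(50), expires_at)
```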

Learn more about agent authorities, permission parameters, and advanced configurations in the Agentic Identity & Payment guide.

Verify

The wizard writes your config, starts the API server, and runs a health check:

✓ Configuration saved to ~/.bitrouter/bitrouter.yaml
✓ API server listening on 127.0.0.1:8787
✓ Health check passed

You can re-run the wizard at any time with bitrouter init.
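
If you want to script the same liveness check from your own tooling, a minimal sketch (the /health path is an assumption; substitute whatever route your install actually exposes):

```python
import urllib.request

# Minimal liveness probe for the local API server started by the wizard.
# NOTE: "/health" is an assumed path, not a documented one.
def check_health(base="http://127.0.0.1:8787"):
    try:
        with urllib.request.urlopen(f"{base}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False
```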


Making Your First Request

The examples below use the chat completions endpoint. Support for tools (MCP) and agents (ACP) is coming soon.

Basic Example

curl -X POST "https://api.bitrouter.ai/v1/chat/completions" \
  -H "Authorization: Bearer <your-jwt>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "moonshotai:kimi-k2.5",
    "messages": [
      {"role": "user", "content": "Hello! What can you help me with?"}
    ],
    "max_tokens": 200
  }'

Generate a JWT with bitrouter keygen --exp 30d --scope api --name default. See Agentic Identity & Payment for details.

Code Examples

JavaScript:

const response = await fetch('https://api.bitrouter.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <your-jwt>',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'anthropic:claude-sonnet-4',
    messages: [
      { role: 'user', content: 'Explain how BitRouter works' }
    ],
    max_tokens: 300
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);

Python:

import requests

response = requests.post(
    'https://api.bitrouter.ai/v1/chat/completions',
    headers={
        'Authorization': 'Bearer <your-jwt>',
        'Content-Type': 'application/json'
    },
    json={
        'model': 'moonshotai:kimi-k2.5',
        'messages': [
            {'role': 'user', 'content': 'What is quantum computing?'}
        ],
        'max_tokens': 300
    }
)

result = response.json()
print(result['choices'][0]['message']['content'])

Streaming

For real-time, token-by-token responses, add stream: true to your request. Responses arrive in the Server-Sent Events (SSE) format:

JavaScript:

const response = await fetch('https://api.bitrouter.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <your-jwt>',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'anthropic:claude-sonnet-4',
    messages: [{ role: 'user', content: 'Tell me a short story' }],
    stream: true,
    max_tokens: 500
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Decode incrementally so UTF-8 characters split across chunks survive
  const lines = decoder.decode(value, { stream: true }).split('\n').filter(line => line.trim());
  for (const line of lines) {
    if (line.startsWith('data: ') && line.slice(6) !== '[DONE]') {
      const content = JSON.parse(line.slice(6)).choices[0]?.delta?.content;
      if (content) process.stdout.write(content);
    }
  }
}

Python:

import requests
import json

response = requests.post(
    'https://api.bitrouter.ai/v1/chat/completions',
    headers={
        'Authorization': 'Bearer <your-jwt>',
        'Content-Type': 'application/json'
    },
    json={
        'model': 'moonshotai:kimi-k2.5',
        'messages': [{'role': 'user', 'content': 'Tell me a story'}],
        'stream': True,
        'max_tokens': 500
    },
    stream=True
)

for line in response.iter_lines():
    if line:
        line = line.decode('utf-8')
        if line.startswith('data: '):
            data = line[6:]
            if data == '[DONE]':
                break
            content = json.loads(data)['choices'][0]['delta'].get('content', '')
            if content:
                print(content, end='', flush=True)

Each SSE message contains a JSON chunk with choices[0].delta.content. The stream ends with data: [DONE].

BitRouter also supports multi-modal APIs. See the API Reference for details.

Available Models

BitRouter offers 200+ LLMs for your agent runtime. Use an intelligent-routing model ID (e.g. moonshotai:kimi-k2.5) to let BitRouter route each request to the best available provider, or specify a provider directly.

For the full model list, discovery API, and routing options, see the API Reference.

Request Parameters

Standard OpenAI-compatible parameters are supported:

{
  "model": "moonshotai:kimi-k2.5",       // Required: Model identifier
  "messages": [                            // Required: Conversation messages
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "stream": false,                         // Optional: Enable streaming (default: false)
  "max_tokens": 500,                       // Optional: Maximum tokens to generate
  "temperature": 0.7,                      // Optional: Sampling temperature (0.0-2.0)
  "top_p": 1.0,                            // Optional: Nucleus sampling
  "frequency_penalty": 0.0,                // Optional: Frequency penalty
  "presence_penalty": 0.0                  // Optional: Presence penalty
}
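
A small sketch of building such a payload in Python, dropping unset optional fields so server-side defaults apply (the validation range mirrors the comments above):

```python
# Build a chat-completions request body; None-valued optionals are omitted.
def build_request(model, messages, *, stream=False, max_tokens=None,
                  temperature=None, top_p=None,
                  frequency_penalty=None, presence_penalty=None):
    if temperature is not None and not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    body = {"model": model, "messages": messages, "stream": stream,
            "max_tokens": max_tokens, "temperature": temperature,
            "top_p": top_p, "frequency_penalty": frequency_penalty,
            "presence_penalty": presence_penalty}
    return {k: v for k, v in body.items() if v is not None}

print(build_request("moonshotai:kimi-k2.5",
                    [{"role": "user", "content": "Hello!"}],
                    temperature=0.7))
```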

Authentication & Security

BitRouter uses self-signed JWTs for authentication — no central auth server. Generate and manage tokens via the CLI:

# Generate a JWT for API access
bitrouter keygen --exp 30d --scope api --name default

# List saved tokens
bitrouter keys --list

Store keys securely using environment variables. Never commit them to version control or expose them in client-side code. If a key is compromised, revoke it and generate a replacement.
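
In Python, for example, you might load the token from an environment variable rather than hard-coding it (BITROUTER_JWT is an illustrative name, not an official one):

```python
import os

# Read the JWT from the environment; never hard-code tokens in source.
# BITROUTER_JWT is a made-up variable name for this example.
def auth_headers():
    token = os.environ.get("BITROUTER_JWT")
    if not token:
        raise RuntimeError("Set BITROUTER_JWT before making API calls")
    return {"Authorization": f"Bearer {token}"}
```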

For full auth management details including budget controls, model allowlists, and token scopes, see the Agentic Identity & Payment guide.

Usage & Billing

How Costs Work

Billed per token (input + output). Each provider sets their own per-token pricing. Use the quote endpoints to get exact price estimates before committing. See the API Reference for details.
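
As a back-of-the-envelope sketch (the per-million-token prices below are invented for illustration; always use the quote endpoints for real numbers):

```python
# Estimate request cost in USD from token counts and per-million-token prices.
# The default prices here are MADE UP; fetch real ones from the quote endpoints.
def estimate_cost(input_tokens, output_tokens,
                  in_price_per_m=0.50, out_price_per_m=1.50):
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

print(f"${estimate_cost(10_000, 2_000):.6f}")
```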

Monitor Your Usage

Track your spending and usage via the BitRouter TUI:

bitrouter

The TUI shows your current balance, usage and transaction history, and spending by model.

A human console for web-based usage monitoring is coming soon.

Funding Your Account

Fund your embedded wallet via stablecoin transfer.

Supported Networks:

  • Solana (USDC)
  • Tempo (USDC)

Fiat funding via Stripe is coming soon alongside the human console.

Troubleshooting

If you encounter any issues, please reach out to us on Discord or Twitter/X. Our team is happy to help you debug and resolve any problems.
