Agent API

Your agent, behind a clean API.

One endpoint. JSON in, JSON out. Streaming or not. Build it into your app, your Zap, your Raycast extension, your weird side project — three lines of code and you're live.

From hobby projects to production stacks
$ curl https://api.pickaxe.co/agents/tunnel-permits \
  -H "Authorization: Bearer …" \
  -d '{
    "message": "Extending burrow 14 ft east into a neighbouring root system. Permit?",
    "burrow_id": "BR-7702"
  }'

Response (200 OK, 414 ms):

{
  "reply": "Two permits. §4-B Extension (30 acorns, 14d) + Root Easement.",
  "warnings": [
    "40% of first-submissions rejected for missing soil data"
  ],
  "actions": [
    { "permits.draft": "ok" },
    { "mole.queue": "ok" }
  ],
  "usage": { "credits": 0.83 }
}
Auth and example code, ready to copy

Every agent gets its own API key and a working code sample the moment you create it. cURL, Node, Python — pick your stack and ship.
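A minimal Python sketch of that three-line call, using only the standard library. The endpoint shape and body fields (`message`, `burrow_id`) come from the cURL example above; the key format shown is a placeholder, and sending is one extra line.

```python
import json
import urllib.request

API_KEY = "px_live_..."  # placeholder; real keys come from your agent's dashboard

def build_request(agent_slug: str, message: str, **extra) -> urllib.request.Request:
    """Assemble the POST request for one agent endpoint."""
    body = {"message": message, **extra}
    return urllib.request.Request(
        f"https://api.pickaxe.co/agents/{agent_slug}",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("tunnel-permits", "Extending burrow 14 ft east. Permit?",
                    burrow_id="BR-7702")
# one more line actually sends it: urllib.request.urlopen(req)
print(req.full_url)
```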

Request body that's actually documented

Every parameter, every option, every response field — documented inline next to the endpoint. No hunting through six different doc sites to find the streaming flag.

All the agent. None of the plumbing.

When you call OpenAI directly, you also build prompt management, tool orchestration, conversation memory, and a cost dashboard. The Pickaxe Agent API hands you all of that as one endpoint — your agent, ready to integrate.

One endpoint per agent

No 'configure a model, paste a prompt, wire up tools' ceremony. Your agent already exists in Pickaxe — the API just exposes it.

Streaming and non-streaming on the same route

Add ?stream=true and you're done. Same response shape, same auth, same conversation context. No second endpoint to maintain.
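A hedged sketch of consuming the streamed variant. The `?stream=true` flag comes from the docs above; the server-sent-events framing (`data:` lines carrying JSON deltas, a `[DONE]` sentinel) is an assumption about the wire format, shown here against a sample stream.

```python
import json

def iter_deltas(sse_lines):
    """Yield the text delta from each `data:` line of an SSE-style stream."""
    for line in sse_lines:
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload == "[DONE]":
                return  # end-of-stream sentinel (assumed)
            yield json.loads(payload).get("delta", "")

sample = [
    'data: {"delta": "Two permits. "}',
    'data: {"delta": "Extension + Root Easement."}',
    "data: [DONE]",
]
reply = "".join(iter_deltas(sample))
print(reply)
```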

Conversation IDs, server-side

Pass a conversation_id and the agent remembers — full thread history, tool calls, citations. No client-side memory juggling.
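Threading a conversation is then just reusing one field. The `conversation_id` parameter name comes from the feature description; the id value below is made up for illustration.

```python
from typing import Optional

def turn_payload(message: str, conversation_id: Optional[str] = None) -> dict:
    """Build a request body; include conversation_id on follow-up turns."""
    body = {"message": message}
    if conversation_id is not None:
        body["conversation_id"] = conversation_id
    return body

first = turn_payload("Extending burrow 14 ft east. Permit?")
followup = turn_payload("What soil data do I attach?", conversation_id="conv_8f2a")
print(sorted(followup))
```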

Token usage in every response

Each response includes a usage block — credits consumed, model used, latency. Predictable per-call cost, billed against Pickaxe credits.
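Because every response carries that block, cost tracking is a fold over `usage.credits`. The `credits` field appears in the example response above; the sample values here are made up.

```python
# Sum per-call credits across a batch of responses for a spend total.
responses = [
    {"reply": "…", "usage": {"credits": 0.83}},
    {"reply": "…", "usage": {"credits": 0.41}},
]
total = sum(r["usage"]["credits"] for r in responses)
print(round(total, 2))
```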

Plus everything else you need to build and scale

Powerful features designed to help you create, deploy, and monetize AI tools that deliver real results.

cURL, Node, Python, Ruby, Go

Snippets in every language you'd actually ship. Copy from the docs, paste into your editor, swap your API key, run.

Webhooks for long-running agents

When an agent calls tools and takes 30+ seconds, fire-and-forget with a webhook callback instead of holding a connection.
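A sketch of the fire-and-forget shape under stated assumptions: the `webhook_url` parameter name and the callback body are guesses, not documented names. The pattern itself (submit now, receive the result as a POST to your endpoint later) is what the feature describes.

```python
def long_running_payload(message: str, callback: str) -> dict:
    """Request body for a run that will finish via webhook (field name assumed)."""
    return {"message": message, "webhook_url": callback}

def handle_callback(body: dict) -> str:
    """What your webhook endpoint pulls out of the POSTed result."""
    return body.get("reply", "")

payload = long_running_payload("Survey all 40 burrows",
                               "https://example.com/hooks/pickaxe")
result = handle_callback({"reply": "Survey queued: 40 burrows",
                          "usage": {"credits": 2.1}})
print(result)
```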

Knowledge base by default

Every API call has access to the agent's full knowledge base — docs, URLs, files, integrations. No prompt-engineering on your end.

Actions inside the response

When the agent calls an Action — Linear, Stripe, your own webhook — the result is in the response payload. No extra round-trips.
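Reading those results is a one-liner over the `actions` array shown in the example response above, where each entry maps an action name to its status.

```python
# Flatten the actions array into {action_name: status}.
response = {
    "reply": "Two permits…",
    "actions": [{"permits.draft": "ok"}, {"mole.queue": "ok"}],
}
statuses = {name: status
            for item in response["actions"]
            for name, status in item.items()}
print(statuses["permits.draft"])
```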

Per-agent rate limits

Visible in your dashboard, not buried in headers. Rate limits scale with your plan and you can request increases inline.

API keys with scopes

Generate keys per agent, per environment, per integration. Revoke instantly, rotate without downtime, audit by key.

99.9% uptime target

Public status page, redundant model providers, idempotency keys for safe retries. Boring infrastructure on purpose.
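A sketch of the retry pattern those idempotency keys enable. The `Idempotency-Key` header name is an assumption (it is the common convention, not confirmed by these docs); the point is that a retried call reuses the same key, so it cannot double-run.

```python
import uuid

def headers_for_attempt(api_key: str, idem_key: str) -> dict:
    """Auth plus idempotency header for one attempt at a logical request."""
    return {"Authorization": f"Bearer {api_key}", "Idempotency-Key": idem_key}

key = str(uuid.uuid4())                            # generate once per logical request
first_try = headers_for_attempt("px_live_...", key)
retry = headers_for_attempt("px_live_...", key)    # reuse the key on timeout/5xx
print(first_try["Idempotency-Key"] == retry["Idempotency-Key"])
```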

See how others are building with Pickaxe

See how people are building, deploying, and monetizing AI agents with Pickaxe.

See our customers

Ship an agent into anything

Three lines of code, your API key, and you're calling a real production agent — knowledge, memory, tools, cost tracking included.

Frequently Asked Questions

What is the Pickaxe Agent API?

The Pickaxe Agent API is a programmatic interface to any agent you build on Pickaxe. Instead of clicking through a chat UI or embedding a widget, you call a single HTTP endpoint with a message — and you get back the agent's reply, conversation memory, tool results, citations, and token usage in one clean JSON response.

It's designed for builders who want all the work an agent does — prompt management, knowledge base retrieval, tool orchestration, conversation memory, cost tracking — without writing the orchestration layer themselves. You build the agent visually in Pickaxe, then ship it into your app, your iOS extension, your Zapier flow, your Raycast plugin, your custom Slack app — three lines of code and you're calling a real production agent.

The API supports streaming and non-streaming on the same endpoint, server-side conversation IDs (so the agent remembers across calls), webhooks for long-running runs, idempotency keys for safe retries, and per-agent rate limits visible in your dashboard. Every response includes a usage block so per-call cost is predictable and forecastable. Pricing flows through Pickaxe credits — no BYOK, no surprise model-provider invoices.

Use the Agent API for custom mobile and web apps, internal tools, automation flows in Zapier or Make, custom Slack and Discord integrations, on-device voice assistants, and any case where you need an agent endpoint behind a programmatic call. Pair with Pickaxe's WhatsApp, email, Slack, embed, and Agent Pages deployments — all running off the same agent brain — for full omni-channel coverage.