Building in public - why we wrote documentation before code at Pickaxe

When we decided to rebuild Pickaxe from scratch, the first thing we did surprised a lot of people.

We didn't open a code editor. We didn't scaffold a project. We didn't start arguing about frameworks or databases.

We opened a blank document and started writing help docs.

The Counterintuitive First Step

Every instinct in a startup tells you to ship fast. Move quickly. Get something in front of users. We've lived that way for years. And it's good advice, most of the time.

But this time was different. We weren't adding a feature to an existing product. We were starting over. Building something new on top of everything we'd learned from running Pickaxe. And when you're starting from zero, the most dangerous thing you can do is start coding before you know what you're actually building.

So we wrote the documentation first. Not as an afterthought. Not as a parallel task. As the very first deliverable.

Here's why.

Documentation Forces You to Think

Writing help docs before the product exists sounds backwards. But it turns out to be one of the most powerful product design exercises you can do.

When you write a help article about a feature that doesn't exist yet, you're forced to answer hard questions immediately. How does a user set this up? What happens when they click this button? What if they make a mistake? What does the error message say?

You can't handwave through those details in documentation the way you can in a product spec or a Figma mockup. A help doc has to describe real steps that a real person would follow. If the steps don't make sense, you know your product design doesn't make sense either.

We found bugs in our product thinking before we wrote a single line of code. Entire workflows that seemed elegant in the abstract fell apart the moment we tried to write clear instructions for them.

That's the point. It's better to discover those problems in a Google Doc than in a codebase.

It Gives You a Complete Mental Model

One of the biggest risks when building something new is losing the forest for the trees. You start with a grand vision, but the moment you begin coding, you get pulled into the details. Database schemas. API endpoints. Edge cases in authentication flows. Two weeks in, you've built a beautiful auth system but lost track of what the product actually does.

Writing documentation first gave us something invaluable: a complete mental model of the entire product before anyone touched code.

We wrote docs for every major feature area. How agents get created. How builders configure deployments. How end users customize their experience. How billing works. How access control works. Every doc forced us to think through not just what the feature does, but how it connects to everything else.

By the time we finished, we had a clear picture of the whole system. Not a vague roadmap. Not a slide deck with boxes and arrows. A real, detailed description of how every piece works together, written in plain language that anyone could understand.

That clarity has been worth more than any technical design document we've ever written.

It Changes Everything About Agentic Coding

Here's the part that surprised us the most.

We use AI heavily in our development workflow. Claude Code, agentic coding sessions, automated code generation. The whole stack. And here's what we discovered: AI coding tools are dramatically more effective when they have documentation to work from.

Think about what happens when you ask an AI to build a feature from a vague description. It makes assumptions. It guesses at edge cases. It picks patterns that might not match your architecture. You spend half your time correcting its assumptions instead of building.

Now think about what happens when you hand an AI agent a detailed help doc that describes exactly how the feature should work from the user's perspective. Suddenly it has real constraints. It knows what the user sees. It knows what happens in each step. It knows what error states exist. The code it produces is dramatically closer to what you actually want.

Our documentation became the single best prompt engineering we've ever done. Not because we wrote it as prompts, but because good documentation is a precise, unambiguous specification written in natural language. That's exactly what AI coding tools need.

We've been running an experiment where we hand Claude Code a help doc and ask it to implement the feature the doc describes. The results have been remarkable. Less back-and-forth. Fewer misunderstandings. Code that actually matches the intended user experience on the first pass.
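The mechanics of that experiment are simple enough to sketch. Here's a minimal, illustrative version of the kind of helper we mean: it wraps a help article in an implementation prompt so an agentic tool treats the doc as the spec. The function name, prompt wording, and file layout are our own invention for this post, not a library API:

```python
from pathlib import Path


def doc_to_prompt(doc_path: str, extra_constraints: str = "") -> str:
    """Wrap a user-facing help doc in an implementation prompt.

    The doc itself carries the spec: the steps a user follows,
    the UI they see, and the error states that must exist.
    """
    doc = Path(doc_path).read_text(encoding="utf-8")
    header = (
        "Implement the feature described in the help article below.\n"
        "Treat every user-visible step, label, and error message in\n"
        "the article as a requirement. Ask before deviating from it.\n"
    )
    if extra_constraints:
        header += extra_constraints + "\n"
    return header + "\n--- HELP ARTICLE ---\n" + doc
```

From there it's one shell command to hand the assembled prompt to something like Claude Code's non-interactive mode (`claude -p "..."`), with the doc doing the heavy lifting instead of a hand-written description.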

This has fundamentally changed how we think about the relationship between documentation and development. They're not separate phases. Documentation is the input. Code is the output.

The Amazon Approach, Adapted

This idea isn't entirely new. Amazon has famously used "working backwards" for years, writing press releases for products before building them. The press release forces you to articulate the customer benefit clearly before you invest in implementation.

We took that concept and pushed it further. Instead of a press release (which is still fairly high-level), we wrote the actual user-facing documentation. The help articles. The step-by-step guides. The troubleshooting pages.

A press release tells you what you're building and why. Documentation tells you how it works in practice. And the "how" is where most products succeed or fail.

What We Actually Wrote

To be specific, here's what our documentation-first process looked like:

  • Getting Started guide. The first thing a new user would read. This forced us to nail the onboarding flow before building it.
  • Feature-by-feature help articles. Each major capability got its own doc. Agent creation. Deployment options. Access control. Monetization. Knowledge bases.
  • API documentation. What endpoints would exist. What parameters they'd accept. What they'd return. This became our technical spec.
  • Troubleshooting guides. What could go wrong, and how users would fix it. Writing these upfront forced us to think about error handling as a first-class concern, not an afterthought.
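The API docs are where doc-first shades into spec-first. As a sketch of the idea (the endpoint, field names, and types below are invented for illustration, not Pickaxe's actual API), a documented endpoint can be captured as plain data and used to check payloads against the doc before any server exists:

```python
# A documented endpoint captured as data. Everything here is
# illustrative, not a real Pickaxe endpoint.
AGENT_CREATE_SPEC = {
    "method": "POST",
    "path": "/v1/agents",
    "required": {"name": str, "model": str},
    "optional": {"temperature": float},
}


def validate_request(spec: dict, payload: dict) -> list[str]:
    """Return a list of problems; empty means the payload matches the doc."""
    problems = []
    for field, typ in spec["required"].items():
        if field not in payload:
            problems.append(f"missing required field: {field}")
        elif not isinstance(payload[field], typ):
            problems.append(f"{field} should be {typ.__name__}")
    allowed = set(spec["required"]) | set(spec["optional"])
    for field in payload:
        if field not in allowed:
            problems.append(f"undocumented field: {field}")
    return problems
```

The point isn't the fifteen lines of validation. It's that once the doc is written, this kind of check falls out of it almost for free, and the doc stays the source of truth.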

The whole process took about three days. Three days that saved us months of rework, confusion, and misaligned assumptions.

The Tradeoffs

Honest answer: this approach isn't free.

Spending the first three days working on documentation instead of building felt like lost momentum. Especially in AI, where every day brings new capabilities and the pressure to ship is constant. Your brain is screaming at you to just start coding. To make something real. Sitting in a Google Doc writing help articles for a product that doesn't exist yet feels almost irresponsible when competitors are shipping every week.

And the docs aren't perfect. Some of them will change as we build and discover things we didn't anticipate. That's fine. The goal was never to write permanent documentation. It was to think through the product completely before building it.

The other risk is over-specifying. If you write documentation that's too rigid, you can constrain yourself unnecessarily. We tried to write docs that described the user experience clearly while leaving room for implementation flexibility. The help doc says "click the Deploy button and choose your platform" without specifying whether that's a dropdown, a modal, or a separate page. User intent is fixed. Implementation details stay flexible.

What We'd Tell Other Teams

If you're building something from scratch, especially if you're using AI coding tools in your workflow, try writing the documentation first.

Not a product spec. Not a PRD. Not a Notion page with bullet points. Write the actual help articles your users would read. Write them as if the product already exists and someone needs to learn how to use it.

You'll discover design problems early. You'll build a shared mental model across your team. And if you're using agentic coding workflows, you'll have the best possible input for your AI tools: clear, specific, user-centered descriptions of exactly what needs to be built.

For us, it turned documentation from the boring thing you do after launch into the most valuable thing you do before it.

We're still early in this rebuild. Plenty could still go wrong. But we're building on a foundation of clarity that we've never had before. And that foundation started with a blank document, not a blank codebase.

If you want to follow along as we build the next version of Pickaxe, we'll keep sharing what we learn, what breaks, and what surprises us. This is the second post in our building in public series. More coming soon.
