Getting Started with ClawPipe

ClawPipe is an intelligent AI pipeline that sits between your app and LLM providers. It cuts costs by 30-50% through automatic routing, caching, and optimization. One SDK, every provider.

Install

npm install clawpipe-ai
pip install clawpipe-ai
go get github.com/finsavvyai/clawpipe-go

Quick Start

TypeScript

import { ClawPipe } from 'clawpipe-ai';

const pipe = new ClawPipe({
  apiKey: 'cp_xxx',
  projectId: 'my-app',
});

const result = await pipe.prompt('Explain recursion', {
  system: 'You are a helpful tutor',
  maxTokens: 2000,
});

console.log(result.text);
console.log(result.meta);
// { boosted: false, cached: false, contextSavings: '42%',
//   route: 'deepseek', model: 'deepseek-chat',
//   estimatedCostUsd: 0.003 }

Python

import asyncio

from clawpipe import ClawPipe

async def main():
    pipe = ClawPipe(
        api_key="cp_xxx",
        project_id="my-app",
    )

    result = await pipe.prompt(
        "Explain recursion",
        system="You are a helpful tutor",
        max_tokens=2000,
    )

    print(result.text)
    print(result.meta)
    # {'boosted': False, 'cached': False,
    #  'context_savings': '42%', 'route': 'deepseek'}

asyncio.run(main())

Go

package main

import (
    "context"
    "fmt"

    clawpipe "github.com/finsavvyai/clawpipe-go"
)

func main() {
    pipe := clawpipe.New(clawpipe.Config{
        APIKey:    "cp_xxx",
        ProjectID: "my-app",
    })

    result, err := pipe.Prompt(context.Background(),
        "Explain recursion",
        clawpipe.WithSystem("You are a helpful tutor"),
        clawpipe.WithMaxTokens(2000),
    )
    if err != nil {
        panic(err)
    }

    fmt.Println(result.Text)
    fmt.Println(result.Meta)
}

The Pipeline

Every request flows through up to eight stages. Each stage is independently toggleable and adds a specific optimization.

Booster → RAG → Pack → Cache → Route → Swarm → Call → Learn
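Because every stage can be toggled independently, a request's path is simply the enabled subset of the stages, in order. A minimal sketch in plain Python (the stage names come from this page; the toggle shape is an illustration, not the SDK's actual config format):

```python
# Conceptual sketch only, not ClawPipe internals: the pipeline as an
# ordered list of independently toggleable stages.
STAGES = ["booster", "rag", "pack", "cache", "route", "swarm", "call", "learn"]

def active_stages(toggles):
    """Return the stages a request flows through, in pipeline order.

    Stages absent from `toggles` default to enabled.
    """
    return [s for s in STAGES if toggles.get(s, True)]

# e.g. disable RAG and Swarm for a simple chat workload:
print(active_stages({"rag": False, "swarm": False}))
# ['booster', 'pack', 'cache', 'route', 'call', 'learn']
```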

What Happens on Each Call

  1. Booster checks if the prompt can be resolved without an LLM (math, dates, JSON, UUIDs).
  2. RAG retrieves relevant documents and prepends them as context.
  3. Packer compresses the context window, removing redundancy (20-60% token savings).
  4. Cache returns a cached result if the same or similar prompt was seen before.
  5. Router picks the cheapest model that meets quality requirements.
  6. Swarm optionally fans out to multiple models for consensus or speed.
  7. Gateway dispatches to the provider with circuit breaker protection.
  8. Learner records the outcome to improve future routing decisions.
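To make the flow concrete, here is a toy sketch of three of the stages above: a Booster that answers pure-arithmetic prompts locally, an exact-match Cache, and a Router that picks the cheapest model above a quality bar. All names, fields, and heuristics here are assumptions for illustration, not ClawPipe's implementation:

```python
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval_arith(node):
    """Safely evaluate a numeric +,-,*,/ expression tree; reject anything else."""
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_arith(node.left), _eval_arith(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("not pure arithmetic")

def booster(prompt):
    """Stage 1: resolve deterministic prompts (here: arithmetic) without an LLM."""
    try:
        return str(_eval_arith(ast.parse(prompt, mode="eval").body))
    except (ValueError, SyntaxError):
        return None

def route(models, min_quality=0.7):
    """Stage 5: cheapest model that clears the quality bar."""
    eligible = [m for m in models if m["quality"] >= min_quality]
    return min(eligible, key=lambda m: m["cost_per_1k"])

_cache = {}

def handle(prompt, models, call_llm):
    answer = booster(prompt)
    if answer is not None:                    # Stage 1: no LLM needed
        return answer, {"boosted": True, "cached": False}
    if prompt in _cache:                      # Stage 4: exact-match cache hit
        return _cache[prompt], {"boosted": False, "cached": True}
    model = route(models)                     # Stage 5: pick a model
    text = call_llm(model["name"], prompt)    # Stage 7: provider call
    _cache[prompt] = text                     # store for future hits
    return text, {"boosted": False, "cached": False, "route": model["name"]}

# Usage with a stubbed provider call:
models = [
    {"name": "deepseek-chat", "cost_per_1k": 0.0003, "quality": 0.8},
    {"name": "big-model", "cost_per_1k": 0.01, "quality": 0.95},
]
fake_llm = lambda model, prompt: f"[{model}] answer"

print(handle("17 * 3", models, fake_llm))             # boosted locally: ('51', ...)
print(handle("Explain recursion", models, fake_llm))  # routed to deepseek-chat
print(handle("Explain recursion", models, fake_llm))  # served from cache
```

A real semantic cache would match similar prompts, not just identical strings, and the router would factor in latency and per-prompt quality requirements; the control flow, though, follows the same short-circuit order.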

Next Steps