# Getting Started with ClawPipe
ClawPipe is an intelligent AI pipeline that sits between your app and LLM providers. Cut costs 30-50% with automatic routing, caching, and optimization. One SDK, every provider.
## Install

```bash
npm install clawpipe-ai                    # TypeScript / Node
pip install clawpipe-ai                    # Python
go get github.com/finsavvyai/clawpipe-go   # Go
```

## Quick Start
```ts
import { ClawPipe } from 'clawpipe-ai';

const pipe = new ClawPipe({
  apiKey: 'cp_xxx',
  projectId: 'my-app',
});

const result = await pipe.prompt('Explain recursion', {
  system: 'You are a helpful tutor',
  maxTokens: 2000,
});

console.log(result.text);
console.log(result.meta);
// { boosted: false, cached: false, contextSavings: '42%',
//   route: 'deepseek', model: 'deepseek-chat',
//   estimatedCostUsd: 0.003 }
```
```python
from clawpipe import ClawPipe

pipe = ClawPipe(
    api_key="cp_xxx",
    project_id="my-app",
)

# await requires an async context (e.g. inside an async def function)
result = await pipe.prompt(
    "Explain recursion",
    system="You are a helpful tutor",
    max_tokens=2000,
)

print(result.text)
print(result.meta)
# {'boosted': False, 'cached': False,
#  'context_savings': '42%', 'route': 'deepseek'}
```
```go
package main

import (
	"context"
	"fmt"

	clawpipe "github.com/finsavvyai/clawpipe-go"
)

func main() {
	pipe := clawpipe.New(clawpipe.Config{
		APIKey:    "cp_xxx",
		ProjectID: "my-app",
	})

	result, err := pipe.Prompt(context.Background(),
		"Explain recursion",
		clawpipe.WithSystem("You are a helpful tutor"),
		clawpipe.WithMaxTokens(2000),
	)
	if err != nil {
		panic(err)
	}

	fmt.Println(result.Text)
	fmt.Println(result.Meta)
}
```
## The Pipeline
Every request flows through up to eight stages. Each stage is independently toggleable and adds a specific optimization.
Booster → RAG → Pack → Cache → Route → Swarm → Call → Learn
### What Happens on Each Call
- Booster checks if the prompt can be resolved without an LLM (math, dates, JSON, UUIDs).
- RAG retrieves relevant documents and prepends them as context.
- Packer compresses the context window, removing redundancy (20-60% token savings).
- Cache returns a cached result if the same or similar prompt was seen before.
- Router picks the cheapest model that meets quality requirements.
- Swarm optionally fans out to multiple models for consensus or speed.
- Gateway dispatches to the provider with circuit breaker protection.
- Learner records the outcome to improve future routing decisions.
## Next Steps
- How It Works — deep dive into each pipeline stage
- TypeScript SDK — full config reference and all exports
- Python SDK — async and sync usage
- Go SDK — context-based API
- Integrations — LangChain, LlamaIndex, Vercel AI SDK