Migration Guides
Switch to ClawPipe from your current LLM setup. Most migrations take under 10 minutes.
From Direct OpenAI Calls
The most common migration. Replace the OpenAI client with ClawPipe's OpenAI-compatible wrapper or use the native SDK.
Option A: Drop-in Replacement (Zero Code Changes)
Use OpenAICompat to keep your existing OpenAI code unchanged.
```typescript
// Before: direct OpenAI
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
```

```typescript
// After: ClawPipe drop-in (same API)
import { OpenAICompat } from 'clawpipe-ai';

const openai = new OpenAICompat({
  apiKey: 'cp_xxx',
  projectId: 'my-app',
});

// Your existing code stays exactly the same
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
});
```
Option B: Native SDK (Full Pipeline Access)
Use the ClawPipe SDK directly for full control over the pipeline.
```typescript
// Before
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are helpful.' },
    { role: 'user', content: 'Explain recursion' },
  ],
  max_tokens: 2000,
});
const text = completion.choices[0].message.content;
```

```typescript
// After
import { ClawPipe } from 'clawpipe-ai';

const pipe = new ClawPipe({ apiKey: 'cp_xxx', projectId: 'my-app' });
const result = await pipe.prompt('Explain recursion', {
  system: 'You are helpful.',
  maxTokens: 2000,
});
const text = result.text;
// Bonus: result.meta shows cost savings, routing, cache hits
```
From LiteLLM
LiteLLM provides multi-provider access but lacks optimization stages. ClawPipe adds Booster, Packer, Cache, and self-learning routing.
Key Differences
| Feature | LiteLLM | ClawPipe |
|---|---|---|
| Multi-provider | Yes | Yes |
| Cost optimization | Manual model selection | Automatic routing + caching + packing |
| Caching | Hash-only | Hash + semantic similarity |
| Smart routing | No | Self-learning, improves over time |
| Context compression | No | 20-60% token reduction |
| Deterministic booster | No | Skips LLM for math, dates, JSON |
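The "deterministic booster" row can be pictured with a short sketch: answer trivially computable prompts (simple arithmetic, today's date) locally and skip the LLM call entirely. The function below is an illustrative stand-in under that assumption, not ClawPipe's actual implementation.

```typescript
// Conceptual booster stage: return a deterministic answer when possible,
// or null to fall through to the normal LLM pipeline. Illustrative only.
function boost(prompt: string): string | null {
  // Simple binary arithmetic like "12 * 3" is safe to evaluate locally.
  const math = prompt.match(/^\s*(\d+)\s*([+\-*\/])\s*(\d+)\s*$/);
  if (math) {
    const [, a, op, b] = math;
    const x = Number(a), y = Number(b);
    const ops: Record<string, number> = {
      '+': x + y, '-': x - y, '*': x * y, '/': x / y,
    };
    return String(ops[op]);
  }
  // Today's date is deterministic; no model needed.
  if (/^what('s| is) today('s)? date\??$/i.test(prompt.trim())) {
    return new Date().toISOString().slice(0, 10);
  }
  return null; // not boostable: continue to routing and the LLM
}
```

A real booster would cover more shapes (JSON reformatting, unit conversion), but the contract is the same: a cheap local check that either answers or declines.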
```python
# Before: LiteLLM
from litellm import completion

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

```python
# After: ClawPipe
from clawpipe import ClawPipe

pipe = ClawPipe(api_key="cp_xxx", project_id="my-app")
result = await pipe.prompt("Hello")
# Automatic routing, caching, packing — no manual model selection
```
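The "hash + semantic similarity" caching difference from the table above can be sketched as an exact-match lookup backed by a nearest-neighbour search over embeddings. The bag-of-words vectors here are a toy stand-in for a real embedding model, and none of these names are ClawPipe internals.

```typescript
// Toy semantic cache: exact (hash-style) hit first, then best cosine match.
type Entry = { prompt: string; vec: Map<string, number>; answer: string };

// Bag-of-words term counts as a stand-in for a real embedding.
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const w of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    vec.set(w, (vec.get(w) ?? 0) + 1);
  }
  return vec;
}

function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [w, x] of a) { dot += x * (b.get(w) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

class SemanticCache {
  private entries: Entry[] = [];

  put(prompt: string, answer: string): void {
    this.entries.push({ prompt, vec: embed(prompt), answer });
  }

  get(prompt: string, threshold = 0.85): string | null {
    const exact = this.entries.find((e) => e.prompt === prompt);
    if (exact) return exact.answer; // hash-style exact hit
    const vec = embed(prompt);
    let best: Entry | null = null, score = 0;
    for (const e of this.entries) {
      const s = cosine(vec, e.vec);
      if (s > score) { score = s; best = e; }
    }
    return best && score >= threshold ? best.answer : null;
  }
}
```

The point of the second tier is that a rephrased prompt ("What is recursion?" vs. "what is recursion") can still reuse a paid answer instead of triggering a fresh LLM call.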
From Portkey
Portkey focuses on observability and gateway routing. ClawPipe adds client-side optimization stages (Booster, Packer, Cache) that reduce tokens before they reach the gateway.
Key Differences
| Feature | Portkey | ClawPipe |
|---|---|---|
| Gateway routing | Yes | Yes |
| Observability | Dashboard-focused | SDK-level telemetry + dashboard |
| Client-side optimization | No | Booster + Packer + Cache |
| Self-learning routing | No | Weights update from every call |
| Swarm orchestration | No | Fan out to N models |
| Offline fallback | No | Local LLM auto-detection |
```typescript
// Before: Portkey
import Portkey from 'portkey-ai';

const portkey = new Portkey({ apiKey: 'pk_xxx' });
const response = await portkey.chat.completions.create({
  messages: [{ role: 'user', content: 'Hello' }],
  model: 'gpt-4o',
});
```

```typescript
// After: ClawPipe
import { ClawPipe } from 'clawpipe-ai';

const pipe = new ClawPipe({ apiKey: 'cp_xxx', projectId: 'my-app' });
const result = await pipe.prompt('Hello');
// Automatic: booster check, context packing, cache lookup,
// smart routing, then gateway call with circuit breaker
```
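The circuit breaker mentioned above follows a standard pattern: after a run of consecutive failures the breaker "opens" and calls fail fast until a cooldown elapses. This is a generic sketch of that pattern, not ClawPipe's gateway code; the thresholds are arbitrary.

```typescript
// Minimal circuit breaker: open after maxFailures consecutive errors,
// fail fast while open, allow one probe call after the cooldown.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private maxFailures = 3, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.maxFailures) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error('circuit open: failing fast');
      }
      this.failures = 0; // half-open: let one probe call through
    }
    try {
      const result = await fn();
      this.failures = 0; // success resets the failure count
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

Failing fast while a provider is down keeps latency bounded and gives the router an immediate signal to try another backend.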
Migration Checklist
- Sign up at app.clawpipe.ai and create a project
- Install the SDK: `npm install clawpipe-ai`
- Replace your LLM client initialization (see examples above)
- Set your ClawPipe API key in environment variables
- Run your existing test suite to verify compatibility
- Monitor savings via `pipe.stats()` or the analytics dashboard