Migration Guides

Switch to ClawPipe from your current LLM setup. Most migrations take under 10 minutes.

From Direct OpenAI Calls

The most common migration. Replace the OpenAI client with ClawPipe's OpenAI-compatible wrapper or use the native SDK.

Option A: Drop-in Replacement (Zero Code Changes)

Use OpenAICompat to keep your existing OpenAI code unchanged.

// Before: direct OpenAI
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// After: ClawPipe drop-in (same API)
import { OpenAICompat } from 'clawpipe-ai';
const openai = new OpenAICompat({
  apiKey: 'cp_xxx',
  projectId: 'my-app',
});

// Your existing code stays exactly the same
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
});

Option B: Native SDK (Full Pipeline Access)

Use the ClawPipe SDK directly for full control over the pipeline.

// Before
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are helpful.' },
    { role: 'user', content: 'Explain recursion' },
  ],
  max_tokens: 2000,
});
const text = completion.choices[0].message.content;

// After
import { ClawPipe } from 'clawpipe-ai';
const pipe = new ClawPipe({ apiKey: 'cp_xxx', projectId: 'my-app' });

const result = await pipe.prompt('Explain recursion', {
  system: 'You are helpful.',
  maxTokens: 2000,
});
const text = result.text;
// Bonus: result.meta shows cost savings, routing, cache hits
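The exact schema of `result.meta` isn't documented here. As a hedged sketch, assuming fields like `route`, `cacheHit`, and `savedUsd` (names are illustrative, not the official schema), you could log per-call savings like this:

```typescript
// Field names below are assumptions for illustration, not the documented schema.
interface PromptMeta {
  route: string;     // model the router picked
  cacheHit: boolean; // whether the answer came from cache
  savedUsd: number;  // estimated cost avoided vs. a direct call
}

function summarizeMeta(meta: PromptMeta): string {
  return `route=${meta.route} cache=${meta.cacheHit ? 'hit' : 'miss'} saved=$${meta.savedUsd.toFixed(4)}`;
}

// e.g. summarizeMeta(result.meta) after a pipe.prompt() call
console.log(summarizeMeta({ route: 'gpt-4o-mini', cacheHit: true, savedUsd: 0.0018 }));
```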

From LiteLLM

LiteLLM provides multi-provider access but lacks optimization stages. ClawPipe adds Booster, Packer, Cache, and self-learning routing.

Key Differences

| Feature | LiteLLM | ClawPipe |
| --- | --- | --- |
| Multi-provider | Yes | Yes |
| Cost optimization | Manual model selection | Automatic routing + caching + packing |
| Caching | Hash-only | Hash + semantic similarity |
| Smart routing | No | Self-learning, improves over time |
| Context compression | No | 20-60% token reduction |
| Deterministic booster | No | Skips LLM for math, dates, JSON |

# Before: LiteLLM
from litellm import completion

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

# After: ClawPipe (prompt() is a coroutine, so run it inside an event loop)
import asyncio

from clawpipe import ClawPipe

async def main():
    pipe = ClawPipe(api_key="cp_xxx", project_id="my-app")
    result = await pipe.prompt("Hello")
    # Automatic routing, caching, packing — no manual model selection

asyncio.run(main())
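The caching row in the table above is worth unpacking. A minimal sketch of the difference (illustrative only, not ClawPipe internals): a hash cache keys on the exact prompt text, so any rewording misses, while a semantic cache compares embedding vectors, so differently worded prompts with the same meaning can still hit.

```typescript
import { createHash } from 'node:crypto';

// Hash-only caching (LiteLLM-style): the key is a digest of the normalized
// prompt text, so a genuinely reworded prompt is always a cache miss.
function hashKey(prompt: string): string {
  return createHash('sha256').update(prompt.trim().toLowerCase()).digest('hex');
}

// Semantic caching compares embedding vectors instead. Cosine similarity
// over two embeddings (real embeddings would come from an embedding model):
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}
```

A semantic cache would treat two prompts as equivalent when their embeddings' cosine similarity exceeds some threshold, rather than requiring identical hash keys.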

From Portkey

Portkey focuses on observability and gateway routing. ClawPipe adds client-side optimization stages (Booster, Packer, Cache) that reduce tokens before they reach the gateway.

Key Differences

| Feature | Portkey | ClawPipe |
| --- | --- | --- |
| Gateway routing | Yes | Yes |
| Observability | Dashboard-focused | SDK-level telemetry + dashboard |
| Client-side optimization | No | Booster + Packer + Cache |
| Self-learning routing | No | Weights update from every call |
| Swarm orchestration | No | Fan out to N models |
| Offline fallback | No | Local LLM auto-detection |

// Before: Portkey
import Portkey from 'portkey-ai';
const portkey = new Portkey({ apiKey: 'pk_xxx' });
const response = await portkey.chat.completions.create({
  messages: [{ role: 'user', content: 'Hello' }],
  model: 'gpt-4o',
});

// After: ClawPipe
import { ClawPipe } from 'clawpipe-ai';
const pipe = new ClawPipe({ apiKey: 'cp_xxx', projectId: 'my-app' });
const result = await pipe.prompt('Hello');
// Automatic: booster check, context packing, cache lookup,
// smart routing, then gateway call with circuit breaker
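The circuit breaker mentioned above is a standard resilience pattern: after N consecutive failures, stop calling the failing provider for a cooldown window, then retry. A generic sketch of the pattern (not ClawPipe's actual implementation):

```typescript
// Minimal circuit breaker: opens after `threshold` consecutive failures,
// rejects calls while open, and allows a retry once `cooldownMs` elapses.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 3, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold && Date.now() - this.openedAt < this.cooldownMs) {
      throw new Error('circuit open: provider temporarily skipped');
    }
    try {
      const result = await fn();
      this.failures = 0; // any success closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

In a gateway, an open circuit for one provider lets the router fail over to the next candidate instead of waiting on repeated timeouts.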

Migration Checklist

  1. Sign up at app.clawpipe.ai and create a project
  2. Install the SDK: npm install clawpipe-ai
  3. Replace your LLM client initialization (see examples above)
  4. Set your ClawPipe API key in environment variables
  5. Run your existing test suite to verify compatibility
  6. Monitor savings via pipe.stats() or the analytics dashboard
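For step 4, a typical setup looks like the following. The variable name `CLAWPIPE_API_KEY` is illustrative, not an SDK-mandated name; match it to whatever your initialization code reads.

```shell
# Keep the key out of source control; load it from the environment instead.
# The variable name is an assumption -- match it to what your code reads.
export CLAWPIPE_API_KEY="cp_xxx"
```

Your client initialization would then read the key via `process.env.CLAWPIPE_API_KEY` (Node) or `os.environ` (Python) instead of a hardcoded string.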