TypeScript SDK
JavaScript/TypeScript SDK for integrating with MacPaw AI Gateway
@macpaw/ai-sdk
Thin Vercel AI SDK extension for MacPaw AI Gateway: OpenAI-compatible providers (createAIGatewayProvider, createGatewayProvider), a createGatewayFetch bridge for any HTTP client, shared auth / retry / middleware / errors, and optional NestJS wiring.
Core generation APIs stay on upstream ai and @ai-sdk/*. This package only adds Gateway-specific construction and the fetch pipeline.
Package entry points
| Import | Use for |
|---|---|
| `@macpaw/ai-sdk` | Canonical — providers, `createGatewayFetch`, errors, config types |
| `@macpaw/ai-sdk/provider` | Alias of the root entry (same dist; for older snippets) |
| `@macpaw/ai-sdk/nestjs` | `AIGatewayModule`, `@InjectAIGateway()`, `AIGatewayExceptionFilter` |
Upstream ai, @ai-sdk/openai, @ai-sdk/react (or ai/react) remain the home for Vercel primitives and React hooks.
There is no published @macpaw/ai-sdk/client, @macpaw/ai-sdk/runtime, @macpaw/ai-sdk/types, or @macpaw/ai-sdk/testing in this version — use createGatewayFetch + fetch (or the OpenAI SDK with custom fetch) for raw HTTP and multipart. See MIGRATION.
Install
```bash
pnpm add @macpaw/ai-sdk
# or
npm install @macpaw/ai-sdk
```

Also install the upstream packages you call directly, for example `ai`, `@ai-sdk/openai`, `@ai-sdk/react`.
This package does not pin or wrap the upstream UI/hooks API from ai / @ai-sdk/react. Follow the versioned upstream docs for the exact major version you install there. If your chosen upstream version requires version-specific imports or patterns (for example schema helpers), use the upstream guidance for those APIs.
Quick start (Vercel AI SDK)
```typescript
import { generateText, streamText } from 'ai';
import { createAIGatewayProvider, ErrorCode } from '@macpaw/ai-sdk';

const gateway = createAIGatewayProvider({
  env: 'production',
  getAuthToken: async () => (await getSetappSession()).accessToken,
});

const { text } = await generateText({
  model: gateway('openai/gpt-4.1-nano'),
  prompt: 'Hello from AI Gateway',
});

const result = streamText({
  model: gateway('openai/gpt-4.1-nano'),
  prompt: 'Write a poem',
});

for await (const delta of result.textStream) {
  process.stdout.write(delta);
}
```

Features
- Vercel-first — `OpenAIProvider` from `@ai-sdk/openai` + custom `fetch`
- Auth — `getAuthToken(forceRefresh?)`; one automatic retry on 401 with `forceRefresh === true`
- Retry — exponential backoff for 429 and 5xx (and some network errors); not for 401/402
- Middleware — `(config, next) => Promise<Response>` chain before `fetch`
- Errors — Gateway JSON and OpenAI-shaped bodies → `AIGatewayError` subclasses + `ErrorCode`
- Request ID — `X-Request-ID` added to Gateway requests when missing
- Timeout — per attempt, combined with the caller's `AbortSignal`
- Tree-shakeable — ESM + CJS
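The auth contract above can be satisfied with a small cached token source. A sketch of one possible shape — `fetchToken` here is a hypothetical stand-in for your real auth layer, not part of the SDK:

```typescript
// Sketch of a getAuthToken implementation honoring the forceRefresh contract.
// fetchToken is a hypothetical stand-in for your real token endpoint.
type GetAuthToken = (forceRefresh?: boolean) => Promise<string | null>;

function makeCachedTokenSource(fetchToken: () => Promise<string>): GetAuthToken {
  let cached: string | null = null;
  return async (forceRefresh?: boolean) => {
    if (forceRefresh || cached === null) {
      cached = await fetchToken(); // re-fetch on first use or after a 401
    }
    return cached;
  };
}
```

Per the feature list, the SDK passes `forceRefresh === true` exactly once after a 401, so the cache is invalidated only when the gateway rejects the token.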
Configuration (GatewayProviderSettings)
Used by createAIGatewayProvider, createGatewayProvider, createGatewayFetch, and the Nest AIGatewayModule.
| Field | Purpose |
|---|---|
| `getAuthToken(forceRefresh?)` | Required — returns `Promise<string \| null>`; called with `forceRefresh === true` after a 401 |
| `env` | `'production'` → default base URL `https://api.macpaw.com/ai` |
| `baseURL` | Override the gateway root (staging, etc.) |
| `headers` | Extra headers (do not set `Authorization` here) |
| `retry` | `RetryConfig` or `false` |
| `timeout` | ms per attempt (default `60000`) |
| `middleware` | Interceptor stack |
| `fetch` | Custom `fetch` implementation |
Internal resolution: resolveConfig() in gateway-config.ts.
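Putting the table together, a full settings object might look like the sketch below. The type is a local illustration mirroring the table (not the SDK's exported type), and the staging host is hypothetical:

```typescript
// Illustrative local type mirroring the settings table above.
type GatewaySettingsSketch = {
  getAuthToken: (forceRefresh?: boolean) => Promise<string | null>;
  env?: 'production';
  baseURL?: string;
  headers?: Record<string, string>;
  retry?: object | false;
  timeout?: number;
};

const settings: GatewaySettingsSketch = {
  getAuthToken: async () => 'token-from-your-auth-layer', // replace with real auth
  baseURL: 'https://staging.example.com/ai', // hypothetical staging override
  headers: { 'X-Client': 'my-app/1.0' },     // never set Authorization here
  retry: false,                               // disable backoff entirely
  timeout: 30_000,                            // 30 s per attempt (default 60 000)
};
```

Note that `env` and `baseURL` are alternatives: when `baseURL` is set it wins over the `'production'` shortcut.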
createGatewayFetch — raw HTTP / multipart
Same auth, retry, middleware, and error normalization as the provider path. Use relative URLs under the gateway root (e.g. '/api/v1/images/edits') or absolute URLs that stay under the same gateway origin.
```typescript
import { createGatewayFetch, resolveGatewayBaseURL } from '@macpaw/ai-sdk';

const baseURL = resolveGatewayBaseURL(undefined, 'production', 'gatewayFetch');
const gatewayFetch = createGatewayFetch({
  baseURL,
  getAuthToken: async () => token,
});

const form = new FormData();
form.append('image', imageBlob, 'photo.png');
form.append('prompt', 'Add a hat');
form.append('model', 'openai/dall-e-2');

const res = await gatewayFetch('/api/v1/images/edits', { method: 'POST', body: form });
```

Non-gateway absolute URLs are passed through without injecting Bearer auth (the placeholder key is stripped). See gateway-fetch.ts.
createGatewayFetch requires a resolved baseURL. Use the exported resolveGatewayBaseURL() helper if you want the same 'production' shortcut that provider factories support.
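The relative-vs-absolute URL rule above can be illustrated with a tiny resolver. This is only an illustration of the documented behavior, not the SDK's internal code:

```typescript
// Illustrative resolver for the documented URL rules: relative paths join the
// gateway root; absolute URLs pass through unchanged.
function resolveRequestURL(baseURL: string, input: string): string {
  if (/^https?:\/\//i.test(input)) return input; // absolute: passed through
  return baseURL.replace(/\/+$/, '') + input;    // relative: joined to the root
}
```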
createGatewayProvider — prefixed model IDs
Bare model IDs get a default Gateway prefix per provider constant; IDs that already contain / are unchanged.
| Constant | Default prefix |
|---|---|
| `GATEWAY_PROVIDERS.ANTHROPIC` | `anthropic` |
| `GATEWAY_PROVIDERS.GOOGLE` | `google` |
| `GATEWAY_PROVIDERS.XAI` | `xai` |
| `GATEWAY_PROVIDERS.GROQ` | `groq` |
| `GATEWAY_PROVIDERS.MISTRAL` | `mistral` |
| `GATEWAY_PROVIDERS.AMAZON_BEDROCK` | `bedrock` |
| `GATEWAY_PROVIDERS.AZURE` | `azure` |
| `GATEWAY_PROVIDERS.COHERE` | `cohere` |
| `GATEWAY_PROVIDERS.PERPLEXITY` | `perplexity` |
| `GATEWAY_PROVIDERS.DEEPSEEK` | `deepseek` |
| `GATEWAY_PROVIDERS.TOGETHERAI` | `togetherai` |
| `GATEWAY_PROVIDERS.OPENAI_COMPATIBLE` | requires `modelPrefix` in options |
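The prefixing rule reduces to a one-liner. A sketch of the documented behavior (not the SDK source):

```typescript
// Bare model IDs get the provider's default prefix; IDs containing '/' pass through.
function applyModelPrefix(modelId: string, defaultPrefix: string): string {
  return modelId.includes('/') ? modelId : `${defaultPrefix}/${modelId}`;
}
```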
```typescript
import { generateText } from 'ai';
import { createGatewayProvider, GATEWAY_PROVIDERS } from '@macpaw/ai-sdk';

const anthropic = createGatewayProvider(GATEWAY_PROVIDERS.ANTHROPIC, {
  env: 'production',
  getAuthToken: async () => token,
});

await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  prompt: 'Hello',
});
```

Provider options (AIGatewayProviderOptions)
Extends GatewayProviderSettings plus the OpenAI provider settings (without apiKey / baseURL / fetch, which are wired by the SDK):
- `normalizeErrors` — default `true`; non-OK Gateway responses throw typed errors
- `createOpenAI` — optional override of `createOpenAI` from `@ai-sdk/openai` (tests/advanced)
Use normalizeErrors: false only when you intentionally want to inspect raw failed Response objects in provider-driven tests or adapters. Auth refresh and retry behavior still stay on; only typed non-OK error throwing is relaxed.
Middleware
```typescript
import type { Middleware } from '@macpaw/ai-sdk';

const loggingMiddleware: Middleware = async (config, next) => {
  const response = await next(config);
  console.log(config.method, config.url, response.status);
  return response;
};
```

Error handling
| ErrorCode | Typical HTTP | Meaning |
|---|---|---|
| `AuthRequired` | 401 | Token missing / expired |
| `InsufficientCredits` / `SubscriptionExpired` | 402 | Billing / subscription |
| `ModelNotAllowed` | 403 | Model denied |
| `RateLimited` | 429 | Rate limit (`retryAfter` when present) |
| `Validation` | 422 | Validation error body |
| … | … | See gateway-errors.ts |
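When handling `RateLimited`, the `retryAfter` metadata (when present) can drive the wait before a manual retry. A hedged sketch; the seconds unit and fallback policy are assumptions for illustration, so check gateway-errors.ts for the actual shape:

```typescript
// Hypothetical helper: pick a delay before retrying a rate-limited request.
// Uses retryAfter when the gateway provided it, else capped exponential backoff.
function retryDelayMs(retryAfterSeconds: number | undefined, attempt: number): number {
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000;
  return Math.min(2 ** attempt * 250, 10_000);
}
```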
```typescript
import { AIGatewayError, ErrorCode, isAIGatewayError } from '@macpaw/ai-sdk';

try {
  // ...
} catch (e) {
  if (isAIGatewayError(e) && e.code === ErrorCode.InsufficientCredits) {
    // e.metadata.paymentUrl, e.requestId, etc.
  }
}
```

NestJS
```bash
pnpm add @macpaw/ai-sdk @nestjs/common rxjs
```

Register once (global by default):
```typescript
import { AIGatewayModule } from '@macpaw/ai-sdk/nestjs';

AIGatewayModule.forRoot({
  env: 'production',
  getAuthToken: async () => process.env.SETAPP_TOKEN!,
});
```

If your Nest app uses TypeScript subpath exports strictly, make sure its tsconfig uses a modern resolver such as moduleResolution: "Node16", "NodeNext", or "bundler" so @macpaw/ai-sdk/nestjs resolves correctly.
Inject GatewayProviderSettings (not an HTTP client) and build providers in the service:
```typescript
import { Injectable } from '@nestjs/common';
import { InjectAIGateway } from '@macpaw/ai-sdk/nestjs';
import type { GatewayProviderSettings } from '@macpaw/ai-sdk';
import { createAIGatewayProvider } from '@macpaw/ai-sdk';
import { generateText } from 'ai';

@Injectable()
export class ChatService {
  constructor(@InjectAIGateway() private readonly config: GatewayProviderSettings) {}

  async complete(prompt: string) {
    const gateway = createAIGatewayProvider(this.config);
    const { text } = await generateText({
      model: gateway('openai/gpt-4.1-nano'),
      prompt,
    });
    return text;
  }
}
```

AIGatewayExceptionFilter maps AIGatewayError to JSON HTTP responses. See examples/nestjs/ for a copy-paste skeleton.
Only documented root exports are public API. Source-level helpers such as parseErrorResponseFromResponse and parseStreamErrorPayload may exist internally, but they are not supported import targets unless exported from @macpaw/ai-sdk.
Examples
From the repo root:
```bash
pnpm build
pnpm example:provider
```

Set AI_GATEWAY_TOKEN or SETAPP_TOKEN. Optional: AI_GATEWAY_BASE_URL, AI_GATEWAY_MODEL.
See examples/README.md.
Release & quality
- CI: typecheck, lint, test, coverage, build on Node 18 / 20 / 22
- `pnpm verify:release` — full local gate before publish
- `pnpm size:pack` — dry-run npm pack
AI assistant setup
Templates for Cursor (.cursor/skills/), Claude Code (CLAUDE.md), and OpenAI Codex (AGENTS.md) ship under templates/. After installing the package:
```bash
pnpm exec macpaw-ai-setup
# or: npx macpaw-ai-setup
```

Use macpaw-ai-setup cursor, claude, or codex to install only one target. Existing root CLAUDE.md / AGENTS.md files get Gateway sections appended, not replaced.
The installed instructions enforce the current package surface and the main auth guardrails:
- prefer `@macpaw/ai-sdk` / `@macpaw/ai-sdk/nestjs`
- keep generation primitives on upstream `ai` / `@ai-sdk/*`
- do not use removed subpaths such as `client`, `runtime`, `types`, or `testing`
- do not invent a token source or expose gateway tokens to browser-only code
- use `baseURL` for staging/custom hosts; `env` supports only `'production'`
Versioning
Semantic Versioning. Releases via semantic-release and Conventional Commits.
License
MIT © 2026 MacPaw Way Ltd. See LICENSE.