AI Gateway
Integrate with AI Gateway and access multiple AI models through a unified API
Overview
The Setapp AI Gateway is a secure, unified proxy that enables your app to access multiple AI models from providers such as OpenAI, Anthropic, Google Gemini, and xAI without requiring users to provide their own API keys.
The AI Gateway standardizes various provider APIs into a single, OpenAI-API-compatible interface, making it easy to switch models or providers with a simple string change.
Core AI Gateway features
- OpenAI SDK Compatible – A seamless, drop-in replacement for the OpenAI SDK; just update your base URL and API key to get started.
- Multi-Provider Access – Connect to OpenAI, Anthropic, Google Gemini, xAI, and more via a single unified API.
- Streaming Support – Full support for Server-Sent Events (SSE) to deliver real-time responses across all compatible endpoints.
- Dynamic Model Discovery – Use the `/api/v1/model/info` endpoint to discover available models and their specific capabilities in real time.
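Because the gateway is a drop-in replacement, the only client-side changes are the base URL and the credential. A minimal sketch of the shared pieces (the helper below is illustrative, not part of any official SDK; `setapp_token` stands in for the user's OAuth access token):

```python
# Gateway base URL and auth header, as described on this page.
GATEWAY_BASE = "https://api.macpaw.com/ai/api/v1"

def gateway_headers(setapp_token: str) -> dict:
    """Build the headers every AI Gateway request needs."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {setapp_token}",
    }

# With the official OpenAI Python SDK installed, the same swap is just:
#   client = OpenAI(base_url=GATEWAY_BASE, api_key=setapp_token)
```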
Core concepts & capabilities
The AI Gateway supports the latest industry standards for AI interaction:
- Responses API - A unified endpoint for text, reasoning, and tool use (recommended).
- Chat Completions - Standard OpenAI-API-compatible chat interface.
- Vision - Send images for analysis using multimodal models.
- Audio Processing - Transcription and translation with Whisper and GPT-4o models.
- Embeddings - Create vector representations of text for semantic search and RAG.
- Image Generation - Generate images from text prompts using DALL-E 3.
- Structured Output - Force the model to return valid JSON matching your schema.
- Function Calling - Define tools that the model can request to execute.
- Streaming - Server-Sent Events (SSE) for real-time responses.
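For the streaming capability, responses arrive as Server-Sent Events. Assuming the stream follows the usual OpenAI convention of `data:` lines terminated by `data: [DONE]` (a convention this page does not spell out), a minimal parser sketch:

```python
import json

def parse_sse_events(lines):
    """Yield parsed JSON payloads from SSE `data:` lines, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # ignore blank keep-alives and comment lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        yield json.loads(payload)
```

Feed it the decoded lines of the HTTP response body as they arrive.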
Example request
```shell
curl -X POST https://api.macpaw.com/ai/api/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {Setapp Token}" \
  -d '{
    "model": "openai/gpt-4.1-mini",
    "input": "Write a haiku about local development servers."
  }'
```
Quick start
To get up and running in minutes:
- Obtain Token - Get a Bearer access token from the Vendor API authorization flow with the `ai.openai` scope (see the Authorization guide).
- Select Model - Choose a model using the `provider/model_name` format (e.g., `openai/gpt-4`).
- Call API - Send a POST request to `https://api.macpaw.com/ai/api/v1/responses`.
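The three steps above translate almost directly into code. A stdlib-only sketch that assembles the same request as the curl example (the helper name is ours; actually sending it requires a real user token):

```python
import json
import urllib.request

RESPONSES_URL = "https://api.macpaw.com/ai/api/v1/responses"

def build_responses_request(token: str, model: str, text: str) -> urllib.request.Request:
    """Assemble the POST /responses request from the quick-start steps."""
    body = json.dumps({"model": model, "input": text}).encode("utf-8")
    return urllib.request.Request(
        RESPONSES_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# Sending it (needs a real Setapp user token):
#   with urllib.request.urlopen(build_responses_request(tok, "openai/gpt-4.1-mini", "Hi")) as r:
#       print(json.load(r))
```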
Authentication
The AI Gateway uses the standard Setapp OAuth 2.0 flow that you might already be familiar with from the Setapp Vendor API. The same OAuth tokens that work with other Setapp APIs work with the AI Gateway.
Requirements
- Header: `Authorization: Bearer <SETAPP_USER_TOKEN>`
- Scope Required: `ai.openai`
How to get your token
If you're already integrated with Setapp via Vendor API:
- Use your existing OAuth 2.0 implementation.
- Ensure the user's access token includes the `ai.openai` scope.
- Use the user's access token in the `Authorization` header when calling AI Gateway endpoints.
If you're new to Vendor API integration:
- Implement the Setapp OAuth 2.0 authorization flow.
- Request the `ai.openai` scope during authorization.
- Use the obtained user access token to authenticate AI Gateway requests.
Note: The AI Gateway handles all AI provider authentication internally. You never need to manage OpenAI API keys or other provider credentials; just use the Setapp user's access token obtained through OAuth.
Base URL
Production: `https://api.macpaw.com/ai`
What's different from OpenAI
The AI Gateway is designed to be OpenAI-API-compatible, but has a few key differences:
1. Model names use provider prefixes
All model names must include the provider prefix (e.g., `openai/gpt-5-mini` instead of just `gpt-5-mini`). This allows the gateway to route requests to the correct AI provider:
- `openai/gpt-5-mini` - GPT-5 Mini (fast, cost-effective)
- `openai/gpt-4o` - GPT-4o (vision, function calling)
- `google/gemini-2.5-pro` - Gemini 2.5 Pro (Google's latest)
- `x-ai/grok-4` - Grok 4 (xAI)
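Since the prefix is mandatory, it can be worth validating model IDs client-side before a request leaves the app. An illustrative helper (ours, not a gateway API):

```python
def split_model_id(model_id: str) -> tuple:
    """Split a gateway model ID into (provider, model_name); reject bare names."""
    provider, sep, name = model_id.partition("/")
    if not sep or not provider or not name:
        raise ValueError(f"expected 'provider/model_name', got {model_id!r}")
    return provider, name
```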
2. Extended model support
The gateway provides access to models from multiple providers (OpenAI, Anthropic, Google, xAI) through a single API, not just OpenAI models.
To get the current list of supported models and their capabilities, use the `GET /api/v1/model/info` endpoint (see the API specification).
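Model discovery is a plain authenticated GET. A stdlib sketch that builds the request (the response schema lives in the API specification, so it is not assumed here):

```python
import urllib.request

MODEL_INFO_URL = "https://api.macpaw.com/ai/api/v1/model/info"

def build_model_info_request(token: str) -> urllib.request.Request:
    """Assemble the GET /api/v1/model/info request."""
    return urllib.request.Request(
        MODEL_INFO_URL,
        headers={"Authorization": f"Bearer {token}"},
    )

# Fetching (needs a real token):
#   with urllib.request.urlopen(build_model_info_request(tok)) as r:
#       models = json.load(r)
```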
Response codes
| Code | Meaning |
|---|---|
| 200 | Successful request |
| 400 | Bad request. Check your JSON body or provider prefix |
| 401 | Unauthorized. Check your access token |
| 402 | Payment required. Insufficient credits to complete the request* |
| 500 | Gateway/Provider error |
*Important: When a customer has insufficient credits, the API returns a 402 Payment Required error. Your application should handle this gracefully by directing users to add more credits.
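One way to translate the table into client behavior, with the 402 case called out as the page recommends; the action names are ours, not the gateway's:

```python
def gateway_error_action(status: int) -> str:
    """Map a gateway HTTP status to a suggested client action (illustrative)."""
    if status == 200:
        return "ok"
    if status == 400:
        return "fix-request"   # check the JSON body and provider prefix
    if status == 401:
        return "reauthorize"   # refresh the OAuth access token
    if status == 402:
        return "add-credits"   # direct the user to add more credits
    if status >= 500:
        return "retry-later"   # gateway or provider error
    return "unknown"
```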
Migration guide
If you are currently using the legacy OpenAI proxy (/resource/v1/ai/openai), follow these steps to upgrade:
| Feature | Legacy Proxy (Deprecated) | AI Gateway (New) |
|---|---|---|
| Endpoint URL | https://vendor-api.setapp.com/resource/v1/ai/openai | https://api.macpaw.com/ai/api/v1/... |
| Routing | Uses OpenAIPath header | Endpoint is part of the URL path (e.g., /api/v1/responses) |
| Model Names | gpt-4 | openai/gpt-4 (Provider prefix required) |
| Provider | OpenAI only | Multi-provider (Anthropic, Google, etc.) |
| Authentication | Same OAuth tokens | Same OAuth tokens (no changes needed) |
Timeline: The legacy endpoint will remain active for a limited time to allow for a smooth transition. All new models and features will be added exclusively to the new Gateway.
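Of the differences in the table, the model-name prefix is the one that touches request payloads. A tiny migration sketch (assuming legacy requests used bare OpenAI model names, as the table shows):

```python
def migrate_model_name(legacy_model: str, provider: str = "openai") -> str:
    """Add the provider prefix the new gateway requires; leave prefixed IDs alone."""
    if "/" in legacy_model:
        return legacy_model  # already in provider/model_name form
    return f"{provider}/{legacy_model}"
```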
API reference
For detailed API specifications, see the OpenAPI Specification.
