AI integration overview

Setapp provides a unified AI platform that gives apps access to multiple AI providers through a single integration layer. Instead of integrating directly with OpenAI, Anthropic, Google Gemini, or other providers, your app connects to Setapp’s AI infrastructure. Setapp handles authentication, credit management, and provider routing, so you can focus on building AI-powered features.

The integration method depends on your platform:

macOS and iOS apps > SetappAI SDK

macOS and iOS apps integrate with Setapp AI using the SetappAI SDK, which is part of the Setapp Framework. The SDK provides a native Swift API and automatically handles:

  • OAuth authentication with the ai.openai scope
  • Credit usage and billing
  • Communication with the Setapp AI Gateway

The SDK is recommended for Apple platform apps that need a seamless, native integration with minimal backend work.


Web apps and browser extensions > Setapp AI Gateway

Web apps and browser extensions integrate directly with the AI Gateway API. The AI Gateway is an OpenAI-compatible HTTP API that:

  • Provides access to multiple AI providers through a single endpoint
  • Uses Setapp OAuth tokens for authentication
  • Manages credit usage centrally
  • Supports streaming, structured output, and tool use

This approach is recommended for server-side apps, web platforms, and browser extensions.
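Because the gateway is OpenAI-compatible, requests follow the standard OpenAI chat completions shape with a Setapp OAuth token as the bearer credential. The sketch below builds such a request in Python; the endpoint URL and model name are placeholders, not the gateway's real values — consult the Setapp AI Gateway reference for those.

```python
import json
import urllib.request

# Placeholder endpoint; substitute the real Setapp AI Gateway URL from the docs.
GATEWAY_URL = "https://example.invalid/v1/chat/completions"

def build_chat_request(setapp_token: str, model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request authenticated
    with a Setapp OAuth token."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {setapp_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: a minimal single-turn chat request (model name is illustrative).
req = build_chat_request(
    "oauth-token",
    "gpt-4o",
    [{"role": "user", "content": "Hello"}],
)
```

Since the wire format matches OpenAI's, existing OpenAI client libraries can typically be pointed at the gateway by overriding the base URL and supplying the Setapp token as the API key.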


Rate limits

Setapp AI enforces per-user rate limits depending on the model category and subscription plan.

Request limits

GPT-4 and GPT-5 models

  • 400 requests per minute per user
  • 7,500 requests per day per user

Embedding models

  • 10,000 requests per hour per user
  • 10,000 requests per day per user

All other models

  • 10,000 requests per hour per user
  • 20,000 requests per day per user
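When a client exceeds one of these per-user limits, the usual pattern for OpenAI-compatible APIs is an HTTP 429 response (an assumption here; check the gateway's error reference for the exact status and headers). A minimal sketch of the client-side retry delay, using exponential backoff with full jitter:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay (in seconds) before retrying a rate-limited request.
    Grows as base * 2**attempt, capped, with full jitter so many
    clients hitting the limit at once don't retry in lockstep."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

If the gateway returns a Retry-After header on 429 responses, prefer that value over a computed delay.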

Token limits

The maximum input size per request depends on the user’s subscription plan:

  • Enthusiast plan — up to 160,000 input tokens
  • Expert plan — up to 1,600,000 input tokens
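A client can pre-check input size against the user's plan before sending a request. The sketch below uses the plan limits above with a rough ~4-characters-per-token estimate; that heuristic is an assumption — a real integration should count tokens with the model's actual tokenizer (e.g. tiktoken).

```python
# Per-plan input token limits, per the Setapp AI token limits above.
PLAN_INPUT_TOKEN_LIMITS = {
    "enthusiast": 160_000,
    "expert": 1_600_000,
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token). Use a real
    tokenizer for accurate counts before enforcing hard limits."""
    return max(1, len(text) // 4)

def fits_plan(text: str, plan: str) -> bool:
    """Return True if the input likely fits within the given plan's
    per-request input token limit."""
    return estimate_tokens(text) <= PLAN_INPUT_TOKEN_LIMITS[plan]
```

Rejecting oversized inputs client-side avoids burning a request against the rate limits only to have the gateway refuse it.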

Your application should handle rate limiting and token size validation gracefully, especially for high-volume or streaming scenarios.