OpenAI integration overview
Change Log
December 12, 2025
- Added support for GPT-5.1-codex-max, GPT-5.2, GPT-5.2-2025-12-11, GPT-5.2-chat-latest, GPT-5.2-pro, and GPT-5.2-pro-2025-12-11.
December 1, 2025
- Added support for the new OpenAI API endpoint v1/responses, required for modern GPT and O3 reasoning models.
- Added support for the GPT-5.1 and o3-deep-research models.
- The rate limits were increased 100-fold:
- req/minute/user for GPT-4 and GPT-5: from 4 to 400
- req/day/user for GPT-4 and GPT-5: from 75 to 7500
- req/hour/user for other models: from 100 to 10,000
- req/day/user for other models: from 200 to 20,000
- tokens for the Enthusiast subscription plan: from 1600 to 160,000
- tokens for the Expert subscription plan: from 16,000 to 1,600,000
November 3, 2025
- New rate limits for the GPT-5 model are now in effect. Users can only send messages with up to 1,600 tokens and a maximum of 4 messages per minute.
October 23, 2025
- Added support for GPT-5, GPT-5-2025-08-07, GPT-5-mini, GPT-5-mini-2025-08-07, GPT-5-nano, GPT-5-nano-2025-08-07.
June 11, 2024
- Added support for DALL·E 2 and DALL·E 3.
February 28, 2024
- Released the AI+ subscription plans with new usage limits. See AI+ subscription plans.
September 15, 2023
- Added support for the new OpenAI endpoints:
v1/images/edits, v1/images/variations.
August 23, 2023
- Added support for the new OpenAI endpoints: v1/images/generations, v1/audio/transcriptions, v1/audio/translations.
- New rate limits for the GPT-4 model are now in effect. Users can only send messages with up to 1,600 tokens and a maximum of 4 messages per minute.
We encourage AI apps to join Setapp. Most AI apps depend on OpenAI, which means each user needs their own OpenAI key: the user must visit the OpenAI website, register, obtain a key, and know where to enter it. This procedure is neither obvious nor easy for average users, but we know how to simplify it and encourage more people to use your AI app.
Setapp can act as a “proxy” between your app and OpenAI. There is no need to generate an OpenAI key for each user: your app can use Setapp's OpenAI key to access OpenAI. In other words, we receive the full request from your app, add our OpenAI key, and forward it.
How to integrate your AI app into Setapp
- Integrate Setapp Framework into your Mac app.
- Integrate our Vendor API Authorization with your back-end.
- Integrate our OpenAI API with your back-end.
- Replace client calls to OpenAI with requests to your back end.
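The last step can be sketched as follows. This is an illustrative Python sketch, not Setapp SDK code: the helper name build_setapp_request is hypothetical, and only the URL and header names come from this document. It shows how a back end would assemble a proxied request — the client's regular OpenAI request body is kept as-is, while the OpenAI key is replaced by the Setapp access token and the OpenAIPath header.

```python
# Hypothetical helper: assemble the URL, headers, and body your back end
# would send to the Setapp proxy instead of calling OpenAI directly.
# Only the endpoint URL and header names are taken from the Setapp docs.

SETAPP_OPENAI_URL = "https://vendor-api.setapp.com/resource/v1/ai/openai"

def build_setapp_request(setapp_token: str, openai_path: str, body: dict) -> dict:
    """Return the pieces of a proxied request.

    setapp_token -- Bearer access token from the Vendor API Authorization
                    flow (must carry the ai.openai scope)
    openai_path  -- target OpenAI endpoint, e.g. "v1/chat/completions"
    body         -- the regular OpenAI request body (no OpenAI key needed)
    """
    return {
        "url": SETAPP_OPENAI_URL,
        "headers": {
            "OpenAIPath": openai_path,
            "Authorization": f"Bearer {setapp_token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        "json": body,
    }

request = build_setapp_request(
    "example-token",
    "v1/chat/completions",
    {"model": "gpt-4", "messages": [{"role": "user", "content": "Hello!"}]},
)
print(request["headers"]["OpenAIPath"])  # v1/chat/completions
```

The returned dictionary maps directly onto most HTTP clients (for example, the keyword arguments of requests.post), so the actual send step is a one-liner in your framework of choice.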
Supported OpenAI endpoints
Setapp acts as a proxy between your app and OpenAI and forwards your requests to the OpenAI API.
We currently support the following OpenAI endpoints:
- v1/responses – the unified responses endpoint for modern GPT and O3 reasoning models (recommended).
- v1/chat/completions – the legacy chat completions endpoint (still supported for compatibility).
Use v1/responses for all new integrations and for all reasoning models such as o3-deep-research and GPT-5.1.
This is an example with the v1/chat/completions endpoint for existing integrations:
curl -X POST \
-H 'OpenAIPath: v1/chat/completions' \
-H 'Authorization: Bearer <Setapp-token>' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-d requestBody \
https://vendor-api.setapp.com/resource/v1/ai/openai
Where:
- The OpenAIPath header contains the desired OpenAI endpoint (for example, v1/chat/completions or v1/responses).
- The Authorization header contains a Bearer access token received from the Vendor API Authorization flow with the ai.openai scope.
- requestBody is your regular request body for the OpenAI API, for example:
{
"model": "gpt-4",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}
Find more examples of requests in the OpenAI API documentation.
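Since v1/responses is recommended for new integrations, here is the same "Hello!" call expressed as a v1/responses body. This is a hedged sketch: the input field shape follows OpenAI's public Responses API, so verify the exact fields against OpenAI's documentation before relying on it. The body is sent through the same Setapp proxy, only with the header OpenAIPath: v1/responses.

```python
import json

# Equivalent request body for the v1/responses endpoint. The Responses API
# accepts an "input" array of role/content messages in place of "messages";
# confirm field names in OpenAI's own docs before shipping.
responses_body = {
    "model": "gpt-5.1",
    "input": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

print(json.dumps(responses_body, indent=2))
```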
Main specs of API
- The API endpoint is available for app developers as https://vendor-api.setapp.com/resource/v1/ai/openai
- Full API endpoint specification: https://docs.setapp.com/reference/post_ai-openai
- Live list of supported models: the complete, up-to-date list of OpenAI model IDs supported by Setapp is available from the Vendor API.
- Available OpenAI endpoints
- Use a POST request with the access token received from the current auth flow and the full OpenAI request body (without the OpenAI key).
- The endpoint has its own scope: ai.openai.
- If a user wants to use their own OpenAI key, make requests directly to OpenAI; Setapp's endpoint won't accept such requests.
- The API endpoint has the following limits:
- 400 req/minute/user and 7500 req/day/user for GPT-4 and GPT-5.
- 10,000 req/hour/user and 10,000 req/day/user for embedding models.
- 10,000 req/hour/user and 20,000 req/day/user for all other models.
- 160,000 tokens input size for the Enthusiast subscription plan and 1,600,000 for the Expert subscription plan.
- Available responses:
- 200 — Successful request.
- 400 — Bad request. It's an issue with the request body or the OpenAI path header.
- 401 — Request unauthorized. Check your access token.
- 403 — Access is forbidden. Check your access token scope.
- 429 — Too many requests. Try again later.
- 500 — Unexpected server error. Try again later or contact the Setapp team.
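The status codes above suggest a simple client-side policy: 400/401/403 indicate a problem with the request or token and should be surfaced immediately, while 429 and 500 are transient and worth retrying after a delay. A minimal sketch of that policy (illustrative, not Setapp-provided code):

```python
# Transient statuses per the list above: rate limiting (429) and
# unexpected server errors (500) are retried; auth and request errors
# (400, 401, 403) are returned to the caller immediately.
RETRYABLE = {429, 500}

def should_retry(status: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only transient statuses, up to max_attempts tries."""
    return status in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt: int, base: float = 1.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... for attempts 0, 1, 2, ..."""
    return base * (2 ** attempt)
```

Pairing the backoff with the per-minute request limits keeps well-behaved clients under the 429 threshold instead of hammering the endpoint.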
How to integrate DALL·E in Setapp and calculate a fee
You can integrate the DALL·E model into your AI app and include it in the Setapp version of your app. Setapp now supports DALL·E 2 and DALL·E 3 alongside GPT-4o.
See OpenAI pricing when calculating your expenses.
API description
Refer to the relevant API description on the OpenAI website.
Use this endpoint for image generation in Setapp, similar to the flow described in How to integrate your AI app into Setapp. In the request body, you can define the image model, the number of images, and their size.
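For illustration, here is a hypothetical request body for the v1/images/generations endpoint, sent through the Setapp proxy with the header OpenAIPath: v1/images/generations. Field names follow OpenAI's public Images API; confirm the current parameter values in OpenAI's documentation.

```python
# Example v1/images/generations body for the Setapp proxy.
# Field names per OpenAI's Images API; values are illustrative.
image_body = {
    "model": "dall-e-3",                                   # or "dall-e-2"
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "n": 1,                                                # number of images
    "size": "1024x1024",                                   # image dimensions
}
```

Remember that image generation is billed per image rather than per token, so consult OpenAI pricing when estimating costs, as noted above.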
