SetappAI SDK integration

This guide will help you integrate Setapp AI capabilities into your iOS or macOS app.

Core concepts & capabilities

The SetappAI SDK provides the following capabilities through the Responses API:

  • Model discovery: List available AI models from multiple providers.
  • Text streaming: Real-time streaming responses via Server-Sent Events.
  • Conversation context: Maintain multi-turn conversations with context continuity.
  • Structured outputs: Generate JSON-formatted responses with schema validation.
  • Function calling: Enable models to call defined functions/tools.
  • Reasoning models: Support for models with enhanced reasoning capabilities.

Each model exposes its supported capabilities through the capabilities property, allowing you to check feature support before use.
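For instance, you might surface a feature only for models that declare the matching capability. The sketch below assumes the `.functionCalling` capability case and the `SetappAIAPI.Model` type name (the SDK's actual model type may differ); it relies only on the `capabilities` property described above:

```swift
import Setapp
import SetappAI

// Sketch: keep only models that declare function-calling support,
// e.g. before exposing a tool-use feature in the UI.
// `SetappAIAPI.Model` and `.functionCalling` are assumed names.
func toolCapableModels() async throws -> [SetappAIAPI.Model] {
    let models = try await SetappManager.shared.ai.models.list()
    return models.filter { $0.capabilities.contains(.functionCalling) }
}
```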

Setup

1. Configure authentication

The SetappAI SDK handles user authentication automatically through Setapp's OAuth system with the ai.openai scope. To enable this, you need to configure the SDK with your OAuth client credentials obtained from your developer account:

import Setapp
import SetappAI

SetappManager.shared.ai.set(configuration: .init(
    authConfiguration: AuthConfiguration(
        oauthClientId: "your-oauth-client-id",
        oauthSecret: "your-oauth-secret"
    )
))

2. Access the AI API

The AI functionality is accessed through the SetappManager.shared.ai instance:

let ai = SetappManager.shared.ai

Basic usage

The examples below demonstrate common usage patterns of the Responses API.

Fetch available models

Retrieve the list of AI models available to your users:

let models = try await ai.models.list()

// Each model contains:
// - id: Unique identifier (e.g., "gpt-4o", "claude-3-opus")
// - mode: Operational mode (e.g., .chat, .embedding)
// - capabilities: Supported features (e.g., .vision, .functionCalling)

Create a streaming response

Create a streaming conversation with an AI model:

// Select a model from the list
let model = selectedModel // From models.list()

// Create a streaming request
let stream = try await SetappManager.shared.ai.responses.createStream(
    model: model,
    input: [.message("Hello, how are you?")]
)

// Process streaming events
// Note: If a streaming error occurs, it throws and terminates the stream
for try await event in stream {
    if case let .response(.outputText(.delta(delta))) = event {
        // Handle text delta (incremental response)
        let text = delta.delta
    }
}

Maintain conversation context

To maintain conversation context across multiple messages, use the previousResponseID parameter:

var conversationResponseId: String?

// First message
let stream1 = try await ai.responses.createStream(
    model: model,
    input: [.message("What is Swift?")],
    previousResponseID: nil,
    store: true
)

for try await event in stream1 {
    if case let .response(.responseLifecycle(.completed(lifecycleEvent))) = event {
        conversationResponseId = lifecycleEvent.response.id
    }
}

// Follow-up message (maintains context)
let stream2 = try await ai.responses.createStream(
    model: model,
    input: [.message("Can you give an example?")],
    previousResponseID: conversationResponseId,
    store: true
)

Provide input

The API supports various input types through the Input enum:

// Simple text message (automatically creates a user message)
let input1: SetappAIAPI.Responses.Input = .message("Your message here")

// String literal (shorthand for .message())
let input2: [SetappAIAPI.Responses.Input] = ["Your message here"]

// Custom message with role
let input3: SetappAIAPI.Responses.Input = .message(
    SetappAIAPI.Responses.Input.Message(
        content: [.text(.init("Your message"))],
        role: .user
    )
)

Error handling

Automatic error presentation (default)

By default, the SetappAI SDK uses .autoPresent mode, which automatically presents errors to users in the UI with appropriate recovery options. You don't need to explicitly handle errors in this mode.

Manual error handling

If you need custom error handling, configure the SDK to propagate errors by setting mode: .propagate:

SetappManager.shared.ai.set(configuration: .init(
    authConfiguration: AuthConfiguration(
        oauthClientId: "your-oauth-client-id",
        oauthSecret: "your-oauth-secret"
    ),
    mode: .propagate  // Override default .autoPresent
))

Then catch and handle SetappAIError in your code:

do {
    let models = try await ai.models.list()
    // Handle success
} catch let error as SetappAIError {
    switch error.code {
    case .insufficientCredits:
        // Handle insufficient credits
        if let recoveryURL = error.recoveryURL {
            // Direct user to purchase credits
        }
    case .rateLimit:
        // Handle rate limit
    case .modelNotAllowed:
        // Handle model access issue
    case .general:
        // Handle general error
    }
} catch {
    // Handle other errors
}

SetappAI error codes

  • .general - General or unknown error
  • .rateLimit - Rate limit exceeded
  • .insufficientCredits - User needs to purchase more credits
  • .modelNotAllowed - Model not available or not allowed for this user
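When running in .propagate mode, these codes can be mapped to user-facing copy in one place. The helper below is a sketch built only from the codes listed above; the message wording is illustrative and not part of the SDK:

```swift
import SetappAI

// Sketch: centralize user-facing text for each SetappAI error code.
// Messages are illustrative placeholders, not SDK-provided strings.
func userMessage(for error: SetappAIError) -> String {
    switch error.code {
    case .insufficientCredits:
        return "You're out of AI credits. Follow the link to top up."
    case .rateLimit:
        return "Too many requests. Please try again in a moment."
    case .modelNotAllowed:
        return "This model isn't available for your account."
    case .general:
        return "Something went wrong. Please try again."
    }
}
```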

Handling stream cancellation

For streaming operations, you can handle cancellation:

let task = Task {
    do {
        let stream = try await ai.responses.createStream(...)
        for try await event in stream {
            guard !Task.isCancelled else {
                break
            }
            // Process event
        }
    } catch is CancellationError {
        // Handle cancellation
    } catch {
        // Handle other errors
    }
}

// Cancel the stream when needed
task.cancel()