Using the Swift OpenAPI Generator for the Jamf Pro API

For the past several months I have been learning Swift and SwiftUI and have finally reached the point where I want to build a few small, but capable apps to start putting together my new skills. One of these ideas requires interacting with the Jamf Pro API. I had not done much with network code at this point, but I remembered a session from WWDC 2023 that I was very interested in: “Meet Swift OpenAPI Generator.”

These Swift packages generate API clients and model code from OpenAPI documents, an approach known as “spec-driven development.” While it’s not a lot of work to write a few API requests with URLSession, far more effort goes into the interfaces for those operations, and even more into the models that the responses decode into.

My approach to learning Swift has also been to focus on where we are going as platform engineers, and using OpenAPI to drive client code feels like the most correct approach.

There’s a bit of process to get through.

  • Xcode Setup: Install the Swift OpenAPI packages required and configure the build settings.
  • OpenAPI Doc: Copy the Jamf Pro OpenAPI document, update it, and configure the generator.
  • Auth Middleware: Requests need to be authenticated with an access token. This is handled by creating a middleware that will fetch and insert tokens into client requests.
  • Client Code: The generated client needs to be configured so that it can be used in the main application.

It is a bit of work up front, but I’ll showcase the benefits with a small example app, and how to extend these resources further.

This entire example project is now available on my GitHub.

The OpenAPI Generator

You will first need to install three packages using the Swift Package Manager. Unlike languages such as Python and JavaScript, Swift has no central package repository; packages are shared through git repositories. From the menu bar, select File > Add Package Dependencies… This brings up Xcode’s interface for the package manager.

There is a default collection of Apple Swift packages in the sidebar, but the OpenAPI packages are not included in it. You will need to copy and paste the GitHub URLs for the three required packages into the search bar in the upper-right. I have provided them below:

  • https://github.com/apple/swift-openapi-generator (the build plugin)
  • https://github.com/apple/swift-openapi-runtime
  • https://github.com/apple/swift-openapi-urlsession (the URLSession transport)

When viewing a package you will see the Dependency Rule defaults to Up to Next Major Version. Swift packages follow semantic versioning. Each Swift OpenAPI package is at major version 1 at the time of this post. This dependency rule will pull in all updates for those packages up until they move to the next major version (2).

This is a sane default for your dependencies and I recommend keeping it.
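
As an aside, if you manage dependencies in a Package.swift instead of through Xcode’s UI, roughly equivalent declarations would look like the sketch below. The package name, platforms, and target are placeholder assumptions; from: "1.0.0" expresses the same Up to Next Major Version rule.

// swift-tools-version: 5.9
// A minimal sketch of a Package.swift pulling in the same three packages.
import PackageDescription

let package = Package(
    name: "JamfProClientExample", // placeholder name
    platforms: [.macOS(.v13), .iOS(.v16)],
    dependencies: [
        .package(url: "https://github.com/apple/swift-openapi-generator", from: "1.0.0"),
        .package(url: "https://github.com/apple/swift-openapi-runtime", from: "1.0.0"),
        .package(url: "https://github.com/apple/swift-openapi-urlsession", from: "1.0.0")
    ],
    targets: [
        .target(
            name: "JamfProClientExample",
            dependencies: [
                // Only the Runtime and Transport products are linked into the target
                .product(name: "OpenAPIRuntime", package: "swift-openapi-runtime"),
                .product(name: "OpenAPIURLSession", package: "swift-openapi-urlsession")
            ],
            plugins: [
                // The generator runs as a build plugin rather than being linked
                .plugin(name: "OpenAPIGenerator", package: "swift-openapi-generator")
            ]
        )
    ]
)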

For each package, paste in the URL to the search and click Add Package. There will be a pop-up window asking you to add package products to targets in your project.

When you install the Plugin there will be two products listed. Do not add them (leave at None).

When you install the Runtime and Transport they contain one product each and both should be added to your project’s target.

In Xcode’s sidebar a new Package Dependencies section will appear below your project files. It will list every package installed into your project. You’ll notice that there are a lot more than the three you just added. These are sub-dependencies the OpenAPI packages rely on.

Now the OpenAPI plugin must be added to the project target’s build phases. Navigate back to the project settings and select your target. Go to the Build Phases section and expand Run Build Tool Plug-ins. Click the + button. In the pop-up window, under the swift-openapi-generator package, you will see an OpenAPIGenerator item with a bullseye icon. Select it and click Add.

Expand the Link Binary With Libraries section below and you should see both OpenAPIRuntime and OpenAPIURLSession already listed. This was done when you added the package products to the target.

The Xcode project is now set up and ready for the API client.

Jamf Pro OpenAPI Doc

This post will only cover the Pro API and not the Classic API.

Any Jamf Pro server has the OpenAPI document for its version available at the following URL:

  • https://<instance-name>.jamfcloud.com/api/schema/

As of version 11.7.1, this JSON file is 1.5 MB in size and over 45,000 lines long. If you try to build the client from this raw file you will encounter errors, and then new errors as you start to patch those over.

Some of these errors are due to the Swift OpenAPI Generator not supporting an option that was defined, but most are errors Jamf introduced when generating the document.

In the Appendix of this blog post I will include guidance for how to correct the errors I encountered in the 11.7.1 Pro OpenAPI doc.

There are still two remaining issues with Jamf’s OpenAPI doc to address before using it to generate client code.

For the overwhelming majority of paths there is no operationId. The OpenAPI generator uses this to create the method name in the client. Without it, the generator autogenerates names that look like this: get_sol_v1_sol_computers_hyphen_inventory. That is the default generated name for GET /v1/computers-inventory. It makes for hard-to-read and hard-to-discover code.

The other issue is that you are generating client code for hundreds of API endpoints that you will not use, and likely would never use. The generated Client file for the full OpenAPI doc was 57,000+ lines long, and the generated Types file a staggering 155,500+ lines: roughly 10 MB of unused Swift code.

The missing operationId properties can be manually addressed. For the APIs you intend to use you would add them into the path objects like so:

{
  ...
  "/v1/computers-inventory" : {
    "get" : {
      "operationId": "ComputersInventoryGetV1",
      ...
    }
  }
}

The naming scheme I recommend is {Path}{Method}{Version} in capital-case without spaces, underscores, or hyphens as shown above. This naming scheme makes it easy to see all of the available methods and versions of an API when Xcode shows autocompletion options as you type.

The challenge of the large number of APIs you don’t intend to use is addressed by properly configuring the OpenAPI generator.

I also tried creating a minimal OpenAPI document using openapi-extract to pull out the paths and schemas I wanted and then manually merging them into a single file afterward. This is, however, a very manual process, and openapi-extract is a JavaScript command line tool with very little instruction on how to set it up.

Plugin Configuration

The next file you are going to add will be openapi-generator-config.yaml with the following contents:

generate:
  - client
  - types
filter:
  paths:
    - /v1/computers-inventory
    - /v1/jamf-pro-version

This file will instruct the generator to create client code from the OpenAPI doc and Swift types from the schemas. The types are critically important. These will be Swift structs returned by the client operations with properties that can be accessed through dot notation. Xcode’s autocomplete will show all of the possible values as you type making interacting with the response data simple and easy.

The third option for generate is server. You can create all of the stubs for the API itself using a web framework like Vapor. This will be worth exploring another day.

The filter property will only generate code for items that match the criteria. In the example above I am only asking for two APIs. At any time you can add additional paths to expand the capabilities of the client code.

Less code is the best code.

The First Build

Without adding a single Swift file you can now attempt the first build.

Go to the menu bar and select Product > Build or press ⌘ + B on your keyboard. The very first time you use the plugin, a dialog will appear asking you to confirm that you trust it. To continue, click Trust & Enable All.

If you encounter build errors at this step you will need to investigate the Issue and Reports navigators to find the cause. If there are errors related to the OpenAPI doc jump to the bottom of the blog in the Appendix where I have a section on how to address errors I encountered.

Sometimes your changes to the OpenAPI doc won’t reflect correctly in your code when you rebuild. You can clean the build caches by pressing ⌘ + ⌥ + ⇧ + K.

The Client Code

The generated Client, Operations, and Components objects are now available to import.

Were this API unauthenticated the Client could be used directly, but Jamf Pro requires authentication with an access token. The example in this post focuses on the client credentials flow using a Jamf Pro API Client.

Press ⌘ + N and create a new Swift file in your project called JamfAPIClient.swift.

Add these imports:

// JamfAPIClient.swift

import Foundation
import HTTPTypes
import OpenAPIRuntime
import OpenAPIURLSession

A wrapper struct will be needed to handle all of the configuration and token management boilerplate code. This will become the main interface for the Jamf Pro API instead of using the Client directly.

struct JamfProAPIClient {
    let api: Client

    let clientId: String
    private let clientSecret: String

    init(hostname: String, clientID: String, clientSecret: String) {
        self.clientId = clientID
        self.clientSecret = clientSecret
        self.api = Client(
            serverURL: URL(string: "https://\(hostname):443/api")!,
            configuration: Configuration(dateTranscoder: .iso8601WithFractionalSeconds),
            transport: URLSessionTransport()
        )
    }
}

Where the inner Client is instantiated, a URL is constructed from the passed hostname. The URLSessionTransport is the one installed with the swift-openapi-urlsession package.

The Configuration being passed sets a different date transcoder than the default. Date strings in Jamf Pro contain fractional seconds*. This needs to be set or else decoding errors will occur for timestamps that include them.

Configuration(dateTranscoder: .iso8601WithFractionalSeconds)
  • See the Appendix for issues I encountered with ISO8601 date string decoding.

With all the work for setup now handled by the wrapper, here is the new client in action:

// Example use
let client = JamfProAPIClient(
    hostname: "dummy.jamfcloud.com",
    clientID: "43fd12fc...",
    clientSecret: "Fn96LFQP..."
)
print(client.clientId) // Inspect and identify clients
let jamfProVersion = try await client.api.JamfProVersionGetV1()
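
The generated response is a typed value rather than raw data. As a small, hedged sketch (assuming the jamf-pro-version schema exposes an optional version string), the 200 response body can be unwrapped like this:

// Hedged sketch: .ok throws if the response was anything other than 200,
// and version is assumed to be an optional String in the generated type.
let version = try jamfProVersion.ok.body.json.version ?? "unknown"
print("Jamf Pro version: \(version)")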

Adding Authentication

The code thus far does not yet include authentication. To do this a middleware must be created that handles obtaining access tokens using the client credentials and injecting that token into the requests. It should also cache the token, reusing it for its lifetime, and refresh the token in a way that is thread-safe.

The ClientMiddleware protocol allows custom code for inspecting and modifying requests before they are sent to the transport. Multiple middlewares can be passed to a client to handle different operations like logging, header manipulation, and authentication.

This is the minimal code to start:

struct APIClientMiddleware: ClientMiddleware {
    // Store the access token here
    func intercept(
        _ request: HTTPRequest,
        body: HTTPBody?,
        baseURL: URL,
        operationID: String,
        next: (HTTPRequest, HTTPBody?, URL) async throws -> (HTTPResponse, HTTPBody?)
    ) async throws -> (HTTPResponse, HTTPBody?) {
        var request = request
        // Retrieve and inject the access token here
        return try await next(request, body, baseURL)
    }
}

Because this is conforming to a protocol, Xcode can autocomplete the entire signature for intercept for you as you type.

The comments identify where the code for the token needs to be added. Before writing the code that calls POST /api/oauth/token there needs to be an object to store the token data from the response and evaluate if it is still valid.

This struct is written to be instantiated from the JSON response for client credentials authentication:

struct AccessToken: Codable {
    let access_token: String
    let expires_in: Int
    let expiration_date: Date

    var isExpired: Bool {
        return expiration_date < Date()
    }

    init(from decoder: any Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        self.access_token = try container.decode(String.self, forKey: .access_token)
        self.expires_in = try container.decode(Int.self, forKey: .expires_in)
        self.expiration_date = Date().addingTimeInterval(Double(expires_in))
    }
}

isExpired is a computed property that returns true if the current time has passed the calculated expiration when it is called.

Because both the client and the middleware are asynchronous there is a risk of a race condition where multiple threads attempt to refresh the token at the same time. Implementing the AccessTokenManager as an Actor will help address this.

Actors are like classes, but access to their properties and methods are serialized. If multiple threads performing requests all trigger the creation of a new token only one needs to occur and the rest will queue until they retrieve the newly cached token.

actor AccessTokenManager {
    private let tokenURL: URL
    private let clientId: String
    private let clientSecret: String

    var currentToken: AccessToken?
    var activeTokenTask: Task<AccessToken, Error>?

    init(tokenURL: URL, clientId: String, clientSecret: String) {
        self.tokenURL = tokenURL
        self.clientId = clientId
        self.clientSecret = clientSecret
    }
}

The AccessTokenManager will take in the URL to request tokens from, the client ID, and client secret. Internally, it will store the current access token using the struct from above, and a Task. The task will be used to control concurrency on retrieving tokens.

The token manager requires its own network code apart from the API client. This is a custom error that will be thrown if any part of the token request fails:

enum JamfProAPIClientError: Error {
    case AuthError(String)
}

The method to request access tokens will look similar to many other examples of URLSession you may have seen. It is also a look at the verbose code we want to avoid having to write. Every API would require data model code (the AccessToken struct above), and HTTP request code.

This code follows Jamf’s recipe for client credentials auth on the developer portal.

func requestAccessToken() async throws -> AccessToken {
    var request = URLRequest(url: tokenURL)

    request.httpMethod = "POST"

    request.setValue("application/json", forHTTPHeaderField: "Accept")
    request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")

    var body = URLComponents()
    body.queryItems = [
        URLQueryItem(name: "grant_type", value: "client_credentials"),
        URLQueryItem(name: "client_id", value: clientId),
        URLQueryItem(name: "client_secret", value: clientSecret)
    ]
    request.httpBody = body.query?.data(using: .utf8)

    let (data, response) = try await URLSession.shared.data(for: request)

    guard let httpResponse = response as? HTTPURLResponse else {
        throw JamfProAPIClientError.AuthError("Token request failed with response: \(response)")
    }

    if httpResponse.statusCode != 200 {
        throw JamfProAPIClientError.AuthError("Token request failed with status code: \(httpResponse.statusCode)")
    }

    guard let newAccessToken = try? JSONDecoder().decode(AccessToken.self, from: data) else {
        throw JamfProAPIClientError.AuthError("Failed to decode access token: \(data)")
    }

    return newAccessToken
}

Now the interface for thread-safe token requests. getAccessToken will be called by the middleware to return the current valid token that has been cached.

func getAccessToken() async throws -> AccessToken {
    if let activeTokenTask {
        return try await activeTokenTask.value
    }
    
    if let currentToken, !currentToken.isExpired {
        return currentToken
    }
    
    activeTokenTask = Task {
        try await requestAccessToken()
    }
    
    guard let newToken = try await activeTokenTask?.value else {
        throw JamfProAPIClientError.AuthError("Failed to return access token")
    }
    currentToken = newToken
    activeTokenTask = nil

    return newToken
}

Here is a breakdown of the logic above:

  1. Check if there is an active task. If there is, another thread is requesting a new access token and this one will wait for it to complete and return the value.
  2. Check if there is a current token and that it is not expired. If the token exists and is valid it will be returned.
  3. If neither of the above conditions are met a new token will be requested and returned.

Here is the complete AccessTokenManager:

actor AccessTokenManager {
    private let tokenURL: URL
    private let clientId: String
    private let clientSecret: String

    var currentToken: AccessToken?
    var activeTokenTask: Task<AccessToken, Error>?

    init(tokenURL: URL, clientId: String, clientSecret: String) {
        self.tokenURL = tokenURL
        self.clientId = clientId
        self.clientSecret = clientSecret
    }

    func getAccessToken() async throws -> AccessToken {
        if let activeTokenTask {
            return try await activeTokenTask.value
        }
        
        if let currentToken, !currentToken.isExpired {
            return currentToken
        }
        
        activeTokenTask = Task {
            try await requestAccessToken()
        }
        
        guard let newToken = try await activeTokenTask?.value else {
            throw JamfProAPIClientError.AuthError("Failed to return access token")
        }
        currentToken = newToken
        activeTokenTask = nil

        return newToken
    }

    func requestAccessToken() async throws -> AccessToken {
        var request = URLRequest(url: tokenURL)

        request.httpMethod = "POST"

        request.setValue("application/json", forHTTPHeaderField: "Accept")
        request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")

        var body = URLComponents()
        body.queryItems = [
            URLQueryItem(name: "grant_type", value: "client_credentials"),
            URLQueryItem(name: "client_id", value: clientId),
            URLQueryItem(name: "client_secret", value: clientSecret)
        ]
        request.httpBody = body.query?.data(using: .utf8)

        let (data, response) = try await URLSession.shared.data(for: request)

        guard let httpResponse = response as? HTTPURLResponse else {
            throw JamfProAPIClientError.AuthError("Token request failed with response: \(response)")
        }

        if httpResponse.statusCode != 200 {
            throw JamfProAPIClientError.AuthError("Token request failed with status code: \(httpResponse.statusCode)")
        }

        guard let newAccessToken = try? JSONDecoder().decode(AccessToken.self, from: data) else {
            throw JamfProAPIClientError.AuthError("Failed to decode access token: \(data)")
        }

        return newAccessToken
    }
}

And here it is integrated back into the APIClientMiddleware:

struct APIClientMiddleware: ClientMiddleware {
    let accessTokenManager: AccessTokenManager

    init(accessTokenManager: AccessTokenManager) {
        self.accessTokenManager = accessTokenManager
    }

    func intercept(
        _ request: HTTPRequest,
        body: HTTPBody?,
        baseURL: URL,
        operationID: String,
        next: (HTTPRequest, HTTPBody?, URL) async throws -> (HTTPResponse, HTTPBody?)
    ) async throws -> (HTTPResponse, HTTPBody?) {
        guard let accessToken = try? await accessTokenManager.getAccessToken() else {
            throw JamfProAPIClientError.AuthError("Failed to fetch access token")
        }

        var request = request
        request.headerFields[.authorization] = "Bearer \(accessToken.access_token)"

        return try await next(request, body, baseURL)
    }
}

The complete middleware solution can now be passed to the client code:

struct JamfProAPIClient {
    public let api: Client

    let clientId: String
    private let clientSecret: String

    init(hostname: String, clientID: String, clientSecret: String) {
        self.clientId = clientID
        self.clientSecret = clientSecret
        self.api = Client(
            serverURL: URL(string: "https://\(hostname):443/api")!,
            configuration: Configuration(dateTranscoder: .iso8601WithFractionalSeconds),
            transport: URLSessionTransport(),
            middlewares: [
                APIClientMiddleware(
                    accessTokenManager: AccessTokenManager(
                        tokenURL: URL(string: "https://\(hostname):443/api/oauth/token")!,
                        clientId: clientID,
                        clientSecret: clientSecret
                    )
                )
            ]
        )
    }
}

Using the Client

Now that all of the work for setting up and creating the Jamf Pro API client is done it is time to put it to use and demonstrate how powerful the Swift OpenAPI Generator is.

Below is a small SwiftUI app using the JamfProAPIClient above to render a list of computers displaying their names, the management ID, and the assigned user. It also displays the total number of computers at the top.

An iPhone simulator screenshot showing a list of Jamf Pro computer entries

Here is the complete code:

// ContentView.swift

import SwiftUI

struct ContentView: View {
    @State private var client = JamfProAPIClient(
        hostname: "dummy.jamfcloud.com",
        clientID: "43fd12fc...",
        clientSecret: "Fn96LFQP..."
    )

    @State private var computerSearchResults: Components.Schemas.ComputerInventorySearchResults?

    var body: some View {
        List {
            Section {
                HStack {
                    Text("Total computers:")
                        .font(.headline)
                    Spacer()
                    Text(String(computerSearchResults?.totalCount ?? 0))
                }
            }

            Section {
                if let computerResults = computerSearchResults?.results {
                    ForEach(computerResults, id: \.self) { computer in
                        VStack(alignment: .leading) {
                            Text("\(computer.general?.name ?? "Unknown") | \(computer.id ?? "Unknown")")
                                .font(.headline)
                            Text(computer.general?.managementId ?? "Unknown")
                                .font(.caption)
                                .textSelection(.enabled)
                            HStack {
                                Text("Assigned User:")
                                Text(computer.userAndLocation?.username ?? "Unknown")
                            }
                        }
                    }
                }
            }
        }
        .task {
            do {
                let response = try await client.api.ComputersInventoryGetV1(
                    .init(
                        query: .init(
                            section: [.GENERAL, .USER_AND_LOCATION],
                            page: 0,
                            page_hyphen_size: 1000
                        )
                    )
                )
                computerSearchResults = try response.ok.body.json
            } catch {
                print(error.localizedDescription)
            }
        }
    }
}

The client is instantiated as a property of the view struct. The other property is to hold the response of the GET /v1/computers-inventory API. Components contains generated types from the OpenAPI doc. It follows the same structure and names as the components object in the doc.

@State private var computerSearchResults: Components.Schemas.ComputerInventorySearchResults?

The view will automatically load data into computerSearchResults at launch. The task modifier contains the client call to ComputersInventoryGetV1.

let response = try await client.api.ComputersInventoryGetV1(
    .init(
        query: .init(
            section: [.GENERAL, .USER_AND_LOCATION],
            page: 0,
            page_hyphen_size: 1000
        )
    )
)
computerSearchResults = try response.ok.body.json

For the sake of simplicity this code is embedded within the .task {} modifier. A better, more organized approach would be to move it into its own function and call that, as sketched below.
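
As a minimal sketch of that refactor (assuming the method is added inside ContentView so it can reach the client and the @State property):

// Inside ContentView
func loadComputers() async {
    do {
        let response = try await client.api.ComputersInventoryGetV1(
            .init(
                query: .init(
                    section: [.GENERAL, .USER_AND_LOCATION],
                    page: 0,
                    page_hyphen_size: 1000
                )
            )
        )
        computerSearchResults = try response.ok.body.json
    } catch {
        print(error.localizedDescription)
    }
}

The modifier then becomes .task { await loadComputers() }.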

This is a very elegant interface to what is a fairly complex API. GET /v1/computers-inventory uses query string parameters to control and filter the returned computers. The sections are the parts of the computer object to include. In code, this is an array of ComputerSection enums covering all of the valid values, because it was generated from the OpenAPI definition.

Imagine having to code all of this by hand.

response.ok.body.json returns the ComputerInventorySearchResults type. Once this happens the SwiftUI code will automatically render the list.

if let computerResults = computerSearchResults?.results {
    ForEach(computerResults, id: \.self) { computer in
        VStack(alignment: .leading) {
            Text("\(computer.general?.name ?? "Unknown") | \(computer.id ?? "Unknown")")
                .font(.headline)
            Text(computer.general?.managementId ?? "Unknown")
                .font(.caption)
                .textSelection(.enabled)
            HStack {
                Text("Assigned User:")
                Text(computer.userAndLocation?.username ?? "Unknown")
            }
        }
    }
}

The results property is an array of ComputerInventory types. If the results have been loaded, the ForEach loop will display a row for every computer. All of the information that is being displayed is being accessed through dot notation on the record.

Because most properties in the Jamf Pro OpenAPI schemas are optional (meaning they may be null/nil), nil coalescing with ?? is needed to provide a default value when one cannot be read.

Note that computerResults does not conform to Identifiable. This appears to be the case for any array in the generated types, which is expected: the generator cannot guarantee that the contained items are unique. A fix is covered in the Schema Extensions section below.

Extending the Client

Now that you have seen how easy it is to use the Jamf Pro API after creating a client using the OpenAPI generator, let’s see how easy it is to extend this foundation with new capabilities.

First, new APIs can be included with the client by adding them to the filter of the openapi-generator-config.yaml.

generate:
  - client
  - types
filter:
  paths:
    - /v1/computers-inventory
    - /v1/computers-inventory-detail/{id}
    - /v1/jamf-pro-version

Now in code a single, full computer record can be requested by its ID:

let response = try await client.api.ComputersInventoryDetailByIdGetV1(
    .init(
        path: .init(
            id: "117"
        )
    )
)

You may be wondering about the shorthand inits that are happening, and why there are so many of them. It may make more sense if you see the full names for the same method call:

let response = try await client.api.ComputersInventoryDetailByIdGetV1(
    Operations.ComputersInventoryDetailByIdGetV1.Input.init(
        path: Operations.ComputersInventoryDetailByIdGetV1.Input.Path.init(
            id: "117")
    )
)

Every API request’s input and response are defined as types, and those objects define all of the possible options as types. GET /v1/computers-inventory-detail/{id} takes a path argument as a string (the computer ID). When writing the request using the OpenAPI client each of these types must be instantiated. Swift provides shorthand syntax to spare you all of that verbose typing.

Go back and take another look at ComputersInventoryGetV1 with this newfound knowledge.

Extending OpenAPI

Missing or undocumented APIs can also be added to the OpenAPI doc and be made available in the client. The POST /api/oauth/token endpoint used by the AccessTokenManager is not documented. While all of the code in the token manager is available, it would be more convenient to have a method to request arbitrary tokens as needed.

Here is the OpenAPI JSON for the token endpoint:

{
  "paths": {
    "/oauth/token": {
      "post": {
        "operationId": "AccessTokenRequest",
        "requestBody": {
          "required": true,
          "content": {
            "application/x-www-form-urlencoded": {
              "schema": {
                "type": "object",
                "required": [
                  "client_id",
                  "client_secret",
                  "grant_type"
                ],
                "properties": {
                  "client_id": {
                    "type": "string"
                  },
                  "client_secret": {
                    "type": "string"
                  },
                  "grant_type": {
                    "type": "string"
                  }
                }
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": {
                    "access_token": {
                      "type": "string"
                    },
                    "expires_in": {
                      "type": "integer"
                    },
                    "scope": {
                      "type": "string"
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

This can be added to the top of the paths object in the OpenAPI doc. Once added, trigger a new build and the API method will be available. Scroll back to the AccessTokenManager to remember the code required for that single URLSession request.

Now compare to the new AccessTokenRequest method:

let response = try await client.api.AccessTokenRequest(
    body: .urlEncodedForm(
        .init(
            client_id: clientId,
            client_secret: clientSecret,
            grant_type: "client_credentials"
        )
    )
)
return try response.ok.body.json.access_token

All our code should be so pleasant.
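
As a hedged sketch (assuming it lives in JamfAPIClient.swift so it can read the private clientSecret), the same call could be wrapped as a convenience method on the JamfProAPIClient struct:

extension JamfProAPIClient {
    // Requests a fresh access token using the generated AccessTokenRequest operation.
    // This does not replace the AccessTokenManager used by the middleware; it is only
    // a convenience for requesting arbitrary tokens on demand.
    func requestAccessToken() async throws -> String {
        let response = try await api.AccessTokenRequest(
            body: .urlEncodedForm(
                .init(
                    client_id: clientId,
                    client_secret: clientSecret,
                    grant_type: "client_credentials"
                )
            )
        )
        guard let token = try response.ok.body.json.access_token else {
            throw JamfProAPIClientError.AuthError("Token response did not include access_token")
        }
        return token
    }
}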

Helper Methods

The earlier example usage of ComputersInventoryGetV1 set the page size to 100, but the total count for all computers was 101. Newer Jamf Pro APIs are paginated, and for larger datasets repeated calls are required to obtain the full result.

Below is a method I wrote and added to the JamfProAPIClient that wraps ComputersInventoryGetV1, detects if there are more computers reported for the total than have been returned, and loops requests until it has exhausted all possible pages of the original query.

func ComputerInventoryGetV1AllPages(
    query: Operations.ComputersInventoryGetV1.Input.Query = .init(page: 0, page_hyphen_size: 2000)
) async throws -> Components.Schemas.ComputerInventorySearchResults {
    var currentPage = max((query.page ?? 0) - 1, -1)
    var computerResults = Components.Schemas.ComputerInventorySearchResults(totalCount: 1, results: [])

    while computerResults.results!.count < computerResults.totalCount! {
        currentPage += 1

        let nextPage = try await api.ComputersInventoryGetV1(
            .init(
                query: .init(
                    section: query.section,
                    page: currentPage,
                    page_hyphen_size: query.page_hyphen_size,
                    sort: query.sort,
                    filter: query.filter
                )
            )
        )

        let nextPageResults = try nextPage.ok.body.json

        computerResults.totalCount = nextPageResults.totalCount ?? 0

        if nextPageResults.results!.count == 0 {
            return computerResults
        } else {
            computerResults.results?.append(contentsOf: nextPageResults.results!)
        }
    }

    return computerResults
}

There are a lot of force unwraps (!) in this code for the totalCount and results of the inventory response. This is intentional: those values are guaranteed to exist and can never actually be nil/null. The API will return 0 and an empty array if there aren’t any results.

Most of the Pro API schemas do not list required properties. The required list defines which properties are not optional and must be present, and it applies to both writes and reads. On the ComputerInventory schema you’ll find that id, another property known to always be present, is not marked as required and thus becomes an optional in the generated struct.

The task code that automatically loads the list of computers can now call this and be guaranteed to fetch the entire inventory for display.

computerSearchResults = try await client.ComputerInventoryGetV1AllPages(
    query: .init(
        section: [.GENERAL, .USER_AND_LOCATION],
        page_hyphen_size: 30
    )
)

Note that for this helper method I reused Operations.ComputersInventoryGetV1.Input.Query so Xcode would provide the same autocompletion and help text as the lower-level non-paginated call.

Schema Extensions

Earlier in the example app code I explained that by default the generated types from the OpenAPI generator do not conform to Identifiable. The line that loops over the results to display them requires setting the id argument:

ForEach(computerResults, id: \.self) { computer in
    // View code here
}

My friend Nindi pointed out that this can be fixed by using an Extension. The ComputerInventory types all have id attributes and will automatically fulfill the requirements for Identifiable (as will any other Jamf schema that includes an id).

This is all the code that is needed to add the protocol:

//  Extensions+Components.Schemas.swift

extension Components.Schemas.ComputerInventory: Identifiable {}

Putting these in their own file is another best practice for code organization.

Now the ForEach loop can be simplified:

ForEach(computerResults) { computer in
    // View code here
}

What’s Next?

Getting all of this working has been a great “aha!” moment.

Even as I wrote this post I was going back and further simplifying and improving the original example code I had intended to share. Next I’ll be taking all this work and applying it to another project I intend to bring to the App Store. I’ll update this post with any new learnings from that.

If you are learning or using Swift and are trying out the steps in the guide for your own projects drop a comment and let me know!

Appendix

Fixing the OpenAPI Doc

These are the errors I encountered trying to build a client from the 11.7.1 Pro API OpenAPI doc and how I remediated them. Errors during the build will appear in the Reports navigator. The most recent report will be at the top. The Build has a hammer icon, and there should also be a yellow warning or red error symbol to the right. Select this to view those logs.

  • Invalid content type string...
    There were two .../history APIs where Jamf generated an invalid content-type label for the 200 responses. Instead of documenting two types of responses they were concatenated together as text/csv,application/json. Edit these to just one of the types to clear the error.
  • Feature "Cookie params" is not supported...
    The generator does not support cookie parameters. The PATCH /v2/account-preferences API defines JSESSIONID as a cookie parameter. Delete this object.
  • warning: A property name only appears in the required list, but not in the properties map...
    An API lists a required field that doesn’t exist. There will be multiples of this and you will need to inspect the error message to get the location and the value. For example, context: foundIn=Components.Schemas.CloudLdapServerUpdate (#/components/schemas/CloudLdapServerUpdate)/providerName shows the schema at issue is CloudLdapServerUpdate and the property that’s required but does not exist is providerName.
  • Invalid discriminator.mapping value... must be an internal JSON reference.
    In the MdmCommandRequest the discriminator mapping still includes external file references. Those schemas all exist within the OpenAPI document. Remove all of the *.yaml prefixes.

Date Decoding Errors

While testing between two different Jamf Pro instances, I encountered this issue in my console logs when returning device data:

Client error - cause description: 'Unknown', underlying error: DecodingError: dataCorrupted - at : Expected date string to be ISO8601-formatted.

I suspect this is an issue due to old, inconsistent formats for dates between the two. In one of the Jamf Pro instances a record had timestamps with and without the fractional seconds.

Here is the date transcoder I am using in this post’s client configuration:

configuration: Configuration(dateTranscoder: .iso8601WithFractionalSeconds)

That sets up an ISO8601DateFormatter with the following options:

ISO8601DateTranscoder(options: [.withInternetDateTime, .withFractionalSeconds])

When .withFractionalSeconds is set it requires that all timestamps contain fractional seconds. Responses with mixed ISO8601 formats will throw the decoding error. To work around this, I wrote my own date transcoder based on the generator’s that attempts fractional decoding first and falls back to non-fractional.

struct CustomDateTranscoder: DateTranscoder {
    private let lock: NSLock

    public init() {
        lock = NSLock()
    }

    public func encode(_ date: Date) throws -> String {
        lock.lock()
        defer { lock.unlock() }
        return Date.ISO8601FormatStyle(includingFractionalSeconds: true).format(date)
    }

    public func decode(_ dateString: String) throws -> Date {
        lock.lock()
        defer { lock.unlock() }
        do {
            return try Date.ISO8601FormatStyle(includingFractionalSeconds: true).parse(dateString)
        } catch {
            do {
                return try Date.ISO8601FormatStyle().parse(dateString)
            } catch {
                throw DecodingError.dataCorrupted(
                    .init(codingPath: [], debugDescription: "Expected date string '\(dateString)' to be ISO8601-formatted.")
                )
            }
        }
    }
}

This is a drop-in replacement for the builtin date transcoder:

configuration: Configuration(dateTranscoder: CustomDateTranscoder())

This custom date transcoder is also Swift 6 compliant. In Xcode 16 if you try to encode/decode using ISO8601DateFormatter (as the ISO8601DateTranscoder does) there will be a warning that it does not conform to Sendable.

Event Driven Applications with DynamoDB Streams and EventBridge

Build flexible serverless applications that are driven by changes in your DynamoDB table.

Whenever I start a new serverless application there are always three core technologies that form the basis: API Gateway APIs, Lambda Functions, and DynamoDB Tables. They’re the bread and butter of most AWS developers in this day and age.

From there, as my application grows in complexity, I inevitably end up adding in EventBridge Event Bus for triggering my downstream business logic and automation in reaction to the API.

Many of my APIs are designed to be RESTful where I am writing, reading, and updating data in the backend. My Lambda Functions are mainly focused on validating the requests (which can be simple schema validation or complex logic around related records that may or may not exist) and performing the DynamoDB operation the resource and method map to.

Ensuring that my API functions have one, and only one, job to perform keeps them simple and easy to understand. The data layer of my application (primarily the DynamoDB table) is the source of truth for the service. Rather than have sequential operations at the API layer trigger automation or business logic, I want that trigger to be the change in my data itself.

This is where marrying DynamoDB Streams to Event Bus comes in.

DynamoDB Streams and Lambda

When enabled, DynamoDB streams contain records of modifications to your table. These records are in the order they occurred and appear only once (no duplicates). This all occurs in near-real time which makes it an extremely attractive tool to turn on.

But, there are some gotchas. When you add a DynamoDB Stream as an event source to your Lambda Function it will receive batches of records serially (either from the start or the end of the stream). Lambda will not scale out horizontally as it must ensure records are processed in order. This means you need to be very careful about the business logic you put into those stream processor functions. Your stream will only be as fast as what is executing it, and if you introduce any bugs your entire pipeline will come grinding to a halt until you push a fix (I learned this the hard way).

There are features for DynamoDB event sources that allow you to work and design around these issues. See the BisectBatchOnFunctionError and DestinationConfig options for more info.

My DynamoDB tables are almost always of a single table design. I have various types of records I store in a single table with generic keys for my indexes to enable the queries I require.

MyTable:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: pk
        AttributeType: S
      - AttributeName: sk
        AttributeType: S
      - AttributeName: gsi1_pk
        AttributeType: S
      - AttributeName: gsi1_sk
        AttributeType: S
    KeySchema:
      - AttributeName: pk
        KeyType: HASH
      - AttributeName: sk
        KeyType: RANGE
    GlobalSecondaryIndexes:
      - IndexName: GSI1
        KeySchema:
          - AttributeName: gsi1_pk
            KeyType: HASH
          - AttributeName: gsi1_sk
            KeyType: RANGE
        Projection:
          ProjectionType: ALL
    StreamSpecification:
      StreamViewType: NEW_AND_OLD_IMAGES

Because any kind of record could be stored in my table, writing logic in the Lambda Function about where to dispatch each stream record becomes precarious. To me, the best solution is to dispatch ALL of my DynamoDB stream records to a centralized location that I can then build my business logic from.

EventBridge Event Bus

EventBridge is something I jumped onto during re:Invent 2019. Up until the introduction of the EventBridge Event Bus all serverless applications relied on AWS’s event sources for automated triggers. With Event Bus we now have our own custom eventing framework that plugs right into serverless applications.

There are a lot of features with EventBridge that do cool things around direct SaaS partner integrations (like Datadog), cross account eventing, and event discovery by pointing whatever you want at it, but don’t let those big use cases deter you from implementing one into small services.

We pay $1.00 for every 1,000,000 events we publish into a Bus, and we don’t pay for the rules that we attach to it.

We don’t pay for the rules we attach to our Event Bus.

Those rules enable patterns in your applications that before required all kinds of additional work and scaffolding to make happen. At a minimum a rule must define the source that triggers it. Past that, rules can become as fine grained as we desire. Emitting events with JSON payloads opens up the ability to drill into the details (effectively the body) and match against the content.

Rules are then able to invoke a wide range of AWS services. Beyond other Lambda Functions, you can pass events on to SQS Queues, SNS Topics, directly invoke Step Functions, call downstream HTTP endpoints… And then consider that you can have multiple rules triggering off the same events allowing parallel processing and workflows.

This flexibility and power dwarfs most other AWS offerings.

DynamoDB Events

As shown in the diagram at the start of this post, the goal is to emit changes to the DynamoDB table into an Event Bus where we can take full advantage of its Swiss army knife nature to plug in all of the business logic we want.

To this end, our DynamoDB stream processor has only one job to do:

from datetime import datetime
import json
import os

import boto3

EVENT_BUS = os.getenv("EVENT_BUS")
events_client = boto3.client("events")


def lambda_handler(event, context):
    """This Lambda function takes DynamoDB stream events and publishes them to an
    EventBridge EventBus in batches (DynamoDB streams can be submitted in batches of a
    maximum of 10).
    """
    events_to_put = []
    for record in event["Records"]:
        print(f"Event: {record['eventName']}/{record['eventID']}")
        table_arn, _ = record["eventSourceARN"].split("/stream")
        events_to_put.append(
            {
                "Time": datetime.utcfromtimestamp(
                    record["dynamodb"]["ApproximateCreationDateTime"]
                ),
                "Source": "my-service.database",
                "Resources": [table_arn],
                "DetailType": record["eventName"],
                "Detail": json.dumps(record),  # Gotcha here: Decimal() objects require handling
                "EventBusName": EVENT_BUS,
            }
        )
    events_client.put_events(Entries=events_to_put)
    return "ok"

This function takes a batch of DynamoDB stream records from the event source and translates them into the Event Bus event structure.

In my code example I treat the Source as something descriptive. For internal application events I tend to follow the pattern of service-name.component-name for labeling my sources. In this case it is simply my-service.database with the implication being if I end up with multiple tables they’re all the same source but different Resources – the table ARN here – that I can use as a part of my rule to filter out what I’m executing on. I map the DynamoDB action (INSERT, MODIFY, REMOVE) to DetailType and I pump the entire record into the Detail as JSON.

Now when I go to take action on changes in my table I can add complex rules looking for those specific attributes and details.

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: python3.8
    CodeUri: ./src/my_function
    Handler: index.lambda_handler
    Events:
      TableChanges:
        Type: EventBridgeRule
        Properties:
          EventBusName: !Ref EventBus
          InputPath: $.detail
          Pattern:
            source:
              - my-service.database
            resources:
              - !GetAtt MyTable.Arn
            detail-type:
              - INSERT
              - MODIFY
            detail:
              dynamodb:
                Keys:
                  pk:
                    S: [{ "prefix": "OID#" }]
                  sk:
                    S: [{ "prefix": "UID#" }]

By preserving the entire DynamoDB stream record I am able to match on key patterns to enable rules for specific record types.

The example above is taken from a Lambda Function that listened for the creation and modification of records that described customer integrations and then wrote back a historical record stating what keys were changed and by whom.

I could take this same rule, change it to listen for INSERT and REMOVE on those same key prefixes, and pipe matching events into an SQS FIFO Queue that manages aggregate records for customers tracking overall counts for things like the number of integrations or device counts (which would be a separate event rule going into the same queue).

This framework allows the service now to scale out, adding in automation and workflows on DynamoDB events without having to do anything to the stream, the stream processor, or anything that is already hooked up to a rule as they’re all completely independent components.

Drawbacks?

This design pattern isn’t without its inefficiencies which tend to pop out at large/high scale.

The amount of data being emitted by your table into the Stream isn’t necessarily a major issue. Past the free tier, GetRecords requests will only run you $0.20 per million, assuming your records aren’t very large. If they are, you can switch to KEYS_ONLY instead of sending the entire item into the stream, which should still allow focused event rules.

That free tier covers 2,500,000 stream read request units every month. You may not notice it for quite some time.

You also run the risk of having a large number of wasted events. At $1.00 per million on our Event Bus, maybe we don’t care too much at small scale. Once throughput ratchets up and millions of table events are flowing through every day, that becomes a different story. Ensuring our systems are designed around internal eventing should cut down on event and billing waste.

Lastly, I’m going to make mention of service quotas, which might be a bit of bike-shedding but I’m gonna do it anyway.

EventBridge’s PutEvents API ranges from 600-2,400 requests per second depending on which region you’re operating in. These are limits you can increase, but you could quickly spike into them before you realize it. Batching events (as shown in our stream processor function above) is your best friend to stave this off.

This is not a limit you would (likely) be able to hit off a single DynamoDB stream (you’re more likely to back up on the stream while having plenty of overhead for events). Add in multiple sources for your Event Bus and it’s something you could spike into quickly.

Trick Sam into building your Lambda Layers

Right now, the SAM CLI doesn’t support building Lambda Layers; those magical additions to Lambda that allow you to define shared dependencies and modules. If you’re unfamiliar, you can read more about them here:

New for AWS Lambda – Use Any Programming Language and Share Common Components

If you read my last article on using sam build, you might think to yourself, “Hey, I can add Layers into my template to share common code across all my Lambdas!”, but hold on! At the moment sam build does not support building Layers the way it builds Lambda packages.

But, there is a hacky way around that. Here’s our repository:

A screenshot of the example repository layout.

Now here’s the contents of template.yaml:

A screenshot of template.yaml defining the Serverless Function and LayerVersion resources.

We’ve defined an AWS::Serverless::Function resource, but with no events, or any other attributes for that matter. We have also defined an AWS::Serverless::LayerVersion resource for our Lambda Layer, but the ContentUri path points to the build directory for the Lambda function.

See where this is going?

sam build will install all the dependencies for our Layer and copy its code into the build directory, and then when we call sam package the Layer will use that output! Spiffy. This does result in an orphan Lambda function that will never be used, but it won’t hurt anything just sitting out there.

Now, we aren’t done quite yet. According to AWS’s documentation, you need to place Python resources within a python directory inside your Layer. The zip file that sam build creates will be extracted into /opt, but the runtimes will only look, by default, in a matching directory within /opt (so in the case of Python, that would be /opt/python).

See AWS Lambda Layers documentation for more details.

We can’t tell sam build to do that, but we can still get around this inside our Lambda functions that use the new Layer by adding /opt into sys.path (import searches all of the locations listed here when you call it). Here’s an example Python Lambda function that does this:

A screenshot of an example Python Lambda function that appends /opt to sys.path before importing the Layer’s modules.

Performing a test execution gives us the following output:

START RequestId: fd5a0bf2-f9af-11e8-bff4-ab8ada75cf17 Version: $LATEST
['/var/task', '/opt/python/lib/python3.6/site-packages', '/opt/python', '/var/runtime', '/var/runtime/awslambda', '/var/lang/lib/python36.zip', '/var/lang/lib/python3.6', '/var/lang/lib/python3.6/lib-dynload', '/var/lang/lib/python3.6/site-packages', '/opt/python/lib/python3.6/site-packages', '/opt/python', '/opt']
<module 'pymysql' from '/opt/pymysql/__init__.py'>
<module 'sqlalchemy' from '/opt/sqlalchemy/__init__.py'>
<module 'stored_procedures' from '/opt/stored_procedures.py'>
END RequestId: fd5a0bf2-f9af-11e8-bff4-ab8ada75cf17
REPORT RequestId: fd5a0bf2-f9af-11e8-bff4-ab8ada75cf17	Duration: 0.82 ms	Billed Duration: 100 ms 	Memory Size: 128 MB	Max Memory Used: 34 MB

Voila! We can see the inclusion of /opt into our path (and the expected path of /opt/python before it) and that our dependencies and custom module were all successfully imported.

It breaks PEP8 a little, but it gets the job done and we have now successfully automated the building and deployment of our Lambda Layer using AWS’s provided tooling.
