
OpenAI Integrations with Bit Flows – Automate AI-Powered Tasks

Estimated reading: 7 minutes


In this guide, we’ll walk you through the OpenAI integrations using Bit Flows. While we’ll use Bit Form as the trigger in this example, you can choose any other trigger that fits your needs. Once the setup is complete, data from each form submission will be automatically sent to OpenAI for processing or generating a response.

This beginner-friendly tutorial guides you step by step, making it easy to create your first AI-powered automation using Bit Flows and OpenAI.

INFO

You’ll need a paid OpenAI API account to create and use this integration.

Authorization of OpenAI Integrations

To set OpenAI as an action in Bit Flows, first open your Bit Flows Dashboard, then either create a new flow or open an existing one. In the Flow Builder, click the plus (+) icon to add an action. From the list of available apps, search for and select OpenAI.

After selecting your preferred action, the next step is to choose an event. For example, in this case, we’ve selected the “Create a Chat Completion” event.

  • Create a Chat Completion
  • Transform Text to Structured Data
  • List Batches
  • Get A Batch
  • Generate An Image
  • Create A Moderation

After selecting the action event, a new popup will appear. Here, you need to connect your OpenAI account. If you’ve already connected OpenAI before and want to reuse the same account, simply select it from the “Select Connection” dropdown; this saves time and avoids creating duplicate connections. If not, click “Add Connection” to create a new one.

When you click the Add Connection button, you’ll be asked to enter a name for your connection and paste your API token into the Value field.

To get your OpenAI API Token, follow these steps:

  • Go to the OpenAI Platform
  • Navigate to the API Keys section
  • Click on Create new secret key
  • Enter a name for your API key, select your project, set the desired permissions, then click Create
  • Your API key will be displayed; copy it right away, as it is shown only once
  • Paste the copied key into the Value field in Bit Flows
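Under the hood, the key you paste into the Value field is sent as a standard bearer token. A minimal sketch of that header format (the key here is a placeholder, and Bit Flows performs this step for you internally):

```python
import os

# Illustrative only: Bit Flows stores the key you paste into the Value
# field and attaches it to each request. This shows the standard header
# format the OpenAI API expects, using a placeholder key.
api_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

headers = {
    "Authorization": f"Bearer {api_key}",   # the secret key goes here
    "Content-Type": "application/json",
}

print(headers["Content-Type"])  # → application/json
```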

After entering your token, click on Connect. Once connected successfully, a popup will appear where you can configure the integration settings and map the fields.

Select Model: Select the OpenAI model you want to use for this integration.

Max Completion Token: This setting defines the maximum number of tokens OpenAI can use to generate a response. A higher value allows for longer and more detailed outputs, while a lower value keeps responses shorter and more concise. Adjust it based on how much content you expect from the AI.

Message

First, select a Role, such as User, Assistant, or Developer/System.

Then enter the message you want to send to OpenAI. You can also map available form fields to include dynamic content in the message. Additionally, you can use Flow, Math, String, and System functions to enhance your message logic.
Learn more about Field Mapping.
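Taken together, the Role and Message settings correspond to the `messages` array in an OpenAI Chat Completion request. A rough sketch of the payload these settings build (the model name, token limit, and mapped field are example stand-ins, not what Bit Flows literally generates):

```python
import json

# Hypothetical payload mirroring the settings above. "gpt-4o-mini",
# the 256-token limit, and "{form_message}" are example stand-ins for
# whatever model and mapped fields you actually select in Bit Flows.
payload = {
    "model": "gpt-4o-mini",
    "max_completion_tokens": 256,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user",
         "content": "Summarize this form submission: {form_message}"},
    ],
}

print(json.dumps(payload, indent=2))
```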

Advanced Features

If you want to use advanced features, first enable the “Show Advanced Features” option.

Response Format: When using JSON Object, you must also instruct the model to produce JSON via a System or User message. If not, the model can generate an unending stream of whitespace until it reaches the token limit. This will result in a long-running and seemingly “stuck” request.
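As a sketch of a safe JSON-mode request: the system message below explicitly asks for JSON, which is what prevents the whitespace issue described above (the model name is an example):

```python
# Sketch of a request that uses JSON mode safely. Note the system
# message explicitly asks for JSON output.
payload = {
    "model": "gpt-4o-mini",                      # example model name
    "response_format": {"type": "json_object"},  # JSON mode
    "messages": [
        {"role": "system",
         "content": "Respond only with a valid JSON object."},
        {"role": "user", "content": "List three colors."},
    ],
}

# Without an instruction like the system message above, JSON mode can
# emit whitespace until it hits the token limit, as described in the text.
print(payload["response_format"]["type"])  # → json_object
```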

Temperature: This setting controls the creativity of OpenAI’s responses. A lower value (e.g., 0.2) makes the output more focused, consistent, and predictable, while a higher value (e.g., 0.8) leads to more creative, diverse, and exploratory replies. Adjust it depending on whether you want precise results or more imaginative outputs.
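Conceptually, temperature divides the model’s logits before the softmax step, so lower values concentrate probability on the top tokens. A toy illustration with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax: lower temperature
    sharpens the distribution, higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # made-up token logits
cold = softmax_with_temperature(logits, 0.2)   # focused
hot = softmax_with_temperature(logits, 0.8)    # more spread out

# The top token gets more of the probability mass at low temperature.
print(cold[0] > hot[0])  # → True
```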

Top-p: This setting is an alternative to temperature-based sampling and controls the diversity of OpenAI’s responses by focusing on token probability. For example, a value of 0.1 means the model will only consider tokens from the top 10% of the probability distribution. The default is 1, meaning all tokens are considered. It must be less than or equal to 1. Lower values result in more focused and deterministic outputs.
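A toy sketch of how a top-p (nucleus) filter works: keep the highest-probability tokens until their cumulative probability reaches the threshold, then sample only among those (the token names and probabilities are made up):

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p; sampling then happens only among these."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = {"cat": 0.5, "dog": 0.25, "fish": 0.15, "newt": 0.1}
print(sorted(top_p_filter(probs, 0.7)))   # → ['cat', 'dog']
print(len(top_p_filter(probs, 1.0)))      # default of 1: all 4 tokens kept
```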

Number: This setting defines how many response variations you want OpenAI to generate for a single prompt.
For example, setting it to 1 will return a single response, while a higher number will return multiple responses to choose from.

Frequency Penalty: This setting reduces the chance of OpenAI repeating the same phrases or words in its response. It works by penalizing tokens that appear more frequently.
A higher value (e.g., 1) makes the output less repetitive, while a lower value (e.g., 0) allows for more natural repetition. Adjust it based on how much variation you want in the response.
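Conceptually, a frequency penalty subtracts `penalty * count` from a token’s score, so the more often a token has already appeared, the less likely it becomes. A toy sketch:

```python
def apply_frequency_penalty(logits, counts, penalty):
    """Subtract penalty * (times the token already appeared) from each
    token's logit, making repeated tokens progressively less likely."""
    return {tok: logit - penalty * counts.get(tok, 0)
            for tok, logit in logits.items()}

logits = {"great": 2.0, "good": 1.5}     # made-up token scores
counts = {"great": 3}                    # "great" already used 3 times
adjusted = apply_frequency_penalty(logits, counts, penalty=1.0)
print(adjusted)  # → {'great': -1.0, 'good': 1.5}
```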

Presence Penalty: This setting influences how likely OpenAI is to introduce new topics in its response. A higher value increases the chances that the AI will talk about something new, rather than sticking to previously mentioned ideas. Use this to encourage more variety and exploration in the output. A value of 0 means no penalty, while higher values (up to 2) promote more diverse content.
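By contrast, a presence penalty is a flat, one-time deduction applied to any token that has appeared at all, regardless of how many times. A toy sketch:

```python
def apply_presence_penalty(logits, seen, penalty):
    """Subtract a flat penalty from any token that has appeared at least
    once, regardless of how often, nudging the model toward new topics."""
    return {tok: logit - (penalty if tok in seen else 0.0)
            for tok, logit in logits.items()}

logits = {"pricing": 2.0, "shipping": 1.8}   # made-up token scores
seen = {"pricing"}                           # topic already mentioned
adjusted = apply_presence_penalty(logits, seen, penalty=0.6)
print(adjusted["pricing"] < adjusted["shipping"])  # → True
```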

Seed: The Seed setting allows you to generate consistent and repeatable responses from OpenAI. By setting a specific seed number, the model will produce the same output each time for the same input. This is useful for testing, debugging, or ensuring stable responses. If left blank, responses may vary with each request.
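The idea is similar to seeding any random number generator: the same seed replays the same choices. A loose analogy in Python (note that OpenAI’s seed is best-effort, not a hard guarantee of identical output):

```python
import random

# Analogy for the Seed setting: the same seed yields the same "random"
# choice on every run. (OpenAI's seed is best-effort, not guaranteed.)
def pick_reply(seed):
    rng = random.Random(seed)
    return rng.choice(["Sure!", "Absolutely.", "Of course."])

print(pick_reply(42) == pick_reply(42))  # → True: same seed, same output
```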

Stop Sequences: Stop sequences tell OpenAI when to stop generating a response. When the model encounters any of the specified stop sequences in its output, it will immediately stop. This is useful for controlling where a response should end, especially in multi-part conversations or structured outputs. You can define one or more custom strings as stop sequences, such as "###" or "END".
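A sketch of that stopping behavior: generation is cut at the earliest occurrence of any stop sequence, and the stop string itself is not included in the output:

```python
def truncate_at_stop(text, stop_sequences):
    """Cut the generated text at the earliest stop sequence, mimicking
    how the API stops before emitting the stop string itself."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

raw = "Answer: 42\n###\n(internal notes)"
print(truncate_at_stop(raw, ["###", "END"]))  # prints only "Answer: 42"
```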

Optional Fields: You also have the ability to map optional fields to further customize and control the AI’s response behavior.

Once you’ve finished these settings, you can either click the “Test Run” button to check if the integration is working correctly or simply close the popup to complete the setup.

Note

When you click the Test Run button, the output will be displayed just above it. However, please note that Test Run results are not recorded in the logs.

You also have the option to test the full flow. You can either click “Listen Response” and then run the trigger event (e.g., submit the form), or use existing data to test the integration and make sure everything works correctly.

After completing all the steps, click the “Logs” icon at the top-right corner of the Flow Builder to view your integration logs. Logs help you verify if the trigger and action worked correctly and make it easier to spot and fix any issues.

That’s it! You’ve successfully set up an automation in Bit Flows to connect your trigger with OpenAI. Now, whenever the trigger event occurs, a message will be sent to OpenAI, and you’ll receive a dynamic AI-generated response based on your configuration. This completes your OpenAI integrations with Bit Flows, enabling smart, automated workflows powered by AI.

If you need more help or want to explore more integrations, feel free to check out our User Guide.

Check out our easy-to-follow tutorials!

  • How to Integrate OpenAI with Bit Flows
