Why you shouldn’t be using the OpenAI Assistants API with Bubble.io
In this Bubble tutorial, we discuss whether you should use the Create Chat Completion endpoint or OpenAI's new beta Assistants endpoints when building a web app with Bubble.io.
Unlock OpenAI's power in Bubble: Chat completion vs. new beta endpoints - which is right for your app?
Master OpenAI integration: Discover the pros and cons of chat completion and OpenAI assistants for your Bubble project
Elevate your Bubble app: Learn when to use chat completion or the new beta API for AI-powered features
Introduction to OpenAI and Bubble.io Integration
I've spoken with a number of people in our Bubble coaching calls over the last few weeks where we've deliberated over whether you should still be using the Create Chat Completion endpoint from OpenAI or whether you should be using the new beta endpoints such as Assistants, Threads, Messages and Runs. In this tutorial video, I'm going to explain the pros and cons you'll want to weigh up for each when building a web app with Bubble.io.
Community and Learning Resources
But before I launch into that: if you are learning Bubble, there's no better place to do it than joining our community over at planetnocode.com, where we've got hundreds of Bubble tutorial videos and discounted Bubble coaching. You can book a call with me and we can work through that pain point or frustration. I'm not exaggerating when I say I've had people tell me they spent eight hours on something and we fixed it in 30 minutes on a Bubble coaching call.
The Old Method: Chat Completion Endpoint
So let's begin with the old, and I say old, but bear in mind that OpenAI has shown that they release very quickly, so we should expect things to change, especially anything in beta. Whenever you use any beta software you're taking a risk if you build it into your own production software, because they could scrap it, change parts of it, or break things overnight. Now, I think that's unlikely given the scale at which OpenAI operates, but you should bear it in mind.
So let's start with the old, which is Chat Completion. This is the endpoint we use in most of our OpenAI and Bubble tutorial videos from the last year. If you've been watching our previous videos where we engage with GPT-3.5 Turbo or GPT-4, then we're probably using this endpoint. One reason for that is it's been around the longest, but it's also the simplest way to get a ChatGPT-style chat client working in Bubble, because when you compile your request it's going to look something like this.
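As a rough sketch (the API key, model and message content below are placeholders, and in Bubble you would put the URL, headers and JSON body into the API Connector), a Chat Completion request looks something like this:

```
# Create Chat Completion: one POST, and the reply comes back in the same response.
# $OPENAI_API_KEY, the model and the messages are placeholders.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```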
Advantages of Chat Completion Endpoint
Like I said, we've got tons of videos on this, so if it's looking a little bit alien to you, go and check out our other videos. You put together a request, and the first thing to point out is that when you send a new chat message you have to supply all of the previous messages. There is a list of messages, and that is the only way you can ensure that OpenAI keeps track of the previous messages in the conversation. The downside of this is that it affects your token count, and therefore how big a conversation can get. Now, remember that the token limits are just going to go up and up and up; if you don't have enough tokens with GPT-3.5 Turbo, you can start using GPT-3.5 Turbo 16k.
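To illustrate the point, here is a sketch of what the second request in a conversation might look like: every earlier message is sent again, which is why the token count grows with each turn. The conversation content is placeholder text.

```
# Follow-up request: the whole conversation so far is resent in the messages list.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"},
      {"role": "assistant", "content": "The capital of France is Paris."},
      {"role": "user", "content": "And what is its population?"}
    ]
  }'
```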
Building a Chat GPT Client in Bubble
So yeah, something to bear in mind. We've got videos covering how, in basically 30 minutes, you can build a ChatGPT-style client in Bubble which keeps track of the conversation and provides all of the previous messages along with the latest message in every single API call you make, so that the chat is aware of the previous context: which messages have come before and the topics that have been covered. And one of the advantages of that is that when you send an API request with all your messages to the Chat Completion endpoint, your Bubble app waits to get a response.
Pros and Cons of Chat Completion
And the plus side of that is that you don't have to do anything else apart from save or display the reply in your Bubble app, because the following action in your workflow can save or display the content returned by the workflow action that calls OpenAI. The downside is that if you are sending a huge amount of text, thousands and thousands of tokens, you may still find that the API call times out, in which case the Bubble app basically gives up and stops waiting for OpenAI to respond with the new message, and that's going to leave a really poor experience for you and your users. But in a nutshell, TL;DR: Chat Completion is by far the simplest way to add OpenAI's GPT models into your Bubble app.
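Because the call is synchronous, the reply is right there in the response body for the next workflow action to save or display. Trimmed down, and with illustrative IDs and token counts, the response looks roughly like this; the text your app needs lives at choices[0].message.content.

```
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "The capital of France is Paris."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 25, "completion_tokens": 9, "total_tokens": 34}
}
```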
New Beta API Endpoints
That brings us on to the new beta endpoints, and these came out, I believe, at the end of November 2023. So this comparison is a little bit late, but there you go: I'm still getting questions about it in the Bubble coaching calls. This is a completely different approach to how you structure your data and how you engage with the OpenAI API, and it's broken down into these endpoints.
Creating Threads and Messages
So basically, when you want to create a new conversation you create a Thread, and this sets up what is potentially one of the advantages of these new beta API endpoints: OpenAI is now storing the conversation. Now, there's nothing I could find in the documentation about how long they store it. There are some minuscule storage costs associated with it, particularly if you're just storing text values; that's not a lot of data to store on someone's server. But yeah, you create a Thread and then you start to create Messages in the Thread, which take the place of the list of messages you would otherwise supply yourself.
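Here is a sketch of that first step, assuming the beta header the API required at the time of writing (OpenAI-Beta: assistants=v1). The response contains an "id" beginning with "thread_", which you would save against the conversation in your Bubble database.

```
# Create a Thread: OpenAI stores the conversation server-side from here on.
# The empty -d '' body is enough; the response returns the new thread's ID.
curl https://api.openai.com/v1/threads \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v1" \
  -d ''
```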
Differences in Message Handling
Using the new beta API endpoints, you don't have to list all the messages every single time; you just have to create a message and assign it to the right thread ID based on the Threads that you've created. Now, we've got a Bubble tutorial video demonstrating all of this that you can go and check out, but in case you want to skip ahead I'll say it here: I think there is a major downside to this, and that's coming up. So when you start adding messages, let's say you're just using it as a general-purpose GPT and you ask "What is the capital of France?", you would create a message and assign it to the thread by using the thread ID parameter, but then nothing happens.
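A sketch of adding that message to an existing thread; thread_abc123 is a placeholder for the thread ID you saved earlier, and the beta header is assumed as above.

```
# Add a Message to a Thread: this only stores the message, it does not generate a reply.
curl https://api.openai.com/v1/threads/thread_abc123/messages \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v1" \
  -d '{
    "role": "user",
    "content": "What is the capital of France?"
  }'
```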
Running the AI Process
All you've done is add the message to the thread; you've not done any of the AI magic or wizardry that we've become accustomed to. In order to do that you have to Run, and a Run is the command for OpenAI to actually look at the messages and generate a response. This is the same whether you're responding to one message or whether you've got 20 messages in the thread: if you want OpenAI to respond with what GPT, whichever model you're using, is going to write, you have to use Run.

This is where you can assign an assistant ID, and if we go into Assistants, this is basically an alternative to using a system prompt. The people I've spoken to so far say that Assistants work better than a system prompt. The advantage is that you could create, say, four different Assistants for four different personas in the SaaS app you're building in Bubble, and then deploy those same four Assistants across all of the conversations, or Threads, that your users engage with. Because you provide an assistant ID when you Run, that's going to define the tone, the background knowledge, the persona and the style in which OpenAI writes the reply message. So you call Create Run on the thread ID, and you can supply an assistant and a model.

Then this is where I think the downside comes in: the thread's list of messages on the OpenAI servers updates, but there's no way of informing your Bubble app that that has happened. Now, I really hope that, because this is in beta, one of the things they will change is adding webhooks. Compare it with a service like AssemblyAI, a speech-to-text transcription service: if you upload a 10-minute audio file it could take five minutes or so, certainly more than a minute, to transcribe, but you can supply a webhook endpoint and your app is alerted when the processing on their server is completed. OpenAI doesn't have that, so you need to keep checking Retrieve Run, and then, when you know the run is complete, you need to refresh the messages, get the list of messages, and find a way of saving the new message or simply showing the list of messages. That's the main downside.

One other upside is that when you create an Assistant you can supply file IDs, such as PDF files, so if you want your Assistant to draw on a medium-sized amount of data then the Assistant has the advantage there. But if you simply want to provide a bit of background knowledge, you could still use the system role in your messages with the Chat Completion endpoint, and then you don't have to worry about getting Bubble to check every second whether there are new messages and whether the run is complete, which seems to me to be a real waste of workload units.

So those are my thoughts. If you've got any questions or any thoughts of your own, please leave a comment down below, and don't forget to like and subscribe to continue to support our ambition to become the best place to learn Bubble on the internet.
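To make that concrete, here is a sketch of the three calls involved; the assistant, thread and run IDs are placeholders, and the beta header is assumed as before. You create the Run, keep retrieving it until its "status" comes back as "completed", and then list the thread's messages to fetch the reply.

```
# 1. Create a Run on the thread, pointing at the Assistant that defines the persona.
curl https://api.openai.com/v1/threads/thread_abc123/runs \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v1" \
  -d '{"assistant_id": "asst_abc123"}'

# 2. There is no webhook, so poll Retrieve Run until its "status" is "completed".
curl https://api.openai.com/v1/threads/thread_abc123/runs/run_abc123 \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v1"

# 3. Once the run is complete, list the thread's messages; the newest one is the reply.
curl https://api.openai.com/v1/threads/thread_abc123/messages \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v1"
```

In Bubble, step 2 typically ends up as a "Do every X seconds" event or a recursive backend workflow that keeps calling Retrieve Run, which is exactly the workload-unit cost described above.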
Ready to Transform Your App Idea into Reality?
Access 3 courses, 400+ tutorials, and a vibrant community to support every step of your app-building journey.
Start building with total confidence
No more delays. With 30+ hours of expert content, you’ll have the insights needed to build effectively.
Find every solution in one place
No more searching across platforms for tutorials. Our bundle has everything you need, with 400+ videos covering every feature and technique.
Dive deep into every detail
Get beyond the basics with comprehensive, in-depth courses & no code tutorials that empower you to create a feature-rich, professional app.
14-Day Money-Back Guarantee
We’re confident this bundle will transform your app development journey. But if you’re not satisfied within 14 days, we’ll refund your full investment—no questions asked.
Frequently Asked Questions
Find answers to common questions about our courses, tutorials & content.
Do I need any coding experience to get started?
Not at all. Our courses are designed for beginners and guide you step-by-step in using Bubble to build powerful web apps—no coding required.
How long will I have access to the content?
Forever. You’ll get lifetime access, so you can learn at your own pace and revisit materials anytime.
What if I get stuck while building my app?
Our supportive community is here to help. Ask questions, get feedback, and learn from fellow no-coders who’ve been where you are now.
Is there a money-back guarantee?
Absolutely. If you’re not satisfied within 14 days, just reach out, and we’ll issue a full refund. We stand by the value of our bundle.
Is the discounted price a limited-time offer?
Yes, this is a special limited-time offer. The regular price is $350, so take advantage of the discount while it lasts!