What is GPT-4o?
In the world of no-code AI app development, keeping up with the latest models like OpenAI's GPT-4o is crucial. With advancements such as a 128k-token context window, builders on platforms like Bubble.io can push the limits of what's possible with AI-powered applications.
GPT-4o unleashed: Discover how this Omni model combines text, audio, image, and video for lightning-fast AI responses in your no-code Bubble app.
Massive 128k context window: Learn how to leverage GPT-4o's expanded capacity for creating AI-powered content with richer, more comprehensive prompts.
Output limitations revealed: Uncover the 8k token constraint and strategies to overcome it when generating large-scale content in your Bubble projects.
Introduction to GPT-4o
What is GPT-4o, and should you consider swapping it into your no-code AI app? Let's start by looking at the announcement post; we'll include a link down in the description. GPT-4o was released a few days ago, on May 13th, 2024, and OpenAI wants us to know that the "o" stands for "omni": the model lets you combine text, audio, image, and video in your prompts. They've got some amazing demos on their page. It's a really quick model, and it does some very impressive things, particularly with video and audio. OpenAI has become the new Apple in the sense that whenever they put out something new, it's a real wow moment.
Comparing GPT-4o to Other Models
Let's compare it to other models. If you've been following this channel for a while, you'll have seen me build many AI-powered apps in Bubble using no code. We began with GPT-3.5 Turbo, mainly because it was the most affordable option and gave the quickest responses from OpenAI. There was initially an issue when using GPT-4 with Bubble: because it was a higher-quality model, asking it to generate a lot of data meant it took too long to respond. But things have changed; the AI landscape is ever-evolving.
Choosing the Right Model for Your App
Now, if we compare the models: if you're still using GPT-3.5 Turbo, you can afford the jump, and you want a boost in quality, then I think it makes sense to move straight up to GPT-4o. I have been watching X/Twitter, and some people say they are sticking with GPT-4 or GPT-4 Turbo because they believe OpenAI has compromised GPT-4o's writing performance by making it a multimodal "omni" model, but that's mainly anecdotal. My advice is simply to test the different models by swapping them into your prompts and your API calls to the OpenAI API, and compare the responses you get back.
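As a sketch of that swap-and-compare advice: the request body for OpenAI's chat completions endpoint is the same shape across these models, so you can parameterize the model name and send identical prompts to each. The `build_request` helper below is a hypothetical illustration, not part of any library; in Bubble's API Connector you would change the `model` field of the JSON body in the same way.

```python
# Sketch: compare models by parameterizing the "model" field of an
# OpenAI chat completions request. build_request is a hypothetical
# helper name; the body shape matches the chat completions endpoint.
MODELS = ["gpt-3.5-turbo", "gpt-4-turbo", "gpt-4o"]

def build_request(model: str, prompt: str) -> dict:
    """JSON body for POST https://api.openai.com/v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Build one identical request per model, ready to send and compare.
requests = [build_request(m, "Summarise our product in one paragraph.")
            for m in MODELS]
```

Keeping the prompt identical across requests is what makes the comparison fair: any difference in the responses then comes from the model, not the input.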
Context Length and Its Implications
Lastly, I want to talk about context length. This is basically how much data (the size of the text in your prompt, for example) you can send over to the API. We used to be stuck below 8k tokens; GPT-3.5 Turbo raised that to 16k, and GPT-4o now goes up to a massive 128k. That's phenomenally good news, because it means you can create a huge prompt. In fact, I've got a video coming out soon where we scrape a website and feed the whole site into the prompt, so you can do amazing things by providing knowledge to the AI so that it responds based on the content you've given it.
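To get a feel for whether a scraped page will fit, you can estimate its token count before sending it. The heuristic below (roughly four characters per token for English prose) is an assumption for illustration; OpenAI's tiktoken library gives exact counts.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Use OpenAI's tiktoken library when you need exact counts.
    return max(1, len(text) // 4)

def fits_context(prompt: str, context_window: int = 128_000,
                 output_budget: int = 8_000) -> bool:
    # The prompt and the reply share the context window, so reserve
    # room for the ~8k-token output limit discussed below.
    return estimate_tokens(prompt) + output_budget <= context_window

scraped_page = "word " * 50_000       # ~250k characters of scraped text
print(fits_context(scraped_page))     # ~62.5k tokens, fits in 128k
```

A check like this is worth running before the API call: a prompt that overflows the window fails the request outright, whereas trimming it yourself lets you decide what to drop.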
Limitations of Output Length
But there is still one disappointing thing, and I was researching this yesterday: it's the same with Anthropic and their Claude models, and it continues to be the same with OpenAI, which is that the output is limited to around 8k tokens. This may be a strategic move, because it makes large jobs more laborious: you have to run the model (make an API call) multiple times if you want to create large amounts of content. So if you wanted to create, say, a 10,000-word blog post, you can't currently do that in a single call. You have to chain calls step by step, taking what is output, feeding it back in, and asking the model to extend it.
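That chaining idea can be sketched as a loop: generate one section per call, then feed the tail of the draft back in as context for the next call. In the sketch below, `call_model` is a stub standing in for a real OpenAI API call (each capped at roughly 8k output tokens); the function names are illustrative, not from any library.

```python
def call_model(prompt: str) -> str:
    # Stub standing in for a real chat completions call, which would
    # return at most ~8k tokens per invocation.
    return f"[generated section for: {prompt.splitlines()[-1]}]"

def write_long_post(topic: str, sections: list[str]) -> str:
    """Exceed the per-call output limit by generating one section per
    call, passing the end of the draft back in for continuity."""
    draft = ""
    for section in sections:
        prompt = (f"Topic: {topic}\n"
                  f"Draft so far (tail): {draft[-2000:]}\n"
                  f"Write this section: {section}")
        draft += "\n\n" + call_model(prompt)
    return draft.strip()

post = write_long_post("GPT-4o in Bubble", ["Intro", "Build", "Limits"])
```

Passing only the tail of the draft keeps each prompt small, so the chain never grows toward the input limit even as the finished post does.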
Real-World Example and Future Expectations
An example of this that I hit recently: I recorded a 30-minute video and ran the transcript through OpenAI's new GPT-4o. It's very quick, and it took the whole transcript (a lot of text), but when I asked it to convert the transcript into a blog post of the same length, it couldn't, because it can't output the same number of tokens that went in. My prompt was very small (I just provided the transcript), so the input easily fit into the 128k context window, but the requested output exceeded the output limit. What I'm hoping to see is the output window grow just as the context window has. If anyone knows of any models with a larger output window, please leave a comment down below; I'd be really interested to hear your input.
Ready to Transform Your App Idea into Reality?
Access 3 courses, 400+ tutorials, and a vibrant community to support every step of your app-building journey.
Start building with total confidence
No more delays. With 30+ hours of expert content, you’ll have the insights needed to build effectively.
Find every solution in one place
No more searching across platforms for tutorials. Our bundle has everything you need, with 400+ videos covering every feature and technique.
Dive deep into every detail
Get beyond the basics with comprehensive, in-depth courses & no code tutorials that empower you to create a feature-rich, professional app.
Valued at $80
Valued at $85
Valued at $30
Valued at $110
Valued at $45
14-Day Money-Back Guarantee
We’re confident this bundle will transform your app development journey. But if you’re not satisfied within 14 days, we’ll refund your full investment—no questions asked.
Can't find what you're looking for?
Search our 300+ Bubble tutorial videos. Start learning no code today!
Frequently Asked Questions
Find answers to common questions about our courses, tutorials & content.
Do I need any coding experience to get started?
Not at all. Our courses are designed for beginners and guide you step-by-step in using Bubble to build powerful web apps—no coding required.
How long will I have access to the content?
Forever. You’ll get lifetime access, so you can learn at your own pace and revisit materials anytime.
What if I get stuck while building my app?
Our supportive community is here to help. Ask questions, get feedback, and learn from fellow no-coders who’ve been where you are now.
Can I get a refund if the bundle isn’t for me?
Absolutely. If you’re not satisfied within 14 days, just reach out, and we’ll issue a full refund. We stand by the value of our bundle.
Is this price a limited-time discount?
Yes, this is a special limited-time offer. The regular price is $350, so take advantage of the discount while it lasts!