Combining multiple AI prompts with UsePlumb.com
Streamline your AI-powered no-code app development process with Plumb's AI pipeline builder tool, making it easier to incorporate multiple AI features seamlessly into your Bubble.io app. With structured JSON outputs and efficient testing capabilities, Plumb is a game-changer for simplifying complex workflows.
Unlock AI-powered workflows: Learn how to combine multiple AI requests in a single Bubble app output!
Supercharge your no-code app: Discover how to integrate Plumb for advanced AI features and structured JSON responses.
Build, test, deploy with confidence: See how to create a content-generation app using AI and no-code tools in minutes!
Introducing UsePlumb.com for AI-Powered No-Code Apps
If you're building an AI-powered no-code app with Bubble, you need to check out useplumb.com. Plumb helps you build, test and deploy AI features with confidence: its pipeline builder lets you wire together multiple AI-powered nodes into workflows that simply wouldn't be possible in a Bubble app alone. To show you what I mean, here's a really simple pipeline that combines two AI requests to two separate models, merges them into a single output and returns it, and this is what that looks like in my Bubble app.
Demonstrating a Content Generation AI
It's a content generation AI, and in this video I'm going to demo how I built it from a blank canvas. The key thing is that we get back lovely structured JSON, so there's no more using "split by" to pick apart the response from OpenAI: we get a title, we get tags, we get content, and we get our tweets as a proper list in the JSON.
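To make that shape concrete, here's a hypothetical sketch in Python of the kind of structured response described above. The exact field names depend on how you define the output record in Plumb; these simply mirror the title, tags, content and tweets shown in the demo.

```python
import json

# Hypothetical example of the structured JSON a Plumb pipeline might return.
# Field names are illustrative, not Plumb's documented format.
raw = """
{
  "blog_post": {
    "title": "Building AI No-Code Apps with Bubble",
    "tags": ["bubble", "no-code", "ai"],
    "content": "..."
  },
  "tweets": ["Tweet one", "Tweet two", "Tweet three"]
}
"""

data = json.loads(raw)
title = data["blog_post"]["title"]   # direct access, no "split by" text parsing
tweets = data["tweets"]              # already a proper list
```

Compare that with splitting a single raw-text completion on delimiters, where one misplaced separator breaks every downstream field.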
Building the Pipeline in Plumb
Let me show you how I built this by diving into the Plumb pipeline builder. I've created a new pipeline called "content generator", and the first thing to do is set my inputs. I'm going to have "topic", because I'll feed in a topic, and "keywords", and I'll get a series of AI responses back in a single call. Along the way I'll point out some of the really cool features of Plumb that make this easier than building the equivalent workflow in Bubble alone.
Setting Up Inputs and AI Prompts
So we have "topic" and "keywords", both required. Let's make a bit of space. Next I go into AI prompt, drag in the text LLM node, rename it to "generate blog", hit save and link the two up. When you pass data through a Plumb pipeline, steps need a direct link between them, so in order to access my topic and keywords from the LLM step I create a branch connecting them.
Writing the Prompt and Choosing the AI Model
Now I can write my prompt: "Write an SEO-focused blog post about" and insert my topic, then "Try to include the following keywords" and insert my keywords. This is where Plumb makes it really easy to match the right prompt with the right AI model to get the right response, because it's dead easy to swap between providers. While you're testing, Plumb provides its own keys behind the scenes, but when you deploy your pipeline to production you'll need to supply your own keys for the different providers. I don't want to wait for GPT-4, so I'll use GPT-3.5 Turbo, and I can test it.
Testing the Pipeline
Let's test it. For the topic I'll enter "building AI no-code apps with Bubble", and for keywords "OpenAI, ChatGPT and Claude". It will now run the pipeline even though I've not completed it, and this is part of the power of Plumb: the ability to test, revise and revisit individual parts of the pipeline.
Handling Long AI Response Times
Now let's run the next step. One frustration, ever since we first started using GPT-3.5 Turbo with Bubble, and particularly with GPT-4, is that if you ask for a lot of content (and here we go, here's a blog post), it can take a really long time for the AI to respond. I'm going to do a completely separate video on this, but as a teaser: you can set up a webhook. With "return response", the loading bar goes across the top of your Bubble app and your user sits waiting, at least if you run it in a front-end workflow. If generating all this AI content is going to take, say, 15 minutes, the Bubble app will time out waiting for a response. A webhook lets Plumb send the data through to my Bubble app when it's ready instead.
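As a rough illustration of the webhook pattern, here's a minimal Python sketch of the job a Bubble backend workflow does when Plumb calls it with the finished output. The payload shape and field names are assumptions for illustration, not Plumb's documented webhook format.

```python
import json

def handle_plumb_webhook(raw_body: str) -> dict:
    """Sketch of a webhook handler for a long-running pipeline.

    Assumes Plumb POSTs the finished pipeline output as a JSON body once
    the slow LLM calls complete; in Bubble this role is played by a
    backend workflow that maps the fields onto your data types.
    """
    payload = json.loads(raw_body)
    return {
        "title": payload["blog_post"]["title"],
        "content": payload["blog_post"]["content"],
        "tweets": payload["tweets"],
    }
```

The point of the pattern: the user's front-end request returns immediately, and the slow result arrives via this handler whenever it is ready, so nothing times out.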
Structuring the Output
Now let's update this, because raw text is a little unstructured. I want a structured output as a record, and you can see I've had a few goes at this: the record is the blog post, with a "blog post title" property and a "blog post content" property, both required. All of this goes, behind the scenes, into the prompt that Plumb constructs for you and sends off to, in this case, OpenAI.
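To show what marking properties as required buys you, here's a small Python sketch of the blog-post record as a schema plus a check against it. The schema format is purely illustrative, not Plumb's internal representation.

```python
# Illustrative sketch of the record defined in Plumb's structured-output
# editor: a blog post with two required string properties.
BLOG_POST_FIELDS = {
    "blog_post_title": {"type": "string", "required": True},
    "blog_post_content": {"type": "string", "required": True},
}

def validate_blog_post(record: dict) -> bool:
    """Return True only if every required property is present as a string."""
    return all(
        name in record and isinstance(record[name], str)
        for name, rule in BLOG_POST_FIELDS.items()
        if rule["required"]
    )
```

A response missing either property fails the check, which is exactly the guarantee a required structured output gives you over free text.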
Adding Another AI Step for Tweets
Right, let's add another LLM step, because I also want some tweets, and I'll demonstrate how we get that nice list that's already divided up in the JSON. I'll rename it "generate three tweets" and prompt "Write three tweets about" with the topic inserted; I won't bother with the keywords. Using structured output, I define a list of tweets, where each item is a tweet on the topic. I want this to be really quick, so I'll switch the provider to Anthropic and the model to Haiku, and run the whole thing from the top once more.
Combining Responses into a Single Output
Okay, and there we go: our three tweets, separated out in the JSON reply. So we now have two separate AI calls, and we're going to combine them into a single response. To do that I use a data processing node, the one called "compose", which I'll rename "combine responses". You could branch this out further and have five different LLM prompts all feeding their responses in here.
Structuring the Final Response
Now I build the structure of what gets returned to my Bubble app when the deployed pipeline endpoint is called. I'll have "blog post" with its properties, "title" and "content", and then "tweets", which has no properties of its own. Then I link it up to the response node, which determines what data is sent back to my Bubble app. I don't want it to be empty, so I'll send back a step: the "combine responses" step, and everything in it.
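The compose step can be pictured as a simple merge. Here's a hedged Python analogue of what "combine responses" does with the two step outputs; the field names are illustrative assumptions, matching the structures described above rather than anything Plumb documents.

```python
def combine_responses(blog_step: dict, tweets_step: dict) -> dict:
    """Rough analogue of Plumb's compose node: merge the outputs of two
    LLM steps into the single structure returned to the Bubble app."""
    return {
        "blog_post": {
            "title": blog_step["blog_post_title"],
            "content": blog_step["blog_post_content"],
        },
        "tweets": tweets_step["tweets"],
    }
```

With more branches, each extra LLM step would simply contribute another key to the merged dictionary.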
Testing the Complete Pipeline
Let's test the whole pipeline from the top. I should say that, as well as handling the wait by using a webhook instead of "return response", you can easily go in here and change the model. If I decide GPT-3.5 is too slow, or I just want to experiment in the midst of a pipeline where lots of LLM data is coming together, I can swap one model out for another right there in Plumb. Imagine how much time that saves compared with Bubble, where you might have to run all of this in a backend workflow without being able to test and revise the individual pieces.
Integrating with Bubble
So here's the response: we've got our tweets and we've got our blog post. Right, final part of the video: how do we plug this into Bubble? We go to deploy, skip straight to production and copy the endpoint. I already have my Bubble app set up to run this, and it will all be familiar if you've ever worked in the Bubble API connector. I just replace the endpoint, then check that I've used the same keys, "topic" and "keywords", matching exactly what I named the pipeline inputs. Cool, let's give it a try.
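For reference, the Bubble API connector call boils down to a JSON POST to the deployed endpoint. Here's a hypothetical Python sketch of the request it builds; the endpoint URL and headers are placeholders, not real Plumb values, and the one thing the demo does confirm is that the body keys must match the pipeline's input names exactly ("topic" and "keywords").

```python
import json

# Placeholder for the production endpoint copied from Plumb's deploy screen.
PLUMB_ENDPOINT = "https://example.invalid/pipelines/<pipeline-id>/run"

def build_pipeline_request(topic: str, keywords: str) -> dict:
    """Sketch of the request the API connector sends to the deployed
    pipeline. Body keys mirror the pipeline inputs; URL and any auth
    headers are assumptions to be replaced with your real values."""
    return {
        "url": PLUMB_ENDPOINT,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"topic": topic, "keywords": keywords}),
    }
```

A mismatched key name here ("key words" instead of "keywords", say) is exactly the kind of silent failure the demo warns you to check for.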
Final Adjustments and Testing
Now, there was one bit I missed that caused the workflow to fail: on the tweets node, after adding it back in as "tweets" under "generate three tweets", I have to leave it with no properties. With that fixed, it runs and works: we've got a response into Bubble, with all of this beautifully structured JSON data built from a combination of multiple LLM prompts using different models. Plumb has been there to save the day, and to save us time, especially debugging time.
Conclusion and Further Possibilities
So that's a really simple demonstration of what you can do with Plumb. I'm going to record a video on the webhook feature for when you're waiting a really long time for an LLM to respond, but there's so much more you can do. I'd recommend heading over to useplumb.com (we'll have a link in the description) and reaching out to the team. They've been working closely with me, and they've done some mind-blowing pipeline demos, including things like feeding a transcription into an AI and separating out the different speakers. It's amazing. So do click the link in the description to find out more about useplumb.com.