3 Ways Plumb Will Supercharge Your No Code AI App
Elevate your no-code AI app with Plumb's powerful features, including a versatile pipeline builder for managing multiple AI requests, seamless swapping between different large language models (LLMs), and the ability to combine AI-generated content into structured JSON responses. Whether you're using Bubble.io or another platform, Plumb ensures efficient, high-quality outputs without the hassle of timeouts, thanks to its robust webhook integration.
Supercharge your no-code AI app: Swap LLMs, combine responses, and bypass timeouts with Plumb.
Build smarter, faster: Create powerful AI-driven no-code apps with Plumb's pipeline builder and webhook integration.
Unlock AI potential: Easily compare LLM outputs and structure JSON responses for your next-level no-code project.
Plumb will take your no-code AI app to the next level. Here are three reasons why.
1. Advanced Pipeline Builder for Multiple AI Requests
First of all, you get a powerful pipeline builder that lets you group together multiple nodes and run multiple AI requests. For example, I take an input, which is a topic, enter "building no-code apps with AI", and then say: generate a blog post and generate some tweets. And let's just point out on the tweets that it generates three separate tweets, not one block of text that you'd then have to split apart yourself if you're building this with Bubble.io.
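To make the fan-out concrete, here's a minimal sketch of what that pipeline does conceptually. This is not Plumb's internals: `call_llm` is a hypothetical helper standing in for whichever provider each node is configured to use.

```python
# Conceptual sketch only: one input fanning out to two independent AI steps.
# `call_llm` is a hypothetical stand-in for a real provider SDK call.

def call_llm(model: str, prompt: str) -> str:
    return f"[{model} output for: {prompt[:40]}...]"  # placeholder response

def run_pipeline(topic: str) -> dict:
    # Node 1: generate a blog post from the topic.
    blog = call_llm("gpt-3.5-turbo-16k", f"Write a blog post about: {topic}")
    # Node 2: generate three *separate* tweets, so nothing needs
    # splitting apart later in Bubble.
    tweets = [
        call_llm("gpt-3.5-turbo-16k", f"Write tweet {i + 1} of 3 about: {topic}")
        for i in range(3)
    ]
    return {"blog": blog, "tweets": tweets}

print(run_pipeline("building no-code apps with AI"))
```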
Easy Model Swapping and Comparison
You've then got the ability to dive in and easily swap out the models. For example with the blog, let's go in: you can see that I'm running this on GPT-3.5 Turbo 16k. Now I might think, right, I know there are newer models, let's just check it with GPT-4. So I can swap in GPT-4 and run the whole thing again. This is going to take slightly longer, because GPT-4 is a model that takes longer to respond.
Okay, now I've got my response back from GPT-4. So that's my point one: it allows you to easily swap in different LLMs and try them. In fact, let's dive back in. What if I thought, oh, OpenAI, that's old news, I want to run this on Anthropic? And I want to use Sonnet, which is their middle-of-the-road model. So I've swapped that in, and now I'm going to run it again on Sonnet.
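Plumb hides the plumbing here, but it's worth seeing what a swap means in code: within one provider it's a one-parameter change, while switching providers means a different SDK and response shape entirely. A rough sketch using the official OpenAI and Anthropic Python SDKs, with model IDs current at the time of writing:

```python
# Rough sketch of what a model swap means in code (Plumb does this for you).
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic

prompt = "Write a blog post about building no-code apps with AI."

# Within OpenAI, swapping models is a one-parameter change.
gpt = OpenAI().chat.completions.create(
    model="gpt-4",  # previously "gpt-3.5-turbo-16k"
    messages=[{"role": "user", "content": prompt}],
)
print(gpt.choices[0].message.content)

# Switching to Anthropic means a different SDK and a different response
# shape, which is exactly the friction the node swap removes.
sonnet = anthropic.Anthropic().messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=2048,
    messages=[{"role": "user", "content": prompt}],
)
print(sonnet.content[0].text)
```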
Using Prompt Labs for Side-by-Side Comparisons
Okay, and we can see that Sonnet is actually giving me by far the longest response. But let's take it one step further. If I go into configure steps, they've just recently added a feature called Prompt Labs. We can see that my prompt is here, and I'm going to copy it over to this side and swap out some of the data. If we go back, we can see this is what it's currently producing. Sonnet is middle-of-the-road, so what if I want to use Opus, because I want the best-written blog I can get? Well, I just swap in Opus and click run.
This now gives me a side-by-side comparison, which is incredibly helpful if you're building an AI app and want to strike the right balance between token cost and output quality. So let's see what response we get back. And here is my Claude Opus response. I'll give it a quick skim read, but I'm going to assume that, because it's a more powerful model, it's a higher-quality response. And if I like it, I can just click "use in pipeline" and it's swapped into my pipeline.
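Conceptually, Prompt Labs is a comparison harness like the sketch below (illustrative, not its actual implementation). `call_llm` is again a hypothetical stand-in for the provider call:

```python
# A minimal side-by-side harness in the spirit of Prompt Labs (illustrative,
# not its implementation). `call_llm` is a hypothetical provider stand-in.

def call_llm(model: str, prompt: str) -> str:
    return f"[{model} output...]"  # placeholder response

prompt = "Write a blog post about building no-code apps with AI."
candidates = ["claude-3-sonnet-20240229", "claude-3-opus-20240229"]

for model in candidates:
    output = call_llm(model, prompt)
    # Length is a crude proxy; in practice you'd weigh output quality
    # against per-token cost, since Opus costs considerably more than Sonnet.
    print(f"--- {model}: {len(output)} chars ---")
    print(output[:300])
```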
2. Combining Responses into Structured JSON
So let's move on to my second point: the ability to combine responses into structured JSON. You'll notice in my pipeline that I've got two different AI nodes: generate blog and generate tweets. To combine responses I use the compose node, which is up here. I've dragged that in, and it allows me to take the outputs from my generate blog step, even choosing a particular part of the output (do I just want the title, or the content?), and combine them into the same response. My output is now a combined response, which means that in the results I get both my tweets and my blog post.
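Here's a sketch of the kind of combined payload you end up with. The key names are illustrative; the real keys depend on what you call your steps and which output fields you select in the compose node:

```python
import json

# Illustrative shape of a composed response; actual keys depend on your
# step names and the output fields you pick in the compose node.
combined_response = {
    "blog": {
        "title": "Building No-Code Apps with AI",
        "content": "Full blog post body...",
    },
    "tweets": [
        "Tweet one...",
        "Tweet two...",
        "Tweet three...",
    ],
}

# Bubble's API Connector can then read blog.title, blog.content, and the
# tweets list from one response instead of making two separate calls.
print(json.dumps(combined_response, indent=2))
```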
Overcoming Output Window Constraints
This is particularly useful if you want to generate a lot of content. You may have been putting all of that into a single prompt, but the issue is that there's still quite a constraint on the output window of all of these LLMs, whether that's OpenAI or Anthropic. If you're generating a blog post, you may find it takes your whole output window, leaving no space for other content. Plumb lets you take the outputs of multiple AI steps and combine them into a single response.
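Back-of-the-envelope numbers make the constraint obvious. Assuming a 4,096-token output cap, which is typical of many chat models at the time of writing:

```python
# Illustrative figures only: token counts vary by model and prompt.
OUTPUT_CAP = 4096        # assumed per-call output limit
blog_tokens = 3500       # a long-form post can eat most of the window
tweets_tokens = 900      # three tweets plus formatting

print(blog_tokens + tweets_tokens > OUTPUT_CAP)  # True: one call can't fit both
# Two separate AI steps each get a fresh output window; Plumb's compose
# node then merges the results into one response.
```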
3. Handling Long-Running Requests with Webhooks
My third point, still on the topic of responses: what if I said let's generate 10 blog posts, using GPT-4 or Claude 3 Opus? That's going to take more than 60 seconds, maybe more than two minutes, and if I'm building this with a Bubble app, I'm going to find that my Bubble app times out. When Bubble sends a request to Plumb, which sits in the middle between your Bubble app and these LLMs, it waits for a response, but in the end it will eventually time out.
Here Plumb comes to the rescue again: among the destinations there's a webhook. I can add the webhook node and tell the pipeline to send its output to the webhook. Then I delete the old response node, move my response up to the top, give it some space, and change the response to be empty.
Setting Up a Backend Workflow for Long-Running Requests
So here's what this is going to do: when I send data through the Bubble API Connector to Plumb, I get back a 200 response straight away, confirming the data has been received and there were no errors, at least in that initial step. Plumb then runs all of the additional steps and finally sends the results to a webhook. So if I'm building this in Bubble, I can set up a backend workflow, make sure it's public and accessible, probably change it to POST so I've got a bit more structure, and then have all my data sent back into my backend workflow. That gets around the whole issue of timing out on really large API requests that are going to take a very long time, more than a few minutes, to respond.
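In Bubble you configure this visually, but the shape of the exchange looks like the sketch below, with a Flask endpoint standing in for the public backend workflow. The payload fields are illustrative, not Plumb's documented schema:

```python
# Minimal sketch of the receiving end. A Flask endpoint stands in for
# Bubble's public backend workflow; field names are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/plumb-results", methods=["POST"])  # POST gives structured data
def plumb_results():
    payload = request.get_json()
    blog = payload.get("blog")
    tweets = payload.get("tweets")
    # ...store the generated content in your database here...
    return jsonify({"received": True}), 200  # acknowledge delivery quickly

if __name__ == "__main__":
    app.run(port=5000)
```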
Matching Inbound Data with Existing Database Entries
One issue you'll find when you use a backend workflow to process inbound data is how to match it up with existing data. Say you have a data type called Content: your user inputs some information about their business, and you use that to generate the blog and the tweets. You've got that Content entry, and you want to make sure the data that comes back from Plumb into the webhook gets matched up with the right entry in your database.
One way to do that is to add an input called something like unique ID. That way you can pass the unique ID from your Bubble database into the input of your Plumb pipeline, and then include that same unique ID in your output unchanged. (In fact, you need direct links between nodes in Plumb for data to be passed between them.) This means that in my webhook payload I can supply the unique ID. So in my Bubble app, with all of this data coming into a backend workflow, I can do a search in my database for the Content entry I want to fill with this AI-generated content and match it up using the unique ID.
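The matching logic itself is simple. A sketch, assuming the webhook payload echoes back the unique ID you supplied as an input; the in-memory dict stands in for Bubble's "Do a search for" on your Content type, and the IDs are made up:

```python
# Sketch of matching inbound webhook data to an existing record.
# The dict stands in for your Bubble database; IDs are illustrative.
database = {
    "1699021x123": {"business": "Acme Co", "blog": None, "tweets": None},
}

def handle_webhook(payload: dict) -> None:
    record = database.get(payload["unique_id"])  # match on the echoed ID
    if record is None:
        return  # unknown ID: nothing to update
    record["blog"] = payload["blog"]
    record["tweets"] = payload["tweets"]

handle_webhook({
    "unique_id": "1699021x123",
    "blog": "Generated blog post...",
    "tweets": ["Tweet 1", "Tweet 2", "Tweet 3"],
})
print(database)
```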
Recap: Why Plumb Will Supercharge Your No-Code AI App
There are so many possibilities with what you can do with Plumb, and I'd love to hear your suggestions down in the comments. But as a quick recap, why is Plumb going to take your no-code app to the next level? It enables you to easily swap in and preview different LLMs; it allows you to combine the responses of multiple AI generation nodes into a single response; and it lets you send all of that data back into your app using a webhook, which is especially useful if it's going to take a long time to generate all of that amazing AI content.
Ready to Transform Your App Idea into Reality?
Access 3 courses, 400+ tutorials, and a vibrant community to support every step of your app-building journey.
Start building with total confidence
No more delays. With 30+ hours of expert content, you’ll have the insights needed to build effectively.
Find every solution in one place
No more searching across platforms for tutorials. Our bundle has everything you need, with 400+ videos covering every feature and technique.
Dive deep into every detail
Get beyond the basics with comprehensive, in-depth courses & no code tutorials that empower you to create a feature-rich, professional app.
14-Day Money-Back Guarantee
We’re confident this bundle will transform your app development journey. But if you’re not satisfied within 14 days, we’ll refund your full investment—no questions asked.
Frequently Asked Questions
Find answers to common questions about our courses, tutorials & content.
Do I need coding experience to take these courses?
Not at all. Our courses are designed for beginners and guide you step-by-step in using Bubble to build powerful web apps—no coding required.
How long will I have access to the content?
Forever. You'll get lifetime access, so you can learn at your own pace and revisit materials anytime.
What if I get stuck while building my app?
Our supportive community is here to help. Ask questions, get feedback, and learn from fellow no-coders who've been where you are now.
Is there a money-back guarantee?
Absolutely. If you're not satisfied within 14 days, just reach out, and we'll issue a full refund. We stand by the value of our bundle.
Is this discounted price a limited-time offer?
Yes, this is a special limited-time offer. The regular price is $350, so take advantage of the discount while it lasts!