Chatbot Builder

CuddlyWuddly

Verified purchaser

Deals bought: 59 · Member since: Mar 2021
2 stars
Oct 27, 2025

Higher costs than benefits

The user interface and user experience are excellent, with a lot of thought put into the product. However, the final cost-benefit analysis fell short for me. The founder’s insistence on not offering white labelling, even at the highest tier, skewed that analysis towards higher costs.

Another issue, which I’m unsure is specific to the product or to RAG technology in general, was the excessive number of tokens sent as input to the LLM, which significantly increased the cost of even a single input/output exchange. Maybe RAG inherently requires that many input tokens, or perhaps this software is not efficient enough at optimising them. I compared this with Dialogflow, which is at least ten times cheaper. I understand that use cases may differ between the two, but for my specific use case, both products serve the purpose. If I had no other options, I would still opt for this despite the high input token count and the ten dollars a month for white labelling. However, in the AI landscape, there is no shortage of options.
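To put rough numbers on why input tokens dominate the bill, here is a quick back-of-the-envelope calculation. Every figure below is an illustrative placeholder, not a measurement from either product:

```python
# Back-of-the-envelope only: all figures are illustrative placeholders,
# not measured values from Chatbot Builder or Dialogflow.

input_tokens_per_query = 6_000   # RAG context + system prompt + history (assumed)
output_tokens_per_query = 300    # a typical short answer (assumed)
price_per_1k_input = 0.0025      # USD per 1K input tokens (hypothetical)
price_per_1k_output = 0.0100     # USD per 1K output tokens (hypothetical)

input_cost = input_tokens_per_query / 1000 * price_per_1k_input
output_cost = output_tokens_per_query / 1000 * price_per_1k_output
total = input_cost + output_cost

print(f"Cost per exchange: ${total:.4f}")
print(f"Input-token share of cost: {input_cost / total:.0%}")
# With numbers like these, input tokens account for roughly 83% of the
# per-query cost, which is why a bloated RAG context gets expensive fast.
```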

There were also minor issues, such as one of my bots suddenly stopping working: whenever I opened it, it prompted me to choose between four LLM options, but clicking on them did nothing. I reported this to support but have not received a reply, even after a significant amount of time.

Despite these issues, I must say that the UI/UX is one of the best I’ve ever seen for a chatbot, and some of the functionalities are unique and well-developed. But ultimately, the decision to purchase depends on the overall value the product provides, and for me, the reasons outlined above make it an unsatisfactory choice.

Founder Team
Akshay_ChatbotBuilder

Oct 28, 2025

Hello,

Thank you for taking the time to write such an incredibly detailed, honest, and thoughtful review. Even though it's a 2-star review, this is the exact kind of feedback that helps us get better, so we genuinely appreciate it.

It's clear you've done a deep dive into the tool, and I want to address your points directly, especially regarding the token usage and the support issue, as those are areas we've been working on.

On Token Usage & Efficiency (RAG): You are 100% correct to be laser-focused on this. An inefficient RAG pipeline can absolutely drive up costs, and your comparison with Dialogflow is a fair one for certain use cases.

This has been a massive area of development for us. I'm happy to share that we have recently implemented several new, significant mechanisms to make our bot prompts stronger and more efficient, with the specific goal of reducing input token usage:

1. Optimized Context Retrieval (DEPA): We've completely overhauled our RAG logic by implementing our new DEPA (Dynamic Element Prioritization Algorithm). Instead of grabbing any data that might be relevant, this system is far more selective: it prioritizes and pulls fewer, but higher-quality, data chunks that are semantically closer to the user's specific query. This drastically reduces the number of irrelevant tokens fed to the LLM (see the first sketch after this list).

2. Prompt Compression (GEPY): We have re-engineered our internal system prompts using a new GEPY (Generative Prompt Yield-optimization) process. This method analyzes and compresses the static instructions (like the bot's "personality") to be far more concise and token-efficient, reducing the number of static tokens sent with every single message without losing the bot's core instructions or tone (see the second sketch after this list).

3. Smarter Conversation History Management: This is a big one. Instead of sending the entire chat history back to the LLM in long conversations, our system now has a much better "sliding window" and summarization mechanism. It automatically summarizes the earlier parts of the conversation, keeping the context intact while dramatically cutting down on token load as a chat progresses (the third sketch after this list shows the shape of this).
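To make the first mechanism concrete, here is a minimal sketch of budget-capped, similarity-prioritized retrieval. DEPA's internals are not public, so everything here (names, thresholds, budgets) is an assumption about the general technique, not our actual code:

```python
# Illustrative sketch only: keep the best chunks first, drop anything below
# a quality bar, and stop once an input-token budget is exhausted.

from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    similarity: float   # cosine similarity to the user query, in [0, 1]
    token_count: int    # pre-computed token length of the chunk

def select_context(chunks: list[Chunk],
                   min_similarity: float = 0.75,
                   token_budget: int = 1500) -> list[Chunk]:
    """Return fewer, higher-quality chunks instead of everything vaguely relevant."""
    selected, used = [], 0
    # Best-first: the most semantically relevant chunks are considered first.
    for chunk in sorted(chunks, key=lambda c: c.similarity, reverse=True):
        if chunk.similarity < min_similarity:
            break  # everything after this point is below the quality bar
        if used + chunk.token_count > token_budget:
            continue  # skip chunks that would blow the input-token budget
        selected.append(chunk)
        used += chunk.token_count
    return selected
```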
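The second mechanism, prompt compression, is commonly achieved by rewriting the static system prompt once, offline, and caching the short version so every subsequent message carries fewer static tokens. GEPY's actual process is different and not public; the client, model name, and instructions below are assumptions used purely for illustration:

```python
# Illustrative sketch only: one-time, offline compression of the static
# per-message instructions via an LLM call (assumes the openai package and
# an OPENAI_API_KEY in the environment; model choice is arbitrary).

from openai import OpenAI

client = OpenAI()

COMPRESSION_INSTRUCTIONS = (
    "Rewrite the following chatbot system prompt to be as short as possible "
    "while preserving every instruction, constraint, and the bot's tone. "
    "Return only the rewritten prompt."
)

def compress_system_prompt(verbose_prompt: str) -> str:
    """Compress once, cache the result, reuse it on every user message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": COMPRESSION_INSTRUCTIONS},
            {"role": "user", "content": verbose_prompt},
        ],
    )
    return response.choices[0].message.content
```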
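And a minimal sketch of the third mechanism, a sliding window with summarization. The window size is an assumption, and summarize() is a stand-in for a cheap LLM summarization step:

```python
# Illustrative sketch only: a minimal "sliding window plus summary" history
# manager of the kind described above.

def summarize(turns: list[dict]) -> str:
    """Placeholder: in practice this would be a small, cheap LLM call."""
    return "Earlier in the conversation: " + " | ".join(
        f"{t['role']}: {t['content'][:60]}" for t in turns
    )

def build_history(turns: list[dict], window: int = 6) -> list[dict]:
    """Send the last `window` turns verbatim; fold older turns into one summary."""
    if len(turns) <= window:
        return turns
    older, recent = turns[:-window], turns[-window:]
    # One short system message replaces the whole early conversation, so the
    # token load stays roughly flat no matter how long the chat runs.
    return [{"role": "system", "content": summarize(older)}] + recent
```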

These three changes were a top priority for us precisely because they directly impact our users' operational costs. We are confident that if you were to test this again, you would see a substantial reduction in the input tokens used per query.

On the Bug & Support: Regarding the bot that suddenly stopped working with the LLM pop-up: you were absolutely right. That was a bug on our end that we identified and have since fixed. However, the fact that you reported it and never received a reply is a clear failure on our part, and I am very sorry for that. We take support seriously, and your ticket should not have been missed. So we can investigate what happened with our support process and where the ball was dropped, could you please share the email address or support ticket ID you used to reach out to us?

On White-Labeling: I hear you, and I understand. We know the $10/mo add-on was unpopular with many. It was a tough strategic choice we made to unbundle the aesthetic feature from the core functional platform. This was to ensure our long-term sustainability so we can continue to support and invest in the platform for all our AppSumo users for the long haul. That said, your cost-benefit analysis is entirely fair, and we absolutely respect your decision.

It sounds like the main issues were the white-label cost, the high token usage, and the bug, compounded by the missed support ticket. We've taken your feedback to heart and have already shipped improvements (such as DEPA and GEPY) to improve token efficiency, and the bug has been resolved. Given that two of the three primary issues you faced are now addressed, we would be incredibly grateful if you could reconsider and update your review.

We'd love a chance to show you that we are building a product (and support team) that is worthy of a better rating.

Thank you again for this invaluable feedback.

Regards,
Akshay
