Chatbot Builder

Product details

Q: What does "agents" mean? Anthropic credits

Is it like... instances of chatbots, right?

They can be on different domains or subdomains, each with a different "chatbot knowledge base". Correct?

Also, you said you used Anthropic credits on the free trial. But a lot depends, of course, on the efficiency of your prompts. I'm not really sure how to calculate the costs. Do you have any idea, or maybe a quick calculator available?

Founder Team
Akshay_ChatbotBuilder
Aug 6, 2025

A: Hello,

Thank you for the excellent questions. These are important for understanding how the platform works and how to manage costs effectively.

Here is a breakdown of your questions:

What does "agents" mean?
You are correct in your understanding. An "agent" in Chatbot Builder is essentially a single, independent chatbot instance. Each agent has its own:

Knowledge Base: A unique set of training data (e.g., website content, PDFs, FAQs) that it learns from.

Settings: Its own custom instructions, goals, and behavior.

Deployment: It can be embedded on its own unique domain or subdomain.

This means that with a plan that includes multiple agents, you can have a completely separate and distinct chatbot for each of your websites, each trained on different content.
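To make the "independent agents" idea concrete, here is a hypothetical sketch of what one agent's configuration might hold. The field names are illustrative only, not the product's actual schema:

```python
# Hypothetical sketch of "one agent per site"; field names are
# illustrative assumptions, not Chatbot Builder's real data model.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    knowledge_base: list[str]  # training sources: URLs, PDFs, FAQs
    instructions: str          # custom behavior and goals
    deploy_domain: str         # domain/subdomain where the widget is embedded

# Two fully independent agents: separate data, behavior, and domain.
agents = [
    Agent("Support Bot", ["https://docs.example.com"],
          "Answer support questions.", "example.com"),
    Agent("Sales Bot", ["pricing.pdf"],
          "Qualify leads politely.", "shop.example.org"),
]
```

Nothing is shared between the two entries above, which is the point: retraining or reconfiguring one agent leaves the other untouched.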

Anthropic Credits & Cost Calculation
You are right to be concerned about the costs, as they can be difficult to predict. The cost is not a simple flat fee; it's based on consumption, primarily the number of tokens processed.

A crucial distinction between our trial and the AppSumo deal:

Our trial at Chatbotbuilder.net uses our own Anthropic API key, so you are using our credits.

The AppSumo Lifetime Deal is a "Bring Your Own Key" (BYOK) deal. This means you will need to get your own Anthropic API key and input it into your Chatbot Builder account. Any credits used by your bot will be paid directly by you to Anthropic.

Here is how the cost is calculated:

How it works: When a user asks a question, your chatbot sends a prompt (which includes the user's message and the conversation history) to the underlying large language model (LLM), such as Anthropic's Claude. The LLM then generates a response. Both the prompt (input) and the response (output) are measured in "tokens." The total number of tokens consumed determines the cost, which you pay directly to Anthropic.

Efficiency: The cost is highly dependent on the "efficiency" of your prompts. Shorter, more direct questions and shorter bot responses will consume fewer tokens and cost less.

Cost Calculators: While we don't have a built-in calculator, you can use one like the Anthropic Claude Pricing Calculator to get a general idea. You would need to input your estimated number of input and output tokens per month.

Here are the key variables you need to consider for a rough calculation:

Input Tokens: The tokens in the user's question plus the chatbot's internal instructions (a token is roughly 3–4 characters of English text).

Output Tokens: The tokens in the bot's response.

Number of Messages: The total number of interactions you expect per month.

Since Anthropic and other LLM providers charge per token, you can use these calculators to get a good sense of your potential expenses by modeling different usage scenarios.
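As a rough model of the calculation described above, here is a small sketch. The per-token rates are illustrative placeholders, not Anthropic's actual prices; always check Anthropic's current pricing page for the model you use:

```python
# Rough monthly cost estimate for an LLM-backed chatbot.
# The default rates below are placeholder assumptions, NOT real
# Anthropic prices; substitute the current published rates.

def estimate_monthly_cost(
    messages_per_month: int,
    avg_input_tokens: int,    # user question + instructions + retrieved context
    avg_output_tokens: int,   # the bot's reply
    input_price_per_mtok: float = 3.00,    # USD per 1M input tokens (placeholder)
    output_price_per_mtok: float = 15.00,  # USD per 1M output tokens (placeholder)
) -> float:
    input_cost = messages_per_month * avg_input_tokens / 1_000_000 * input_price_per_mtok
    output_cost = messages_per_month * avg_output_tokens / 1_000_000 * output_price_per_mtok
    return input_cost + output_cost

# Example scenario: 10,000 messages/month, ~1,200 input and ~300 output tokens each
print(f"${estimate_monthly_cost(10_000, 1_200, 300):.2f}/month")  # → $81.00/month
```

Modeling a few scenarios this way (light vs. heavy traffic, short vs. long answers) gives a realistic cost range before you commit to a usage pattern.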


Yes, but... I can prompt things, or my users can prompt things. But it's your chatbot that will eventually send the "ultimate" prompt, right? Say I have 1,000 PDFs indexed (5,000 pages) via your chatbot. Does your prompt send them all to Anthropic? Or how should I see this? How is it managed efficiently?

Founder
Posted: Aug 6, 2025

No, the chatbot doesn't send all documents. It uses a technique called Retrieval-Augmented Generation (RAG). Your prompt is used to find the most relevant document chunks from your indexed data. These few chunks are combined with your prompt to form a concise query. This focused query reduces the number of tokens sent to the language model, which in turn significantly lowers processing costs.
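The RAG flow described above can be sketched as follows. This is a minimal stand-in, not Chatbot Builder's implementation: real systems retrieve chunks with vector embeddings, while this sketch uses simple keyword overlap so it stays self-contained:

```python
# Minimal RAG sketch. Production systems score chunks with vector
# embeddings; keyword overlap is used here only as a self-contained
# stand-in for the retrieval step.

def retrieve_top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    # Only the few most relevant chunks are sent to the LLM, never the
    # whole knowledge base -- this is what keeps the token count low.
    context = "\n---\n".join(retrieve_top_chunks(question, chunks))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refund policy: refunds are available within 30 days of purchase.",
    "Shipping: orders ship within 2 business days.",
    "Warranty: hardware is covered for one year.",
]
print(build_prompt("What is your refund policy?", docs))
```

So even with 1,000 PDFs indexed, each query sends only a handful of retrieved chunks plus the question, and you pay Anthropic for those few thousand tokens rather than the entire corpus.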