Q: OpenAI API + Compliance

Hello, from your experience with other clients, roughly how much extra cost (or how many tokens) should I budget for when using the OpenAI API?

Also, because of compliance, everyone should really buy Tier 3, because chat history is included there. If anything goes wrong with the chatbot and you need to prove what was said, that's impossible on Tiers 1 and 2. Chat history should always be included.

ckny
Jan 3, 2025

Founder Team
tinytalk
Jan 3, 2025

A: Hi! 👋

Thank you for the questions.

Based on our experience, the outcome will largely depend on several factors specific to your use case and business. These factors can be categorized into three main areas: the number of monthly active users interacting with your chatbot, the frequency of these interactions, and the duration of each conversation.

You may have several thousand users engaging with your chatbot, but only once a month. Conversely, you might have a few hundred users who interact with the chatbot continuously throughout the month.

We have clients who spend as little as $5-10 per month, while others spend several hundred dollars due to high interaction rates.

Our recommendation is to test the system for a few weeks or months. You can create an OpenAI API key for Tiny Talk and monitor its usage through the OpenAI Platform dashboard. To manage costs, you might start with a $10 credit at OpenAI and set a budget limit to avoid overspending.
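If you prefer to track spend programmatically as well, every API response reports the tokens it consumed. Below is a minimal sketch, assuming the official `openai` Python SDK; Tiny Talk makes these calls for you, but the same usage figures are roughly what the OpenAI Platform dashboard aggregates for billing.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Each response includes a usage breakdown you can log per request.
usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```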

We recommend using the GPT-4o or GPT-4o mini models due to their good balance of quality and cost. You can find API pricing here: https://openai.com/api/pricing

GPT-4o
$2.50 / 1M input tokens
$10.00 / 1M output tokens

GPT-4o mini
$0.150 / 1M input tokens
$0.600 / 1M output tokens

Keep in mind that during a single AI response, more tokens are sent to OpenAI than what is visible on the screen. Each request to OpenAI includes the retrieved documents from your knowledge base along with the last 10 messages of the conversation, averaging around 5000 tokens.
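To illustrate the shape of such a request (this is not Tiny Talk's actual code, just a sketch of the general pattern, with illustrative function and variable names), a single reply is assembled roughly like this, with the retrieved knowledge-base chunks and the recent history sent alongside the visible question:

```python
from openai import OpenAI

client = OpenAI()

def answer(question, retrieved_docs, history, model="gpt-4o-mini"):
    """Illustrative only: one reply sends the retrieved knowledge-base
    chunks plus the recent conversation, not just the visible question."""
    system = "Answer using only the context below.\n\n" + "\n\n".join(retrieved_docs)
    messages = (
        [{"role": "system", "content": system}]
        + history[-10:]  # last 10 messages of the conversation, as described above
        + [{"role": "user", "content": question}]
    )
    return client.chat.completions.create(model=model, messages=messages)
```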

This number can increase depending on your configuration. For instance, with GPT-4o mini and 5,000 input tokens per reply, 200 replies would cost roughly $0.15 in input tokens (a rough breakdown is sketched below). Remember, conversations typically consume more input tokens than output tokens.
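As a back-of-the-envelope check (the output-token figure here is only an assumed average for illustration):

```python
# Rough cost estimate for GPT-4o mini under the assumptions above.
input_price_per_1m = 0.15    # USD per 1M input tokens
output_price_per_1m = 0.60   # USD per 1M output tokens

replies = 200
input_tokens_per_reply = 5_000   # retrieved docs + last 10 messages
output_tokens_per_reply = 300    # assumed average reply length

input_cost = replies * input_tokens_per_reply / 1_000_000 * input_price_per_1m
output_cost = replies * output_tokens_per_reply / 1_000_000 * output_price_per_1m

print(f"input:  ${input_cost:.2f}")   # $0.15
print(f"output: ${output_cost:.2f}")  # ~$0.04
```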

Hope this helps and let us know if we can clarify anything further.

All the best,
Oscar
