Q: I’m currently using a chatbot company whose embedding process is poor at best.
It’s nice to have features and add-ons, but what I really need to know is how well these bots embed, recall, and learn. I’m working with large amounts of information, possibly as much as 100–500 MB of data that my bots will need to be trained on, so what I really need to know is:
1. How do you implement conversational memory for your chatbot? What type of memory do you use and how do you manage the token limits of the underlying model?
2. How do you measure the knowledge retention rate, knowledge recall rate, and knowledge fidelity rate of your chatbot? Can you provide some examples or benchmarks of how your chatbot performs on these metrics?
3. How do you handle file uploads for your chatbot? What formats and sizes do you support and how do you parse and embed the data from the files?
4. How do you ensure the security and privacy of the data that you upload and embed in your chatbot? How do you protect the data from unauthorized access or misuse?
5. How do you update and maintain your chatbot with the latest data and information? How often do you refresh the data and how do you notify the users of any changes?
The sooner these questions are answered to my satisfaction, the sooner I and several of my colleagues will sign up for your highest tier. We need a workable solution ASAP.
I understand that your time is limited and you probably have a lot of development projects on your plate, but I would appreciate whatever answers you could provide.
Thanks,
Ricky H.
agrass
May 15, 2024
A: Hi Ricky,
I'm sorry for the delay in getting back to you. I had to double-check a few things with our tech team.
So, here's the deal: we currently only store conversation memory within the same thread. That means the bot remembers previous questions and answers, but only within that specific thread. It's a bit like using ChatGPT and Bard, but with the added perks of saving costs and having access to multiple models at once.
Because the memory involved isn't that large, it's easy to store in a cache. Just so you know, though: if you start a new thread, the bot starts from scratch.
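To give you a feel for what thread-scoped memory with a token budget typically looks like, here's a minimal sketch. This is not our actual implementation; the class name, the `MAX_TOKENS` budget, and the 4-characters-per-token estimate are all illustrative assumptions.

```python
# Hypothetical sketch of per-thread conversational memory with a token
# budget. All names and numbers here are assumptions for illustration.
from collections import deque

MAX_TOKENS = 4000  # assumed context budget for the underlying model


def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)


class ThreadMemory:
    """Caches messages for one thread; a new thread starts empty."""

    def __init__(self) -> None:
        self.messages = deque()  # (role, text) pairs, oldest first
        self.tokens = 0

    def add(self, role: str, text: str) -> None:
        self.messages.append((role, text))
        self.tokens += estimate_tokens(text)
        # Drop the oldest turns once the budget is exceeded,
        # always keeping at least the most recent message.
        while self.tokens > MAX_TOKENS and len(self.messages) > 1:
            _, old_text = self.messages.popleft()
            self.tokens -= estimate_tokens(old_text)

    def context(self) -> list:
        """Return the messages to send as context for the next model call."""
        return list(self.messages)
```

A new `ThreadMemory` per thread is what produces the "starts from scratch" behavior: nothing carries over because each thread holds its own deque.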
Right now, we're not set up to process file uploads, but we're considering adding that to our roadmap. We also don't support fine-tuned models at the moment, though it's definitely something we're considering for the future. One heads-up, though: any training would need to be done by you, not with inputs from Slack.
Your data is only used as a cache for the conversation context, and it's removed afterwards.
If you have any other questions or suggestions, feel free to drop them on our roadmap: https://chatscope-ai.feedbear.com/. We're always open to feedback!
Cheers!