Q: Hi! First, I would like to say that this looks very exciting. Looking forward to seeing how it grows.
## Issues:
When I installed the app on Windows, after the onboarding started, the conversation window would not accept any image I dragged onto it, regardless of size, shape, dimensions, etc. In the end, I had to close the app, after which I was able to complete this step manually in the settings.
I know the local models are experimental, but when I enter any Ollama settings on Windows, the values are wiped as soon as I close and reopen the settings, as if the state is never saved.
## Questions:
I hope you don't mind, but I also have some questions.
Q. Will you also add support for OpenRouter as another avenue for API keys to access different models?
Q. I downloaded your 'heyalice-nodejs-backend-template' from GitHub, have the basic server running, and have entered my OpenAI key in the .env file. Does this mean that if I choose 'local' when using Alice, it will use the local backend but still route requests through my OpenAI key, since that is what is set in the custom backend? (I don't see how it could be using anything else, because there are no other models in the code from what I can see.)
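To make sure I understand the setup, here is a minimal sketch of the pattern I assume the template follows; the '/api/chat' route, the `openai` package, and the `gpt-4o-mini` default are my own guesses, not taken from your repo:

```typescript
// Minimal sketch of what I assume the custom backend does:
// read OPENAI_API_KEY from .env and forward chat messages to OpenAI.
import express from "express";
import OpenAI from "openai";
import "dotenv/config";

const app = express();
app.use(express.json());

// The key comes from the .env file, so even in "local" mode
// the completions would still be served by OpenAI.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.post("/api/chat", async (req, res) => {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed default; the template may use another model
    messages: req.body.messages,
  });
  res.json({ content: completion.choices[0].message.content });
});

app.listen(3000, () => console.log("Backend listening on http://localhost:3000"));
```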
Q. Can we also use the Ollama.js node package in the backend to run entirely on Ollama models? If so, I assume we would just need to modify chat.ts to call the Ollama endpoint instead?
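To illustrate what I mean, here is a rough sketch of how I imagine the handler could be swapped over, assuming the `ollama` npm package and its `chat()` method; the function shape is only illustrative, not your actual chat.ts:

```typescript
// Rough sketch: pointing the chat handler at the Ollama npm package.
// Assumes `npm install ollama` and a locally running Ollama server.
import { Ollama } from "ollama";

const ollama = new Ollama({ host: "http://localhost:11434" });

export async function chat(messages: { role: string; content: string }[]) {
  // "llama3" is just an example; any model pulled locally should work.
  const response = await ollama.chat({
    model: "llama3",
    messages,
  });
  return response.message.content;
}
```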
Q. Is it at all possible to connect to the Ollama app (https://ollama.com/) and use the default 'http://localhost:11434' as the server URL? And would we need to append anything to the endpoint, such as '/api/chat', similar to what the backend uses? (I couldn't try it, because, as stated above, the values get wiped on closing the settings.) The other reason I ask is that I do have some models downloaded locally through Ollama, but when I entered them as the model name, I kept getting "This model name is forbidden" - note: that happened even when I entered the name as 'local:my-model-name'.
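For reference, this is roughly the raw request I was hoping to end up with, based on the public Ollama REST API docs; the exact payload Alice sends is an assumption on my part:

```typescript
// Plain fetch against the Ollama REST API (no extra packages needed).
// POST http://localhost:11434/api/chat is the documented chat endpoint.
async function askOllama(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",                               // any model already pulled via `ollama pull`
      messages: [{ role: "user", content: prompt }],
      stream: false,                                 // single JSON response instead of a stream
    }),
  });
  const data = await response.json();
  return data.message.content;
}

askOllama("Hello!").then(console.log);
```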
Q. Will you also add support for software like LM Studio or Anything LLM, as additional options for running models locally for offline use?
Sorry for all the questions, and thanks. 😊
Thanks for the reply. I have already tried the steps above. I sent a video by email last night that showcases the issues; you will see that those steps do not work, even when using the local GitHub repo you mentioned (also shown in the video), so it does not solve the issue. Also, the message you just replied to mentioned that I had already installed the repo. Thanks.
I understand. I'll get back to you via email and we'll resolve this issue.