Q: Hi there, trying to keep it short:
1. Have you heard about https://jan.ai? Specifically https://jan.ai/docs/local-api, allowing me to interact with it from any device (e.g. iOS Shortcuts => HTTP request). I kinda read you are capable of that as well. => True?
2. I love the “DRY” principle (Don't Repeat Yourself). Currently I use a set of local makefiles to manage my nested snippets to create dynamic prompts for the same assistant (example: “Gandalf” could act as the fully fledged assistant providing comprehensive answers, but sometimes I first want him to explain things to me like I'm 5, without running the fully fledged model or producing the fully fledged detailed response. I therefore have snippets for skills, formats, lengths, …). Sounds complicated, but it's easier to work with in practice, since I only talk to Gandalf and not 15 variations with different names for each assistant.
Short: Can I nest (multiple) prompts of any depth, as well as multiple snippets of any depth, and ideally even get a “dynamic” prompt shown initially, with dependent dropdown boxes allowing me to adjust any setting at runtime? (And no, I don't want to create 1001 keyboard shortcuts.)
3. Is there an internal API allowing me to get/post/push or trigger things? (I read you mentioned webhooks, but afaik webhooks only work on registered domains?) Are those webhooks hosted on your end, and if so, what's the TTL? (If I reboot and a webhook gets triggered while I am offline, I want to be sure no request is lost but is processed once I am back online.)
4. Since you mention local models: those won't require any credits, since everything runs locally for those (as in jan.ai) => right?
5. Is your app context aware? To be precise: if I select content on a website in my browser, can I interact with it without copy/pasting it first?
6. Does it allow me to set “data sets” to refer to (e.g. a list of URLs or even a sitemap.xml) to perform actions on, or at least consider the content from those sites? Ideal would be a mix of local files, remote files (on a gDrive = not public content) as well as a list of URLs.
7. A dream come true: for my articles and community I often need to create supporting drawings (SVG-based). It's about trading and explaining candlestick patterns, trends, divergences, etc. Unfortunately I don't own a full set of SVG drawings for each and every case and situation. While I found AI useful in explaining things, it was not possible to get any AI engine to be smart enough to learn from a set of SVG drawings and adjust them. Is that something I could use Alice for and train her on (create my custom model from a set of data she uses herself to learn from and get better over time)? If so, can I ensure a separation of concerns (is she smart enough to understand in which context she works and not mix up the data sets / models she adds “knowledge” to)?
I feel this is a bit edgy, but that's what I deal with. I want my AI assistant to learn from things I flag as relevant, while knowing how to improve and not building again and again on things I marked as wrong and iterated over multiple times (= learn from the iterations but remember the result, so that even if I start a new conversation I don't start from scratch / don't have to load the full conversation again and again (a costly thing to do)).
overment
Jul 14, 2024
A: > Have you heard about https://jan.ai?
Yes, and I've heard a lot of good things about this.
> In specific https://jan.ai/docs/local-api allowing me to interact with it from any device (e.g. iOS shortcuts => http request). I kinda read you are capable of that as well. => True?
Not exactly, at least not now. Let me explain.
- Jan.ai provides a built-in local server that listens for INCOMING requests. It's like sending messages to Jan.ai.
- Alice lets you CONNECT to external automations through webhooks in Remote Snippets and receive a response. Alice's "Custom AI" feature enables you to connect to your own server, including localhost. This turns Alice into a visual interface for your custom solution, offering powerful capabilities.
To put it simply: Jan.ai can listen for external requests, while Alice can't. However, you can interact with external services and your own server through Alice's interface.
I hope that's clear. It was challenging to explain because the difference is subtle and technical.
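If it helps make the distinction concrete: talking to a local OpenAI-compatible server (which, as far as I know, is what Jan's local API exposes, by default on localhost:1337) looks roughly like this from any HTTP client. The port, model name, and endpoint path here are assumptions based on the usual OpenAI-style convention, so check your own setup:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_local_server(prompt: str,
                     base_url: str = "http://localhost:1337/v1",
                     model: str = "llama3") -> str:
    """POST the prompt to a local OpenAI-compatible endpoint and return the reply."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (needs the local server actually running):
# print(ask_local_server("Explain candlestick wicks like I'm five."))
```

This is exactly the kind of request an iOS Shortcut can send too, since it's plain HTTP + JSON.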
I accidentally sent a comment, so I'm continuing here. I'm limited by character count.
> Can I nest (multiple) prompts of any depth (...)
Yes and no. You can set the Assistant's general prompt and assign Snippets and Remote Snippets to extend its behavior. You can change active Snippets during the chat. I use this option in a similar way to what you've described.
> Is there an internal API allowing me to get/post/push or trigger things? (...)
CustomAI may serve as an internal API, but it's a feature for programmers. Here's a template to start with: https://github.com/iceener/heyalice-nodejs-backend-template
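The template linked above is Node.js, but the core idea is language-independent: an HTTP server on localhost that accepts a chat-style POST and returns a response. A minimal Python sketch of that shape (the endpoint, port, and payload format here are my assumptions; follow the template for the real protocol):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_chat(payload: dict) -> dict:
    """Turn an incoming chat payload into a response dict.
    Replace the body with calls to your own tools, RAG store, etc."""
    messages = payload.get("messages", [])
    last = messages[-1]["content"] if messages else ""
    return {"choices": [{"message": {"role": "assistant",
                                     "content": f"You said: {last}"}}]}

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_chat(payload)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run (then point the Custom AI setting at http://localhost:3000):
# HTTPServer(("localhost", 3000), ChatHandler).serve_forever()
```

Once your own logic sits in `handle_chat`, Alice becomes the visual front end for it.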
> but afaik webhooks only work on registered domains?
Nope, it can be localhost or a webhook generated with make.com, for example.
Never got localhost working for webhooks without a tunnel (e.g. hook0 or hookdeck), but recently fell in love with webhookinbox (all three are open source, though webhookinbox has a nice & simple UI). I think I've got to give Alice a try and see how I can utilize her. Overall you appear to be one of the first founders with an approach to AI that makes sense to me (especially in the long run).
> Since you mention local models: Those won’t require any credits since things run locally for those (as in jan.ai) => right?
Yep. CustomAI works the same way unless you connect your server to a paid service like Anthropic or OpenAI.
Perfect. Thanks for being someone who finally understands the needs many have but can't code themselves.
> Is your app context aware? Precise: If I select content on a browser website can I interact with it without copy/paste things first?
Nope, you need to copy and paste it first. To make it easier, you can assign a Snippet or Remote Snippet to a global keyboard shortcut, making it simpler to use. It's super handy, and I use it all the time (even right now).
I know it's a chicken/egg discussion, but it's not just about a specific task to trigger.
More often than not it's about contextual knowledge I want to add as a (remote) store used to skill up specific assistants (RAG). The beauty of Alice is that I could skip many steps + save myself a lot of the headache I have now (multiple points of failure to test/troubleshoot). Likely valid for many others as well.
> Does it allow me to set “data sets” to refer to (e.g. a list of URLs or even a sitemap.xml) to perform actions on that or at least considering the content from those sites
Only through Custom AI, as creating a generic, ready-to-use integration for all users is beyond my reach. You can create your own local server to handle tasks and use Alice as the interface for interacting with it.
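If you do go the Custom AI route, the sitemap part of your idea is simple on the server side. A sketch of expanding a sitemap.xml into the list of URLs your server could then fetch and index (the fetching/indexing itself is left out):

```python
import xml.etree.ElementTree as ET

# Standard namespace from the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_from_sitemap(xml_text: str) -> list[str]:
    """Extract all <loc> URLs from a sitemap.xml document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc") if loc.text]

# Example sitemap (hypothetical URLs):
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/candles</loc></url>
  <url><loc>https://example.com/divergences</loc></url>
</urlset>"""
```

Local files and gDrive content would each need their own connector on your server, but they all end up as the same thing: text your endpoint injects as context.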
Awesome! Having you build it was not the expectation. No worries.
> While I found AI useful in explaining things, it was not possible to get any AI engine to be smart enough to learn from a set of SVG drawings and adjust them (...)
I don't know if it's possible with the current generation of LLMs. I've created some custom graphics using ChatGPT Code Interpreter or Claude Artifacts, but they're not quite the SVG drawings I envisioned.
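That said, for candlestick charts specifically, the generation side doesn't have to be AI at all: a single candle is just a rect plus a line, so a small script can render any OHLC situation deterministically. A minimal sketch (the scaling and colors are arbitrary choices of mine):

```python
def candle_svg(o: float, h: float, l: float, c: float,
               width: int = 40, height: int = 200) -> str:
    """Render one OHLC candle as a standalone SVG string."""
    span = h - l or 1.0                      # avoid division by zero
    y = lambda p: (h - p) / span * height    # price -> pixel (high at top)
    color = "green" if c >= o else "red"     # bullish vs bearish body
    top, bottom = max(o, c), min(o, c)
    body_h = max(y(bottom) - y(top), 1.0)    # at least 1px so dojis stay visible
    x_mid = width / 2
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<line x1="{x_mid}" y1="{y(h)}" x2="{x_mid}" y2="{y(l)}" stroke="{color}"/>'
        f'<rect x="{width * 0.2}" y="{y(top)}" width="{width * 0.6}" '
        f'height="{body_h}" fill="{color}"/>'
        '</svg>'
    )

# Bullish candle: open 10, high 15, low 9, close 14
svg = candle_svg(10, 15, 9, 14)
```

An LLM is then only needed to pick the parameters ("draw me a hammer after a downtrend"), not to hand-edit SVG markup, which is where they tend to fail.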
I knew this was sort of unfair to ask, but still had to. Didn't mean to put you in an uncomfortable position with this.
> I feel this is a bit edgy but that's what I deal with. I want my AI assistant to learn from things (...)
It's outside Alice's current scope, but I have my own, private "Custom AI" server that does most of the things you've mentioned. It has short-term and long-term memory, a set of tools, and can "reason" about things.
It's called an "agentic system." You can read more about it.
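To make the "long-term memory" part concrete: the core trick is persisting facts you flag as relevant and injecting them into every new conversation's context, so you never re-send the full history. A toy sketch of that idea (the file format and naming are my own, not how any specific system does it):

```python
import json
from pathlib import Path

class Memory:
    """Persist flagged facts between conversations in a JSON file."""

    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        self.facts: list[str] = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, fact: str) -> None:
        """Store a fact the user flagged as relevant."""
        self.facts.append(fact)
        self.path.write_text(json.dumps(self.facts))

    def context(self) -> str:
        """System-prompt prefix for a fresh conversation."""
        if not self.facts:
            return ""
        return "Known facts from earlier sessions:\n- " + "\n- ".join(self.facts)

# In a new conversation, inject memory.context() as a system message
# instead of replaying the whole previous chat.
```

Real agentic systems replace the flat list with vector search, corrections, and decay, but the shape is the same: remember results, not transcripts.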
The best thing about your response is that I don't feel like I'm abusing things anymore. Makes me feel less “edgy” now. Thanks. Will find the resource you mentioned online, I guess.
Thanks for getting back & taking the time to respond. I like the option to trigger external automations & receive their response, but I would seriously love to be able to, e.g., kick off things while browsing without switching context (= send it to my assistant rather than switching windows, to name a maximally simplified case). Getting Alice to “listen” would seriously maximize her value even more.