Q: Regarding LLMs and prompting
I purchased Tier 2 (today) and have not yet run a project.
1. You mention "4x for using Advanced AI models" - You predict, "...10-20% improvement in quality from using top-tier models." What are Advanced AI models? And what are the 'everyday' Non Advanced AI models? Are we talking Sonnet vs Opus etc?
2. I see in the AppSumo video different LLMs being used for different aspects of the production process. If I opt to bump up to Tier 3 or 4 to gain access to BYOK discounts, do I have the option to deploy a similar mix?
3. Is the "mix" spelled out so that I may copy what works?
4. The project setup makes what you call "prompting" seem like just a few simple one-liners ("Describe the subject of your non-fiction book"). Are we better off if we understand prompting, i.e., Role, Task, Context, Self-Reflect, Meta-Fix, Output?
Thanks
Ioannis_Youbooks
Sep 30, 2025

A: No, we do not use Opus. In fact, we don't even allow BYOK customers to use Opus, because it is too expensive and makes no sense to use.
A non-BYOK project will use models from all providers to complete different tasks. The discussion mode also uses Llama and Mistral.
During evals, we found that using much more expensive models yielded only a small increase in quality. So, if I were you, I would not use the 4x mode. However, customers do tend to use it because they want the absolute best for some books, which is understandable. Activating 4x will introduce more reasoning and higher-tier models in some places of the workflow. If a task already uses a higher model in a basic run, we don't push it to "Opus" for the 4x run; it just uses the same model.
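The escalation rule described above (bump each task up a tier, but leave tasks that already run on a higher-tier model unchanged) can be sketched roughly as follows. The task names, model names, and tier ordering here are all illustrative assumptions, not Youbooks' actual configuration:

```python
# Hypothetical sketch of the 4x escalation rule: tasks already on a
# higher-tier model keep it; cheaper tasks are bumped one tier up.
# Task list, model names, and tiers are made up for illustration.

BASE_PLAN = {
    "outline": "claude-sonnet",    # already a higher-tier model
    "draft": "llama-70b",
    "discussion": "mistral-large",
    "polish": "gpt-4o-mini",
}

# Tiers from cheapest (0) to most capable (2) -- illustrative ordering.
TIER = {"gpt-4o-mini": 0, "llama-70b": 1, "mistral-large": 1, "claude-sonnet": 2}
UPGRADE = {0: "llama-70b", 1: "claude-sonnet"}  # one step up per tier

def plan_4x(base: dict) -> dict:
    """Escalate each task one tier; top-tier tasks stay where they are."""
    return {task: UPGRADE.get(TIER[model], model) for task, model in base.items()}

plan = plan_4x(BASE_PLAN)
# "outline" keeps claude-sonnet; the cheaper tasks move up one tier.
```

This matches the policy in the answer: 4x never pushes past the top configured tier (no "Opus"), it only raises the floor for the cheaper steps of the workflow.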