Prompt Architects

Madikis

Verified purchaser

Deals bought: 21
Member since: Oct 2011
2 stars
May 7, 2026

The product today doesn't match the present-tense feature claims that drove my Tier 3 purchase

I bought Tier 3 after asking pre-purchase questions. In his response under a "What we have today" heading, the founder wrote:

"Prompt Library with Context Slots — This is exactly what you're describing. You can save prompts with reusable context slots (project-specific variables, client names, tone settings, formatting rules, model-specific instructions). Build once, reuse everywhere."

Combined with the Tier 3 listing of "Personal context slots: Unlimited" and "Save your prompts: Unlimited," I expected that saving a prompt with {slot_name} references would let those slots resolve when the saved prompt is invoked. That's what "save prompts with reusable context slots" plus "build once, reuse everywhere" reads as in product terms.

What I found after testing:

Context slots only inject at prompt-improvement time, not at saved-prompt-execution time. When I save a prompt to the library that contains {slot_name} variables, the variables stay as literal text. Copying the saved prompt copies the literal {slot_name} string. There is no resolution mechanism in the web app, in copy-paste workflow, or via MCP.

The MCP integration confirms the same scope. It exposes four prompt operations (improve, refine, shorten, enhance) plus get_refinement_questions. The MCP page on the website lists exactly these tools and describes them accurately. There is no "execute saved prompt" tool and no slot resolution at the MCP boundary.

In practice this means every time a context slot's content needs updating, every saved prompt that depended on it has to be manually rebuilt. That's the opposite of build-once-reuse-everywhere as I understood the term.

A few smaller current issues:

The context slot content is capped at 3000 characters. I hit this limit twice in my first two slots on a real use case.

The context slot picker has no search bar and isn't alphabetized. With Unlimited slots on Tier 3, this becomes painful as the slot count grows. The category dropdown when saving a prompt has the same issues.

The model selector targets platforms (ChatGPT, Gemini, Claude) rather than specific model versions. Selecting "Claude" doesn't let me optimize differently for Sonnet versus Opus. The marketing language uses "model" loosely here, which is a smaller issue but worth mentioning.

Why two stars instead of one: the underlying product idea is right and the founder has been responsive in the Q&A section. The Improve Prompts feature does work, but unevenly. When I selected Claude as the target model, the output was usable. When I left it on the default "Any" option, the output got more generic and ceremonial. In one test run, the enhancer described the role as a "Montessori expert" even though nothing in my prompt or context referenced Montessori, and the school I was writing for is explicitly not Montessori. That kind of invented framing would create errors in any downstream content generated from the enhanced prompt.

I also noticed the structural headers (ROLE, CONTEXT, INSTRUCTION, etc.) appeared in English even when the rest of the prompt body was in French, which suggests the section labels are hardcoded rather than localized. In a few cases the response would start in English and then switch to French mid-output. So the enhancer is a real feature, but quality is inconsistent depending on configuration and language.

There is a real product here at Tier 1 or Tier 2 prices for someone who wants a prompt enhancer with persistent storage. The two stars reflect the gap between the present-tense claim about context slots and what the product actually does today, which is the specific reason I went Tier 3 instead of Tier 2.

What would change this review: making the context slots actually function as reusable across saved prompts (especially via MCP, where the implementation could scan invoked prompts for {slot_name} patterns and inject from the user's library before sending). Also alphabetized lists, search bars, removal of the 3000-character cap, and naming specific models in the model selector. If those land within my 60-day refund window, I will revise this review and stay on Tier 3.
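For clarity, the slot-resolution behavior I'm asking for is simple to sketch. The following is my own hypothetical illustration, not anything the vendor has implemented; the slot names, library structure, and function name are all invented for the example. The idea is just to scan a saved prompt for {slot_name} patterns and substitute from the user's slot library, leaving unknown placeholders untouched:

```python
import re

# Hypothetical user slot library; names and contents are invented for the example.
SLOT_LIBRARY = {
    "client_name": "Acme Corp",
    "tone": "formal, concise",
}

# Matches {slot_name} placeholders made of letters, digits, and underscores.
PLACEHOLDER = re.compile(r"\{([A-Za-z0-9_]+)\}")

def resolve_slots(prompt: str, library: dict) -> str:
    """Replace every {slot_name} whose name exists in the library;
    leave unknown placeholders as literal text."""
    def repl(match):
        name = match.group(1)
        return library.get(name, match.group(0))
    return PLACEHOLDER.sub(repl, prompt)

saved_prompt = "Write a {tone} update email for {client_name} about {topic}."
print(resolve_slots(saved_prompt, SLOT_LIBRARY))
# -> Write a formal, concise update email for Acme Corp about {topic}.
# {topic} is not in the library, so it stays literal.
```

Updating a slot's content in the library would then propagate to every saved prompt that references it at the moment of invocation, which is what "build once, reuse everywhere" implies.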

For other prospective buyers: what you're buying today is a prompt enhancer with persistent storage. Useful at Tier 1 or Tier 2. Tier 3 only makes sense once the context slots actually behave the way the founder's reply describes.

Founder Team
Nafiul_PromptArchitects

May 8, 2026

Hey Madikis,
I totally understand your concern, and thanks for the detailed review.

Would you mind sending us an email at [email protected] so that we can check our system and give you a solution to your problem?

I promise we will sort out the issue before Monday.

Thank you
