Prompt Architects

ZevsMatic
May 6, 2026

Q: How are my saved prompts protected and used?

Hi,
I’m considering using Prompt Architects to save, organise, and reuse my own prompts across different AI platforms.

My main concern is how you protect user-created prompt libraries.
Could you clarify how Prompt Architects handles this specifically?

1. Are saved prompts private by default?
2. Can your team access a user’s saved prompts?
3. Are saved prompts, templates, or uploaded assets ever used to train or improve AI models?
4. When a prompt is used with ChatGPT, Claude, Gemini, or another provider, what data is sent to those services?
5. What controls do users have to export, delete, or permanently remove their prompt library?

I’m looking to understand how Prompt Architects keeps my prompts private, protected, and under my control, and that you neither train on nor access them.

Best,
Marcus

Founder Team
Parves_PromptArchitects

May 6, 2026

A: Hi ZevsMatic,

Thank you for your question regarding data protection and privacy in Prompt Architects.

1. Privacy of saved prompts: Yes, all user-saved prompts are private by default.

2. Team access to user prompts: We do not encrypt prompts at rest; however, our team members have no access to user prompt libraries through any administrative panel, and we are committed to never accessing them, as outlined in our privacy policy.

3. Use of user data for AI training: No, we do not use any user data for training or improving AI models. Each user has exclusive access to create, edit, and view their own saved prompts. Your prompt library is isolated and accessible only to you.

4. Data transmitted to AI providers: When you use a prompt with ChatGPT, Claude, Gemini, or another provider, only the final processed/modified prompt is sent to that service. Providers do not receive your original saved prompts, context, templates, or any other library contents. In the case of MCP integrations, the original prompt is transmitted from the AI tool to our server for processing, so the AI provider sees both the original and the modified prompt; it does not, however, see how the transformation is performed, your context data, or any saved templates.

5. Data export and deletion controls: We currently do not offer bulk export functionality. We appreciate this feedback and will prioritize adding bulk export/import capabilities, along with a bulk delete option, for your prompt library. Individual prompt deletion is already available and is permanent; we do not use soft deletes.
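To make point 4 concrete, here is a minimal sketch of the described data flow. All function and field names are hypothetical, not Prompt Architects' actual code; the point is that only the final transformed prompt appears in the provider request, while the saved template and context data never leave the server.

```python
# Hypothetical sketch of the data flow in point 4. Names are illustrative.

def transform(saved_prompt: str, context: dict) -> str:
    """Stand-in for the server-side prompt-processing step."""
    return saved_prompt.format(**context)

def build_provider_payload(saved_prompt: str, context: dict) -> dict:
    """Build the request body sent to ChatGPT, Claude, Gemini, etc.

    Note what is *absent*: the original saved template and the raw
    context data are never included in the outgoing request.
    """
    final_prompt = transform(saved_prompt, context)
    return {"messages": [{"role": "user", "content": final_prompt}]}

payload = build_provider_payload(
    saved_prompt="Summarise this for {audience}: {text}",
    context={"audience": "executives", "text": "Q3 revenue grew 12%."},
)
print(payload["messages"][0]["content"])
```

Under an MCP integration, by contrast, the original prompt would already be present in the provider's conversation before it reaches the server, which is why the provider sees both versions in that case.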

I hope this addresses your concerns. Please don't hesitate to reach out if you have any additional questions.

Best regards,


Thanks for answering all my questions on short notice!

One last question before I decide whether to jump on board. Is my data encrypted, even from your dev team?

Founder

Verified purchaser

Yes, sensitive customer data such as auth tokens and API keys is encrypted in the database.

For operational data such as templates and history, we have strict access controls and role-based permissions. We maintain a zero-access policy: no team member can view customer data through any portal or interface, and our dev team never directly accesses production databases, so your data remains private.

Thanks, but this is the blocker for me: “We do not encrypt prompts at rest.”
Access controls and zero-access policies are helpful, but they are not the same as cryptographic protection.
Since saved prompts, templates, history, and context slots are the core assets in a prompt library, I’d need those encrypted at rest before storing proprietary or client-related prompts.
Will you add this?

I would like evidence that you do not train on users' prompts.

Hey, we are not building any LLM ourselves; we just use an existing LLM to process your input and return the final output that we show you.

So we are not training anything on user input, and we are not doing anything like that.

Hi, thanks for getting back to me.
I was thinking more along the lines of my prompts being used as input to help update your own prompt library.
