Q: Queries
1. So your built-in models like Gemini Flash 2.0 and AgenticFlow o4-mini are built on top of famous LLMs from OpenAI and Google? What do you mean by OpenRouter?
2. How can I use it for several different small clients? How will I separate them?
3. Also, with respect to new AI agent innovations, does that mean new use cases will be covered by you as well? For example, handling inbound and outbound calls for a call center. Can this solution handle that?
4. Which programming languages and/or frameworks did you use to build it?
5. How do I look at a company and find AI agent use cases to get implementation work for your solution? Who are your competitors?

SeanP_AgenticFlowAI
Edited Apr 29, 2025
A: Hey VK_2018,
Happy to provide more details on those points!
1. Built-in Models & OpenRouter:
Our built-in models, like "Gemini Flash 2.0" and "GPT-4o mini", are provided by Google or OpenAI through our enterprise agreements with them. Using these directly consumes your AgenticFlow credits.
"Pixel ML OpenRouter" is our system that lets you use other popular models (like certain OpenAI or Anthropic versions) without managing your own separate API keys for them. When you choose an OpenRouter model, we handle the call through our provider accounts, and we simply pass that usage cost through to you. This is different from "Bring Your Own Key" (BYOK), where you use your API key, pay the provider directly, and AgenticFlow doesn't charge variable credits for that specific LLM call (only the fixed platform cost per step).
2. Managing Different Clients:
For separating client work, the cleanest method is using separate AgenticFlow Workspaces, though this requires a plan/LTD for each.
Within a single workspace, you can manage multiple clients by using clear naming for agents/workflows/data (like ClientA_Bot). If you invite clients, user access management is currently basic but functional. A popular approach for agencies is building a custom front-end using our API and the Next.js boilerplate (find it at https://github.com/PixelML/agenticflow-boilerplate), using AgenticFlow as the powerful backend. We also plan to add better project-level organization within workspaces in the future – feel free to vote for that on our roadmap: https://agenticflow.featurebase.app/.
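If you go the single-workspace route, the ClientA_Bot naming convention mentioned above is easy to enforce with a small helper. This is purely local bookkeeping on the agency side; AgenticFlow itself does not require any particular naming scheme, and these function names are our own.

```python
def resource_name(client: str, kind: str, label: str) -> str:
    """Build a 'ClientA_Bot'-style name: <Client>_<Kind>_<Label>."""
    parts = [client, kind, label]
    return "_".join(p.strip().replace(" ", "") for p in parts)

def belongs_to(name: str, client: str) -> bool:
    """Check whether a workspace resource belongs to a given client."""
    return name.startswith(client + "_")

# Example: filter a mixed workspace listing down to one client's resources
names = [
    resource_name("ClientA", "Agent", "Support Bot"),   # ClientA_Agent_SupportBot
    resource_name("ClientB", "Workflow", "Lead Sync"),  # ClientB_Workflow_LeadSync
]
client_a_only = [n for n in names if belongs_to(n, "ClientA")]
```

Consistent prefixes like this also make it straightforward to migrate a client out to their own workspace later.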
3. Future Innovations & Call Centers:
Yes, we aim to cover new use cases. We have a strong background in AI and infrastructure, partnering with major tech companies. Our platform's ability to connect thousands of tools via MCP is already advanced.
Regarding call centers: we can handle text-based support channels (chat, WhatsApp, email) today, but automating live voice calls isn't built in yet. Telephony integration is on our radar based on demand, so please raise it on our public roadmap.
4. Tech Stack:
Our user-facing front-end is built with Next.js/React. The backend uses a modern stack designed for scalable AI and workflow processing.
5. Finding Use Cases & Competitors:
To find clients, look for businesses needing automation in customer support (like website chatbots - our "Agent from URL" feature is great for demos), marketing (content generation, email personalization), or operations (data extraction from emails/PDFs, simple data entry). Sites like Upwork/Fiverr often list these needs.
Competitors include traditional workflow tools like Zapier/Make/N8N (we add AI agents and MCPs) and other AI agent platforms like Relevance AI/CrewAI/Mindpal (we offer a visual workflow builder and broader MCP integration). AgenticFlow bridges these by offering both structured workflows and intelligent agent capabilities.
Hope this helps!
Can you please explain the Agentic AI use cases that can be built with your tool, and what role credits play in them? Also, what is the high-level process to accomplish an Agentic AI use case?
Hi! Agent use cases: customer support chatbots (trained on your site), internal assistants (using tools like Slack/Drive via MCP), and simple task executors ("find emails with X, add the sender to a Sheet").
Credits: agents use credits per action (understanding a request, using a tool, replying). A fixed cost per step applies (3-4 credits), plus a variable cost if using built-in LLMs. BYOK avoids the variable LLM cost but not the fixed step cost.
Process: Create Agent -> Connect Tools (MCPs) -> Add Knowledge (optional) -> Instruct via Chat -> Embed (optional)
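The five steps above can be sketched as plain Python. All class and method names here are hypothetical stand-ins, not AgenticFlow's real API; in practice these steps happen in the visual builder or through the platform API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentDraft:
    """Hypothetical model of an agent as it moves through the five steps."""
    name: str
    tools: list = field(default_factory=list)      # MCP connections
    knowledge: list = field(default_factory=list)  # optional knowledge sources
    instructions: str = ""

    def connect_tool(self, mcp: str) -> None:
        self.tools.append(mcp)

    def add_knowledge(self, source: str) -> None:
        self.knowledge.append(source)

    def instruct(self, text: str) -> None:
        self.instructions = text

    def embed_snippet(self) -> str:
        # Placeholder for the embed step (e.g. a website chat widget)
        return f'<script data-agent="{self.name}"></script>'

agent = AgentDraft("ClientA_SupportBot")            # 1. Create Agent
agent.connect_tool("google-sheets-mcp")             # 2. Connect Tools (MCPs)
agent.add_knowledge("https://clienta.example")      # 3. Add Knowledge (optional)
agent.instruct("Answer support questions "
               "using the site content.")           # 4. Instruct via Chat
snippet = agent.embed_snippet()                     # 5. Embed (optional)
```

The ordering matters mostly for steps 1-2: an agent needs its tools connected before instructions that reference them will work reliably.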