Q: An Unexpected error occurred.
Hi, I have been trying to explore the product to see if it fits different use cases of mine.
I tried the website scraper: I cloned the workflow and tried scraping a website. It didn't work, and it showed me the message
Failed to start workflow run.
An Unexpected error occurred.
at the bottom right corner.
I experienced this in multiple other workflows.
I have noticed your prompt customer support and thought I'd give it a try. I can see potential in the product, but can you help us figure out how to fix that?
Also, can your agents create videos with an avatar, as well as faceless infographic videos that explain concepts?
Does the image editor agent also make presentations if we give it the content?
Can they run Google Ads and auto-adjust themselves based on which campaign is working, etc.?
Looking forward to upgrading to Tier 3 or 4. Need some confidence.
SeanP_AgenticFlowAI
Jun 4, 2025
A: Hey Sup9!
Thanks for trying out AgenticFlow and for your patience. So sorry you're hitting that "Unexpected error" – that's definitely not what we want you to see!
1. Website Scraper Workflow Error:
I just quickly tested the standard "Web Scraping" node and a few website scraping templates, and they seem to be working on my end.
The error you're seeing often points to a specific configuration issue within your cloned workflow (like an incorrect URL, an issue with how a variable is being passed, or a problem with a subsequent node trying to process the scraped data incorrectly).
Action Needed: Could you please email us at support@agenticflow.ai with the Workflow ID (copy the URL from your browser when you have that workflow open)? We can then take a direct look at your setup and help pinpoint the configuration error.
2. Agents Creating Avatar/Faceless Infographic Videos:
Avatar Videos: Not natively with an AI-generated avatar speaking. However, an agent could:
- Generate a script (LLM node).
- Generate a voiceover (text_to_speech node).
- Assemble this with pre-existing avatar footage or images using our render_video node if you provide the visual assets.
For true AI avatar generation + lipsync, you'd currently need to use an external service (like HeyGen, D-ID) via its API using our API Call node.
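To make that concrete, here's a minimal Python sketch of the kind of HTTP request the API Call node would fire at an external avatar/lipsync service. The endpoint, payload fields, and auth header are placeholders (not HeyGen's or D-ID's real contract), so treat it as an illustration of the pattern, not a drop-in integration.
```python
# Hypothetical sketch of the request an API Call node would make to an external
# avatar/lipsync service. Endpoint, fields, and auth header are placeholders --
# check the provider's own API docs for the real contract.
import os
import requests

AVATAR_API_URL = "https://api.example-avatar-service.com/v1/videos"  # placeholder URL

payload = {
    "script": "Welcome to our product tour...",        # text produced by the LLM node
    "voice_url": "https://example.com/voiceover.mp3",  # output of text_to_speech
    "avatar_id": "presenter_01",                       # provider-specific avatar asset ID
}

resp = requests.post(
    AVATAR_API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['AVATAR_API_KEY']}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # most services return a job ID or a video URL to poll
```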
3. Faceless Infographic Videos: Yes, this is more achievable. The agent:
- Generates the script/key points (LLM node).
- Generates a voiceover (text_to_speech).
- Generates a series of simple graphics/icons/text slides (generate_image node for each "scene" or concept).
- Assembles these into a video using render_video.
Relevant Template to adapt: "VIDEO - STORY THROUGH EMOJI" (https://agenticflow.ai/app/explore/workflow/40b8c32a-cdd4-45fd-b369-5864915f2ac4) shows the assembly part.
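If it helps to see the flow end to end, here's a rough Python sketch of that pipeline. The functions are hypothetical stand-ins for the AgenticFlow nodes named above; in the product you wire these up as workflow steps in the builder rather than write code.
```python
# Rough sketch of the faceless-video pipeline, with stub functions standing in
# for the AgenticFlow nodes (LLM, text_to_speech, generate_image, render_video).
# The stubs are hypothetical, not a real SDK.
from dataclasses import dataclass

@dataclass
class Script:
    text: str
    key_points: list[str]

def generate_script(topic: str) -> Script:               # stands in for an LLM node
    return Script(text=f"A short explainer about {topic}.",
                  key_points=[f"{topic}: key point {i}" for i in range(1, 4)])

def text_to_speech(text: str) -> str:                     # stands in for text_to_speech
    return "https://example.com/voiceover.mp3"            # placeholder audio URL

def generate_image(prompt: str) -> str:                   # stands in for generate_image
    return f"https://example.com/scene_{abs(hash(prompt)) % 1000}.png"  # placeholder

def render_video(images: list[str], audio: str) -> str:   # stands in for render_video
    return "https://example.com/final_video.mp4"          # placeholder video URL

def build_faceless_video(topic: str) -> str:
    script = generate_script(topic)                        # 1. script / key points
    voiceover = text_to_speech(script.text)                # 2. voiceover
    scenes = [generate_image(f"Flat infographic slide: {p}") for p in script.key_points]
    return render_video(images=scenes, audio=voiceover)    # 3. assemble the video

print(build_faceless_video("compound interest"))
```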
4. Image Editor Agent Making Presentations:
Not directly in the sense of outputting a .PPTX or Google Slides file.
An agent can:
- Take your content.
- Use an LLM to structure it for a presentation (e.g., "Create a 5-slide outline for this topic with a title and 3 bullet points per slide").
- Generate individual images or graphics for each "slide" concept (generate_image node).
- Output the structured text and image URLs for you to then manually assemble in PowerPoint/Slides/Canva.
- Direct creation into presentation software would require specific MCPs (e.g., Google Slides MCP: https://agenticflow.ai/mcp/google_slides can create new docs and append text/images).
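As an illustration of the "structure it for a presentation" step, here's a small sketch that asks an LLM for a 5-slide outline as JSON and prints it for manual assembly (or for feeding into generate_image steps). It uses the OpenAI Python client purely as a stand-in for an LLM node; the model name and JSON shape are assumptions.
```python
# Illustrative sketch: ask an LLM for a 5-slide outline as structured JSON,
# then print it for manual assembly in PowerPoint/Slides/Canva.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Create a 5-slide outline for 'Intro to Retrieval-Augmented Generation'. "
    "Return a JSON object with a 'slides' list; each slide has a 'title' and "
    "exactly 3 'bullets'."
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; any capable BYOK model works
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},
)

outline = json.loads(resp.choices[0].message.content)
for i, slide in enumerate(outline["slides"], start=1):
    print(f"Slide {i}: {slide['title']}")
    for bullet in slide["bullets"]:
        print(f"  - {bullet}")
```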
5. Agents Running & Auto-Adjusting Google Ads:
Running Ads (Partially): An agent can use the Google Ads MCP (https://agenticflow.ai/mcp/google_ads) or API Call node to create campaigns, ad groups, or fetch performance data.
Auto-Adjusting (Advanced - Needs Multi-Agent): For an agent to autonomously analyze performance and then make decisions like "if CTR < X, pause ad set Y" or "if ROAS > Z, increase budget for campaign A," this requires more complex logic and ideally our Multi-Agent System (Tier 3/4 + Add-on). A single agent workflow could be built with many conditional steps, but a multi-agent approach (e.g., a Data Analyst Agent feeding insights to an Ad Operations Agent) would be more robust. This is an advanced use case.
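To show the kind of decision logic involved, here's a tiny sketch of threshold rules an "Ad Operations" agent could apply to metrics pulled via the Google Ads MCP. The thresholds, metric fields, and action names are purely illustrative, not a prescribed strategy.
```python
# Minimal sketch of rule logic applied to campaign metrics fetched elsewhere.
# Thresholds, metric fields, and action names are illustrative only.

CTR_FLOOR = 0.01    # example: flag anything under 1% CTR
ROAS_TARGET = 3.0   # example: scale budget when ROAS exceeds 3x

def decide_actions(campaigns: list[dict]) -> list[tuple[str, str]]:
    actions = []
    for c in campaigns:
        if c["ctr"] < CTR_FLOOR:
            actions.append((c["id"], "pause"))
        elif c["roas"] > ROAS_TARGET:
            actions.append((c["id"], "increase_budget_10pct"))
    return actions

sample = [
    {"id": "cmp_A", "ctr": 0.004, "roas": 1.2},
    {"id": "cmp_B", "ctr": 0.021, "roas": 4.5},
]
print(decide_actions(sample))  # [('cmp_A', 'pause'), ('cmp_B', 'increase_budget_10pct')]
```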
I'm confident we can get your web scraping issue sorted with a closer look. Please send that workflow ID to support! Tier 3 or 4 would definitely give you more runway for these kinds of advanced AI content and automation tasks.
Best,
Sean
Q: Can it scrape a URL or two for its knowledge base before performing each task? I ask because storage seems low.
Just trying to figure out if there are ways to increase the knowledge base without filling up in-app storage.
SeanP_AgenticFlowAI
Jun 4, 2025
A: Hey Jsamplesjr,
That's a really smart question about managing knowledge and storage, and you've hit on a key concept!
1. Understanding AgenticFlow Knowledge Storage (It's Not Just File Size):
You're right, the storage limits (e.g., 100MB on Tier 1/2, up to 2GB on Tier 4) might seem modest if you're thinking purely in terms of raw PDF or DOCX file sizes.
However, our "Knowledge Storage" refers to the space taken up by the vectorized embeddings of your content. When you upload a document or provide a URL, we process the text, break it into meaningful chunks, and then convert those chunks into these special numerical representations (embeddings) that the AI uses for fast, semantic searching (this is the RAG part).
A single embedding for a text chunk is quite small (around 6KB for 1536 dimensions), so 1GB of our "Knowledge Storage" can hold a massive amount of textual information: tens of thousands, or even hundreds of thousands, of text chunks. That 2GB on Tier 4 really is a lot of actual, usable knowledge for your agents.
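For anyone who wants to sanity-check that math, here's the back-of-the-envelope version (it assumes float32 embeddings and ignores the chunk text/metadata stored alongside them):
```python
# Back-of-the-envelope storage math (float32 embeddings assumed; chunk text and
# metadata overhead not counted).
dims = 1536
bytes_per_embedding = dims * 4                  # 6,144 bytes, i.e. ~6 KB per chunk
chunks_per_gb = (1024 ** 3) // bytes_per_embedding
print(chunks_per_gb)        # ~174,000 chunks in 1 GB
print(2 * chunks_per_gb)    # ~350,000 chunks in the 2 GB Tier 4 allowance
```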
2. Dynamically Scraping URLs Before Each Task (Your Excellent Idea):
Yes, you can absolutely design your AgenticFlow agents or workflows to scrape a URL (or a couple of URLs) for fresh context before performing each task, rather than relying solely on pre-loaded, static knowledge. This is a great way to work with dynamic information or to augment a smaller persistent knowledge base.
Here's how:
Workflow/Agent Step 1: Web Scraping:
When a task starts (e.g., user asks the agent a question), the first step can be to use our Web Scraping node or a more robust MCP like Firecrawl (https://agenticflow.ai/mcp/firecrawl) or Apify (https://agenticflow.ai/mcp/apify) to fetch live content from the specific URL(s) relevant to that task.
Workflow/Agent Step 2: AI Processing:
The scraped text from these URLs is then passed as dynamic, just-in-time context to a subsequent LLM node along with the user's original query or the main task input.
The LLM uses this freshly scraped information (plus any information it retrieves from your persistent vectorized Knowledge Base, if you've also configured one) to generate its response or complete the task.
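Here's what that two-step pattern looks like outside AgenticFlow, as a hedged Python sketch: fetch the page, strip it to text, and pass it to an LLM as context. Inside AgenticFlow the same thing is a Web Scraping (or Firecrawl/Apify MCP) step feeding an LLM node; the OpenAI client below is just a stand-in for that node.
```python
# Sketch of the just-in-time pattern: fetch a page, strip it to text, and hand
# it to an LLM as context alongside the user's question.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

def fetch_page_text(url: str, max_chars: int = 8000) -> str:
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)
    return text[:max_chars]  # keep the prompt within context limits

def answer_with_live_context(question: str, url: str) -> str:
    context = fetch_page_text(url)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; any BYOK model works
        messages=[
            {"role": "system", "content": "Answer using only the provided page content."},
            {"role": "user", "content": f"Page content:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer_with_live_context("What does this page say about pricing?",
                               "https://example.com/pricing"))
```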
Advantages of This "Just-in-Time" Scraping:
- Always Fresh Info: The agent uses the most up-to-date content from the web for that specific task.
- Optimizes Persistent Storage: You reserve your persistent Knowledge Storage (the 100MB-2GB) for core, foundational, or less volatile information, while highly dynamic info is fetched on demand.
- Targeted Knowledge: You scrape only the pages most relevant to the immediate task, providing highly focused context to the LLM.
Considerations:
- Scraping Time: Each live scrape adds a little to the task execution time.
- Reliability: Success depends on the target site's accessibility and structure (robust scrapers like Apify/Firecrawl help here).
- Credit Usage: Each web scraping step and LLM processing step will consume AgenticFlow credits.
This dynamic scraping approach is a very powerful way to keep your agents informed with the latest data without necessarily filling up all your persistent vectorized storage with content that changes daily. You're thinking exactly right!
— Sean
Q: YouTube automation tasks?
Can this tool look at a Google Sheet of listed keywords, take one keyword, and then mark it as used? Then go to YouTube, search for videos on that keyword, and run an analysis to find gap topics? Then get titles and outlines for those topics, write scripts, and send the scripts to tools that create the videos? I'm just trying to build the entire flow for an automated AI YouTube channel. What tier would be needed for 2 videos per day? I'm looking at at least Tier 2, but I don't understand credits yet.
SeanP_AgenticFlowAI
Edited Jun 4, 2025
A: Hey Jsamplesjr,
Yes, that YouTube automation flow is largely achievable with AgenticFlow!
- Google Sheet Keyword Fetch & Update: Use Google Sheets MCP (https://agenticflow.ai/mcp/google_sheets).
- YouTube Search: Use YouTube Data API MCP (https://agenticflow.ai/mcp/youtube_data_api).
- Gap Analysis, Titles, Outlines, Scripts: Use LLM nodes (built-in or your BYOK).
- Send to Video Tools: Use API Call node (if tool has API) or Email/Drive MCPs for handoff.
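If you're curious what the first two steps look like in plain code, here's a sketch using public Python clients (gspread and google-api-python-client) as stand-ins for the Google Sheets and YouTube Data API MCPs. The sheet layout (keyword in column A, a "used" flag in column B, no header row) and the credentials setup are assumptions.
```python
# Sketch of keyword pickup + YouTube search. Assumes a gspread service-account
# credentials file and a YouTube Data API key; sheet layout is an assumption.
import gspread
from googleapiclient.discovery import build

def pick_next_keyword(sheet_title: str) -> str:
    gc = gspread.service_account()              # reads the default credentials file
    ws = gc.open(sheet_title).sheet1
    for row_idx, row in enumerate(ws.get_all_values(), start=1):
        if len(row) < 2 or row[1].strip().lower() != "used":
            ws.update_cell(row_idx, 2, "used")   # mark column B as used
            return row[0]
    raise RuntimeError("No unused keywords left")

def search_youtube(keyword: str, api_key: str, max_results: int = 10) -> list[str]:
    yt = build("youtube", "v3", developerKey=api_key)
    resp = yt.search().list(q=keyword, part="snippet",
                            type="video", maxResults=max_results).execute()
    return [item["snippet"]["title"] for item in resp["items"]]

keyword = pick_next_keyword("Channel Keywords")            # placeholder sheet title
print(search_youtube(keyword, api_key="YOUR_YT_API_KEY"))  # titles feed the gap analysis
```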
Tier for 2 Videos/Day:
- A rough estimate is ~120 to ~220 AgenticFlow credits per video if using our built-in LLMs.
- Tier 2 (30,000 credits/month) could work, but using BYOK for your LLMs is highly recommended. This drops your AgenticFlow credit usage to mainly the fixed step costs (~40 credits/video), making Tier 2 very comfortable.
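Rough math behind that recommendation, using the estimates above (2 videos/day over a 30-day month):
```python
# Quick arithmetic behind the tier recommendation; figures are rough estimates,
# not exact billing.
videos_per_month = 2 * 30                      # 2 videos/day for a 30-day month
built_in_low, built_in_high = 120 * videos_per_month, 220 * videos_per_month
byok_fixed_steps = 40 * videos_per_month
print(built_in_low, built_in_high)   # 7200 13200  (built-in LLMs, within Tier 2's 30,000)
print(byok_fixed_steps)              # 2400        (BYOK: only fixed step costs)
```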
Credits Briefly:
- Fixed cost per step (e.g., 4 credits on T2).
- Additional variable cost only if using AgenticFlow's built-in LLMs.
- BYOK for LLMs = you pay LLM provider directly, only fixed step cost to us.
(More: https://docs.agenticflow.ai/get-started/faqs#what-is-a-credit)
Start with Tier 2 + BYOK. It's a powerful setup!
— Sean
Thanks for that. Now, based on my digging, would I need the multi-agent add-on, or can I complete the A-to-Z with what the regular tiers include? I was actually considering Tier 3, in case I need it now or my flow(s) get more complex as time goes on. I just don't fully understand what all of that means yet. Lol
Q: Create Swipepages
Can an AgenticFlow agent or workflow create landing page copy and then have a landing page platform like Swipepages build the landing page? If so, how?
If not, do you know of any landing page builders that will allow this? How can AgenticFlow communicate with the page builder to do this? Thanks
SeanP_AgenticFlowAI
Jun 4, 2025
A: Hey cftrader01,
Yes, AgenticFlow can definitely help with creating landing page copy and then pushing it to a platform like Swipepages (if Swipepages has an API).
1. Generate Landing Page Copy:
Use an LLM node in an AgenticFlow workflow or instruct an Agent.
Provide your topic, target audience, offer, etc., and prompt the AI to write compelling landing page copy (headlines, body text, CTAs).
2. Push Copy to Swipepages (or other builders):
Check Swipepages API: The key is whether Swipepages has an API that allows you to create or update landing page content programmatically.
- If YES: Use AgenticFlow's API Call node. Your workflow would send the AI-generated copy to the Swipepages API endpoint for creating/updating a page.
- If NO direct API: Check whether Swipepages integrates with Zapier/Make/Pabbly. AgenticFlow could send the copy to a webhook in one of those tools, which then updates Swipepages.
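Here's a hypothetical sketch of that handoff: POST the generated copy as JSON, either to the builder's own endpoint or to a Zapier/Make webhook that maps the fields onto the page. Both the URL and the payload shape are placeholders, since Swipepages' actual API (if it exists) defines the real contract.
```python
# Hypothetical handoff sketch: POST AI-generated landing page copy to a webhook
# or builder endpoint. URL and payload fields are placeholders.
import requests

landing_copy = {
    "headline": "Launch faster with AI-built pages",
    "subheadline": "From brief to live page in minutes.",
    "body": "Your AI-generated body copy goes here...",
    "cta": "Start free trial",
}

WEBHOOK_URL = "https://hooks.zapier.com/hooks/catch/123456/abcdef/"  # placeholder

resp = requests.post(WEBHOOK_URL, json=landing_copy, timeout=30)
resp.raise_for_status()
print(resp.status_code)  # the Zap/scenario then maps these fields onto the page
```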
3. Other Landing Page Builders: Many modern builders (e.g., Webflow via MCP: https://agenticflow.ai/mcp/webflow, or others with APIs like Unbounce, Leadpages, Instapage) can be integrated similarly using their APIs via our API Call node or a specific MCP if available.
4. How AgenticFlow Communicates:
It communicates with page builders via their API. AgenticFlow doesn't "log in" and "drag-and-drop" elements. It sends structured data (the copy) to an API endpoint that the page builder provides for content creation/updates.
So, first step: check if Swipepages (or your preferred builder) has a developer API for content manipulation. If so, you're good to go with AgenticFlow!
— Sean
Q: Agents and Template Library
Hi, I'm currently interested in purchasing Tier 3. 1) I'd like to know if you will be adding more agents and more workflows.
2) Also, is there a way to use AgenticFlow with clients, to sell them agents, services, or workflows made in AF?
Thanks,
Mario
SeanP_AgenticFlowAI
Jun 4, 2025
A: Hey Mario!
Great questions, and thanks for considering Tier 3!
1. More Agents & Workflow Templates:
Yes, absolutely! We are continuously building out our library of pre-built Agent and Workflow templates. Our goal is to cover a wide range of common use cases to help you get started faster.
Community-Driven: We also take inspiration from our community roadmap (https://agenticflow.featurebase.app/) for new templates.
AI Co-Pilot (Future): We're also developing an "In-house AI Agent Co-pilot." The vision for this is that eventually, you'll be able to describe the agent or workflow you need in a simple prompt, and the co-pilot will help you build it, effectively creating unlimited customized templates on the fly!
2. Using AgenticFlow with Clients (Selling Agents/Services/Workflows):
Yes, 100%! This is a core use case we encourage.
- You can build AI agents, automated workflows, or specific AI-powered services using AgenticFlow in your Tier 3 account.
- You then offer these solutions to your clients and bill them directly however you see fit (e.g., setup fee, monthly retainer, per-use).
How clients use it:
- Embed Agents: You can embed an agent (like a chatbot) directly on your client's website using our script. They don't need an AgenticFlow account.
- Share Workflow Links: You can share a runnable link to a workflow for your client to use (e.g., a content generation workflow they can trigger).
- API Integration: For more integrated solutions, you can build a custom frontend/portal for your client (using our Next.js boilerplate: https://github.com/PixelML/agenticflow-boilerplate) that calls your AgenticFlow agents/workflows via API.
So, Tier 3 is a great choice for building out these client solutions, especially with the higher credit limits and user seats it offers. And yes, expect many more templates and an even easier way to create custom ones soon!
Cheers,
Sean