Q: Data enrichment at scale
We use a data enrichment platform to capture relevant data points at scale. Similarly, your platform offers workflow templates for researching companies (Research a Company, Research Industry, Analyze Company, Website Brand Identity, Company Research w/ Website).
Two questions:
1.) Can we run workflows at scale? For example, could we run workflow sequences researching 500 companies concurrently, using the templates listed above?
2.) How many credits would each sequence use when running all the templates referenced above? For example, how many credits would be used when researching 500 companies in a bulk lookup?
SeanP_AgenticFlowAI
Apr 26, 2025
A: Hey Sheldon,
Good questions about scaling research workflows!
Running Workflows at Scale: Yes, you can definitely run these research workflows at scale using our "Run on Table" (Batch Run) feature. You would typically:
Create a Table dataset in AgenticFlow.
Import a list of your 500 companies (e.g., from a CSV with company names or websites).
Select the desired workflow template (like "Research a Company" or a customized version).
Choose the "Run on Table" option, mapping the company name/URL from your table to the workflow's input.
AgenticFlow will then process each row in your table through the workflow. It handles the execution efficiently, running multiple instances in parallel based on system capacity, though not necessarily all 500 strictly concurrently.
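To illustrate the import step, here's a minimal sketch of preparing that CSV in Python. The column names (company_name, website) are placeholders I've chosen for the example; use whatever headers match the inputs your chosen workflow expects:

```python
# Minimal sketch (hypothetical column names): build the CSV you'd import
# as a Table dataset before choosing "Run on Table".
import csv

companies = [
    {"company_name": "Acme Corp", "website": "https://acme.example.com"},
    {"company_name": "Globex", "website": "https://globex.example.com"},
    # ...append the rest of your ~500 rows here
]

with open("companies.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["company_name", "website"])
    writer.writeheader()          # header row becomes the table's column names
    writer.writerows(companies)   # one row per company, mapped to the workflow's input
```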
Credit Usage Estimate: Credit usage is based on the number of steps executed within your workflow for each company.
Fixed Cost: On Tier 3 or 4, each step costs a fixed 3 credits. A typical research workflow (like Input -> Scrape -> LLM Analyze -> Output) might have 4-6 steps, but combining templates could easily reach 10+ steps.
Variable Cost: Additionally, any steps using AgenticFlow's built-in LLMs (like Gemini Flash or 4o-mini for analysis/summarization) consume variable credits based on token usage. Built-in web scraping might also have variable costs (though it's currently often free for limited use).
Estimate per Company: Let's conservatively estimate 5 steps per company sequence. The fixed cost would be 5 steps * 3 credits/step = 15 credits per company. This excludes variable costs for LLM usage.
Estimate for 500 Companies: The minimum fixed cost would be 500 companies * 15 credits/company = 7,500 credits. If the workflow is more complex (say 10 steps), that's 500 * (10 * 3) = 15,000 fixed credits. Remember to add variable credits if using built-in LLMs.
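If it helps to sanity-check your own numbers, here's a throwaway calculator for the fixed portion only (it deliberately ignores variable LLM/scraping credits, and the 3 credits/step assumes Tier 3/4 pricing):

```python
# Back-of-the-envelope estimator for fixed credit costs only.
def fixed_credits(num_companies: int, steps_per_run: int, credits_per_step: int = 3) -> int:
    """Fixed credits for a batch run: companies x steps x credits per step."""
    return num_companies * steps_per_run * credits_per_step

print(fixed_credits(500, 5))   # 7500 credits for a simple 5-step workflow
print(fixed_credits(500, 10))  # 15000 credits for a 10-step workflow
```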
BYOK Impact: If you use "Bring Your Own Key" (BYOK) for the LLM analysis steps (e.g., connecting your own OpenAI or Claude API key), you avoid the variable AgenticFlow credits for those specific LLM calls (you pay the provider directly). This significantly reduces your AgenticFlow credit usage, leaving mainly the fixed step costs.
Scraping Caveat: Our built-in web scraper might get blocked when running at scale across many different sites. For researching 500 companies, it's highly recommended to use a dedicated scraping service like Apify via its MCP (https://agenticflow.ai/mcp/apify) with your own Apify key (BYOK). This is more reliable and also avoids potential variable scraping costs on AgenticFlow.
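If you want to sanity-check your Apify key and actor before wiring it into the MCP, here's a minimal sketch using Apify's official Python client (pip install apify-client). The actor ID and the shape of run_input/output fields here are assumptions for illustration; check the actor's documentation for its exact input schema:

```python
# Sketch: run an Apify actor directly with your own key (the same key
# you'd BYOK into the AgenticFlow Apify MCP) to confirm it scrapes as expected.
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Assumed actor and input schema -- verify against the actor's docs.
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://acme.example.com"}]}
)

# Scraped results land in the run's default dataset; field names vary by actor.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("url"), (item.get("text") or "")[:200])
```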
In summary: Scaling is possible via Batch Run. Credits depend on steps per workflow and LLM usage. Using BYOK for LLMs and a dedicated scraper like Apify (also BYOK) is the most cost-effective and reliable way to run this at scale within your AgenticFlow credit budget.
Scrittiwolf
Love that you actually answered the question asked with no funny business. Well done!
Thanks Scrittiwolf! Really appreciate the feedback. I try my best; this whole AI agent space is new for everyone, and I just happen to have been wrestling with it daily for the past 2.5 years while building this! Honestly, every question like yours helps us figure out what we need to build next to serve you all better.