BrowserAct

doctor.ajitsingh
Jan 6, 2026

Q: Specific use case

Hi, I have a multi-part question:
1. Can I train the AI agent to return N/A or an error when it is not 100% sure of a data point, rather than letting it hallucinate a value?
2. If the data is in a structured format like a table, does BrowserAct extract a summary, or replicate the tabular format?
3. What is the maximum number of URLs I can run in a single workflow? How many credits does such a top-end workflow utilise?
4. Can the AI agent autonomously follow links to find data, or do I need to provide every sub-link manually? For example, say I need the minimum eligibility criteria for an application. These criteria are grouped under 5 headings, and each heading opens as a new link (instead of the content for all 5 appearing on the same page).
In short, I'm trying to understand whether AI-agent tasks can be converted into manual workflows.
Thanks.

Founder Team
Claire_BrowserAct
Jan 7, 2026

A: Great detailed questions! Let me address each one.
Q1: Force N/A instead of hallucination?
✅ Yes, you can use prompts to improve data extraction accuracy.
⚠️ However, the AI Agent cannot guarantee 100% hallucination-free results; this is an inherent LLM limitation.
Recommendation: If you need accurate, hallucination-free data extraction, use Workflow mode instead of AI Agent.
Workflow mode:

No hallucinations
Returns empty/null if data not found
Reliable and repeatable

Q2: Table data extraction format?
✅ BrowserAct outputs data based on your extraction needs in multiple formats:

CSV (tabular format)
JSON (structured data)
XML (structured data)
Markdown (formatted text)

You choose the output format - table data can be exported as CSV to maintain structure.
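To illustrate how a table survives the trip through structured output, here is a small stdlib-only Python sketch that flattens a JSON extraction result into CSV rows. The field names (`criterion`, `requirement`) are invented for the example, not BrowserAct's actual output schema:

```python
import csv
import io
import json

def json_rows_to_csv(json_text: str) -> str:
    """Convert a JSON array of uniform objects (one per table row) to CSV text."""
    rows = json.loads(json_text)
    buf = io.StringIO()
    # Column order follows the keys of the first row.
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical extraction result: two table rows with invented column names.
extracted = '[{"criterion": "Age", "requirement": "18+"}, {"criterion": "Residency", "requirement": "State resident"}]'
print(json_rows_to_csv(extracted))
```

Because each JSON object maps one-to-one onto a table row, round-tripping through JSON and back to CSV keeps the tabular structure intact.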

Q3: Maximum URLs in single workflow?
Currently, one workflow processes one URL at a time.
Credit consumption depends on workflow complexity:

Simple extraction: ~10-20 credits
Complex multi-step: 50-200+ credits

For multiple URLs: Run workflow multiple times or use API automation.
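If you go the API-automation route, the per-URL fan-out can be sketched like this. The endpoint URL, payload fields, and auth scheme below are assumptions for illustration only, not BrowserAct's documented API:

```python
import json
import urllib.request

# Hypothetical endpoint; substitute the real one from your API dashboard.
API_URL = "https://api.example.com/v1/workflows/run"

def build_run_payload(workflow_id: str, url: str) -> dict:
    """One run request per URL, since a workflow processes a single URL at a time."""
    return {"workflow_id": workflow_id, "input": {"url": url}}

def run_workflow(token: str, payload: dict) -> dict:
    """POST one run request. Field names and Bearer auth are illustrative only."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Fan out: build one run request per target URL.
urls = ["https://example.com/page-1", "https://example.com/page-2"]
payloads = [build_run_payload("wf_eligibility", u) for u in urls]
print(len(payloads), "runs queued")
```

Each payload is then submitted with `run_workflow(token, payload)`; credits are consumed per run, so N URLs cost roughly N times a single run.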

Q4: Extract data from sub-links automatically?
✅ Yes! Use Workflow mode with Loop List node.
How it works:
Step 1: Extract sub-link URLs
Navigate to main page → Extract all 5 eligibility criteria link URLs
Step 2: Add Loop List node

Describe the target list area (the 5 links you extracted)
Set "Max items to focus" (e.g., 5 for your 5 criteria)
Enable "Auto-click 'Load More'" if needed

Step 3: Add actions inside the loop

Click Element Item - Click each criteria link
Extract Data Item - Extract eligibility data from each page

Step 4: Results automatically combined

Visual workflow:
Loop List (iterate through 5 criteria links)
→ Click Element Item (open each link)
→ Extract Data Item (get eligibility data)
→ Loop continues for all items → Export combined results
No manual input needed - Loop List automatically processes all sub-links!

For Your Use Case (5 Eligibility Criteria):
Complete workflow:
1. Navigate to application page
2. Loop List node → Define the 5 criteria links area
3. Click Element Item → Open each criteria link
4. Extract Data Item → Get eligibility requirements
5. Loop automatically continues through all 5
6. Export combined data (CSV/JSON)
Fully automated - no manual sub-link entry required!
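The control flow of the workflow above can be mimicked in plain Python to make the Loop List pattern concrete. The `extract` callable here is a stand-in for the Click Element Item + Extract Data Item pair inside the loop; nothing below is BrowserAct code:

```python
def collect_from_sublinks(link_urls, extract):
    """Mirror of the Loop List pattern: visit each sub-link, extract, combine.

    `extract` is any callable that takes a URL and returns a dict of
    extracted fields (simulating Click + Extract inside the loop body).
    """
    combined = []
    for url in link_urls:            # Loop List: iterate the extracted link area
        record = extract(url)        # Click Element Item + Extract Data Item
        record["source_url"] = url   # keep provenance per sub-link
        combined.append(record)      # results accumulate automatically
    return combined

# Stub extractor simulating the 5 eligibility-criteria pages.
links = [f"https://example.com/criteria/{i}" for i in range(1, 6)]
results = collect_from_sublinks(links, lambda u: {"criteria": f"requirement from {u}"})
print(len(results))
```

The key point the sketch shows: once the list of sub-links is defined, the loop drives every visit and the combined results come out the other end with no per-link manual input.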

Summary:

Hallucination-free: Use Workflow mode (AI Agent not 100% reliable)
Output formats: CSV, JSON, XML, Markdown (your choice)
URLs per workflow: One URL at a time; credits depend on complexity
Sub-link extraction: Use Loop List node - fully automatic, no manual input

Loop List + Click Element Item + Extract Data Item = Automated sub-link extraction!
Claire & the BrowserAct Team
