BrowserAct

dokgu

Verified purchaser

Deals bought: 57 · Member since: Aug 2011
3 stars
Oct 23, 2025

I don't know how to feel about this tool

I mean, it does what it says it does. I tested it and it works great; it's a little slow on the scraping side of things, but I don't think that's a big issue for me.

Even the API works: I linked it to my n8n workflow and was able to trigger the BrowserAct workflow from there. I would prefer to be able to schedule the workflow inside BrowserAct itself and just send the output to an n8n webhook, but that's probably a future iteration. For now, my n8n workflow has to trigger BrowserAct and fetch the data when it completes.
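For anyone wanting to wire this up, the trigger-and-poll pattern looks roughly like the Python sketch below. The endpoint paths, payload fields, and run states are placeholders I'm assuming for illustration, not BrowserAct's actual API:

```python
import time

import requests

API_BASE = "https://api.browseract.example"  # placeholder base URL, not the real API
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def run_workflow_and_wait(workflow_id: str, poll_seconds: int = 10, timeout: int = 600) -> dict:
    """Trigger a workflow run, then poll until it completes and return its JSON output."""
    # Kick off the run (hypothetical endpoint and response shape).
    resp = requests.post(f"{API_BASE}/workflows/{workflow_id}/runs", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    run_id = resp.json()["run_id"]

    # No completion webhook yet, so poll for status instead.
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(f"{API_BASE}/runs/{run_id}", headers=HEADERS, timeout=30).json()
        if status["state"] == "completed":
            return status["output"]  # the JSON produced by the workflow's Output step
        if status["state"] == "failed":
            raise RuntimeError(f"Run {run_id} failed: {status.get('error')}")
        time.sleep(poll_seconds)
    raise TimeoutError(f"Run {run_id} did not finish within {timeout}s")
```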

I'm still on the fence about whether to keep this deal or initiate a refund, and the only reason is the credits. I already self-host both n8n and ChangeDetection on my home server; ChangeDetection is basically what I use in place of BrowserAct. With that setup I can run as many tasks as I need, as long as my home server can handle it, without worrying about credits.

But the one thing I don't like about my current setup is that ChangeDetection has no AI to figure out what data to extract. I have to manually select the HTML elements or provide XPath selectors, and when the page structure changes I have to update my selectors too; it's very brittle. The one thing I really liked about BrowserAct is that the AI takes care of determining what data to extract. I just tell it what I need from the page and it does it for me without me having to specify any selectors.
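To show what I mean by brittle, here's a toy example of the selector approach (plain Python with lxml, not ChangeDetection itself, but the failure mode is the same):

```python
from lxml import html

old_page = html.fromstring('<div class="product"><span class="price">$19.99</span></div>')

# Hand-written XPath: tied to exact class names and nesting.
selector = '//div[@class="product"]/span[@class="price"]/text()'
print(old_page.xpath(selector))  # ['$19.99']

# After a site redesign renames the class, the same selector
# silently returns nothing and the monitor breaks.
new_page = html.fromstring('<div class="product"><span class="price-current">$19.99</span></div>')
print(new_page.xpath(selector))  # []
```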

I created a very simple workflow in BrowserAct (3 steps only):
- Visit a URL
- Extract Data
- Output Data (as JSON)

A single run of this workflow uses only 15 credits (I got Tier 3, which comes with 90,000 credits every month). That sounds very cheap. However, I need to run this workflow every 3 minutes, around the clock. Do the math and it comes to about 216,000 credits every month, so even a simple workflow will run my credits dry just because I need to run it so frequently.
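Spelling out that math (15 credits per run, 30-day month):

```python
credits_per_run = 15
runs_per_day = (60 // 3) * 24               # one run every 3 minutes -> 480 runs/day
credits_per_month = runs_per_day * 30 * credits_per_run
print(credits_per_month)                    # 216,000 -- against a 90,000-credit Tier 3 allowance
```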

The reason I'm still on the fence is that maybe I could use BrowserAct only for the scraping jobs that don't need to run very frequently, and keep other tools for the high-frequency ones. I really like being able to just tell the AI what to extract and be done with it. Maybe I could even build a tool like BrowserAct for myself, since I'm a developer too; it doesn't seem that complicated.

Founder Team
Claire_BrowserAct

Oct 23, 2025

Thank you for the detailed technical review! Your perspective as a developer is incredibly valuable.
You've Identified Our Core Value:
Exactly right—AI-powered extraction that eliminates brittle selectors and adapts to website changes automatically. No more XPath headaches when sites restructure!
Regarding Your High-Frequency Use Case (216K credits/month):
Your math is correct. Here are practical solutions:
1. Hybrid Architecture (Recommended)
✅ BrowserAct: complex/protected sites and AI-driven extraction
✅ ChangeDetection: high-frequency monitoring of stable pages
✅ n8n: orchestrates the conditional triggering

Strategy: ChangeDetection (detects a change) → n8n (triggers only when needed) → BrowserAct (AI extraction)

This dramatically reduces credit consumption: BrowserAct runs only when a change is detected.
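As a sketch of how small that glue can be, here is a webhook handler in Python/Flask. The notification payload and the BrowserAct trigger endpoint are placeholders for illustration, not our actual API:

```python
from flask import Flask, request
import requests

app = Flask(__name__)

# Placeholder trigger endpoint -- substitute the real BrowserAct API call.
BROWSERACT_TRIGGER_URL = "https://api.browseract.example/workflows/WORKFLOW_ID/runs"
API_KEY = "YOUR_API_KEY"

@app.route("/changedetection-hook", methods=["POST"])
def on_change_detected():
    """Receives ChangeDetection's change notification and triggers one AI extraction run.

    Credits are spent only when the page actually changed,
    instead of on a fixed every-3-minutes schedule.
    """
    event = request.get_json(silent=True) or {}  # payload shape depends on your notification setup
    requests.post(
        BROWSERACT_TRIGGER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"source_url": event.get("watch_url")},
        timeout=30,
    )
    return {"status": "triggered"}, 200
```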
2. Optimize Frequency (at 15 credits per run)
- Every 10 minutes → ~65K credits/month (fits Tier 3)
- Every 5 minutes → ~130K credits/month (needs Tier 4)
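The same arithmetic as your 3-minute calculation, parameterized by the run interval:

```python
def credits_per_month(interval_minutes: int, credits_per_run: int = 15, days: int = 30) -> int:
    runs_per_day = (60 // interval_minutes) * 24
    return runs_per_day * days * credits_per_run

for interval in (3, 5, 10):
    print(f"every {interval} min -> {credits_per_month(interval):,} credits/month")
# every 3 min -> 216,000 | every 5 min -> 129,600 | every 10 min -> 64,800
```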
3. Get Technical Support
Contact us at support@browseract.com or join our Discord (https://discord.com/invite/UpnCKd7GaU) for:
- Workflow optimization guidance
- Custom solutions for your specific use case
- Architecture recommendations
- Technical deep-dive assistance
We'd love to help you find the most efficient approach for your needs!
Regarding "Building It Yourself":
The scraper part is straightforward, but here's the critical challenge most small teams can't solve:
⚠️ Bot Detection Evasion
We have a dedicated kernel team that customizes and maintains a hardened Chromium core to bypass advanced bot detection (Cloudflare, DataDome, PerimeterX, etc.). This includes:
🔧 Custom Chromium modifications to eliminate automation fingerprints
🔧 Continuous updates countering new detection techniques (weekly)
🔧 Proprietary anti-detection layers at browser engine level
🔧 24/7 monitoring of evolving detection algorithms
Why this is hard:
- Detection systems evolve constantly
- Requires deep Chromium internals expertise
- Needs dedicated team for continuous adaptation
- Standard Puppeteer/Playwright setups get blocked immediately
Building the scraper is easy; defeating enterprise-grade bot detection requires continuous specialized investment.
Our Recommendation:
Option 1 (Best Value): Hybrid approach—BrowserAct for complex/protected sites + ChangeDetection for high-frequency monitoring
Option 2: Reduce frequency to 10 minutes (fits Tier 3)
Option 3: Contact us for technical support and optimization guidance
Your high-frequency use case (every 3 minutes) is valuable feedback. Remember: you have 60 days to test the hybrid approach risk-free!
Thanks for making our product better! 💪
Claire & the BrowserAct Team
