Q: Hi, a lot of cloud scrapers use page credits (e.g. https://webscraper.io/), but you use record credits, which is more expensive. Can you justify or explain why you chose this model? Thanks

justin162, Aug 25, 2022

Founder Team
Ardy_BrowseAI, May 14, 2024

A: Hey Justin,

We thoroughly evaluated 3 different models a few months ago:
- Task credits
- Page credits
- Record credits

We landed on Record credits for many reasons, some of which are:
- The users we've talked to find it much easier to estimate the number of records they'll extract than the number of pages, so it's clearer what you would get with each plan.
- The number of pages you scrape can also change over time, outside of your control. If a site reduces its list items per page from 30 to 20, your page credit cost suddenly goes 50% higher for the same data (for example, 900 records now take 45 pages instead of 30). We wanted the cost to be as predictable as possible so organizations can easily budget for at least a year.
- (This one is more of a secret for now...!) We're working on adding more layers to the application, for example a data layer that lets you enter a URL to scrape and immediately see the data you need appear next to it, essentially abstracting away the scraping part so you focus only on the input/output data. In that scenario there will be no "pages" for the user; there will just be records of data.

Hope that answers your question.

Verified purchaser, posted Aug 25, 2022

Thanks a lot for your feedback. Appreciated!