Mar 4, 2022

Q: Curious about the "find influencers" and "find reviewers" functionality, which probably share the same resource requirements.

Can you give me an example of the amount/extent of data I could collect in one go, how many credits that would consume, and any associated premium costs?

Secondly, can you describe what would happen if we wanted to run that same process repeatedly over time? For example, is it possible to avoid collecting the same data? Meaning we just want to expand our database without wasting credits and resources.

Thanks!


Hi,

We are building a library of ready-made workflows; these are templates you can use to get started in minutes.

For the find influencers / reviewers workflow, the template loads the following automations:

- A Google search (you can request 100-1000 results for a query, for example "Zoho mail review"; 0.04 premium credits per search)
- An email scraper to look for email addresses on each review page (1 automation credit per page)
- A social links scraper to look for social media profiles (1 automation credit per page)
- Email discovery, to find all the email addresses shared online for each domain (0.16 premium credits per domain search)

So as an example, for returning 100 results and all their domains, landing pages, email addresses and social profiles, it would use 202 automation credits and 16.04 premium credits.
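To make that arithmetic concrete, here is a rough sketch (plain Python, not a Hexomatic feature) of how the per-step rates quoted above add up for a 100-result run. The per-page scrapers alone account for 200 automation credits; the quoted total of 202 presumably includes a couple of credits for other steps in the workflow.

```python
# Rough cost estimate for a 100-result run, using the per-step rates
# quoted above. Illustrative only; actual totals can vary with the
# exact set of steps included in the workflow.

results = 100  # pages returned by the Google search

# Premium credits
google_search = 0.04              # one search query
email_discovery = 0.16 * results  # per-domain email search
premium_total = google_search + email_discovery

# Automation credits (1 credit per page, per scraper)
email_scraper = 1 * results
social_links_scraper = 1 * results
automation_total = email_scraper + social_links_scraper

print(f"premium credits:    {premium_total:.2f}")  # 16.04
print(f"automation credits: {automation_total}")   # 200 from the scrapers alone
```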

You can add or remove elements in the workflow to suit your needs: for example, run only the email scraper, or add website audits or phone scraping for more complete data.

You can find all our ready made workflows here: https://hexomatic.com/ready-made-workflow-library

Or you can build your own, using your own scraping recipes and our automations: https://hexomatic.com/automations

Hope this helps

Sounds great, that was helpful and informative. I can see lots of possibilities here.

What could we do about digging deeper with multiple searches over time - to add to our result pool, without duplication? Is there any practical way to avoid pulling the same results?

Happy to help. With regard to preventing duplication, we have a "remove duplicates" automation that helps clean data. We are also researching further improvements, including conditional logic.
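On the repeated-runs question, one generic approach (sketched here in plain Python, not a Hexomatic feature) is to key each exported row on a stable field such as the result URL, then filter each new run against the keys collected in earlier runs:

```python
def dedupe_runs(previous_rows, new_rows, key="url"):
    """Keep only new-run rows whose key was not collected in earlier runs."""
    seen = {row[key] for row in previous_rows}
    return [row for row in new_rows if row[key] not in seen]

# Example: the second run repeats one result and adds one new one.
run_1 = [{"url": "https://a.example", "email": "x@a.example"}]
run_2 = [{"url": "https://a.example", "email": "x@a.example"},
         {"url": "https://b.example", "email": "y@b.example"}]

fresh = dedupe_runs(run_1, run_2)
print(fresh)  # only the https://b.example row survives
```

This way each run only spends credits on processing genuinely new results, and your database grows without re-collecting the same pages.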

Hope this helps