Q: How does the platform handle the false-positive rate in AI-generated technical assessments?
Also, how do you calibrate the difficulty of assessments when distinguishing between junior and senior roles? Lastly, regarding the 15,000+ talent network, what is the primary source of these candidates (existing job boards, direct applications, or proprietary partnerships), and how frequently is that database refreshed?
Sayem_AirworkAI
May 10, 2026

A: Hey 👋 good technical questions, taking them in order.
1. We don't publish a specific false-positive rate. The system isn't a single AI-scores-everything model: AI scores how a candidate performs on the auto-generated assessment, while resume fit, screening question responses, and your custom assignment work are filterable and reviewed by you, not auto-scored. You're never trusting one signal.
2. Assessments are generated using the role spec you provide (title, seniority, key skills, JD). The AI calibrates difficulty against the seniority you set. You can also create custom assessments and set the difficulty level before sending. The control stays with you.
3. Mix of three: direct sign-ups on airwork.ai, candidates who applied to client jobs and opted into the network, and outreach in our target geographies (Bangladesh, India, Pakistan, Nepal, plus 26 others). Candidates manage their own profiles, so freshness is continuous rather than scheduled.
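For reference on (1), the false-positive rate the question asks about is conventionally defined as FP / (FP + TN): of all candidates who truly should not pass, the fraction the screen wrongly passes. A minimal generic sketch of that definition; this is a textbook illustration, not Airwork's internal scoring logic:

```python
def false_positive_rate(predictions, labels):
    """Generic FPR: among truly negative cases (label False),
    the fraction the screen flagged as positive (prediction True)."""
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    tn = sum(1 for p, y in zip(predictions, labels) if not p and not y)
    return fp / (fp + tn) if (fp + tn) else 0.0

# Hypothetical example: 4 candidates who should not pass, 1 wrongly passed
preds = [True, False, False, False]
truth = [False, False, False, False]
print(false_positive_rate(preds, truth))  # → 0.25
```

Because one metric like this can mislead on its own, combining it with human-reviewed signals (as described in point 1) is the usual mitigation.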
If you want the actual data on (1) and (3) for evaluation, email [email protected] with a quick note on what you're optimizing for and I can pull it for you.
Hope this helps! 🙌