Pre-Employment Testing

Standardized assessments given to job candidates before hiring to evaluate their skills, cognitive ability, personality, or physical fitness for a role.

What Is Pre-Employment Testing?

Key Takeaways

  • Pre-employment testing uses standardized assessments to evaluate candidates before making a hiring decision.
  • 82% of companies now use some form of pre-hire assessment, up from 67% in 2019 (SHRM, 2024).
  • Cognitive ability tests have the highest predictive validity (0.54) for job performance among all assessment types (Schmidt, 2016).
  • Companies using validated assessments see a 36% reduction in first-year turnover (Aberdeen Group, 2023).
  • Tests must comply with EEOC guidelines and demonstrate job-relatedness to avoid legal challenges.

Pre-employment testing refers to any standardized assessment given to a job candidate before a hiring decision is made. These tests measure the abilities, skills, traits, or behaviors that predict how well someone will perform in the role. The category is broad: it includes cognitive ability tests, personality assessments, skills tests (typing speed, coding challenges, Excel proficiency), physical ability tests, integrity tests, emotional intelligence assessments, and job simulations.

The purpose is to add objective data to a hiring decision that would otherwise rely on resumes (which are self-reported and often inflated) and interviews (which are subject to interviewer bias). A 2016 meta-analysis by Frank Schmidt, published in the Annual Review of Psychology, found that cognitive ability tests have a predictive validity of 0.54 for job performance. That's higher than unstructured interviews (0.38), years of experience (0.18), and education level (0.10), making cognitive ability among the strongest single predictors available to employers.

SHRM's 2024 survey found that 82% of companies use some form of pre-employment assessment. The growth has been driven by tighter labor markets, higher hiring costs (average cost-per-hire is $4,129, according to SHRM), and the availability of online assessment platforms that make testing scalable.

Why pre-employment testing matters

Resumes tell you what someone claims to have done. Interviews tell you how well they present themselves. Neither reliably predicts job performance. A candidate who interviews brilliantly might struggle with the actual work. A candidate who's nervous in interviews might be your top performer. Pre-employment tests close that gap by measuring the specific abilities and traits that matter for the role. They also create a level playing field. Every candidate takes the same test under the same conditions, producing scores that can be compared objectively. This reduces the influence of interviewer bias, networking connections, and pedigree-based filtering.

Pre-employment testing vs screening

Screening (background checks, reference calls, drug tests) verifies factual information about a candidate. Testing evaluates capability and fit. Screening asks "Is what this person told us true?" Testing asks "Can this person do the job well?" Both are pre-hire activities, but they serve different purposes and use different methods. Most hiring processes include both, with testing earlier in the funnel (after application, before interview) and screening later (after conditional offer).

  • 82% of companies use some form of pre-employment assessment (SHRM, 2024)
  • 36% reduction in turnover for companies using validated assessments (Aberdeen Group, 2023)
  • $4,129 average cost-per-hire that assessments help reduce (SHRM, 2023)
  • 0.54 predictive validity of cognitive ability tests for job performance (Schmidt, 2016)

Types of Pre-Employment Tests

Each test type measures a different dimension of candidate capability. The right combination depends on the role's requirements and the competencies that predict success.

| Test Type | What It Measures | Predictive Validity | Best For | Example Platforms |
| --- | --- | --- | --- | --- |
| Cognitive Ability | Problem-solving, logical reasoning, numerical ability, verbal comprehension | 0.51-0.54 | Roles requiring learning speed and analytical thinking | Wonderlic, Criteria Cognitive Aptitude Test (CCAT) |
| Personality Assessment | Behavioral tendencies, work style, interpersonal patterns (Big Five model) | 0.22-0.31 | Culture fit, leadership potential, team-based roles | Hogan, SHL OPQ, 16PF |
| Skills Test | Job-specific technical abilities (coding, writing, Excel, typing) | 0.40-0.55 | Technical roles, administrative positions, content creation | HackerRank, TestGorilla, Vervoe |
| Situational Judgment | Decision-making in realistic work scenarios | 0.26-0.34 | Customer service, management, any role requiring judgment | SHL, Cappfinity, custom-built |
| Integrity Test | Honesty, reliability, rule-following tendencies | 0.32-0.41 | Retail, finance, roles with access to assets or sensitive data | Hogan Reliability Scale, PSI |
| Physical Ability | Strength, endurance, coordination, physical task performance | Varies by test | Manufacturing, warehousing, emergency services, trades | Employer-designed, WorkSTEPS |
| Emotional Intelligence | Self-awareness, empathy, social skills, emotion regulation | 0.20-0.30 | Leadership, client-facing, healthcare, coaching roles | EQ-i 2.0, MSCEIT |
| Job Simulation | Performance on realistic tasks that mirror actual job duties | 0.35-0.50 | Any role where task performance can be simulated | Pymetrics (now Harver), Arctic Shores |

Implementing Pre-Employment Tests in Your Hiring Process

Where you place the test in your hiring funnel, which test you choose, and how you communicate it to candidates all affect whether testing improves your outcomes or just adds friction.

Where to place tests in the hiring funnel

The most common placement is after the application screen but before the first interview. This position filters out candidates who look good on paper but lack the required abilities, saving interview time for both the hiring team and the candidate. For high-volume roles (customer service, warehouse, retail), testing can be the first step after application, reducing a pool of 500 applicants to 50 qualified candidates before any human review. For senior roles, testing typically comes after an initial recruiter screen, because executives may balk at taking a test before they've had a conversation. There's no universal right answer. Test too early and you may deter good candidates who see it as impersonal. Test too late and you've already invested interview time in candidates who would have been screened out.

Choosing the right test for each role

Start with a job analysis. What skills, abilities, and behaviors predict success in this specific role? A software engineering position might call for a coding test (skills) plus a cognitive ability test (learning speed). A sales role might use a personality assessment (extraversion, resilience) plus a situational judgment test (objection handling). A warehouse associate role might require a physical ability test plus an integrity test. Don't test for things that don't matter. If the role doesn't require advanced math, don't include a numerical reasoning section just because it's part of a standard battery. Every irrelevant test question adds friction and increases candidate dropout.

Setting cut scores and benchmarks

A cut score is the minimum test result required to advance. Setting it too high eliminates good candidates. Setting it too low makes the test meaningless. There are two approaches. The norm-referenced approach compares candidates against each other and advances the top 30 to 50%. The criterion-referenced approach sets a minimum score based on the results of current high performers in the same role. The criterion-referenced approach is more defensible legally, because it's directly tied to job performance data. Whatever method you use, document the rationale for the cut score. If the test is ever challenged, you'll need to show that the threshold is job-related and consistent with business necessity.
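The two approaches above can be sketched in a few lines of Python. This is a minimal illustration: the function names, the 40% advance rate, and the one-standard-deviation margin are assumptions for the example, not fixed standards.

```python
from statistics import mean, stdev

def norm_referenced_cut(scores, advance_fraction=0.4):
    """Norm-referenced: advance roughly the top fraction of this pool."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(len(ranked) * advance_fraction))
    return ranked[k - 1]  # lowest score that still advances

def criterion_referenced_cut(incumbent_scores, sd_margin=1.0):
    """Criterion-referenced: anchor the cut to current high performers,
    here one standard deviation below their mean score."""
    return mean(incumbent_scores) - sd_margin * stdev(incumbent_scores)

applicant_scores = [90, 80, 70, 60, 50]        # hypothetical applicant pool
high_performer_scores = [80, 85, 90, 95, 100]  # hypothetical incumbents

norm_cut = norm_referenced_cut(applicant_scores)            # → 80
crit_cut = criterion_referenced_cut(high_performer_scores)  # ≈ 82.1
```

Note that the norm-referenced cut moves with each applicant pool, while the criterion-referenced cut stays fixed until you re-study your high performers, which is part of why it's easier to defend.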

Pre-Employment Testing and Candidate Experience

Tests that are too long, irrelevant, or poorly communicated drive away qualified candidates. Balancing assessment rigor with candidate experience is a design challenge every employer faces.

Optimal test length and completion rates

Data from assessment platforms consistently shows that candidate completion rates drop significantly after 30 minutes. TestGorilla reports an 84% completion rate for assessments under 20 minutes, dropping to 62% for assessments between 30 and 45 minutes, and below 50% for anything over an hour. For most roles, 15 to 30 minutes is the sweet spot. If you need more assessment data, break it into two shorter sessions rather than one long marathon.

Communication and transparency

Candidates who understand why they're being tested are more likely to complete the assessment and view it positively. Before the test, explain what type of assessment it is, how long it will take, how results are used in the hiring decision, and that accommodations are available upon request. After the test, share results (even summary-level feedback) when possible. Candidates who receive feedback report higher satisfaction with the hiring process, even when they don't get the job. Transparency reduces the perception that testing is a gatekeeping exercise.

Mobile-friendly assessments

Over 60% of job seekers apply via mobile devices (Indeed, 2024). If your assessment doesn't work on a phone, you're losing candidates. Modern assessment platforms (TestGorilla, Criteria, Harver) offer mobile-optimized test experiences. Verify that your assessment is fully functional on iOS and Android before deploying it. Text that's too small, buttons that are too close together, or timed sections that penalize small-screen users will create adverse experiences and potentially introduce bias (candidates with access to computers have an advantage).

Test Validity and Reliability

A test is only useful if it actually predicts job performance (validity) and produces consistent results (reliability). Many commercially available tests have weak evidence for one or both.

Types of validity

Criterion validity measures how well test scores predict actual job performance (usually assessed by correlating test scores with performance ratings or turnover data). Content validity means the test covers a representative sample of the knowledge, skills, and abilities required for the job. A typing test for an administrative assistant has high content validity. A personality quiz for the same role has low content validity. Construct validity means the test measures the psychological construct it claims to measure (intelligence, conscientiousness, emotional stability). This is established through research, not through a single study. When evaluating a vendor's test, ask for criterion validity studies conducted with sample sizes of 100 or more, across multiple organizations and job types.

Red flags in assessment vendors

Be skeptical of vendors who can't provide peer-reviewed validity data, claim their test predicts performance for all job types without role-specific validation, use proprietary scoring algorithms they won't explain, don't report adverse impact statistics by demographic group, or market their tool as "AI-powered" without explaining what the AI actually does. The assessment industry includes both rigorously validated tools (SHL, Hogan, Wonderlic) and questionable products with little scientific backing. Due diligence matters. Ask for a technical manual, validation studies, and adverse impact analyses before signing a contract.
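One adverse impact statistic worth asking any vendor for is the EEOC's four-fifths (80%) rule: the selection rate for any demographic group should be at least 80% of the rate for the highest-passing group. A minimal sketch, with hypothetical pass counts:

```python
def adverse_impact_ratio(pass_a, total_a, pass_b, total_b):
    """Ratio of the lower group's selection rate to the higher group's.
    Under the EEOC four-fifths rule, a value below 0.8 is treated as
    evidence of adverse impact and warrants investigation."""
    rate_a = pass_a / total_a
    rate_b = pass_b / total_b
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical: 48 of 120 in group A pass; 30 of 100 in group B pass
ratio = adverse_impact_ratio(48, 120, 30, 100)
# 0.30 / 0.40 = 0.75 → below 0.8, flag for review
```

A ratio below 0.8 doesn't make a test automatically illegal, but it shifts the burden to the employer to show job-relatedness and business necessity.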

ROI of Pre-Employment Testing

Quantifying the return on pre-employment testing requires comparing the cost of testing against the cost of bad hires and turnover. The math usually works in testing's favor.

  • A bad hire costs 30% of the employee's first-year salary on average (U.S. Department of Labor). For a $60,000/year role, that's $18,000.
  • If pre-employment testing reduces your bad-hire rate by even 20%, the savings quickly exceed the cost of the assessment platform (typically $15 to $50 per candidate).
  • Companies using validated assessments report 36% lower turnover in the first year (Aberdeen Group, 2023).
  • The ROI multiplier increases for high-volume hiring: testing 1,000 candidates at $20 each ($20,000 total) that prevents 50 bad hires ($900,000 in avoided turnover cost) yields a 45x return.
  • Track three metrics to calculate your testing ROI: reduction in turnover (compared to pre-testing baseline), improvement in quality-of-hire scores, and time saved in interview screening.
  • 84% assessment completion rate when the test is under 20 minutes (TestGorilla, 2024)
  • $15,000+ average cost of a bad hire in the first year (CareerBuilder, 2023)

Frequently Asked Questions

Are pre-employment tests legal?

Yes, in most jurisdictions, as long as they're job-related, consistent with business necessity, and don't disproportionately screen out protected groups without justification. The EEOC's Uniform Guidelines on Employee Selection Procedures (1978) provide the US framework. Similar regulations exist in the EU (GDPR implications for personality data), the UK (Equality Act 2010), and Canada (Canadian Human Rights Act). Always validate that the test is relevant to the specific role and document the business justification for its use.

Which test type has the highest predictive validity?

Cognitive ability tests consistently score highest in meta-analyses, with a predictive validity of 0.51 to 0.54 for job performance (Schmidt, 2016). Work sample tests (0.35 to 0.54) and structured interviews (0.51) are close behind. The strongest prediction comes from combining cognitive ability with a structured interview (combined validity of 0.63). Personality tests alone have lower validity (0.22 to 0.31) but add incremental value when combined with cognitive tests.

How long should a pre-employment test be?

15 to 30 minutes for most roles. Completion rates drop sharply after 30 minutes. For technical roles requiring coding or simulation exercises, up to 45 minutes is acceptable if the candidate is informed in advance and understands the relevance. Never exceed 60 minutes for a single assessment session. If you need more data, use a staged approach: a short initial screen followed by a longer assessment for candidates who pass.

Can candidates cheat on online pre-employment tests?

They can try. Common cheating methods include looking up answers, having someone else take the test, or using multiple devices. Mitigation strategies include timed sections that are too fast for searching, randomized question order, webcam proctoring, and in-person follow-up verification for top scorers. Some platforms use AI-based proctoring that detects suspicious behavior (eye movement, multiple faces, tab switching). No system is cheat-proof, but combining timed questions with proctoring reduces the risk significantly.

Should we test all candidates or only finalists?

For high-volume roles, test all candidates early in the funnel to reduce the number reaching interview stage. This saves the most recruiter time. For low-volume or senior roles, test after an initial screen or phone interview. Senior candidates may see early-stage testing as disrespectful of their time. The decision depends on your hiring volume, the cost per test, and candidate expectations in your market.

What's the difference between pre-employment testing and psychometric testing?

Psychometric testing is a subset of pre-employment testing. Psychometrics specifically refers to standardized tests that measure psychological attributes: cognitive ability, personality traits, aptitudes, and behavioral tendencies. Pre-employment testing is broader and includes skills tests (typing, coding), physical ability tests, drug screening, and integrity tests that aren't strictly psychometric. All psychometric tests are pre-employment tests, but not all pre-employment tests are psychometric.
Written by Adithyan RK
Fact-checked by Surya N
Published on: 25 Mar 2026