An interview method that evaluates candidates through targeted questions designed to assess specific skills, behaviors, and competencies required for the role.
A competency-based interview (sometimes called a criterion-based interview) is a structured interview format where every question is designed to test a specific competency needed for the role. Instead of asking vague questions like "Tell me about yourself," the interviewer asks the candidate to describe a real situation where they demonstrated a particular skill or behavior. The candidate's response is then scored against a predefined rating scale.

This approach is grounded in one simple principle: past behavior is the best predictor of future behavior. If a job requires conflict resolution skills, the interviewer asks the candidate to walk through a time they actually resolved a conflict at work. The answer reveals how the person thinks, acts, and performs under real conditions.

Competency-based interviews differ from general behavioral interviews in one key way. Every question maps directly to a competency framework that the organization has defined for the role. There's no improvisation. The interviewer doesn't make up questions on the spot. The scoring criteria are decided before the first candidate walks in.
In an unstructured interview, the interviewer asks whatever comes to mind. Questions change from candidate to candidate. There's no scoring rubric. Research shows that unstructured interviews have a predictive validity of about 0.20 (on a 0-1 scale), meaning they explain only a small fraction of the variation in later job performance. Competency-based interviews score around 0.51 for predictive validity (Schmidt & Hunter, 1998).

Behavioral interviews and competency-based interviews overlap, but they aren't identical. All competency-based questions are behavioral, but not all behavioral questions map to a formal competency framework. A behavioral interview might ask "Tell me about a time you failed" without connecting the question to a specific competency. A competency-based interview would tie that question to "resilience" or "adaptability" and score the response against defined performance indicators.
Three reasons. First, fairness. Every candidate gets the same questions and is scored using the same rubric. This reduces interviewer bias and makes it easier to compare candidates objectively. Second, legal defensibility. If a hiring decision is challenged, the company can show that it used job-relevant criteria applied consistently. Third, better hires. Aberdeen Group's 2023 research found that organizations using competency-based interviews are 3.5 times more likely to identify top performers during the hiring process compared to those using unstructured methods.
The interview is only as good as the competencies it measures. Before writing a single question, the hiring team needs to define exactly which competencies matter for the role and what "good" looks like at each level.
Start with the job description and job analysis. Pull out the 6 to 8 most critical competencies for success in the role. Don't try to assess everything in one interview. A typical 45-60 minute competency-based interview can realistically cover 4 to 6 competencies with enough depth to make reliable judgments. Common competencies include communication, problem-solving, teamwork, leadership, adaptability, decision-making, planning, and customer orientation. But these need to be contextualized. "Communication" means something different for a sales manager than it does for a data analyst.
For each competency, define 3 to 4 behavioral indicators that describe what the competency looks like in practice. For example, if the competency is "decision-making," behavioral indicators might include: gathers relevant data before reaching conclusions, considers multiple options and their trade-offs, takes ownership of decisions even under uncertainty, and explains the rationale behind choices to stakeholders. These indicators become the scoring criteria. Interviewers rate each candidate's response against them.
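As a minimal sketch (not any standard HR tool; the competency name and indicator phrasings simply echo the example above), the framework can be kept as plain data so every interviewer checks responses against the same criteria:

```python
# Hypothetical competency framework: each competency maps to the
# behavioral indicators interviewers look for in a response.
COMPETENCIES = {
    "decision-making": [
        "gathers relevant data before reaching conclusions",
        "considers multiple options and their trade-offs",
        "takes ownership of decisions under uncertainty",
        "explains the rationale behind choices to stakeholders",
    ],
}

def indicators_covered(competency: str, observed: set[str]) -> float:
    """Return the fraction of a competency's indicators seen in a response."""
    indicators = COMPETENCIES[competency]
    return sum(1 for i in indicators if i in observed) / len(indicators)
```

For example, a response that only demonstrates "considers multiple options and their trade-offs" covers 1 of 4 indicators, i.e. `indicators_covered(...)` returns `0.25`. Keeping the indicators as data rather than prose makes it harder for individual interviewers to drift toward their own private criteria.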
Not all roles require the same level of competency. A junior hire might need to demonstrate basic awareness and willingness to learn. A senior hire needs to show mastery with examples from complex situations. Define 3 to 5 proficiency levels for each competency (such as developing, competent, proficient, advanced, expert) and calibrate your expectations to the seniority of the role. This prevents the common mistake of holding entry-level candidates to the same standard as experienced managers.
STAR is the universal framework for structuring competency-based questions and responses. It stands for Situation, Task, Action, Result. Both interviewers and candidates use it.
The interviewer asks an open-ended question designed to prompt a STAR response: "Tell me about a time when you had to manage a project with competing deadlines." Then they listen for four elements. Situation: What was the context? Task: What was the candidate's specific responsibility? Action: What did they actually do? Result: What happened as a consequence? If the candidate skips any element, the interviewer uses probing follow-ups: "What was the outcome?" or "What specifically did you do, not the team?" Good interviewers spend about 60% of the time listening and 40% probing for missing STAR elements.
The most frequent problem is candidates giving vague "we" answers instead of describing their personal actions. Another issue: candidates who describe what they "would" do hypothetically instead of what they actually did. Interviewers need to redirect firmly but politely. "I appreciate that. Can you give me a specific example from your experience?" On the interviewer side, the biggest pitfall is accepting incomplete answers because time is running out. Incomplete STAR responses produce unreliable scores. It's better to assess 4 competencies thoroughly than 6 competencies superficially.
| STAR Element | What It Captures | Example Probe Question |
|---|---|---|
| Situation | Context and background of the scenario | "Can you set the scene? What organization, what team, what time period?" |
| Task | The candidate's specific responsibility or goal | "What was your role specifically? What were you accountable for?" |
| Action | Concrete steps the candidate personally took | "Walk me through exactly what you did. What was your thought process?" |
| Result | Measurable outcomes and what was learned | "What was the outcome? How did you measure success? What would you do differently?" |
Below are ready-to-use questions organized by competency area. Each question is designed to prompt a STAR-format response.
Questions include:

- "Describe a time you had to motivate a team through a difficult period. What did you do and what was the result?"
- "Give me an example of when you had to give critical feedback to a direct report. How did you approach it and what happened?"
- "Tell me about a time you had to make an unpopular decision. How did you communicate it to your team?"

These questions test delegation, communication, conflict resolution, and the ability to influence without relying solely on authority.

Questions include:

- "Walk me through a complex problem you solved at work where the solution wasn't obvious."
- "Describe a situation where you had to make a decision with incomplete information. What did you do?"
- "Tell me about a time your initial approach to a problem didn't work. How did you adapt?"

Strong answers show structured thinking, comfort with ambiguity, and the ability to learn from failed approaches.

Questions include:

- "Give me an example of a successful cross-functional project you contributed to. What was your role?"
- "Describe a time you disagreed with a colleague on a work approach. How did you resolve it?"
- "Tell me about a time you had to work with someone whose work style was very different from yours."

These reveal how the candidate operates in team settings, handles conflict, and balances personal preferences with group needs.

Questions include:

- "Tell me about a time your priorities changed suddenly. How did you handle the shift?"
- "Describe a professional setback and how you recovered from it."
- "Give me an example of when you had to learn a new skill or tool quickly to meet a deadline."

Look for candidates who describe concrete coping strategies rather than generic statements about being "flexible" or "resilient."
Without a structured scoring system, competency-based interviews lose their primary advantage over unstructured formats. The scoring rubric is what turns subjective impressions into comparable data points.
Most organizations use a 4 or 5-point scale. A common approach: 1 (No evidence) means the candidate couldn't provide a relevant example. 2 (Limited evidence) means the example was vague, incomplete, or showed basic-level competency. 3 (Good evidence) means the candidate gave a clear STAR response demonstrating the competency at the expected level. 4 (Strong evidence) means the response showed the competency at a level above what the role requires. 5 (Exceptional evidence) is reserved for responses that demonstrate mastery with measurable, significant impact. Whichever width you choose, anchor every point with a written description. A 5-point scale works well in practice because it keeps "adequate" (3) and "strong" (4) as clearly separate ratings; coarser scales tend to blur that distinction.
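A minimal sketch of how per-competency ratings on this 1-5 scale might be aggregated for side-by-side comparison (the candidate labels and scores below are invented for illustration):

```python
# The 1-5 anchors described above, kept as data for scoring sheets.
SCALE_ANCHORS = {
    1: "No evidence", 2: "Limited evidence", 3: "Good evidence",
    4: "Strong evidence", 5: "Exceptional evidence",
}

def mean_score(ratings: dict[str, int]) -> float:
    """Average a candidate's per-competency ratings into one comparable number."""
    return sum(ratings.values()) / len(ratings)

# Hypothetical candidates: {competency: rating on the 1-5 scale}.
candidates = {
    "A": {"communication": 4, "decision-making": 3, "teamwork": 4},
    "B": {"communication": 3, "decision-making": 5, "teamwork": 2},
}
ranked = sorted(candidates, key=lambda c: mean_score(candidates[c]), reverse=True)
```

Here candidate A averages about 3.67 against B's 3.33, so `ranked` starts with `"A"`. In practice a plain mean is only a starting point; many teams weight must-have competencies more heavily or treat any rating of 1 as a hard filter.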
If multiple interviewers are assessing candidates, they need to calibrate their scoring before interviews begin. Run a practice session where all interviewers evaluate the same sample response and compare their ratings. Discuss disagreements until the group reaches a shared understanding of what a "3" versus a "4" looks like. Without calibration, one interviewer's "4" is another's "2," and the scores become meaningless. SHL Group recommends a minimum 90-minute calibration workshop before launching a competency-based interview cycle.
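One way to make the calibration check concrete is to measure exact and adjacent agreement between raters on the same sample responses. The sketch below assumes each rater scored the same answers on the 1-5 scale; the rater names and scores are made up:

```python
def agreement_rate(rater_a: list[int], rater_b: list[int], tolerance: int = 0) -> float:
    """Fraction of items where two raters' scores differ by at most `tolerance`."""
    assert len(rater_a) == len(rater_b), "raters must score the same items"
    hits = sum(1 for a, b in zip(rater_a, rater_b) if abs(a - b) <= tolerance)
    return hits / len(rater_a)

# Two interviewers scoring the same five sample responses.
alice = [3, 4, 2, 5, 3]
bob   = [3, 3, 2, 4, 5]
exact = agreement_rate(alice, bob)           # 2 of 5 identical -> 0.4
within_one = agreement_rate(alice, bob, 1)   # 4 of 5 within one point -> 0.8
```

Low exact agreement with high within-one agreement (as in this toy data) usually means the raters share an ordering but disagree on anchors, which is exactly what a calibration discussion is meant to fix.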
Organizations that switch from unstructured to competency-based interviews see measurable improvements across several hiring metrics.
Competency-based interviews aren't perfect. They work best when combined with other assessment methods.
The STAR method is well-known. Candidates can prepare polished answers that sound impressive but don't reflect their actual capabilities. Counter this by asking follow-up probes that go deeper than the rehearsed narrative: "What would you do differently now?" or "Who else was involved and how did they see the situation?" Drill into specifics. Prepared candidates can handle the initial question. Probing questions reveal whether the experience is genuine.
A thorough competency-based interview takes 45 to 75 minutes per candidate. Multiply that by 5 to 8 candidates per role, add interviewer preparation and calibration time, and the investment is significant. For high-volume roles where you're screening hundreds of applicants, consider using competency-based interviews only in later stages, with automated screening or phone screens handling the initial filter.
Candidates with limited work experience struggle to provide detailed examples of workplace competencies. They may have strong potential but lack the history needed for compelling STAR responses. For graduate and entry-level roles, consider supplementing competency-based interviews with situational judgment tests, work samples, or hypothetical scenario questions that test thinking rather than requiring years of examples.
Rolling out competency-based interviews requires planning, training, and ongoing quality checks.
Work with the hiring manager to identify the 6 to 8 core competencies for the role. Use the existing job description, interview the current role holder, and consult the team lead. Prioritize competencies that differentiate high performers from average ones. Generic competencies like "communication" are less useful than specific ones like "translating technical concepts for non-technical stakeholders."
Draft two questions per competency. Each question should prompt a STAR response. Write the scoring rubric alongside each question, defining what a 1, a 3, and a 5 response looks like for that specific competency. Have the hiring manager review and approve all questions before use.
Interviewers need to understand STAR probing techniques, the scoring rubric, and how to avoid common biases (halo effect, confirmation bias, recency bias). A 2 to 3 hour training workshop with role-play exercises is the minimum. Follow up with a calibration session using sample candidate responses.
After the first hiring cycle, debrief with interviewers. Review the data: did scores differentiate candidates effectively? Were any questions consistently unclear? Did any competency fail to produce useful signal? Refine the question bank and rubric based on this feedback. The best competency-based interview programs are updated every 6 to 12 months.
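The "did scores differentiate candidates?" check can be sketched as a simple spread calculation: flag any question whose ratings barely vary across candidates, since a flat distribution carries no signal (the 0.5 threshold and the example scores are arbitrary illustrations):

```python
from statistics import pstdev

def low_signal_questions(scores_by_question: dict[str, list[int]],
                         min_spread: float = 0.5) -> list[str]:
    """Flag questions whose scores barely vary across candidates."""
    return [q for q, scores in scores_by_question.items()
            if pstdev(scores) < min_spread]

# Hypothetical ratings from one hiring cycle (one score per candidate).
cycle = {
    "conflict example": [2, 4, 3, 5, 1],  # wide spread: useful signal
    "teamwork example": [3, 3, 3, 3, 4],  # nearly flat: candidates look alike
}
flagged = low_signal_questions(cycle)  # ["teamwork example"]
```

A flagged question isn't necessarily bad, but it's a candidate for rewording or replacement: either it's too easy, too rehearsable, or the rubric for it is too coarse to separate responses.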
Different interview formats serve different purposes. Here's how competency-based interviews compare to the alternatives.
| Format | Predictive Validity | Best For | Main Limitation |
|---|---|---|---|
| Competency-Based | 0.51 | Roles with defined skill requirements at mid/senior level | Time-intensive, requires interviewer training |
| Unstructured | 0.20 | Informal culture-fit conversations | Low validity, high bias, legally risky |
| Situational (Hypothetical) | 0.44 | Entry-level roles and candidates with limited experience | Tests intent, not proven behavior |
| Panel Interview | 0.46 | Senior roles requiring multiple stakeholder input | Scheduling difficulty, can intimidate candidates |
| Work Sample Test | 0.54 | Technical and creative roles | Expensive to design and administer at scale |
| Assessment Centre | 0.37 | Graduate programs and leadership assessment | Very high cost and setup effort |