A systematic process for identifying the gaps between current employee skills and the skills required to meet business objectives, used to prioritize and design targeted training interventions.
Key Takeaways
Most training fails because it solves the wrong problem. A manager notices low sales numbers and requests "sales training." But the real issue might be a broken CRM workflow, unclear pricing guidelines, or a territory assignment problem. No amount of training fixes a process issue.

Training needs analysis (TNA) is the discipline that stops organizations from jumping to solutions before understanding problems. It uses data collection methods like surveys, interviews, focus groups, performance data analysis, observation, and job task analysis to build an evidence-based picture of where gaps exist.

The process forces L&D teams to ask uncomfortable questions. Is training even the right solution? Sometimes the answer is no. Sometimes employees have the skills but lack the tools, the motivation, or the management support to apply them. A good TNA surfaces these root causes before anyone builds a PowerPoint deck.
A complete TNA examines needs at three distinct levels. Skipping any level produces an incomplete picture that leads to misaligned training programs.
Organizational analysis answers the question: where does the business need to go, and what capabilities are required to get there? Review strategic plans, annual goals, market changes, technology adoption plans, and regulatory shifts. Identify the organizational-level skill requirements that flow from these priorities. For example, a company expanding into the EU market needs employees with GDPR knowledge, multilingual communication skills, and cross-cultural management capabilities. If the company is adopting AI tools across departments, every team needs baseline AI literacy. Organizational analysis also examines whether the company's culture supports learning. If managers don't give employees time for training, or if the organization penalizes mistakes rather than treating them as learning opportunities, even the best-designed program will fail.
Task analysis breaks each role into its component tasks and identifies the knowledge, skills, and abilities required for each task. Use job descriptions, competency frameworks, and subject matter expert input to create a detailed task inventory. For each task, document the performance standard (what "good" looks like), the frequency (daily, weekly, quarterly), and the criticality (what happens if this task is done poorly). Compare these requirements against what employees currently demonstrate. The gap between required and actual performance at the task level tells you exactly what training content to develop. Don't rely solely on job descriptions. They're often outdated. Observe employees performing the work and interview high performers to understand what the role actually requires versus what the paperwork says.
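The task inventory described above can be captured in a simple data structure. This is an illustrative Python sketch, not a prescribed format: the 1-5 proficiency and criticality scales, the sample tasks, and the criticality-weighted ranking are all assumptions added for demonstration.

```python
from dataclasses import dataclass

@dataclass
class TaskEntry:
    task: str
    standard: str        # performance standard: what "good" looks like
    frequency: str       # daily, weekly, quarterly
    criticality: int     # 1 (low) to 5 (business-critical), assumed scale
    required_level: int  # proficiency the role requires, assumed 1-5 scale
    observed_level: int  # proficiency currently demonstrated, same scale

    def gap(self) -> int:
        """Positive value = a training-relevant shortfall at the task level."""
        return max(0, self.required_level - self.observed_level)

# Hypothetical inventory entries for illustration only.
inventory = [
    TaskEntry("Draft client status report", "Sent by Friday noon, no rework",
              "weekly", criticality=4, required_level=4, observed_level=2),
    TaskEntry("Update CRM records", "All fields complete within 24h",
              "daily", criticality=3, required_level=3, observed_level=3),
]

# Rank by criticality-weighted gap to see which tasks should drive content.
for t in sorted(inventory, key=lambda t: t.gap() * t.criticality, reverse=True):
    print(t.task, t.gap() * t.criticality)
```

Sorting by gap times criticality surfaces the tasks where poor performance hurts most, which is where training content development should start.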
Individual analysis identifies which specific employees need training and in what areas. Use performance review data, 360-degree feedback, skills assessments, certification records, and manager input to assess each person's capabilities against their role requirements. This prevents blanket training where everyone sits through content that half the room already knows. It enables personalized learning paths where each employee focuses on their specific gaps. It also identifies high performers who can serve as mentors or subject matter experts for training development. Be careful with self-assessments at this level. Research on the Dunning-Kruger effect shows that the least skilled individuals consistently overestimate their abilities, while high performers tend to underrate themselves. Validate self-assessments with objective measures.
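The validation step can be sketched in a few lines. This assumes self-ratings and assessed scores share a 1-5 scale; the 1.5-point threshold and the sample data are invented for illustration.

```python
# Hypothetical data: self-reported skill ratings vs. objective assessment
# scores, both on an assumed 1-5 scale.
self_ratings = {"alice": 4.5, "bob": 2.0, "carol": 4.0}
assessed = {"alice": 2.5, "bob": 2.5, "carol": 4.0}

def flag_overconfident(self_ratings, assessed, threshold=1.5):
    """Return names whose self-rating exceeds their assessed score by more
    than the threshold -- candidates for validation before planning training.
    The threshold is an arbitrary illustration, not a research-backed cutoff."""
    return sorted(
        name for name in self_ratings
        if self_ratings[name] - assessed.get(name, 0) > threshold
    )

print(flag_overconfident(self_ratings, assessed))  # → ['alice']
```

Flagged individuals aren't "wrong" about themselves; the mismatch simply signals that their training needs should be confirmed with an objective measure rather than taken from the survey.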
Each method has strengths and limitations. Use multiple methods to triangulate findings and build a reliable picture of training needs.
| Method | Best For | Time Required | Data Quality | Limitations |
|---|---|---|---|---|
| Surveys/questionnaires | Large populations, standardized data | Low (1-2 weeks) | Medium | Response bias, superficial answers |
| One-on-one interviews | Deep insights, sensitive topics | High (2-4 weeks) | Very high | Time-intensive, small sample size |
| Focus groups | Exploring shared challenges, generating ideas | Medium (1-2 weeks) | High | Groupthink risk, dominant voices |
| Performance data review | Objective gap identification | Low (1 week) | Very high | Only captures what's measured |
| Direct observation | Behavioral gaps, process issues | High (2-4 weeks) | Very high | Hawthorne effect, observer bias |
| Job task analysis | Role-specific skill requirements | High (3-6 weeks) | Very high | Requires SME involvement |
| Customer feedback | Service quality, product knowledge gaps | Low (ongoing) | High | Indirect, may not isolate training issues |
| Competency assessments | Standardized skill measurement | Medium (2-3 weeks) | High | Assessment design quality varies |
Follow this sequence to move from business problem identification through to a training plan that stakeholders will actually approve.
Start with the outcome, not the training request. When a manager says "my team needs communication training," ask: what's happening that shouldn't be? What's not happening that should be? What would success look like in measurable terms? Convert vague requests into specific, observable performance gaps. "Customer complaints about unclear project timelines increased 35% in Q2" is a problem statement. "My team needs to communicate better" is not.
Determine which roles, departments, or individuals are affected. A company-wide training rollout is rarely the right answer. Narrow the scope to the people whose skill gaps are actually causing the business problem. Consider tenure, experience level, and prior training when defining the target audience. A new hire with six months of experience has different needs than a ten-year veteran.
Use at least two data collection methods to validate findings. Combine quantitative data (performance metrics, assessment scores, survey results) with qualitative data (interview themes, observation notes, focus group insights). Look for patterns across data sources. If surveys say employees feel confident in a skill but performance data shows errors, investigate the disconnect. Analyze data by subgroup (department, location, tenure) to identify whether the gap is universal or concentrated in specific populations.
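The survey-versus-performance disconnect described above can be checked programmatically once both data sources are in hand. A minimal Python sketch, assuming self-reported confidence on a 1-5 scale and a measured error rate per record; the sample records and the cutoff values are invented.

```python
# Hypothetical merged records: one self-report field and one objective
# performance field per employee, grouped by department.
records = [
    {"dept": "sales", "self_confidence": 4.2, "error_rate": 0.25},
    {"dept": "sales", "self_confidence": 4.5, "error_rate": 0.22},
    {"dept": "support", "self_confidence": 3.1, "error_rate": 0.05},
]

def disconnects(records, conf_floor=4.0, err_ceiling=0.20):
    """Group by department and flag groups where average self-reported
    confidence is high while the average measured error rate is also high --
    the survey-vs-performance mismatch worth investigating. Cutoffs are
    illustrative assumptions."""
    groups = {}
    for r in records:
        groups.setdefault(r["dept"], []).append(r)
    flagged = []
    for dept, rows in groups.items():
        conf = sum(r["self_confidence"] for r in rows) / len(rows)
        err = sum(r["error_rate"] for r in rows) / len(rows)
        if conf >= conf_floor and err >= err_ceiling:
            flagged.append(dept)
    return sorted(flagged)

print(disconnects(records))  # → ['sales']
```

The same grouping key could be swapped for location or tenure band to test whether a gap is universal or concentrated, as the subgroup analysis above suggests.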
Not every gap requires training. Prioritize based on business impact, urgency, and feasibility. A skill gap that costs the company $500K per year in errors deserves immediate attention. A gap that affects one person in a non-critical role can wait. For each priority gap, determine whether training is the right solution. If employees know what to do but don't do it, the problem might be motivation, tools, or management. If they genuinely don't know how, training is appropriate. Recommend specific interventions: what type of training, for whom, delivered how, measured by what.
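One way to make the prioritization explicit is a simple scoring rubric. The weights, the 1-5 criteria, and the skill-versus-motivation cause labels below are an assumed rubric for illustration, not a standard from the source.

```python
# Hypothetical gap list. "cause" encodes the root-cause check from the text:
# if employees know how but don't do it, training is the wrong lever.
gaps = [
    {"gap": "invoice errors", "impact": 5, "urgency": 5, "feasibility": 4,
     "cause": "skill"},       # employees genuinely don't know how
    {"gap": "CRM not updated", "impact": 3, "urgency": 2, "feasibility": 5,
     "cause": "motivation"},  # they know how but don't do it
]

def prioritize(gaps):
    """Score each gap (impact double-weighted, an assumed choice) and mark
    whether training is the right intervention for its root cause."""
    out = []
    for g in gaps:
        score = g["impact"] * 2 + g["urgency"] + g["feasibility"]
        action = "training" if g["cause"] == "skill" else "non-training fix"
        out.append((g["gap"], score, action))
    return sorted(out, key=lambda x: x[1], reverse=True)

for gap, score, action in prioritize(gaps):
    print(gap, score, action)
```

The exact weights matter less than the discipline: every gap gets an impact-based score and an explicit training-or-not decision before anything is built.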
The TNA report is the deliverable that gets stakeholder buy-in and funding approval. A weak report means good analysis gets ignored.
Even experienced L&D professionals fall into these traps. Recognizing them early saves time and preserves credibility.
Jumping straight to individual skill gaps without understanding business context produces training that's technically accurate but strategically irrelevant. Always start by confirming what the business needs before investigating what employees lack. If you can't explain how closing a particular skill gap moves a business metric, the training probably isn't worth building.
Using only surveys, or only manager opinions, gives a partial picture. Managers often attribute team performance issues to skill gaps when the real problem is workload, tools, or unclear expectations. Surveys capture perception, not reality. Always triangulate with at least two methods, ideally combining self-report data with objective performance measures.
Business needs change. Skills decay. New technologies emerge. A TNA conducted in January may be partially outdated by July. Build continuous needs sensing into your L&D operations through regular check-ins with business leaders, ongoing skills assessments, and real-time performance data monitoring. Annual TNAs catch the big strategic shifts. Quarterly pulse checks catch emerging gaps before they become problems.
Software can accelerate data collection and analysis, but it doesn't replace human judgment in interpreting results and designing solutions.
| Tool Category | Examples | What It Does | Price Range |
|---|---|---|---|
| Survey platforms | SurveyMonkey, Qualtrics, Google Forms | Distribute questionnaires, analyze responses | Free to $1,500/yr |
| Skills assessment | Skillsoft Percipio, Pluralsight Skills, iMocha | Test technical and soft skills with validated assessments | $5-$30 per user/yr |
| LMS with analytics | Cornerstone, SAP SuccessFactors, Docebo | Track completion, scores, and skill profiles at scale | $6-$36 per user/mo |
| Competency mapping | TalentGuard, Avilar, HRSG | Map skills to roles and identify gaps systematically | $4,000-$50,000/yr |
| Performance analytics | Visier, Crunchr, One Model | Correlate training data with business outcomes | $5-$12 per employee/mo |
Data showing why TNA matters and how organizations currently approach it.