Performance feedback provided by colleagues at the same organizational level, capturing perspectives on collaboration, teamwork, and day-to-day behaviors that managers may not directly observe.
Key Takeaways
A peer review collects feedback from the people who work beside an employee every day. Not above them. Not below them. Beside them. Managers see presentations, deliverables, and meeting behavior. Peers see everything else: how someone handles a last-minute request, whether they share credit, how they communicate under pressure, whether they help when there's nothing in it for them.

That's why peer feedback is so valuable: it fills blind spots. A manager might rate a developer highly because their code ships on time, but peers know that developer refuses to review anyone else's pull requests and creates bottlenecks during code freezes. The manager sees output. Peers see behavior. There's a reason 90% of Fortune 500 companies use multi-rater feedback systems that include peer input (Forbes, 2023). Single-source, manager-only evaluations miss too much.

But peer reviews require careful design. Without clear guidelines and psychological safety, they can become popularity contests, weapons for workplace politics, or empty exercises where everyone gives everyone else a polite "meets expectations."
Peer reviews are one input into the broader 360-degree feedback model. Here's how the two relate.
| Dimension | Peer Review (Standalone) | 360-Degree Feedback |
|---|---|---|
| Feedback sources | Same-level colleagues only | Manager, peers, direct reports, self, sometimes external stakeholders |
| Scope | Collaboration and team behavior | Full performance across all relationships |
| Complexity | Low to moderate | High: requires coordination across multiple rater groups |
| Cost | Low: can be run with simple surveys | Higher: often requires specialized software or consultants |
| Time investment | 15-20 minutes per reviewer | 30-60 minutes per rater group |
| Best for | Team dynamics, project-based evaluation | Leadership development, senior role assessment |
| Anonymity | Usually anonymous | Usually anonymous across all rater groups |
| Frequency | Can be quarterly or per-project | Typically annual or semi-annual |
A well-designed peer review process produces actionable insights. A poorly designed one produces politics and anxiety.
Aim for 3-5 peer reviewers per employee. Fewer than 3 doesn't provide enough perspectives. More than 5 creates excessive workload. Selection methods vary: the employee nominates peers (risk: they pick friends), the manager selects peers (risk: they pick allies), or a combination where the employee nominates and the manager approves (balancing both perspectives). For cross-functional roles, include at least one reviewer from outside the employee's immediate team to capture how they collaborate across boundaries.
Keep peer review questionnaires focused on behaviors peers can actually observe. Good questions: "How effectively does this person collaborate with the team?" "How does this person handle disagreements or conflicting priorities?" "Would you want to work with this person on your next project? Why or why not?" Bad questions: "How would you rate this person's technical expertise?" (unless the peer has relevant technical knowledge to judge). "How productive is this person?" (peers often can't measure output directly). Use 5-7 questions maximum. Include at least 2 open-ended questions for qualitative feedback.
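The design rules above (5-7 questions, at least 2 open-ended) are easy to enforce programmatically if you build your own survey tooling. Here is a minimal Python sketch; the `Question` model and `validate` helper are hypothetical names, not part of any survey library.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    open_ended: bool = False  # False means a 1-5 rating-scale question

def validate(questions):
    """Check a peer review questionnaire against the design guidelines:
    5-7 questions total, with at least 2 open-ended ones."""
    if not 5 <= len(questions) <= 7:
        raise ValueError("use 5-7 questions")
    if sum(q.open_ended for q in questions) < 2:
        raise ValueError("include at least 2 open-ended questions")
    return True

# Example questionnaire drawn from the good-question list above.
questionnaire = [
    Question("How effectively does this person collaborate with the team?"),
    Question("How consistently does this person follow through on commitments?"),
    Question("How constructively does this person handle disagreements?"),
    Question("How does this person handle conflicting priorities?",
             open_ended=True),
    Question("Would you want to work with this person on your next project? Why?",
             open_ended=True),
]
```

A survey with only rating scales, or with ten questions, would fail validation before it ever reached reviewers.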
Most organizations make peer reviews anonymous to encourage candor. This works well when combined with clear guidelines about constructive feedback. The trade-off: anonymity can enable harsh or unhelpful comments. A middle ground: make feedback anonymous to the employee but visible to HR, who can flag inappropriate responses. Some high-trust cultures use attributed (non-anonymous) peer feedback and report richer, more actionable input. This only works in environments with genuine psychological safety.
These questions are organized by competency area and designed to capture behaviors peers observe directly.
| Competency | Rating Question (1-5 scale) | Open-Ended Follow-Up |
|---|---|---|
| Collaboration | How effectively does this person work with others to achieve shared goals? | Describe a specific example of how this person collaborated on a project or task. |
| Communication | How clearly does this person communicate ideas and expectations? | Is there a time when their communication was particularly effective or could have been better? |
| Reliability | How consistently does this person follow through on commitments? | Can you recall a specific situation where their reliability stood out? |
| Problem-solving | How well does this person handle unexpected challenges or setbacks? | Describe how this person approached a difficult problem. |
| Supportiveness | How willing is this person to help team members when they're struggling? | Give an example of when this person went out of their way to support a colleague. |
| Conflict resolution | How constructively does this person handle disagreements? | Have you observed this person in a disagreement? How did they handle it? |
Raw peer feedback needs context and analysis before it's useful for the performance conversation.
One negative review from five peers is an outlier. Three negative reviews on the same topic is a pattern. Focus on themes that appear across multiple reviewers. If four out of five peers mention communication challenges, that's a signal. If one peer mentions it and four don't, it might reflect a specific relationship dynamic rather than a systemic issue. Aggregate quantitative ratings to see averages and ranges, but always read the qualitative comments for context.
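The pattern-versus-outlier logic above can be sketched in a few lines of Python. This assumes a hypothetical data shape in which HR has tagged each reviewer's qualitative comments with theme labels; the field names and the threshold of 3 reviewers are illustrative choices, not a standard.

```python
from statistics import mean

# Hypothetical peer review data: 1-5 ratings plus themes tagged
# by HR while reading each reviewer's qualitative comments.
reviews = [
    {"communication": 2, "collaboration": 4, "themes": ["communication"]},
    {"communication": 3, "collaboration": 5, "themes": ["communication"]},
    {"communication": 2, "collaboration": 4, "themes": ["communication", "deadlines"]},
    {"communication": 4, "collaboration": 5, "themes": []},
    {"communication": 3, "collaboration": 4, "themes": ["deadlines"]},
]

def summarize(reviews, threshold=3):
    """Aggregate ratings and surface themes raised by >= threshold reviewers."""
    summary = {}
    for competency in ("communication", "collaboration"):
        scores = [r[competency] for r in reviews]
        summary[competency] = {
            "mean": round(mean(scores), 2),
            "range": (min(scores), max(scores)),
        }
    # Count how many reviewers raised each theme: a theme mentioned by
    # several reviewers is a pattern; a single mention may be an outlier.
    counts = {}
    for r in reviews:
        for t in r["themes"]:
            counts[t] = counts.get(t, 0) + 1
    summary["patterns"] = [t for t, n in counts.items() if n >= threshold]
    return summary

result = summarize(reviews)
```

With this sample data, "communication" is flagged as a pattern (three of five reviewers raised it) while "deadlines" (two mentions) is not, mirroring the three-reviewer rule of thumb above.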
Peer feedback is influenced by the reviewer's relationship with the employee. Close collaborators give more detailed, often more favorable feedback. Peers who've had conflicts may skew negative. Peers who rarely interact may give uninformed middle-of-the-road ratings. This is why selecting the right mix of reviewers matters. The manager should consider who provided the feedback (even when anonymous, the manager often knows the reviewer pool) when interpreting results.
Share peer feedback themes with the employee during the review meeting. Frame it as: "Several of your peers highlighted your willingness to help during the Q3 product launch. They also mentioned that action items from your meetings sometimes aren't followed up on." Summarize themes rather than reading individual comments verbatim. Never reveal which peer said what, even if the employee asks. Protecting anonymity is essential for the process to work in future cycles.
Peer reviews have real limitations that organizations need to manage proactively.
Beyond individual performance evaluation, peer reviews can improve team dynamics and collaboration.
When you aggregate peer review data across an entire team, patterns emerge that reveal team health. If multiple team members receive low scores on "communication clarity," the issue might be a team process problem (unclear documentation standards), not individual performance. If one person consistently scores low on collaboration while everyone else scores high, it's likely an individual issue. This aggregate view helps managers distinguish between team-level interventions and individual coaching needs.
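The team-versus-individual distinction can be expressed as a simple rule: if most of the team scores below a cutoff on a competency, treat it as a process problem; if only one or two people do, treat it as individual coaching. A minimal Python sketch, with hypothetical names, scores, and a 3.0 cutoff chosen for illustration:

```python
# Hypothetical per-person average peer scores on the 1-5 scale.
team_scores = {
    "alice": {"communication": 2.4, "collaboration": 4.5},
    "bob":   {"communication": 2.6, "collaboration": 4.2},
    "carol": {"communication": 2.2, "collaboration": 2.1},
    "dan":   {"communication": 2.8, "collaboration": 4.4},
}

def diagnose(team_scores, low=3.0):
    """Separate team-wide issues from individual outliers."""
    competencies = next(iter(team_scores.values())).keys()
    team_issues, individual_issues = [], []
    for c in competencies:
        below = [name for name, s in team_scores.items() if s[c] < low]
        if len(below) > len(team_scores) / 2:
            # Most of the team is low: look at the process, not the people.
            team_issues.append(c)
        else:
            individual_issues.extend((name, c) for name in below)
    return team_issues, individual_issues

team_issues, individual_issues = diagnose(team_scores)
```

Here everyone scores low on communication (a team-level intervention, e.g. documentation standards), while only one person scores low on collaboration (an individual coaching conversation).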
Peer reviews work particularly well as part of project retrospectives. After completing a major initiative, each team member provides brief feedback on their collaborators. This captures observations while they're fresh and ties feedback to specific work rather than abstract competencies. Keep it to 3 questions: What did this person do well? What could they improve? Would you want to work with them again? The result is actionable, timely feedback that drives improvement on the next project.
Research data on peer review practices and their effectiveness.