Performance reviews are how engineering managers formally assess, communicate, and develop their team members' contributions. When done well, they provide clarity, motivation, and a path forward. When done poorly, they are dreaded rituals that demoralise the team and provide little value. This guide shows engineering managers how to run performance reviews that drive growth and build trust.
Foundations of Effective Performance Reviews
Effective performance reviews are not surprises. If an engineer learns for the first time in their review that their performance is below expectations, the manager has failed - not the engineer. Continuous feedback throughout the review period ensures that the formal review is a structured summary of conversations that have already happened, not a reveal of previously hidden judgements.
Define evaluation criteria that are specific to engineering roles and aligned with your career ladder. Common dimensions include: technical skills (code quality, architecture, debugging), delivery (reliability, estimation accuracy, meeting commitments), collaboration (code review quality, mentoring, cross-team work), and impact (the significance and scope of problems solved). Each dimension should have clear descriptions of what 'meeting expectations' and 'exceeding expectations' look like.
Separate the development conversation from the compensation conversation. When performance feedback and salary adjustments are discussed simultaneously, people focus on the money and miss the growth feedback. If possible, hold the development discussion first, allow time for reflection, and discuss compensation separately. This separation ensures that the most valuable part of the review - actionable feedback for growth - receives the attention it deserves.
- Performance reviews should contain no surprises - continuous feedback makes the review a summary
- Define evaluation criteria specific to engineering with clear descriptions for each performance level
- Separate development feedback from compensation discussions to ensure growth feedback is heard
- Gather evidence throughout the review period - do not rely on memory at review time
- Reviews should be conversations, not lectures - create space for the engineer's perspective
Gathering Evidence and Avoiding Recency Bias
The biggest threat to fair performance reviews is recency bias - overweighting the most recent weeks of the review period. An engineer who delivered exceptional work for five months but had a rough final month may receive an unfairly negative review, while one who coasted for five months but finished strong may receive an unfairly positive one.
Combat recency bias by keeping a running log of observations throughout the review period. Note significant achievements, challenges overcome, feedback themes, and areas for growth as they happen. Many managers use a private document for each direct report where they jot brief notes after notable events. When review time comes, this log provides balanced evidence spanning the full period.
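If you prefer something scriptable to a plain document, the following is a minimal sketch of an evidence-tracking helper, assuming a one-file-per-report layout under a `reviews/` directory; the script name, directory, and category labels are illustrative, not a prescribed tool.

```python
# evidence_log.py - minimal sketch of an evidence-tracking helper (illustrative).
# Appends a dated, tagged observation to one Markdown file per direct report,
# so review time starts from a balanced log instead of memory.
import argparse
import datetime
import pathlib

LOG_DIR = pathlib.Path("reviews")  # assumed layout: reviews/<name>.md


def log_observation(name: str, category: str, note: str) -> None:
    """Append a timestamped, categorised observation to the report's running log."""
    LOG_DIR.mkdir(exist_ok=True)
    entry = f"- {datetime.date.today().isoformat()} [{category}] {note}\n"
    with open(LOG_DIR / f"{name}.md", "a", encoding="utf-8") as log:
        log.write(entry)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Log a performance observation.")
    parser.add_argument("name", help="direct report, e.g. 'alex'")
    parser.add_argument("category", help="e.g. technical, delivery, collaboration, impact")
    parser.add_argument("note", help="one-line observation with enough context to be useful later")
    args = parser.parse_args()
    log_observation(args.name, args.category, args.note)
```

Aligning the category tags with your evaluation dimensions makes it easy to group the evidence when drafting the written review, for example: `python evidence_log.py alex delivery "Shipped the billing migration ahead of schedule despite a late scope change"`.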
Gather input from multiple sources. Peer feedback from code review partners, feedback from the product manager and designer, and input from other teams the engineer worked with provide a more complete picture than the manager's observations alone. Use a structured feedback request with specific questions rather than an open-ended 'tell me about this person's performance.'
Writing Performance Reviews That Drive Growth
Structure the written review around the defined evaluation dimensions, with specific examples for each. 'Meets expectations in technical skills' without elaboration is useless. 'Meets expectations in technical skills - consistently delivers well-structured code as evidenced by low defect rates and positive code review feedback. Area for growth: system design skills, particularly around database schema design for high-write-volume scenarios' provides actionable clarity.
Be honest about areas for improvement. Sugarcoating or omitting development areas does the engineer a disservice - they cannot improve what they do not know about. Frame development areas as growth opportunities and provide specific suggestions for how to develop the skill: 'To strengthen your system design skills, I recommend leading the design review for the upcoming reporting service and pairing with Maria, who has deep expertise in this area.'
Include a forward-looking component that outlines expectations and development goals for the next review period. This creates a clear contract between manager and engineer about what success looks like going forward. Review this forward-looking plan at the start of the next period to ensure alignment and adjust if priorities have changed.
Delivering Reviews as Productive Conversations
Start the review conversation by asking the engineer to share their self-assessment first. This reveals how their perception aligns with yours and identifies areas of agreement and divergence. When perceptions align, you can move quickly. When they diverge, you can explore the gap with curiosity: 'I see you rated yourself highly on delivery. Can you share what you are basing that on? I had a somewhat different observation.'
Spend proportionally more time on growth areas than on areas of strength. Engineers who are performing well appreciate being told so, but they benefit most from guidance on how to reach the next level. 'Your technical execution is strong - here is what I would need to see for you to be considered for a senior role' is more valuable than a detailed recitation of everything they did well.
End the review conversation with clear, agreed-upon next steps. What are the development goals for the next period? What support will the manager provide? When will the next check-in happen? A review without actionable next steps is a wasted opportunity. Follow up within a week with a written summary of the conversation and agreed actions.
Calibration and Ensuring Fairness Across Teams
Calibration is the process in which multiple managers review each other's assessments to ensure consistent standards across teams. Without calibration, different managers may apply different bars for 'meets expectations', leading to unfair outcomes for engineers who happen to have an unusually strict or lenient manager.
In calibration sessions, each manager presents their assessments with supporting evidence. The group discusses whether the evidence supports the rating and whether similar contributions in other teams received similar ratings. This process surfaces discrepancies and helps managers calibrate their standards. It also catches cases where bias (positive or negative) may be influencing a manager's assessment.
Be open to adjusting your assessments based on calibration feedback. If the group convincingly argues that a rating you gave is too high or too low relative to the standard, change it. Calibration only works if managers are willing to be influenced by their peers. The goal is fair and consistent evaluation across the entire engineering organisation, not defending your original ratings.
Key Takeaways
- Performance reviews should contain no surprises - continuous feedback throughout the period is essential
- Keep a running log of observations to combat recency bias and provide balanced evidence
- Include specific examples and actionable growth suggestions for each evaluation dimension
- Ask for the engineer's self-assessment first to identify alignment and areas for deeper discussion
- Participate in calibration to ensure consistent standards across teams and fair outcomes
Frequently Asked Questions
- How often should formal performance reviews happen?
- Formal reviews should happen at least twice per year. Some organisations run quarterly reviews, which provide more frequent formal feedback but can feel burdensome for both managers and engineers. The key insight is that the formal review cadence matters less than the continuous feedback cadence. If you are giving regular feedback in one-on-ones, the formal review becomes a structured summary rather than a high-stakes event.
- How do you review engineers whose work you cannot fully evaluate technically?
- Gather input from people who can evaluate the technical work - peer engineers, tech leads, and architects. Ask specific questions: 'How would you rate the quality of their code? How does their system design compare to what you would expect at their level?' Combine this technical input with your own observations of collaboration, delivery, communication, and leadership. You do not need to be the technical expert on your team to provide a fair and useful review.
- How do you handle a performance review when you have to deliver a poor rating?
- If you have been providing continuous feedback, a poor rating should not be a surprise. Be direct, specific, and compassionate. Present the evidence clearly, acknowledge what the engineer does well, and focus the conversation on what needs to change and how you will support the change. Provide a clear improvement plan with specific goals, timeline, and check-in schedule. Make sure the engineer understands the consequences if improvement does not occur, but also convey genuine commitment to their success.
Explore Engineering Manager Templates
Download performance review templates, self-assessment guides, calibration frameworks, and evidence-tracking tools designed specifically for engineering managers.