Avoiding Bias
Every interviewer has biases. That's not a moral failing — it's a cognitive limitation. The question isn't whether you're biased. It's whether you have processes in place to prevent those biases from contaminating your signal.
Why bias matters in interviews
An unstructured interview is a bias delivery system. Without structure, interviewers default to pattern-matching against their own experiences, preferences, and mental models. The result: you hire people who remind you of yourself, or who remind you of the last person who succeeded, or who happen to make a strong first impression.
Structured interviewing — principles-based evaluation, behavioral questions, evidence-based notes — is the primary defense. Everything you've learned in this workshop so far is, in part, a bias mitigation strategy. But you also need to know the specific bias patterns so you can catch them when they slip through.
The biases that affect interviewers most
| Bias | What it is | How it shows up in interviews |
|---|---|---|
| Confirmation bias | Seeking evidence that supports your initial impression while ignoring contradicting evidence | You decide in the first 5 minutes that a candidate is strong, then spend the rest of the interview noticing their strengths and overlooking their weaknesses |
| Halo effect | One positive trait colors your entire evaluation | A candidate who gives a brilliant answer to your first question gets rated highly on every principle — even the ones you didn't probe |
| Horn effect | One negative trait colors your entire evaluation | A candidate stumbles on the first question and you mentally downgrade them for the rest of the interview, even when later answers are strong |
| Similarity bias (affinity bias) | Favoring candidates who are like you | You feel a stronger connection with candidates who went to the same school, share your hobbies, or have a similar communication style |
| Anchoring | Over-weighting the first piece of information you receive | Reading another interviewer's feedback before your own write-up shifts your assessment toward theirs |
| Contrast effect | Evaluating a candidate relative to the previous one instead of against the bar | After three weak candidates, an average candidate feels exceptional. After a stellar candidate, a strong one feels mediocre. |
| Recency bias | Over-weighting what happened most recently | The candidate's last answer dominates your overall assessment, even if earlier answers were stronger or weaker |
| First impression bias | Letting the first few minutes disproportionately shape your evaluation | A candidate who seems nervous in the intro gets rated lower, even though their actual answers are strong once they settle in |
How structured interviewing fights bias
The good news: most of what you've already learned in this workshop is bias-resistant by design.
| Workshop skill | Bias it counters |
|---|---|
| Principles-based evaluation | Forces you to evaluate against defined criteria, not gut feel — counters confirmation bias and similarity bias |
| Behavioral questions | Asks for evidence of past behavior, not hypotheticals — counters halo/horn effects by demanding specific examples |
| Probing and follow-ups | Pushes past polished surfaces to find real signal — counters first impression bias |
| Independent write-ups before debrief | Prevents anchoring on other interviewers' opinions |
| Principle-by-principle debrief | Prevents halo/horn effects from carrying across principles |
Structure doesn't eliminate bias. It limits the damage bias can do. Every structured process you follow is a guardrail against your own cognitive shortcuts.
Separating observation from interpretation
This is the single most important bias mitigation technique for interviewers. When you take notes, you're doing two things at once — and you need to keep them separate:
Observation (what the candidate said and did):
- "Candidate described pulling three months of logs to identify the scope of the problem"
- "Candidate said 'we decided' when describing the architecture choice — did not clarify individual role when probed"
- "Candidate quantified the result: reduced latency from 340ms to 12ms"
Interpretation (what you think it means):
- "Candidate has strong analytical skills"
- "Candidate may not have been the decision-maker"
- "Candidate is results-oriented"
Observations are evidence. Interpretations are conclusions. Your notes should be overwhelmingly observations. Save your interpretations for your written evaluation after the interview — and when you do interpret, make sure every interpretation is supported by at least one specific observation.
| Biased note (interpretation as observation) | Unbiased note (observation) |
|---|---|
| "Candidate is a strong leader" | "Candidate described reorganizing the on-call rotation, getting buy-in from 3 team leads, and reducing incident response time by 40%" |
| "Candidate didn't seem very technical" | "When asked about the database migration tradeoffs, candidate spoke at a high level and said 'the team handled the technical details'" |
| "Really impressive — went to MIT" | "Candidate described building a distributed caching layer that handled 50K requests/sec" |
| "Seemed nervous and unsure" | "Candidate paused for 15 seconds before answering the first question. Subsequent answers were detailed and specific." |
| "Not a culture fit" | "Candidate described preferring to work independently and said they 'don't love meetings.' When probed on collaboration, gave an example of async code reviews but no examples of real-time problem-solving with others." |
Notice the last example. The observation still captures potentially concerning information — but it's evidence that the debrief panel can evaluate, not a label that shuts down discussion.
Calibrating with your panel
Before the interview loop begins, the panel should align on:
- What principles each interviewer is evaluating — so there's no overlap and no gaps
- What "strong" looks like for each principle at this level — a senior engineer's Ownership looks different from a junior engineer's
- The bar — is this a backfill for a known strong team, or a growth hire where potential matters more?
This pre-calibration reduces bias because interviewers enter the room with a shared framework rather than individual, unspoken standards.
Common traps and how to avoid them
The "airport test": "Would I want to be stuck in an airport with this person?" This is similarity bias disguised as a reasonable question. You're hiring a colleague, not a travel companion.
The credential halo: Prestigious schools and well-known companies create an immediate positive impression that has nothing to do with the candidate's actual ability. Evaluate what they did, not where they did it.
The confident speaker: Confidence and competence are correlated — but not as strongly as you think. Quiet, thoughtful candidates may have deeper expertise than charismatic ones. Probe both equally.
The "gut feeling": If your gut says hire or no-hire but you can't articulate why with specific evidence, your gut is probably reflecting a bias, not signal. Evidence you can write down is signal. Everything else is noise.
The test for bias is simple: Can you support your assessment with specific things the candidate said and did? If not, you're not evaluating — you're reacting.
Exercise: Rewrite biased notes
Below are five interviewer notes taken during a behavioral interview. Some are good observations. Some are biased interpretations. Rewrite the biased notes as objective behavioral observations that would be useful in a debrief.
If a note is already a good observation, say so and explain why.

1. "Great energy — really likable person. Would be a great culture fit."
2. "Candidate described identifying a 15% drop in conversion rate by analyzing A/B test data, then proposed and implemented a revised checkout flow that recovered 12% of the lost conversions within two weeks."
3. "Not technical enough. Couldn't go deep on system design."
4. "She reminded me of Sarah on our team — same kind of sharp, quick thinker."
5. "Candidate went to a no-name school but actually gave decent answers."
For each biased note, rewrite it as an observation — what the candidate actually said or did. Remove labels, credentials, comparisons to other people, and vague impressions. Keep the factual content. If you need to infer what the interviewer might have observed, describe what objective evidence could replace the biased note.
For offline feedback, work through the scenarios below:
1. You're 5 minutes into an interview and the candidate is nervous, speaking quietly, and giving short answers. You notice yourself thinking "this person isn't strong enough for the role." What should you do?
2. An interviewer's debrief notes include: "The candidate went to Stanford and worked at Google — clearly has strong technical fundamentals. Recommend hire." What bias pattern is present?
3. You interviewed three candidates today. The first two gave vague, surface-level answers. The third candidate gave a solid answer with reasonable detail and some quantified results. You find yourself thinking "this one is really strong." What should you check?