Hiring a junior developer is easy. You're not buying expertise — you're betting on trajectory. Attitude, curiosity, how they talk about problems they don't understand yet. You're looking for someone who wants to learn fast, communicates clearly, and won't poison the team's culture. That's a conversation. You can do it over coffee.
Then you try to hire a senior engineer.
Suddenly, nobody knows what they're doing.
The whiteboard comes out. The LeetCode problems appear. Someone schedules a five-round "system design interview" and nobody agrees on what passing even looks like. You spend three weeks putting a candidate through a gauntlet — and at the end of it, you're still not sure if they can actually do the job.
I've watched this happen at company after company. The irony is brutal: the more experience we require, the less reliable our hiring process becomes.
Why Juniors Are Easier to Hire Than You Think
When someone is new to the field, the bar is conceptually simple. You're not evaluating what they know — you're evaluating who they are. Can they learn? Do they ask good questions? Do they own their mistakes or deflect them? Are they energized by problems or exhausted by them?
These things are legible. A one-hour conversation surfaces them. You don't need to trick anyone. You ask them to walk you through something they built, watch how they explain it, notice whether they're honest about what broke and why.
Soft skills aren't the consolation prize for technical inexperience. For juniors, they're the whole signal. Because everything else — the specific languages, frameworks, patterns — can be taught. What can't be taught is the disposition to learn it.
That's not a low bar. That's just the right bar for the right level.
Where It Falls Apart
The problem starts when we try to apply the same intuition to mid-level and senior engineers — and we can't. Because now, disposition isn't enough. We need to know if they can actually do the thing.
Can they design a system that won't collapse under load? Can they identify the failure mode before it happens? Can they read a codebase they didn't write and improve it without breaking it? Can they make hard tradeoffs and explain why?
These are hard skills. And they're genuinely hard to evaluate without either wasting everyone's time or accidentally filtering for the wrong person.
We reach for proxies. Algorithmic puzzles, because they're measurable. Architecture diagrams on a whiteboard, because they feel impressive. Previous companies, because brand is easier to assess than capability. Years of experience, because seniority has to mean something, right?
None of these actually measure what we think they measure.
I've seen senior engineers with fifteen years of experience fail a binary search problem on a whiteboard — not because they didn't understand binary search, but because nobody thinks in that register under pressure, in a silent room, with someone watching them. I've seen others pass those same exercises and then struggle to ship anything that wasn't supervised.
The proxy becomes the test. And the proxy is broken.
The Real Problem Is Signal
What we're really trying to answer is: has this person solved hard problems before, and can they solve them again in a new context?
That's a question about judgment, not just knowledge. It's about whether they've been in enough situations where the right answer wasn't obvious — and figured it out anyway. Whether they've failed in interesting ways and learned the right things from it.
You can't get that from a whiteboard. You can sometimes get it from a deep, honest conversation about the hardest thing they've ever debugged. About a decision they made that looked right and turned out to be wrong. About how they'd design something from scratch — not perfectly, but honestly, including the parts they'd get wrong first.
The best technical interviews I've been part of felt like a conversation between two engineers trying to figure something out together. The worst ones felt like an exam where nobody agreed on the answer key.
We keep building processes that optimize for consistency when what we actually need is accuracy. A consistent process that measures the wrong thing is just reliably bad hiring.
The Gap Nobody Talks About
Here's the asymmetry that makes this hard: evaluating soft skills scales easily. Almost anyone on the team can run a culture conversation. But evaluating hard skills at the senior level doesn't scale, because it requires someone who has the hard skills themselves: someone senior enough to know what good actually looks like, experienced enough to separate fluency from theater.
Most companies don't have enough of those people available to interview. So they fall back on structure — rubrics, coding platforms, standardized questions — because structure feels fair and can be run by anyone.
It's not unfair. It's just not measuring what it claims to measure.
We haven't solved the senior hiring problem. We've just industrialized our workaround.
What Better Looks Like
The companies that hire well at the senior level do something uncomfortable: they slow down. They do fewer interviews, but deeper ones. They put their best engineers in the room, not their most available ones. They design for real signal — take-home projects scoped to real problems, pair programming sessions, conversations about tradeoffs instead of trivia.
They accept that evaluating hard skills is expensive, and they don't pretend otherwise.
Because the alternative is faster and cheaper and consistently wrong. And a bad senior hire costs you more than the money — it costs you the six months before you realize the mistake, the credibility of the team that trusted your process, and the engineers you lose in the meantime because the wrong person is in a critical seat.
Hiring is easy when you're buying potential. It's hard when you're buying proof.
If your process treats both the same way, you're not really measuring either.
