Evaluating software development talent is an incredibly difficult process. To make it even harder, there's no widely accepted industry standard or best practice, and every individual has opinions shaped by their own experiences.
When we evaluate talent, we're really evaluating two separate but highly related things:
- Technical Talent. This is the ability to deliver on requested functionality in a timely manner. For example, could you trust this person to build a bug-free JSON API in one quarter?
- Behavioral Talent. This is everything they bring to the table beyond just their technical skills. Are they easy to work with? Do they enjoy mentoring, or leading design discussions, or working across teams?
Behavioral talent is the "easier" of the two: it can be assessed by both technical and non-technical people, and it's the kind of qualitative, human judgment people are already familiar with making. The real difficulty is in evaluating technical talent.
Ways to measure Technical Talent
There have traditionally been only a handful of ways to evaluate technical talent:
- On-screen / Pairing Exercise
- Whiteboarding or Talking about a problem
- Take-home test
The first two have the big disadvantage of putting people out of their comfort zone and having someone watch them work. It's usually at someone else's keyboard with a foreign terminal setup and an over-eager interviewer encouraging you to "just think out loud" instead of letting you think in peace.
But ultimately the biggest problem is that they often don't evaluate anything of relevance. In other words, they don't mimic how you would work and what you would work on if hired. I've helped applications scale to billions of table rows, why am I drawing out a bubble sort for you?
That leaves the take-home test, which to me holds the most promise in truly evaluating the quality of someone's work. It lets them work in the comfort of their own home, at their pace, and away from prying eyes. The only problem is giving them the right task to work on.
A successful take-home
I've had great success implementing take-home tests in the following way.
First, have your organization set up a minimal application that somewhat resembles your actual product. Do you build Python microservices? Then initialize a small Flask app with some super basic fixtures. If you build UIs using ReactJS, set up a simple todo list app with a few hard-coded items.
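As a rough illustration of how minimal that starter app can be, here's a sketch assuming the Flask case; the route and the in-memory fixture rows are invented for the example, not prescribed by any particular product:

```python
# Minimal take-home starter app (Flask). The fixtures stand in for a
# database so candidates can run it with nothing but `pip install flask`.
from flask import Flask, jsonify

app = Flask(__name__)

# Super-basic fixtures: a couple of hard-coded todo items.
TODOS = [
    {"id": 1, "title": "Write README", "done": False},
    {"id": 2, "title": "Add CI config", "done": True},
]

@app.route("/todos")
def list_todos():
    # Candidates extend the app from here: new routes, validations, auth, etc.
    return jsonify(TODOS)

if __name__ == "__main__":
    app.run(debug=True)
```

The point isn't the todo list itself; it's that the scaffolding is small enough to understand in minutes but shaped enough that new features have an obvious place to go.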
Second, ask the candidate to "add some functionality or feature of your choice to the application". Of course, it may help to prompt them with a few guiding examples:
- Implement a really basic email + password sign in
- Add form validations to some `<form>`
- Add a `<button>` that triggers a plaintext email with an invitation link
- Implement a multi-select dropdown on some page
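To make the first prompt concrete, here's one shape a "really basic email + password sign in" could take, using only the standard library; the `USERS` store, `sign_in` helper, and sample credentials are all illustrative, and a real submission would wire this into the app's routes and a persistent store:

```python
# A minimal email + password check: hashed storage, constant-time compare.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 keeps even a toy example from storing plaintext passwords.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

SALT = os.urandom(16)

# In-memory "user table": email -> password hash.
USERS = {"ada@example.com": hash_password("correct horse", SALT)}

def sign_in(email: str, password: str) -> bool:
    stored = USERS.get(email)
    if stored is None:
        return False
    # compare_digest avoids leaking timing information about the hash.
    return hmac.compare_digest(stored, hash_password(password, SALT))
```

Even at this size, the exercise surfaces real judgment calls (hashing, timing-safe comparison, where validation lives), which is exactly what a reviewer wants to discuss later.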
Lastly, it's important to convey the following:
- "Take your time". I usually emphasize there's no hard deadline. It would be nice to have it back "in a week or so", but they can take longer if needed. This counteracts one of the main arguments against take-homes, that it cuts into people's personal time. This empowers candidates to work on their schedule and it's a good look for your organization to show that it values people's time.
- Some form of "Feel free to get creative". I always emphasize that the questions are just a guide and that they're not restricted to just those. The candidate is free to implement anything they'd like to show off, or even go outside of the bounds of the stated prompts. This is really an opportunity to put them in the driver's seat to show off something they're passionate about.
When complete, the candidate submits a diff or pull request for members of your organization to review. This framework even accommodates different skill levels easily: you might expect a junior to submit functional code, while a senior should deliver the same feature with a bit more polish, organization, and documentation.
I've found this system to be successful in that candidates usually enjoy the interview process more and those in your organization appreciate reviewing code instead of looming over people during interviews.
And speaking of interviews, this now lets the in-person interviews become a discussion of the code that was written instead of a pressure-filled window in which to write and evaluate it.