Main illustration: Kevin Whipple
Interviews are incredibly expensive for both parties. Great screeners are vital to ensure that no one’s time is wasted. Here’s how we do them at Intercom.
It’s easy to underestimate the true cost of an interview. Your team spends hours preparing, interviewing, summarizing, and deciding on each candidate you bring in-house. The candidate takes a day off work, maybe even flies in with an overnight stay. The whole process is a large commitment, and a good screener can often make an interview unnecessary, or make the one you hold far more valuable.
Phone screening vs email screening
Should the first step be a phone interview or an email screener? A call gives an opportunity to sell the role, dig deep on culture contribution potential, and take technical questions in interesting directions. It’s expensive though: coordinating a time that works for both sides, preparing for it, and writing it up all take time.
Email screening is a better use of your time. No scheduling is required, and review time is short, even if depth is limited. The lower cost means much less filtering at the earliest stage of your process. For example, there’s no guessing at technical ability from an incomplete, out-of-date LinkedIn profile. This suits us well: we hire ambitious people, and we’re willing to take a risk on high potential. High potential doesn’t always come with a well-groomed resumé.
Even at later stages of our hiring process, the right email screener still gives us useful information that we can’t easily collect elsewhere. We see an example of real-world code. There’s no time pressure, no tool or technology restrictions, and candidates answer on familiar hardware in a familiar development environment. We see them coding at the standard they hold themselves to; coding at a whiteboard doesn’t even come close.
How we screen
Our screener evolves over time: we keep finding better questions and refining how we phrase them. As I write this post, the questions in the screener for our product engineering role are quoted verbatim below, along with how we evaluate the answers.
How we evaluate candidate answers
Questions 1 to 3 are non-technical and require good written communication. If a candidate is a non-native English speaker they should still get their key ideas across. We certainly don’t mind if syntax or grammar are a little off.
Question 4 tests for basic coding competency. It’s our current FizzBuzz. When someone struggles with this question we immediately reject. Coaching and mentoring are part of our culture but this skill is too fundamental.
We grade answers as poor, good or excellent. Candidates must have good answers generally and demonstrate at least one area of excellence to move to the next hiring stage.
1. What’s your proudest achievement? It can be a personal project or something you’ve worked on professionally. Just a short paragraph is fine, but I’d love to know why you’re proud of it.
Great candidates are motivated to solve real problems for our customers. They don’t care what tools or technologies they need to use, but pick the best for the job. Sometimes they’ll have to convince others that their way is right.
Poor answers focus only on technology, “I rewrote our billing system in Ruby because it’s really cool and I always wanted to play with it,” or meet only the basic requirements of the role, “I was asked to implement customer reviews so I added the feature.”
Good answers focus on customer benefits. “I rewrote our billing system in Ruby because the old code was buggy and I figured it’d be the best way to improve things. Customer billing complaints are a thing of the past.”
They’ll often feature some personal growth. “I was asked to implement customer reviews because another developer wasn’t getting anywhere with it. I coached that engineer, who was having difficulty managing interruptions. We got the feature out on time but people really hated it.”
Excellent answers may involve fighting for the customer, going above and beyond. “Our billing system was a mess of untested Visual Basic 6.0. It never worked correctly. I convinced my manager and my manager’s manager to let me rewrite it in something modern. I put together a quick Ruby prototype to show it could be done. Now it runs smoothly and billing complaints are a thing of the past.”
Excellent answers show someone stepping outside their role. “I was asked to implement customer reviews but I didn’t think the design was right. I talked to our product manager, did some research and spoke with customers. We built it but without the ability to review anonymously, people really love it.”
2. Tell me about a technical book or article you read recently, why you liked it, and why I should read it.
Good candidates care about their craft. They’re learning all the time — they didn’t stop at graduation. These candidates can write a convincing argument to get their ideas across.
Poor answers tend to be how-to books or articles, focused on a specific technology with no compelling argument. “Learn Ruby in 14 Days, because then you’ll know Ruby in 2 weeks.”
Good answers discuss the craft of software or building product. “The Design of Everyday Things, because it highlights how small details can make something easier or more difficult to use.”
Consider it a good answer if you bookmark the article for later or add the book to your wishlist.
Excellent answers will take this a step further and include a well reasoned review. “The Algorithm Design Manual, I feel better prepared to tackle a wider range of problems now. It does have a very academic focus which is unfortunate as it’s a very practical subject, I think this is a problem with books that double as course textbooks generally.”
It’s an excellent answer if you read the article immediately, or order the book, and learn something new.
3. Tell me about one feature of Intercom you really like, and why.
Good candidates see what we’re doing and want to be part of it. They understand why we believe our work is important.
Poor answers for us are answers that could equally apply to Twitter or Facebook. “Intercom makes it easy for people to communicate with each other.”
Good answers discuss specific features. “Targeted messaging looks very useful. You could send users a personalised message about your new feature.”
Excellent answers discuss a feature in the context of the larger product. “Targeted messaging looks very useful. You could send a message to users who hadn’t used a new feature after a month, and get feedback as to why, that would tell you so much.”
4. Write some code that will flatten an array of arbitrarily nested arrays of integers into a flat array of integers, e.g. [[1,2,[3]],4] -> [1,2,3,4].
A straightforward programming question; we expect a solid solution with a complete test suite. Most candidates solve it with recursion, but that’s not a requirement.
Poor answers have no tests or can only flatten a fixed depth of nesting. They’ll use poorly named variables, classes or functions (e.g. “v”, “resultList” or “execute()”, “reduce()”). If this code were in a code review on your team you’d suggest starting again.
Good answers will have well-named variables, classes and functions. The test suite will cover some negative cases. If this code were in a code review on your team you’d have some minor recommendations or style guide comments.
Excellent answers have a complete test suite covering empty lists, invalid input, and null handling. If this code were in a code review on your team you’d spend time picking a well-deserved “ship it” emoji or GIF.
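For calibration, here is a minimal sketch of the kind of answer we’d grade as good, written in Python (the screener doesn’t prescribe a language, and the names here are illustrative):

```python
def flatten(nested):
    """Flatten arbitrarily nested lists of integers into a flat list."""
    flat = []
    for element in nested:
        if isinstance(element, list):
            # Recurse into nested lists, however deep they go.
            flat.extend(flatten(element))
        else:
            flat.append(element)
    return flat


# A test suite would cover the happy path plus edge cases:
assert flatten([[1, 2, [3]], 4]) == [1, 2, 3, 4]
assert flatten([]) == []
assert flatten([1, [], [2, [3, [4]]]]) == [1, 2, 3, 4]
```

An excellent answer would go further, for example deciding explicitly how to treat `None` or non-integer elements and testing that behaviour.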
5. Write a program that will read the full list of customers and output the names and user ids of matching customers (within 100km), sorted by user id (ascending).
This question asks candidates to take a well-defined problem and produce working code, with enough room to demonstrate how to structure components in a small program.
Poor answers come in the form of one big function. It’s impossible to test anything smaller than the entire operation of the program, including reading from the input file. Errors are caught and ignored. If this code were in a code review on your team you’d take the submitter to a whiteboard to explain the problem we’re solving and how we build software.
Good answers are well composed. Calculating distances and reading from a file are separate concerns, and classes or functions have clearly defined responsibilities. Test cases cover likely problems with the input data.
It’s an excellent answer if you learned something from reading the code.
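To illustrate the kind of composition we mean, here is a sketch in Python. The question as quoted omits the full problem statement, so this assumes one JSON customer record per line with name, user_id, latitude, and longitude fields, a great-circle (haversine) distance, and an origin point passed in as parameters; all of those details are assumptions, not part of the question above. Note that parsing, distance calculation, and filtering are separate, individually testable concerns:

```python
import json
import math

EARTH_RADIUS_KM = 6371.0


def great_circle_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lambda = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))


def read_customers(lines):
    """Parse one JSON customer record per line, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]


def matching_customers(customers, origin_lat, origin_lon, max_km=100.0):
    """Customers within max_km of the origin, sorted by user id (ascending)."""
    nearby = [c for c in customers
              if great_circle_distance_km(origin_lat, origin_lon,
                                          float(c["latitude"]),
                                          float(c["longitude"])) <= max_km]
    return sorted(nearby, key=lambda c: c["user_id"])
```

Because file reading is isolated in `read_customers`, the distance maths and the filtering logic can each be unit-tested against in-memory data without touching the filesystem, which is exactly the separation of concerns a good answer shows.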
How we proceed after the screener
One of our engineers reviews the candidate’s answers. A good set of screener questions is strong and opinionated enough to make this decision easy and clear with just a few minutes of review. That’s exactly why we like them.
Once the engineer decides, we let the candidate know right away. For successful candidates we schedule a phone screen to cover culture contribution potential and go deeper on something technical (more on how we do that in our next hiring post). The final stage is an in-house interview covering multiple areas.
Unsuccessful candidates will, of course, get a fast response too. We’re always happy to provide feedback about our decision and people are welcome to reapply in the future.
If that sounds like a fun challenge, why not try our screener yourself? We’re hiring.
Keep reading – Part 2: Culture Contribution