A couple of years ago, I wrote about the Hatchways Assessment and our philosophy of practical, skill-based evaluation that simulates the tasks developers actually perform on the job. At the time, we focused more on encouraging companies to hire developers as interns or on contract-to-hire, which let them evaluate on-the-job skills without subjecting candidates to rigorous interview processes. In fact, many software engineers preferred this "contract-to-hire" process over the traditional 5-6 round interview process.
Since then, we've run many more assessments and worked with many more companies. I thought it would be useful to write an update on our philosophy around evaluating engineers for jobs and how it has evolved with everything we've learned. We've also recently re-launched on the YC launchpad here; check it out if you want to learn more about our evolution!
The Hatchways Assessment Philosophy
At Hatchways, we've always aimed to provide an equitable, inclusive, and fair interview experience. From the beginning, we've strived to create a process that accurately measures each candidate's skills while keeping them comfortable and encouraging their best performance. As we've evolved over the years, our understanding of what makes a good assessment has matured, and we've made crucial adjustments to continue delivering the best candidate experience.
Same, But Better
While we've evolved in our approach to assessments, some key beliefs continue to form the bedrock of our evaluation process, and we've only gotten better at putting them into practice.
The Blend of Human Review and Automation
We firmly believe that assessments should not be fully automated. While automation brings efficiency, a human touch ensures that the nuances of a candidate's skills and potential are properly considered. By combining human review with automation, we aim to provide the most accurate and fair evaluation possible.
Our toolset now allows companies to assess candidates in a human way more efficiently, while using automation for the areas that are most prone to human error (e.g. creating private git repos, creating custom candidate links, sending invites and reminders, etc.).
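To make that kind of automation concrete, here is a minimal sketch of how a private candidate repo could be generated and shared using the GitHub REST API. This is not Hatchways' actual implementation; the organization name, template repo, and token handling are assumptions for illustration.

```python
import os
import requests

GITHUB_API = "https://api.github.com"
ORG = "your-org"                    # hypothetical organization name
TEMPLATE_REPO = "assessment-template"  # hypothetical template repository
TOKEN = os.environ["GITHUB_TOKEN"]  # assumes a personal access token in the environment

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}


def create_candidate_repo(candidate_username: str) -> str:
    """Create a private repo from a template and invite the candidate to it."""
    repo_name = f"assessment-{candidate_username}"

    # 1. Generate a private repository from the template repo.
    resp = requests.post(
        f"{GITHUB_API}/repos/{ORG}/{TEMPLATE_REPO}/generate",
        headers=HEADERS,
        json={"owner": ORG, "name": repo_name, "private": True},
    )
    resp.raise_for_status()

    # 2. Invite the candidate as a collaborator with push access,
    #    so they can submit their work as a pull request.
    resp = requests.put(
        f"{GITHUB_API}/repos/{ORG}/{repo_name}/collaborators/{candidate_username}",
        headers=HEADERS,
        json={"permission": "push"},
    )
    resp.raise_for_status()

    return f"https://github.com/{ORG}/{repo_name}"
```

Scripting these steps removes the copy-paste mistakes that creep in when recruiters set up dozens of candidate repos by hand, while leaving the actual review to humans.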
Comfort of Working in a Familiar Environment
We understand the comfort and efficiency that come from working in a familiar environment with familiar tools. Therefore, our candidates are allowed to work in their own environments and use the workflows they know best (e.g. submitting a pull request on GitHub). We believe this is closest to the real-world scenario they will be working in and thus provides a more accurate representation of their abilities.
Our platform has continued to build seamless integration with GitHub and makes it easy for companies to let applicants use whatever technologies they prefer.
Not Proctoring or Recording Screens
At Hatchways, we champion the privacy and comfort of our candidates during the assessment process. This is why we choose not to proctor or record applicants' screens. We firmly believe this approach not only upholds their privacy but also fosters a better candidate experience, mirroring real-world work conditions more closely.
We design our assessments to be sufficiently challenging and sufficiently open-ended, intentionally avoiding a single "correct" answer. This strategy enables companies to compare applicant submissions more effectively, providing a broader understanding of each candidate's unique problem-solving approach.
And finally, the nature of our assessments not only gauges a candidate's skills but also lays the foundation for an insightful follow-up discussion in a live call. This two-fold approach tests technical proficiency while letting us delve deeper into the candidate's thought process, enhancing the overall evaluation and understanding of their capabilities.
Achieving Consistency and Standardization in Grading Rubrics
One area where we've made significant strides over time is the creation of consistency and standardization in our grading rubrics. Early on, we recognized the value of a consistent, standardized grading system, but our ability to establish this across all companies, regardless of size, was somewhat limited.
Today, we've not only developed a deeper understanding of the importance of such a system, but also have the tooling and industry benchmarks to help implement it effectively with our partner employers. These standardized grading rubrics help companies remove subjectivity from the hiring process and allow for more equitable comparisons between candidates.
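As an illustration of what "standardized" can mean in practice, here is a minimal sketch of a grading rubric expressed as data, with weighted criteria and a shared scoring scale. The specific criteria, weights, and scale are hypothetical, not Hatchways' actual rubric.

```python
# Illustrative rubric: every reviewer scores the same criteria on the same scale.
RUBRIC = {
    "scale": (1, 4),  # each criterion is scored from 1 (poor) to 4 (excellent)
    "criteria": [
        {"name": "Correctness",   "weight": 0.4},  # meets requirements, handles edge cases
        {"name": "Code quality",  "weight": 0.3},  # readable, well-structured, idiomatic
        {"name": "Testing",       "weight": 0.2},  # meaningful automated tests
        {"name": "Communication", "weight": 0.1},  # PR description and commits explain the approach
    ],
}


def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single number that can be compared across candidates."""
    return sum(c["weight"] * scores[c["name"]] for c in RUBRIC["criteria"])


# Example: two reviewers grading the same submission disagree only on defined
# dimensions, not on ad-hoc impressions, which makes calibration discussions concrete.
print(weighted_score({"Correctness": 3, "Code quality": 4, "Testing": 2, "Communication": 3}))
```

Keeping the rubric as shared, versioned data (rather than each reviewer's private notes) is what makes comparisons across candidates, reviewers, and companies meaningful.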
The Learnings and Improvements
Balancing Candidate Experience and Assessment Quality
An assessment that is both thorough and concise is a hard balancing act. We've learned that an overly long, complex assessment can drain a candidate's time and energy, potentially diminishing their performance. Conversely, an overly simplified evaluation might fail to give an accurate depiction of their abilities. Striking the right balance has been crucial.
Our assessments now find the right balance by focusing on a set of core competencies that are most relevant to the role. We ensure that the duration of the task is reasonable, but long enough to allow us to get a meaningful signal about a candidate's capabilities.
Adjusting Assessment Type for Senior Candidates
We've also noticed that senior candidates benefit from a slightly different approach. They typically have a vast experience base, with varying areas of expertise. Therefore, part of our assessment includes reading code, a critical aspect of most senior tech roles. This provides a more accurate measure of their ability to understand, analyze, and provide solutions in the context of existing codebases.
Our assessment catalogue now includes both read-code and write-code assessments for companies to choose from and adapt.
Focusing on a Few Skills at a Time
We initially pursued the ambition of measuring as many skills as possible in a single assessment. However, we soon realized that this approach often led to a less accurate measurement of a candidate's capabilities: trying to assess a laundry list of skills usually resulted in a subpar read on each of them.
Our current strategy is to focus on assessing the top 2-3 skills that are most important for the role during each evaluation step. This way, we get a deep, accurate insight into those particular skills, which helps us make more informed decisions.
At Hatchways, we continue to refine our assessment process alongside evolving industry trends, candidate preferences, and our growing understanding of effective evaluation. As we move forward, we are excited to keep learning and evolving, and we promise to keep you updated along the way.