The job interview hasn’t changed in 100 years… It’s time.
We’ve upgraded everything else about hiring—except the part that matters most.
💡 From Lightbulbs to Zoom Calls—Interviews Haven’t Evolved Much
The job interview wasn’t always part of hiring. Before the 20th century, people mostly got jobs through family ties, apprenticeships, or by simply showing up. If someone vouched for you—or you looked the part—you were in. Hiring was informal, biased, and completely unstructured.
Then came Thomas Edison. In the 1920s, frustrated by engineers who looked good on paper but underperformed, he created a standardized 146-question exam covering geography, history, physics, and general knowledge. Think:
- “What countries border France?”
- “What’s the speed of sound?”
- “Who was Hannibal?”
His goal was to add objectivity to the hiring process—to measure raw intellect, not just charisma. It was built for a world of predictable jobs, fixed roles, and long-term employment.
Fast-forward 100 years: Work today is collaborative, fast-changing, and context-driven. We hire for adaptability, judgment, and communication—not for who remembers the capital of Bulgaria. We've automated sourcing, filtering, and scheduling. AI writes job descriptions and scans résumés.
But the interview?
Still mostly a loose conversation. Still often unprepared. And now, instead of just googling last-minute questions, interviewers might ask ChatGPT to “give me 10 smart questions for a product manager.”
No context. No standard. No real support.
Edison was solving a real problem. But we inherited the method—and forgot to ask if it still makes sense.
🎯 Why Interviews Still Matter (More Than Ever)
Even with résumé-scanning AI, automated scheduling, and algorithmic skill filters, the job interview remains the most influential moment in hiring—where human judgment makes the final call.
The Interview Evolved—But Only for Some
Industrial-organizational psychology has reshaped interviews into rigorously tested tools:
- Structured interviews—where candidates are asked the same job-relevant questions and scored against benchmarks—consistently outperform unstructured ones. A meta-analysis by Wiesner & Cronshaw (1988) found validity coefficients of 0.63 for structured interviews vs. 0.20 for unstructured.
- McDaniel et al. (1994) found situational and behavior-based interviews further improve predictive accuracy.
- Companies like Google, Amazon, and McKinsey have built systems around interview science—complete with interviewer training, scorecards, bar-raisers, and rubrics.
But Most Companies Haven’t Caught Up
That level of sophistication is still rare. Most companies don’t provide formal training in structured interviewing. A 2012 study by Oh, Postlethwaite, and Schmidt confirmed that while structured interviews are more effective, unstructured ones remain the default.
And it shows. Interviews are still often run on gut instinct. Bias creeps in—first impressions, halo effects, even the interviewer’s mood.
A 2019 Yale study showed that hiring managers frequently form class-based judgments within seconds of hearing a candidate speak, impacting perceived competence and hireability—even with identical résumés.
The Nielsen Norman Group also notes that unstructured interviews overemphasize subjective criteria, leading to inconsistent evaluations and poor decisions.
In short: we know how to run better interviews. But knowing isn’t the same as doing.
Why It Still Matters
- It’s where instinct meets data. Soft skills, leadership, and nuance don’t live in a résumé—they come out in conversation.
- It’s the tiebreaker. AI can rank résumés, but the interview decides who gets the offer.
- It sets the tone. Candidates remember how interviews feel. The experience shapes how they view the company.
A landmark meta-analysis by Schmidt and Hunter (1998) found that structured interviews were among the most predictive hiring methods, second only to work sample tests. They also reduce bias and promote consistency.
And yet, despite all the progress around it, the interview itself remains the most under-supported part of hiring.
🧠 Software Eats the Funnel—But Skips the Interview Room
Hiring software has come a long way.
Applicant tracking systems (ATSs) now handle job postings, résumé parsing, and candidate tracking. AI matches profiles to job descriptions, filters candidates, and even writes rejection emails. Assessments, video interviews, and personality tests help scale evaluation.
For many roles—especially high-volume ones—this has dramatically improved efficiency and reduced bias.
But once the actual interview begins, once a human sits down with another human to decide who gets the job, the software steps out.
The interview remains surprisingly manual:
- Interviewers show up with little or no prep
- Questions vary wildly between interviewers
- Notes are scattered or missing
- Scoring is subjective and inconsistent
It’s like the hiring process drives up in a Tesla—automated, optimized, AI-assisted—and then the interview shows up in a horse cart, asking for directions.
Despite being the most critical part of hiring, the live interview is still the least supported. Not because people don’t care—but because the tooling hasn’t caught up.
🤖 We Get the Agent Trend—But Not Everything Can Be Automated
AI agents are transforming work—scheduling meetings, triaging support tickets, filtering applications. In high-volume, repeatable workflows, automation makes sense.
And in hiring, top-of-funnel tools—AI sourcing, screening, and assessments—are now standard.
But interviews, especially for knowledge work, are different.
Evaluating a candidate’s communication style, judgment, adaptability, or leadership potential isn’t formulaic. It requires context, nuance, and improvisation. A software engineer might ace a coding test but struggle to collaborate. A PM might shine on paper but fumble in a discussion about tradeoffs.
These signals show up in tone, follow-ups, and real-time decisions.
AI can help—suggesting questions, transcribing answers, supporting prep—but the evaluation still belongs to humans.
The goal isn’t to replace the interviewer. It’s to equip them.
🔍 Everyone We Spoke to Feels It
The interview is where hiring decisions are made—and where processes often break down.
In dozens of conversations with TA leaders, hiring managers, and founders, the same frustrations came up:
“We could definitely be doing this better.”
“The process is fragile—if one person drops the ball, everything slows down.”
“We’re not always evaluating the right things.”
“Interviewers show up unprepared.”
“The feedback is messy or missing.”
“We rely too much on gut feel.”
Even teams with formal processes said that reality doesn’t match the playbook. Prep gets skipped. Questions repeat. Feedback lags or never arrives. TA teams have little visibility. And hiring decisions—arguably the most important ones—are made with incomplete or inconsistent data.
🚧 Three Painful Gaps We Keep Hearing About
These aren’t theoretical issues. They showed up in every conversation.
Prep is inconsistent or missing
Interviewers often walk in cold. Some rely on recycled questions. Senior interviewers may trust their instincts, but that can lead to overconfidence—skipping prep and missing red flags. Junior interviewers, meanwhile, often lack guidance altogether.
In both cases, there’s no shared foundation.
The interview is often wasted
Without structure, interviews become repetitive or shallow. Key competencies go unexplored. Follow-ups are weak. It’s not that interviews aren’t happening—it’s that they aren’t producing the insight needed to make confident decisions.
Scoring is slow, inconsistent, and unclear
Scoring fails in three common ways:
- Timing – Feedback comes in late, slowing down decisions
- Consistency – Standards vary wildly between interviewers
- Indecision – Middle-ground candidates create the most drag, often leading to loops or punting
Even when tools like scorecards exist, they’re underused. Notes are vague. Rubrics are ignored. And decisions often default to whoever speaks last or loudest in the debrief.
🔁 It’s Time to Fix the Interview
The interview is still where the real hiring decision happens. It’s where skills meet judgment, where context matters, and where instincts are tested. And yet, it's also where the process is most exposed—underprepared, inconsistent, and hard to improve.
Here’s the irony: we actually do know how to interview well. Decades of research have shaped what good interviewing looks like—structured questions, behavior-based evaluation, clear rubrics, fast feedback. Some organizations have adopted it. But most haven’t.
And the software? It hasn’t kept up. The best practices—the science—aren’t built into the tools most teams use. Which means the people doing the interviews are left to rely on memory, gut feel, or a last-minute list of questions generated by AI.
This isn't about removing the human side of hiring. It’s about giving interviewers better tools—so they can apply good judgment with structure, clarity, and consistency.
There’s been incredible innovation around the interview. But not in it.
That’s the gap. And it’s time to close it.
We’re building something to help. But for now, we’re just starting the conversation.
👉 If this resonates, we’d love to hear how you’re thinking about interviews today. What’s working? What’s not? What do you wish existed?
Let’s talk.